So, the subject of my talk is how to use LLMs in observability, and one thing I'm going to tell you is that everything you've heard so far, and everything you're going to hear for the rest of the day about observability, is all going to change in the next few months. And I'm going to show you how.

My name is Asaf. I'm one of the co-founders and the CTO of Logz.io. We're an observability company doing observability with open source, and I'm happy to be here. We're also presenting at KubeCon, booth H24.

So, obviously, you've all heard about LLMs, ChatGPT, and everything that's happening right now in the industry. The first thing I did was ask ChatGPT: what is the best way to integrate LLMs into observability? And this is the response you get. You can read a bunch of the examples it came up with, and I was actually surprised at how intimate ChatGPT's knowledge of the observability domain is.

It suggested log analytics and pattern recognition; anomaly detection across metrics, logs, and traces; and natural language querying and reporting, which is genuinely difficult today and definitely a very good use of ChatGPT and other LLMs. It suggested alert management and triage, and the ability to communicate directly with people about what's going on. It suggested root cause analysis: there's so much data in observability, and the ability to understand all of it very quickly is another big one. And it suggested predictive analytics, knowledge-base enrichment, and other ways we can use LLMs. All of this came from simply asking ChatGPT how to use LLMs within observability. You can run this query yourself and see.
But we think there's more to do. One of the limitations of LLMs is a lack of creativity. These answers are great, but they're not as creative as what we can come up with ourselves. So this is our take on how to use LLMs, and there are two main aspects to it.

The first is simplifying the user experience. We all know open source is great, but trying to use it can be a nightmare. Anyone here running Lucene queries? Anyone ever tried to build a visualization in Grafana? It's a lot of fun, and it takes a lot of time; after you've done it five times, it's enough. Not to mention creating alerts in Grafana or OpenSearch: that's something you only need to do once to realize it's too much work. We actually use LLMs internally already. Our support team and our customer success team use ChatGPT today, and it does an amazing job creating visualizations and getting you all the information. The same goes for creating alerts. So one direction is: let's take these observability solutions, which are hard to use, and make them simple and more accessible for human consumption.

The second is making the whole process less error-prone. How do I get my data out of my Kubernetes cluster? Again, have any of you tried to write, change, or modify Helm charts, or install the different components inside the cluster? This is another thing ChatGPT does a very, very good job at. You give it your current configuration, you ask it to add more configuration, and it does it very well. Let's remove the mundane, error-prone tasks from humans.

But the real future is not about that. The real future is: I don't want to look at graphs. I don't want to stare at a graph and figure out which line has an anomaly. I don't want to read the legend of the graph to figure out the name of the service, the name of the pod, the name of the node, which cluster, which region.
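To make the "ask an LLM to create the alert for you" idea concrete, here is a minimal sketch. Everything in it is an assumption for illustration: the JSON field names are hypothetical, not a real Grafana or OpenSearch schema, and the model call is stubbed out rather than sent to a real chat-completion API.

```python
import json

# Hypothetical alert-rule schema; these field names are illustrative,
# not a real Grafana/OpenSearch API.
REQUIRED_FIELDS = {"name", "query", "condition", "for", "severity"}

def build_alert_prompt(request: str) -> list[dict]:
    """Build chat messages asking an LLM to turn a plain-English
    monitoring request into an alert-rule JSON object."""
    system = (
        "You translate plain-English monitoring requests into JSON "
        "alert rules with exactly these fields: name, query (Lucene), "
        "condition, for, severity. Respond with JSON only."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": request},
    ]

def parse_alert_response(raw: str) -> dict:
    """Validate the model's reply before creating the alert,
    since LLM output can't be trusted blindly."""
    rule = json.loads(raw)
    missing = REQUIRED_FIELDS - rule.keys()
    if missing:
        raise ValueError(f"LLM response missing fields: {sorted(missing)}")
    return rule

# In production the messages would go to a chat-completion API;
# here the model's reply is stubbed to keep the sketch self-contained.
messages = build_alert_prompt(
    "Alert me when the payment service logs more than 50 errors in 5 minutes"
)
stub_reply = json.dumps({
    "name": "payment-service error spike",
    "query": 'service:"payment" AND level:ERROR',
    "condition": "count > 50",
    "for": "5m",
    "severity": "high",
})
rule = parse_alert_response(stub_reply)
print(rule["name"])
```

The validation step matters: the whole point of removing error-prone manual work is lost if a malformed model reply gets pushed straight into the alerting system.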
I don't want to read all these lines and numbers. And we all know, if you're running Kubernetes clusters, if you're running a complex environment, how many graphs you need to look at in order to understand the data. What I actually want is to chat with my data. I want to be able to ask questions. I want something else to visually inspect all the graphs and all the information I have and make the best of it. Because I need to know if there is an anomaly, and I want the LLM to be able to say: hey, you have an anomaly on this pod between this time and that time, and it happened right after a deployment, because we're tracking all that information. The amount of data is enormous, and the ability of a human being to look at all of it is very limited.

This is, I think, the sixth or seventh year in a row that we're running an observability survey, and every year the MTTR, the mean time to resolution, increases. So despite the fact that observability solutions are becoming better and better, the increasing complexity of systems is outpacing that progress. We think this is going to change: the ability to interact with my data and ask very simple questions like "after this deployment, is the state of the system the same, or do I have a degradation?" or "tell me what's anomalous across all of these hundreds of graphs in my system." That's what we think the future is going to look like.

Our vision is that in the next few months, not even a year, observability solutions are going to look completely different: a lot more chat-based interaction, and far fewer graphs to interpret. So if you do want to see the future, we already have it working. Come check us out at booth H24. We're Logz.io, and we're happy to demo it to you and hear your insights. I appreciate the feedback. Thank you very much.
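As a footnote to the "tell me what's anomalous across hundreds of graphs" idea, here is a toy sketch of the kind of plain-language findings a chat interface could surface instead of raw charts. The metric names are made up, and the z-score test stands in for whatever detection a real product would use.

```python
import statistics

def summarize_anomalies(series: dict[str, list[float]],
                        threshold: float = 3.0) -> list[str]:
    """Scan many metric series and return plain-language findings,
    the kind of answer a chat interface could give instead of making
    a human read hundreds of graphs. Uses a simple z-score test on
    the latest sample; real systems use far more robust detection."""
    findings = []
    for name, values in series.items():
        baseline = values[:-1]            # all but the newest sample
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev == 0:                    # perfectly flat baseline: skip
            continue
        z = (values[-1] - mean) / stdev
        if abs(z) >= threshold:
            direction = "spiked" if z > 0 else "dropped"
            findings.append(
                f"{name} {direction} to {values[-1]:g} "
                f"({z:+.1f} standard deviations from its baseline)"
            )
    return findings

# Hypothetical per-pod CPU metrics; names and values are illustrative.
metrics = {
    "pod/checkout-7f9c cpu": [0.31, 0.30, 0.33, 0.29, 0.32, 0.95],
    "pod/search-5d2a cpu":   [0.40, 0.41, 0.39, 0.42, 0.40, 0.41],
}
for line in summarize_anomalies(metrics):
    print(line)  # only the checkout pod's spike is reported
```

An LLM layered on top of findings like these could then answer follow-up questions ("which deployment preceded this?") in the same conversation.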