Hi, this is your host Swapnil Bhartiya, and welcome to TFiR: Let's Talk. Today we have with us once again Asaf Yigal, CTO and co-founder of Logz.io. Asaf, it's great to have you on the show.

Hey, great to be here. Thanks for having me.

Today we are going to talk about AI and observability. But before we do, I would just like to hear your perspective. When we look at Logz.io, talk a bit about your positioning, your role in this ever-evolving landscape of observability.

The way we view it is that observability today, and what used to be called APM in the past, is broken. People are collecting too much data, the solutions are too costly, and at the end of the day the mean time to resolution just increases and increases as the complexity of the systems grows. We view it as a systemic challenge, not only from a technology perspective but also from a process perspective, from the way organizations are being managed. Instead of setting up specific guidelines for what you should have in observability, they just throw everything at it and say, hey, we'll make sense out of it at some point. A lot of that was driven by traditional observability vendors whose messaging was: give us all your data, we're going to make sense out of it. But that turned out to be very costly and not really accurate, because they couldn't make sense out of it. So the way we see it, observability needs to be well defined: the objectives need to be identified and defined, and then you can develop a system that is very optimized for cost and very optimized for mean time to resolution. That's how we view observability, and that's our role in it: offering a system that does observability better.
I was at KubeCon recently, and of course there are a lot of things that organizations are trying to do with observability, especially after moving to the cloud: there is a lot of cloud complexity, and they have to deal with cloud costs, with security, with performance. How do you see observability: as a practice, a discipline, a process, a tool, or a title, kind of a silo in its own right? Or do you see that it overlaps a lot of these things, because the overall idea with observability is to help developer teams and operator teams improve the health and efficiency of the system? I want to look at the whole system from the view of observability, and then we'll talk about how AI can come in and help. Does that question make sense?

Yeah, it does make sense. I think one of the things people need to realize is that observability should work for them, but they end up in a position where they work for their observability. They spend too much time setting it up, too much time trying to filter out the noise, too much time creating alerts, too much time on alert fatigue, and it actually doesn't work for them. Observability is a solution that's supposed to make your life better; it's not that you're going to make the observability better. We see that as a gap that exists in the market today.

What kind of challenges do organizations face when they look to adopt an observability strategy? And when we talk about observability, what kind of teams, what kind of personas are affected by it or responsible for it?

I think organizations need to realize that there are basically two different sets of teams looking at the production environment. One of them is the engineering team, the product team, the one that developed the solution or service, and they look at it from almost a vertical perspective.
They have an application that uses multiple microservices, laid out on specific pods, running on nodes, running on clusters, running in different regions, maybe running in different clouds. But they don't care about any of that. What they care about is that their application delivers the service they are obligated to deliver to their customers. That's how they look at the world. Then there's a different side of the house, and that side is the DevOps engineers, the SREs, the production engineers. They allocate the infrastructure. They own the clusters, they own the infrastructure on the cloud, and their job is to make sure that the applications are laid out on the infrastructure in the most optimal way from a performance, security, availability, and obviously cost perspective. They have a different view of production. I think any observability vendor needs to realize that, the same way we did, and build these two different ways of looking at the world: one from the application layer down to the infrastructure, and the other from the infrastructure up, laying out all the applications on it and making sure it is fully optimized and cost effective.

Now let's talk about the role of AI. We have been talking about automation for a long time, and AI has been used for a long time, but now we are also talking about generative AI, which is kind of changing the game. So talk a bit about observability AI, and then also about gen AI.

Yes, I'll start with observability AI. The role of AI there is to help you reduce the alerts that you have, to make sure that when you get an alert, it is something you actually need to act on. Today, when we survey companies, they do nothing with more than 50% of the alerts they get. They just sit around and watch the alerts go by, and the alerts mean nothing. And it's quite a lot of alerts.
So the role of AI is to try to identify that. The challenge today with what is traditionally called anomaly detection, or AI for anomaly detection, is first of all what you set it on. A lot of organizations try to set up anomaly detection on everything: just alert me when something is anomalous. But the reality is that the whole environment is one big anomaly. Things happen all the time: a big customer joins, a cluster crashes, a pod gets restarted. The way we see it, you should first define your SLOs, your service level objectives. Say, hey, I have a payment service; it's supposed to respond within 200 milliseconds, and I need to have an error rate below 0.001. Then you can set up AI and anomaly detection on those things: I want to make sure that my error rate doesn't increase, I want to make sure that my response time doesn't change, or if it changes, I need to know about it from an anomaly detection perspective. So it's a lot about where you set it, about how you define the AI. I think today there are a lot of very good algorithms that every company has adopted for anomaly detection, and they are definitely providing a lot of value in observability.

When you look at the overall evolution of observability, especially when we talk about system health or security, sometimes it looks more or less reactionary: we prepare for when something goes wrong, rather than preparing so that things don't go wrong and we don't have to deal with the complexity. Of course, the whole holiday season is coming, and developers sweat about peak traffic. A good example is that in cars there are things like airbags, which are for when something goes wrong.
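The payment-service SLO mentioned above (a 200 ms response-time target and an error rate below 0.001) can be sketched as a simple evaluation over request samples. This is an illustrative sketch, not Logz.io's implementation; the names and the p99 choice are assumptions:

```python
from dataclasses import dataclass

@dataclass
class SLO:
    name: str
    latency_ms: float      # response-time objective
    max_error_rate: float  # allowed fraction of failed requests

def evaluate(slo: SLO, latencies_ms: list[float], errors: int, total: int) -> list[str]:
    """Return alert messages only when an objective is actually violated,
    rather than alerting on every anomalous-looking fluctuation."""
    alerts = []
    # Compare p99 latency against the response-time objective
    p99 = sorted(latencies_ms)[int(0.99 * (len(latencies_ms) - 1))]
    if p99 > slo.latency_ms:
        alerts.append(f"{slo.name}: p99 latency {p99:.0f}ms > {slo.latency_ms:.0f}ms")
    error_rate = errors / total
    if error_rate > slo.max_error_rate:
        alerts.append(f"{slo.name}: error rate {error_rate:.4f} > {slo.max_error_rate}")
    return alerts

payment = SLO("payment-service", latency_ms=200, max_error_rate=0.001)
# One slow outlier and a few errors: only the violated objective fires
print(evaluate(payment, latencies_ms=[120, 130, 180, 450], errors=5, total=1000))
```

The point of the sketch is the interviewee's argument in miniature: alerts fire only when a defined objective is breached, not whenever any metric moves.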
But then there are a lot of other systems in the car as well, like traction control, that fix things before something goes wrong. So how are you seeing observability here? Or do you feel that this is still out of scope, at least at this point?

No, I think it's definitely in scope at this point, and this is where a lot of AI today is being used, if it's defined properly. Like in a car, you're not setting up alerts on every movement your engine makes. You set up an alert on proximity to another car, or on drifting out of a lane you didn't mean to leave. There are very few things in a car that you set up alerts on, and I think observability is very similar. You need to pick your battles. You cannot set up alerts on everything, and not everything can be top priority. Used properly, AI dramatically reduces alert fatigue; used improperly, it creates a lot more noise than it should. So it's a lot about the usage, and less about the specific algorithms, which today are well known and commonly used.

Talk a bit about the role you see for generative AI in observability.

That's a good question. Generative AI, obviously since the debut of ChatGPT, has made a lot of noise, and it's definitely becoming very helpful for organizations to use. But we also have to understand the limitations of generative AI. One of them is that the model takes a long time to train and relies on old data, so you cannot use generative AI to see what's going on in your system today, because it was trained a year ago or so. On the other hand, it can process a lot of information and give you the answers that you need.
So some of the ways we, and I think other companies in the observability space, are using generative AI is to help users achieve their tasks in a simpler, quicker way. If a user wants to search, how can I help them search in a more meaningful, quicker way? The generative AI knows the search format, so instead of searching in SQL or Lucene or whatever query language I have, I can just search in plain English: give me all the servers that had the most errors in the past two hours. That is something generative AI does an amazing job of converting into whatever query language you have. The other thing, which we know is a limitation and a challenge for organizations, is: how do I create dashboards and visualizations? How do I create alerts that are meaningful? Creating a meaningful alert can involve a lot of steps, and one of the things generative AI can do is help create sophisticated alerts with the conditions that you need. So this is something we see generative AI being used for in the observability space. Another idea is that companies can feed all of their documentation and their data-collection setup into generative AI, and then you can ask questions about it, because you're just completing the model. We're seeing that type of usage within observability, and in the future we'll see companies actually indexing or leveraging generative AI with their actual production data, in order to ask more sophisticated questions and get answers quicker.

And how do you see the scope of generative AI? Is it limited to just providing that information, or at some point will it also enable taking some actions?
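The plain-English search idea described above can be sketched as a thin wrapper that prompts a model to emit a Lucene-style query. Everything here is hypothetical: the field names, the prompt, and `ask_model` (a trivial rule-based stand-in for a real LLM call) are illustrative, not Logz.io's API:

```python
# Sketch: translate a plain-English request into a Lucene-style query.
# A real system would send `prompt` to an LLM; a trivial stand-in plays
# that role here so the end-to-end flow is runnable.

def build_prompt(question: str) -> str:
    return (
        "Convert the request into a Lucene query over the fields "
        "`level`, `host`, and `@timestamp`.\nRequest: " + question
    )

def ask_model(prompt: str) -> str:
    # Stand-in for the model call: returns the kind of query a model
    # might plausibly emit for the request embedded in the prompt.
    if "errors" in prompt and "two hours" in prompt:
        return "level:ERROR AND @timestamp:[now-2h TO now]"
    return "*"  # fall back to match-all for unrecognized requests

def english_to_query(question: str) -> str:
    return ask_model(build_prompt(question))

print(english_to_query("servers with the most errors in the past two hours"))
# → level:ERROR AND @timestamp:[now-2h TO now]
```

The design point is that the model only has to know the query syntax, not the live data, which sidesteps the staleness limitation discussed earlier.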
I talk to a lot of companies who are leveraging generative AI and putting it into their products, and they say: yes, generative AI can do a lot of work, but you still need human intervention; you can't fully rely on generative AI at this point.

I think it's less about trust. It does an amazing job at the things it's meant for, and if you use ChatGPT, you know the answers are pretty much accurate. I think the challenges are data accuracy and the timeliness of the data: am I asking a question and getting an answer that was true a year or two ago, whenever the model was trained? That is why you need some oversight. If you use it in the right way, you can achieve a lot with it. If you're trying to make it do something it's not supposed to do, it's going to be very challenging.

A few weeks ago, the Biden administration came out with an executive order on AI and generative AI, to make it safer for people. Have you seen that executive order, and what are your thoughts on it?

Yeah, obviously there are a lot of challenges with generative AI. Some of them have to do, for example, with intellectual property. With capabilities like Copilot and others that actually write code for you, you don't know whose IP is behind that code: it was generated for you, but it may actually have been copied, or stitched together, from somewhere else. So there are going to be a lot of challenges like that. We also saw a case in the US where a lawyer used generative AI for trial arguments, and the generative AI brought up cases that never existed. Part of generative AI is also trying new things; that's the way it learns, because it's not only leveraging the existing knowledge base, it's also improvising in a way.
And that improvisation can be good from a creativity perspective, but it can also be challenging, as in that case. I think what's happening with generative AI is a revolution. We're going to see it more and more in other industries, whether it's legal, whether it's the movie industry, and it's going to create some challenges but also some opportunities. The interesting thing is that humanity has always been afraid that robots are going to replace the mundane work, the type of work that everybody can do, but we ended up using them to replace the more sophisticated work: creativity, art. It's going to be interesting to see how all this rolls out.

Asaf, thank you so much for taking the time today to talk about the evolution of observability and the role of AI and generative AI in this space. Thanks for all those great insights, and I would love to have you back on the show.

Thank you. Thank you so much. Thanks.