Good morning NerdFam, and welcome back to beautiful Paris. We're here at KubeCon + CloudNativeCon, CNCF's flagship European event. Gotta say, this is one of my favorite work trips. My name's Savannah Peterson, joined here by my fabulous co-host Rob Strechay. Rob, what a pleasure to be here in Paris with you. This is awesome. I mean, it just gets better and better. This community gets better and better. It's evolving, and there's a lot, and you know, we're going to be talking about one of my favorite subjects, observability. I do love some observability. You might as well welcome Asaf. Welcome to the show. Thank you very much. How are you enjoying Paris? I am enjoying it very much. It's really nice. Haven't been here in a long time, but it's really nice to be back. Yes, same. I know, it's a joy. All right, so we were just talking about it. Logs, you're more than logs. Yes. Tell us. So we're logs, but we're an observability solution. Obviously we collect logs, metrics, and traces, but more importantly, we overlay an observability layer on top of it. So, being able to set up SLOs, being able to work with an organization to define what's important for them, how to basically monitor it, and essentially how the infrastructure and the code relate to the business and how they impact it. The thing we've added recently is about AI and LLMs, and I'll be happy to share a little bit more about what we're doing and what our vision is for observability. Let's talk about that for a second. How critical is observability as companies are ramping up their AI and ML efforts?
I think it's really super critical, especially as companies ramp up their AI. The companies developing AI require really tight observability in order to measure the processes, the training times, which can run days and weeks, to know that they finish in time and that they get the right responses. So definitely, observability is critical. Yeah, absolutely. And we've known each other for quite some time. I was just going to bring that up, the little bromance going on. I'm in the sandwich here, it's great. It's awesome, but I mean, like 15 years, and we've both seen this really mature. From when you started the company, what, 10 years ago, which, funny enough, is the 10 year anniversary of Kubernetes as well, so very, very big milestones. But AI is just changing the game. How do you see AI changing observability? I know you did a lightning talk yesterday. Oh yeah. Yeah, so I did a lightning talk yesterday. There have been talks about AI for many, many years, even when you and I worked together; there's been anomaly detection, and it never caught on. It was never a really prevalent solution. I think the change has happened in the past year or so, with the launch of ChatGPT and the race towards the best AI and the best chat experience, and it really changes the way people interact with data and observability. It's all about collecting a lot of data and how do I interact with my data. Now, if you look at all of the observability vendors today, including ourselves, you see basically the same thing: a bunch of graphs, a bunch of alerts, dashboards, and you see what's going on. This is going to change, because people are going to have a new way of interacting with the data. People are going to say, I want to ask questions, and I want to ask them in plain English.
I don't need to go through a graph, figure out which line is spiking, and match the label to a different graph. I just need to ask which service of mine is having an issue right now, and give me all the data on that specific service, and that's going to be it. So I do think it's going to be a big change, not only in observability, but also in the way people interact with data in general. Yeah, I think so. And you guys also did a pulse survey recently, and you looked at how organizations are still facing challenges with just observability for Kubernetes. What did you see in that survey? It was surprising, because we thought observability is a mature domain and people are doing it, but only 10% of organizations mentioned that they were very comfortable. 10%? Yes. And the challenge is that people... We were shocked by that. We were shocked as well. Wow. That's insane. The other thing we found, and I think it's like six years in a row that we're running these surveys, is that the MTTR, the mean time to resolution, keeps going up and up. You'd think with so many observability vendors, so much advanced technology, why is it going up? And I think the challenge is that Kubernetes and cloud infrastructure and serverless give different layers of flexibility, and with increased flexibility comes increased complexity. Teams cannot catch up with the level of complexity being introduced, because obviously organizations want to be as agile as possible, adopting all the new technologies to create the flexibility that they want, but they cannot catch up with the complexity. And I think part of how AI is going to help is addressing that increased level of complexity and the amount of data being collected, so you can interact with it in a more efficient way. The other thing we see is that organizations think that because they have logs, metrics, and traces, they have observability.
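The interaction described here, asking "which service is having an issue?" instead of eyeballing graphs, can be sketched in a few lines. This is purely illustrative, not Logz.io's actual product behavior: the per-service metrics and thresholds below are made up.

```python
# Hypothetical sketch: answer "which service is having an issue right now?"
# directly from per-service metrics instead of scanning dashboard graphs.
# All data and thresholds are invented for illustration.

metrics = {
    "checkout":  {"error_rate": 0.002, "p99_ms": 180},
    "inventory": {"error_rate": 0.001, "p99_ms": 150},
    "payments":  {"error_rate": 0.093, "p99_ms": 2100},  # the spiking line
}

def services_having_issues(metrics, max_error_rate=0.01, max_p99_ms=1000):
    """Return the names of services breaching either threshold."""
    return [
        name for name, m in metrics.items()
        if m["error_rate"] > max_error_rate or m["p99_ms"] > max_p99_ms
    ]

print(services_having_issues(metrics))  # ['payments']
```

The point is the shape of the interaction: one question in, one named service out, with all further drill-down scoped to that service.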
The reality is that it has to start from the top. It has to start with what's important for me as a company, and a lot of companies don't take the time to define it. Like, I want to know what my commitment to the business is as an engineering team. I don't care that I have microservices running. I need to know that my shopping cart experience is under 200 milliseconds 99.99% of the time, and that I have a way to troubleshoot if something happens. And you'd be surprised how many organizations, when we go and ask them, okay, what are your business objectives, say, no, we don't have them, or, we have them, but we don't know how to measure them. So they come from the bottom up, from the engineering side, and say, I have a microservice, I'm going to monitor all the parameters of the microservice; I have the interactions between them, I'm going to monitor everything. And it's just too complex, because there are so many microservices and so many infrastructure components today. So people are almost too in the weeds. They are too in the weeds. To a degree. I mean, we're both business owners. I can't imagine not knowing my top line company goals or the user experience I want for my customers, especially when adopting a new, very expensive technology. Okay, I'm seriously blown away by that 10% observability stat. What else did you find in that pulse survey? Anything? I think we talked about the trouble of reaching the mean time to resolution. Maybe I'll run through the others really, really quickly. The other thing we found is that there is a decrease in knowledge. There are so many new technologies, so many newcomers into this space. Totally. The level of expertise goes down, and when the level of expertise goes down and the complexity goes up, the time to troubleshoot goes up. Quite a gap too, I would imagine. The other thing we found is that security is getting interesting as part of observability.
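The shopping-cart commitment mentioned here ("under 200 milliseconds, 99.99% of the time") is a classic SLO, and checking it is mechanically simple once it's written down. A minimal sketch, with invented function names and sample data:

```python
# Hedged sketch: evaluating the shopping-cart SLO from the interview
# against a batch of latency samples. Names, samples, and thresholds are
# illustrative, not any vendor's API.

def slo_compliance(latencies_ms, threshold_ms=200.0):
    """Fraction of requests that met the latency threshold."""
    if not latencies_ms:
        return 1.0  # no traffic: trivially compliant
    good = sum(1 for ms in latencies_ms if ms < threshold_ms)
    return good / len(latencies_ms)

def slo_met(latencies_ms, threshold_ms=200.0, objective=0.9999):
    """True if the observed compliance meets the stated objective."""
    return slo_compliance(latencies_ms, threshold_ms) >= objective

samples = [120.0, 95.5, 180.2, 210.7, 88.1]  # one slow request out of five
print(slo_compliance(samples))  # 0.8
print(slo_met(samples))         # False: far below the 99.99% objective
```

The hard part, as the interview notes, isn't the arithmetic; it's getting the organization to state the threshold and objective in the first place.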
Because back in the day, people used to have separate security: they had Exchange servers, they had office security, which was completely separated from the production systems. Today, all of their office infrastructure runs within the same cloud; it's the same environment, it's all the same thing, and it's under the same people's responsibility. So if you think about it, what used to be called a SIEM, security information and event management, is basically observability for security information. Now the security information lies within the same domain, managed by the same people, and they expect to get the same solutions and the same tools. I think we were talking about this yesterday when we caught up, and one of the things that you talked about is that the complexity keeps growing. We kind of had this discussion earlier today: there are 114 sandbox projects here right now in CNCF. I went to Observability Day yesterday and watched somebody go through all of the different projects they had to try to actually build out their observability platform. How, as a company, I mean, you use OpenSearch underneath and a number of other things, how do you simplify that down for organizations? It's a good question. One of the things we realized, and obviously we started as a logging company offering OpenSearch as a service, is that the open source projects are really, really good as projects, but they're not actually good as products. And there's a gap that needs to be filled there when it comes to how users interact with these amazing databases. They're just projects, amazing projects: scalability through the roof, and complexity through the roof. So one thing is about the data collection. We're using OpenTelemetry, 100% native OpenTelemetry.
When you ingest data for observability, you need to make sure that the data is aligned, that it's synchronized. Every log line should have the pod name and the service name inside of it; otherwise, you're not going to be able to match it later on. So simplifying the data collection is basic. If, instead of using OpenTelemetry end to end, you're using Fluentd to get the data into Elasticsearch or OpenSearch, OpenTelemetry to get the traces, and Prometheus to get the metrics, the data is not correlated. And this is really important, making sure that the data is aligned. The other thing is that the UI, the user experience, of OpenSearch, the user experience of Grafana, the user experience of Jaeger, are completely dispersed. Completely different, yeah. They're different, and they're not aligned. So we've taken all of that and we offer them, but we overlay a layer of observability on top. So you can see the topology, you can see the metrics, you can see the logs, and you can see everything in the same place. It's about the story, eventually, not about how many alerts I get and how many graphs I have. So we've taken the open source, we really simplified data collection, we really simplified the storytelling out of it, and that's what we're doing. You still have OpenSearch and OpenSearch Dashboards underneath; you can import all your dashboards, same thing with Grafana and Jaeger and so on. There are so many new tools and so many things. I love that you just used topology as a reference to observability. I'm an avid hiker, so you're speaking my literal language. But I think that's so important, because there are so many tools and players and different things happening right now. And like you said, with all the different projects, creating that unified experience for customers and making it easy, everyone wants that simple button or that easy button, but it's not that simple.
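The correlation point above, every log line carrying the pod and service name, can be shown with a tiny conceptual sketch. This is plain Python, not the OpenTelemetry SDK; the attribute keys (`service.name`, `k8s.pod.name`) follow OpenTelemetry semantic conventions, but the records and functions are invented for illustration.

```python
# Conceptual sketch: signals can only be matched later if every one of
# them carries the same resource identity at ingestion time.
# Keys follow OpenTelemetry semantic conventions; data is made up.

RESOURCE = {"service.name": "checkout", "k8s.pod.name": "checkout-7d9f-abcde"}

def emit_log(message):
    return {"body": message, **RESOURCE}

def emit_metric(name, value):
    return {"name": name, "value": value, **RESOURCE}

def correlated(signal_a, signal_b):
    """Two signals line up only if they share the same resource identity."""
    keys = ("service.name", "k8s.pod.name")
    return all(signal_a[k] == signal_b[k] for k in keys)

log = emit_log("cart latency spike")
metric = emit_metric("http.server.duration", 212.0)
print(correlated(log, metric))  # True: both carry pod + service names
```

A log pipeline that drops those keys, which is what tends to happen when logs, metrics, and traces go through three unrelated collectors, makes `correlated` impossible to compute, which is the gap being described.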
So how do you prioritize as a team what you make easiest first? For us, the thing to make easiest first is data ingestion, because this is where things start. If the data has been correlated during ingestion, then it's going to be easier to make sense out of it eventually. There hasn't been, not in observability, not in BI, a simple way to say, I want to click a button and get the answers. But what we think right now is that LLMs are changing that, because now I don't have to make the data simple. The data can stay complex, and I have something else, which is very complex, that can understand my data and make it simple for me. I'm actually shocked at the level of response that we get from it, because we let it run on a Kubernetes dashboard with hundreds of tabs, and you just ask it a very simple question, and it gives you an answer, and then you can say, okay, make it shorter, and it gives you a shorter answer. Which is awesome, right? Yeah. It is awesome. Yeah. Because that's kind of what we're looking for. I don't know that there is a good solution yet for how to create a visualization and a graph in a simpler way, because it's a complex problem; I need to know what I want. Right. But I definitely think the LLM is making a big difference in this world. The simplicity of it has to be there, because I think your study also talks about platform engineering and how teams are being built out, and how extensively they're being built out, and I think LLMs help to level that playing field, because, I mean, platform engineering is just the new way of saying IT. So what else did you find out about that, and how are you seeing platform engineering? We're definitely seeing teams being formed as platform engineering. Like I said, they are kind of like a center of excellence within the organization, and they're responsible for basically what used to be IT.
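One common way to put an LLM in front of a "hundreds of tabs" dashboard is to flatten the panel state into plain text and send it along with the user's question. The sketch below only shows the prompt-assembly half; the model call itself, and everything named here (`dashboard_to_context`, the panels), is a hypothetical illustration, not how any specific vendor implements it.

```python
# Hedged sketch: flattening dashboard panels into LLM context.
# Panel data is invented; the actual model call is intentionally omitted.

def dashboard_to_context(panels):
    """Flatten dashboard panels into plain text an LLM can read."""
    return "\n".join(f"{p['title']}: {p['value']}" for p in panels)

def build_prompt(panels, question):
    """Combine dashboard state and a plain-English question into one prompt."""
    return (
        "Kubernetes dashboard state:\n"
        f"{dashboard_to_context(panels)}\n\n"
        f"Question: {question}"
    )

panels = [
    {"title": "pods pending", "value": 14},
    {"title": "node cpu %", "value": 97},
]
prompt = build_prompt(panels, "Which resource is under pressure?")
print(prompt)
```

The "make it shorter" follow-up in the conversation maps naturally onto this pattern too: the previous answer simply becomes part of the next prompt.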
IT used to deliver servers back in the day, used to buy them, rack and stack them, and stuff like that. Then they moved to the cloud, and now they're moving to Kubernetes. They own the Kubernetes infrastructure. The way we see it is that there are two ways of looking at the world. One way is a vertical way: I am an engineer, an application developer, a business owner. I need to know how my application is performing across the infrastructure. The second way is a horizontal way: I am the person allocating the infrastructure. I need to make sure that the applications are laid out correctly. I have a completely different view. And what we have is Kubernetes 360, which gives you the horizontal view, and App 360, which gives you the vertical view. Because if I am an engineer, my realm of change is limited. I cannot move things around. I can change my code, I can add more pods and stuff like that, but I cannot say, hey, I want to move this from this node to that node. But if I am a platform engineer or a DevOps engineer, I can move things around. These are two different roles looking at the world in two different ways. I like that visual. That actually makes it easier for me to understand. I've got to ask about your shirt. Yes. Making a statement here. We've all leveled up. Yes. What's the pun here? Give us the pitch on the shirt. First of all, the color. The bright yellow color. It is wonderful. You're like a ray of sunshine next to me right now. I'm loving it. I feel like I'm getting a tan on set today. I think the industry definitely needs a little bit more color. Yes. Amen to that. Yes, a little bit more color. I think what we're playing on is a level up: first of all for OpenTelemetry, being able to support native OpenTelemetry, being able to use it. A lot of companies say it's complex. It's actually not complex.
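The vertical-versus-horizontal split described here is, at bottom, the same pod inventory grouped two different ways. A minimal sketch with made-up pod data (not the Kubernetes 360 or App 360 implementation):

```python
# Hedged sketch: one set of pods, two views.
# Vertical (app owner): group by service. Horizontal (platform): group by node.
# Pod data is invented for illustration.

from collections import defaultdict

pods = [
    {"name": "cart-1", "service": "cart", "node": "node-a"},
    {"name": "cart-2", "service": "cart", "node": "node-b"},
    {"name": "auth-1", "service": "auth", "node": "node-a"},
]

def group_by(pods, key):
    """Group pod names by the given attribute."""
    groups = defaultdict(list)
    for pod in pods:
        groups[pod[key]].append(pod["name"])
    return dict(groups)

vertical = group_by(pods, "service")   # the engineer's view
horizontal = group_by(pods, "node")    # the platform engineer's view
print(vertical)    # {'cart': ['cart-1', 'cart-2'], 'auth': ['auth-1']}
print(horizontal)  # {'node-a': ['cart-1', 'auth-1'], 'node-b': ['cart-2']}
```

The "realm of change" point falls out of the grouping: the service view tells you what to fix in code, while the node view tells you what to reschedule.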
You just need to configure it properly, and it actually works. So, leveling up the use of OpenTelemetry, and leveling up observability and saying, hey, what's been done so far? It's done. It's history. It's time for a new thing. It's time for a new realm of observability, and that's what we're here for. I love that. Are you ready to level up, Rob? I'm ready to level up. I love the shirt. I love the shirts. Well, we do a swag segment here at KubeCon every time, so I might have to come over and steal one of your shirts so we can feature it. That sounds good. That's definitely our thing. All right, question for you as we wrap this up. What do you hope that... well, I have two more questions for you, actually. I'm making my own rules today. What are some of the trends you're seeing? You probably interact with a lot of different customers. What does the future of observability look like? Yeah. A few things. First of all, we see more and more customers wanting to go top down instead of bottom up, being able to define SLOs and really think about what they care about for the business. It's a big change, because it's not being driven by engineers anymore; it's being driven by the product managers, the business owners, and they want to make sure that the engineers are aligned and they meet the goals. So that's one thing we see. It's also addressing the whole issue of alert fatigue, which has been an issue for many years. Because if I'm an e-commerce website, my inventory management is not as important as my shopping cart. As an engineer, everything is the same: it's a service, it needs to be managed the same. But as a business, there is a difference, and the ability to tier it makes a big difference. So these are things that we're seeing.
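The tiering idea in the e-commerce example, shopping cart outranks inventory management, is easy to sketch as alert routing. A hedged illustration; the tier names, services, and routing rule are all invented:

```python
# Hedged sketch: tier alerts by business criticality so that only
# business-critical services page a human. All names are illustrative.

TIERS = {"shopping-cart": "critical", "inventory": "low"}

def route_alert(service, message):
    """Route an alert based on the service's business tier."""
    tier = TIERS.get(service, "low")  # unknown services default to low
    return {
        "service": service,
        "message": message,
        "tier": tier,
        "page_oncall": tier == "critical",  # only critical tiers page
    }

print(route_alert("shopping-cart", "p99 over 200ms"))  # pages on-call
print(route_alert("inventory", "sync job slow"))       # ticket only
```

This is exactly the engineer-versus-business distinction from the interview: to the code, both are just services; the `TIERS` table is where the business's priorities get written down.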
The other thing is cost. Cost was a big thing last year, and it's a big thing this year as well: the ability to control not only the cloud cost, but also the observability cost. It's becoming very, very expensive, and we see a trend of bringing it down. Now, the cloud cost is interesting, because at the end of the day, it's a trade-off between my performance and availability and the cost that I'm paying. Today these are two dispersed systems; there are the FinOps people and there's the observability side, and we see people say, hey, I need to understand the relationship. I'm paying this money, and I'm getting this service. If I pay a little bit less, what's going to be the impact on my business and the service? So we do think people are starting to correlate these two together. And obviously, security is a big thing. Just casually, security is also a big thing. It's a conversation we're having. It matters in tech just a little bit. All right, final question for you, because you've been absolutely fabulous. Let's say it's a year from now, at the next, hopefully equally fabulous, location for a CNCF event. What do you hope you can say that you can't say today? I am hoping to be able to say a lot more about what's on our plate around AI and LLMs. I think we still have a lot in our backlog that we haven't released, and I'm hoping to be able to talk about that. I really think it's going to be a big change in the way people interact with observability. That's what I'm hoping. Well, Asaf, I can't wait to hear more about it when we have you on next time. That was fantastic. Thanks for bringing your bestie on. Thank you. I really appreciate all the good vibes on stage, and thank all of you for tuning in, wherever you are on this beautiful rock. We're here in Paris for three days of live coverage of CNCF's KubeCon + CloudNativeCon. My name's Savannah Peterson.
You're watching theCUBE, the leading source for enterprise tech news.