Good morning, nerds, and welcome back to Chicago. We're here at KubeCon + CloudNativeCon. My name's Savannah Peterson, joined by John Furrier and some wonderful industry analysts this morning. We are going to be serving you AI for breakfast. I hope you're ready. John, are you ready? I'm drunk on AI already. I've been drinking it all day. You had AI mimosas this morning. We've been drinking the AI love all week here. Loving the AI. Yes, we have. It has been such a theme of the show. You're all familiar now with Dustin, our newest analyst but an industry veteran, and we also have Andy, who has AI in his last name. So you were born for this. Quite literally. I don't know about that, but okay, I'll take that as a compliment. You're AI native. AI native. That's a good one, yeah. Okay. Great to see you. Woo, it's going to be a punny day here on theCUBE, folks. Get ready and buckle your seatbelts.

All right, so since we're talking to you, Andy, what has been the theme of AI at the show? I mean, obviously it's the buzz, there's hype, but distill that for us. What are you seeing as the theme? So first of all, cloud native is trying to jump on the bandwagon of AI, and I think they are a little late to the game. The train's already left the station, because the CNCF is known for providing open source and bringing this massive developer base into helping things out with the cloud, and they did a really good job of that if you look at all the projects that graduated. Kubernetes is right at the top. And Prometheus, everybody uses it, right? Yeah, yeah. So that portion of it, they nailed. With AI, they're still trying to figure out, what do we want to do with it? Where do we help? And meanwhile, there are other open source projects, whether it's Google contributing, or LinkedIn, Spotify, Amazon, everybody is throwing their hat in. And even Databricks has its massive offerings in there for MLOps and other stuff. And things like LangChain and LlamaIndex, everybody is throwing open source stuff around. So the CNCF has been somewhat left behind by this massive base, and they woke up all of a sudden like, oh, we've got to do stuff. I still don't see any really meaty, worthy announcements here, but at least they are talking about it now. But we'll see. Yeah, it's more the hallway track, of course.

Dustin, would you agree? Mostly, I think. No, that's a no. I don't agree with that, but I'll counter that, I'll just take it in a slightly different direction. I think Andy's right to a large extent, you know, and we said it on the first day, Tim Hockin said it on the first day: Kubernetes was not built for AI, not initially anyway. And that's just a fact. Now, can Kubernetes pivot into that direction, or maybe not pivot but expand into that direction? Yes, absolutely. Kubernetes is a great abstraction of massive amounts of hardware, network, compute, and storage, all of which the biggest AI/ML workloads depend on. I mean, I would disagree with Andy on this point. Well, it's nuanced. I would disagree in the sense that the CNCF is really not in the AI game.
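[Editor's note: a minimal sketch of the abstraction Dustin describes, assuming the official Kubernetes Python client. The job name, container image, and resource numbers are hypothetical, not from the panel: the point is that the application asks for a GPU and Kubernetes decides which hardware provides it.]

```python
# A hedged sketch: submitting a GPU-backed training pod and letting
# Kubernetes abstract away which node's hardware actually runs it.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="train-job"),  # hypothetical name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="pytorch/pytorch:latest",  # illustrative image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    # ask for hardware abstractly; the scheduler places it
                    limits={"nvidia.com/gpu": "1", "cpu": "8", "memory": "32Gi"}
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```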
They're in more of the foundation game, but the companies here, they're infrastructure, so they're late to the party, mainly because they can't rely on the current AI that people are getting buzzed about, because it's infrastructure. You can't run LLMs on it; it's not easy. Generative AI is not yet baked. But you know, you look at Ansible and all these automation-heavy platforms, they've been doing automation for years. So I see this as more skeptical practitioners: more AI aware, less AI in production, mainly because they don't see a path, in my opinion. So looking at it now, every company here has an AI story. Some are AI washing, some are saying, I'm leaning into AI. So I just think it's more on the app side where you're seeing the AI wrappers, versus, hey, I'm going to roll out Kubernetes AI, because there's no real fit there, in my opinion. I mean, that's my take. You're nodding, what are you saying? Yeah, I think that's right. I think there's quite a bit of... He's being tempered this morning, isn't he? Yeah, yeah. He's being nice. You don't have to be nice to us. Tell us what you really think. That's the whole point of this segment. No, it's very nuanced. He's right, Andy's right, because there's no direct AI bullseye here. It's all kind of in the glue layers, a lot of pre-existing data, so log files you're seeing, for example, in observability. Tons of AI thinking, right, but...

So there's still way too much complexity in Kubernetes to be generally useful for any given AI developer, okay? And there was a lot in the keynote this morning about reducing that complexity, abstracting away all of the underlying infrastructure that you still need to understand. You're talking about Tim Hockin's keynote. Yeah, Tim's keynote. What was he saying? You know, I think the keynote was entitled "Into the Second Decade of Kubernetes." Amazing to think it's been 10 years, but now we're looking... Wow, that actually just made me feel old. Right? Looking into the next decade, what's going to drive the next trillion core hours of Kubernetes was the question. AI/ML is the obvious immediate answer. Maybe there are others that come after that. So to drive that, what needs to happen? And one of the big things that was brought up, and I think is a very astute point, is that we need to reduce the complexity, that Kubernetes is still way too complex for any given app developer to just write their app and expect it to scale globally. It can, but you've got to bake way too much knowledge of cluster management, networking, and storage into your application, as opposed to simply expecting that from your underlying platform. So I agree with that.

So the next trillion core hours are going to come from AI/ML, whether we like it or not. But again, going back to your original point, containers were not originally built for AI workloads. And even now, it's not majority AI workloads going into Kubernetes and containers. So what they're trying to do is, rather than, like you said, deal with the complexity of managing the clusters and the environment, and having multi-clusters is extremely complicated, AI cannot wait for that. You cannot go and micromanage these things to build training models, because the training models are massive. The last thing I want to worry about is something that small. So what they're doing is moving up a level, as we're talking about. We are talking about distributed computing, like Ray and Dask and Apache Spark and other things.
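[Editor's note: a minimal sketch of the "move a level up" pattern being described here, using Ray. The workload is a placeholder, not from the panel: the idea is that you express distributed work as tasks and leave node- and pod-level placement to the framework instead of micromanaging it.]

```python
# A hedged sketch: with Ray, distributed work is declared as tasks and
# the framework schedules them across the cluster, so the developer
# never micromanages individual containers.
import ray

ray.init()  # connects to an existing cluster, or starts a local one

@ray.remote
def train_shard(shard_id: int) -> float:
    # stand-in for real training work on one shard of the data
    return 0.01 * shard_id

# fan out 100 tasks; Ray places them on whatever workers are available
futures = [train_shard.remote(i) for i in range(100)]
results = ray.get(futures)
print(sum(results) / len(results))
```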
So all those smaller micromanaging details of the cluster, none of that is needed now. We're going to move a level up in order to train massive things. And they were talking about the fact that the LLMs, the OpenAI one particularly, were trained on a 7,500-node cluster. I mean, if you're a micromanager of containers, you'll lose your mind, you know?

So the hallucination risk is huge. I mean, in fact, I just saw that Amr Awadallah, the former Cloudera founder, was in the New York Times because his company Vectara was doing an LLM hallucination kind of test. It kind of gives you a level of how high you are, how hallucinatory you are. But it's interesting, because if you look at infrastructure, and at Google Next, Dustin, we had this conversation, because on the consumer side it's all cool; on the infrastructure side it's about what data you have, right? So I'm backing up and saying, what data is out there in this world? Is there a lot? Is Kubernetes throwing off a lot of data? Where's the AI opportunity now in this world, if it's IaaS, PaaS, SaaS? Crunching that data is Kubernetes' real opportunity. Storage is an attachment to Kubernetes. Kubernetes, in some cases, depends on, relies on, provides access to lots of data. In some cases, fast access to data; in others, it can be tempered a bit. But Kubernetes is all about crunching those numbers.

Interestingly, I wanted to ask Andy a question on this. Please do. Yes. So we were just talking... I'm right here. We were talking just before we went live about our shared experiences at IBM, two decades ago, IBM Watson being, maybe call it the prototype for AI. What do you see... The MVP for sure. Yeah, exactly. The OG. So what do you see as some of the big differences and changes to go from where we had Watson a decade ago, to where we are today with OpenAI, to where we need to go next for that real next step-function change? Right, so what IBM was building in those days was more of a cognitive knowledge system. They were trying to solve a specific problem. Not general AI, but a specific problem. And it was way ahead of its time. Given the storage, technology, and model training limitations, what they achieved with Deep Blue, to beat the best chess player in the world, and to beat the Jeopardy champion in the world... Specialized workloads. That's the point. It's very specialized; it does only one job, and it does it phenomenally well. That, they mastered. And then they applied it to healthcare, I know that was one of them, which didn't go particularly well. But now what's happening is almost all these other LLMs and everything else that's coming out, including OpenAI's, are almost reaching the level of general AI knowledge and intelligence, even though they're not there with the cognitive abilities. It's about, you know: I know the entire world's data, all the data that the internet knows; I will try to figure out what you want to know. It's at that level. It's still not quite a cognitive system yet. But eventually, when they're able to marry those two... I mean, some of the releases, we were talking about OpenAI's GPT-4 Turbo. Oh my God. I mean, the innovation they did in the last six months, since the initial couple of releases. Their Dev Day was inspiring. I've got to say, watching that whole keynote was very Apple-esque. It felt like an Apple iPhone moment, where people were cheering. This was pretty cool. I mean, when was the last time you saw a keynote that good?
I mean, Apple doesn't even have good keynotes anymore. They do videos. They feel like a customer video. There's no more innovation. It has reached that moment. Yeah, and not only that, going back to the point, by the way, OpenAI started, it all started here in the CNCF community. They gave a keynote here, didn't they? About five, six years ago. And they just let that go, and they're still falling behind here, whereas OpenAI went on to become the AI moment of the world, right? If you look at that conference, the Dev Day alone, the attendees, we're talking tens of thousands of people engaging in that, leaving these other open source programs behind. So they're going to contribute to that ecosystem, build things out of that, and OpenAI is going to figure out a way to monetize it.

I think you're right on with this one, because I think the AI conversation is not so much who has the AI. It's standard now. The experience and the consumerization of AI that ChatGPT has done, and the technologies now available, are going to make every company and every vertical have to integrate it in, because there's a clear advantage to having it, whether it's some sort of augmentation to humans, or scaling data. I just don't see a company not having it. It's like the web days of saying, I'm not going to have a website, because, oh, it's not productive for people to go get information on their own. That's going to be the next wave, right? I mean, right now, most of these companies are trying to figure out, do I want to compete against OpenAI's LLM, or do I want to produce an LLM? Unless you have deep pockets, and unless you have a stockpile of GPUs, you can't compete with that. There are already three or four established players: OpenAI, Anthropic, Cohere, and Google, and whatnot. And even the big guys like AWS stayed back. They're like, we don't want to get involved in that, right? So that's not a winning game for you. So the obvious way would be to either go on the data side of things, whether it's Snowflake or Databricks or any of these guys: you know what, I'll figure out a way to give you the data lake and whatnot, whatever you want to do with it. That's one way to do it. Or you can get into the model side by doing MLOps, by providing the productionization of the AI models. That's another way to do it. There are tons of opportunities beyond just getting an LLM out there and competing against ChatGPT. There are smaller components you can excel in, and there are companies working on that big time.

Well, I think you bring up a really good point. We're kind of late to the hype curve party, but still very early in terms of the application and advantage of all this. So I'm curious, Andy, building on that, what does the CNCF community need to do to catch back up? Do you think it's possible? Do you think the AI train is running away from us and there's nothing we can do? What do you think about it? Well, it is running, but there are things they're doing that I think they could do a little more of. For example, we talked about the MLOps thing, and one of the major open source offerings for MLOps is called Kubeflow, as you know, which is part of the CNCF community. It was originally developed by Google about seven years ago, and they took about seven years to bring it in, just about last year, right? They waited long enough.
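[Editor's note: a minimal sketch of what Kubeflow looks like to a developer, assuming the kfp v2 Python SDK. The component bodies are placeholders, not from the panel: pipeline steps are declared once in Python, compiled, and a Kubeflow Pipelines cluster handles running them on Kubernetes.]

```python
# A hedged sketch of a toy Kubeflow pipeline using the kfp v2 SDK.
from kfp import dsl, compiler

@dsl.component
def preprocess(rows: int) -> int:
    # stand-in for real feature engineering
    return rows * 2

@dsl.component
def train(rows: int) -> str:
    # stand-in for real model training
    return f"model trained on {rows} rows"

@dsl.pipeline(name="toy-training-pipeline")
def pipeline(rows: int = 1000):
    prep = preprocess(rows=rows)
    train(rows=prep.output)

# compile to a spec that a Kubeflow Pipelines cluster can execute
compiler.Compiler().compile(pipeline, "pipeline.yaml")
```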
And there are other components like that just sitting out there, and they should make a move to go get them. We were just talking about it the other day in the AI hub, and I'm like, I'd love to get involved if you want me to be, you know. They've got to go and engage those projects, because you have this massive developer community who can write anything, right? And you have problems to solve. Why are you keeping away from it? Just engage, bring them in, bring them on board. They've got the projects, they've got the data sitting there.

Well, the key is, what do you guys think are the most disruptive areas? I mean, MLOps has been around for a while; some of it may be new, some old. What are the areas you see in this community, at this show, where there's an instant disruption in a good way? Is it observability? Is it telemetry? Is it exhaust? What exhaust, what data is valuable, I guess, is the question. That's what I'm trying to figure out. And then, what's not valuable? What's old? Like, what do we not use from AIOps? If you go back three or four years, AIOps was a whole different conversation. It had nothing to do with generative AI, any of this.

Yeah, I feel strongly about this. This is going to be controversial, but I don't think all data is valuable. I think some data needs to be processed and then discarded. The amount of money and time and energy that goes into data storage... I mean, go check your Snowflake bill, go check your Amazon S3 bill. There's a lot of data out there that is stored because we've been trained to think, oh, data's cheap, storage is free, just store everything, we might have a use for it sometime in the future. I don't know about that, especially at the edge, when we're talking about lots of devices. Really agree with you there. It's so useful to use that data to retrain models, but at some point, I don't know, discard it. Delete your data. Yeah, it's a good question. Should you be a data hoarder, just for the sake of hoarding data? Or should you Marie Kondo that stuff? It's storage wars. And is it even clean or useful, you know? It's not worth storing just to store it, is it? Does it spark joy?

And the funny part is, the whole data hoarding you're talking about, it's storage wars, and it's mostly structured data, which means it's comparatively cheap to store. Even that, people are feeling the pinch on. The next wave is the unstructured data that's coming: images and video and audio. If you start hoarding that, oh man, you're going to be nailed by that. But talking about what is useful, with observability being part of it: when it comes to AI applications, there are two parts. One is applying the AI systems themselves, the LLMs. The other part is bringing in AI to do some of the observability and monitoring and other things. So things like AIOps. I totally agree.

But Dustin, I'm the opposite of you. I like to store everything. Everything. I'm a hoarder. I like to store everything, just in case. Could be a good thing or a bad thing. I mean, it could come back and bite you. The point is, this is a tough conversation, because what do you store? I mean, for cybersecurity, if there's a zero-day attack, okay, and someone's infiltrated your network, you've got to have the lineage to understand when the attack came in. So do you store everything from a compliance standpoint or not? Is time-series data valuable? Do you train the model on it? Are there data supply chain problems?
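[Editor's note: a minimal sketch of the "process it, then discard it" policy being debated, assuming boto3. The bucket name and prefix are hypothetical: an S3 lifecycle rule that expires raw data after 90 days instead of hoarding it forever.]

```python
# A hedged sketch: configure an S3 lifecycle rule so raw data under a
# prefix is automatically deleted after 90 days.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-observability-logs",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-raw-logs",
                "Filter": {"Prefix": "raw/"},  # only the raw firehose
                "Status": "Enabled",
                "Expiration": {"Days": 90},  # keep 90 days, then delete
            }
        ]
    },
)
```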
So I mean, it's an interesting conversation, because there's a cost. Now, storage gets cheaper, but there are the hoarders and there are the nots. I mean, it's kind of a personality thing. I think it totally is. I guess I was thinking a little more of... I love what you've done here, by the way. This is great. I did not expect Marie Kondo to be a point of deliberation here on theCUBE this morning. Yeah, this is great. I was thinking more security cameras, right? I mean, you probably would expire that 4K color security camera footage after some amount of time: 30 days, 60 days, 90 days. I don't think you need seven years of security camera footage, but maybe you do. That's kind of what I was thinking.

I talked to a CIO, and he said, of all the observability data he has, he doesn't even look at 10% of it. Not even 10%. That's just how it works. They don't even look at it. So they store it, but it's like... Yeah, exactly. It's the same problem the call centers have, right? I mean, they say, we monitor all calls, but how many? In finance and the regulated industries, they have to go through at least about 10 or 20% of the calls. Even that, they don't have the budget for. So that's why they're now using AI systems to do the text summarization and voice summarization. That's a huge, applicable use case. They look at it and say, you know what? We could solve this problem with 1% of the budget. That's where AI could be, and maybe is.

How many Zoom meetings have you downloaded that you've never watched? Ha ha ha. Let's not think about that. Okay. Ha ha ha. If I record the meeting, I'll watch it. Ha ha ha. For posterity, for posterity. I mean, I rewatch every one of our CUBE segments, though. Of course. So we've got to have that, and it's fed our engine. Oh, there are some pretty good ones, yeah. You talk about disruptive: try Gong AI. Record your sales calls and start extracting information and data out of them. Incredible. To be able to transcribe all of those to text, to be able to search them and look for certain keywords or acronyms, to set up, effectively, a Google alert for all of your meetings, so every time your magic word comes up it highlights, just like highlighting in a Slack channel, and comes straight to your inbox. So yeah, you don't have to watch all of it, but you can index straight into the part you care about to extract a little bit of signal from it.

Productivity is the number one asset of AI. Sure. Wherever you look at it: sales, infrastructure, if you're a dev, if you're in ops. That's going to be the test: am I more productive? So that's the question in my mind. If you're doing cluster management, what does that look like? What does productivity look like there? Agreed. Almost all of the use cases coming in... So again, the distinction between the AI use cases: one is the production and efficiency use cases; the other is the more innovative use cases. The innovative use cases are a little far out there, like the next two, three, four, whatever years. But the production use cases, when they look at them, they're like, oh my God, I could do that. Especially the code generation one, if you look at it. Oh, I can develop the same code. So I was in conversation with one of the GSIs. They took one of the existing open LLMs and trained it with their own source code, about five million lines of it. Now they have a code generation tool that's about 95, 98 percent accurate, which is much more accurate than some of the developers. So as long as you're able to figure out how to productize that...
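[Editor's note: a hedged sketch of the pattern Andy just described, continuing the training of an existing open LLM on a company's own source code, using the Hugging Face transformers library. The model name, data path, and hyperparameters are placeholders, not details from the panel.]

```python
# A hedged sketch: fine-tune an open code model on your own codebase.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "bigcode/starcoderbase-1b"  # hypothetical choice of open code model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token  # ensure the collator can pad batches
model = AutoModelForCausalLM.from_pretrained(base)

# assume the company's source code has been gathered into text files
ds = load_dataset("text", data_files={"train": "company_code/*.txt"})

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=512)

train_set = ds["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="codegen-ft", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```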
Yeah, it's a great use case. In those areas, we're talking about orders of magnitude in terms of efficiency and what people are able to do, which is remarkable. I mean, it's why it's so exciting.

Okay, last question for you all. This has been absolutely fascinating. When we're in Paris in a few months, do you think we'll still be talking about how the CNCF community is a little bit behind the AI train? Do you think they'll have caught up? I don't think they will have caught up by then, because I didn't see enough from them here to say that they're going to move fast. They are still at the talking stage. So it's going to take a while. Unfortunately, that's the truth. Dustin, you agree? Maybe. I don't know the CNCF's strategy. The CNCF wants to be the home for AI. Yeah, I don't think they'll necessarily catch up by that point. But to me, cloud native computing is about the infrastructure underneath whatever is going to sit on top. And maybe historically that's been SaaS software and web apps. Next, it's AI/ML, or web3, or whatever it might be. I think we're looking for a general-purpose compute infrastructure. If AI/ML drives that adoption, great. If it's something else, so be it. I think the CNCF probably is indifferent on whether they're AI-enabled or not in their messaging.

But every company, by the Paris timeframe, will have AI in their story. If they don't, they're not going to attract employees, not going to attract talent. Every single company will have an AI aspect to it. If they don't, they're basically not on the next gen, because every company has data, they have a productivity challenge, they have customers. And I think customers' expectation is, make your product better, and AI can do that. I agree with that. I think, to answer the question in a different way, AI is increasingly becoming ubiquitous. Will it be ubiquitous in six months? Maybe, and getting darned close. But I think AI will be as ubiquitous as, I don't know, the internet, or Wi-Fi. We've been hearing that a lot this week. Remember, Wi-Fi once upon a time was this exotic thing that you could only find in some places, and now it's ubiquitous. It's everywhere. I think that's where we'll see AI get to. The exotic Wi-Fi. Exotic Wi-Fi. It's like oxygen. Breathing oxygen.

On the note of being exotic, we're going to go ahead and wrap this segment. Dustin and Andy, thank you both so much for being here. John, always a pleasure to hear your insights. My name's Savannah Peterson. We're here in Chicago at KubeCon + CloudNativeCon. You're watching theCUBE, the leading source for technology news.