Welcome to theCUBE's coverage of KubeCon EU 2024, live from Paris, France. Join hosts Savannah Peterson, Dustin Kirkland, and Rob Strechay as they interview some of the brightest minds in cloud native computing. Coverage of KubeCon + CloudNativeCon is brought to you by Red Hat, the CNCF, and its ecosystem partners. theCUBE's coverage of KubeCon EU 2024 begins right now. Hello, and welcome to KubeCon + CloudNativeCon EU in Paris. On theCUBE, we're going to have wall-to-wall coverage this week, bringing you all of the things you need to know about cloud native applications and how they're really changing. Today, I'm joined by Dustin Kirkland, who's going to help me break down the keynote that just let out a few minutes ago. Welcome. Bonjour, my name is Rob. How are you? I know. That's about the extent of my French. Very good. There we go, yeah. Yes, but, well, I think it's great to be on with you again. I know you have a lot of background here. Again, love being here. You know, really bringing the developer perspective, the engineering perspective to this, and it's going to be a great week; there's a lot going on. Let's break down some of what happened this morning and yesterday in the day zero events, where we had a whole number of things, from the Red Hat Commons to Backstage Day and Observability Day and a number of other events, plus the keynotes today, where AI was really front and center. Yeah, I mean, just start with the size of this place. I don't think I've seen the total numbers yet, but this Expo Hall, we're in Expo Hall 7 of, I don't know how many there are, eight or nine here in Paris, and the lines for badge check-in and security. There are a lot, a lot of people here, Rob. It's a tremendous conference. I think they were saying it was over 12,000, the largest KubeCon so far. In Europe or in total? In total. Wow, okay.
In total. I think Amsterdam was somewhere around 10,000, which is where they capped it, so being over 12, I think they were expecting somewhere between 12 and 15,000. It shows. The energy is palpable, for sure. Yes, it is, so let's jump in. What was really interesting yesterday and today was that there were a lot of different sessions on AI. No matter if you were in the Kubernetes AI or Cloud Native AI tracks, or in sessions on things like Kubeflow or what have you, everything talked about AI. And this morning's keynote was no different from that perspective. What really stood out to you from the keynote this morning? Well, yes, AI certainly was the open, the middle, and the close. Every segment of the keynote, well orchestrated, but every segment of the keynote was all about AI/ML, and really what I think the CNCF is trying to do here is ensure that the Cloud Native Computing Foundation, Kubernetes, and that family of applications really are the home for AI/ML training, inferencing, fine-tuning, all the different types of workloads that need to run at this point. Yeah, I think that was one of the keys that I heard. Even back to yesterday, when I went to Observability Day and had been at the Red Hat Commons, there was a lot of talk about how the Kubernetes infrastructure needs to really evolve to take on model serving and resource allocation. I think that was a big theme this morning. Yeah, DRA, Dynamic Resource Allocation, saw a major upgrade in Kubernetes 1.30. And I think that's just the tip of the iceberg. NVIDIA had a keynote this morning where they talked about a couple of different ways to drive better GPU utilization and sharing. They showed four different ways of sharing GPUs, and in fact those can be layered on top of one another. And I think one of the paradoxes here is that there's this worldwide GPU shortage. I mean, you can see that just in the prices of GPUs, either new or slightly used, if you're buying them.
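For readers who haven't seen it, the Dynamic Resource Allocation mentioned here lets pods request devices like GPUs through named claims instead of the classic device-plugin resource counts. A minimal sketch follows, assuming a hypothetical `gpu.example.com` DRA driver and the alpha `resource.k8s.io` API roughly as it looked in the 1.30 timeframe; the API was alpha then and the group, version, and fields have continued to change between releases, so treat this as illustrative rather than copy-paste:

```yaml
# Sketch only: DRA was an alpha feature around Kubernetes 1.30, and
# "gpu.example.com" is a hypothetical driver/class name, not a real one.
apiVersion: resource.k8s.io/v1alpha2
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    resourceClassName: gpu.example.com   # class published by the DRA driver
---
apiVersion: v1
kind: Pod
metadata:
  name: inference-pod
spec:
  resourceClaims:
    - name: gpu
      source:
        resourceClaimTemplateName: single-gpu   # one claim minted per pod
  containers:
    - name: inference
      image: registry.example.com/inference:latest   # hypothetical image
      resources:
        claims:
          - name: gpu   # this container consumes the claimed GPU
```

The design point, and arguably why it matters for the GPU-sharing discussion, is that the driver, not a static integer count, decides how a claim maps onto physical hardware, which is what makes layered sharing schemes possible.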
There's this shortage of hardware, in part created by some supply chain issues. But at the same time, there are also underutilized GPUs in clouds. We heard that from CERN this morning in the keynote as well. And I thought what was interesting from CERN, just building off of what he was saying, is that CERN was very involved in previous open source groups around resource allocation, all the way back to what was the Global Grid Forum, then the Open Grid Forum, and then OpenStack. There is prior art for doing this. I think what was interesting was that a lot of the discussion was not about how you train foundation models. It was about how you deploy these foundation models for inference and how you fine-tune them. And there was a lot of talk about Arm-based processors and running inference on Arm-based processors. Oracle made an announcement at the last KubeCon that they were going to provide $3 million a year in free credits to the projects so that they can go out and do this. I think we're going to see a lot around this. One of the interesting threads through yesterday, in the Data on Kubernetes, or DoK, day, and I went to that session as well, was how you actually get the data to the models, or the models to the data, and how a lot of that is going to happen out at the edge. And I think that was a really interesting dichotomy: hey, containers have been great, we've been talking about persistent versus non-persistent data, and there was a lot of discussion about how you do persistence with Postgres and SQL and so on. But what are you seeing as you look out and see that people are just interested in AI, but there's a whole lot of pieces that go along with that? I actually don't think we heard all that much about the edge in the keynote here today. It's something that I hope we hear a lot more about, looking at the guest list we have coming on theCUBE this week.
I think there should be some interesting discussions there. I would be very interested in some of the talk tracks around edge as well. You're spot on. I mean, the amount of data that it takes to train a model, moving that across networks is just unrealistic, it's infeasible. And so driving some of that actual training, GPU, CPU, compute as close as possible to the actual data that it needs to consume is an important problem that I think we need to hear a lot more about. Yeah, and I think it's a very interesting task at hand for the CNCF, with the fact that they keep bringing in different groups and keep on having more in Sandbox. As of, I think it was March, there were 113 projects in Sandbox, 37 incubating, and 26 graduated. Is that total projects, or are those AI-related projects? Well, they all have AI in their name now, I'm sure, to get people in, but those are the totals. When you start to look at the 113 in Sandbox, some of those are already being used in production by companies out there, or there are commercial applications being built on them. And I think one of the interesting things is that there was a whole track yesterday for startups, about how to build a business on top of open source, and how to do it in a way that doesn't violate some of the agreements with the CNCF. What are you seeing from organizations out there? Do you see a lot more people trying to figure those models out? Yeah, for sure. I mean, things have come a long way. The first startup I was associated with, we were trying to raise money in 2009, '10, '11, built around an open source encrypted file system that I'd co-authored and co-maintained. In the VC pitches in that timeframe, 2009, '10, and '11, we were explaining open source to the VCs themselves, and many of the VCs ran the other way. We've come a long way in the last 15 years, and now open source is table stakes for most startups talking to VCs.
They do want to see a business model, obviously, but there are some tried and true ones at this point. I did find it notable, I counted at least three times in the keynote this morning, that the Apache 2 and MIT licenses were specifically mentioned. I don't know if it was a nudge or encouragement, but Priyanka and the CNCF certainly seem to be pushing for, asking for, Apache- and MIT-licensed software. That's a little different than the GPL that drove Linux, and there were a lot of comparisons between Linux the kernel, Linux the OS, and Kubernetes. But I think in part the business models surrounding Apache- and MIT-licensed software are a little better understood and maybe a little more tried and true at this point. I think we're seeing it. I did some briefings before we got here, and there was a lot of, hey, this part of it is Apache 2, this is GPL, or this is a custom license, and how they're trying to actually create business value around the Apache 2 software, the MIT-licensed software, which I think is the right way to go about doing it. People like dbt Labs have done that for years, and I think they've actually figured out a pretty successful model around that and said, hey, we're not going the other direction. I think what will be interesting is that there was a half-day OpenTofu Day yesterday. OpenTofu, for those who don't know, really exploded once HashiCorp changed their license- For Terraform. For Terraform, exactly. And what was interesting coming into this is that there are now rumors on the street that Hashi may be for sale as a public company. So we'll have to stay tuned to that this week, because I'm trying to understand what could happen if somebody bigger bought them and actually changed the licenses back, or did something with them that made it a more agreeable open source license versus where they went. So I think it's going to be really interesting. Yeah, for sure.
And I would say also, one of the big things is that right now some of the crew, Dave and John, are out at GTC in San Jose, meeting with Jensen and talking- You think they're talking AI right now? I think they might be. I think they were talking about George Lucas being there, and a whole number of other A-listers that I saw coming out. So AI has jumped the shark when you have George Lucas going, and I can't remember if it was Xzibit or one of the other rappers, going to the actual keynote for Jensen. Wow. It's crazy, but what I expect for the rest of the week is that we'll get a heavy dose of AI throughout, interwoven through all of the different discussions that we have here, which I think will be great. I also think that it's 10 years of Kubernetes coming up. We're coming up on the anniversary. Do you think most of the major things at the Kubernetes level are solved for, and it's really about cloud native now? Yes and no. I think that question was posed to Clayton in the keynote as well, what's left to be done, and he said, what's not left to be done? I mean, there's still plenty of work. Monitoring and observability still leave a little bit to be desired. It's a complicated ecosystem of things you have to plug into Kubernetes to get the logging, metrics, and observability that you ultimately need, especially now that you're talking about these AI/ML workloads that are finding their way deep into the heart of Kubernetes, the scheduler, the resource allocation, and so forth. There's a lot still left to do there. I think that's one key piece where we still have a ways to go, Rob. I think so, and I think that's a place where we'll definitely dive deep with a number of the different companies that are part of that ecosystem. I think another one that's going to be interesting to watch is really service mesh and Istio and what's gone on there. Connectivity, yeah, absolutely.
Connectivity and networking, because I think it's still tough, and I think it's still overly complicated to actually have a mesh of applications talking to each other. Organizations are looking to have that simplified, and I think a lot of what we'll see over the coming years is how we get some of these other things solved. Okay, yes, maybe the containers, the format, and how they get deployed are kind of solved for, but the actual cloud native aspects, and observability is definitely one of those, along with the networking and all of these meshes. Security too. I mean, just the security requirements in an enterprise, however they're leveraging, first, the Kubernetes pieces, second, the rest of the cloud native ecosystem, and then third, trying to take on this whole new world of AI enablement of the app. We'll actually have the GM from the OpenSSF tomorrow, and we'll also have Melinda Marks, who's the practice director for security over at Enterprise Strategy Group, on with us to wrap on that stuff tomorrow afternoon as well. So I think you're right, Dustin, that there's going to be a lot going on, and we have a lot of different and interesting things that we can dig into. So thank you for coming on board. This is going to be an awesome time, and thank you for joining us here at KubeCon + CloudNativeCon EU in Paris on theCUBE, the leader in high-tech analysis and news. Keep it tuned here, we'll be right back.