Good morning, Cloud Native community, and welcome back to the Second City. We're here at KubeCon + CloudNativeCon, CNCF's largest North American event. My name is Savannah Peterson, joined by my co-host, John Furrier. Good morning, John. Are you as excited as I am today?

Yes, it's a great day: great content, great energy, the show floor's buzzing, we've got great guests and experts coming on, AI conversations in full swing. This is going to be a great week.

I know. Speaking of which, it sounds like you're teeing up our next segment. Please welcome our fabulous guests: we've got Chuck and Chris with us today from Red Hat. Thank you both for being here. AI and Kubernetes are definitely the buzz of the show. Chuck, I'm going to turn to you to open us up a little bit. Why is Kubernetes the chosen platform for the AI development that's going on right now?

Well, I think AI is a unique new workload with a lot of requirements that Kubernetes fits really well. We've found that there aren't going to be a lot of folks actually building these models, but those that are, like IBM with watsonx.ai, needed a solution that can scale to build and train their foundation models. They're using OpenShift to do that, and Kubernetes is at the heart of OpenShift. Over the last 10 years Kubernetes has evolved so that it works in the data center, on virtualization, in multiple clouds, and it's extending to the edge. All of these footprints are where AI comes together with developers, with the applications, and with where the data sources are. It's sort of a marriage made in heaven, and also a little bit of a nice coincidence that these two waves have come together.
In May we had a great chat with Ashesh Badani, Red Hat's chief product officer. He said OpenShift is the generative AI for hybrid cloud, and it was kind of a grandiose statement, but he was pointing to where it's going. What does that mean? OpenShift is really well positioned in hybrid cloud, so what is the generative aspect of AI, and how does it feed into the customer problems you guys solve? There's a lot of business value being unlocked with AI, and that's the key enterprise value right now. The consumer side's great, we see it, generative AI gets answers to questions, but on the business side, unlocking that value is key. How does the hybrid equation accelerate it?

I mean, Red Hat wants you to be able to run your models on your hardware with your data, right? We don't want you to get stuck somewhere where you're locked in. So we're enabling AI on those hyperscalers, and you can move between them, and you can put it on prem. OpenShift being built on top of Kubernetes gives us that power anywhere in the hybrid cloud, so you're going to be able to run your AI everywhere.

What would you say to customers who ask, how do I leverage my existing environment? It seems that customers who have done the work are positioned well for the tailwind of AI. How do you see your customers leveraging what they've got and adding in, say, net-new capabilities? That seems to be the equation; what does it look like?

Yeah. For example, once you buy or license or decide to use one of the open source foundation models that are readily available now, you need to train that model, preferably on your data. And over the last 10 or 15 years, your data has ended up in lots of different places with lots of different infrastructure choices.
It could be that the public cloud is where you keep all of your customer engagement data, but your business-value data is locked behind your firewall in your data center. You need an AI platform that can run in those environments, and then you need to be able to take that trained model to your development teams so they can build it into their applications. And again, that may be a different cluster in a different location with different team members. One of the things we talk about with OpenShift is the consistency of the environment, from RHEL under the covers as the operating system, to the libraries, to the developer tooling, so that developers feel free to move as fast as they can, knowing that everything's consistent and things that worked in dev are going to work in prod, but also so that the platform engineering teams can really set up their policies and their standards at a corporate level across wherever their teams are running.

And I've got to say, a lot of the people who've been doing this a while built their own platform, which is kind of like rolling your own hardware, right? Things have moved so fast in the past five years for people building these custom apps. Now we've got a ton of great open source projects to lean on: new models that are so much better than they were years ago, new tooling to manage your development pipeline, your build pipeline, distributed training jobs across thousands of GPUs without any additional work. The new stuff is so good. I feel like we're going to add a lot of value by just managing that for you, or at least giving you the tooling so that you aren't trying to manage your own stack.

That's a great point. If I may, Savannah, real quick, because the open source community's buzzing; the organic innovation is coming out of open source. Everyone kind of knows that. But the big guys, it's interesting, they call them proprietary models.
I mean, the word proprietary, first of all, is weird, right? I guess OpenAI is proprietary, but they call themselves OpenAI. But the value is in more of the smaller pockets. Some are even saying on theCUBE, when we talk to entrepreneurs and customers: I have small models that are my IP, my crown jewels, and I want to use that, but I don't want to have leakage.

Exactly.

And okay, or I see an innovative open source project that's got some traction and momentum, say in security, and I want to integrate that. How do I incorporate it? Because what's great about open source? Leveraging code from other people.

Yeah, you shouldn't be the victim of data gravity. You should be able to run your AI where that data is, with your own model, with no one else seeing it, not having to trust a SaaS with that kind of workload. So that's one benefit for sure.

So take us through what Red Hat's doing, because I think this is a customer challenge you guys have solved. I want to take advantage of the open source innovation. At the same time, I won't say risk management, but I don't want any kind of weird stuff happening in my infrastructure. So I want to control the environment, but I want to let it run a little bit. How do I manage those guardrails? How do I set it up so it's going to be working for me?

I was hoping you'd ask that. A lot of the work that we've done in the platform has made choices for our customers, in terms of things like assuming that you want dev to quickly go into production. So we make choices like: you can't run as root in your development environment. It's sometimes a quick and speedy way of getting things done, but it leads to problems when you move into production. So the default security stance is designed very much around the shift-left movement of DevSecOps, so that developers are building on a secure platform.
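To make that concrete, here's a minimal sketch of the kind of non-root default Chuck is describing, written as a plain Kubernetes pod spec. The names and image here are hypothetical, and on OpenShift, similar settings are enforced by default through security context constraints rather than something you write by hand:

```yaml
# Hypothetical pod spec illustrating a non-root security stance.
# On OpenShift, the restricted security context constraints apply
# comparable defaults automatically; this is only an illustration.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  securityContext:
    runAsNonRoot: true            # refuse to start containers as UID 0
  containers:
    - name: app
      image: registry.example.com/demo-app:latest   # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false   # block setuid-style escalation
        capabilities:
          drop: ["ALL"]                   # drop all Linux capabilities
```

The point of catching this in dev is exactly the shift-left idea: an image that assumes root fails fast on the developer's cluster instead of failing at the production gate.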
At the same time, we want to make sure that developers don't feel hindered by that. They know they have the safety belts and the airbags and the ABS brakes, but we're not making them deal with all of that before they can turn the key and really start moving. As for the platform itself, being built on Red Hat Enterprise Linux: Red Hat is all about life-cycling open source innovation and making it enterprise-ready. Sometimes that means a new feature becomes available, but it's day zero. Who's making sure that testing and validation have happened before it touches your dev or prod environments? And then later, when you're running in production, a new CVE comes out. It's fixed in open source, but it's fixed in the latest version. What if I'm running a recent but older version? Do I have to rewrite my whole application? Do I have to break things to fix things? A lot of the value Red Hat provides is that we're backporting that innovation. We're also backporting hardware enablement for things like GPUs and TPUs. We're bringing back security fixes without you having to rewrite, re-instantiate, or touch your applications; you just do a quick refresh of the library and move forward.

What I'm hearing from you is a really great emphasis on user experience. I think that's one of the benefits of working with you guys on OpenShift: you've thought about this at scale with a lot of different things, so you can decrease the cognitive load for the developer on their journey, or for these teams as they're adopting.

Yeah, Red Hat definitely wants to make this the Kubernetes platform for AI, enabling all those workloads with the least amount of work. And we want you to be able to operationalize it.

Small task. It's a can of corn, as they would say here in the Midwest.

Well, it is a big task, actually.
And a lot of data science projects die in the experimental phase. You end up with a notebook that nobody else can read and a model sitting there doing no one any good. So we want to be the platform that allows you to operationalize that, and make sure it gets into your applications and is useful all the way through the development life cycle.

We've talked a lot about how companies are leveraging this, but we're here celebrating the open source community, and I know that is extremely important to y'all. This is actually you losing your KubeCon virginity together, your first time getting to meet the whole community, which is very exciting. We're excited to be part of that experience for you, just to keep the guests on their toes. But I'm curious, and I know this is something we very much agree on: why is the open source community so important to AI and the hype and everything that we're seeing happen right now? Chuck, I'm going to start with you.

Well, for AI and just in general, I think it's the engine for all of the innovation that we're doing. One company and one company's vision is a single voice; open source is the symphony that's really carrying the narrative and moving us forward.

Lovely analogy. Are you a singer?

No, you don't want to hear that.

I was going to say you could give us a taste right now.

But at the same time, there are a lot of different choices out there. Red Hat tries to catch waves when we can, but we also listen to our customers and listen to their needs. We have a strong history there: we caught the Kubernetes wave pretty early in the container orchestration wars, if you remember those.

Oh yeah.

That was a bet, and it was a successful bet. If it hadn't been successful, we would have had a bunch of enterprise customers that we needed to continue to life-cycle, support, and migrate to whatever won in that parallel universe.
And that's kind of the secret sauce of where Red Hat fits in. We're not always going to get it right, but we want to make sure that we're open to you either pulling in a purely open source project, or something that's been productized by one of the partners out on the show floor who are part of our ecosystem. You can plug in their service mesh if it has capabilities that aren't available in ours, or you can add on things that are bleeding edge. We're always participating in stuff that we're not necessarily productizing yet, trying to catch those waves or at least be involved in them. Backstage was a great example: a very fast-moving community that all of a sudden had everybody's attention, and we were already in the mix there and working to productize it for our customers.

And props on the security angle on Backstage too; you guys put that together pretty strong. Chris, you mentioned something earlier I want to come back to, if you don't mind. You said you want Kubernetes to be the best place to run AI workloads. Okay, let's unpack that a little, if you don't mind. What does that mean? Because seeing Kubernetes become boring is a good thing. It's working, like Linux. Why is Kubernetes best to run AI workloads? And the second part of the question: as we start to see stuff move into production, which is the key metric, what is actually going to be scaling into production? The models are great, but it's a system, right?

So I think one of the huge problems with AI in general is the cost, and the compute, obviously. Kubernetes being as scalable and performant as it is, that's just a huge benefit there. If you're going to train your model across a thousand GPUs for 30 days, Kubernetes is going to let you do that, right?
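For a rough idea of what "train across a thousand GPUs" looks like at the Kubernetes level, here's a hedged sketch of a batch training Job that requests GPUs as a schedulable resource. It assumes the NVIDIA device plugin is installed on the cluster to expose `nvidia.com/gpu`, and the image, names, and worker counts are all hypothetical:

```yaml
# Hypothetical distributed-training Job; assumes the NVIDIA device
# plugin advertises the nvidia.com/gpu resource on the worker nodes.
apiVersion: batch/v1
kind: Job
metadata:
  name: train-foundation-model
spec:
  completions: 125      # 125 workers x 8 GPUs each = 1,000 GPUs total
  parallelism: 125      # run all workers concurrently
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: trainer
          image: registry.example.com/trainer:latest   # hypothetical image
          resources:
            limits:
              nvidia.com/gpu: 8   # GPUs scheduled like any other resource
```

The design point is that GPUs are just another resource to the scheduler, so the same placement, retry, and bin-packing machinery that runs ordinary workloads also runs a month-long training job.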
And OpenShift AI helps you manage that, but it's Kubernetes underneath, and plenty of companies are doing exactly that. And also the inference: you need that kind of scalability to control the cost, right? If you've only got 10 users, you want to have two pods; if you have a thousand users, you need that to automatically scale up to a thousand pods, or whatever you need. And Kubernetes allows us to do that.

It's interesting, Kubernetes has grown up so much, into the dream state we imagined years ago: it's going to orchestrate everything. Well, let's unpack that. As open source models come in, more stuff's coming in, there are more moving parts in what is essentially a complex distributed computing system, and that's what's going on in these environments. You mentioned Backstage; we have other companies saying that solving the end-to-end work streams and workflows is a complex-systems problem. It's not like you throw something at it and it solves everything. It's got to be engineered, or architected. That's the conversation here. Is Kubernetes ready for prime time, in the sense of scaling to that next level? How would you describe that, to give someone who's enthusiastic about it the confidence that Kubernetes is moving in the right direction?

I think the one thing that Kubernetes and the community have done really well is keeping Kubernetes focused on a core set of capabilities and enabling an ecosystem of CNCF projects that build on top of it. So Kubernetes itself, like you said, has become sort of boring, stable, productive. But a lot of its extensibility is part of the secret sauce that's been built into that core, and it allows us to do things that weren't even anticipated at the beginning of Kubernetes. Red Hat has been one of the bigger proponents of KubeVirt.
So making a container orchestration engine run virtual machines was never envisioned by the Kubernetes community at the beginning, but because they left us with an orchestration engine that was extendable in a predictable way, we're able to extend it to all these new use cases. And we're seeing that one size does not fit all. We certainly see that on the edge as well, where we've done a lot of work with OpenShift to get Kubernetes into smaller and smaller footprints, and we just recently announced Red Hat Device Edge, which is a way of getting those workloads that you've designed in OpenShift and Kubernetes down to sensors and endpoints. And they'll be AI-enabled.

Which brings up the question: what are the blockers for getting AI workloads into production? From what you guys are seeing emerge, is it evolutionary, or are there things in the way? How would you answer that?

I would say it's kind of hard, because there's a handoff there, from a data scientist to the MLOps and ops personnel to finally getting consumed by your developers. A successful AI/ML project doesn't stop at the development stage. It's got to get built repeatedly, using maybe pipelines and distributed training. You've got to put it on storage. Now you've got to wrap it in a service so that your developers can make use of it, just like they would any other microservice in your organization. So we need to make sure that workflow is optimized for AI, and that people are able to make good use of their clusters that way.

I think that's important. It always comes down to ease of use; we just want to make things faster and easier, and let more people have access.

Gentlemen, this is such a fun chat to have. I'm curious, since we'll all be in Paris together: what do you hope you can say six months from now that we can't yet say? I'm going to start with you, Chris.
Oh my gosh, I'm going to say: let's hope the tooling lives up to the promise of AI, and we all get to enjoy it for our betterment.

Yeah. I think I would like to see the walls continuing to break down between the traditional ops teams and the dev teams as more companies take this journey. You know, I think we've gone from developers having to play mother-may-I and put in tickets to get stuff done, and I'm really excited that now we can set up environments, maybe based on Backstage and others, where there are policies in place but the freedom is there to move within those policies, giving developers the ability to just go, knowing that they're protected, knowing that they're not going to have to rewrite stuff when it goes into production.

That momentum for platform engineering is awesome. I think that's going to be a big driver.

Yeah. People see platform engineering as a joint, shared DevOps mission.

Yeah, and I think that becomes the new title everybody wants to have on the opposite side of the desk as they shed their VMware admin title.

We'll take that argument to Twitter; everyone's going to be weighing in on that.

I love that. Yeah, we have an opinion on that. We love platform engineering; we're biased. We think platform engineering is the next modern IT, because SRE kind of grew up in that whole DevOps movement, but now, when it goes mainstream, it's a completely re-architected complex system.

Absolutely. And the single pane of glass could be voice-activated. Provision those clusters, you know? Load up OpenShift.

We're all looking forward to a bright future.

Yeah, exactly. And I look forward to having conversations about breaking down walls and about tooling with you both in Paris. Thank you for being here with us on theCUBE. John, thank you, as always, for being my co-host.
And thank you for tuning in from the Paris of the Prairie, here at CNCF's KubeCon + CloudNativeCon. My name's Savannah Peterson. You're watching theCUBE, the leading source for emerging tech news.