Welcome to theCUBE's coverage of KubeCon EU 2024, live from Paris, France. Join hosts Savannah Peterson, Dustin Kirkland, and Rob Strechay as they interview some of the brightest minds in cloud-native computing. Coverage of KubeCon + CloudNativeCon is brought to you by Red Hat, the CNCF, and its ecosystem partners. theCUBE's coverage of KubeCon EU 2024 begins right now. Good morning, nerd fam, and welcome to Paris. We are here at KubeCon, broadcasting live for theCUBE for the next three days. Very excited to be here. The energy is awesome. The food is awesome. The champagne is awesome. Rob, I'm delighted to be joining you here in Paris. Best work trip ever, right? I think it's awesome. The wine is awesome as well, you know, besides the food. But the conversations have been awesome as well, and that's with day zero being just yesterday. Yeah, you got to get out to some of those. Oh yeah, I was over at OpenShift Commons for a little bit, getting to see some of the roadmap. I think this is a great way to kick it off, because AI has been such a topic for the entire week. I don't even know what that acronym means, Rob. I'm shocked you would bring that up. Very excited to have our fabulous guests from Red Hat here. Thank you both for joining us. Welcome to the show. Thank you. Thanks for inviting us. How is it? How's Paris so far for you? This is a big show for y'all. A lot of people love Red Hat here. It's been great. It's been amazing. We started yesterday with the Kubeflow co-located event, so it was amazing to see everyone coming in to ask about AI, and now, you know, we're opening the floor today with the big booth and all the red hats, like the one that we have here in the background. Yeah, funny you mention it. Yes. Let's talk about that, actually. So Kubeflow, what's that? Oh, so Kubeflow is one of the newer projects under the CNCF umbrella that Red Hat has started contributing to, to help bring a bit more AI/ML to the Kubernetes platform.
But I'm going to hand it over to Jeremy, who's been participating in that. I was just going to say, Jeremy, yeah, you're one of the leaders of this team. So Kubeflow is critically important to us. Kubeflow is an umbrella project; it's got five projects underneath it. We implement two and a half, three of them at this point. And, you know, it forms the basis of the workflow that we are trying to build out for our data scientists and MLOps practitioners across the hybrid cloud. Let's dig in there a little bit. What is your interaction like with that community? This is an open source community; all your people, all your users are basically in this room right now, which is a very exciting time. Yeah, yeah. What's that feedback loop like? So we had had engagements with them years ago. The sort of sea-change moment for us was when it was donated to the CNCF. You know, open source doesn't mean open governance, and when we're a vendor trying to sell something, open governance is critical from a risk management standpoint. So we got engaged after the trademarks and IP moved over. That's one of the things that makes the CNCF so great and impactful to the ecosystem: that risk management for vendors. And we also contribute as much as we get, so it's a symbiotic relationship in that way. Our relationship with Kubeflow started in August of last year, a little bit tentatively, and we've ramped up significantly. Josh Bottum is the community manager there. He welcomed us with open arms, you know, and we recently put out a blog detailing what our journey has been so far. I think it released yesterday. One of the first things I wanted our guys to do was a release: be the release manager, do the dirty work, prove that we know how the teams talk to each other, how the components interconnect, how CI works. So we're doing that for the July release, 1.9.
They recently had their first steering committee elections. There's a Nutanix person, there's an Apple person, and a Red Hatter on there. So that'll give us some kind of, I don't know if it's a control lever, but it helps. And then we sponsored Kubeflow Summit, and as of a couple of weeks ago, we contributed our first component, which was based on feedback from the last community survey that Kubeflow did. What were the top one or two needs? One of them was a model registry. So we spun up an engineering group, because our product team wanted a model registry and the community wanted a model registry, and that's the Goldilocks scenario. And you're also involved with Open Data Hub as well, right? Talk about that, because data to me was a big story yesterday: how you get data to the AI, or AI to the data, depending on whether you're doing model training, fine-tuning, or inference. Yeah, so Open Data Hub is an integration project for us. We integrate a variety of open source projects, whether they're CNCF or not, into Open Data Hub. That's kind of the equivalent of Fedora, or of OKD, the OpenShift Kubernetes distribution. So it's free. Microsoft this morning was talking about how to build an open source AI/ML platform; it's been around at Red Hat for three years, using best-of-breed components from the open source community. And downstream of that is Red Hat OpenShift AI, sold as managed or on-prem, including disconnected environments. What are the customers saying about this, or what are you hearing feedback-wise as you go into the community? Yeah, one of the things that has been very impactful is seeing the way that many of our customers, because many of our customers are OpenShift customers, have been using the platform for a while. They understand Kubernetes. They have adopted DevOps practices, right?
And one of the great things is seeing them make that evolution now to AI/ML, understanding that the best way to do this is to expand their current infrastructure and treat the AI workload as just another workload on top of Kubernetes. The Kubeflow community and KServe and many of these tools are basically facilitating that work for organizations, and now opening it up to what we call MLOps practices, which is basically bringing DevOps to managing the AI lifecycle of applications in general. So we have seen a lot of customers being very successful at extending their DevOps practice to MLOps. And not only that; one of the things they're doing before starting with GenAI, which is a big topic, I'm sure you want to cover that later, is regular machine learning with non-sensitive data, which allows them to have that trial-and-error process and understand how the AI/ML lifecycle works on top of Kubernetes. From there, once they have their application deployed, they have seen the ROI of those use cases and are ready to implement GenAI, but now they have a process in place. They have that MLOps practice in place. They have their whole stack in place, really. Yes, exactly. So what are some of the customer success patterns that you've seen, Jeremy? I would imagine you're helping onboard a lot of folks in this space. Yeah, we are. I think the thing that seems to be tripping people up is that it's a riff on DevOps that introduces a couple of new critical personas: data scientists, and what people are calling an ML engineer or an MLOps engineer. The latter is also a riff on an SRE, someone with a little more specialized knowledge around what it takes to lifecycle models, what metrics matter, and the differences in metrics for an application versus an LLM or a model of some kind.
So we're seeing people initially get tripped up by that, which is why we're focusing on our platform positioning to alleviate those pain points. Some of the customers we've been dealing with have tried to DIY it themselves and have gotten into basically every bad habit we've seen across the board. In the aggregate, what a vendor like Red Hat does is capture all of those anti-patterns and the remedies for them, and pivot that into product so that you can become productive as quickly as possible. Our platform has one goal: time to value. So whatever we can do to remove roadblocks. The key piece here is making sure that the data scientists, the machine learning practitioners, and the sort of SRE or MLOps personas are tightly coupled. I have a question for you all on this. Love this. Eventually we got to DevOps, right? So how long will those two personas remain discrete? I think the most advanced teams will blur them quickly. I think you're right. I think what's really interesting about this, and we'll be talking about it a lot this week, is platform engineering, where SREs, DevOps, ITOps, really the new IT, come together. And I think MLOps gets absorbed into that as it goes, because for organizations, to your point, it's the skills. Where are the skills for doing infrastructure? It's really those platform engineering groups that are starting to take off. What are you seeing in how they get started, how they know how to do this? You were talking about them taking on bad habits. How do they learn about good habits at this point? Great question. You want to take that one? Yeah. I was going to say we have a consulting arm that will walk through our sort of blueprints. We also spend a lot of time on AI on OpenShift, ai-on-openshift.io, I think.
It's a website where we put all of our demos so people can get a look and feel for what it's like to actually lifecycle an application on OpenShift and OpenShift AI. So we like to get people's feet wet in a non-critical scenario. Another thing is taking on projects that are super low risk, to build up your corpus of experience, build up your SOPs on how to do it, and figure out your cost structure. Because a lot of this right now is just throwing money at the problem, with a we'll-figure-it-out-later attitude, because we have to be first movers. I was wondering about that, and I'm curious, because you've seen a lot of customer stories. I do want to talk about some of the fun, exciting ones, but I'm curious what those initial first missteps are, because it does seem like everyone wants to solve their AI/ML problem, but it's kind of throw money at it, maybe throw some bodies at it. It's kind of messy right now. So what are some of the key things you're helping customers avoid as they start navigating this terrain? Yeah, one of the things that I've seen is that a lot of the customers that come to us say, we already bought the GPUs, we don't know how to use them. Ah! We're seeing this exactly. Okay, I'm so glad you brought that up, because it's a whole GPU party, right? Yeah. And then what? And then what? You've got to optimize that. Right? So they're trying to figure out which platform it is. They come to us: can you actually support GPUs? Can you support this type of GPU? And it's very interesting to see that first problem being solved. That's fine. I mean, everyone's trying to understand how the platform has to look, what the infrastructure stack is. You know, another problem that we see often is that they engage with small boutique companies that are doing some type of specific GenAI or foundation model, but they don't know where to run it, right?
Then suddenly they have to use something like the Operator Framework to make sure they can run the application on top of Kubernetes, and certify those platforms. Totally. So we see a lot of that, but at the end of the day, I think it's just a partnership with our customers, right? Sitting with them, helping them understand the whole process, helping them identify the use case. Is it on the development side or the inferencing side? What things are you doing with open source tools? Do you need support for those tools? So, you know, we take away the burden of having to manage the lifecycle or the support of those open source tools themselves; maybe they buy, or get a supported version of, the product, and we walk them through that process. Of course, there are a bunch of tools you can use do-it-yourself. That's great; we always encourage that as Red Hatters. But there are points where they need to automate the process and worry about what's important: their use case, right? At the end of the day, they need to build the right use case, the right model, bring it to production, and see the ROI on their investments. These are big investments, and you need to make sure you're realizing the value of them. Yeah, I mean, we even heard that in the keynote this morning around GPUs, and how they're running, you know, horribly underutilized in some of these on-premises environments. What other innovations and things are you guys looking at to bring to the party here? In what area? For cost control? Across the board. Okay. So, I mean, right now people are getting a little bit of sticker shock. Yeah. I was at the Kubernetes contributor summit yesterday, and I was trying to orient the room around: what is the actual problem we're trying to solve, and what is the engineering target we're aiming for?
What we've settled on, between myself and the IBM Research team, is cost per million tokens at a consistent latency. So for GenAI, like chatbots, where you're expected to have real-time responses, it's supposed to feel like a conversation, performance matters. I spent seven years in performance engineering at Red Hat, so, yeah, I'm happy to see that. This is the way we're trying to orient the conversation: have your product targets expressed in a quantifiable way and let the engineers hit those targets. When we talk to our customers, they often don't understand that, or haven't yet put it into numbers. It's more about defining goals. I think that's actually a key differentiator, because like most hype-curve technologies, and not that AI and ML haven't been around for 30 years, they have, there's always trying to solve a problem or trying to do something. But what are we doing? I mean, you're actually stepping them back and making sure they have the right bird's-eye view of what's going on. Yeah, that's something the Kubeflow community has too. I proposed, here are the five big rocks for this year, and one of them is that we have to establish a better user feedback loop. They did a yearly survey. We need to break that down to be more granular, more detailed, so we can understand what people like about it and where the gaps are, so that we can more quickly address the needs of the community. And we're moving so fast that a year is not a reasonable feedback loop. Yeah, we're in a quarterly world; at least in my opinion, every 90 days we should be touching base on what we're doing and whether we're building the right thing. And I think what's interesting is that, and you brought up IBM Research, you're part of the AI Alliance, which includes IBM, AMD, and a number of other organizations, I think Meta, Hugging Face, and others.
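As an aside, the target Jeremy describes, cost per million tokens at a consistent latency, reduces to a simple calculation once you know your hardware cost and sustained throughput. A minimal sketch (the prices and token rates here are hypothetical illustrations, not figures from the interview):

```python
def cost_per_million_tokens(gpu_cost_per_hour: float, tokens_per_second: float) -> float:
    """Cost to generate one million tokens at a steady throughput.

    gpu_cost_per_hour: hourly price of the serving hardware (e.g. one GPU)
    tokens_per_second: sustained generation throughput at the target latency
    """
    tokens_per_hour = tokens_per_second * 3600
    return gpu_cost_per_hour * (1_000_000 / tokens_per_hour)

# Hypothetical example: a $4/hour GPU sustaining 500 tokens/second
print(round(cost_per_million_tokens(4.0, 500), 2))  # → 2.22
```

The latency constraint matters because raising batch size improves tokens per second (and thus lowers this cost) while worsening per-request responsiveness, so the metric is only comparable at a fixed latency target.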
Having these alliances seems really important in this kind of community, even though it's outside the CNCF: having that big tent and bringing in not just the software side but the hardware side, and figuring out how they work together. There are hardware members, and there are academic members: Berkeley, Notre Dame, Boston University. I was just at Boston University a couple of weeks ago talking about the AI Alliance. There are things the AI Alliance is going to do that are not in the CNCF's scope. Policy. Right. And the skills gap. I was going to talk about that. Right now there are kids coming out of college who are helping to fill a skills gap alongside the folks who have been in the market for 10 years, who are established on a certain technology base and now need to retrain. So there's currently a skills gap in terms of depth of experience. And so IBM, the AI Alliance, and Red Hat are pushing universities to bring curriculum in earlier, even into high schools. Yes. In the AI space, yeah. And on the policy side, we spent time, not me personally, at the World Economic Forum advocating for legislation that doesn't bifurcate the market into haves and have-nots. Which is where it's currently headed if there's no pushback. Yeah, and the EU just passed their AI Act, and some of those regulations have pretty significant teeth to them as well. So I think you're dead-on with that. And go Terriers, I'm a BU alum, so I like to hear that. You're loving this moment. I know, I can feel you smiling on my left. Yeah, I love that. No, I think that's really important. And you just hit on it: it's the intersection of companies and creators, enterprise and entrepreneurs, but also the academic and government side of things. Because it takes all of us regulating this to make it work. Yeah.
Yeah, otherwise we're not going to have the right freeways for all this stuff to run on. I think it's great. All right, more of a personal question for both of you, and I'm going to start with you, Jen. You both get to see a lot of different applications of AI, ML, GenAI, and all the stuff you're doing at Red Hat. What has you most personally excited about where we're going, or an application that's really interesting? Oh, wow. I think the AI Alliance part of it. Like you were saying just now, I'm excited to see everyone collaborating to make sure we have the standards, people sharing the technology, understanding which use cases are going to work at the end of the day. That's the part I'm excited about. You know, as for a particular use case, I'm not attached to any of them. I think right now there's so much innovation and so many fun things; I just like to watch the customers coming up with the different use cases they're putting together. But the collaboration, I'm very excited about. I've been at a couple of events, seeing people interact and figure out where we're going to get to with AI in the future. That's the part that's very exciting. It's a really exciting time, and the collaboration is great. What about you, Jeremy? Any personal pet projects you really like? We have an example of rural hospitals that are budget-constrained, and how can we help them? One of them is a vision model that can convert a 2D ultrasound to a 3D ultrasound. So that's a good one. I'm also a huge proponent of applying AI for good, and so in the medical and healthcare field, anything we can do to accelerate disease research and protein research, these are things that have societal-level impact. And what it takes, it's not like this technology is net new, but the number of people suddenly applying their domain expertise to it, that's where we get these outcomes from.
It just made me feel good to hear you say that. I do think you've touched on two things that are really important. One, everyone talks about the democratization of AI and ML. It's lip service right now; it's not democratized unless we put it in the hands of the right people, just like you were talking about. But I do think what's exciting is that AI and ML will, in theory, help everyone when we talk about healthcare applications, for example, or telemedicine, or the things we'll be able to do in rural communities that we can't do now. And the skills gap we'll be able to close, because we'll be able to meet those people where they are, which is really exciting. All right, final question for you both, and I'm going to start with you, Jeremy, since I started with you, Jen, last time. What do you hope you can say when we interview you here at KubeCon next year that you can't currently say yet? Here's what I think we need to do. I think we need to rapidly identify the open source LAMP stack for AI. I think we need to quickly commoditize the platform around communities like Kubernetes and Kubeflow and PyTorch, potentially Ray, to really define the lingua franca of the incoming round of students and the current practitioners. The faster we get to that, the faster we get out of this mess of figuring out the grounding concepts and move on to the application of them, and that's where we get the maximum value, like I was talking about in healthcare and things like that. So I think it's particularly in Red Hat's interest, and sort of our role in the ecosystem, to help catalyze the communities that can comprise a platform that is stable, secure, performant, and boring. Well said. What about you, Jen? Well, that's funny, because I think I have more of a business answer to what Jeremy had before about the use cases. I guess it's being able to bring the customers here to explain what they're doing.
I think it would be very fun if we could showcase some of their use cases; there's nothing like learning from what other people are doing and how it can be replicated in different industries. So we see a lot of use cases out there, right? But right now a lot of the companies are just trying to get it done, to make sure it's working. So hopefully next year we have a nice array of customers coming here and showcasing their use cases. We want to have them all right here on theCUBE. I look forward to showing off their hard work. Jennifer, Jeremy, thank you both for being here, and for all the hard work you're doing to help make AI and ML more approachable, easier, more cost-effective, and more accessible for your customers. Rob, thank you so much for joining me, and thank all of you for tuning in to this fabulous first interview of our live coverage here on theCUBE at KubeCon, the CNCF's flagship European event, in Paris. My name's Savannah Peterson. Thank you for watching theCUBE, the leading source in enterprise tech news.