Thank you, Amber. And it's nice to finally meet you in person. My name is Jeremy Eder. Thank you for staying for our sponsored keynote.

So I wanted to talk to you about... well, first, let me do this. Let me first say thanks. It hasn't been very long that Red Hat's been deeply involved in the Kubeflow community. We picked it up years ago, had it for a number of years, and then slowly drifted away. And we're back, as of October or so. And I really loved that presentation from Pepsi just now. They did a gap analysis against Kubeflow. Red Hat's doing the same thing with our customers, and hopefully we'll feed that stuff back into the community. So thank you from the Red Hat team for welcoming us. Amber, Andre, there have been a lot of folks who have helped. Josh, of course, I don't know if he's here, helped us get ourselves oriented. Cool.

OK, so again, gap analysis. The personas here are not intuitive for the cloud native community. We have to go and discover data science personas. We have to invent the persona of the ML engineer. Since folks are taking pictures, I will say this is not my image. I got it from a Hacker Noon blog, and it is, I believe, roughly correct. So, attribution.

OK, so yeah, we're interested in deeply understanding the use cases that the Pepsi folks mentioned. We've got teams internally that are running OpenShift AI; I'll talk to you about that in a second. It's our distribution that includes Kubeflow. And they're doing the same thing: they're giving us great feedback. The way Red Hat sees our entire role in the ecosystem is to capture that information, bring it to the relevant upstream, do the implementation there, and then ultimately bring it back into our products. So regardless of whether you care about Red Hat, and I don't expect all of you do, hopefully our work benefits you all indirectly. A lot of this stuff is not in Kubeflow.
And I'll pose a challenge to you all later that I need help with because of that.

Now, here's the first question. Let's try to figure out: what does it take to reach escape velocity for the Kubeflow community? Hopefully we've telegraphed our opinion, by showing up here and dedicating more and more engineering resources over time, that we think this community has a really good chance of becoming the de facto standard open source reference implementation for a hybrid-cloud MLOps platform. I really do hope Kubeflow becomes that. Can it ever become the entirety of the stack? No, but it can be one of the letters for sure. We've got things like PyTorch and others that are full-blown communities of their own.

I'd like to know whether the Kubeflow developers in here can envision themselves building that kind of platform together, not just with Red Hat, but with the whole community. Can we do that together? I think we can. And I also think the world of open source developers and protagonists needs us to do this. So tell me: if we're going to build a LAMP stack for AI, or we're going to try to be one of those letters, where do we start and stop? I believe this comes down to who is willing to build, support, and stand behind their components, and that the Kubeflow community should build a process for embracing new use cases.

Red Hat looked at the survey. By the way, great job on the user survey last year; let's do more of those. We looked at that survey and asked: what are the top couple of needs, and do any of those overlap with what our product people are asking us for? It turns out the model registry is one of them. So if you haven't seen that yet, we've put together a team internally and begun pushing; we had a design session with over 30 people looking at a model registry.
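To make concrete what a model registry tracks, here's a hypothetical minimal sketch in Python. The class names, fields, and stage labels are my own illustration of the general idea (versioned artifacts, metadata, and promotion stages), not the actual Kubeflow Model Registry API:

```python
from dataclasses import dataclass, field

@dataclass
class ModelVersion:
    """One immutable, versioned artifact of a registered model."""
    version: str
    uri: str                                      # where the serialized model lives
    metrics: dict = field(default_factory=dict)   # e.g. {"auc": 0.93}
    stage: str = "staging"                        # staging -> production -> archived

class ModelRegistry:
    """Toy in-memory registry: catalogs models, versions, and their metadata."""

    def __init__(self):
        self._models = {}  # name -> {version -> ModelVersion}

    def register(self, name, version, uri, metrics=None):
        self._models.setdefault(name, {})[version] = ModelVersion(
            version=version, uri=uri, metrics=metrics or {})
        return self._models[name][version]

    def promote(self, name, version, stage="production"):
        self._models[name][version].stage = stage

    def latest_production(self, name):
        """Newest version in production (naive lexicographic version compare)."""
        prod = [v for v in self._models[name].values() if v.stage == "production"]
        return max(prod, key=lambda v: v.version) if prod else None

# usage with made-up model names and storage paths
reg = ModelRegistry()
reg.register("fraud-detector", "1.0.0", "s3://models/fraud/1.0.0", {"auc": 0.91})
reg.register("fraud-detector", "1.1.0", "s3://models/fraud/1.1.0", {"auc": 0.93})
reg.promote("fraud-detector", "1.1.0")
print(reg.latest_production("fraud-detector").uri)  # s3://models/fraud/1.1.0
```

The point of the design session was exactly this kind of question: which of these responsibilities (lineage, promotion workflow, serving integration) belong in a shared upstream component versus each team's homegrown glue.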
I think over 40% of the respondents felt that the model registry was a gap. Because of that kind of Goldilocks scenario, where our product team wants it and the community wants it, it was a really easy choice for us to put some engineers on it. Well, my question for you is: it has to stop somewhere, right? Where should it stop? Are there other components, like the ones the Pepsi people built? Again, I keep talking about them; they did a great job just now. What stuff did they implement that represents pure tech debt for them, that they might not need to manage over time? That's the type of stuff we want to learn about and see if we can fit into the community. Red Hat does that all the time; it's kind of our thing.

So I'll limit the sponsorship section to like 30 seconds, if you'll bear with me. During the Cloud Native AI day, they talked about that working group that's going on in the CNCF, and about a reference implementation for an MLOps platform. We kind of have that already, and it's been in place for years. We call it Open Data Hub. If you just Google Open Data Hub, you'll see it. It's Kubeflow and a bunch of other stuff, including the auth part that I think was implemented at Pepsi. A lot of that stuff that hasn't made it into Kubeflow yet is part of a downstream vendor thing. We don't want to have downstream carries, because they also represent tech debt for us, and all of these things are candidates for pushing into the upstream. We would love to do that. But we don't yet understand what needs to be upstreamed, so that kind of feedback is huge, and that's why I think we need to do more surveys.

Okay, so what is coming? I mentioned the LAMP stack; what else is there? We know we need PyTorch. We know we need an inferencing engine, and we're looking really heavily into Berkeley's vLLM. And OpenAI Triton, I think, represents a huge opportunity for abstracting away GPU vendors, right?
So application developers are then insulated, and I think oneAPI made a stab at this too; we just happen to be thinking more along the lines of Triton. And this isn't NVIDIA Triton, by the way; it's a totally different thing. Then, of course, across the bottom: we ship KServe now, and we ship KubeRay. We don't ship Hugging Face TGI; we have a fork of it, because they did some licensing shenanigans. And Transformer Reinforcement Learning (TRL) is also part of our stack. Thank you again for letting us become a steering committee member; that's Yuan Tang from Red Hat. Ricardo Martinelli de Oliveira is a release manager and works on the model registry. And we're sponsoring this event.

So again, here are what I think are the big rocks for this year. Somebody's going to bark me off stage in a second, but these are, from Red Hat's perspective, the big things that can help move this community from a good project to a great project with a tremendous future ahead of it. Help us tackle these important challenges. I desperately want user feedback, like desperately, because if you're building stuff, you want it to be used, right? So build it for the users and do nothing else. Our customers demand long-term support; we're going to have to figure out a way inside Red Hat to deal with that. It took us a year to get that in place in Kubernetes, like seven or eight years ago. It was a long effort with a lot of sharp edges. Yeah, we need to do that. We need to graduate this thing. The idea of conformance... and I'm over my time limit.

The last thing I'll leave you with is, well, two things. One is: here's the KPI that our users are expecting. They want to know how much it costs to serve, for Gen AI I should say. They want to know: how much is it going to cost me to serve a million tokens, at a P90 latency of 150 milliseconds? So we've got to engineer towards that.
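To make that KPI concrete, here's a small illustrative Python sketch. The request latencies, GPU-hour counts, and dollar rates are invented for the example, not Red Hat's numbers or targets; it just shows how the two metrics would be computed from serving logs:

```python
import math

def p90(latencies_ms):
    """Nearest-rank 90th-percentile of observed per-request latencies."""
    ranked = sorted(latencies_ms)
    rank = math.ceil(0.90 * len(ranked))  # nearest-rank method
    return ranked[rank - 1]

def cost_per_million_tokens(gpu_hours, gpu_hourly_rate, tokens_served):
    """Blended serving cost normalized to one million generated tokens."""
    return (gpu_hours * gpu_hourly_rate) / tokens_served * 1_000_000

# illustrative request log: per-request latency in milliseconds
latencies = [110, 95, 142, 160, 130, 120, 101, 155, 99, 147]
print(p90(latencies))  # 155 -- over a 150 ms P90 budget, so tuning is needed

# illustrative economics: 2 GPU-hours at $4/hr serving 5M tokens
print(cost_per_million_tokens(2, 4.0, 5_000_000))  # 1.6 ($ per 1M tokens)
```

Engineering toward the KPI then means driving both numbers down together, since batching harder usually lowers cost per token but pushes P90 latency up.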
That's actually a good goal for us to have even if we can't hit it. It's a good place to engineer around. And I think that's it for me. Thank you very much.