All right. Good morning, KubeCon. AI is clearly the word of the day, of the week. You'll hear it echoed throughout the halls, in the keynotes, from amazing speakers and great breakouts throughout the conference. It's no surprise: we're truly living in the age of the AI revolution. A lot has been said over the last few months about how the AI train has left the station, and if you're like me, you're trying to make sure you're on it. You're probably looking at how you can use generative AI to unlock new scenarios, new opportunities, and new experiences for your customers and users, internal or external. At the end of the day, you're trying to find a way to make your applications more intelligent.

Today, your applications might look a little bit like this: maybe more microservices, maybe a little more monolithic, hopefully in Kubernetes. The easiest way to get started is leveraging a SaaS service like OpenAI, which lets you prototype quickly, fine-tune, and go all the way to production. But many of you, I'm sure, have requirements that go beyond that. You might have data residency requirements. You might have compliance requirements. You might want more control and flexibility. You might even have existing infrastructure investments in GPU capacity that you want to leverage to make things more economical. So what we honestly see a lot these days is what we've been dubbing local models: models deployed in your own infrastructure, typically on VMs.

Now, as many of you know, and as Priyanka mentioned, container images are a great format not just for software but also for models. They're easy to distribute, you can keep both your code and your models in the same format, and you can easily manage a lot of them with access to a registry. Moreover, you can then deploy them into Kubernetes and leverage all the nice primitives and abstractions Kubernetes gives you, for example managing heterogeneous infrastructure at scale.

All right, so job done, right? We wanted local models running in Kubernetes: done. Not quite yet, because even though this makes things easier, it doesn't make them easy. There are still a number of steps to achieve that outcome: containerizing the models, since a lot of them are not containerized today; getting GPU capacity into your cluster and bootstrapping it; troubleshooting any issues with GPUs or drivers; and matching the model deployment parameters to the hardware you're using. A lot has to happen before you get there.

So we're really happy to present the Kubernetes AI Toolchain Operator, or Kaito for friends. Kaito is an open source operator that aims to simplify and automate the deployment and usage of large language models in Kubernetes. At its heart is a workspace CRD that bootstraps all of the steps you just saw. It partners with a node provisioner to get just-in-time infrastructure for both the models and your workloads; you can even leverage well-known node provisioning controllers like Karpenter. All Kaito components are fully open source, and what we're really saying here is: instead of doing all of those steps yourself to end up with your LLM in Kubernetes, you use Kaito to deploy and run that inference for you very, very easily.
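To make that concrete, here is a minimal sketch of a Kaito workspace manifest. The schema follows the project's v1alpha1 Falcon examples as best we can reconstruct them here; the workspace name, instance type, and label values are illustrative, so check the Kaito repo for the exact, current fields.

```yaml
# Minimal Kaito Workspace sketch (v1alpha1 schema assumed from the
# project's Falcon examples; verify field names against the repo).
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b
resource:
  # GPU SKU to provision just-in-time for this model (illustrative).
  instanceType: "Standard_NC12s_v3"
  labelSelector:
    matchLabels:
      apps: falcon-7b
inference:
  # Preset that pins the containerized model and its runtime defaults.
  preset:
    name: "falcon-7b"
```

Applying this single manifest is what kicks off the provisioning, model image pull, and inference deployment described above.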
Now you might be asking, how easy is it really? How easy is the CRD? Pretty easy, actually, as the sketch above shows. You pick the model you want, like Falcon, set the preset, and you can even select the infrastructure you desire. Then you're ready to go in two steps: you deploy Kaito, and you deploy the CRD. You can check the workspace and see when inference is ready. And Kaito even does something else for you: it creates an inference endpoint so your application can leverage the model right away as an HTTP server. If you check the automatically created service, its cluster IP exposes a /chat endpoint (sketched below) that your application can call immediately to make use of that Falcon model.

And there are many models. We have 10 models today, and we welcome contributions of any open source model you might want. Go to aka.ms/kaito-models and contribute your model today.

Our roadmap right now is focused on inferencing, infrastructure provisioning, and containerization, but we really want to move fast into fine-tuning, RAG, and even training. Let us know what's most important for you and what we should help you with. I hope you can join the Kaito project and community: join our community calls, check out the sessions, find the team around KubeCon and ask them for a demo, or see if it meets your use case. Please join the Kaito community to help all of you and our users create an open source AI platform and, as Priyanka said, make Kubernetes the engine of the AI revolution. Enjoy the conference. Thank you.
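For reference, here is a sketch of the two-step flow and the /chat call mentioned above. The service name, readiness output, and request payload are assumptions based on the Kaito examples rather than guarantees, so verify them against the project README.

```sh
# Deploy the workspace from the sketch earlier (Kaito itself is
# assumed to be installed already; see the repo for install steps).
kubectl apply -f workspace-falcon-7b.yaml

# Watch the workspace until it reports that inference is ready.
kubectl get workspace workspace-falcon-7b -w

# Kaito creates a ClusterIP service for the model automatically;
# the service sharing the workspace name is an assumption here.
CLUSTERIP=$(kubectl get svc workspace-falcon-7b \
  -o jsonpath="{.spec.clusterIP}")

# Call the /chat endpoint from inside the cluster (payload shape
# assumed from the Falcon preset examples).
kubectl run chat-test -it --rm --restart=Never \
  --image=curlimages/curl --command -- \
  curl -X POST "http://$CLUSTERIP/chat" \
    -H "Content-Type: application/json" \
    -d '{"prompt": "What is Kubernetes?"}'
```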