So I want to talk about why Kubernetes should be boring and how Red Hat is helping make that happen. I don't mean this kind of boring, even though the two are roughly connected.

This is about infrastructure. Kubernetes is infrastructure, and infrastructure helps us build things. But the goal of infrastructure isn't the infrastructure itself; it's what we build with it, and how our users and our applications work better because of it. If I'm a user of a service or an application, exciting means you shipped a new feature. As a developer, exciting means building my application. But if I'm running an application for a user, exciting usually means something a little bit different: it means everything is on fire again.

So in my opinion, if we want happy users, Kubernetes needs to be boring, so that everybody above us in the stack can rely on us. Red Hat helps build boring software. Open source isn't always an exciting thing; people have to chop wood and carry water. But the end result is an outcome for everybody that's better than if we all worked alone.

Rather than stay high level, I'm going to get really concrete. Red Hat runs hosted Kubernetes, because you should never ship what you don't run yourself. One of those variants is OpenShift Online Starter, which is free, multi-tenant hosted Kubernetes for everyone. These are some of the largest and densest Kubernetes clusters in the world, if not the largest. And of course, when you have very large, dense, free Kubernetes clusters, they're very exciting. So I want to go over some of what we've done to make Kubernetes a little bit less exciting in that respect.

Events are a very important part of Kubernetes. They communicate when something in the system isn't working; they're the breadcrumbs a user follows to find out why their application isn't working.
But when you have lots of applications, odds are that some of them (depending on how good you are as a programmer, maybe most of them) are broken all of the time. And in Kubernetes, we were sending too many events. We found this because we love Prometheus, and we use Prometheus from the open source community to make Kubernetes better. This graph shows all of the incoming API requests to Kubernetes over a period of time. That giant green bar is events coming from applications in the cluster that just aren't working. Along the way, we also found and fixed a number of bugs in Kubernetes that make metrics better at communicating back to end users.

The fix was actually pretty obvious: you don't need to send the same event 32,000 times. You keep it on the client side, aggregate the occurrences, and send an update every now and then, saying, hey, your application's still not working. It also set the stage for longer-term changes we're making with others in the community to make events better and to make Kubernetes scale better.

As these clusters got bigger, though, we started to hit the limits of the system. The example for us was secrets. Because we have a very dense multi-tenant cluster and everybody has their own secrets, those started to add up. One morning, we woke up and realized that every time one of the servers started up, it was fetching 800 megabytes of secret data. 800 megabytes of anything is pretty bad, and we were doing it pretty frequently. So this trickled through the system: a number of bugs were found and fixed, and we set a long-term arc for what we were going to do to make Kubernetes better. The next person who hits this will benefit from features we've put into Kubernetes that make it work at higher and higher scales.

We also got to pay down an investment that had been building for years. Kubernetes has been built on etcd since the very beginning.
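The client-side aggregation idea can be sketched roughly like this. This is a simplified illustration in Python, not the actual Kubernetes event correlation code; the `EventAggregator` class and its API here are hypothetical:

```python
import time
from collections import defaultdict

class EventAggregator:
    """Sketch of client-side event deduplication: identical events are
    counted locally and flushed as a single batched update, instead of
    sending every occurrence to the API server."""

    def __init__(self, flush_interval=10.0, now=time.monotonic):
        self.flush_interval = flush_interval
        self.now = now
        self.counts = defaultdict(int)  # (object, reason, message) -> occurrences
        self.last_flush = now()

    def record(self, involved_object, reason, message):
        # Deduplicate on the event key; no network traffic happens here.
        self.counts[(involved_object, reason, message)] += 1

    def flush(self):
        # Returns the batched updates that would be sent to the API
        # server: one entry per distinct event, with an aggregate count.
        updates = [(key, count) for key, count in self.counts.items()]
        self.counts.clear()
        self.last_flush = self.now()
        return updates
```

With this shape, an application that emits the same failure event 32,000 times produces a single update carrying a count, rather than 32,000 API requests.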
In the very early days, we worked with the etcd community and team to set a direction for etcd that would meet where Kubernetes would be three years out. Running these very large, dense clusters, we realized the time was right to make that move. But it's a very complex process, because you need to do a migration. Migration is critically important, and as developers we should never, ever give our end users something we haven't tested ourselves. Surprisingly enough, giant, complex, dense clusters are a really great way to test a migration. So we helped identify, triage, and fix issues, and moved those fixes out into the community so that everybody can benefit. The end result for us was huge: memory use on our large clusters dropped to a third of what it was before, CPU dropped significantly, and API latency dropped as well.

Unfortunately, when you have lots and lots of applications, you also tend to hit almost every edge case that could ever exist in any piece of software: all of these applications fighting against each other, clamoring for resources, people coming and going, walking away. We found and fixed those edge cases. One particular example: we enabled cron jobs too early, and an enterprising user turned them on, which ended up creating and deleting about 40 pods a second. Through Prometheus and the other operational tools we've developed around Kubernetes, we were able to spot this and make the fixes.

I could go on; we've done a lot of work to make Kubernetes better. But I would actually say it's much more important that everyone here recognizes that we as a community need to make Kubernetes boring: predictable and reliable, in open source, because at the end of the day the only thing that matters is our users. So please come and be boring with us. Thank you.
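One common way to tame the kind of churn the cron job story describes is client-side rate limiting: Kubernetes clients use token-bucket rate limiters in this spirit. Below is a minimal, illustrative token-bucket sketch; the class and its API are hypothetical, not the real client library:

```python
class TokenBucket:
    """Minimal token-bucket rate limiter sketch. A client gets `burst`
    requests up front, then refills at `rate` tokens per second, which
    caps how fast a misbehaving controller can hammer the API server."""

    def __init__(self, rate, burst):
        self.rate = rate            # tokens added per second
        self.burst = burst          # maximum bucket capacity
        self.tokens = float(burst)  # start with a full bucket
        self.last = 0.0             # timestamp of the last refill

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at burst.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A controller wrapped in a limiter like this could still create pods, but never at 40 a second for long: once the burst is spent, it is throttled back to the steady refill rate.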