Orchestrating innovation is what we do here. We're all pursuing different things, and it's really difficult sometimes. We come together building on top of different infrastructure, against different constraints, pursuing dozens of different goals. You might be trying to invent an entirely new kind of business or streamline an existing one. You could be trying to scale Kubernetes down to fit on a tiny device at the far edge, or run an entire build farm spanning a data center. You might be running dozens of different clusters that you're trying to keep compliant and stable across multiple clouds, because your production workloads are already there. We're each trying to make something different, but when we find a way to play together, our impact multiplies. We're trying to make music together, but if we're not careful, we're going to make noise instead. We have to simplify, select, and combine what we're making, almost like a concierge. It's one of the hardest things we do, and it doesn't always lead where you'd expect.

Take long-term support, for example. They recently came out with a great proposal to make upgrades more reliable. The idea is to split an upgrade into two parts: first you upgrade your cluster without turning on any new features, and later you turn those features on when you're ready, so you can check stability independently. It turns the problem around: instead of stagnating on a particular version and dealing with deferred maintenance, we ask how to make upgrades themselves more reliable.

CEL, the Common Expression Language, is another example. It started out as a way to make CRD validation easier and more reliable, but now we're putting it to operational use as well, making webhooks safer by limiting their blast radius should something go wrong.
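As a rough sketch of what that CRD validation looks like (the field names here are hypothetical), a CEL rule can be embedded directly in a CustomResourceDefinition schema and enforced by the API server itself, with no admission webhook in the request path:

```yaml
# Hypothetical fragment of a CRD's openAPIV3Schema.
# The CEL rule runs server-side on every create/update.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        minReplicas:
          type: integer
        maxReplicas:
          type: integer
      x-kubernetes-validations:
        - rule: "self.minReplicas <= self.maxReplicas"
          message: "minReplicas must not exceed maxReplicas"
```

Because the rule lives in the schema rather than in a webhook, a misbehaving expression can only reject objects of this one type rather than taking down admission for the whole cluster.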
Or you can see the work SIG Auth is doing to take the integration with identity providers you already run and interface them with Kubernetes more reliably. We're building out core technology, and it's unlocking and applying to all sorts of areas, areas we didn't even think of, to enable the next leap in capability.

And more and more, that capability is coming from platform extensions. These are powerful projects growing outside of Kubernetes itself and sharing back and forth, and the multiplier here is tremendous. We have two great projects that just went v1 this year: KubeVirt and Gateway API. Take KubeVirt as an example: they're asking how to do automated performance testing similar to the way Kubernetes does, so that when their customers upgrade to the next version of KubeVirt, they know it's going to perform as well as the version they had before.

Gateway API goes in the other direction. For those that don't know, Gateway API is a networking project with a single client-facing API and multiple vendors implementing it, so a cluster admin can choose which vendor to use and users can consume it like any other API. But they had a problem. At this conference back in Amsterdam, we sat down and talked with them, and one of the things they wanted was to make it easier for an end user to integrate with a particular provider and say, "I want you to be able to see this particular aspect of my cluster and not these other ones." So we sat down and started thinking about what it would take to create a new kind of authorization model, one that would work better for them. I'm really looking forward to seeing where that goes in 2024.

Everyone is making technology for different goals, but when we combine these projects, our limits are extended and entirely new possibilities open up. And what would you use those capabilities for? In 2023, it's AI.
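To make that admin/user split concrete (the names below are purely illustrative), a cluster admin defines a Gateway that selects a vendor implementation via its GatewayClass, and an application team then attaches to it with an ordinary HTTPRoute in their own namespace:

```yaml
# Managed by the cluster admin: picks the vendor and the listener.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-vendor   # hypothetical implementation name
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
---
# Created by an application team: routes traffic to their service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
  namespace: team-a
spec:
  parentRefs:
    - name: shared-gateway
      namespace: infra
  rules:
    - backendRefs:
        - name: app-service
          port: 8080
```

The user never touches vendor-specific configuration; swapping the implementation is a change to the GatewayClass, not to every team's routes.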
And you say, how? You have a bunch of different areas all coming together in this spot. You have node features, things like the GPU support you've already heard about, but also pieces like the device plugin manager and the dynamic resource allocation APIs that make it possible to bring your custom hardware to your nodes and leverage it from inside a Kubernetes cluster. At the same time, we're working on workload APIs. Batch is still improving; Jobs still have innovation happening today, and features like Indexed Jobs make it easier to run training workloads. Concurrent to that, we have new scheduler capabilities coming out, projects like Kueue that let you schedule an entire batch, not just an individual pod. And then we have extensions building on extensions, things like KServe building on top of Knative to serve your models.

We're trying all the paths at once, and we're each adding our own melody. That's the real power of this community. There's no single conductor. We're all doing different things, producing different solutions to the same problems or to different problems, and then we share them back. Doing that, we can recombine them in novel ways to produce new things. You occasionally get a crazy aunt singing a little bit off key, but it could be she's just got the next melody.

So come join us and let's build something new. If you want to go deep on anything from platform extensions to workload enablement, drop by the Red Hat booth. I'll be there for a couple of hours after the keynote today. Looking forward to seeing you, and enjoy the conference.