My previous KubeCon talks focused on how boring Kubernetes needs to be. After a pretty crazy year, I feel this is the perfect time to talk about the ideas that really excite me: ideas that have evolved within and alongside our community, and that I think can help application authors and operations teams alike. Seven years ago, we began this project around a really simple idea: orchestration of containers with a declarative API model designed to capture intent that our machines would realize. We heard clearly from early adopters: make that model easily extensible to new concepts. That extensibility has helped us standardize how teams build and run all types of applications. But we can't rest on our laurels. What do we need to do to improve security, make new application abstractions possible, continue to improve resiliency and operational flexibility, and help simplify what's still pretty complicated? Today, to many here, Kubernetes the container orchestrator and Kubernetes the declarative API engine may seem inextricably linked. But what if we reversed direction? What if we let Kubernetes, the API, take the lead? What would Kubernetes look like without pods? What would be the basic tools in our pod-less toolbox? They're pretty powerful tools: namespaces can subdivide our work, RBAC can protect it, Secrets and ConfigMaps can hold generic config, and CRDs let us define all the new APIs we need. So what you're seeing is a really simple prototype of how a kube-like control plane might behave: the Kubernetes API without pods, containers, or nodes, but with all the extensibility, client support, and tooling needed to integrate declarative APIs describing a much more integrated world. So what would we do with this lightweight control plane? Well, we can do quite a lot, even without knowing about pods.
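As a sketch of what that pod-less toolbox can still express, here's a hypothetical CRD that defines a "Database" API with no reference to pods or nodes. The group and field names are illustrative, not from the demo:

```yaml
# Hypothetical example: a CRD a pod-less control plane could serve.
# "example.dev", "Database", and the spec fields are made-up names.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.dev
spec:
  group: example.dev
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string
                storageGB:
                  type: integer
```

Once installed, clients can create and watch `Database` objects with ordinary kube tooling, even though no kubelet will ever act on them.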
The CRDs I'm installing today are just some examples of existing integrations to kube that program things off-cluster: programming cloud resources, external DNS, or load balancing that coordinates across clusters, or even provisioning databases that connect back to apps on the cluster. A key challenge for all these integrations today is that it's not always clear who owns them. A less opinionated control plane could be a central spot for these tools and provide new avenues for coordination; after all, they'd just be declarative, Kubernetes-compatible APIs. Obviously, as we put more and more stuff together, things get more complicated. If we just put everything together in a higher-level control plane, that doesn't actually solve the real problem. We need to tease apart the different problems of different teams, use cases, and roles. For better or worse, even with our long history of building multi-tenancy into Kubernetes, a single cluster is still the practical boundary for teams and extensions. So that led us to ask: if, at the control-plane level, we could make getting one more cluster incredibly cheap, maybe we wouldn't need to do as much subdivision within a cluster. I'm currently connected to our prototype, and I'm going to switch to another cluster hosted by that same server; the prototype conveniently generated a kubeconfig, and I've got a user context. You can see that the first URL and the second URL have the same host, my local system, but the path changed, and it includes the name of a cluster. What does that mean? Well, it means that in this new context, in this new cluster, the CRDs I installed in the other cluster aren't there. If we can tenant CRDs and separate them out, one team can install what they like and another team is isolated from them. That's a pretty strong boundary.
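A generated kubeconfig along those lines might look something like this. The host, cluster names, and token are illustrative; the point is that both server URLs share one host and differ only by a path that names the cluster:

```yaml
# Illustrative kubeconfig sketch: one server, two logical clusters
# addressed by path. All names and credentials here are made up.
apiVersion: v1
kind: Config
clusters:
  - name: admin
    cluster:
      server: https://127.0.0.1:6443/clusters/admin
  - name: user
    cluster:
      server: https://127.0.0.1:6443/clusters/user
contexts:
  - name: admin
    context:
      cluster: admin
      user: loopback
  - name: user
    context:
      cluster: user
      user: loopback
current-context: user
users:
  - name: loopback
    user:
      token: not-a-real-token
```

Switching contexts then moves you between tenant clusters without ever changing the server you're talking to.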
To go further, imagine having, instead of one cluster with thousands of services, a thousand little clusters, each running one service. How might we change our applications, and the APIs we use to describe them, to bring that to life? Team A and Team B could collaborate on a service without knowing the implementation details of the other. But of course, we won't have thousands of applications to run unless we can bring them all to our new lightweight control plane. So what if we could connect our control plane to an existing cluster, and turn kube, the API, back into kube, the orchestrator? I've provided the location and credentials of a real cluster to this CRD, which lets us go and import its APIs. It's just an API; the kube model makes this easy. And this is a demo, so let's throw a second cluster in there so that we don't have a new single point of failure. You can see that we grabbed the Deployment API from the underlying clusters, which allows me to now go create one. This server doesn't know anything about deployments; it's just serving an API. But here the declarative nature of a Deployment works in our favor: we can have an integration, a really simple one, that takes that top-level Deployment, splits it in two, and tells each cluster to go and create its half. We've made it so that the application author doesn't really need to know the details of those underlying clusters. And for applications that are share-nothing, this integration works great: I can put two different replicas on two different clusters, and nobody has to know the difference. So as an application author, I've said "here's what I want" in the standard way I do today, but now my control plane is hiding some of the details from me. And this brings us full circle: we've gone from kube, to API, and back to a full kube application.
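One way to picture that simple integration is as the scheduling step of a hypothetical controller: given the top-level Deployment's replica count and a list of member clusters, decide how many replicas each cluster should run. This is a sketch of the idea under those assumptions, not the prototype's actual code, and the function and cluster names are made up:

```python
# Sketch of the demo's deployment-splitting idea: spread a desired
# replica count as evenly as possible across member clusters.

def split_replicas(total: int, clusters: list[str]) -> dict[str, int]:
    """Return a per-cluster replica count whose values sum to `total`."""
    base, extra = divmod(total, len(clusters))
    # The first `extra` clusters each absorb one leftover replica.
    return {
        name: base + (1 if i < extra else 0)
        for i, name in enumerate(clusters)
    }

# A controller watching the top-level Deployment would then create one
# Deployment per cluster with the computed replica count.
print(split_replicas(5, ["east", "west"]))  # {'east': 3, 'west': 2}
```

For share-nothing applications, that per-cluster fan-out is the whole trick: each underlying cluster sees an ordinary Deployment and needs no knowledge of the others.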
Maybe, seven years into Kubernetes, we've finally come to the point where we can define our applications without needing to know about pods, nodes, and the implementation details of clusters. I think this is a great start on some of the ideas of where we could go in the future. Everything I showed you today is part of a simple and delightfully hacky prototype that we're calling kcp. It's just a way to explore some of the concepts that might be part of our future. If these ideas interest you or intersect with your own work, check out our GitHub repository at github.com/kcp-dev/kcp. The README covers the real and not-quite-real parts of this demo and, more importantly, details the ideas, inspirations, and possibilities that I hope can excite the next seven years of Kubernetes. Thank you, and have a great KubeCon.