Welcome to our session. Kubernetes is the control plane for the hybrid cloud. This is going to be a more in-depth version of a similar keynote that Clayton is doing at KubeCon this year, related to some work we're thinking about in the upstream Kubernetes community. We wanted to give you a bit more context, connect it more deeply to the problems we hear from customers, and show how OpenShift and Kubernetes are evolving to address them. My name is Joe Fernandes. I'm the GM of the Core Cloud Platforms Business Unit here at Red Hat.

And my name is Clayton Coleman. I'm architect of Hybrid Cloud at Red Hat, and I've been focused on Kubernetes and OpenShift for a very long time. If you've seen some of my previous KubeCon talks, I've focused on how boring Kubernetes should be, and needed to be, in order for us all to be successful after a pretty crazy year. I feel like this is the perfect time to talk about things that excite me. Joe agreed that these are exciting ideas. That doesn't mean we're going to do them, but it means they're a way for us to think about where we want the future of our community and our project to go, and how we can deliver the most value for application teams and operations teams at the same time, which is really what Kubernetes has been about from the beginning.

Before we get into the details, let's talk briefly about Red Hat's Open Hybrid Cloud strategy, what it's all about, and how OpenShift and Kubernetes enable it. All right, so Red Hat's strategy is Open Hybrid Cloud. It's something we've been talking about for many years, and our focus is really on two key things. First, how do we enable enterprise customers to build and manage a hybrid collection of apps and services that spans from traditional architectures to cloud native, to data analytics, AI/ML, integration, and beyond? And second, how do we enable those apps to run anywhere across a hybrid infrastructure, from the data center to multiple public clouds and out to the edge? OpenShift is our hybrid cloud platform. It's built on a foundation of Kubernetes and Red Hat Enterprise Linux, but it provides a comprehensive platform that enables enterprise customers to build, deploy, and manage applications wherever they want. If you attended our OpenShift roadmap session at the OpenShift Commons Gathering or any of the other venues, you saw that a lot of our recent work has been around adding new and better capabilities for managing multiple OpenShift clusters across multiple environments. The features we're developing to help customers manage OpenShift and their applications across a hybrid environment are the same ones we rely on ourselves to deliver OpenShift as a managed cloud service. So as you can see from this slide, OpenShift is available as a fully managed cloud service across all the major public clouds, and we also deliver it as a self-managed software solution that you can deploy and manage yourself wherever you want to run it. Either way, Kubernetes is at the core of the platform.

Yeah, and seven years ago we began this project, working in the community on a broad and expansive vision for how containers could help make application teams more successful. It was a really simple idea: orchestration of containers with a declarative API model. The API model is about intent. You say what you want, and then you make the machines go realize it, because we have other things to do. We have to go write those apps. We have to debug those apps.
We don't want to be there telling the machines what to do every day. The machines can do that for themselves. We heard clearly in the early phases from early adopters that we needed to bring new concepts in. Declarative APIs are really powerful; could we bring new concepts alongside all the ones we were incubating? That's been successful beyond our wildest dreams. Today, seven years later, a huge number of organizations, companies, and individuals run services successfully on top of Kubernetes in a way that standardizes deployment. And so we need to ask: what can we do to move Kubernetes forward?

Let's talk about the evolution of Kubernetes over those last seven years and how it's evolved to address customer needs. This is just one way to look at it; I looked at it in terms of three phases. In phase one of our Kubernetes journey, the Kubernetes API, with its core primitives and the declarative resource controllers that are part of it, allowed users to orchestrate an expanding number of application workloads. We saw this with customers and partners alike.

Yeah, and this evolution of Kubernetes has been driven by people putting it into use, finding gaps, and helping us identify where the project as a group could go. Amadeus was one of our earliest Kubernetes customers. They started using replication controllers for their long-running services; this was Amadeus before Deployments existed. They realized they also needed a solution for batch jobs. We had anticipated this at the heart of Kubernetes, but it was through that community collaboration that Amadeus was able to help drive those features. It was the very beginning of Kube, and those features exist today as a result of that collaboration in the community. Also, before I forget, Couchbase, one of our key partners, wanted to deploy databases in containers. This was a hugely controversial topic for the first five years of Kubernetes, and it was really people who were willing to believe that this was a better way to standardize deployments for all their applications, people who put the time in with the community, who made sure these were reliable. StatefulSets, which themselves have gone through a long evolution, are there to support workloads that need to be predictable over a long period of time. That was possible through those kinds of collaborations.

In the second phase, the one you see here in the middle, we needed to expand beyond the Kubernetes API. Operators, and the custom resource definitions that power them, allowed users to extend the Kubernetes API to manage more complex workloads on day 2 by adding customized automation that was specific to each component or each service.

Yeah, and early in the development of OpenShift, we made the decision in the Kubernetes community that we wanted a small, compact core that wasn't a platform as a service; it was about running applications, and obviously the scope of applications is practically unbounded. Working on OpenShift, we wanted to contribute these concepts and build them in, but we had no way to do that within Kubernetes itself. Over time, in partnership with a lot of folks in the community, that led to custom resource definitions and common controller logic, which have enabled and empowered a huge amount of extensibility over the years. Custom resources let us put the config for the cluster on the cluster.
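To make that extension mechanism concrete, here is a minimal sketch of a CRD that adds a new kind to the API. The example.dev group and the Database kind are hypothetical names chosen for illustration, not anything the partners mentioned here actually shipped.

```sh
# Register a hypothetical "Database" kind with the API server.
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.dev
spec:
  group: example.dev
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string
                replicas:
                  type: integer
EOF

# Once registered, the new kind behaves like any built-in resource:
kubectl get databases
```

Once the definition is registered, kubectl, RBAC, and the rest of the tooling treat the new kind like any built-in resource, which is the property the operator pattern builds on.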
Putting the config on the cluster as custom resources is a pattern other Kubernetes projects use as well. This idea that everything can be extended, and that you can bring new concepts in cleanly, really is a key part of Kubernetes. CoreOS picked that up, formalized it, and helped settle the pattern: it's not just the API, and it's not just the controller, it's the two of them together, and that was called Operators. The operator pattern is really about hiding complexity, whether that's for deployment or for extending Kubernetes. Through the work we've done, that's been integrated back into OpenShift, and we've seen a huge uptake in the broad ecosystem of people extending Kubernetes with their own concepts. And we think it can go further.

Yeah, so now in this third phase, we're thinking in terms of lots of applications spanning many clusters, and those clusters spanning many different environments. What we're really trying to do now, and we'll explore it in this session, is figure out how we can better leverage the Kubernetes API to manage services across multiple clusters, with those clusters running across different clouds, data centers, and edge environments.

Yeah, and today, in a sense, we're continuing that extension pattern; we're bringing in new concepts. Depending on how you approach the problem, some folks are building this out from their cloud consoles, with projects like Arc or Anthos. Within Red Hat, we've been thinking about it as: what if you had a hub cluster that acted as a management cluster for all your other clusters? What are the extensions you'd want to add? The ability to create new clusters, the ability to run integrations that ensure policy is synchronized across them. So it's pretty natural for us to think about how we can add new concepts that make multi-cluster easy. But as we've gotten going, we know that's not enough. There are always better ways to subdivide work. So we took some of the learnings from the very early days of Kubernetes, adding new concepts, concepts we never got around to, going in and building operators and the broad ecosystem of people plugging into Kube, and then these multi-cluster ideas, and we started exploring how we could take some of those ideas and compose them in novel ways.

And this is really early. I don't even call this a project, and it's definitely not a product. We're calling it a prototype. It's a way to think about these ideas together, to help us look at the same problems we've been having in new and different ways. I mentioned that Kubernetes standardizes deployment; we've been asking, what can we do to improve security? If you've got all these clusters all over the place, there's a lot of duplication. Where are the opportunities to separate control planes from data planes, and how can you improve resiliency and operational flexibility? If you have to install more and more stuff into the same cluster, you run into limits. I think to a lot of people today, Kubernetes the container orchestrator and Kubernetes the declarative API may seem inextricably linked. This is the simplest architectural slide that Joe and I could find of all of these concepts, and there's a huge amount of detail hidden here. You look at these pieces and most of us think they're all part of Kube, but we wanted to come into this and say, well, what if we changed direction?
What if it wasn't about all these other pieces? What if it was about Kubernetes the API, and what it would look like without pods or services, without nodes or kubelets, without controllers or schedulers? I like to call this talk, somebody came up with this the other day, "Nodes? Where we're going, we don't need nodes." That's my Doc Brown impression, and that's about as good as it's going to get. So while I start sharing, Joe, if you can tee me up.

Yeah, sure, let me just go back here. So while Clayton is sharing his command line, we're going to show you a preview of some really early stuff that we've been playing with around these concepts. You'll see this again during the KubeCon keynote if you're able to attend, but we figured we'd go through it here a little more slowly, a bit of a behind-the-scenes look, and take questions as we go. So hopefully, yes, we're seeing Clayton's command line. Take it away, Clayton.

Just to continue my Doc Brown joke, we've gone back in time, so you're seeing the future from the past. Just pretend the KubeCon talk has already happened, you're getting the deep dive, and I promise you won't miss anything. So what would Kubernetes look like without pods? That's the first question. Here's a pretty standard command line; I run it, and what if the server told me it doesn't know what pods are? Okay, that's an interesting idea. What can I do without pods? We tried to boil down the list of all the resources in Kubernetes. You have namespaces, so you can subdivide your work and different teams can collaborate on similar but not identical things. You have RBAC, so you can protect your resources. You have Secrets and ConfigMaps, and CRDs that let you extend the API and store generic data. This is what we're calling a prototype of a Kube-like control plane: a Kubernetes API without pods, containers, or nodes, but with extensibility, client support, and tooling that works today. You've seen me using kubectl here; what if we didn't have to throw away all of our tools and could just take everything we have and move forward?

Well, let me stop you there. If I understand you correctly, you're essentially talking to the kube-apiserver with kubectl as you normally would, but rather than having it deploy containers to a particular cluster, you're going to use the interfaces it already has in order to create this notion of a hybrid cloud control plane.

That's right: a pure control plane, Kube focused. It's the heart of the API. I can create and update resources, but the only resources there are the ones that help me; there's nothing that has to do with running workloads. It's just: what do I need to integrate anything? So this comes down to, to follow up on your question, Joe, what can we do with this? There are a ton of integrations out there today that integrate things into a Kube cluster but don't actually live on that cluster. Cloud resource operators, for example; I've got a couple of examples scattered in here. We shortened some of the names so you can actually read them, because everybody gets really long with their names, but I can create buckets or topics, I can create functions. These are all features that exist today in various operators and extensions.
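A rough sketch of what that stripped-down control plane looks like from the kubectl side, assuming a resource list boiled down the way Clayton describes; the output in the comments is illustrative and abridged, not captured verbatim from the prototype.

```sh
# Asking a pod-less control plane for pods fails, because the type isn't there.
kubectl get pods
#   error: the server doesn't have a resource type "pods"

# The remaining surface is roughly the "collaboration" primitives.
kubectl api-resources
#   NAME                        SHORTNAMES   NAMESPACED   KIND
#   configmaps                  cm           true         ConfigMap
#   namespaces                  ns           false        Namespace
#   secrets                                  true         Secret
#   serviceaccounts             sa           true         ServiceAccount
#   customresourcedefinitions   crd,crds     false        CustomResourceDefinition
#   ...plus RBAC resources, and nothing for pods, nodes, services, or
#   deployments until something adds them back.
```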
Sometimes you're dealing with multiple clusters, and you need an integration that lets you work across them. You say, I want this cluster to expose this piece, and then I'll go to my other cluster and install that CRD. One of the challenges is that they all require you to know which cluster owns it. If I had a database and it was installed on this cluster (I screwed that up in the demo; even in recorded sessions, demos are still unfaithful), I would still have to know where that database is. So we asked the question: what control plane could be the place where it all lives? That means I don't have to think about which cluster to secure or which one is okay to install an extension on. I can run the control plane, and my clusters stay separate.

So you're basically taking all those Kubernetes primitives we described earlier, the ones that came out of the first two phases of the project's evolution, users, roles, namespaces, controllers, and so forth, and applying them in a different way, right? You're now applying them to manage services, to usage that spans clusters, and spans the users and the applications that run across those clusters.

Absolutely, and it's the basics of Kubernetes, but we don't think about them that way because we're always talking about services and pods. I showed about ten examples getting installed here, and I think one of the challenges, and we see this everywhere today, is that I install one extension, and another extension, and a third extension, and the more I add, the more concepts I have to keep track of. Teasing those problems apart, so we can talk about them in different ways, gets us a lot. If I'm the security team or the infrastructure team, I may not want to know about higher-level integrations like this. That's not my job, that's not my role. So if we can tease those apart, what are the things that would help us do it?

One real challenge is multiple teams sharing a single Kubernetes cluster. That's something we've been exploring for a long time; OpenShift has spent a ton of time and effort adding tenancy to Kubernetes, keeping teams apart, making things secure. There are a lot of different trade-offs. There's no perfect security, just what is right for someone at the time, better or worse, and a single cluster, I think, is still one of the strongest boundaries we have. So if we're imagining a world where we have more clusters, and we have the cluster as a strong boundary, it led to a question that I think is super exciting: instead of just having lots of clusters, if we could make getting one more cluster really, really cheap, would we still need all those big physical clusters? And Joe, we talk about this all the time; it's a key challenge customers have.

Yeah, definitely. You're highlighting a challenge we've been talking to customers about for years, not just OpenShift customers but Kubernetes users in general: how do you manage tenancy across your various developers and teams? We saw this from the earliest days of OpenShift 3. Customers would start with a single cluster and a small team, and then inevitably that would grow. And we did a lot of work around multi-tenancy within the cluster to address that, right?
So if you saw where Red Hat invested our resources, it was into evolving features like namespaces, quotas, and role-based access control, and then additional concepts beyond Kubernetes, things like the multi-tenant OpenShift SDN to segregate application traffic, and later network policies and more. Despite these capabilities, customers always found requirements that called for creating yet one more cluster. So all the tenancy in the world doesn't eliminate the need for multiple clusters, and as the number of OpenShift clusters grew, so did the need for more multi-cluster management. We discussed that earlier; it's really what's driven our roadmap recently around bringing in better multi-cluster management capabilities. But it sounds like what you're talking about here is how we can make it easier to just ask Kubernetes itself to give us clusters when we want them and make them available for what we need. Is that right?

Yeah, absolutely. This is a little bit mind-bending, and there's a lot we're still exploring, but there's a hard limit in Kubernetes: if you want to tease apart all your different extensions, and people still need to run in that environment, you still have to install the CRDs together. There's no tenancy for CRDs. So it was really obvious: okay, we've got this control plane that's really stripped down, and we're adding CRDs to it to make it something people can work on together. So I'll show you here: I'm connected to my local control plane prototype, and as you can see, the URL we're connected to from kubectl is my local server. We're going to switch to a different context; the prototype generates a kubeconfig file that actually points to two different clusters. The second one is called "user", and when you look at the config, what you'll see is that the URL is different. So it's the same server, but I've got two clusters. There's the first cluster, the one I was showing you that didn't have pods and that I installed CRDs into. But in this second cluster, if I call get databases, it tells me it doesn't have any databases, and that's because the different clusters see different CRDs. If I call kubectl get crds, there are no CRDs. In a sense, the database from the other cluster is invisible; the new tenant can't even interact with it. That's a pretty hard security boundary. Two different teams are talking to the same server; under the covers maybe some things are shared, but to each individual team it looks like two completely different clusters.

So there are a lot of possibilities here. Imagine instead of one cluster with thousands of services, what if we had thousands of little clusters, each running one service? What are the things we could change, development-wise and operations-wise, that start to split that problem up?
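A sketch of that "one server, two logical clusters" idea from the kubectl side; the context names and URL paths below are assumptions for illustration, and the exact kubeconfig the prototype generates may differ.

```sh
# Default context: the "admin" logical cluster on the local prototype.
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
#   https://127.0.0.1:6443/clusters/admin

# Switch to the second context; same server process, different URL path.
kubectl config use-context user
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
#   https://127.0.0.1:6443/clusters/user

# Each logical cluster sees only its own CRDs, so the other tenant's
# database types simply don't exist here.
kubectl get crds
#   No resources found
kubectl get databases
#   error: the server doesn't have a resource type "databases"
```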
Yeah, it's pretty interesting. It also aligns with some of the work we've been doing lately around smaller clusters. We've seen that with customers who want to run Kubernetes at the edge, which we've been enabling with OpenShift, whether that's a three-node cluster configuration, distributed worker nodes, or even single-node clusters. We're also doing some of that work with the IBM Cloud team around a project we've called HyperShift, which lets you deploy Kubernetes in a managed control plane model. What that means is you have a central management cluster running control planes for a bunch of other clusters, and the end-user clusters are essentially just the nodes they bring to the party. So a cluster can be just their one node, and they get assigned a control plane. There are lots of interesting concepts coming up lately around how we make these clusters smaller and how we get more of them. And it's interesting to think about in this context, when you flip the view and think about thousands of applications, thousands of services, each in their own little cluster versus putting them all together. Does that resonate?

Yeah, there are a bunch of advantages, like splitting your control plane from your data plane: you keep all your high-level logic on the control side. If you could have a control plane for applications, you don't need a ton of stuff, and I'm going to show a couple of examples here. But to even get to the point where we could do this, well, I want to have thousands of applications, and if I can't bring my existing applications along, it's going to take a while and that would be really painful. So a big idea for this prototype is: how can we bring as much of Kubernetes forward as possible without having to change everything? What if we could connect our control plane to existing clusters? You can turn the kube-apiserver, the lightweight control plane, back into Kubernetes, the orchestrator.

So I've got a little CRD here, and it's just a pointer to a cluster. It has a kubeconfig, and there's a secret in there that I'm not showing. I'm going to apply that to the control plane and, oh okay, it looks like we hit one of those bugs. Okay, so it applied the resource and it's created. Now I'm going to create a second one, because if you have just one cluster it's not a very good demo, so we'll do two clusters, and you're going to see the same error again. Yep, so it went and created the clusters, and there are still some bugs; I said it was a prototype, right? So by registering these clusters, what have I done? I've imported all of their resources from those clusters as CRDs. I didn't need to implement them; those other clusters handle it. Just like we added CRDs for external integrations, what if Kube itself was an external integration? Now you can see deployments are in this list; actually, let's scroll back up. Yep, deployments are up here. So I have a deployment, I've asked for 15 replicas, it's connected to two local clusters, and I just run kubectl apply with that deployment against the control plane, and it goes and creates it. If I call get deployments now, you'll see it; I'll wait a moment, and this local setup is actually pretty fast. There, I see those resources get created. I had a simple controller here that split it up. We created the one deployment, and then we said, hey, instead of just running the deployment as-is, what if we had a little controller on the control plane that would split deployments up and run them on the individual clusters? Some of this is pretty prototypey; we're still just hacking it together.
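A sketch of that register-a-cluster-then-split-a-deployment flow; the Cluster kind, its API group (cluster.example.dev), and its fields are approximations for illustration, not the prototype's actual or stable API.

```sh
# Register an existing physical cluster with the control plane. The kubeconfig
# (with credentials) came from a secret that wasn't shown on screen.
cat <<'EOF' | kubectl apply -f -
apiVersion: cluster.example.dev/v1alpha1
kind: Cluster
metadata:
  name: us-east-1
spec:
  kubeconfig: |
    <kubeconfig for the physical cluster>
EOF

# Registering the cluster pulls its resource types (deployments, services, ...)
# into the control plane as CRDs, so an ordinary deployment can be applied:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 15
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      containers:
      - name: demo
        image: registry.example.com/demo:latest   # placeholder image
EOF

# A small controller on the control plane splits the 15 replicas across the
# registered clusters and copies the per-cluster deployments down to them.
kubectl get deployments
```

The point is only the shape of the flow: a pointer to an existing cluster turns that cluster's resource types into CRDs on the control plane, and a small controller fans the declared replicas out to the registered clusters.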
But the idea would be that you define your app, you take the apps that work today, and you stay at the high-level details instead of getting down into the weeds. We want to focus on deployments and services and integrations with databases; I don't want to be dealing with the low-level pieces. We've kind of come full circle, from Kube to the API and then back to Kube, but maybe something's different this time around. This is the big idea, and it's why the prototype was so exciting to me: what if we just didn't add pods back? What would it mean to have a Kube without pods? We've been doing this for seven years. I've got this fully running application, and it's got these other clusters out there doing the hard work. Maybe seven years into Kubernetes, that fourth step in the slide Joe showed is that we should be thinking about applications. Pods, nodes, clusters, those are details. Let's think about applications, services, how I glue things together. To me, that's hybrid: connecting all the different things, not just pods and nodes, but all of my applications in any cloud, in any environment, on-premises, hosted, as a service or not, and trying to pull that together. I think some of these ideas could be pretty instrumental in getting to that point.

That sounds awesome. And that's how you start turning the kube-apiserver into a hybrid cloud control plane, right? So how can people learn more about this? Where can they go?

So tomorrow, and I think this session is the day before KubeCon, so for my talk tomorrow we'll publish the repository, and it's github.com/kcp-dev/kcp. And kcp, well, this is a prototype; kcp is an arbitrarily chosen acronym that has nothing to do with Kubernetes or control planes, it just looks that way. We're really thinking about this as the seed of a bunch of ideas, and we want to see how they go. We're not too opinionated at this point. Our goal, as Joe said, is to bring these ideas together and move the conversation forward. So I'm excited to be here. I hope everybody loves these concepts; please reach out to us. And I hope everyone has a great KubeCon, and please watch my much more compact talk, now that we've given you the insider's preview.

Awesome. Well, thanks everybody. Thanks for joining us.