Hello, everybody, and welcome to another OpenShift Commons briefing. As we like to do on Mondays, we'd like to talk about upstream projects and new ideas and new technologies. And today, we're really happy to have Adel Zaalouk with us, who is a product manager in the OpenShift group at Red Hat. He's going to talk about some emerging multi-cluster patterns. You've probably heard about Kubernetes control planes and other things along that line that have been discussed in different community groups. But today is Adel's take on it, and I'm really looking forward to it. So Adel, introduce yourself and your background a little bit, and then take us down this path. Sure. Hi, everyone. I'm Adel Zaalouk, and I'm a product manager for OpenShift. My experience is a mixture of networking, consulting, development and research, and recently, product management. And today, I'm going to be talking about multi-cluster patterns. As you see here, the subtitle is a bit confusing, but I'm going to be explaining it along the way: there's a path to virtual, dualistic, logically centralized, physically distributed clusters. I would like to start by having us look at this figure. Given my background, I come from networking. So the first thing I thought of doing is basically mapping the stack that we have with OpenShift onto something like the OSI model. And that turned out to be the OpenShift interconnection model. That does not exist anywhere; it is something I came up with, so I'm sorry for the OSI folks. The reason I actually did that is because OSI is a representation of layers and what they do, and then there are protocols that interconnect with one another at different layers of the stack.
There are even specialized stacks like the TCP/IP model, where, for example, you could have an application that runs on UDP or an application that runs on TCP. Then there is IP at layer three, and then there's a physical layer that everything runs on top of. I think of OpenShift, or the stack that we provide, as basically similar. Red Hat has been historically known for Red Hat Linux. That's the base we build everything on top of. And then OpenShift is an addition that brings all the goodies of upstream Kubernetes to customers and to you. This basically comes in different shapes, forms, and layers. If we think about Kubernetes, you'll find a lot of cluster interfaces being defined upstream. Initially, Kubernetes didn't have that, but as time goes on, more standards get defined: like Cluster API, which deals with how machines get created, at the cluster level, on any infrastructure provider; like CNI, which deals with the networking of the cluster; like the Container Runtime Interface, which deals with what you use as a runtime to run your workloads, whether that is a normal or sandboxed or any other type of runtime that you choose; or the CSI layer, which basically consists of storage plugins and so on. I don't have time to talk about each of these layers in detail, but I can tell you that with Kubernetes, each of these layers could span an entire session on its own. With OpenShift, basically, we bring that with support and add on top a lot of layers that help the usability of these things. Then we go a layer up, and we have modes of operation for OpenShift clusters. We have, for example, self-managed or managed. You can run your clusters in a connected mode or disconnected. You have a standalone version. You have an external control plane — that's basically part of what we're going to explain today, which is more an architectural pattern, but still brings OpenShift.
And that choice brings you more and more use cases. Then we go a layer up, and you have multi-cluster management and orchestration. There's a lot happening here, and I'm sure I forgot to add many things — I was lazy at some point. Things like image registries, GitOps, pipelines: all these things fit in as blocks. And the nice thing is, we provide these as choices. We can say it depends on the use case. We have the luxury to say "it depends" because we have these blocks that can interconnect with one another in any way, shape, or form. And this, in my opinion, is the real value: we provide these building blocks, and you come and look at them and say, oh, this makes sense, I have this use case, and I would like to apply, for example, policy, or run multi-cluster with a disconnected cluster and run it with an externalized control plane. So you can use it the same way the OSI model is built — networking protocols, basically, that match and run on top of one another. In this session today, I'm going to be focusing only on three blocks spanning two layers. So let's start. Yeah. As I said, the term was a bit complicated: virtual, logically centralized, dualistic, physically distributed. I'm going to try to explain what I mean by each of these, so bear with me. For the first layer, I'm going to start from the top: I'm going to take part of the multi-cluster management story — a very small part of it, not all the blocks — and I'm going to talk about KCP, right? And KCP, to me, represents these two blocks at the top. It could represent more, but I'm going to be talking only about the two blocks today. Basically, if you look at the GitHub repo for KCP, you're going to find it is defined as a minimal Kubernetes API server.
It exposes just enough resources, and it extends the API server, or makes it pluggable enough, so that you can also define — or get rid of — the resources that you don't need. In addition, if you look at the documentation, you're going to find three major use cases that KCP tries to address. The first one is that minimalistic API server. If you look at the Kubernetes API server, you're going to find a ton of resources; you don't necessarily need all of these. So it strips out all the things that are not needed, keeps only the ones that are needed, and provides you with an interface that you can interact with without all the overhead of Kubernetes components, like the kube-controller-manager, or any of these things that deal with pods or deployments. In KCP, you could choose to not even have pods as an understood resource. The second thing is more about multi-tenancy. The way multi-tenancy has been presented so far was with the use of RBAC and namespaces, but the question that gets asked is: what if we take that a layer up and make each cluster represent a tenant, for example, and then orchestrate multi-tenancy at the level of cluster primitives instead of namespaces? That presents a stronger bubble, which might be appealing to some. And the third one is transparent multi-cluster, and the argument here is: whatever you want to apply to a cluster should also work with KCP. KCP presents one layer on top; you can attach multiple child clusters to KCP. So whatever gets deployed at the KCP layer — it should not be a problem to propagate that resource you just deployed down to the child clusters. That's basically transparency. I also like to call it lossless multi-cluster, because whatever you apply on the top layer doesn't lose value or entropy when it gets translated into these child clusters. Yeah.
There is also a Slack channel for KCP if you want more details. I think Clayton, Jason, and David have talked about a lot of the topics that I just mentioned here, and they go into the details of how things would look in the future. So if you're interested, go and join that Slack channel on the Kubernetes Slack and join the discussions. They also have community meetings, so if you have any questions, just go and join that channel. Yeah. So for the second part that I would like to talk about today — I'm not going to get into the weeds of HyperShift, but I want to present a use case that is actually more generic and can be used by other products, or other projects, not necessarily just this one. It's a pattern more than a product that I'm presenting. HyperShift deploys OpenShift, but it deploys it with a slightly different architectural pattern. That's the dualism here. And the dualistic part is — in philosophy, dualism is basically the idea that you could have the mind and the body residing in different places yet still functioning, right? So could we do that? Can we take out the mind of OpenShift and put it somewhere else and still have a functioning cluster? That's basically the question we ask in HyperShift, and yes, it is possible. You can take the control plane, the logic of that cluster, and deploy it somewhere else. Not only that, but you can also centralize it. So you could have multiple minds of different people working together in the same place, and we call that a management cluster. So you could have one management cluster that hosts the logic of all these clusters — all the control planes — in the same place. And the physical distribution is basically that you could have your nodes physically distributed across regions, across zones, across cloud providers. It doesn't really matter.
So you then get the virtual part, the dualistic part, and the logical centralization by centralizing the control planes of different clusters, separating that from the normal way of deploying. Sorry, did I get a question? No, go on — someone just unmuted. Okay, no worries. Yeah, so with normal OpenShift you have your nodes, and you're requiring a certain minimum number of nodes, which in most cases needs to be co-located with the workers. In HyperShift, we're removing that requirement of multiple nodes, and potentially we could host more than one logic — more than one mind, more than one control plane — on the same node, and scale that up or down, and use all the Kubernetes primitives to do so. The HyperShift repo is upstream and it's open source. You can go to GitHub, raise issues, and try it out. And yeah, it's still very early, so it's a good time to ask questions and challenge stuff. Why would we want a dualistic, or separate, control plane? I included here a set of advantages and features. One thing is, we're not getting rid of the co-location requirement — we're complementing it. Some customers, some users, want that co-location. They don't want dualism. They say, no, I want my mind and body in the same place. While others see the benefits of having the mind separated. So with OpenShift we provide the two flavors. When you deploy with HyperShift, you get immediate clusters, because you don't have to comply with a requirement of a minimum number of nodes to get a cluster. You are running on an existing cluster and you're hosting just the control plane pods, so you're getting kind of immediate clusters. The control planes are cheaper, because you can now host multiple of these control planes instead of one control plane per three nodes.
You can host multiple control planes on potentially one node. We're using Kubernetes to host the Kubernetes control plane — so it's Kubernetes on Kubernetes. I can scale up the pods and so on. And by the way, this pattern of dualism — control plane and workers separated — is not new. It has been used; you will find cloud providers and so on using it. But what this does is bring OpenShift: it applies that pattern to OpenShift and brings you all the benefits, that entire stack that I showed you, that choice, in that architectural pattern. So it's important to understand that it's not new, but it gives you all these features with OpenShift. You also get lifecycle decoupling, because you could upgrade, for example, the management cluster without affecting the workload clusters or the worker nodes. They could even be on different versions. They could even be on different architectures. You could also have your SREs focusing — instead of having to memorize the kubeconfigs of hundreds of clusters, you just have one kubeconfig, and then they can log in and have access, if you allow them, to debug the control plane of your cluster. Or if you are the one providing the clusters, then your SREs have the benefit of surfing across control planes and easily detecting a problem, because observability becomes easier, logging becomes easier, and all these things become easier. Not to say that it's not possible otherwise — it is definitely possible with multi-cluster management, the other blocks that I didn't talk about. You could also have multi-cluster centralized logging and centralized monitoring; it's just that the footprint here might be a bit lower. Cool. Now it comes to the part about the use case a bit more. It's not a one-to-one relationship — it's not only HyperShift that loves KCP. I'm sure KCP has a lot of other use cases. As I said, HyperShift is just a pattern that I'm presenting today.
Lots of other controllers could reuse that pattern, or build upon it, or do completely different patterns. KCP could love HyperShift, HyperShift could love KCP, other things love KCP. Today, we're going to talk about that relationship of HyperShift and KCP. At a high level, the use case that I'd like to present today is: can we have KCP as the top layer — although it doesn't always need to be the top layer — and orchestrate how we want to schedule clusters? If you remember the figure on the right here — I know the text is not visible — the block I'm highlighting was Kubernetes-native clusters. By that, I mean we want to apply all the Kubernetes concepts, but at a cluster layer. Basically, scheduling: I have a pod, I have a scheduler, the kube-scheduler; kube-scheduler, schedule the pod. Can I apply that to a cluster? Can I apply auto-scaling to a cluster? Can I apply a lease, leader election — all these primitives that exist in Kubernetes — can we take them a notch up and do that with multi-cluster, through a virtual pane of glass, which is KCP? KCP would act as the virtual interface to multiple management clusters. As I said, in HyperShift, the management cluster is the place where you host the control planes of your clusters. In this case, I have multiple management clusters, and these management clusters act as the child clusters. KCP orchestrates the placement, for example, or the scheduling of clusters to any of these management clusters, so that they can act upon it and create clusters the HyperShift way, which is by separating the control plane and the workers. Now, a bit of an overview of KCP's internals, because that is needed. KCP consists mainly of a KCP server — an API server, that minimalistic API server — and optionally, a cluster controller. The KCP server could live and survive without the cluster controller.
But when you add the cluster controller, you get a lot of benefits, which is the multi-cluster, or Kubernetes-native cluster, stuff that I talked about. That cluster controller at the moment has three components. A splitter, which basically takes care of, when you create a resource, making sure it's scheduled, or divides that resource — like a deployment. For example, if I deploy a Deployment and I have replicas, then I could replicate it to different child clusters in a load-balanced way. The CRD puller: for example, whenever I define a CRD in one of the child clusters, that gets pulled up to the virtual cluster, so it builds awareness of that CRD. And the syncer is basically more of an agent that lives on the child cluster to replicate components that I create. I will go into more details on each. And today, I'm going to be extending the syncer and the cluster controller a bit to match the use case that I have in mind. All right. So, as I said, that's another view of it. The syncer — the reason it is slashed is because it does not live on, or communicate directly with, the virtual cluster. It is in the child cluster. So in this case, you have the virtual cluster, or the control center — I call it the control center; it's not really called that, but I call it this for simplicity. Then you have the management clusters, and the syncer gets deployed to each management cluster. That becomes the agent that has awareness of resources that get deployed on the control center, on KCP's virtual cluster. And the KCP virtual cluster is really lightweight: the API server plus etcd is now a single binary, and it's very easy to run. I will show it later in the demo. So, the first thing that we want to do: HyperShift defines two resources — it defines a cluster by a resource named HostedCluster.
So, if I'm a user and I create a hosted cluster, I would like, for example, KCP to use the splitter pattern to schedule the hosted cluster and place it on the cluster that has more resources, for example. In that case, a user creates a cluster. The splitter watches and says, huh, let's see: management cluster one doesn't have any resources; management cluster two has — I could create it there. This is what I call, or what is usually called, push mode. Why? Because the splitter needs to talk to the management cluster, and the splitter has the awareness of budgets, of resources. In that case, the splitter becomes more or less the scheduler. On the other hand, you could have pull mode. That is useful in cases where, let's say, I don't have full connectivity between the control center and the management cluster; I just want awareness to be one way. In that case, the syncer could watch resources getting created on the virtual cluster and replicate them locally. In that case, the hosted cluster becomes virtual — this is why it is called a virtual hosted cluster here. And then that resource gets replicated locally to the HyperShift operator, which is an operator that takes a hosted cluster and starts creating clusters and namespaces for each cluster. That is pull mode, because it's pulling the resource instead of it being pushed from above. Now, can I use pull mode to schedule resources across more than one cluster? That's the question I asked myself. And the answer is — well, it was not so obvious for me at first. But when I was coding this, I realized: then you have a resource that gets deployed on the control center, or the virtual cluster, and that is being watched by two management clusters at the same time. So the queue for each has that resource.
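To make the push/pull distinction concrete, here is a toy Python sketch. This is not the real kcp splitter or syncer code — the dict shapes and the `budget` field are invented purely for illustration of the two connectivity models described above.

```python
# Toy sketch of push vs. pull mode (all names illustrative, not the kcp APIs).
# Push mode: the splitter on the control center picks a management cluster
# and writes to it, so it needs connectivity toward the management clusters.
# Pull mode: a syncer on each management cluster copies resources down from
# the virtual cluster, so awareness only flows one way.

def push_mode(virtual_resources, clusters):
    """Splitter pushes each resource to the cluster with the most free budget."""
    for res in virtual_resources:
        target = max(clusters, key=lambda c: c["budget"])
        target["local"].append(res)   # splitter talks to the management cluster
        target["budget"] -= 1

def pull_mode(virtual_resources, cluster):
    """A syncer on one management cluster pulls everything it sees upstream."""
    for res in virtual_resources:
        if res not in cluster["local"]:
            cluster["local"].append(res)  # replicate locally; one-way awareness
```

Note the consequence the talk walks through next: if two syncers run plain pull mode against the same virtual cluster, both end up replicating the same resource — each one's queue has it.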
So it will be created no matter what; you don't have time to decide or schedule anything — the syncers are going to watch and create at the same time. So it's not useful for scheduling one management cluster versus the other. It is, however, useful if you're thinking about HA, where basically you want to make sure that you have the same resource in more than one place. That's a good pattern if you want high availability in any use case. And then there's a mixture of these two approaches, and that's basically the way I used it. So I used the syncer to be an informant, in some form. It watches the resources on the management cluster it is deployed on, and it tells the controller manager — it talks to KCP and says, hey, the budget for this cluster is, for example, seven. KCP has its own cluster resource, so the update actually happens there. On the other side, management cluster two uses the same pattern and says, I have this budget — I have eight namespaces, or seven. And then the cluster controller looks at the budget of each. And locally, because it has access to the local resources, it looks at these and finds out which cluster has more budget, so that it can create resources, and then it makes a decision. In that case, you see the loop happening here. The decision will say, okay, seven is less than eight: management cluster one has fewer namespaces, meaning fewer clusters — because in HyperShift, a cluster gets a namespace. So it decides to assign management cluster one, for example, as the owner of the cluster resource, the virtual hosted cluster resource that got deployed. The syncer again watches and finds that its name got assigned to that resource. The same way the scheduler does with a pod.
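The loop just described — each syncer reporting its namespace count as a budget, the controller manager assigning the least-loaded cluster as owner, and only the owning syncer replicating — can be sketched as a toy. This is conceptual Python, not the real kcp or HyperShift code; the `annotations`/`owner` field names and dict shapes are illustrative assumptions.

```python
# Toy sketch of the budget-based scheduling loop (names illustrative).

def report_budget(cluster):
    # Syncer side: in HyperShift one hosted cluster gets one namespace,
    # so the namespace count approximates how many clusters are hosted.
    return {"name": cluster["name"], "namespaces": len(cluster["namespaces"])}

def assign_owner(hosted_cluster, budgets):
    # Controller-manager side: pick the least-loaded management cluster
    # and stamp its name on the virtual HostedCluster resource.
    owner = min(budgets, key=lambda b: b["namespaces"])
    hosted_cluster["annotations"]["owner"] = owner["name"]
    return owner["name"]

def sync(hosted_cluster, cluster):
    # Syncer side: replicate locally only if our name is on the resource.
    if hosted_cluster["annotations"].get("owner") == cluster["name"]:
        cluster["namespaces"].append(hosted_cluster["name"])
        return True
    return False
```

The design point is that the decision and the replication are separate steps, just as with pods: the scheduler only writes an assignment, and the assigned side does the actual work.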
When a pod gets deployed, the node tells the scheduler: I could host that, I could take care of running that pod on my resources. And it's the same way here. The syncer tells the controller manager: I have more resources to schedule that resource. The controller manager decides to assign management cluster one as the owner of that resource. And then the syncer simply does its job and replicates that resource locally. And from there on, normal operations happen: basically, the HyperShift operator takes that hosted cluster resource that got deployed in that cluster and does its job to create namespaces. So in that sense, it's more or less similar to how Kubernetes does its thing. You could even take that approach and shard it a bit, so you could have one syncer that covers a region, for example, and the resources get deployed there. Now, I haven't tried that, and I think this is worth a lot of discussion, but this is just an example of one pattern that is enabled. There are a lot of other patterns, because KCP gives you that ability with the controller manager plus the minimalistic API server. It pulls CRDs, and it is lightweight, so more of these get enabled. So this is the scheduling part, and this is what will be the topic of my demo. There's another use case, which is interesting as well, which is auto-scaling. Now, I said we want to look at clusters the same way we look at Kubernetes resources. So can I actually auto-scale actual clusters? And the answer is yes. For example, let's go through the hosted cluster path again. A user creates a hosted cluster, but there's only one management cluster, and it doesn't have resources. So it informs KCP, or the controller manager, which then asks to create, using HyperShift patterns, a hosted cluster.
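One possible sketch of that auto-scaling step: when no management cluster has budget left, a new management cluster is provisioned at the top layer (itself created the HyperShift way) before scheduling proceeds. This is purely illustrative Python — the capacity threshold and names are invented assumptions, not the actual implementation.

```python
# Toy sketch of cluster-level auto-scaling (names and threshold invented).

MAX_CLUSTERS_PER_MC = 8  # assumed capacity per management cluster, illustrative

def schedule_or_scale(management_clusters):
    # Candidates are management clusters that still have budget
    # (remember: one hosted cluster occupies one namespace).
    candidates = [mc for mc in management_clusters
                  if len(mc["namespaces"]) < MAX_CLUSTERS_PER_MC]
    if not candidates:
        # Auto-scale: provision a fresh management cluster at the top layer,
        # then fall through to normal scheduling against it.
        new_mc = {"name": f"management-{len(management_clusters) + 1}",
                  "namespaces": []}
        management_clusters.append(new_mc)
        candidates = [new_mc]
    # Back to the original use case: pick the least-loaded candidate.
    return min(candidates, key=lambda mc: len(mc["namespaces"]))["name"]
```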
The difference here is that a HyperShift operator would then be deployed on the upper layer — on the management side, the control center — and it basically takes care of auto-scaling and creating a new management cluster. Which then brings us back to the original use case: because now I have two management clusters, I create a hosted cluster resource, these two management clusters report their budgets, the controller manager acts as the scheduler and assigns one of them, and then that one picks up the resource and deploys it. So I know there are many things that might be unclear at the moment; hopefully the demo will clear things up a bit. But yeah, that's basically the conceptual part. Then we could jump to the demo — unless maybe we should see if anyone has questions before we get into it. Yeah, let's see if anyone has any questions. There's a bunch of people in the chat right now and I haven't seen anyone post a question yet, so let's give them a minute if folks have one. Let's see. There's a lot in what you've just said, so I'm thinking that maybe a demo might be a really great thing now. Okay. So why don't you power through that and let's see if we can explain. Sure. Cool. Okay. So I have tried to label the panes according to the architecture that I just talked about. You have the KCP server, then you have the controller manager, and then you have the control center — that's the virtual cluster that we're going to be talking to. And then we have management cluster one and management cluster two, which are simply kind clusters. So you see here management cluster one and management cluster two, and I am currently pointed at a KCP cluster. So this is the API server, and the controller manager. So if I get API resources — this is something unique that you'll not find anywhere else. You're going to find only this set of resources.
This tells you you're in a KCP-pointed cluster. For example, there are no Endpoints, there are no EndpointSlices. Many things are not there — only really what is needed as a basis, plus CRDs. And from the CRDs, it is important to see hypershift here. Hypershift is a CRD that represents that hosted cluster resource we talked about. Additionally, there is one namespace, which is also hypershift, where the HyperShift resources get created. So that's one thing. The other thing here is the controller manager, and the controller manager is watching for stuff. As you can see here, it's reporting the budgets for the two clusters. I'm going to explain that now. But before that, let me do something I haven't done yet — let's delete the syncer from the clusters, to make sure that we're seeing the most recent logs. And by the way, k is just an alias for kubectl, in case I'm lazy. So yeah. All right. So in management cluster one, we're going to find that there was a new syncer deployed, and management cluster two also has a new syncer. All right. Now one thing we need to clarify is the clusters. So what I have — oh, you'll not see it, okay, because of what I'm sharing. These are the resources that I will create. The first one is called Cluster; this is a KCP-specific resource. That gets deployed on the control center — it is already deployed, by the way, I'm just showing it — and the secrets here are fine because these are kind clusters; I'm going to delete them after the demo. So you see kind management two and kind management one. These are the two clusters. And what I'm saying here is basically telling KCP: hey, I'm defining these two clusters — ingest them, be aware of them. That's what this means. Cool. And later on, I will be creating hosted clusters, and these are just demos.
For example, if I look at a hosted cluster resource, I'm just going to find everything test, test, test, but this will be enough to demonstrate the idea. Okay. Cool. So back to our control view. Right. Let's look at the logs of the syncers. Here it's telling me that it is aware of a hosted cluster and a cluster resource, and it is setting informers — basically, that's the controller pattern — on HyperShift clusters and clusters, for both the guest, or virtual, cluster and the local cluster. And then it said: updated budget. What does this mean? It means that it told the virtual control center how many namespaces it has. That's the definition of a budget here. So here I have, let's see — I have nine, and if I remove the headers, eight. I have eight namespaces in that cluster. And if I repeat the same command, I also have eight namespaces, so the budget should be eight. So in the controller manager — okay, management one has eight. The budget is eight for management one: eight namespaces, meaning theoretically I have eight clusters; let's think of it this way. And management two also has eight clusters. So both of them now have equal resources, so when I create a hosted cluster resource, I could basically choose any, right? Now, when I create a hosted cluster resource, what I need on the management clusters is an actual controller, because at the virtual cluster layer I don't have a controller — I literally don't have any pods. So no actual control happens on the virtual cluster; in this case, it is more of a proxy, right? The actual controller will be in the hypershift namespace. That's where the HyperShift operator basically lives and acts on resources. So now let's go ahead and create the hosted cluster resource — one of them, that dummy resource again that I told you about. Let's create that. And I'm creating that from the control center.
So I'm pointing my kubectl at the kubeconfig of the virtual cluster. So: sample hosted cluster number one. That got created. Let's see what happened. Okay. So we see the budget — the controller manager was aware that the budget became nine. That means that one of these two clusters scheduled the HyperShift resource. So in that case, let's get that name. Let's first look at the logs of the syncer. What did it say? The syncer said: has owner annotation? It did not have the owner annotation yet, but it had a cluster ID. And it updated the budget, so that its name gets set on the cluster resource. So that's something I forgot to show, sorry. Let's see. The cluster controller manager will update the hosted cluster resource with the owner. So first it is aware — it's picking up the budgets — and then it sees which one has less budget, and in that case, both were equal. And it updates the actual resource that wants to be deployed. Let's grep for that. So the owner got updated to be management two, not management one. So I should not find the namespace here; I should find it on management two. There was no cluster namespace, only the hypershift namespace. But here, if I look at the namespaces, I should expect — yes. So there was an additional namespace that got created, which should represent the cluster. And if I look at the syncer logs, it says something like: I'm the cluster guardian, provisioning — in a second. So it recognized that it is the owner of this by watching the cluster resource, the hosted cluster, and it started replicating that to the local cluster for the HyperShift operator to act upon. So yeah, look at the namespaces: as I said, hypershift, example. And in a beautiful world — well, here I'm not planning to demo HyperShift itself. The namespace here is empty, because I literally didn't define anything in the resource, so nothing got scheduled.
But it provisioned the namespace, which represents a cluster. And usually, if we were demoing HyperShift, that namespace for the cluster would contain the control plane components. So yeah, now I create a new resource. It chose management two before because management one and two had equal budgets — they both had eight namespaces — so it chose randomly. Now I create another one, and I would expect that it gets deployed to management one, because one now has fewer namespaces, meaning fewer clusters, so it could accommodate more resources. So, check again — nothing yet; this is all from three minutes ago. Check again — 18 seconds. So the cluster got scheduled to the management cluster that had fewer namespaces and thus more resources. That basically shows that with minimal effort, I was able to apply scheduling mechanisms and scheduling primitives at the cluster layer. And I could do a lot more — I could do auto-scaling and basically anything. And as I said, the relationship between KCP and HyperShift is not one-to-one. The reason I haven't shown anything related to HyperShift itself is because this is more of a pattern. Any controller could literally use the same thing that I did here. So you could apply, for example — I don't know — an etcd resource that follows the same pattern and gets scheduled to the cluster that has the controllers in the back end. So: scheduling at the cluster layer. And yeah, that's basically the demo. So I think now we can take questions. All right. Well, we have one question from Michael, who is asking if we can leverage KCP to write a splitter to split an app — a Service and a Deployment — across two distinct Kubernetes clusters. Yeah. So as I said, KCP is just acting as a proxy, and there is something like the splitter pattern here.
So let me, yeah, this splitter pattern: if we look at the repo, the splitter looks at the deployment, for example, looks at the replicas, and it has awareness of the clusters it has ingested. So it could separate a service or a deployment across two different clusters. That is also possible. There is not 100% support for everything right now, because KCP at this point in time is a prototype, but it's very extensible to match the use case you just described. We'll see. I think that answered his question, and I'm going to see if anyone else has any questions here. I'll give everybody a few minutes. Okay, it looks all clear. I think what's really interesting to me about this whole use case you're describing is the applicability to so many other use cases. And, you know, I know we're Red Hat and we're all open shifters, so HyperShift is in our bailiwick, but it really bodes well, I think, for the concepts behind KCP and applying them across the board, regardless of what the use case is. So the slide you had earlier with how to get in touch with the KCP community, I think that's probably where you want people to go if they want to continue the conversation, the KCP prototype one. Or is there another place where you would like people to reach out to you and talk about this topic? Yeah. So there are two things that I briefly talked about here, right? If you look at that layered architecture, the first thing is the KCP bit, which is the multi-cluster part, and for that you can go and talk to folks like Clayton and David and Jason about the use cases. They're discussing that every day, or every week, sorry.
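As a minimal sketch of how such a splitter could divide a Deployment's replica count across member clusters (a hypothetical helper for illustration; KCP's actual splitter prototype lives in the repo and is more involved):

```python
def split_replicas(total: int, clusters: list[str]) -> dict[str, int]:
    """Divide a Deployment's replica count across member clusters as
    evenly as possible, handing any remainder to the earlier clusters.
    A toy sketch of the splitter pattern, not a KCP API."""
    base, remainder = divmod(total, len(clusters))
    return {name: base + (1 if i < remainder else 0)
            for i, name in enumerate(clusters)}

print(split_replicas(5, ["cluster-east", "cluster-west"]))
# {'cluster-east': 3, 'cluster-west': 2}
```

The splitter would then write each per-cluster replica count into the copy of the Deployment it syncs to that cluster, which is what makes the split transparent to the user who only ever sees the original object.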
And there's another place which is also very interesting, which is HyperShift: basically this pattern of decoupling the control plane and the workers, or the management and the workers, and deploying OpenShift in a much more centralized, cheaper, faster way. But again, as I said, it is complementary to the existing pattern we have today. It just gives users the option of that externalized control plane pattern to save costs and do all these things. There's a GitHub repo there, and contributions are very welcome. We don't have a Slack channel, unfortunately, but that's another place I would point people to. And if you have questions about HyperShift itself, in the Kubernetes Slack there are #openshift-dev and #openshift-users channels that you can pop into and ask questions as well. Have you seen this pattern at all in production, or is it still a theoretical POC kind of thing? So, on the KCP side, KCP itself is very unique in certain aspects. It gives you a minimalistic API server; as I said, if I look at the cluster, I immediately recognize it's KCP, and I don't see this anywhere else. And you have that transparent multi-cluster use case and the stronger multi-tenancy, where you can deploy resources and they get translated. There are efforts like Federation and so on that tackle this, but not from the same angle as KCP, with its stronger focus on multi-cluster and transparency, or lossless multi-cluster. On the HyperShift side, as I said, that pattern is not new. There are many providers that separate the control plane and the workers. With HyperShift, we're bringing that pattern, and all the goodies and benefits of that pattern, to OpenShift, so you can have OpenShift clusters following it.
So I would say it's not new, but it's new with OpenShift, because you get the bonus of features and it covers more use cases. And you can then mix and match, like protocols in the OSI layers, but with the OpenShift interconnection model instead. You have all these layers and stacks, and whatever use case you have, we have the luxury of saying you can pick one block with another. And that luxury is strengthened by the ability to provide these blocks in the first place. Well, if people want to get a hold of you, do you have a final slide there with your contact information, or how should we follow up? Yeah, I could add that, but my handle is zanetworker, so you can follow me on Twitter or reach out to me on Slack, and I'll add it to the slide deck. Perfect. All right. Well, first of all, thank you very much for taking the time to do this today. I know we're all really busy with the 4.8 release and everything else going out the door in the next few months, so it really helps to set the playing field for where these use cases fit and how the different pieces and parts of this stack work together. So thank you very much for taking the time today. I don't see any other questions in the chat, so I'm going to give people a few seconds here before I close it out. And Michael, thank you for your question. If you have other ones, just reach out and ask us in the Slack, where we'll be hanging out, or on Twitter, but it's much better to have a threaded conversation in the Slack channel, I think, these days. I'm not seeing any other questions coming in. So Adele, I'm going to give you a huge shout out on the Internet later today, and we'll upload this video. Thanks to Chris Short for making the production happen today. We'll call it a wrap, and we'll have you back with each new release, I think, to tell us how this goes.
And I'll share this with the KCP prototype channel once it's up too, because I think that'll be a good place for people to give you feedback. Thank you for hosting me. It was really fun, and I'll be back shortly with another topic. All right. Take care, everyone. Thank you.