Hello everybody, and welcome to another OpenShift Commons briefing. As we like to do on Mondays, we're going to talk about upstream projects and new ideas and new technologies. Today we're really happy to have Adele Zaluk with us, who is a product manager in the OpenShift group at Red Hat, and he's going to talk about some emerging multi-cluster patterns. You've probably heard about Kubernetes control planes and other things along that line that have been discussed in different community groups, but today is Adele's take on it, and I'm really looking forward to it. So Adele, introduce yourself and your background a little bit, and then take us down this path.

Sure. Hi everyone. I'm Adele Zaluk, and I'm a product manager for OpenShift. My experience is a mixture of networking, consulting, development and research, and recently product management. Today I'm going to be talking about multi-cluster patterns. As you see here, the subtitle is a bit confusing, but I'm going to explain it along the way: there's a path to virtual, dualistic, logically centralized, physically distributed clusters.

I would like to start by having us look at this figure. Given my background, I come from networking, so the first thing I thought of doing was mapping the stack that we have with OpenShift onto something like an OSI model, and that turned out to be the OpenShift interconnection model. That does not exist anywhere; it is something I came up with, so I'm sorry for the OSI folks. The reason I did that is that OSI is a representation of layers and what they do, and then there are protocols that interconnect with one another at different layers of the stack.
And there are even specialized stacks, like the TCP/IP model, where, for example, you could have an application that runs on UDP or an application that runs on TCP, and then there's IP at layer three, and then there's a physical layer that everything runs on top of. I think of OpenShift, or the stack that we provide, as similar. Red Hat has been historically known for Red Hat Linux; that's the basis we build everything on top of. And then OpenShift is an addition that brings all the goodies of upstream Kubernetes to customers and to you, and this comes in different shapes, forms, and layers.

If we think about Kubernetes, you'll find a lot of cluster interfaces being defined upstream. Initially Kubernetes didn't have that, but as time goes on, more standards get defined: the Cluster API, which deals with how machines get created at the cluster level on any infrastructure provider; CNI, which deals with the networking of the cluster; the Container Runtime Interface, which deals with what you use as a runtime to run your workloads, whether that is a normal or sandboxed or any other type of runtime that you choose; or the CSI layer, which basically consists of storage plugins, and so on. I don't have time to talk about each of these layers in detail, but I can tell you that with Kubernetes there are a lot of these, and each layer could span an entire session on its own. With OpenShift, we bring that with support and add on top a lot of layers that help the usability of these things.

When we go a layer up, we have modes of operation for OpenShift clusters. We have, for example, self-managed or managed; you can run your clusters in a connected or disconnected mode; you have a standalone version; and you have an external control plane, which is what we're going to explain a bit about today. That is more an architectural pattern, but it is still bringing OpenShift.
And so that choice brings you more and more use cases. Then we go a layer up, and you have multi-cluster management and orchestration. There's a lot happening here, and I'm sure I forgot to add many things: things like image registries, GitOps, pipelines; all of these fit as blocks. The nice thing is we provide these as choices. We have the luxury to say "it depends on the use case," because we have these blocks that can interconnect with one another in any way, shape, or form. And this is, in my opinion, the real value: we provide these building blocks, and you come and look at them and say, oh, this makes sense, I have this use case and I would like to apply, for example, policy, or run multi-cluster with a disconnected cluster, or run it with an externalized control plane. So you can use it the same way the OSI model is built, with networking protocols that match and run on top of one another.

In this session today I'm going to be focusing on only three blocks spanning two layers. So let's start. As I said, the term was a bit complicated: virtual, logically centralized, dualistic, physically distributed. I'm going to try to explain what I mean by each of these, so bear with me.

For the first layer, I'm going to start from the top and take a small part of the multi-cluster management story, not all the blocks, and talk about KCP. KCP to me represents these two blocks on the top. It could represent more, but I'm going to be talking only about these two blocks today. If you look at the GitHub repo for KCP, you're going to find it defined as a minimal Kubernetes API server.
It exposes just enough resources, and it extends or makes the API server pluggable enough that you can also define, or get rid of, the resources that you don't need. In addition, if you look at the documentation, you're going to find three major use cases that KCP tries to address.

The first one is that minimalistic API server. If you look at the Kubernetes API server, you're going to find a ton of resources, and you don't necessarily need all of them. So it strips out everything that isn't needed, keeps only the resources that are needed, and provides you with an interface that you can interact with without all the overhead of Kubernetes components: the kube-controller-manager, or any of the things that deal with pods or deployments and so on. In KCP you could choose to not even have pods as an understandable resource.

The second is more about multi-tenancy. The way multi-tenancy has been presented so far was with the use of RBAC and namespaces. But the question that gets asked is: what if we can take that a layer up and make each cluster represent a tenant, for example, and then orchestrate multi-tenancy on cluster-level primitives instead of namespaces? That presents a stronger isolation bubble, which might be appealing to some.

And the third one is transparent multi-cluster. The argument here is that whatever you want to apply to a cluster should also work with KCP. KCP presents one layer on top; you could attach multiple child clusters to KCP, so whatever gets deployed at the KCP layer should not be a problem to propagate to the child clusters. That's basically transparency. I also like to call it lossless multi-cluster, because whatever you apply on the top layer doesn't lose value or entropy when it gets translated onto these child clusters.
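To make the "lossless" idea concrete, here is a minimal Python sketch, not actual KCP code (all names here are invented for illustration): a resource applied at the virtual layer is copied verbatim to every child cluster, with nothing rewritten or dropped along the way.

```python
import copy

def propagate(resource, child_clusters):
    """Copy a resource verbatim into each child cluster's local store."""
    name = resource["metadata"]["name"]
    for store in child_clusters.values():
        # Deep-copy so a child mutating its copy never changes the original:
        # the resource keeps full fidelity ("loses no entropy") in transit.
        store[name] = copy.deepcopy(resource)

children = {"mc1": {}, "mc2": {}}
app = {"metadata": {"name": "my-app"}, "spec": {"replicas": 3}}
propagate(app, children)
print(children["mc1"]["my-app"] == app)  # True: identical on every child
```

The point of the sketch is only the invariant: what lands on each child is exactly what was applied on top.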
There is also a Slack channel for KCP if you want to get more details. I think Clayton, Jason, and David have talked about a lot of the topics that I just mentioned here, and they go into the details of how things could look in the future. So if you're interested, go and join that Slack channel on the Kubernetes Slack and join the discussions. They also have community meetings, so if you have any questions, just go and join.

The second part that I would like to talk about today is HyperShift. I'm not going to get into the weeds of HyperShift, but I want to present it because I'm going to present a use case that is actually more generic and can be reused by other projects; it's a pattern more than a product that I'm presenting. HyperShift deploys OpenShift, but it deploys it with a slightly different architectural pattern, and that's the dualism here. In philosophy, dualism is the idea that the mind and the body could reside in different places yet still function. So could we do that? Can we take the mind of OpenShift out, put it somewhere else, and still have a functioning cluster? That is basically the question that we ask in HyperShift, and yes, it is possible. You could take the control plane, the logic of that cluster, and deploy it somewhere else.

Not only that, but you could also centralize it. You could have multiple minds of different people working together under the same body, and we call that a management cluster. You can have one management cluster that hosts the logic, the control planes, of all these clusters in the same place. And the physical distribution is that you could have your nodes physically distributed across regions, across zones, across cloud providers; it doesn't really matter.
So you get the virtual part, the dualistic part, and the logical centralization by centralizing the control planes of different clusters, separating that from the normal way of deploying. Sorry, did I get a question? No, go on, it was just someone unmuting something. Okay, no worries, I thought someone was asking a question.

So with normal OpenShift you have your nodes, and you require a certain minimum number of nodes, which in most cases need to be co-located: the control plane with the workers. In HyperShift we're removing that requirement, and potentially we could host more than one logic, more than one mind, more than one control plane on the same node, scale that up or down, and use all the Kubernetes primitives to do so. The HyperShift repo is upstream and it's open source; you can go to GitHub, raise issues, and try it out. It's still in the very early phases, so it's a good time to ask questions and challenge stuff.

Why would we want a dualistic or separate control plane? I've included here a set of advantages and features. One thing is that we are not getting rid of, but complementing, the requirement that some customers and users want: that co-location. They don't want dualism; they say, no, I want my mind and body in the same place. Others see the benefits of having them separated. So with OpenShift we provide the two flavors. When we deploy with HyperShift, you get immediate clusters, because you don't have to comply with a requirement of a minimum number of nodes to get a cluster; you are running on an existing cluster and you're hosting just the control planes as pods, so you get kind of immediate clusters. The control planes are cheaper, because you can now host multiple of these control planes instead of one control plane per three nodes.
You can host multiple control planes on potentially one node. We're using Kubernetes to host the Kubernetes control plane, so it's Kubernetes on Kubernetes: I could scale up the pods and so on. And by the way, this pattern of dualism, of control plane and workers separated, is not new; you will find cloud providers and so on using it. But this here is bringing OpenShift: it applies that pattern to OpenShift and brings you that entire stack that I showed you, that choice, in that architectural pattern. So it is important to understand that it's not new, but it gives you all these features with OpenShift.

You also get lifecycle decoupling, because you could upgrade, for example, the management cluster without affecting the workload clusters or the physical nodes. They could even be on different versions; they could even be on different architectures. You could also have your SREs, instead of having to memorize the kubeconfigs of hundreds of clusters, work with just one kubeconfig, and then they can log in, or have access if you allow them, to debug the control plane of your cluster. Or if you are the one providing the clusters, then your SREs have that benefit of surfing across control planes and easily detecting a problem, because observability becomes easier, logging becomes easier, and all these things become easier. Not to say that it's not otherwise possible; it is definitely possible with multi-cluster management, the other blocks that I didn't talk about. You could also have multi-cluster centralized logging and centralized monitoring; it's just that the footprint here might be a bit lower.

Yeah, cool. So now comes the part about the use case. I'm sure that's not a one-to-one relationship; I'm sure it's not only HyperShift that loves KCP, and I'm sure KCP has a lot of other use cases. As I said, HyperShift is just a pattern that I'm presenting today.
Lots of other controllers could reuse that pattern, or maybe build upon that pattern, or do completely different patterns. KCP could love HyperShift, HyperShift could love KCP, and other things love KCP; today we're going to talk about that relationship of HyperShift and KCP.

Okay, so from a higher level, the use case that I would like to present today is: can we have KCP at the top (although it doesn't always need to be the top layer) and orchestrate how we want to schedule clusters? If you remember the figure on the right here (I know the text is not visible), the block that I'm highlighting was Kubernetes-native clusters. By that I mean we want to apply all the Kubernetes concepts, but at a cluster layer. So, scheduling: I have a pod, I have the kube-scheduler, and the kube-scheduler schedules the pod. Can I apply that to a cluster? Can I apply autoscaling to a cluster? Can I apply a lease, like leader election? All these primitives that exist with Kubernetes, we want to take them a notch up and do them multi-cluster, with a virtual pane of glass, which is KCP.

So KCP would act as the virtual interface to multiple management clusters. As I said, in HyperShift the management cluster is the place where you host the control planes of your clusters. In that case I have multiple management clusters, and these management clusters would act as the child clusters. KCP is then orchestrating the placement, for example, or the scheduling of clusters to any of these management clusters, so that they can act upon it and create clusters the HyperShift way, which is by separating the control plane and the workers.

Now, a bit of an overview of KCP's internals, because that is needed. KCP consists mainly of a KCP server, an API server (that's the minimalistic API server), and optionally a cluster controller.
The KCP server could live and survive without the cluster controller, but when you add the cluster controller you get a lot of benefits, which is the multi-cluster or Kubernetes-native cluster stuff that I talked about. That cluster controller at the moment has three components. There's a splitter, which, when you create a resource like a deployment, takes care of making sure it's scheduled, or divides that resource; for example, if I create a deployment with replicas, it could replicate it across different child clusters in a load-balanced way. There's the CRD puller: whenever I define a CRD in one of the child clusters, it gets pulled up to the virtual cluster, so it builds awareness of that CRD. And the syncer is basically an agent that lives on the child cluster to replicate components that I create. I will go into more detail on each, and today I'm going to be extending the syncer and the cluster controller a bit to match the use case that I have in mind.

All right, so as I said, that's another view of it. The syncer (the reason it is slashed is that it does not live on, or communicate directly to, the virtual cluster) is in the child cluster. So you have the virtual cluster, or the control center; I call it the control center. It's not really called that, but I call it this for simplicity. Then, with HyperShift, you have the management clusters, and the syncer gets deployed to each management cluster and becomes the agent that has awareness of resources that get deployed on the control center, on KCP's virtual cluster. And the KCP virtual cluster is really lightweight; the API server plus etcd is now a single binary, and it's very easy to run, as I will show later in the demo.

So the first thing that we want to do: HyperShift defines a cluster by a resource named HostedCluster.
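As a rough illustration of the splitter idea (my own sketch in Python, not the actual KCP implementation; the function name is invented), dividing a deployment's replica count across child clusters in a load-balanced way could look like this:

```python
def split_replicas(total, clusters):
    """Divide a replica count across child clusters as evenly as possible."""
    base, extra = divmod(total, len(clusters))
    # The first `extra` clusters each absorb one replica of the remainder.
    return {c: base + (1 if i < extra else 0) for i, c in enumerate(clusters)}

print(split_replicas(5, ["mc1", "mc2"]))  # {'mc1': 3, 'mc2': 2}
```

The real splitter works against the Kubernetes API, but the balancing decision it has to make is essentially this arithmetic.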
So if I'm a user and I create a HostedCluster, I would like, for example, KCP to use the splitter pattern to schedule the HostedCluster and place it on the cluster that has more resources. In that case, a user creates a cluster, and the splitter watches: huh, let's see, management cluster one doesn't have any resources, management cluster two does, so I could create it there. This is what I call, or what is usually called, push mode. Why? Because the splitter needs to talk to the management clusters, and the splitter has the awareness of the budgeting of resources. In that case the splitter more or less becomes the scheduler.

On the other hand, you could have pull mode, and that is useful in cases where, let's say, I don't have full connectivity between the control center and the management cluster, and I just want awareness to be one way. In that case, the syncer could watch resources getting created on the virtual cluster and replicate them locally. So the HostedCluster becomes virtual; this is why it is called a virtual hosted cluster here. And then that resource gets replicated locally to the HyperShift operator, which is an operator that takes a HostedCluster and starts creating clusters and namespaces for each cluster. That is pull mode, because it's pulling the resource instead of it being pushed from above.

Now, can I use pull mode to schedule resources across more than one cluster? That's a question I asked myself, and the answer is obviously... well, it was not so obvious for me, because I have not coded for a long time. But when I was coding this, I realized: you have a resource that gets deployed on the control center, the virtual cluster, and it is being watched by two management clusters at the same time. So the queue for each has that resource, and it will be created no matter what. You don't have time to decide or schedule anything; the syncers are going to watch and create at the same time.
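The race described above can be sketched in a few lines of Python (a simulation with invented names, not the real syncer): with naive pull mode, every syncer that watches the virtual cluster replicates the resource, so you end up with one copy per management cluster.

```python
# One virtual hosted-cluster resource sitting on the control center.
virtual_cluster = [{"kind": "HostedCluster", "name": "dummy-1"}]

def naive_syncer_pull(local_store, virtual):
    """Replicate everything seen on the virtual cluster, unconditionally."""
    for res in virtual:
        local_store[res["name"]] = dict(res)

mc1, mc2 = {}, {}
naive_syncer_pull(mc1, virtual_cluster)  # both syncers watch the same resource...
naive_syncer_pull(mc2, virtual_cluster)
print("dummy-1" in mc1 and "dummy-1" in mc2)  # True: created in both places
```

That duplication is exactly why plain pull mode gives you replication everywhere rather than a single scheduling target.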
So plain pull mode is not useful for scheduling to one versus the other management cluster. It is, however, useful if you're thinking about HA, where you want to make sure that you have the same resource in more than one place; that's a good pattern if you want high availability in any use case.

And then there's a mixture of these two approaches, and that's basically the way I used it. I used the syncer as an informant of sorts. It watches the resources on the management cluster it is deployed on, and it tells the controller manager, or talks to KCP, and says: hey, the budget for this cluster is, for example, seven. KCP has its own cluster resource, so the update actually happens there. On the other side, management cluster two uses the same pattern and says: I have this budget, I have eight namespaces. Then the cluster controller looks at the budget of each; locally, because it has access to the local resources, it looks at these and finds out which cluster has more budget so that it can create resources, and then it makes a decision. You see the loop happening here, and the decision will say: okay, seven is less than eight; management cluster one has fewer namespaces, meaning it has fewer clusters, because in HyperShift a cluster gets a namespace. So it decides to assign management cluster one, for example, as the owner of the cluster resource, the virtual hosted cluster resource that got deployed. The syncer again watches and finds that its name got assigned to that resource.

This is similar to the way the scheduler works with a pod: when a pod gets deployed, the node tells the scheduler "I could host that," and the scheduler takes care of placing that pod on those resources. It's the same way here; the syncer tells the controller manager: I have more resources to schedule that resource.
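Here is a small Python sketch of that loop (my own simplification with invented names, not KCP's code): syncers report namespace counts as budgets, the controller picks the cluster with the fewest namespaces as owner, and each syncer then replicates only the resources assigned to it.

```python
def assign_owner(budgets):
    """Pick the cluster with the fewest namespaces (fewest hosted clusters)."""
    return min(budgets, key=budgets.get)

def syncer_pull_owned(my_name, local_store, virtual):
    """Pull mode, but only replicate resources whose owner matches us."""
    for res in virtual:
        if res.get("owner") == my_name:
            local_store[res["name"]] = dict(res)

budgets = {"management-cluster-1": 7, "management-cluster-2": 8}
resource = {"kind": "HostedCluster", "name": "dummy-1"}
resource["owner"] = assign_owner(budgets)  # the controller sets the owner

mc1, mc2 = {}, {}
syncer_pull_owned("management-cluster-1", mc1, [resource])
syncer_pull_owned("management-cluster-2", mc2, [resource])
print(resource["owner"], "dummy-1" in mc1, "dummy-1" in mc2)
# management-cluster-1 True False
```

The owner filter is the one-line change that turns pull-everywhere replication into a scheduling decision.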
The controller manager decides to assign management cluster one as the owner of that resource, and then the syncer simply does its job and replicates that resource locally. From there on, normal operations happen: the HyperShift operator takes that HostedCluster resource that got deployed in that cluster and does its job of creating namespaces. So in that sense it's more or less similar to how Kubernetes does things. You could even take that approach and shard it a bit: you could have one syncer that covers a region, for example, and the resources get deployed there. Now, I haven't tried that, and I think it's worth a lot of discussion, but this is just an example of one pattern that gets enabled. There are a lot of other patterns, because KCP gives you that ability with the controller manager plus the minimalistic API server; it pulls CRDs and it is lightweight, so more of these get enabled. This is the scheduling part, and this is what will be the topic of my demo.

There's another use case which is interesting as well, which is autoscaling. I said we want to look at clusters the same way we look at Kubernetes resources; so can I actually autoscale actual clusters? The answer is yes. For example, let's go down the HostedCluster path again: a user creates a HostedCluster, but the management cluster that exists (there's only one management cluster) doesn't have resources. So it informs KCP, or the controller manager, which then asks to create, using HyperShift patterns, a hosted cluster. The difference here is that a HyperShift operator would be deployed on the upper layer, the control center, and then takes care of autoscaling and creating a new management cluster, which brings us back to the original use case, because now I have two management clusters.
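A hedged sketch of that autoscaling decision (invented names and a made-up capacity threshold; in reality the scale-out goes through the HyperShift operator on the control center): if no management cluster has budget left, create a new one, otherwise fall back to normal scheduling.

```python
CAPACITY = 10  # hypothetical max hosted clusters per management cluster

def schedule_or_scale(budgets, capacity=CAPACITY):
    """Assign to the emptiest management cluster, or scale out a new one."""
    candidates = {c: n for c, n in budgets.items() if n < capacity}
    if not candidates:
        # Nobody has room: create a fresh management cluster (in reality,
        # by submitting a HostedCluster for it), starting with empty budget.
        new = "management-cluster-%d" % (len(budgets) + 1)
        budgets[new] = 0
        return ("scaled-out", new)
    return ("assigned", min(candidates, key=candidates.get))

print(schedule_or_scale({"management-cluster-1": 10}))
# ('scaled-out', 'management-cluster-2')
```

Once the new management cluster exists and reports a budget of zero, the next HostedCluster naturally lands on it via the same owner-assignment loop.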
I create a HostedCluster resource, these two management clusters report their budgets, the controller manager acts as the scheduler and assigns one of them, and then that one picks up the resource and deploys it. I know there are many things that might be unclear at the moment; hopefully the demo will clear some of it up. But yeah, I think that's basically the conceptual part. Then we could jump to the demo, unless maybe we should see if anyone has questions before we get into it.

Yeah, let's see if anyone has any questions. There's a bunch of people in the chat right now and I haven't seen anyone post a question yet, so let's give them a minute if folks have one. Let's see... There's a lot in what you've just said, so I'm thinking that maybe a demo might be a really great idea now. Yeah, okay. So why don't you power through that and let's see if we can explain a little bit. Sure. Let me try to re-share my screen, and let me know if you see it. We can see it, and I think the font is pretty good; I can read it pretty clearly here, so you're good to go.

Cool. Okay. So I have tried to label the panes according to the architecture that I just talked about. You have the KCP server, then you have the controller manager, and then you have the control center, that's the virtual cluster that we're going to be talking through. And then we have management cluster one and management cluster two, which are simply kind clusters; you see here management cluster one and management cluster two. I am currently pointed at the KCP cluster, so this is the API server and the controller manager. If I get the API resources, this is something unique that you'll not find anywhere else: only this set of resources. This tells you you're in the KCP-pointed cluster. For example, there are no endpoints, there are no endpointslices.
Many things are not there; only what is really needed as a basis, plus CRDs. And from the CRDs it is important to see the HyperShift one here, the CRD that represents that HostedCluster resource that we have talked about. Additionally, there is one namespace, which is also hypershift, where the HyperShift resources get created. So that's one thing. The other thing here is the controller manager, and the controller manager is watching for stuff; as you can see here, it's reporting budgets for the two clusters. I'm going to explain that now. But before that, let me do something I have not done yet, to make sure that we're seeing the most recent logs: let's delete the syncer from the clusters. And by the way, k is just kubectl, an alias, in case I'm lazy. All right. So in management cluster one we're going to find that there was a new syncer deployed, and management cluster two also has a new syncer.

All right. Now one thing that we need to clear up is the clusters. So what I have... oh, you'll not see it from there, okay, because I'm sharing. These are the resources that I will create. The first one is called Cluster; this is a KCP-specific resource, and it gets deployed on the control center. It is already deployed, by the way; I'm just showing it, and the secrets here are fine because these are kind clusters, and I'm going to delete them after the demo. You see kind management two and kind management one; these are the two clusters. What I'm saying here is basically telling KCP: hey, ingest these, I'm defining these two clusters so you are aware of them. That's what this means. Cool. And later on I will be creating a HostedCluster, and these are just dummies; for example, if I look at the HostedCluster resource, the cluster W1, you're going to find everything is just test, test, test.
But this would be enough to demonstrate the idea. Okay, cool. So back to our control view. Let's look at the logs of the syncers. Here it's telling me that it is aware of a HostedCluster and a Cluster resource, and it is setting up informers (that's the controller pattern) on HostedClusters and Clusters, for both the virtual cluster and the local cluster. And then it said "updated budget." What does this mean? It means that it told the virtual control center how many namespaces it has; that's the definition of a budget here. So here I have, let's see, nine, and if I remove the headers, eight. I have eight namespaces in that cluster. And if I repeat the same command on the other one, I also have eight namespaces. So the budget should be eight. Okay: management one has eight; the budget is eight for management one, eight namespaces, meaning theoretically I have eight clusters; let's think of it this way. And management two also has eight clusters. So both of them now have equal resources, and when I create a HostedCluster resource, I could basically choose either.

Right? So when I create a HostedCluster resource, what I need on the management clusters is an actual controller, because on the virtual cluster layer I don't have a controller; I literally don't have any pods. No actual control happens on the virtual cluster; in this case it is more of a proxy. The actual controller will be in the hypershift namespace; that's where the HyperShift operator lives and acts on resources. So now let me go ahead and create the HostedCluster resource, one of those dummy resources I told you about. Let's create that, and I'm creating it from the control center; I'm pointing my kubectl at the kubeconfig of the virtual cluster. So: sample, hosted cluster, dummy one.
So that got created. Let's see what happened. Okay, so we see the controller manager was aware that the budget became nine; that means one of these two clusters scheduled the HyperShift resource. In that case, let's get that name. Let's first look at the logs of the syncer. What did it say? The syncer checked: does it have the owner annotation? It did not have the owner annotation yet, but it had a cluster ID, and it updated its budget once it saw its name on the cluster resource. That's something I forgot to show, sorry. Let's see. So the cluster controller manager will update the HostedCluster resource with the owner: first it picks up the budgets, then it sees which one has less budget (and in that case both were equal), and it updates the actual resource that is to be deployed. Let's check on the KCP side. So the owner got updated to be management two, not management one. So I should not find the namespace here; I should find it on management two. See, there was no cluster namespace here, only the hypershift namespace. But here, if I look at the namespaces, I should expect... yes. There was an additional namespace that got created, and that should represent the cluster. And if I look at the syncer logs, it says it looks like it is the cluster owner and will be provisioning in a second. So it recognized that it is the owner of this by watching the HostedCluster resource, and it started replicating that to the local cluster for the HyperShift operator to act upon.

So yeah, look at the namespaces, as I said: hypershift, example. And in a beautiful world this would be full HyperShift, but here I'm not planning to demo HyperShift itself. The namespace here is empty because I literally didn't define anything in the resource, so nothing got scheduled, right? But it provisioned the namespace, which represents a cluster.
And usually, if we were demoing HyperShift, that cluster, the namespace, would contain the control plane components. So yeah. Now, if I create a new resource... it chose management two before because management one and two had equal budgets, they both had eight namespaces, so it chose randomly. Now I create another one, hosted cluster dummy. I would expect that it gets deployed to management one, because one now has fewer namespaces, meaning fewer clusters, so it can accommodate more resources. Let's check again... nothing yet; these are all three minutes old. Check again: 18 seconds. So the cluster got scheduled to the management cluster that had fewer namespaces and thus more resources.

That basically shows that with minimal effort I was able to apply scheduling mechanisms and scheduling primitives at the cluster layer. And I could do a lot more: I could do autoscaling, I could do basically anything then. And as I said, the relationship between KCP and HyperShift is not one-to-one, and the reason I haven't shown anything deep inside HyperShift is that this is more of a pattern: any controller could literally use the same thing that I did here. You could apply, for example, some other resource that follows the same pattern and gets scheduled to the cluster that has the controller in the back end. So: scheduling at a cluster layer. And yeah, that's basically the demo. So I think now we can take questions.

All right, well, we have one question: Michael is asking if we can leverage KCP to write a splitter to split a Kubernetes app/service deployment across two distinct Kubernetes clusters. Yeah, so as I said, KCP is acting as a proxy, and there is something like the splitter pattern here. So let me... yeah, the splitter pattern: if we look at the repo, the splitter looks at the deployment, for example, looks at the replicas, and it has awareness of the clusters that it ingested.
So it could separate, for example, a service or a deployment across two different clusters. So that is also possible. There is not 100% support for everything right now, because KCP at this point in time is a prototype, but it's very extensible to match the use case you just described. All right. Well, let's see. I think that answered his question, and I'm going to see if anyone else has any questions here. Give everybody a few minutes. All clear here. Okay, it's all clear on all the other ones. I think what's really interesting to me about this whole use case that you're describing is the applicability to so many other use cases. And I know we're Red Hat and we're all OpenShifters, and so HyperShift is in our bailiwick, but it really bodes well, I think, for the concepts behind KCP and applying them across the board, regardless of what the use case is. So the slide that you had earlier with how to get in touch with the KCP community, I think that's probably where you want people to go if they want to continue the conversation, the KCP prototype one. Or is there another place where you would like people to reach out to you and talk to you about this topic? Yeah. So there are two things that I briefly talked about here. If you look at that layered architecture, the first thing is the KCP bit, which is the multi-cluster bit, and for that you can go and talk to folks like Clayton, David, and Jason about the use cases. They're discussing that every day, or every week, sorry. And there's another place which is also very interesting, which is HyperShift, which is basically this pattern of decoupling the control plane and the workers, or the management and the workers, and deploying OpenShift in a much more centralized, cheaper, faster way. But again, as I said, it is complementary to the existing pattern that we have today.
It just gives users the option of that externalized control plane pattern to save costs and to do all these things. There's a GitHub repo there, and contributions are very welcome. We don't have a Slack channel, unfortunately, but that's another place I would point people to. And if you have questions about HyperShift itself, in the Kubernetes Slack there are OpenShift dev and OpenShift user channels that you can pop into and ask questions as well. So this is really... Have you seen this pattern at all in production, or is it still a theoretical POC kind of thing? So KCP itself is unique. If I talk about KCP and HyperShift, KCP itself is very unique in certain aspects of what it tries to do: you have a minimalistic API server. As I said, if I look at the cluster, I immediately recognize it's KCP; I don't see this anywhere else. And you have that transparent multi-cluster use case and the stronger multi-tenancy, where you can deploy resources and you get them translated. There are efforts like Federation and so on that tackle that, but not from the same angle; KCP has the stronger focus on multi-cluster and transparency, or lossless multi-cluster. On the HyperShift side, as I said, that pattern is not new. There are many providers that separate the control plane and the workers, and with HyperShift we're bringing that pattern, all the goodies and benefits of it, to OpenShift. So you could have OpenShift clusters following that pattern. So I would say it's not new, but it's new with OpenShift, because you get the bonus of features, and then it covers more use cases, and you can then mix and match, like protocols in the OSI layers, but with the OpenShift interconnection model instead.
And you have all these layers and stacks, and whatever use case you have, we have the luxury to say you could pick one block with the other, and that luxury is strengthened by the ability to provide these blocks in the first place. Well, if people want to get hold of you, do you have a final slide there with your contact information, or how should we follow up? I could add that, but my handle is "then it's worker". You could follow me on Twitter or you could reach me on Slack, but I could add it to the slide deck and... Perfect. All right. Well, first of all, thank you very much for taking the time to do this today. I know you're all really busy with the 4.8 release and everything else that's going out the door in the next few months. So it really helps to set the playing field here for where these use cases fit and how the different pieces and parts of this stack work together. So thank you very much for taking the time today. And I don't see any other questions in the chat, so I'm going to give people a few seconds here before I close it out. And Michael, thank you for your question. If you have other ones, just reach out and ask us in Slack, and we'll be hanging out there, or on Twitter, where we also hang out too, but it's much better to have a threaded conversation in the Slack channel, I think, these days. I'm not seeing any other questions coming in, so Adele, I'm going to give you a huge shout-out on the Internet later today, and we'll upload this video. And thanks to Chris Short for making the production happen today. And we'll call it a wrap, and we'll have you back with each new release, I think, to tell us how this goes. And I'll share this with the KCP prototype channel once it's up too, so I think that'll be a good place for people to give you feedback. So thanks again. Thank you for hosting me. It was really fun. And I will be back shortly with another topic. So, in a second.
Yeah, a second is soon. All right. Take care, guys. Thank you.