Hello, and welcome to The Cloud Multiplier. Today is our first episode, so we're going to have a bit more preamble than usual and introduce everyone who's here with us. But first, it's probably a good occasion to talk a little about this new live show presented by Red Hat. We're going to be talking about all things distributed cloud: on-premises to edge, multi-cloud, multi-cluster, multi-tenant, multi-everything. It seems to have been a theme in the industry ever since I started. It's my honor to be joined by Joydeep, my co-host, and by Cesar and Josh. We'll go around the room and introduce ourselves here in a minute.

I'll kick things off: I'm on the DevOps team for Red Hat Advanced Cluster Management. I've been working at Red Hat for about three years now, and I've been on the same team as Joydeep and Josh for a good bit of that, so I'm honored to have them join. I'm based out of Raleigh-Durham, North Carolina, and most of my hobbies are on the back wall here with the board games. You want to kick us off, Joydeep? Tell us a bit about yourself?

Yeah, thanks, Gurney. As Gurney said, I'm also with the Red Hat Advanced Cluster Management team. I deal mainly with observability, a little bit with how to scale the product, and with edge things—moving things to the edge. My passion is data: all the data that we collect, how to make sense of it, how to use it to make decisions. That's my mantra; that's the thing that keeps me alive. Having said that—Cesar, you want to take it over? Sure. I'm Cesar Wong.
I'm a developer on the HyperShift team. I've been working on OpenShift since version 3, and I've done all kinds of things, but mainly my focus has been on making life easier for OpenShift consumers: creating clusters, developing on clusters. Before working on HyperShift, I worked on Hive, which is another service for creating clusters, and I knew Gurney from there, and Josh as well, because we interacted on Hive. So yeah, that's me. Yeah, we made headaches for Cesar every day.

So true. My name is Josh Packer, and I'm an architect for Red Hat Advanced Cluster Management. I guess you could say my being in life here at Red Hat is to take all of those clusters that Cesar is creating—both the original standalone type that came out of Hive (and we'll get into that), and the ones that fall under a hosted control plane, or HyperShift, the brand name we're going to talk about today—and manage that fleet. It can be a fleet of clusters that are your own, a fleet of clusters that are managed, a hybrid fleet—you name it, we're in that ballpark. And, as Joydeep mentioned—I point up because he's above me, at least in the little Brady Bunch window I see—that includes the edge as well. So I'm excited to be here, and Gurney and Joydeep, I want to thank you for bringing us onto your inaugural episode. We have the honor of being a part of it, which is great. Yeah, thanks, guys.
Thanks for joining. They gave a bit of a teaser for today's episode—today is, of course, episode one. Joydeep and I will be joining you every week. Thank you to Joydeep for allowing me to bump him into co-hosting this show, to get a bit more experience on the show than just my meager three years. Today we're going to be talking about a cool piece of technology called HyperShift—something I plan to break and cause unbelievable issues with as soon as Cesar lets me loose on it.

But before we get into that: I have blatantly stolen something from Andrew Sullivan of the Ask an OpenShift Admin bunch—another show on Red Hat's live stream platforms, if anyone's interested—and we're going to do top-of-mind topics. Thanks to him for allowing me to steal that name. I've already talked to Joydeep a little, but I have not prepped Cesar or Josh, so we'll get true top-of-mind from them. The big topic today: this first live show was supposed to happen during Red Hat Summit, but now that we're two weeks out from Summit, we can talk about our favorite parts of it. Joydeep, do you want to hit us first? I bet it's something observability-based, right?

So, when we first spoke about this I had a totally different thing in mind, but since this is fluid, I'll tell you about edge. Over the weekend I was engrossed in trying to understand how the industrial edge works. Edge in telco, edge in finance, edge in manufacturing—they all look very different, right? I was focusing on industrial edge because—in terms of experience, as Gurney said, Gurney is three years into Red Hat; I worked for nine years in industrial automation and industrial process control. So that's close to my heart, and it relates in some shape or form to how Kubernetes does things.
What I was thinking about is this: in steel plants you have these huge, gigantic machines—blast furnaces, rolling mills, and the like—controlled by innumerable devices. Say you want to control the pressure at a valve, or a temperature: you have a controller behind that, and that controller is a closed-loop controller, sometimes called a PID controller. The main thing is that it's given a reference point, it monitors the output, and depending on the error it drives an action. And then there are supervisory systems that control the reference that's being set.

Now think about Kubernetes. Kubernetes has a desired state and an actual state, and if there's an error, it does something—which, unlike a PID controller, is not proportional to that error. But in PID control systems, which have been running in industry for at least 60 years, you do things based on the error. So that was my realization, and I've been ruminating on it. I'll continue to ruminate, and maybe someday we'll talk about things related to that. Yeah—so what you're saying is we can now officially go into different spaces with more and less tolerance for error, I think. API error rates start to hit a little harder when they're controlling industrial equipment, I guess. Yeah. That's awesome.

Let's see, we'll go around the corner. Josh, any highlights from Summit this year? You know, you said you were putting me on the spot. I was thinking, well, I could self-plug a little. We did do a recorded Summit demo on GitOps, and it actually involves the hosted control planes we're going to talk about today, and how you might do that as an infrastructure-as-code example. So I recommend you check it out if you look up my name.
You'll find it. I did it with Christian Hernandez, who runs the GitOps show—I think it's called GitOps Guide to the Galaxy—which is a show just like this one that we live stream on a weekly basis. We talk about hosted control planes in there. I would otherwise say, yeah, there were all kinds of new pieces that came out, and some good stuff on edge that I found interesting. That seems to be the next space we're moving toward: running these clusters not just as clusters for your applications, but in these more purpose-built edge scenarios. That seems to be where the market is leaning—about to explode.

Yeah, that is awesome. So you're telling me I'm going to get the AWS bill later this month? Yes. Awesome—the DevOps team is here to handle that for you, Josh. Cesar, what did you catch? You haven't said mine yet—no one has said my highlight yet. Your highlight? No, I'd say: watch Josh's demo.

Yeah, mine was definitely one no one said: MicroShift. That was the highlight for me. I am all for running a Raspberry Pi with some Kubernetes on it. I might consider switching out k3s—I have a friend who runs k3s in his home but is a lot more skilled up on the OpenShift side of things, so MicroShift might be great for him. He runs Home Assistant—I don't know if anyone here runs Home Assistant, but it's a really cool open-source project for all things smart home.

Okay, let's see. We've gotten some highlights in and some cool industry things in. The last segment that might mix into top-of-mind topics—one I want to start doing, and I've talked to some fellow Red Hatters about the idea—is this. I'm going to put Joydeep on the spot yet again (and I warned him): any open-source projects we should highlight today?
I think we have one coming up on the next live stream, so you might be able to plug that one first, but I don't know if you have a favorite. So, one of the things I'm looking at is the Loki stuff, which is very fascinating. Having run managed services in the past—having burnt my fingers, getting woken up in the middle of the night and so on—I have a particular take on how we should collect logs. Not just collecting all the logs and putting them somewhere; that helps in root-cause analysis, but when things are on fire, you need something more specific. So I'm looking at Loki with a great deal of interest, especially to see whether we can collect critical events from remote places, and also what we call metricized logs: getting key messages out of logs, converting them to metrics, and things like that. It's a fantastic way to make it easy to correlate metrics and logs. So maybe we'll talk about that someday too.

Yeah, it's interesting to me—I was going to say I hope that'll be a future topic. And feel free—I'll drop the card up here—if anyone has any topic suggestions, we have a contact email address. I put that together yesterday and made sure it worked; that's the due diligence we do here. But I think, Joydeep, once we've had a week or two of Cesar and other folks on to talk about how to get yourself into the big problem of having a couple thousand clusters—once you have a thousand clusters, you're really going to start caring about only getting back the logs you actually need. So if anyone has ideas, send them in. I did drop a link to Loki, because it's an incredibly interesting project.

All right, my highlight: my project was much less work-related, as is the energy
I'll probably bring to this show. I don't know if anyone here has toyed with really low-level emulation, but I've very much enjoyed video game console emulation for a good bit of my life. Recently I was reading that PCSX2, a PlayStation 2 emulator project, has gotten a facelift: they got some UI folks in the community to make changes to their UI, so they were able to deliver a much, much improved experience. That's the sort of stuff that's awesome to see. A good highlight, too, for Andrew Ronaldson, who made all the graphics for this show—he's a UX designer at Red Hat. That's always good to see, because Red Hat actually makes an open-source design system and language in PatternFly, which is incredible. So there's some reading; I'll send it to you, Joydeep, it'll be an interesting read. I've hopped into the community meetings, so—

And by the way, Gurney, the music he used in the latter part of the video: that was indeed "Smokey's Lounge" by TrackTribe. And it's open source. Yep—we did vet that; we made sure. And I told him free rein, and he hit jazz, which was perfect for us.

Okay, well, without further ado—I've made them wait long enough. I know Josh and Cesar are here with some infrastructure ready to show us. Let me give the quick primer—the few lines of very little research I did on HyperShift—so I can ask all the dumb questions. I guess my first question is: what is HyperShift? That's the best way I can ask it. What is it? So, who's going to share first?
I think it's you, Josh. Sure, I'll take the share, but I think we're definitely going to tag-team on this as we go. The first thing I'll say—like I say in all our other meetings, since Gurney works with me all the time—is that there are no dumb questions, ever. Any question you ask, there's definitely at least one other person thinking the exact same thing.

So let's start from the top: what is HyperShift? Cesar just mentioned this, and I lose track of this fact, but HyperShift itself is not a product coming out of Red Hat or anywhere else. It is literally a project—a project we're working on within the OpenShift space that brings us this separation of worker, or workload, and control plane. What you see up on the screen today is one of the forms of OpenShift you can get. There's standalone OpenShift—you'll hear us refer to that term—versus hosted OpenShift and hosted control planes. Standalone OpenShift comes in a number of flavors. One is multi-node, where you've got control plane nodes and worker nodes. Then you've got the one we have up here, the more compact size, where your master—sorry, your control plane—nodes and your worker nodes are combined, so it starts out as maybe a three-node cluster. And in the bare metal space we've released single-node OpenShift, or SNO (pronounced "snow"), where your control plane and your workload run on the same piece of hardware. HyperShift is a project that takes this the next step—but number one, it's there to solve a number of problems. Cesar? So basically, people came to us with problems, right?
Yeah—the current OpenShift solves a lot of problems, and it's great, but there are a few issues people came to us with. "It takes longer than I'd like for a cluster to come up." "A lot of the compute I'm dedicating to the cluster is being taken up by infrastructure and the control plane." "I have a large machine and I want to partition it into multiple clusters, but the current OpenShift requires separate machines." "I want workloads in the cloud but managed locally, or vice versa—I want my workloads inside my VPC but managed elsewhere." "I want more space for my cluster; the control plane is taking too much."

To answer these, we started working on HyperShift. HyperShift is the concept of moving the control plane into another cluster—what we call the management cluster—and freeing up the workers to run only workloads. When you look at this graphic, with the control plane and workloads in a standalone cluster, you start out three and three: half control plane, half workloads. With a HyperShift cluster, you're only using a little bit, and that's on the management cluster; most of the space is given to workloads—the actual workers run your workloads. The control plane is just a set of pods in a regular OpenShift cluster; the workers get initialized, connect to those pods, and become part of the cluster. And in this model, the management cluster can host many control planes, right?
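The split Cesar describes—a control plane that is just pods on a management cluster, with workers attached separately—is expressed through two custom resources that come up later in the demo: HostedCluster and NodePool. A rough sketch of the shape, with the API version and field names approximated from the HyperShift project, and every name and value hypothetical:

```yaml
# The hosted cluster: its control plane runs as pods in a namespace
# of the management cluster, not on dedicated master machines.
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example            # hypothetical cluster name
  namespace: clusters
spec:
  release:
    image: quay.io/openshift-release-dev/ocp-release:<version>  # elided
  platform:
    type: AWS
---
# A pool of workers that join that control plane; roughly the hosted
# equivalent of a MachineSet. Many NodePools can reference one
# HostedCluster.
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: example-workers
  namespace: clusters
spec:
  clusterName: example     # ties the pool to the HostedCluster above
  replicas: 3
  platform:
    type: AWS
    aws:
      instanceType: m5.large
```

The point of the shape is the separation: the NodePool manages workers without touching the control plane, and the control plane itself is just pods you can inspect on the management cluster.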
So you can have hundreds of control planes in a single management cluster, which saves you the separate compute for masters. It makes management more central, in the sense that you have one management cluster with all the pieces needed for the control planes, and the workers are free to run just the workloads.

Yeah—and I'm guessing it's probably namespace-bounded? So you have namespaces as the boxes for all of these control planes of running clusters? Yes—every control plane is located in a single namespace; all the pods that belong to a control plane go into one namespace of the management cluster. So debugging a bad kubelet or a bad etcd replica is as easy as looking at a pod on a cluster, rather than SSHing into my angry node on AWS? Correct, yes. Oh goodness—that opens a lot of possibilities for just deleting bad replicas and waiting for them to come back, right? Might not be a good idea...

And to piggyback on that: take, for instance, a physically distributed data plane. That's a set of nodes running, like Gurney said, in Amazon, and the control plane of that cluster—its kube-apiserver, its etcd, and so on—is running in a regular namespace on the management cluster, right? Correct, yes. Fantastic. Go ahead—I was going to say, the other question that always pops to mind: we have HA.
So I know HA usually has three nodes, and single-node has a single node. I assume the HyperShift control plane is an HA, three-replica deployment on the hosted—on the management—cluster? You can run it either way. If you're looking for compact and small, and you feel confident that if the node parts of that control plane were running on were to go down they would be able to come back up, and you had enough resources to back those pods on the remaining nodes, then you can run that way. The catch, obviously, is the RTO: there's the possibility of a short outage, say the API server disappearing for a moment. But you can also run it in HA mode—we call it DR mode—where it's done in triplicate, so you have three of each, just like you would on a normal physical set of control plane nodes.

And we actually did have a question, so maybe we'll jump in with that one. I was about to hop onto that—let's see, I have a show button... there we go, we can actually show it on screen: "Virtual worker nodes? So on bare metal, do I have to bother with Ignition configs for provisioning the actual, physical worker nodes?" So, Cesar, do you want to talk a little bit about— No, go ahead.
No, I was just going to say: when I see this, and we talk about being able to put something together on bare metal, as an example, without having other physical servers used as worker nodes, it gets into the KubeVirt space. I don't think we have a slide that specifically calls out the different platforms that are available—which means we definitely should add one—but today, at least, with support in development preview or tech preview depending on the platform, we've got AWS, we've got Azure support, and we've got KubeVirt support, which lets you create virtual worker nodes all contained in a single OpenShift cluster: the control plane and the virtual worker nodes running together. And the last one is escaping me now—Agent, that's right, for bare metal. So you can have the control plane running on bare metal and then have individual servers become nodes for that virtualized control plane.

Okay, so KubeVirt is probably your answer for those virtual worker nodes, then. Go ahead, Cesar. No, I was going to say: so far, that's how we let you partition these large machines. The question was about—I don't know—virtual kubelets, which are different from just virtual machines. That's an interesting thing, but we have not looked into it: having one single machine running everyone's workloads, with virtual kubelets representing nodes on different clusters. That's an interesting idea.
It's just that we haven't explored it yet. So, Cesar, what you're saying is: GCP support, then maybe vSphere, and then you'll look at letting me nest more? I mean, I'm going to try to go to KubeVirt and put a cluster inside of a cluster inside of a cluster, because I think if you can get three layers deep you can keep going from there, Inception-style. What's that saying? What you say on a live stream is now in stone forever. Hey, if you really want to... I dropped a link to the HyperShift community repo in chat, so that's another great place to follow up with Cesar and the team. Absolutely.

Yeah—so, spoilers, a little backroom talk for everyone who's live here: I put together a list of dumb questions to ask, and then these two came and said, "Well, we'd love to talk about this," and they happened to answer all of the questions in the correct order, which was shocking. My first questions were: what is it, how does it work, and who is it for. My next question, though, Cesar, Josh: where did the idea arise? Is this going to be practically used anywhere inside of Red Hat, or inside other enterprises? Is there a parent project that inspired this?

Yes. Hosted control planes are definitely not new, right? A lot of other companies offer hosted control planes: GKE and all the *KS providers do some form of hosted control plane, and SAP came out with Gardener, which has been a project for a long time. But we hadn't tackled this for OpenShift until IBM came to us. They had been offering OpenShift 3 as a hosted control plane way before—we had no idea IBM was doing this, but they were running, and actually offering to customers, managed control planes for OpenShift 3.
Well, OpenShift 3 was fairly straightforward: it had a single binary that ran everything—the kube-apiserver, controller manager, scheduler—so it was fairly simple to run this way. Once we put out OpenShift 4, things got interesting, because everything was controlled by operators. There were operators operating on each piece of the control plane, and there was a ton of moving parts. And IBM said, "We need Red Hat's help; we can't do this on our own." They came to us and said: the only way it makes sense for us to offer OpenShift is in this hosted control plane model—we can't offer it standalone. So how can we work together to make OpenShift something we can offer?

So we started a joint project—you can still see it in the OpenShift org: the IBM ROKS toolkit. Basically it was a collaboration with IBM to get to a point where we could run control planes in a hosted way. If you look at the ROKS toolkit, it's basically a CLI that renders a whole bunch of manifests for you—all the pods and deployments and everything required to create a control plane. IBM put the other piece around that, which was a lot of Ansible code to create machines, deploy these pods on management clusters, and set up networking, certs, and all kinds of good stuff.

That's how we started. We actually had a problem on the Red Hat side: we did not have access to IBM's internal environment. So, for us to be able to develop and test our portion of that toolkit,
we needed a way to run it on something like AWS or on bare metal. So I had a very small installer that would take the output of the CLI and actually run OpenShift on AWS. It was only out of pure necessity, for testing ourselves, but we showed it around, and people thought, "Hey—running it like this—why can't we run OpenShift like this?" And so the whole HyperShift project got started: converting that into something that would actually make sense as an OpenShift form factor.

Wow. And how long has this been running—about a year? I saw the first rumblings of it, I think, about a year ago, right? Probably a little more than that. The work with IBM started around version 4.3—one of the very first versions of OpenShift 4—so it's been a while. Probably at least three years that we've been looking at this. That's incredible.

We have a very relevant question—we have some awesome questions today—about the resource requirements for a three-master hosted control plane: it should be less than three regular master nodes? I'll put Josh on the spot here; he especially might know this, since we've been sizing our own internal deployment as I prepare to set my 120 developers loose on this to break it and cause some headaches. Josh, what are we looking like for the utilization?
Well, I didn't think we'd published anything quite yet, so I'm limited in what I'm allowed to say. What I will say is that when you virtualize the control plane, it obviously allows us to bin-pack pods more than you otherwise would on a controller that would be sitting, say, half empty. The numbers we get are starting to bring us in range with what other cloud providers offer for virtualized control planes, at least from a costing perspective of what it costs to host one.

And maybe I'll use that to semi-segue into the neat part about this: through MCE and ACM—MCE being multicluster engine, which you find inside of ACM, which is Advanced Cluster Management—this is the first time we've put this in users' hands. Starting with ACM 2.5, the administrators in your data center are able to take advantage of this technology and host your own control planes. You can choose to do that in a cloud provider, but also on-premise in your data center, and take advantage of that savings and that packing of pods. Otherwise you may have a control plane sitting 50% unutilized—especially in on-prem circumstances, where you're buying servers of a specific size, so if you weren't utilizing them, they weren't going to be filled up. Now, with virtual control planes, you have a bit more leeway as you build up to make sure you're fully utilizing them, just like with any virtualization technology.

And then I saw a whole bunch of questions pop in. A few more questions—first: since Josh isn't allowed to talk about our sizings yet, a little birdie tells me we'll be able to talk about that here in a few weeks.
Hopefully. So I'll toss this card up: send an email to the Cloud Multiplier address at redhat.com and we'll loop in Josh, and once we have official figures we can send you the announcement docs and any information that might give some sizing on that. I didn't know that, so you caught me off guard—I'd love to put Josh on the spot. Well, I'd say it and it would come back to haunt me: "But on this YouTube stream you told me..." Go ahead. No, what I was going to say is: it is definitely less than standalone machines. Exactly—significantly less. Yeah, awesome.

The other question sounds very interesting to me, because I want to do terrible things with clusters and clusters and clusters: someone asked about the OVN-Kubernetes container CNI and HyperShift. Has anyone played around with this? Yes—starting with the most recent version, HyperShift runs OVN-Kubernetes as its SDN. We started with OpenShift SDN, but recently we switched to OVN-Kubernetes, and the folks on the networking team did some awesome magic to make it work. If you get the latest builds, you'll get OVN-Kubernetes. Awesome—and the HyperShift project should be linked in chat as well.

The next question, Cesar, I think segues into my next question—man, chat is on top of things today. Andrew was asking how compute nodes are provisioned off of the parent cluster. We already talked about "can I have a virtual worker node"—but what does the lifecycle mechanism look like for a worker node, once you have that control plane running in a containerized fashion on your management cluster? Sure. Internally, what we're using is the Cluster API project—mostly it's an implementation detail. So what happens when a new cluster is created is, you have—
So what happens when a new cluster is created, uh, you have Uh, normally I get to interject for just a second because I thought this was the perfect moment for a segue into a demo of this live There we go. The demo gods are smiling on us Okay, so let me let me actually show you Okay, Caesar will put you live here There we go. This is sees your screen. So yeah, so this is uh, a cluster that I'm connecting to through canines. It's basically Showing two different CRDs right which is the main api of hyper shift One is hosted clusters which represents A cluster and the control plane associated with it and then another one called node pools Which is the machines that you're going to associate with that hosted cluster There is a one-to-many relationship between Hosted cluster and node pools You can have many node pools per per hosted cluster um the you know Mostly equivalent to this in in current open shift is this would be a machine set Uh, but with a lot more stuff than Our machine sets have Currently, uh, so so these are the things that you interact with um, and if I go and create a new cluster um, I will I go Create this might create. There we go. I create a cluster in aws um And so what you'll see here eventually is that I'll get a new hosted cluster a new node pool um First it creates Some infrastructure on aws to host them and then here you see them right they show up um, now what happens with this is that As soon as that hosted cluster is created Mission pods Representing the control plane Start getting deployed And one of those pods is the cluster api manager And the cluster api provider those Depending on which provider your hosted cluster is creating um You will you will get a different provider pod right in this case. 
it's AWS, so my provider is going to provision AWS machines. And if I look at machine deployments—sorry, machines—you'll see I have three machines created for this hosted cluster I just started creating, and those correspond to AWS machines, which are the objects the provider works with. The provider will look at these, start creating new instances for them, and when an instance is ready, it will report "I'm ready" here. This is the thing that creates new machines for us.

When you look at what that node pool looks like—let's go look at the node pools... if I can manage to get this pane resized... okay. The YAML for the NodePool has a good bit of stuff. It says what the platform is for the node pool—in this case AWS—and it has AWS-specific settings: what kind of instance, the root volume, the security groups it needs to belong to, the subnet where the machine will be placed. There's the release image for the RHCOS OS image that will be laid down on the machines—that also controls the kubelet version—and how many replicas I want. There are also other things I can specify about my node pool. For example, I can set auto-repair to true, which results in machine health checks getting created for it. I can also say how I want my node pool to roll out upgrades or configuration changes. We have two strategies for upgrades: "replace," which basically throws away the machine and creates a new one, and "in place," which was recently added, and which basically applies a new Ignition payload to the machine in place.

Let's go back and look at our machines. Okay—so now the machines have been deployed; they say they're running.
They have an actual instance ID. If I look at machines, they are attached to my cluster; eventually they will get a node name here, which means they have been added to the cluster. If I look at my pods, you'll see that I still have some pods starting up here: I have etcd, I have the kube-apiserver, the kube-controller-manager. They're still initializing; eventually they'll come up and my control plane will be up and ready. So if I look at one that I had already running... I was about to joke, this is the part of the demo where we talk for two minutes while the cluster comes online. Yes, so this is another one that I had running before we came on, and you can see all the pods that are running: etcd, the kube-apiserver, the kube-controller-manager, the kube-scheduler. They're just simple pods, right? If I create an HA cluster, you'll have three replicas of each, and they have zone anti-affinity, so each one will be deployed in a separate zone of your management cluster. You just get a lot more here. Let me get back to here... well, it's still going. Okay. And Cesar, did we show the API, the HyperShift cluster API that we are setting up? Yeah, so let me show you that. So this is the node, right? Yeah, if you go to the hosted cluster we can show that. Basically this is the YAML for the hosted cluster. The most similar thing to this on the standalone OpenShift side is the install-config. It has information about networking, about DNS, and about how etcd persistence is managed (you can set, for example, the storage class for your etcd storage), networking CIDRs and so on, and then platform-specific settings; in this case it's AWS as well.
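The install-config-like fields Cesar points out could be sketched like this. The CIDRs, base domain, and storage class are placeholder values, and the field names follow the HostedCluster API but may vary by release.

```yaml
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: demo
  namespace: clusters
spec:
  dns:
    baseDomain: example.com          # placeholder
  networking:
    clusterNetwork:
      - cidr: 10.132.0.0/14          # placeholder CIDRs
    serviceNetwork:
      - cidr: 172.31.0.0/16
  etcd:
    managementType: Managed
    managed:
      storage:
        type: PersistentVolume
        persistentVolume:
          storageClassName: gp3-csi  # storage class backing etcd persistence
          size: 8Gi
  platform:
    type: AWS
```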
So we have the VPC, and we have credentials for different parts of the cluster. One thing worth mentioning here is that, as a design principle, we don't want to store your tokens or your passwords or anything that could compromise your account. So what the end user creates is roles, and we use STS to assume those roles and perform the limited operations that those roles allow. If you look here at the different roles that are passed in to us, we have one for OpenShift ingress, and basically the role says: this is a role that you created, and you allowed the management cluster to use it via an OIDC provider. So these are all the creds you're giving us, and in the management cluster we're not storing any of your creds. That's something we wanted to get right in HyperShift. And that's a very big point, right? Since this is a single control plane for a whole fleet of clusters that are running in your enterprise, in whatever cloud, this plane isn't storing any credentials beyond what you're giving it. And I guess, Cesar, Josh, you'll probably say the access to this control plane doesn't need to be open either, right? Right. So the kube-apiserver endpoint of the management cluster does not need to be public at all, right?
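A hedged sketch of the STS/OIDC wiring Cesar describes: the AWS platform section of a HostedCluster references pre-created IAM role ARNs rather than long-lived credentials. The `rolesRef` field names here are based on recent HyperShift API versions, and all ARNs are placeholders.

```yaml
# Sketch only: the cluster references IAM roles you created; the management
# cluster assumes them via STS through an OIDC provider, so no static
# credentials are stored on the management side.
spec:
  platform:
    type: AWS
    aws:
      rolesRef:
        ingressARN: arn:aws:iam::123456789012:role/demo-openshift-ingress      # placeholder
        imageRegistryARN: arn:aws:iam::123456789012:role/demo-image-registry   # placeholder
        storageARN: arn:aws:iam::123456789012:role/demo-aws-ebs-csi            # placeholder
        networkARN: arn:aws:iam::123456789012:role/demo-cloud-network-config   # placeholder
        controlPlaneOperatorARN: arn:aws:iam::123456789012:role/demo-cpo       # placeholder
```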
And so it is much harder to attack. Something else that we do by default is encrypt etcd secrets at rest. By default we use AESCBC encryption, just like standalone OpenShift, but we also support the AWS KMS provider for encryption. With that, we never even have to know what your keys are: even though we're storing your secrets on the management cluster's disks, we can't read them from disk, because you own your keys. All you tell us is the ARN of your keys, and we can't decrypt without them. Awesome. Cesar, we had a related question that's probably good to slide in here while you're showing us the install-config equivalent. There's already been some chat, but I wanted to make sure we addressed it: how will infra nodes work? I see this in two ways. One, can I have a HyperShift cluster with infra nodes? And two, can I host HyperShift cluster control planes on infra nodes? I don't know what we have to say about that, but I suspect that infra nodes are just differently labeled nodes. Correct, yes. So by default we don't force you to have infra nodes, but basically you can create a separate node pool that you label as infra, and update the different operators that run inside the guest cluster to be scheduled onto those dedicated infra nodes from your infra node pool. So things like the image registry, ingress, and the monitoring stack you could move to those infra nodes. The other thing we're looking at is potentially making some of these things that get deployed by default optional. You may not care for a monitoring stack; you shouldn't have to have one running on your workers if you don't want one. Or you may not need an image registry, because all your workloads... right, you already have your own registry. So we're thinking of making those optional.
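The etcd secret-encryption options Cesar mentions are expressed on the HostedCluster. This is a hedged sketch: the key ARN is a placeholder, and the field names follow the HyperShift API but should be checked against your release.

```yaml
# Default-style AESCBC encryption of etcd secrets at rest:
spec:
  secretEncryption:
    type: aescbc
---
# Or bring-your-own-key via AWS KMS: the management side only ever
# sees the key ARN, never the key material.
spec:
  secretEncryption:
    type: kms
    kms:
      provider: AWS
      aws:
        activeKey:
          arn: arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000  # placeholder
```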
It's not there yet, but that's something that's eventually going to come. So, there are a couple more questions. On the vCPU licensing side: if you want to send the link, Gurney will put the email back in there and we'll get a PM to respond to that. We were supposed to have a PM with us today, but they weren't able to make it, so you've got just a tech crowd. Yeah, I was just typing that up to say we'd be happy to get you info on how the licensing side of it works with vCPUs in the HyperShift space. And then there was one about install; it was asked below, and I think Gurney had written it in the chat. The delivery mechanism to get access to hosted control planes is through ACM and MCE. MCE is an operator that, at a very basic level, is available for any install of OpenShift and with any OpenShift license. So if you have a deployed OpenShift, you can add the MCE operator and either turn that cluster into a HyperShift management cluster, a hosting cluster for control planes, or use it to create other OpenShift clusters and use those as hosting clusters, or sorry, management clusters. We use those terms interchangeably: a management cluster is a hosting cluster for our control planes. Yeah, we actually have someone asking the next question on my list as well, which is: what does the networking look like? How do the nodes connect to the management cluster? I'll extend that question a bit, I'm going to steal it, and ask: is there a benefit to having your control plane's management cluster in the same data center, cloud region, and so on as your worker nodes? Are we sensitive to latency? Yes. So normally, I mean, at least in the same region, right? It depends. For example, in AWS,
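The separate infra node pool Cesar described a moment ago could be sketched as follows. The label and taint names are illustrative; `nodeLabels` and `taints` follow the NodePool API.

```yaml
# Sketch: a dedicated pool whose nodes come up labeled (and optionally
# tainted) as infra, so in-guest operators like ingress, the image
# registry, and monitoring can be rescheduled onto them.
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: demo-infra
  namespace: clusters
spec:
  clusterName: demo
  replicas: 2
  nodeLabels:
    node-role.kubernetes.io/infra: ""   # illustrative label
  taints:
    - key: node-role.kubernetes.io/infra
      value: reserved
      effect: NoSchedule
  platform:
    type: AWS
```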
we support private guest clusters, and the way that private guest clusters communicate with the control plane is via PrivateLink, so no traffic goes out through the public network; it travels inside Amazon. If you are using that, your regions and zones need to match with the management cluster; it's a limitation of AWS that both the management cluster and the guest cluster need to be in the same region for PrivateLink to work. There's more flexibility if you're not using private clusters, but I would say we'd recommend running within the same region, and if you're going to have multiple management clusters, at least have one per region where you're going to run your workloads. That would be what I would recommend. So, networking. One of the more interesting things about HyperShift is the networking piece, because it is a little bit different from standalone OCP. Communication between the workers and the control plane is fairly straightforward: the control plane is exposed via a load balancer. That load balancer can be public on the internet, or it can be a private load balancer in the case of PrivateLink. The communication from the control plane, from the kube-apiserver into the kubelet, for things like logs, exec, and so on, happens through the Kubernetes apiserver network proxy, Konnectivity. Initially we had a VPN set up to do that; we were using OpenVPN, but Konnectivity seems like a better solution for that.
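The private-cluster option discussed here maps to a platform setting on the HostedCluster. A sketch: `endpointAccess` follows the AWS platform API (recent releases document values like `Public`, `PublicAndPrivate`, and `Private`), and the region is a placeholder.

```yaml
# Sketch: expose the hosted control plane only over AWS PrivateLink.
# With Private access, the management and guest clusters must share a region.
spec:
  platform:
    type: AWS
    aws:
      endpointAccess: Private   # alternatives: Public, PublicAndPrivate
      region: us-east-1         # placeholder; must match the guest workers' region
```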
It's an application-layer tunnel. Basically, the way it works is that your worker dials into the control plane, because it can reach the control plane, and then, through the tunnel established by that worker, the control plane talks back to the worker. That's how that communication is set up. I was going to say, spoiler, although not officially supported: you can, for example, have a control plane running on AWS, and, to answer one of the questions, it's AMIs under the covers... sorry, the control plane is pods; the workers in AWS would be AMIs. But you can have the control plane in AWS while the worker nodes are in Azure, as an example, and at least it'll connect up, and for basic workloads it works. But, as Cesar was pointing out, depending on what you're doing, you need to be cognizant of the fact that you're literally traveling across the internet. And I suspect, and again, we don't have PMs here, so we can talk about all the possibilities, but I suspect the official support matrix will be a little different. Just to give you an idea of the sort of cross-connecting that can be done, it's going to be quite interesting over the next few months as we start to pull these pieces together. Josh, Cesar, you only need to say "this is the HyperShift project speaking, not the product." We're engineers; we're just going to tell you what we've hacked to get working.
Cesar, I just have to put this on screen, because they've had a taste of it and they're wondering... oh, wrong one... when they can have VMware and OpenStack. I think the best answer for this, that I'll try to add live, is that HyperShift is an open source project, and they are working as quickly as they can to get there, but it's also a great place to collaborate. So if there's something like that you're interested in... I myself am going to go read the code base. Anything from you, Cesar? Definitely, yeah. I can't give the PM answer, so I'll just say: come talk to us. Yes, definitely, in the community. It's one of those things: the more upvotes and the more issues we see for something, the more it helps us push it higher up the stack. Yeah, awesome. Let's see, Joydeep, I know you had a question or two. We're getting close on time; we might run over a bit. Did we have any questions from you, Joydeep? No, I think Cesar already answered that while talking about the security stuff. That was paramount in my mind, how can we make this secure, and Cesar explained that. So maybe I'll add to it and come back to: how do you get this, again?
So you start with an OpenShift cluster. You go to OperatorHub, look for multicluster engine or Red Hat Advanced Cluster Management, and install those; from a licensing perspective, multicluster engine is available to you with just your OCP license. You activate that, which is about a two-minute procedure for the operator deploy, and you add an add-on that activates the HyperShift operator, which is what does a bunch of the heavy lifting, initializing the stuff that we just saw demonstrated by Cesar under the covers. That will initiate it on that cluster itself and make it a management cluster, or you can create or import other clusters with ACM or MCE and make those into management clusters as well. At which point you can start to create your fleet, there's that word, of hosted clusters and hosted control planes. Awesome. And I dropped the project link in there... I apparently pasted the link twice in a row, which is just skill for me, so y'all can figure that out in live chat. That's my bad. And let's see, before we close out, we'll probably wrap it up if no one has anything else, and then I have one last thing to pop up on screen; we can laugh about it, give a noncommittal answer, and then disappear until we're back. We're here every second and fourth Tuesday. I know that causes headaches; it would have been worse doing every other Tuesday, because there are like two fifth Tuesdays every year. So we'll be here, 1 p.m. Eastern, every second and fourth Tuesday. Now, to read us out, we have my favorite comment from the day, one we can chuckle about and give a noncommittal answer to: is the intention for the HyperShift project to work with managed solutions like ARO? I'll say I saw a bit of a nod there from Cesar, so we'll blame him.
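The activation steps Gurney describes could look roughly like this. Channel, namespace, and component names are illustrative and depend on your MCE/ACM version, so treat this as a sketch rather than the documented procedure.

```yaml
# Sketch: subscribe to multicluster engine from OperatorHub...
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: multicluster-engine
  namespace: multicluster-engine
spec:
  channel: stable-2.x            # placeholder channel
  name: multicluster-engine
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
# ...then create the MultiClusterEngine instance with the HyperShift
# component enabled, which installs the HyperShift operator add-on.
apiVersion: multicluster.openshift.io/v1
kind: MultiClusterEngine
metadata:
  name: multiclusterengine
spec:
  overrides:
    components:
      - name: hypershift         # component name may differ by release
        enabled: true
```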
But, you know, I can't say anything about this one. What I can say, though, is that we talked about how this is the underpinnings for what IBM is doing in their managed offerings, so it's easy to understand and extrapolate how this becomes a part of Red Hat managed offerings as well, and Red Hat managed offerings include, but are not limited to, OSD and ROSA. Josh, that was even more of an answer than I expected we would get here, so I think chat will be incredibly happy. The PMs might yell at us a little bit. So, that was my commitment without a commitment. Yeah, there you go. Well, thanks again to Cesar and Josh for joining us. Thanks for co-hosting with me as always, Joydeep. And we'll see everyone in two weeks. Feel free to send us an email at our show contact if you have anything; otherwise, catch you in two weeks. We don't have an outro yet, so I'll have one by then. See y'all. See y'all. Thanks for having us. Thank you.