Welcome to this week's Ask an OpenShift Admin office hour live stream. So, you know, Johnny, happy Wednesday. It's yet another great day at Red Hat. And I have to say I never get tired of that intro. I think that might have been created by interns. I probably say that every week, but I really like that intro. Yeah, happy Wednesday to you too. I was actually showing my wife the playback last weekend. Yeah, it was pretty awesome. For a bunch of nerds, you know, she seems to think that we got it right. Yeah. You know, apologies to the audience, you've got to look at me, but we'll make the best of it. So for this week's Ask an OpenShift Admin show, we tried to get several guests and covered several different topics, and we struck out a little bit, as you saw in all those emails that were flying around, Johnny. But that's okay, because we actually did get a request slash recommendation from one of our viewers over on the CNCF Slack, which is a bit of a continuation from last week's deep dive on disconnected, where we'll be talking about dis... excuse me, installing OpenShift to AWS GovCloud. I got that right, Johnny? Right. Yeah. Okay. And I also wanted to talk, way back when we did the What's New in OpenShift 4.9 discussion, I wanted to talk about MetalLB, and I think we ran out of time. So I'm going to spend just a few minutes talking about MetalLB: kind of what it is, why you might want to use it, that type of stuff. So we'll talk about that a little bit. But yeah, it's looking to be an exciting show. Please don't forget that next week we will be off air during this time. It is a holiday here in the US. I will be celebrating with my family for the first time in a couple of years, so I'm looking forward to that very much. I hope everybody stays safe throughout the holidays and throughout everything else. I will definitely suggest, however, that you subscribe to the channel on whatever platform you're on, because in December we've got three really good shows lined up. I know for a fact we have sandbox containers coming up December 1. On December 8, I'm hoping that we'll have the support folks on, and we'll be able to really go in depth with them: what makes them tick, what makes their job easier, how they can help you better, all that type of stuff. And on the 15th, we'll have a special guest from VMware joining us, so we'll be talking about lots of things VMware. So if you're not subscribed, I definitely encourage you to do that so you can get those alerts when we do come online. I'll also encourage you to go to red.ht/livestreaming. If you go there, there is a calendar you can subscribe to, and then you'll be able to see all of the shows that are going on. We sometimes adjust the schedule, sometimes we'll cancel shows, all that type of stuff, and you'll be able to see those in your calendar as well as get all those reminders. All right, Johnny, anything from you? Oh, just looking forward to the stream today. Yeah, yeah, I know. What's the song, who sings that song? Jesus, Take the Wheel. And it's Johnny take the wheel, because I know nothing about GovCloud, so I'm just going to have to hand it off to you and do the role I was born to play, which is dumb guy.
So, that's right, I also specialize in village idiot, so, you know, I think we'll be a couple of twins here. All right, so before we get started with today's topic, I do want to cover a couple of our top of mind things. So first of all, hello to everybody in the chat. I see all of you there, and I appreciate you participating. If you have any questions, kind of regardless of what our topic is today, the goal of these office hours streams is to answer your questions. So anything and everything that's on your mind: talk about it, ask about it, and we'll be happy to address those questions. If we can't answer them here on the stream, we'll follow up, we'll get those answers, and then we'll talk about them in a future stream. Or we'll put them into the blog post that follows up each one of these streams on cloud.redhat.com, if we can get answers fast enough. Now, some of the top of mind topics. This is, for lack of a better term, a recurring segment, if you will, here on the Ask an OpenShift Admin live stream, where we talk about things that are important, relevant, and timely that have happened in the last week or so. So the first thing I want to talk about is something that is a little bit self serving, and that is: Red Hat is hiring. We get pinged, you know, Johnny and I, and Stephanie in the background. Stephanie, hi, thank you for helping us yet again this week. We get pinged a lot about, hey, can you talk about this job posting or this opening inside of Red Hat. And rather than focusing on any one position, we tend to just generically say, hey, go check out the Red Hat jobs portal. There is just a tremendous number of openings available out there, ranging from the sales side of things, account executives and so on and so forth, to the BU. My team is hiring; we've got, I think, six or seven positions open on my team: product management, product marketing, engineering folks. You know, Johnny, you just moved over onto the engineering side. So kind of whatever your skill set, if you have an interest in working in the open source community, working with Red Hat, be sure to check it out. The next thing I wanted to talk about, and this is one that you reminded me of, Johnny, is Advanced Cluster Management, what we lovingly refer to as RHACM: Red Hat Advanced Cluster Management for Kubernetes released version 2.4 last week. A number of things happened with 2.4; here are the ones that stick out to me. Let's see, there is scalability: they added the ability to provision single node OpenShift, and they've also tested scalability of up to 2,000 clusters with single node OpenShift, so a single ACM cluster can scale out to and manage across 2,000 single node OpenShift clusters. That's pretty impressive. It's a lot of clusters. There's a number of other things inside of there. Let's see, let's take this link and paste it into Twitch. So that link will take you to the announcement blog around that. Let's see, what are some other things that were inside of there? Anything that comes to mind for you, Johnny? No, the 2,000 clusters was pretty huge. There was, and I may have just missed it if you said it, the GitOps plugin, or the GitOps integration. Yeah, yeah, that's a good one, thank you. So yeah, this is one that, you know, Christian, who is the host of the GitOps Guide to the Galaxy, I think he's talked about before: with ACM 2.4, it is now GitOps aware.
So if you deploy an application, literally the Application CRD, into a cluster using GitOps, ACM is now aware of that. It can see it, and it can help to manage those objects as well. Let's see, there's some other things in the blog post. I see some better integration between ACS and ACM, lots of other stuff. If you're an ACM customer, or if you're interested in ACM, definitely check it out. I think there's a little bit of something for everyone inside of there. We had Jimmy Alvarez on, gosh, it's been a month and a half or two months ago now, so I'll see if I can dig up the link for that stream, where he talked about a lot of this stuff. He kind of previewed what was in 2.3 and a little bit of what was in 2.4. So we'll dig that one up so you can take a look. Yeah, especially if any of you are considering, you know, hybrid cloud or multi cloud type of OpenShift deployments and things like that, where you want to have, like, the single pane of glass to manage all the things. That's really where ACM comes in, and the integration between ACM and all of our other products is just getting better and better. And so it's making that deployment, you know, the day zero type deployment thing, a thing of the past, where we don't really have to worry about it anymore. So definitely check it out. Yeah, I was recently doing some experimentation with Hive, which is the engine behind ACM for doing cluster deployments and stuff like that. Hive is really interesting, because it's literally programmatically deploying a cluster; basically you're having it trigger the installation process behind the scenes. So, a question from the chat: is the 2,000 cluster limit due to the same etcd bottleneck that limits the current number of workers? I don't think so, because essentially ACM, from what I understand, manages those; basically it's an application. So it's not storing data about those clusters inside of etcd, other than, if it's deploying those clusters or something like that, there are some CRD instances that are in there. But, you know, 2,000 objects is nothing for etcd. So I would suspect that it's something internal to ACM itself; that limitation is not an etcd limitation per se. I'll also remind you that ACM and ACS and Quay and all that other stuff, all of OPP, the Red Hat OpenShift Platform Plus products, they all qualify as infrastructure workload. So you can deploy an entire OpenShift cluster that is all control plane and infra nodes, host all of those services in kind of a hub type of scenario, and you don't have to use entitlements for that. I always thought that was a cool thing that we did. Yeah, our hope nine, I guess it was just a tested number that they shot for and it seemed good. I don't know, I'm not terribly in touch with the engineering side on ACM. I'll have to ask Jimmy. So I'm going to share my screen here real quick to talk about the next thing, and that is downloading previous versions of the installer. So, where am I? Let's see. Oh, so this is console.redhat.com. If you've been here before, you know this is how we go and create and deploy new clusters, right? I go into the create cluster flow, I'm going to come here, and I'm going to say I want to do a vSphere cluster with IPI.
And if you download this installer, this is always going to be the current GA release of OpenShift. So if I right click here and go to copy link address, and then I open a new window and go here, and we'll get rid of the actual file name because I just want to go to the directory, we can see that it takes me to 4.9.5. So what if I want to download 4.9.4, or 4.8.something, or 4.7.something? Because all of those, even 4.6 EUS, are fully supported releases. How do I go and do that? The answer is, and you've all probably seen me browse to the mirror here and download stuff from there, because if we go up a directory, we have all of the versions listed in here, right? But the official way to do this, I was recently informed, is actually from access, the customer portal. So if I come here and go to products and services, cloud computing, OpenShift, and select OpenShift Container Platform, we have the download button. And, you know, if you've been a Red Hat customer, this is how you get all of your software, so it makes sense. It's just that Andrew never put two and two together. So from here we can go and download kind of any version that we want, and you can see that these are all the stable versions. One of the downsides of using the mirror is that, in that listing you just saw me scroll through, it's every version that gets released, including versions that don't get promoted to fast or to stable. Whereas over here, these are only going to be the versions that make it to at least fast, because 4.9.7, you see, is what it's calling the latest. And if we go back to the console, the OpenShift cluster manager console, if we go to releases and we look at 4.9, you see 4.9.7 is in fast. So this kind of brings me to my next point here in the top of mind topics, and that is: a fast release is considered generally available. We've talked about that before, we talked about it with Rob Szumski when he was on, we've talked about it a number of times. Fast is generally available; fast is fully supported. So when you're deploying a new cluster, it kind of encourages or pushes you over to that fast, or that very newest release, by default. Not a candidate release; candidate releases are considered unsupported, even though sometimes they do align. In that instance, it would be supported, right, because it is in both fast and candidate, and we can't tell whether you clicked the button to download from candidate versus the button for fast. It's the exact same bits. So my point here is, if you're deploying a new cluster, it's perfectly fine to deploy with the version that's in fast. Again, it's fully supported; you can pick up the phone and call us; it is considered generally available in that respect. The only downside, if you choose to do that, is that when the cluster finishes deploying, you will probably see an alert on the console about, you know, the update channel doesn't have any updates, or the release isn't found. And that's because it defaults, in that instance, to the stable channel. And if fast is ahead of stable, which is pretty normal, say I deployed 4.9.7 but stable is at 4.9.5, it defaults to having the update channel set to stable, so it's going to say, my release doesn't exist, I don't know what's going on. You just need to switch that channel over to fast at that point. But otherwise, it's exactly the same, and there's no reason you couldn't do that.
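For reference, that channel switch can be made in YAML as well as in the console. A minimal sketch, assuming a 4.9 cluster; you could apply it with oc edit clusterversion version or an equivalent patch:

```yaml
# Sketch: pointing the cluster's update channel at fast-4.9 so the console
# stops complaining that the running release isn't in the channel.
# The ClusterVersion resource is cluster-scoped and is always named "version".
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: fast-4.9
```

Once stable-4.9 catches up to the release you deployed, the same field can be pointed back at stable-4.9.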
I will point out, and kind of what made me think of this, is that one of our folks internally, we had a conversation this morning about the docs. Let me see if I can find the right page in the docs here. Yes, so this particular docs page, let me paste this into the chat here. So this particular docs page, which is for installer provisioned bare metal clusters, has a command in here. Let me refresh the page so it'll go back to my bookmark. Is this the page that I want? Did I just lose myself? What happened here? There. Ah, here we go, I was just looking at the wrong place. So if we look at this documentation, if we follow it exactly, it's going to grab the latest 4.9 release, which is going to be the fast channel. So if I were to follow this documentation to the letter, I would end up with a deployment that uses the very latest generally available, fully supported release, which is the fast channel, not the stable channel. So we did create a docs BZ to redirect this over to stable, because I think most people reasonably expect it to use the stable channel, even though really those channels, stable versus latest, are related to updates and upgrades, not necessarily new deployments. So just be aware of that if you happen to be following these documents. I think if we look in the other ones, so again, I'll pick on vSphere, and if we go down here to installing a cluster on vSphere, when we look at obtaining the installation program, it actually tells you to go through the web page, you know, access the Infrastructure Provider page on the console, the Red Hat OpenShift Cluster Manager site. So it tells you to go to the web page and download it from there. All right, and the last thing that I have is the what's next presentation. Just a quick reminder that next week, excuse me, not next week, two weeks from yesterday, November 30 at 10am Eastern, is the roadmap presentation, the what's next presentation. So if you're curious about learning from product management what's going to be happening over the next one, two, three releases, basically the next nine to 12 months, it's a great presentation to listen to. Usually, if we schedule it for an hour and a half, we try and keep within that; sometimes it does go longer. But yeah, it's always an interesting one. I'll be here on the stream, so we do stream that out live across the various streaming platforms, and I'll be helping to answer questions. Any questions that I can't answer, or any of the other streaming hosts can't answer, we'll shuttle over to the product management team and get them answered for you. So I do encourage you to watch that; it's always a good indicator of what's going on, and you can get a little bit of a preview. The what's new presentations include a single roadmap slide inside of there; the what's next presentations basically deep dive into each one of those items, so take a look at that. All right, I'm done. That's all I got. Super straightforward today. I won't say nothing terribly exciting, because, you know, Red Hat job openings, ACM 2.4, it is exciting. It's just maybe not glamorous stuff to talk about. I don't know. Okay, so Johnny, do you want to talk about MetalLB first, or do we want to talk about AWS Gov? Let's do MetalLB.
Okay, if you're okay with that, I'm fine with that. We're going to see how my cluster does today. So let me share my window here. This is my lab cluster; I'm running 4.9.5 here. I just updated it from 4.8.something-or-other not too long ago. So let me first start by saying that with OpenShift 4.9, MetalLB was added as a supported option for exposing services directly to the outside world. Traditionally, or historically, we don't expose services, right? If you need to access an application that's running inside of the cluster, you create a route. That route then runs through either port 80 or port 443, you can access whatever application is on the other side, and it identifies the pods that belong to that application via a service definition. So if I come down here to networking and look at services, let's go to this one. I thought I had created a service earlier, but I guess not. So if we go to services, right, we can look at, sorry, I'm going to switch around and grab my service definition real quick. Well, I can't see what I'm doing; I'm typing on the command line because I'm grabbing the YAML for my service definition so that I can paste it into the thing here. So let's grab this guy, and then we'll come back to our web browser, and I want to create a service. Very simple, with my creative naming scheme here, simple-service: I am going to take any pod that has a label of app equals sisc, and then I'm going to redirect the service port 80 to the target port 8080. Super straightforward, super simple, and I can click create here. What that's going to do is identify three different pods running inside of my cluster here; you can see here's my deployment. So when I browse to that service, or hit that service from something internally, it'll more or less round robin across all of those available pods. I can't access that service from outside of the cluster, but I can expose it externally through a route. And that route, when I select this simple-service, effectively what that's telling the ingress controller is: hey, look at this definition, look at the pods it identifies, and use those to create the NGINX, or no, sorry, the HAProxy rules for forwarding traffic over to those pods. So it is important, if you didn't know this already: routes, and ingress controllers in general, don't send traffic through the service. Rather, they use the service as the identifier for the pods to send traffic to. So traffic goes directly from the ingress controller to the pod, not via the service mechanism. And then with this target port setting, we're basically saying to follow the target port definition from the service. And as you'd expect, I get my application. This is a super simple application I use for demos regularly. All it does is show who is requesting (my IP), the server name it's using, and the internal IP address. So it's kind of an easy way to just show that, hey, we've got an application running. I can refresh it a couple of times, and theoretically it'll change over to a different server IP at some point. But what if I want to expose that as a load balancer? What if I have something that's not running on port 80 or port 443? What if I want to have it directly connected from the outside world? To do that, we can use MetalLB.
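For reference, the Service from the demo looks roughly like the following as YAML; the simple-service name and the app: sisc label are approximations of the values shown on screen:

```yaml
# Sketch of the demo Service: selects any pod labeled app=sisc and
# forwards service port 80 to container port 8080. Routes use this
# selector to find pods; traffic from the router goes pod-direct.
apiVersion: v1
kind: Service
metadata:
  name: simple-service
spec:
  selector:
    app: sisc
  ports:
    - port: 80
      targetPort: 8080
```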
So I'm going to deploy MetalLB using the operator. We have this MetalLB operator; it says provided by Community. I think that's a mistake, because it's actually coming from the Red Hat operator catalog, so it's one of our operators. I'm not sure why it says provided by Community here. So I'll do the normal click install, standard stuff here. I do want to create a project; we're going to call this metallb-system. There, excuse me while I cough. We'll hit install here, and it'll deploy the operator kind of as you would expect. In full disclosure, I haven't tried this since 4.9 GA. I tried it in the RC, and there was an error with the operator, so if this fails, if something goes wrong here, I do have the instructions on how to do it manually. So we'll see what happens; the operator is being installed. An interesting thing here: if we go to the installed operators, I can click on this guy, and down here there should be an install plan that we can look at and view in order to see precisely what it's creating on the back end. And my mind is completely not working today as to why I can't find the install plan. Anyways, it did succeed there, I don't know if you saw that. So with that, we need to create a MetalLB instance. We'll create that. So, MetalLB; good enough for now, we don't need to change the name here to anything fancy. We're gonna click create. And what we should see here is, if I switch over to, I'm still in the metallb-system namespace, if I switch over to pods, you'll notice a couple of things happening. So one, here is our operator controller. This is the controller, the logic behind those CRDs. So when I created that MetalLB CRD instance a moment ago, it's the one that actually turns that into logic. From there, I now have these speaker pods. There's one of these running on each node in the cluster; it's a DaemonSet. And this is what will manage the traffic coming into the system. And then up here is my controller pod, which will do things like assign an IP address to that external service. So let's talk about how MetalLB works. I'm gonna switch over to the documentation here, and I'm gonna paste this guy into the chat; this is the documentation page that I'm looking at. What I want to show you here is this graphic. At this point, we can look at our diagram here. We see that in this instance, we have three nodes. Each one of those is going to have an IP address on the network, according to this diagram: 100.11, 21, and 31. And then we have these speaker pods. Effectively, when I create a service that is using MetalLB, the controller will assign it a load balancer IP. So we see 100.200. That speaker will assume that IP address and then start answering ARP requests for it. So now when something needs to access my application, it says, okay, I need to go externally, I go to 100.200. And because of the ARP cache, the ARP packets it's sending out, it says, okay, I need to come over to this speaker. Now, which speaker it lands on is actually arbitrary.
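For reference, the MetalLB instance created from the operator a moment ago is a minimal custom resource. A sketch, assuming the metallb.io API group that the operator ships and the metallb-system namespace used in the demo:

```yaml
# Sketch: a default MetalLB instance. The operator's controller reconciles
# this into the controller deployment plus the per-node speaker DaemonSet.
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
```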
By default, it will use kube-proxy on the backend, and just like a service, it will send that traffic across all of the different application pods that exist in the cluster. So that's kind of MetalLB at a high level. It will traverse; kube-proxy does traverse the pod network and all of that other stuff. So there are a couple of things to be aware of here, not the least of which is: one, you are limited to the throughput of a single node for that ingress traffic. But two, if say you're using a 25 gigabit interface here and a 10 gigabit interface here, you would then be further limited by the throughput available on the pod network, which is the SDN. So this docs page is really good; they go through a lot of stuff. First and foremost, I should also highlight that right now MetalLB only works, or is only supported, with layer 2 inside of OpenShift. There is a BGP mode for upstream MetalLB. If you're not familiar with that, BGP is a routing protocol. Effectively what happens in that instance is, when I create a service with BGP mode enabled, it will advertise BGP routes upward to the router for any node that has an application pod running on it, and then the router uses BGP to do the load balancing. Whereas L2 mode is simply: I have an IP address here, I'm gonna use ARP to advertise that address, and then all of the traffic flows in to that one node and goes on to those application pods, as opposed to the upstream, or I guess here the next hop, router doing that load balancing across the available nodes. The other thing that I'll talk about on here is these infrastructure considerations, and most importantly, limitations. So, as I already mentioned: single node bottleneck, and slow failover performance. This is more or less in line with keepalived. It doesn't use keepalived, but it has a similar sort of behavior: if this node, sorry for scrolling on y'all, if this node which has the 100.200 IP address were to go down for some reason, whether it's a coordinated drain because I'm rebooting for updates, or whether it falls through the floor, it will take a few seconds for one of the other nodes to pick up that IP address and resume traffic flow. Okay, I'm done rambling about the documentation. Everything should be running over here; looks like we've got all of our speaker pods up and running across the cluster. So what I want to do now is create an address pool. Effectively, MetalLB needs to know: what IP addresses am I allowed to use for a load balancer service? So let me grab the appropriate bits that I need here, just like that. Now I want to go back up to here, and I'm going to create. So it is a MetalLB API endpoint of type AddressPool. And you'll notice a couple of things. One, this is in the metallb-system namespace. I'm going to call this vlan14, because that's the VLAN in my lab. Importantly, with 4.9 and MetalLB, we do need to use the layer2 protocol here. And then we just create a set of addresses here. So let's talk about this a little bit. First, I can specify multiple address pools inside of here. I'm calling this vlan14 just to help me keep track of it, but I don't have to limit it to just this pool, or just addresses on VLAN 14. In reality, I can configure my nodes kind of however they need to be from a networking standpoint. So let's say it's a physical deployment: I've got a bond running that has my SDN running across it, where the IP address on the machine network is. I've got another bond that I'm using for, say, dedicated storage traffic.
I've got a third one with, like, four network adapters, that has 10 VLANs, or 50 VLANs, or whatever, trunked into it, and then it has VLAN interfaces for each one of those. The nodes don't need to have an IP address on each one of these networks. The reason for that is, when MetalLB assigns that IP address, it configures it on all interfaces, and then it relies on the network being configured correctly to route the traffic to the appropriate interface. That's why it doesn't matter whether I have an IP address on this or not; it always listens on all interfaces. It just so happens that the traffic should only arrive on the correct one. So let's go ahead and create this address pool. And you can see that it went exactly as expected. We did set auto assign. So I'll highlight in the documentation over here, as soon as I find it: address pools are documented very well inside of here. One thing that I do want to point out is that you can, for example, use a CIDR if you wanted, so you could do a /24, a /28, whatever it happens to be. You can disable auto assignment, and you can use IPv6 if you so choose. If you disable auto assignment, then in the next step, which is actually using MetalLB, you would just have to request a specific IP. And again, the documentation walks through all of this: you can request specific IPs, and you can share IPs. So if I click over here, I can have multiple services using the same IP, hopefully with different ports; you can see HTTP 80, HTTPS 443 here. So port conflicts might be an issue, but you can absolutely use the same IP address. All right, so our address pool is created now. Now what we want to do is come back down to networking and services, and switch to the right namespace. This time, we're going to create our load balancer service, the YAML for that, and we'll hit the plus. And actually, I'll hit this plus, because this one, I know, will create it in the right project. So this time I'm going to call it, get out of here, help, a load balancer service. So, lb-service. You can see basically the exact same definition that I used before, but this time, the overly helpful help here keeps getting in the way, thank you, Clippy, the type here is set to LoadBalancer. So let's hit create here and see what happens. The first thing that we'll notice here is that I now have, in my service, an address of type external load balancer with a location of .14.230. So this time, without creating a route, you see, I'm just going to copy this IP address, and we'll open a new tab and paste that in. And what we should see is our service endpoint. This is the same application that was running from the externally accessible route; it just so happens to be running directly from our load balancer service. And you can see I can continue to refresh that, right, it keeps coming up, our time keeps updating. So it more or less behaved exactly as you'd expect. The good news here is that, if you wanted to, you could put a DNS entry in for that and address it through that DNS entry. It should behave just like an externally accessible load balancer at that point. So I'm going to take a look here at chat just to see if I missed anything. No, so load balancing is not really for throughput, not with L2 mode. With L2 mode it definitely is limited to a single node. When BGP mode is available, which tentatively is OpenShift 4.10, watch the what's next presentation on November 30th, it will be, we'll say, more robust in that respect for increasing the total available throughput.
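For reference, the two objects from this part of the demo look roughly like the following; the pool name, the address range, and the AddressPool API version of the 4.9-era operator are assumptions standing in for the lab values:

```yaml
# Sketch: an L2-mode address pool; OpenShift 4.9 supports layer2 only.
apiVersion: metallb.io/v1alpha1
kind: AddressPool
metadata:
  name: vlan14
  namespace: metallb-system
spec:
  protocol: layer2
  autoAssign: true            # set false to force services to request an IP
  addresses:
    - 192.168.14.200-192.168.14.230   # a CIDR such as 192.168.14.0/28 also works
---
# Sketch: the same selector and ports as the earlier Service, but with
# type LoadBalancer so the MetalLB controller assigns an external IP.
apiVersion: v1
kind: Service
metadata:
  name: lb-service
spec:
  type: LoadBalancer
  selector:
    app: sisc
  ports:
    - port: 80
      targetPort: 8080
```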
All right, so that's what I got for MetalLB. Pretty straightforward. It fills an important gap in capabilities and makes it easy to access those pod-based services externally. And importantly, you can also expose ports other than 80 and 443; I happen to use this example, which runs on port 80, but you don't have to. All right, so I'm going to hand over to you, Johnny, so we can talk about AWS GovCloud. Okay, awesome. Yes, thank you, Andrew, that's awesome. I've heard a lot about MetalLB, and I know it's been in the works for a long time, so I'm glad to finally see it's going to be a thing and it's going to be real. Yeah, well, you could previously deploy it yourself manually; it was just not supported. Now it's supported, which is great news. Yeah, that is awesome. All right, so, AWS GovCloud. If anybody has deployed in GovCloud, you know that it's not as straightforward as it possibly could be. So just at a high level, what we're going to talk about really are the gotchas. When you're deploying in AWS GovCloud, the big thing to remember is that you need to create a VPC first and then create a bastion within that VPC. And the reason is because of the way that Route 53 handles DNS: it's all private zones in GovCloud. There is no public zone that you can actually reach out and do a DNS resolve against. So that's why you have to do it that way. But once you have that VPC created and you have that bastion, or jump node, or whatever you want to call it, built out, then essentially you can run an IPI install as if you were doing anything else. The big gotchas are that you need to create endpoints for your private networks, because out of the box, if you use the CloudFormation templates that are in the OpenShift documentation, which I'll post a link to, all they'll do is create the S3 endpoint. But really you need the S3 endpoint, the EC2 endpoint, and the Elastic Load Balancing private endpoint. Essentially what those do is allow your private subnets to talk to the public APIs over a private link service. If you're trying to do a fully air gapped installation, then you will need some type of proxy server or firewall that's allow-listing the Amazon AWS domains, so that you can resolve to Route 53. Otherwise, your installation is gonna fail, because it can't create the DNS records it needs to create. So that's actually probably the biggest gotcha when we're talking about completely private subnets or completely private VPCs: you will need a Squid proxy, or you can use the AWS Network Firewall service, or something like that. I do have a link that I'll post to the chat for the Squid proxy reference architecture. And then I'll post another link; there's a team internally within NAPS, big shout out to North American Public Sector, by the way: we do disconnected and we do restricted, so this is our thing. There's a team within NAPS called Red Hat for Gov that is doing a bunch of documentation for the different environments, be it Microsoft Azure Government, or AWS GovCloud restricted, or VMware, or whatever. They've gone through and documented the process to do an IPI installation in those different environments. So I'll share a link to that as well, so you can go in, take a look at it, and hopefully get a lot out of it.
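Since the CloudFormation templates in the docs only create the S3 endpoint, here is a hedged sketch of what adding the other two interface endpoints might look like in a CloudFormation template of your own; the parameter names and the us-gov-west-1 region are assumptions:

```yaml
# Sketch: EC2 and Elastic Load Balancing interface endpoints so private
# subnets can reach those APIs over PrivateLink (S3 uses a Gateway endpoint).
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
  PrivateSubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
Resources:
  Ec2Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref VpcId
      ServiceName: com.amazonaws.us-gov-west-1.ec2
      VpcEndpointType: Interface
      SubnetIds: !Ref PrivateSubnetIds
      PrivateDnsEnabled: true
  ElbEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref VpcId
      ServiceName: com.amazonaws.us-gov-west-1.elasticloadbalancing
      VpcEndpointType: Interface
      SubnetIds: !Ref PrivateSubnetIds
      PrivateDnsEnabled: true
```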
So what I'm gonna do real quick is share my screen here and just kind of walk through the documentation, and we'll see how well I do this. Let's just do this whole thing. I guess I can stop sharing my screen on the backend. Oh, no, keep going, it's fine. All right, Stephanie keeps us organized with the new system. So, all right, let me make sure. Yeah, so this is the landing page for installing on AWS GovCloud. Essentially, because you're already creating a VPC, you don't have the same requirements for networking, so you don't have to worry about going through and creating network permissions with IAM. The account that I'm using has full admin, so I'm kind of cheating a little bit, but you can narrow the permissions down and scope them to what you need. The big thing about bringing your own VPC is, like I was telling you before, you need to create the endpoints. And so I will walk through that really quick, but just to hone in on some of the things that you don't really need to build out: you don't need to build an internet gateway, you don't need to build a NAT gateway, or subnets, or route tables. What's gonna happen is, once you've built your initial VPC, your deploy VPC, and you kick off that IPI install, it will provision everything for you. It'll give it a cluster ID, it'll provision your control plane, it'll provision your workers, and then you also have full integration to use machine sets and the APIs. So the difference here is, with a standard, fully connected IPI installation, it's creating all of those resources: it's creating the VPC, it's creating all of those records, everything else. Whereas with GovCloud, and even with, I'm gonna call it tiered IPI if you will, which is very much what we're talking about here, you can create and provide certain resources beforehand and tell it to use those. And I know that those features were added to OpenShift specifically for this type of instance. Yep, exactly, exactly. And I think it was around 4.6 when IPI became fully available within GovCloud regions, and then I think it may have been 4.6, or late 4.6, for Azure Government. But yeah, it's really awesome. Once you provision that initial VPC in AWS and get these three endpoints, I cannot harp on these enough: if anybody takes anything away, it's that these three endpoints are absolutely required. Otherwise, what will happen is you'll get through your installation, and then you'll realize that it can't talk. The install, the machine API actually, will try to update things, and it can't actually talk to anything, and so it'll just bomb out. I'll also point out that this type of install is useful if your organization has very strict policies around permissions. That was one of the early reasons we needed the ability to provide an existing VPC: a lot of organizations lock down VPC creation pretty strictly, permissions wise. So this is another example of, depending on the permissions available, you work with the appropriate teams in your organization to request the resources, and then provide them to the installer. Yeah, that's absolutely correct.
I can't tell you the number of customers that I've gone to where they've already got a VNet in Azure, or they've already got a VPC, or they've already got a resource group with VMware, right? And they have to work within that bubble, because a lot of times what happens, especially on the DOD side, is they're using a service provider, and that service provider has already provisioned this container of resources for them. So they have to work within that bubble, and this is where this comes in, and it's super, super helpful. And as long as they can get the permissions, and work with that provider to get the permissions scoped down, then it just works really well. And we've done this type of deployment in, I would say, very restrictive environments. So, I don't know if anybody really knows IL4, but at the IL2, IL4 impact levels, these really restricted environments, it's not fun to work in, but once you've got the foundation laid out, you can just run your deploy and keep doing these deployments over and over and over, and it's pretty cool. That was really the big thing about the documentation: just creating those endpoints. A lot of people overlook that. It's super important that when you create the endpoints, you associate your private subnets to them, and that allows your private networks to talk to these public services via a private link service. And just to reiterate back on the Route 53 thing: if you want a completely private VPC and you want to do an IPI installation, the installer provisioned installation, then you will need a Squid proxy or some type of external access, so that you can reach out to the Route 53 API and make that call, because there is no private link service for Route 53 right now. And then I'm gonna stop sharing this screen, and I'm gonna kick over to a terminal in just a second. Getting our hands dirty on the CLI. Oh yeah. Yeah, you get to see how terribly I type. Anybody who's watched the stream for any amount of time knows that I can't talk and type at the same time. And if I try, my fingers either start typing what my mouth is saying, or my mouth starts saying what my fingers are typing. One of the product managers I work with, I've done presentations with him where he is holding a microphone and typing something with one hand while saying something completely different, and it blows my mind every time I see him do it. That is awesome. So I ran some, see, exactly, I'm already forgetting how to use Linux. So I ran some automation that essentially provisions the VPC: it goes out and provisions the VPC, provisions the public and private subnets, creates the endpoints and associates them with the right subnets and the right route tables and everything, and essentially just gets the environment ready to go, so that I could try, hopefully, to show you how this works and looks. And then it's like that one time you're hoping that it works. Yeah, so you've got a bastion host that you're moving everything over to. That's correct. So, yeah, if this was commercial, I'd be able to run the OpenShift install from here, and I'd be able to hit the commercial APIs, no problem. But because I have to hit that bastion node, I just spun everything up from my laptop, or my corporate machine if you will, and now I'm gonna SSH out into the cloud and actually start the deployment from there.
So does this need, do we need to mirror images or anything like that? Is it disconnected? I think we have an AMI in those regions, but I don't know about all the container images. Right, so the way that this works is, because this is a connected install, it will reach out to Quay and pull all the images down. Recently, and I hate to give Jared Hocutt any credit for anything, but Jared on the Red Hat for Gov team did a lot of work to get the CoreOS images published out in AWS, so that you didn't have to pull them in, do a snapshot, and import them yourself. But yeah, he did a lot of work and got all that stuff out there, and that's another link that I have that I will share out with everybody, so that if you're in US East 1 you have the list of AMIs for each version, and if you're in US West 1, the same thing. Yeah, and Jared's good people. I know you're poking fun at him; I worked with him at a previous employer as well, back when he was a developer. Yeah, he's a good dude, and he and I have worked together for the last year and a half or so, and we've got this thing where we rib each other. He got me pretty good last week, you know, he got me really good publicly, and it's awesome. All right, so now I'm just getting everything set up so that the magic can happen. All right, and so now what I'm gonna do is just make a deployment directory, and then copy the install config into there, and before I kick it off, I will walk through this with vi. Instead of vim? You don't need colors. All right, yeah, I'm just so tuned to using vim that it just hurts. Just don't pull an Andrew and expose your pull secret. That's what I'm worried about, actually; I'm gonna try to not do that. But yeah, as you can see, the install config is pretty standard, you know, it's what we do with a connected installation. The big things that you'll see here are where, right now, I'm calling out my region, which is pretty standard, but I'm also calling out these service endpoints, and this kind of goes into what I was alluding to earlier about these being a must. Although they're defined here, that doesn't mean they're being created in the VPC or within your account, right? All you're doing is mapping them back to that account; these entries in the install config don't actually create them. And then the other thing that I do is just go through and say which availability zones to use. Generally, right, you're configuring a highly available service, so you'd want your subnets spread across them; you'd want your services to be highly available. And then the big difference here, and here, I did it, I did it, but the big difference is that with GovCloud we need to define the subnets, and that's how the installer knows which VPC to use. And again, the AMI ID that gets published, and then your public key. So it's all pretty similar to how we deploy in commercial, which is pretty awesome, right? I mean, there's just these little nuanced things that we have to do. And then that publish: Internal at the end. The reason why we have to do publish: Internal is because of Route 53. If Route 53 in GovCloud supported publicly hosted zones, then we could do a publish: External and it would work. A parity feature over in Azure Government is that you can actually do publish: External there, because it does support both public hosted zones and private hosted zones, right?
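Putting the pieces Johnny walked through together, the GovCloud-specific parts of an install-config.yaml look roughly like this; every concrete value below (domain, cluster name, IDs, URLs, key material) is a placeholder, not from the actual demo:

```yaml
# Sketch: a GovCloud install-config.yaml with the fields called out above.
apiVersion: v1
baseDomain: example.com              # resolved via a private Route 53 zone
metadata:
  name: govcloud-cluster
platform:
  aws:
    region: us-gov-west-1
    amiID: ami-0123456789abcdef0     # published RHCOS AMI for the region
    subnets:                         # pre-created private subnets; this is
      - subnet-0aaaaaaaaaaaaaaaa     # how the installer finds your VPC
      - subnet-0bbbbbbbbbbbbbbbb
    serviceEndpoints:                # map to endpoints that must already
      - name: ec2                    # exist; listing them does not create them
        url: https://ec2.us-gov-west-1.amazonaws.com
      - name: elasticloadbalancing
        url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com
      - name: s3
        url: https://s3.us-gov-west-1.amazonaws.com
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
publish: Internal                    # required: GovCloud Route 53 has no public zones
```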
And then I'm going to stop sharing for just a second to make sure that I've got my AWS credentials squared away, because I don't want to share those out; that would just be terrible. And that would be much more of a pain to reset. So I know we've only got five-ish minutes of our normally scheduled time. So for any of our folks who are watching the stream, please don't hesitate to put any questions or anything you've got into the chat. We'll address those as quickly as we can. You can also reach out at any time; you can contact me or Johnny via social media, so both of our Twitter handles. I am at practical Andrew, just like you've seen in the chat there. Thank you, Stephanie, for posting that up. Or andrew.sullivan at redhat.com. And then Johnny is, I can never remember your Twitter handle. J-Rock TX1. There we go. And then Johnny with no H and two Ns, at Red Hat. Yep. So yeah, please don't hesitate to reach out if you have any questions; it's always interesting to see what you all have to ask about. And for those of you who have sent me questions and stuff like that, we do tend to talk about them on the stream here; we tend to follow up and, once we find the answers, cover what you asked about. Okay, Johnny, your fingers not cooperating? I'm trying to fly, and it's just not going well. We don't have a stream that immediately bumps up against the back end of this one, so we can go over by a little bit. Don't rush too much. Okay, let's see what I did here. Yeah, actually, when did that change? I don't know when that changed. It used to be that there was an OpenShift Commons briefing immediately after the Ask an OpenShift Admin live stream, so we always had a hard stop right at noon, but that has since changed. I can't unmarshal something. All right, well, I'm just gonna pretend like I don't care, because I don't care. That's fine. So, from the chat: with MetalLB, why is it called layer 2 networking? It's referring to the OSI model, right, the layers one through seven. So layer one is your physical layer, right? And, Johnny, help me out here. Layer two is going to be at the, or no, maybe layer two is MAC addresses, I can't keep it straight. Layer two is MAC, yep. Yeah, thank you. So layer two is MAC addresses. Layer three is where routing comes in, with IPs, et cetera. So basically what that means is that everything exists on the same L2, the same broadcast domain, from the perspective of accessing those things. Layer three would be: I have different IPs, or IPs in different subnets, I should say, and then I'm able to go through and do routes. It's been a solid 20 years since I thought about the OSI model, and now I'm really struggling, and kind of embarrassed that I can't immediately rattle it off like I used to be able to, with all of the certifications that I used to have when I was an administrator. I'm gonna get my admin card revoked. TCP and UDP are layer four. Yeah, layer four. Thank you, our hope nine. Yes, the next stream is at 1400 Eastern, which is the RHEL Presents stream, I believe. I don't know what they're talking about today, but it's always interesting to see. And in case you didn't see, I think RHEL 9 is now available in beta. So lots of interesting stuff there. So: MetalLB doesn't work at the layer of TCP and UDP?
So yes, what it's referring to there is that it's not doing, like, a lot of times we'll hear about layer 7 load balancers or something like that, where the load balancer is able to look at what's happening at the application level. You know, hey, is there something available, is there something at the other end of this endpoint that I can check and verify before I send traffic to it? Layer 2 would basically be: is there something there? Yes, I'm sending traffic to it. That's kind of the thought process behind it. So yes, it does work, in that when you create that load balancer service, it has an IP associated with it; I can then, from a completely different IP and a completely different subnet, all of that other stuff, access that, and it'll get routed through the network appropriately. That's really what it's referring to. And then BGP mode is effectively using the upstream, or next hop, router as the mechanism for balancing traffic across the available connections. So, Andrew's a little rusty here, as is evident to anybody who actually knows what I'm talking about, but what that means, or my understanding, is that BGP is not, for example, session aware or load aware. So when we're talking about the fancy L4, L7 load balancers from partners like Citrix and F5 and so on and so forth, there's a whole bunch of them, they can do things like: hey, I know that I sent the last seven sessions to this endpoint, and I see that there's 40 megabits, or gigabits, of throughput going to it, whereas this other endpoint has two sessions and only 10 gigabits of throughput going to it; I'm gonna start favoring new sessions connecting over to that second endpoint. I don't think that's available with MetalLB. Definitely not in layer 2 mode; I don't believe it's available in BGP mode either, but I would have to talk with the networking experts to really figure that out. Wolfpack Microsoft cluster? I don't know what a Wolfpack Microsoft cluster is. I'm from Raleigh; Wolfpack to me means NC State. We're simple folks here. Yeah, I don't think I've ever heard of that, a Wolfpack Microsoft cluster. Although Microsoft did have a load balancing mechanism at one point; now I can't remember the name of it. That might be what you're, I think that's what you're probably talking about, or I hope not. And I can't think of the name of it. It was a special networking mode that was supported specifically by Windows servers. Yeah. It predates, you know, Windows Server Failover Clustering or anything like that, which is not really a load balancing mechanism; Windows Server Failover Clustering is literally a failover cluster. Yeah, back in 2000; that takes me back. Let's see if I can, I'm gonna have to look that up while you're continuing to type away there, Johnny. Yeah, I've got my YAML all jacked up somehow, and I'm just not seeing it, so I'm trying to figure this out. I probably have a semicolon where I should have a colon, or I don't have my spacing correct somewhere. Let's see. Network Load Balancing is what Microsoft used to call it, NLB. And I remember back in the Server 2003, 2008 days, at least all the Windows admins I worked with always discouraged it; I don't know why. Compared to DEC load sharing. Man, now you're really going back, our hope nine. So yeah, we do have a mechanism to reset pull secrets, because it happens surprisingly frequently with the streams and everything, with all the stuff that we do.
So there is a way that we can reset it. As a customer, as somebody who's a user of OpenShift, you can reset your pull secret by creating a support ticket. Basically, open a support case requesting that it be reset, for whatever reason, and they'll go through and take care of that on the backend. So it is possible. I wish, and I don't understand, I've actually asked and have not gotten what I would consider a satisfactory answer, but I wish I understood why we can't make that mechanism easier. Why can't I go into my account on access.redhat.com or console.redhat.com and say, reset pull secret? I don't know the answer to that, but maybe we can find out someday. So, Johnny, I don't wanna put you under the gun or under pressure anymore. I'm failing visibly. Yeah, I think we trust that it works. I think the important thing here, and really the takeaway, is what you showed us in the install config YAML, which is providing those endpoints and all of that other config for the pre-created, pre-configured VPCs and all that other stuff. That's the magic that happens here: I'm not in a regular AWS region, I'm in a region that has specific restrictions around it. Which also applies if your account is severely restricted, which is, at least in my experience, fairly normal, right? It's not uncommon for organizations to, I don't know why specifically, but VPC creation often gets restricted. So with that, any last words, Johnny? No, just to add on really quick: I will add the links to the pages for Red Hat for Gov, because they've done some excellent work there across the different environments, Azure and VMware and AWS, right? You can take that and actually apply it to your environment and have a cluster in some of these really disconnected and restricted environments. So I'll share all the links for everything that we talked about today. Yeah, and not just NAPS, but there's a whole sub-community, if you will, inside of Red Hat of folks who focus on disconnected clusters. So if you have any questions around that, if you have any issues, be sure to reach out to us. We're happy to make those connections and get those questions answered for you. So, one last reminder: no stream next week. Next week is a holiday here in the US, so I hope everybody has, if you're going anywhere, safe travels; if you're not, enjoy your time with family. For the rest of the world, we will see you in two weeks, where we will be talking about, oh, what did I say it was before? Sandbox containers. So, sandbox containers, which is better known as Kata Containers; it's a type of virtualization for pods, for containers. I'm looking forward to talking with Adel, the product manager for that, to find out the details, the use cases, everything that's going on there. And of course, we'll dig into some demos there as well. So have a great week, have a great holiday weekend for those of us in the US. Thank you so much to everybody who joined today, and we'll see you on the next stream.