So, everything I'm going to show you in the first part of this, as I run through it, you can find and execute for yourself from this GitHub repository right here. This is in my personal GitHub space. I have several tutorials that I've worked up and a few more that are coming. This one right here that I have pinned at the top right is an installation for a single node cluster. Now, what I'm going to show you here is going to be running on an Intel NUC8i3BEK. It's got 32 gigabytes of RAM and a one terabyte M.2 SSD in it. It's a dual core, which means four vCPUs. So this little guy is not a high-powered piece of equipment by any means, and they're actually pretty affordable. This one, as computing goes, is getting a little long in the tooth. The newer NUC10i3FNK will actually let you put 64 gigabytes of RAM in it and has faster CPUs. And if you really want to spend a little bit more, there's the i7. I should have brought one of them down here so I could hold it up to the camera, if I'd been thinking about it, because they truly are tiny little machines that you can do a ton with. So when you get to this point, you can either navigate the documentation through the READMEs, or I have it up on GitHub Pages. The link is hanging off of Michael Keen's site, where he's hosting the documentation for us, but I will also post it in our chat right now. Oh, there we go. Here's your look, Charles. Yeah, there you go: Bruce is holding one up for you, if you can see it on the image there. They are tiny little things, and I'm ashamed to admit it, and it's probably embarrassing to my wife, but I actually travel with four of those when we go on vacation, because I like to take my OpenShift cluster with me. It keeps me happy. So without further ado, let's deploy an OpenShift cluster. I'm going to move this link back over here out of the way, and minimize this browser, because we won't need it for a while. We are going to be working from the command line for a bit. OK, so if you look at the GitHub Pages, the first section is installing and setting up your host. Now, in my lab I'm actually running a PXE environment, with iPXE, that I boot and install my machines from, so I'm skipping that part for you. I have a brand new installation here; this particular one is CentOS 8, but this tutorial should work almost out of the box, without modification, for CentOS Stream or Fedora. It does need to be an RPM-based distribution if you're following this tutorial, because I make some assumptions about the commands that are available for managing the machine. So the first thing I'm going to do is install my virtualization environment. And you'll see this is not going to do anything, because my PXE boot already knew that this was going to be a KVM host, so it did that for me. The next thing I'm going to do is install all of the tooling that we're going to need for this lab. I'm going to install wget, and git, because we've got to get the code from somewhere. I'm installing net-tools because I'm still kind of an old-fashioned Unix sysadmin and I just can't let go of ifconfig and netstat; net-tools provides that crutch for me. We're going to stand up a DNS server, so I'm installing bind and bind-utils. I'm installing bash-completion, rsync, and the guestfs tools, because we are running libvirt and KVM here.
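Assembled from the narration, that first round of host setup looks roughly like this. Package names are approximate (for example, the guestfs tooling is packaged as libguestfs-tools on CentOS 8), so check the repository's docs for the exact list:

    # enable the virtualization module (a no-op here, since the PXE build already did it)
    dnf -y module install virt

    # tooling for the lab
    dnf -y install wget git net-tools bind bind-utils bash-completion rsync libguestfs-tools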
"Can we get a bigger font?" Yeah, yeah, OK, let me adjust it a bit. How's that, Bruce? "Three times bigger." Is that readable now? "No, that's good, I think." OK. One more thing: you can't see the chat, right? "No, I can't, not while I'm in this form here." So, bigger. How's that? Eric, give me a thumbs up if that's good. OK, Eric says bigger, still bigger. Oh, because you guys aren't seeing it full screen. "No, we just get part of it; we've got the talking heads below, taking up seventy-five percent of what is interesting." OK, I don't see a quick way for us to change that here. "They say it's readable; it takes a minute to refresh." Let me see if I can hide the video. "You could take your faces off and that might help." Did that hide me, or did it just leave a box where I used to be? "I think it left a box where you used to be." Then I might as well keep your faces on. "I would still go one more font size bigger if you could." Sure, I can crank this. Tell you what I will do: I'll leave some of the terminals a little smaller, because they're just going to be scrolling logs, but the one that we work in, I'll do this. Hopefully you guys can see it; give me a thumbs up if this is good, folks. "Yes." All right, we got there. Thanks, guys. OK, so we're installing epel-release, because we're going to need some of the libraries from there; libvirt-devel; the httpd-tools, which give us some of the tooling that my scripts use; and finally, Nginx. And all Nginx is going to do here is host the ignition configs for a newly booted machine to grab its ignition from. So I'm going to go ahead and let those install, and I'm going to make sure that libvirtd is running and enabled. And now, because I just like to control things, I'm going to tell libvirt where to store its virtual machines when I create them. So I'm deleting the default pool if it exists; you can see here that in this case there was no default pool. Then I'm going to create the default pool and point it at /VirtualMachines, so all of the VMs that I build with libvirt go into that location. Now, I'm going to start up Nginx, because we're going to need it, and I'm going to create the directory under Nginx's default serving directory where we're going to host our ignition config files. And finally, I'm going to configure the firewall. I'm enabling HTTP and HTTPS through the firewall, I'm enabling DNS because we are running BIND, and the rest of the firewall I will leave in place. The next step is that we're going to need an SSH key so we can get to our OpenShift cluster if we need to directly connect to any of the nodes, so I'm going to create myself an ed25519 key pair. The next part of the tutorial I'm not going to actually execute; I'll put it on the screen here for just a minute so you can see it. This is creating the network bridge, and what I've included for you here is the command-line way, using nmcli, to create your network bridge. Now, there are graphical ways to do this as well; you can even do it during the install if you want. The network bridge was actually already created for this environment.
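For those following along, the nmcli version from the tutorial is roughly this shape; the device names and addresses here are illustrative, so adapt them to your own NIC and network:

    # create the bridge and give it the host's static address
    nmcli connection add type bridge ifname br0 con-name br0 \
      ipv4.method manual ipv4.addresses 10.11.11.206/24 \
      ipv4.gateway 10.11.11.1 ipv4.dns 10.11.11.206

    # enslave the physical NIC (eno1 here) to the bridge
    nmcli connection add type bridge-slave ifname eno1 master br0

    # cut over from the NIC's old connection to the bridge
    nmcli connection down eno1 && nmcli connection up br0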
You can see here, this is my bridge, br0. It was created by the PXE boot, so the networking was already configured for me when this host was installed just a little bit ago. But I did include in the tutorial everything you need to set up bridge networking on your own. So with that out of the way, we're going to let this guy apply any updates and then reboot. This will take just a little bit, and while it's doing its thing, getting the operating system all fresh and ready to go, I'll bring this back over here for you and take you on a little walkthrough of what we're going to do next. The next thing we're going to do is clone the Git repository where this tutorial resides. I've already provided several utility scripts that are going to make the work you're going to do here much smoother. I'll take you through what the scripts do, in case there's anything you want to customize, or take apart and reuse for something else. There are several environment variables that you're going to set; I've provided out-of-the-box values in the set-SNC-env script, but you will probably want to modify them for your own environment. The 10.11.11.0 network is what I'm currently using in my home lab. We'll set a domain that will be our DNS domain for this lab. The SNC host is going to be this server that is rebooting right now. The name server is actually going to be the same host, but by using variables I've made it possible for you to use different hosts: you can create your own name server host if you want to, you can have your own bastion server, you can run the single node cluster on its own machine. So by varying these variables, you control how and where things are installed in your single node cluster. And while I'm at it, while we're waiting for that little guy to come back up, I'll also say: when you graduate from the single node cluster but want to keep using the same type of environment, I have a similar tutorial that takes you through building a full productionized six-node cluster, with three master nodes and three worker nodes. All right, our host is back up and has a fresh, updated install, so let's go ahead and continue with our single node cluster build. The next thing I'm going to do is create a place for all of this code to live, and clone my repository. OK, so I've got a clone now of the repo that I've been showing you. My bin directory already exists, because my PXE boot created that, so I'm going to copy all of the utility scripts from the bin directory in this repository into my user's home directory (right now I'm running as root), and then make them executable. So here's what we just copied over. There's a utility script to deploy our single node cluster. There's a utility script that yanks the bootstrap node out when the bootstrap is completed, leaving just our single node running. There's a script for setting up the environment, which just sets all of those environment variables for us; that's what I was showing you over on the other screen, and as you can see, it has entries that are biased toward this particular setup. In fact, I believe they're biased toward this particular machine in my lab: indeed, you can see that I'm on 10.11.11.206, so this script conveniently was already set up to run on this particular machine. The last couple of things: there is a script to set up the DNS for you,
and there is a script to destroy your single node cluster. The undeploy-SNC script will completely remove your single node cluster, and when you rerun the setup-DNS script, that puts your environment back to a point where you can redeploy the cluster. So you can also clean this whole thing up and then run it again, should you be so inclined. Let me make sure I made all of those executable. OK, so the next step is to modify this script here to ensure that everything is set up for your particular network. I'm not going to change anything here, I believe, because I'm going to use the domain snc.test; .206 is the host we're currently on; I am using a /24 network, so I don't need to change either of those; this is the gateway of the router that this particular node is talking to; and I'm going to use .149 and .150 for my bootstrap and my master nodes. The other thing I have here is this OKD registry variable, which is for installing stable releases of OKD. I'll take a quick detour and show you that. If you go to this URL, it will give you the status of current OKD releases; I'll put it in the chat for you. There you go. So if you go there, you will see the current state of the given OKD releases. The very top of the list is the stable channel, right? And those live on quay.io under openshift/okd. If I'm going to build one of these, which is what we're going to do today, I would build from that channel. But this will also build nightlies. If I wanted to build one of the nightlies, let's say we wanted to be brave and build a 4.8 — which we're not going to do today, because I'm not sure I wouldn't have to make changes, since 4.8 gives us the ability to do the bootstrap in place. You can see we're starting to get some 4.8 builds that are green and stable now, so it won't be too long before we'll be putting out a video on how to install from that. But if I want to build from a 4.7 nightly, which I have actually done recently while tracking particular bugs as we get them wiped out, then you come down here to the nightlies; or, under stable, you can even build older versions, 4.6 or 4.5, OK? So today we're going to build a stable 4.7 release, and it will come from quay.io. If I did want to build a nightly, then I would change, excuse me, this OKD registry environment variable to registry.ci.openshift.org/origin/release. That's really the only difference there. And I'm realizing that's probably a really small font for you guys, so there, I'll crank that up a little bit. All right, so that's where you go to find the current releases and what their disposition is. Since I'm not going to make any changes to this, I'm going to get out of that file and we'll continue on. The next thing to do is to add this script to my .bashrc, so that when I log in to this machine it sets that environment for me, and my single node cluster will always have the environment variables I need it to have. And the last thing: I'm just going to go ahead and source it from the command line, so that if I show you my environment now, you can see that the variables from the script are populated. Same thing if I log out and log back in: because it's in my .bashrc, and bash is the shell I'm running, my environment is set up and ready to go.
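For reference, the environment script amounts to a handful of exports along these lines. The variable names and values here are illustrative, not the repo's exact ones, so check the script itself:

    # illustrative shape of the set-SNC-env script
    export SNC_DOMAIN=snc.test
    export SNC_HOST_IP=10.11.11.206        # this KVM host; also the name server and bastion here
    export SNC_NAMESERVER=${SNC_HOST_IP}
    export SNC_GATEWAY=10.11.11.1
    export SNC_BOOTSTRAP_IP=10.11.11.149
    export SNC_MASTER_IP=10.11.11.150
    # stable releases come from quay.io; for nightlies, swap in
    # registry.ci.openshift.org/origin/release
    export OKD_REGISTRY=quay.io/openshift/okd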
Now for the DNS configuration; I'm going to run it real quick and then show you what it did. The first thing I'm going to do is enable named so that it can start. And then... actually, I'm going to run it first, and then I'll show you the magic behind the covers. So I run my setup-DNS script, and what it did, leveraging those environment variables, is set up a name server on this host that serves the snc.test domain. I'll give you a quick view of the named.conf: it's pretty much a generic named.conf that it created, but you can see it's populated with the snc.test zone, it knows the reverse lookup zone for the pointer records for this environment, and there's a file that contains my PTR records. It did create an ACL that includes anything on this /24, under my 10.11.11.0 network, plus localhost. Now, if you're also running Podman or Docker on this host, you will need to add the IP range for that network as well, so that any containers you run can do name resolution; that has bitten me before when I forgot to do it. It's listening on port 53 on both loopback and my primary NIC, and the rest of this is pretty much plain-vanilla named setup. It created two zone files: the PTR records for my SNC host, which is this guy we're sitting on right now, plus the bootstrap and the master node; and it created the A records for all of the hosts as well. Now, I am going to spend just a couple of minutes talking about a couple of things here. One: I'm not setting up a load balancer, because this is a single node cluster, right? I don't need to balance across different ingress nodes, and I don't have infrastructure nodes. In my other tutorial, you would set up a load balancer to manage your traffic, but here it was complexity we didn't need. However, during the bootstrap, you have to be able to talk to either the bootstrap node or the master node, and both of them have to resolve. So what I'm doing here is a little DNS trick: by giving the same name two different IP addresses, DNS round-robin gives me a poor man's load balancer. The two records that have the "remove after bootstrap" annotation on them will be pulled out when we destroy the bootstrap node, leaving just the master node in place; and the master node has multiple records. We have the actual host record for the master node, we have the etcd node, and then we have a wildcard for our cluster, the API endpoint for our cluster, and the internal API endpoint for the cluster. All of those, you see, resolve to the same address; they're all the master node. These are the same A records you would create, minus the duplicates for the bootstrap, if you were building a full cluster; and the last piece is the etcd SRV records that are in place. So all of that was done for us by this script. I'll give you a quick look. All right, so it's pretty simple: it's sed magic. In the repository I've got stub files for named.conf, the zone files, and named.conf.local, and those stub files contain placeholder records that get substituted, with some sed magic, when the files are put in place. And the last thing it does is restart named so that you get a clean start. One word of caution: don't do this on a machine where you've already configured named, because it will destroy your existing named configuration. This is set up to be a clean install, where you're not using this particular machine for anything else except a single node cluster; if that's the case, this is safe. But if you're already running named here for something else, this script will clobber that configuration, so check before you run it.
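Stepping back to the records themselves: the zone data those stubs expand to ends up shaped roughly like this. Host names and the annotation format are illustrative; the real records come from the repo's stub files:

    ; illustrative zone data for snc.test
    okd4-snc-node      IN A   10.11.11.206   ; this KVM host
    okd4-snc-bootstrap IN A   10.11.11.149   ; remove after bootstrap
    okd4-snc-master    IN A   10.11.11.150
    etcd-0             IN A   10.11.11.150
    api                IN A   10.11.11.149   ; remove after bootstrap
    api                IN A   10.11.11.150   ; round-robin pair = poor man's load balancer
    api-int            IN A   10.11.11.150
    *.apps             IN A   10.11.11.150
    _etcd-server-ssl._tcp IN SRV 0 10 2380 etcd-0.snc.test.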
All right, so named should be set up for us. We'll do a reverse DNS lookup of this guy here, and... oh, it didn't. Oh yes, I've got to do one other thing that is unique to my environment: because I built this host the way I did and didn't create the bridge by hand, I need to run an nmcli command to make it use this new DNS server, because it's still trying to resolve from the DNS of my lab. I've got this conveniently set aside. What I'm doing is an nmcli modify against my bridge connection, setting its name server to the environment variable that is set. This is why I source that env script in my .bashrc: I don't have to remember numbers and things, and I know the variables are set from a consistent place. So it's going to put the name server and the domain in there, and if I now restart NetworkManager... now name resolution is working. So there we got a reverse DNS lookup. OK, so that's one of the big prereqs out of the way: we have DNS up and running. Now, one other thing, and this is for MacBook users. There's probably a similar way you can do this on a Windows system, and on a Linux system you can just point your box at the DNS server you just set up. But if I become root on my MacBook for just a moment: there's a directory called /etc/resolver where you can put entries for other domains. You can see I've got an entry for my home lab right now, so that any name I try to look up that ends in .clg.lab goes to the name server at 10.11.11.10; and anything that is snc.test goes to the host I'm showing you in the other tab. This way, from my workstation, I can resolve the cluster that I'm getting ready to set up, but I don't have to do any other fancy tricks with my local DNS. That also works if you're running CodeReady Containers on your local machine. So, hopeful tip there.
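Both of those fixes are small. Roughly (connection name, variable name, and addresses illustrative):

    # on the KVM host: point the bridge connection at the new name server
    nmcli connection modify br0 ipv4.dns "${SNC_HOST_IP}" ipv4.dns-search snc.test
    systemctl restart NetworkManager

    # on a macOS workstation: a per-domain resolver entry
    sudo mkdir -p /etc/resolver
    echo "nameserver 10.11.11.206" | sudo tee /etc/resolver/snc.test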
All right, the next thing is to prepare to install our cluster, and the utility scripts I've put in place are hopefully going to make this pretty straightforward for you. First, let me see if I can get this to scroll to the top... there we go, so you can see it. I'm going to set that OKD release variable that I was showing you previously. This is the one we're going to build from; no, not the nightly, we're going to do the March 7th release right there, the 090821 build. So I set that environment variable, and then we have to prepare our installation file, the install-config manifest. I've got a stub of that for you here as well, so you can see what's going to happen. Your base domain, again with some sed magic, is going to get populated. For the cluster network, these defaults should work for you out of the box; if your home network is on a 10.128 or a 172.30, you might need to change the CIDR for the cluster network or the service network, but since most people are sitting on a 10.x or a 192.168, these should work for most of you. In the compute section, you want zero replicas under the worker section, and this is what makes it a single node cluster: I'm telling it to do one master. Because we're doing a bare-metal install, you put none under the platform, since we're not doing AWS or vSphere or Azure; if you're doing this on one of those platforms, then you can take advantage of their capabilities, and you would put your settings here. This pull secret is just a fake pull secret; that is base64-encoded "foobar" right there. And this placeholder is going to get replaced with the SSH key that we created earlier. So what we're going to do is copy that stub into our working directory, which is down here; now you can see I've got that install-config in my working directory. I'm going to run this sed command against it to put the domain in place, like that. Then I'm going to populate an environment variable with the SSH key we created earlier — the public key; don't put your private key in here. And the reason I did that is that now I can sed it into the file as well. So now, if we look at what we created — let me clean this up a bit; there you go — now we have an install-config that is ready to build a single node cluster. It's a very short, very simple install-config.
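Assembled from the pieces above, the finished install-config.yaml is roughly this shape. The cluster name and key are shortened here, and the pull secret really is just base64 "foobar" (Zm9vYmFy), since OKD doesn't need a Red Hat pull secret:

    apiVersion: v1
    baseDomain: snc.test
    metadata:
      name: okd4-snc          # cluster name illustrative
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      serviceNetwork:
      - 172.30.0.0/16
    compute:
    - name: worker
      replicas: 0             # zero workers...
    controlPlane:
      name: master
      replicas: 1             # ...and one master = single node cluster
    platform:
      none: {}                # bare metal / user-provisioned
    pullSecret: '{"auths": {"fake": {"auth": "Zm9vYmFy"}}}'
    sshKey: ssh-ed25519 AAAA... root@okd4-snc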
I want to make sure I'm in my working directory, which I am, and I'm going to install the single node cluster. Now, before I kick this off... actually, I'm going to go ahead and kick it off, and then while the install is going, I'll pull up the code and show you what's actually going on under the covers, because I've abstracted away building the virtual machines, the configuration, and a whole bunch of stuff. So I'm going to show you the code I wrote that is doing this in my own particular opinionated way. Before I do that, I'm going to open up another terminal. I'm not going to make the font quite as big in this one, because it's just going to be for letting logs scroll, to give us some eye candy to look at. I will show you: there are no virtual machines installed currently. I'm going to kick this off... except I think I skipped an important step. I did. I don't have oc installed. All right, so we'll pretend that didn't just happen, and I'm going to come over here and grab the oc command. Where we get oc from: the quickest way, when you don't have anything installed on this machine yet, is to go to the OKD repo. Let me crank the font up here a bit. Under the releases page, you'll see all of the recent releases, with the latest one always on top, and under each one you'll find downloads for the binaries. Now, at this stage — and I say this in the tutorial documentation — it is not critical that you have the latest version, or the exact version of oc that you're going to be installing; it just needs to be a recent 4.x version of oc, because the script that we run to build this is actually going to retrieve the real version of oc. All right, so I just pulled that from the page I showed you, with a wget command. I'm going to uncompress it, move oc and kubectl to my binary directory, and then clean up the rest of the files. So now I have an oc binary. Before I run this again, I'm going to make sure that my DNS configuration is still good: yes, I can resolve quay.io. So now we should be ready to go. All right, so: no virtual machines installed; and let me make sure I didn't accidentally create any resources I need to clean up because of that false start. All right. So now, where it's paused right here on this release extract, it's actually pulling the correct versions of oc and openshift-install. It's creating all of the manifests. And now it's pulling down the Fedora CoreOS images that the machines need for their initial boot. Once the bootstrapping begins, it's actually going to replace the operating system with the version of Fedora CoreOS that is bundled with this version of OpenShift, so that everything is controlled and managed by the OpenShift cluster itself. And that's actually one of the things I didn't mention this morning that is somewhat unique about OpenShift versus other Kubernetes distributions: OpenShift actually manages its own operating system that sits underneath it. All right, so now, if I pop back over here, we do in fact have a couple of virtual machines, so let's watch the bootstrap. Bootstrap is doing its initial install of Fedora CoreOS right now. "Sorry to interrupt: what FCOS version did you use?" I'll show you here in a minute; it's actually embedded in, not the binary, but the script that I ran, and I'll show you where, because you can override that. Again, it doesn't have to be the specific version of Fedora CoreOS that is needed for that install of OpenShift; it just needs to be a fairly recent one. In fact, it can even be newer, because when the bootstrap process runs, it's going to replace the whole operating system anyway. "But actually, the March 1 build doesn't work; it needs to be older than that." Yes, yes, I did not use that one. "OK, good." All right. Now, because my script disconnected from those VMs when it kicked off the initial install, they did not actually reboot themselves; they just shut themselves down. So the next thing I need to do is boot them up, and when I do, they are going to begin the single node cluster install. Let's see, what time is it? We're 45 minutes in. OK, so I'm going to go ahead and kick that off and let it scroll, and while it's doing its thing, I'll take you guys on a tour of the script code, all right? And at this point, I don't do anything else special to make the cluster install; I just start the nodes. I'll shorten this window so that I can come over here and let you see the console when this guy starts up. So over here on the left, you'll see the bootstrap node start up. I'll start the master node, and I'm actually going to switch and let you see the master, because I want to point out something that it's going to be doing. All right, you see it actually didn't really do anything yet: it's pending on this start job here. It is waiting to get its ignition config from the bootstrap node. So while that is going on, I'm going to leave it running there and get another window, and in this window, I'm going to start the monitoring for the bootstrap. Still waiting for the Kubernetes API. And in this other window, I'm going to let you see what's going on under the covers: we're going to run journalctl against that bootstrap node. All right, you see the rpm-ostree stuff that's going on? What it's doing here is downloading the correct version of Fedora CoreOS that this cluster needs to run, and applying it. And there, it just finished. You saw this guy kicked me out, because the bootstrap is now rebooting. So I'll go ahead and pin that there, separate this one out, shrink it down a bit. All right, now we're pulling a bunch of container images; everything in here is containers. We're rendering the manifests, and in just a little bit, this API is going to come up.
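For reference, the two monitoring commands running in these windows are roughly these. The working directory and bootstrap host name are assumptions based on this tutorial's layout, so adjust for yours:

    # watch for the bootstrap phase to finish
    openshift-install --dir=okd4-install wait-for bootstrap-complete --log-level=debug

    # tail the bootstrap node's journal over SSH as the core user
    ssh core@okd4-snc-bootstrap.snc.test \
      journalctl -b -f -u release-image.service -u bootkube.service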
And here, you will see the master node finally get hold of its ignition file and start its install. See there: starting cluster bootstrap. And there you go. The master node now has its ignition, and now it is booting itself up. It's going to do a very similar thing: pull its CoreOS image, apply it, and then start its install. All right, now, if you watch these logs like this, don't panic when you see errors flying by. I saw all this "failed to fetch, failed to fetch, failed to fetch": this thing is doing a whole bunch of stuff in parallel, and it waits until states come into being so that it can act on them. So during this time, you will see stuff like this. It's only if you don't see progress down here — you can see the API is up and running now — that there might be an issue. Now, I'm going to stop tailing this log for just a minute, because we do have to do something here. Because this is a single node cluster, we need to tell etcd that it's OK that it's all by itself and doesn't have two partners with it; otherwise, etcd will get upset. So I'm going to export my KUBECONFIG variable to point at my install directory, so that I can issue oc commands, and I'm going to see if the etcd operator object exists yet. Since I was talking to you guys, yes, it does; if we had done this 40 seconds ago, you would have seen an error. I call that out in my tutorial, because you do need to wait a little bit for the etcd object to show up. Now that it's there, we're going to patch its specification. Let me show you what we're doing here: we're going to patch its spec so that it will support a single node. This is what its specification looks like right now; I'm going to apply this patch to it, which sets this unsupportedConfigOverrides. And now, if I show it to you again, you can see that under the spec there's now an unsupportedConfigOverrides where the non-HA override is set to true. That is one thing that changes in 4.8, where single node will be an officially supported configuration: in 4.8, we won't have to do that anymore. Right now, we have to, because it's still not a fully supported configuration. But it does work. So now we're waiting for the bootstrapping to complete. While that is running, I'm going to pull up the code from that script we were looking at and show you what's going on in there. Let's see... you know what, that is not going to be readable; let me just do it like this, through the terminal, and that'll be more readable for you. So what this script does is set up the infrastructure for you. You can see here I'm setting some variables that you can override: I'm setting the CPUs for the cluster node to four, the memory to 16 gigabytes, and the disk size that it creates to 200 gigabytes. The Fedora CoreOS version that I'm going to do my initial boot from is this one right here — that was your question previously, Bruce — and I'm pulling from the stable stream of Fedora CoreOS. So you can experiment with different versions of Fedora CoreOS, or with the testing or next streams if you want to. This will also take flags for CPU, memory, and disk as well. All right. There are several functions that I've defined that I'm going to skip past and come back to, down to the actual start of the script. So one thing I'm doing — and this is a cheesy little random generator here, so I apologize to everybody up front — is generating MAC addresses for my bootstrap and my master node. So I'm generating those MAC addresses.
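As a sketch of the idea (not the script's exact code): QEMU/KVM guests conventionally use the 52:54:00 prefix, and the low three octets can simply be randomized:

    # one way to generate a random KVM guest MAC address
    printf '52:54:00:%02x:%02x:%02x\n' \
      $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))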
And the reason I'm doing that really has nothing to do with OpenShift; it's the way that I wanted to automate the creation of these virtual machines. I wanted to give them fixed IP addresses, and knowing the MAC address enables me to have predictable names for the network devices. I'll show you that in a minute, because it's something I also do in my larger clusters in my home lab, where I'm simulating running bare-metal clusters. So then I use dig to get the IP addresses that are already in DNS for the bootstrap and the master node, and I populate variables with those. Here's where I'm pulling the OKD tooling, the openshift-install binary and the oc command, and putting those into my home bin directory, then getting rid of the temporary folder that I created. Here I'm grabbing a tool called FCCT, the Fedora CoreOS Config Transpiler, which allows you to manipulate ignition files for machines; I'll show you the configuration further up in this script for what I'm using FCCT for. It enables you, especially if you're doing bare-metal cluster work, to configure what the operating system is going to look like. Let's say you're deploying some nodes and you want the SSD devices in them preserved for Rook Ceph: this is a mechanism for doing that, or for specifying your file system configuration, or other resources like GPUs. It's a good way to inject things into the ignition. So then this is just good old-fashioned OpenShift doing its thing, creating the manifests, and I'm creating the ignition configs. Here I'm calling one of my functions, the config-OKD-node function, that I'll pop up to the top of the script and show you; what it's doing is using that FCCT command to create the custom ignition files for these machines. And then I'm putting those in the serving directory of my Nginx server; that's what's going on here with this copy and this chmod. I'm putting those files over there so that they can be retrieved at the very first boot of each machine, before it starts building the cluster. The next part, from this syslinux bit on down, is actually preparing an ISO for each machine to boot from. Again, because I didn't want to require a PXE environment, and I'm not using vSphere — this is all bare metal — I'm creating an ISO for the bootstrap node and one for the master node that carry the specific configuration they need just to come up and start being part of the cluster. And the last thing here, after creating those ISOs, is to create the virtual machines themselves, which is done with virt-install, passing the appropriate parameters, including the particular ISO file that I just created. So the last thing I'm going to show you in here — let's check on our bootstrap; bootstrap is still running; make sure nothing's blown up over here; everything appears to be healthy — OK, this function, config-OKD-node. What the FCCT tool works with is YAML files describing an ignition config, and I'm setting this up as a merge type, so it's going to merge the following configuration into the ignition config that then becomes the initial ignition config for this machine to boot off of. This is why I had to know the MAC address: I am explicitly hard-coding the name of that NIC to be nic0, so that it's always predictable. And then I'm configuring NetworkManager with the network information for that machine: its IP address, the netmask for the network it's on, its gateway, the name server and the domain, and finally its hostname.
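The shape of such an FCCT input is roughly the following. This is a minimal sketch of the merge-plus-network-config idea, not the repo's actual file: the keyfile contents are abbreviated, and the piece that renames the NIC to nic0 by matching the generated MAC is omitted:

    # illustrative FCC (Fedora CoreOS Config) consumed by fcct
    variant: fcos
    version: 1.1.0
    ignition:
      config:
        merge:
          - local: master.ign          # the installer-generated ignition
    storage:
      files:
        - path: /etc/NetworkManager/system-connections/nic0.nmconnection
          mode: 0600
          contents:
            inline: |
              [connection]
              id=nic0
              type=ethernet
              interface-name=nic0
              [ipv4]
              method=manual
              addresses=10.11.11.150/24
              gateway=10.11.11.1
              dns=10.11.11.206
              dns-search=snc.test
        - path: /etc/hostname
          mode: 0644
          contents:
            inline: okd4-snc-master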
So it's fairly short, but it gives you a clue into the power of ignition configs and what you can do with them: I am prescriptively setting up the network configuration for this guy before he ever boots. And that's the deploy-OKD-SNC script; it does all of that for you. And it looks like the bootstrap is now done. If we come down here, there we go: our bootstrap is complete. So the next step, because our bootstrap is complete, is to get the bootstrap node out of the way, all right? And that one is a very, very simple script. It uses virsh commands to remove the node: shut it down hard, undefine it, remove its storage pool, and then delete the files under /VirtualMachines. Also, remember those DNS records that had the little "remove after bootstrap" tag? I'm using a cheesy little grep stunt to yank those out of my DNS configuration and then restarting named, so that everything is cleaned up. So now, at this point, we can watch the rest of the installation complete. Oh, I did not want to start Apple Music; sometimes this thing is annoying. So that command was watching for the bootstrap to complete; there's another command that is very similar, also in the tutorial, for watching the rest of the install, and this will take a little while as it completes everything. There are now a couple more things that we need to do because this is a single node cluster. First, let's just look and see how everything is looking here: Running, Running, Completed. Running and Completed are good; CrashLoopBackOff is not good, but I don't see any of those. So everything appears to be fat and happy right now. A couple of things that we need to do: the ingress operator is going to be unhappy, because it expects to have two replicas, and the authentication operator is also going to be unhappy, because it is expecting three etcd nodes. So we have to tell those two that we're running in a single node configuration. So first, let's look at the ingress operator: let's get the ingress controller named "default" in the openshift-ingress-operator namespace. Let me make sure it exists... yep, it does. Let's take a quick look at it. Hmm, I probably did that while the API server was in the middle of doing its thing. That will happen occasionally, because while the install works toward completion, it will stop and start a couple of things, so don't be alarmed if you temporarily lose access to your cluster. If it doesn't come back, then we'll be worried, but it should come back here shortly. Let's just do a quick check, make sure we're still running... yep, although it usually comes back sooner than this. This is a bit of a longer pause than I'm used to. Let's take a look at our master node. Actually, this is a good opportunity to show you: with that key we created, we can SSH in as the core user to one of our nodes. Hmm. This is actually a little bit worrisome; I hope my node didn't freeze. Of course, this ran perfectly smoothly three times this morning. We're still waiting on that... our node is still up and responding to ping, so let's just hope this is a little burp and we don't have some kind of weird lock going on. Oh, there we go... no, that didn't let me in. I'm a little bit disturbed that it did not. Let me make sure I don't see anything super disturbing there, and make sure I didn't inadvertently remove the wrong DNS entry. Nope, my API records all still point at .150. All right: this is an unhappy node.
I'm going to do something: I'm going to reboot it and see if etcd recovers after the reboot, because it is not responding on its port. Actually, before I do that, let's see what it is listening on. Nope, port 6443 is not there; the API server, and presumably etcd underneath it, is not healthy. Let's let it reboot, and while we're doing that, I'll come over here. Hey, Bruce, any questions or anything in the chat? "Well, our single node cluster is unhealthy. Now, Eric had a question earlier: what happens if you execute your script with -h? I guess he's asking if there's a help function." Oh, there is not. These are not production-ready scripts. "Tells you the appropriate sacrifice to make for the demo." Yes, unpredictable behavior. Let's see if this guy is... he's rebooting. "OK, so further to that, he said that he tried -h and didn't get any VMs to come up." OK. It looks like something happened to etcd during this install. So what I'm going to do now is show you how to tear this whole thing down and do it again, because it looks like etcd is not going to start for us. We'll watch this for just a couple of minutes... nothing is standing out when we grab the logs; we'd probably be able to find something eventually. So here we go: this is what it looks like when it goes wrong. It is not recovering. So what I'm going to do is tear this sucker down and show you what it looks like to reset it so you can run it again. One note: the bootstrap machine is already gone, because we destroyed it, so the only machine running right now is that master node. There's another helper script, the undeploy script, that will tear that out now. So now it's completely gone. And the DNS entries: the bootstrap entries are not there anymore, and I need those to be in place before I can start this again. So to run the install again, I run my setup-DNS script — you can see it put my bootstrap entries back — and then I run my deploy again. I need to make sure I'm in the shell that had the OpenShift version set, so I'll just go ahead and re-export that and get some of these extra windows out of the way here. That one was completely hung on the bad SSH, so something really did go bad on that node. So we're going to do this again: we're going to use this same release, March 7th, and we'll start our deploy again. And I will say, in my larger lab environment, I actually have an Nginx server set up with a mirror of these image repositories in it, and that allows me to have much, much faster installs, because it's not having to reach all the way across to quay.io to get the container images; it already has them locally. And that helps if you're experimenting, building and tearing down a lot of clusters, or when something goes wrong like it did today. "OK, so we've got a couple of interesting discussions in the chat. Michael was asking about any thoughts regarding the network bridge part, if you're doing this on a remote machine where you don't have keyboard access. What you've generally done is start out with keyboard access to get something installed, right?" Correct; well, in the tutorial, that's what I suggest. In my home lab, I actually have a PXE setup, and, here, let me kick this off again and then we'll wander over and I'll show you how I do that in my home lab. All right, still waiting for the master to finish... there we go, OK, they're both done. Yeah, let me set this to monitor the install, and I'm going to move these out of the way so that I can apply that etcd patch quickly once this bootstrap completes.
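For reference, the reset sequence the helper scripts wrap is roughly this. Script and VM names here are illustrative; check the repo's bin directory for the real ones:

    # tear down what's left (the bootstrap VM was already destroyed)
    virsh destroy okd4-snc-master        # hard power-off
    virsh undefine okd4-snc-master
    virsh pool-destroy okd4-snc-master && virsh pool-undefine okd4-snc-master
    rm -rf /VirtualMachines/okd4-snc-master

    # put the bootstrap DNS records back, then deploy again
    ./setup-dns.sh
    ./deploy-okd-snc.sh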
And you know, about that weird "the cluster couldn't finish" thing: I have seen that a couple of times in my lab, and I'm wondering if it's a race condition, where etcd somehow goes sideways at the moment I yank the bootstrap node out. It's very rare that I've seen it, and of course it would happen on the day that we're recording a video. I have actually seen it happen with a full cluster build a couple of times, too, but not enough times to warrant actually digging in to see if I can figure out why it's doing that. "But you're also using the OpenShift SDN network; I've had problems with that in the past." Correct, you're right; I'm not using the Kubernetes OVN network. Although that shouldn't have had any impact on etcd. What was weird is that etcd itself actually fell over and wasn't able to recover, and it happened shortly after we yanked the bootstrap node out. All right, let me show you what we were talking about. So, in my environment — I should probably be using Ansible for some of this, but I know shell, so I write shell — when I get a new NUC and I'm adding it to my lab, I have a utility script here, deploy-KVM-host, and all I have to give it is the host name that the new NUC is going to have, the MAC address off the bottom of the NUC (I flip it over and read the MAC address off of it), and how many disks it has in it, because some of the NUCs have just a single M.2 slot and some have two. So what this script does — let me come down past the function — is create a little working directory for some iPXE work, do a dig, just like you saw in the single node cluster script, to get the IP address that I've already pre-populated in DNS, and then create a file that is named after the MAC address, with the colons replaced by dashes and a .ipxe extension. That iPXE file tells the machine to boot from this kernel and this initial RAM disk off of my Nginx server, and it pre-populates the IP information that I want the machine to have. Let me pause for a minute and make sure our API is up... I need to pause real quick and patch etcd. OK, there we go; let me tell etcd it's OK to be a single node cluster. All right. OK, so bootstrapping is still running. And then the script creates a kickstart file for that machine, and the kickstart file has all of the configuration that I want it to have. Here's where I feed in the disk information: it's going to create some of the file systems with prescriptive sizes and then just blow the rest out to whatever the size of the disk is. It sets up the network configuration with the host name and the bridge devices. I think this is what you were asking, Mike. By doing it this way — in fact, most of the NUCs up in my lab, and I'm embarrassed to say I've got 16 of them, have never had a monitor plugged into them. The ones that have, it was probably so that I could update the firmware, because every so often you need to update the firmware.
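An iPXE file of the shape described might look something like this; the kernel arguments, paths, and addresses are illustrative, not the script's literal output:

    #!ipxe
    # served as 1c-69-7a-xx-xx-xx.ipxe from the Nginx server
    kernel http://10.11.11.10/install/vmlinuz initrd=initrd.img \
      inst.ks=http://10.11.11.10/kickstart/kvm-host01.ks \
      ip=10.11.11.50::10.11.11.1:255.255.255.0:kvm-host01.clg.lab:eno1:none \
      nameserver=10.11.11.10
    initrd http://10.11.11.10/install/initrd.img
    boot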
Hey, Neil, I just saw your question. There is a hackified way to do upgrades of a single node cluster, but it doesn't always work. With 4.8, you will be able to upgrade a single node cluster, and it will be more of a supported configuration for updates, but it still, obviously, requires that the whole cluster goes down. So whereas with a full HA cluster, three masters plus workers, you can do upgrades without any downtime, with a single node cluster you incur downtime when you do an upgrade. Yeah, so I'm using kickstart in my lab, and it sets up everything that I want the machine to be doing. Since this one is a KVM host, it's installing the libvirt tooling that I need, and then it reboots the machine; when the machine comes up, it's got its full personality and it's ready to be part of my lab. And I think I've got a section in my tutorial... yeah, for setting up the PXE boot: there's a section in here, and it also talks about the little router that I'm using. Yeah, hey, Charo. All right, so let's see how our bootstrap's doing. "I also had a question about trying this on a Mac, and I told him that I wasn't crazy enough myself, but you're braver than I am." I have not tried this on my Mac. Actually, CodeReady Containers will run on your Mac. And we're waiting on a bug fix for the bare-metal installer — the team will probably drop the new release today — and if that's working, then I'm going to build the 4.7 version of CodeReady Containers for OKD; the version that's currently out there is still 4.6. But actually, coming up soon, because we're having to work on support for Apple Silicon, you will be able to build CodeReady Containers on your MacBook, and once we do that, you can actually use that to build a single node cluster on your MacBook, if you were so inclined. You'd probably need one of the MacBooks with 32 gigs of RAM; I don't know that it would work on 16. Once we're at 4.8 and you don't need the bootstrap node anymore, then it's probably more likely to work. All right, let's see how our cluster is doing here. Look at the things we're creating. There will be errors: if you look at this before bootstrapping is complete, you will see containers in error states. Again, just like when we were watching the logs scroll, most of the time that is OK. It just means that there's some collaborating resource that this particular operator is waiting on that isn't there yet, and so it continues to error until its collaborator is in place, and then it finishes its installation. Are there any other questions? I'm hoping that this time I'll actually get to see a completed install and that etcd doesn't go sideways on us again. Let's take a quick look at the bootstrap node and see what's going on inside of it. All right, things are progressing, so that's all looking good. And one thing, once you get this up: some of the operators from the marketplace might be assuming a fuller cluster. I know that Rook Ceph, out of the box, if you want to run it on a single node, you'll have to adjust the replica count for some of the components, because it will expect at least three nodes for particular things. "CodeReady Containers?" Yes and no, Patrick: CodeReady Containers doesn't configure the node in exactly the same way that this does. But while we're waiting for the bootstrapping to complete — which, actually, might be getting close, so watch here — I'll come back to that in just a minute and show you guys how CodeReady Containers is built. I'm thinking this is the end of the bootstrap process here... yes! All right, so we successfully bootstrapped again. There are our cluster operators. All right, let's see if we're going to get past the trouble spot this time; hopefully it will set that cert... there we go. OK, we made it out of the woods that time. Excellent.
Let's do a quick check and make sure that all of our certificate signing requests got approved and issued. OK, that's good. That's good. All right, progress this time. OK, about the single node cluster from CodeReady Containers: it is actually useful, it gives you some insight into something. CRC is actually doing an IPI libvirt install of your OpenShift cluster, which is pretty cool in and of itself, in a similar fashion to what I'm doing here. In fact, let me see; I was hacking on CodeReady Containers recently, let me go find it here. Yeah, there we go. So there are two projects it's built from: SNC and CRC. Let me crank the font up for you guys; apologies that that makes the screen really crowded. So the punchline is that it is very prescriptive toward building CodeReady Containers: it strips a bunch of things out of the OpenShift cluster, like monitoring, to make it fit in a really compact size — compact for OpenShift, anyway. Let me find the installer bit. So it's writing the domain information and the VM name it's creating into the manifests, and then, where it's calling openshift-install, we're still manipulating the manifests, still manipulating manifests... OK, it creates the ignition files somewhere in here, right here, after creating the ignition configs. And here we go: creating the cluster. So this is using an IPI install, where the installer actually does all of the provisioning for you, not giving you control over the network configuration or things like that. This is more like "I'm deploying in the cloud", on AWS or GCP or VMware or something along those lines. So that's what CodeReady Containers is built from. This big, long snc.sh script here is standing up a single node cluster on libvirt, and then you run another script that's part of this bundle called — where's it at, create disk, where's create disk... right there — createdisk, which creates a qcow2 image of the cluster. It strips a whole bunch of operators out, like I said, to try to get the size down as much as possible. And once it's done all of that, it does a couple of little tweaks to the running virtual machine so that it's compatible with Hyper-V for running on Windows, and some other things. And then, at the very end here, where it's doing this create-tarball step, it creates bundles for Hyper-V, for HyperKit, and for libvirt. For each of those, there are functions further up here: you see it's creating the QEMU image, and from that it creates images for Windows, for Linux, and for macOS, right? And when it's done with that, you have a virtual machine file that is a single node cluster that will run on one of those platforms. CodeReady Containers is then a wrapper around that, bundling the image so that you can start and stop it with the crc binary. And that's why the crc binary is so fat: if you go download it, you'll see that the macOS crc binary is something like two and a quarter gigabytes, I think. Most of that is the virtual machine image, because when it uncompresses, it uncompresses to about a nine or ten gigabyte image. So most of that two and a quarter gigabyte crc binary is your compressed disk image that contains the cluster.
OK, now there are a couple of things we need to do so that our cluster can finish coming up, because, like I was telling you before our previous one went south, authentication is going to be upset, and the ingress controller is going to be upset, because it's not going to be able to start all of the replicas that it wants to start. So let's see if we can successfully do this this time without our cluster going sideways on us. This is where we were before it went sideways: the ingress controller should be named "default". Actually, let's just do it this way; I'll show you that it's named default. It's going to be in the openshift-ingress-operator namespace. There it is; its name is default. So let's take a look at good old default, and, scrolling, scrolling, scrolling: you see it wants to have two replicas. All right, well, we don't have two nodes for it to have replicas on, because — if we were able to dig further into its configuration, which is not shown to us here — it also has anti-affinity policies in place so that it won't run two of its pods on the same node. And because it has that anti-affinity in place, one of its pods is never going to be able to start, and since it wants two and it can't have two, the operator won't report healthy. So what we're going to do is a good old-fashioned patch command. Actually, you could do it the boring way, like this, with oc edit: I could go down here, change that replicas to one, and save it. But if you really want to impress your friends, the oc patch command looks like magic, because it works from the command line without doing anything else, and this is also how you can do this kind of thing with infrastructure as code. So what I'm going to do is a JSON merge patch: I'm saying that under spec, I want replicas to be one, and I'm using a patch type of merge rather than replace, so it merges this into the existing config. And I'm going to patch the ingress controller named default, in the openshift-ingress-operator namespace, with that patch; and there it is. Now it's patched, and if I do this -o yaml again and scroll up, you can see that replicas is now one. OK, well, there's one other operator we need to do this with, and that is the authentication operator; it needs a patch similar to what we did to etcd. I need to tell it that it's going to have an unsupported configuration override of "use unsupported, unsafe, non-HA, non-production, unstable OAuth server". And you can tell that the engineers who put this in here wanted you to know that you were intentionally using an unsupported, unsafe, non-HA, non-production, unstable OAuth server, just in case you weren't quite sure. And there we are: now we've said we want to use the unsupported, unsafe, non-HA, non-production, unstable OAuth server, and hopefully now that I've done that, this message down here — you see, we're at 98%, we're almost there — hopefully now we'll get past it, because the authentication operator should be able to complete its install and update. And we actually don't have to wait for that; the secret here is that we already have a usable cluster at this point. So let's go ahead and go get the authentication set up. OK, you see this okd4-install folder there? That was created by my script, and that's where it put the manifests and things, and where it told the installer to go to get its manifests; so all of this stuff under here was created by the installer. There are the ignition files that were created; you see this guy right here, the worker ignition.
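While we wait, here, collected in one place, are the three single-node patches from this walkthrough. These override names are the documented ones, though the exact invocations in the tutorial's scripts may differ slightly:

    # etcd: allow a one-member cluster (applied right after bootstrap)
    oc patch etcd cluster --type=merge \
      -p '{"spec":{"unsupportedConfigOverrides":{"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'

    # ingress: one replica instead of two
    oc patch ingresscontroller default -n openshift-ingress-operator \
      --type=merge -p '{"spec":{"replicas": 1}}'

    # authentication: the very emphatically named OAuth override
    oc patch authentications.operator.openshift.io cluster --type=merge \
      -p '{"spec":{"unsupportedConfigOverrides":{"useUnsupportedUnsafeNonHANonProductionUnstableOAuthServer": true}}}'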
Oh, yeah, look at that, there it is: we have a working cluster. So we'll prove it now by logging into it from the console. Before I step away from this, though: you see this worker ignition file? If you were in the main stage session when we were first starting, and we were talking about the fact that you can't go from a single node cluster to a full cluster — that's really just talking about the control plane. We could use this worker ignition file: I could build a virtual machine, boot that virtual machine off of this worker ignition file, and it would join the cluster as a worker. I would not have an HA etcd control plane, because I would still be using an unsupported, unsafe, non-HA, non-production, unstable etcd, but I can create lots of worker nodes now and have a single-master cluster with worker nodes. So, since our cluster is up, let's go ahead and log into it. You'll notice down here it gave me a URL, a username, and a password. If you lose those, you can get them back. What I was going to show you is under this auth directory here. This kubeconfig is what I've been using when you've seen me issue oc commands. That kubeconfig is kind of like passwordless SSH: it gives me immediate access; it's like my back-door key. Don't share it unless you want to give people cluster-admin access to your cluster. And if you delete it, you can't get it back, so it's probably a good idea to either keep it in a super secret, safe place, or delete it and use the identity management that's there. The other thing it gives you is this kubeadmin password, which is a file that contains the randomly generated string right there that is the password. So let's get a Safari window and the URL to our new cluster. OK, it's using self-signed certs, right? Because I had a very, very simple install-config, so I didn't give it any certificates; so it's using self-signed certs, I do need to accept them, and it's going to make me do that twice. Now, we're going to log in as kubeadmin, and we need our password. There we are, folks: we have an OpenShift single node cluster. Now, one thing to point out here: you see this "2" right here? In a fully functional cluster, that number off to the side of the pod count shows you pods that are in some sort of not-ready state. Let me show you in one of my larger lab clusters: you see, it's not there, because all of the pods are in a good state. You will always see this in a 4.7 single node OpenShift cluster, because there are two etcd quorum guard pods that don't have anywhere to run. So nothing to worry about here; it's not broken; you'll see it simply because those two etcd quorum guard pods don't have a node to run on. When we get the full bootstrap-in-place support in 4.8, you shouldn't see that anymore. But at this point, the cluster is ready to run. There are actually a couple of other things that I included in the tutorial that will be helpful for you. One of them: let's go ahead and set up a real admin user account and a developer user account. All right, and I did include in the tutorial — let me get back to that — in this space, you see this htpasswd-cr.yml: this is a custom resource that I created for you, so that you can set up htpasswd authentication within the cluster itself. And what we're going to do right now — I'm just going to pull these commands straight out of the tutorial — is create a working directory for the credentials.
And locally on my box... this was actually why you saw us installing the httpd-tools at the beginning: that was to get the htpasswd command so that I can create htpasswd files. So I'm going to create an htpasswd file, called htpasswd, in that directory I just created, with a user named admin. And I'm going to make its password the password that was autogenerated. If you don't want to do that, you just put whatever you want your password to be right here. And while I'm at it, let's go ahead and create just a regular developer user too; we'll add it to the same file, and its password will be devpassword.

Now, from that htpasswd file, I'm going to create a secret in the openshift-config namespace. It's a generic secret. I'm going to name it htpasswd-secret, and I'm going to create the secret from the file; it's going to name the file htpasswd inside the secret, and I'm giving it the path to that file. This --from-file with the two equals signs is kind of weird: the part after the first equals is what you want the file inside the secret to be named, and the part after the second equals is the physical file that you're creating the secret from. So I'm going to create that secret.

Now come back over here. In our single node cluster, we should now have a secret called htpasswd-secret. You know, I always forget there's a search here. Right, there it is. So there's the secret we just created in the openshift-config namespace, and it has the content of that htpasswd file we just created.

All right, so that's step one; now it's in openshift-config. Step two is to apply that custom resource, right there, the htpasswd custom resource. And it will let me do this because I'm using an oc apply and not an oc create; I'm in the habit of always using oc apply. So this is just it complaining, but it didn't actually break anything. So now if I... what's the type? I forget what the type is from this custom resource. Oh, there: I've got an OAuth named cluster, you see? There it is. There's my identity provider. It's a type of HTPasswd, and it's using the htpasswd-secret to get its data.

So now we have some real authentication, and I can get out of this kubeadmin user. You see I've got two options here for logging in: I can log in with kubeadmin, which was the one that was created temporarily, or I can log in now with my new provider. So I'm going to log in as admin. Just logged in. I need to do one other thing, though, because I'm not really admin yet; I'm just a developer with self-provisioner rights. So I need to give my admin user authority. I'm going to give my admin user cluster-admin rights by adding the cluster-admin cluster role to that user. And there you go. You saw in the background, boom: now it knows who I am. Now it knows that I'm the administrator of this system. And if I log in as my dev user... now I have an admin and a developer user.

So the last thing I'm going to do is get rid of that temporary kubeadmin account; I don't like leaving credentials lying around that were in plain text. I do that just by deleting the kubeadmin secret from the kube-system namespace. Don't do this until you have another way to administer your system, especially if you've also deleted that kubeconfig right here, because at that point you've locked yourself out of your cluster, and your only recourse is a reinstall.
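Pulling those steps together, here's a sketch of the commands involved. The directory, secret and provider names, and passwords are illustrative; substitute your own:

```shell
# Create a working directory and an htpasswd file with two users.
CRED_DIR="${HOME}/okd-creds"
mkdir -p "${CRED_DIR}"
htpasswd -c -B -b "${CRED_DIR}/htpasswd" admin MySecretAdminPassword
htpasswd -B -b "${CRED_DIR}/htpasswd" devuser devpassword

# Wrap the file in a secret. Before the second '=' is the file's name
# inside the secret; after it is the path to the local file.
oc create secret generic htpasswd-secret -n openshift-config \
  --from-file=htpasswd="${CRED_DIR}/htpasswd"

# Apply the OAuth custom resource that points the cluster at the secret.
oc apply -f - <<EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpasswd-secret
EOF

# Grant the new admin user cluster-admin.
oc adm policy add-cluster-role-to-user cluster-admin admin

# Only after you've verified the new login works:
oc delete secret kubeadmin -n kube-system
```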
But now if I refresh this... there. I don't have to select my identity provider now; I can just log in as admin with the secure password. Oops, I didn't go back to the console. Let me get back to the console URL; I did that from what is now an illegitimate URL. There we go. So we now have two users, a dev user and a cluster admin.

A couple of other things, just to keep the cluster from complaining at you. If you're going to use the internal image registry, you either need to give it a host volume or an ephemeral volume. In the tutorial, I've got the command for you to give it an ephemeral volume. So I'm going to switch the image registry to a managed state, and I'm going to give it an emptyDir, which is an ephemeral volume. Since you only have one node, that ephemeral volume is really sitting on the file system of that one node, so the images you put there are living on that one node. You can use the internal image registry at this point now. And you saw in the background there, it's actually doing some things: the image registry kicked up, so now we've got the image registry running, and that one is shutting down and replacing itself. So now we've got a configured image registry.

The last thing is image pruning. The cluster will complain if you don't have an image pruner scheduled; you'll get warnings in your console. So let's just go ahead and set up an image pruner that will prune out images based on this configuration at midnight. And there we go.

What time is it? We've got an hour left in our official time. If I haven't run everybody off, are there any other questions or anything else that you guys want to talk about? Anything else, Bruce, in the chat? Pretty congratulatory. So, you know, both on the second time around; I guess it's always "turn it off and turn it on again." Yeah, I know. I need to run enough installs; I might kick off a loop that just spins them up and shuts them down to see if I can catch what it is that causes etcd to fall over. Because like I said, I've seen that just a few times, and I literally ran this at least three times this morning and four or five times yesterday while I was cleaning up the documentation. Yeah, I don't usually yank the bootstrap that fast. But of course, the one that we're recording is the one that acts up, and that's probably when it happens. I might experiment with that and see if there's a race condition in yanking the bootstrap node right after it tells you it's okay. Maybe you need to hold your breath and count to ten and then yank the bootstrap node out. I've also seen it become unstable if you wait too long.

Yeah, I have a question for you too. I was in a situation once where authentication was no longer working because of a failed upgrade, and the only way I could get into the system at all to debug it was with the kubeadmin user. And there's sort of no way of getting it back once you delete it. Yeah, that's right. So here's what I've generally done. I have found that htpasswd is pretty stable, even across upgrades. At a previous place, where I was doing this for real in the data center, they had Azure AD as their authentication mechanism, but if something went wrong either with Azure AD or with the cluster's integration to it, they would lose access to their cluster. So we set up htpasswd for just a few of the cluster admins, and that was always their back door.
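For reference, the image registry and pruner changes from that part of the walkthrough look roughly like these two patches. The keepTagRevisions value here is an illustrative assumption; the schedule is the midnight run mentioned above:

```shell
# Manage the internal registry and back it with an ephemeral emptyDir volume.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge \
  -p '{"spec": {"managementState": "Managed", "storage": {"emptyDir": {}}}}'

# Schedule an image pruner to run at midnight so the console stops warning.
oc patch imagepruners.imageregistry.operator.openshift.io cluster \
  --type merge \
  -p '{"spec": {"schedule": "0 0 * * *", "suspend": false, "keepTagRevisions": 3}}'
```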
And then you just set up a password rotation and complexity policy that periodically replaces that secret. That way you've at least got a way to get into your cluster if you lose your primary OAuth provider. I've also set it up with GitLab; I know I used to use GitLab OAuth.

But Michael has a question here. Did you see that, Charo? Yes, yes. And Mike, you're absolutely right. In fact, you can do that with CodeReady Containers too. The way to make CodeReady Containers available off the workstation that you're running it on is to slap HAProxy, or your favorite proxy, in front of it and use that to route the traffic. In this case, I'm using an /etc/resolver entry on my MacBook to get to my bastion host, which is balancing traffic across the three control plane nodes; the three control plane nodes are also my infrastructure nodes, so they're also running ingress and the image registry and other things.

Anything else, guys? Well, Charo, one of the things I noticed in basically all of the various tutorials is that everybody's using self-signed certificates. I think that's because you need a wildcard certificate, and the only free ones you'll get are if you have a real domain name. So you have to spend 20 bucks for a domain name, which isn't that excessive these days. Yeah, which I did. I bought a domain name because one of these days, and my wife is encouraging me to do this, I just never get around to it, I plan on starting an official blog of this stuff. So I did buy a domain: upstream without a paddle. There's nothing there right now if you go to it, but that's me, upstream without a paddle. So I actually do intend to create some real certs and redo this with real certs so that I can show folks how that works as well. I'll get around to it one of these days. I'm not sure if I would have to make any endpoints available on the internet to make that work, though. That's another thing I don't do: make any of the endpoints available on the internet. In fact, I actually run my lab as though it were a disconnected data center. I've got firewalls and routers, actually two layers of firewall, between my cluster and the internet, and I've got a Maven mirror sitting in Nexus. In the big tutorial, there's a section where I'll lead you through doing a disconnected install of OpenShift using Nexus, and you actually put Quay.io and the Red Hat domains in a DNS sinkhole to simulate being firewalled off.

Something else I'd like to do is set up a certificate signing service in the lab so that you've got your own certificate authority that you use to sign the certs. It's kind of the in-between, where you don't have fully signed certs from, like, a VeriSign or the free service whose name I always forget, but you're doing your own certificate management. And there actually is a cert-manager operator. I have it installed in here because it was needed for either Keycloak, or maybe it was ScyllaDB; I forget why I installed it, but I've got the cert-manager operator installed in my cluster to do internal cert management.

Hey, Neil had asked a question: does it work in single node? Neil, if you're still here... I think he was talking about the operator, wasn't it? Weren't you, Neil? Oh, I don't know. I haven't tried installing it in a single node cluster yet. It'd be worth... I don't know, let's see. Is it in OperatorHub? Let's find out. This is the single node cluster, right?
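Circling back to the /etc/resolver trick mentioned a moment ago: on macOS, files under /etc/resolver tell the system to send DNS queries for a given domain to a specific nameserver. A minimal sketch, with an illustrative lab domain and nameserver IP:

```shell
# Send DNS lookups for the lab domain to the lab's bind server.
# The file name must match the domain it covers.
sudo mkdir -p /etc/resolver
echo "nameserver 10.11.12.10" | sudo tee /etc/resolver/my.awesome.lab
```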
Yeah, we're on the single node cluster. Let's go to the OperatorHub. Oh, cert-manager's not in the OperatorHub; I installed it from the GitHub repo. So let's install an operator. Here we go. This is the OperatorHub, everybody, if you haven't seen it. We're going to install the Strimzi Kafka operator: stable channel, all namespaces, and automatic approval. We'll see if this is one that is not single node friendly. So far, so good. Neil, this one's for you. Oh, there we go. We just installed the Kafka operator on the cluster. It is healthy, and it is now ready for us to create a single node Kafka cluster.
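If you want to follow that last step through, a minimal single node Kafka cluster with Strimzi looks roughly like this. This is a sketch, assuming a recent Strimzi release (the v1beta2 API); the cluster name and sizes are illustrative, and older Strimzi versions use a different listener syntax:

```shell
# A one-broker, one-zookeeper Kafka cluster with ephemeral storage,
# sized for a single node lab. Replication factors are dropped to 1
# because there's only one broker to replicate across.
oc apply -f - <<EOF
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: single-node-kafka
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
    storage:
      type: ephemeral
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
EOF
```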