Let's go ahead and get started, Joseph. Did you want to lead off with yours, or do you want me to go first? What's the plan, Jamie? Let's synchronize. Sure. So let's have you walk people through a detailed explanation of your setup with DNS, DHCP and vSphere, and then I'll show my automation. I can show the installation as it is described in the GitHub OKD repo, because that's what I'm using. Sure. And you don't have to wait for the whole thing. Yeah, we're good, Diane. All right, I'm going to pop into the other rooms. Just know that you're recording, you've got about 10 people watching you, so take it away. And you don't have to go for six hours; I set it up so you could go as long as you wanted, but if you wrap up inside six hours you're doing fine. And remind people that you're not doing tech support, you're demoing. Take care. Thanks, Diane. Thank you.

Albert, you're asking about vSphere: you want workers in different clusters — can you use IPI and move a worker after it's created, or should you use UPI? Do you want to share workers across different clusters, or what is your purpose? I don't understand the question, to be honest. Let me respond to Larry real quick: 3.x is no longer supported as of May, so you'll want to move over as soon as possible. There are some migration guides available on the web, and if you have any particular questions about migration we might be able to handle them, or folks in the event chat can. You'll definitely want to move over as soon as possible. OK, while we're waiting for an answer, why don't you go ahead, Joseph? Oh, here we go, we got a response. I've never heard of that scenario. So you're thinking of moving the VM to a different vSphere installation and having it work? Is this UPI or IPI? OK, so essentially a worker is bound to its control plane, and if you want to use a worker on a different cluster, you basically need to redeploy it. That's one of the foundations of Fedora CoreOS: when you want to make a change, you just redeploy the node. You would redeploy it by taking the metadata that was generated in the ignition config file and inserting that into the VM — you could do this via UPI if you did UPI. There's a flag you have to set; I can't think of it off the top of my head. But basically, when you boot the VM up the next time, it will re-provision and make the connection to the other control plane you want to move it to. So there's really no way to just point a worker at a different control plane; essentially, you'll want to re-provision. I hope that helps.

I could show an installation based on the OKD repository, maybe the first steps. Yeah, go ahead and show the automation stuff that simplifies all of that. OK, so it all starts here. This is the GitHub repository of OKD. Don't be confused that you won't find any source code in this repository; it's more of a meta repository — mostly documentation, plus the guides. In this Guides folder we currently have an IPI Azure description and UPI in different flavors. I can show you how I start with the vSphere Terraform version. Does anybody here use Terraform? Maybe we can do a quick poll first. "All the time" — that's good, very good, good decision. OK — three, three it is.
OK, I'll spend a few minutes on that. The first thing is that you should clone this repository — the OKD repository. I've done that in advance; I built a cluster from it an hour ago. This is the repository, and I'm in the vSphere Terraform folder. Let me try to make it bigger — oops, it scrolls away, give me a second. Maybe like this; I hope you can see something. Jamie, if somebody asks anything, maybe you can relay it to me, because I can't see the chat. Absolutely, yep. So this is the repository, and I'm starting from scratch. It's the same one you see on GitHub: if you go to Guides, UPI vSphere Terraform, you will land in this repository. You will see there is a file called terraform.tfvars.example — I'll show you this one. Here it is. First, you have to fill out the variables. Don't be frightened; it's nothing special. You have your cluster ID and your cluster domain. I can show you what I built from that — here it is: the cluster name is c1, and in my example the domain is homelab.net. You have to fill that out. Let me find my pointer. Then you tell it where the vSphere server is and give your vSphere credentials, and then you provide the information about the vSphere datacenter and datastore. I can show you how that looks in vSphere. This is the installation I bought with my $150 VMware User Group (VMUG) subscription; I'm using it in my homelab. Here you see the datacenter, and the storage is called datastore2 — you have to fill that in. In this Terraform variables file you also say how many masters and how many workers you want deployed. Then you have to provide a few more ignition-configuration-related settings, which you can mostly leave as they are. And here you have to provide the URL of a web server where you serve the bootstrap ignition file — because, as you may remember from my slides, in the first step the bootstrap node fetches its ignition file from somewhere, and that's what you provide here: the location of a web server where the bootstrap ignition file lives. I'll show you how you create that in a few minutes. Then you can provide the MAC addresses of your VMs: one MAC address for the bootstrap VM, three MAC addresses for the three masters of the control plane, and three MAC addresses for the three workers. And that's pretty much everything you have to provide here — that's most of the work you have to do.
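To make that concrete, here is roughly what a filled-out variables file ends up looking like. The variable names and values below are illustrative only — check the terraform.tfvars.example that ships with the guide for the exact names your version expects:

```bash
# Sketch of a terraform.tfvars for the vSphere UPI guide (illustrative names/values).
cat > terraform.tfvars <<'EOF'
cluster_id     = "c1"
cluster_domain = "c1.homelab.net"
base_domain    = "homelab.net"

vsphere_server     = "vcenter.homelab.net"
vsphere_user       = "administrator@vsphere.local"
vsphere_password   = "CHANGE_ME"
vsphere_datacenter = "Datacenter"
vsphere_datastore  = "datastore2"

control_plane_count = 3
compute_count       = 3

# Web server that serves the bootstrap ignition file in the first phase
bootstrap_ignition_url = "http://192.168.1.10:8080/bootstrap.ign"

# MAC addresses so DHCP reservations hand out predictable IPs
bootstrap_mac_address       = "00:50:56:00:00:10"
control_plane_mac_addresses = ["00:50:56:00:00:11", "00:50:56:00:00:12", "00:50:56:00:00:13"]
compute_mac_addresses       = ["00:50:56:00:00:21", "00:50:56:00:00:22", "00:50:56:00:00:23"]
EOF
```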
Then we have an installation folder here. In this folder you have to provide the ignition files. I downloaded the openshift-install binary from OKD's releases page — maybe you remember: if you go to the OKD site, the releases are on the right, and you simply choose the installer binary for the version you are interested in and download it. In this case I would download the openshift-install for Linux, because I'm on Linux. I did that previously for an older version; you untar it until you have the installer binary. I have this installer set up for OKD 4.5, which is fine for this demonstration. Afterwards, you have to give the installer an install-config.yaml file. I can't show mine in detail because my vSphere credentials are in it, but I will copy it into the location where the installer is — I can show you a template without credentials later. I have configured it for OVN. Maybe I shouldn't do this in that directory; let me create a different one. Now we have an install-config OVN file, and I copy it to install-config.yaml. The install-config.yaml contains information like, again, how many workers and masters you have and what the domain name is. You can also provide vSphere credentials: if you later want to dynamically create more workers in your running cluster, OpenShift or OKD will use the vSphere API with the credentials provided here to provision more workers on the fly. An autoscaler could also use these credentials to create more nodes dynamically. The next step gives you a few possibilities. I create the ignition config files directly. You could also create the manifests of the cluster operators at this stage, configure them, and afterwards build the ignition config files from those manifests — that's useful if you don't want to create a cluster and configure it afterwards, but instead want a cluster that is preconfigured with your implementation details. In this example I don't do anything like that; I create my ignition files straight away. And here we have the most important one, the bootstrap ignition file. We can look inside it: it's a huge JSON file, I think about 200 kilobytes in size. And we have a few smaller ignition files, a master ignition file and a worker ignition file — see, they're much smaller. They almost exclusively contain a root certification authority certificate — it goes from here to here, base64 encoded — and that's pretty much everything. What they also contain is a URL that points to the load balancer, on port 22623. That port is always the port of the machine config server that runs on the bootstrap node and later on the control plane. Using that URL, this small ignition file used for the master VMs will constantly poll for an ignition file — from the bootstrap node in the first phase, or from the control plane — at that path. It tries and tries until it gets the configuration file, and then it provisions itself. The ignition file that gets fetched from the bootstrap VM is much bigger — almost the same size as the bootstrap one, almost 300 kilobytes. And the thing is that you cannot pass very big ignition files to a VM through vSphere. I don't know the exact size limit vSphere supports for a VM's ignition data, but it's much smaller than what's needed, and this two-phase ignition fetch is used to overcome that limit. For the bootstrap VM we also use a small stub, called append-bootstrap. It's even smaller than the master ignition file and just contains the address of a web server. I used a simple Apache web server on this helper node here, and that Apache serves the bootstrap.ign file. For that to work, I have to copy the bootstrap ignition file into the /var/www/html folder. I'm not doing that here, but normally that's the procedure. Afterwards, I can start Terraform to do its work.
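Put together, that flow looks roughly like this. The web server address and directory names are placeholders, and the Ignition spec version in the stub should match what your openshift-install release generates:

```bash
# Generate the ignition configs from install-config.yaml
# (the installer consumes install-config.yaml, so keep a copy elsewhere).
./openshift-install create ignition-configs --dir=install-dir

# Small "append" stub handed to the bootstrap VM: it only tells Ignition to
# merge the real (much larger) bootstrap.ign from the helper node's web server.
cat > append-bootstrap.ign <<'EOF'
{
  "ignition": {
    "version": "3.1.0",
    "config": {
      "merge": [
        { "source": "http://192.168.1.10:8080/bootstrap.ign" }
      ]
    }
  }
}
EOF

# Publish the full bootstrap ignition on the helper node's web root,
# then let Terraform build the VMs.
sudo cp install-dir/bootstrap.ign /var/www/html/
terraform init && terraform apply -auto-approve
```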
You can see here that I already provisioned VMs with this method — the cluster is already set up. I hope you can read it; OK, I can make it bigger. Here we see all the VMs, and this is a running cluster. It should look like this in the end: you see a dashboard and — what is this? Ah, the samples operator. Don't worry about that. Normally you have three green checkmarks and you're fine, and then you have your first running OKD cluster. I don't know if I should destroy this one and create a new one — are you interested in that? It would take a little time, but we could watch the initial phase where the bootstrap node is set up. Well, that will happen in the one that I'm going to do for the automation, so they'll see all the VMs get created. OK, great. I ask because I think lots of people struggle in this first phase: the bootstrap VM comes up, it gets its bootstrap ignition file, but afterwards things get stuck — and maybe it's also interesting to show how you can debug that. Jamie, maybe we can do a debug session: produce a problem on purpose and try to find the solution. You will find plenty of videos on the internet that show you the perfect world, but people may get more out of these sessions if they see how to troubleshoot. OK, I will stop screen sharing here, if I can find the button.

OK — a question from the chat: if you do UPI, can you still autoscale workers? Sure, that works perfectly. For that to work, you have to — sorry, I will share again; I'm still getting used to this tool. OK, I hope you can see my screen. For that to work, you have to create a machine set, as it's called in OpenShift. The machine set has a provider spec where you tell OpenShift which kind of provider you want to use to create machines. I'll show you the documentation for that — here it is, the documentation on how to create a machine set on vSphere, and it looks similar to this section here. You also have to provide the information about your vSphere cluster: the disk space, memory and CPUs of the VMs that get created in vSphere, the datacenter, the datastore, and so on. Once you apply this machine set, you can simply do two things. I don't have one set up tonight, and if I tried I don't think it would work, but at least we can see something. You will see here that you can create more machines by simply pressing plus and minus, and if I had filled in my vSphere credentials, pressing Save would make vSphere immediately create new VMs. You also have to provide a VM template for Fedora CoreOS in your machine set — that was one of the fields; I don't see it right now, but it should be one of these. Then you will see the phase here, the provider state, taken from vSphere: it says powered on or powered off. Then the machine — Fedora CoreOS — will try to join the cluster. It goes rather quickly: after a few minutes you will see the machine here as unready, and a few minutes more and your newly created machine is in the ready state.
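For orientation, a vSphere machine set looks roughly like the sketch below. Names, sizes and the template are placeholders, and the exact field layout is the one shown in the documentation he is pointing at:

```bash
# Sketch of a worker MachineSet for the vSphere provider (placeholder values).
cat <<'EOF' | oc apply -f -
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: c1-worker
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: c1
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: c1
      machine.openshift.io/cluster-api-machineset: c1-worker
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: c1
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: c1-worker
    spec:
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          kind: VSphereMachineProviderSpec
          template: fedora-coreos-33-template   # FCOS template imported into vSphere
          numCPUs: 4
          memoryMiB: 8192
          diskGiB: 120
          network:
            devices:
              - networkName: "VM Network"
          workspace:
            server: vcenter.homelab.net
            datacenter: Datacenter
            datastore: datastore2
            folder: /Datacenter/vm/c1
          credentialsSecret:
            name: vsphere-cloud-credentials
          userDataSecret:
            name: worker-user-data
EOF
```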
There are also a few tricks around this. If you want to put some specialties on the newly created workers, you have to provide ignition data for them, and there is something — I don't know whether it's documented, but I found it and it's rather useful. In the openshift-machine-api namespace you have secrets: master-user-data and worker-user-data. If you look at one, you will see a big base64-encoded string, and if you decode it — surprise, surprise — you again see the small ignition stub, similar to the one we got from the installer, with exactly the same URL. This ignition data makes a new FCOS machine constantly pull its ignition config from that URL. Here is a certificate, and here you can do whatever you want: I also set a hostname on newly created machines with this method, by simply adding the hostname field here in the ignition data. You can also create services that talk to VMware through the VMware Tools daemon — you can do everything you want with this secret. It lives in openshift-machine-api, under Secrets; there are two of them, one for the masters and one for the workers.

But you asked me about automatic creation of machines — yes, sure, there is an autoscaler. The autoscaler is here under Compute. By default you don't have one enabled, but you can create it. You have to tell the autoscaler which machine set to use — the machine set we talked about a few minutes ago — give it a name, and say how many nodes it is allowed to bring up and what the minimum node count is that it may scale down to. As I said, it works pretty well. There are some specialties, but those are not OpenShift-related, more Kubernetes-related: if you use resources like pod disruption budgets, it can happen that when the cluster autoscaler tries to delete nodes, those PDB resources block the eviction, because removing the node would break the contract about how many pods must keep running in your deployment — that's exactly what a PDB does, it defines a minimum number of pods that must always run. But the autoscaler works well; no complaints about that.

OK, any more questions before we move on to automating the process? I did put two polls up: are you currently running OKD, and if yes, what version? It'd be interesting to get a sense of what folks are running. So now let's talk a little bit about automating the UPI process. If you're doing UPI, as mentioned before, you're going to want a load balancer, and possibly a proxy if you're on a private network. There's a link to a script that I wrote — I'll put it in the chat. It's a project I've been working on for a while called OCT, and it basically automates the UPI process so that you can continuously test OKD cluster installs and everything that goes with them. It lets you do everything from generating the ignition config files, to downloading the version of Fedora CoreOS you want to use as the base operating system on the nodes, to running the OpenShift installer. I've got a list of the prerequisites — I went through these before, but essentially you'll want DNS entries for your bootstrap, your masters, your workers, for api and api-int, and the wildcard for your apps. Your load balancer needs to handle both the API and ingress, so you'll need two different pools for that — a sketch of what those records boil down to follows below.
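Concretely, something like the following. Hostnames, IPs, the c1.homelab.net domain and the zone-file path are all placeholders, and your DNS server's syntax may differ:

```bash
# Sketch of the DNS records a UPI install expects (BIND-style zone snippet).
cat >> /var/named/homelab.net.zone <<'EOF'
; API and machine-config traffic -> load balancer pool of bootstrap + masters
api.c1.homelab.net.       IN A 192.168.1.20
api-int.c1.homelab.net.   IN A 192.168.1.20
; Ingress traffic -> load balancer pool of the workers
*.apps.c1.homelab.net.    IN A 192.168.1.21
; Individual nodes (handy for SSH and debugging)
bootstrap.c1.homelab.net. IN A 192.168.1.30
master-0.c1.homelab.net.  IN A 192.168.1.31
master-1.c1.homelab.net.  IN A 192.168.1.32
master-2.c1.homelab.net.  IN A 192.168.1.33
worker-0.c1.homelab.net.  IN A 192.168.1.41
worker-1.c1.homelab.net.  IN A 192.168.1.42
EOF
```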
And again, if you're on a private network, you're going to want a proxy for outgoing traffic — both for the installation itself, to download the containers off Quay or from the testing releases, and for the regular operation of your cluster, so your pods can make outgoing requests like running a yum update or a composer install or retrieving network resources out on the net. Squid is a good one for that; it's something you can set up and configure pretty easily. So let me share my screen here. There we go. Here is the repository for the tool I have been working on, OCT. It's a command line tool to simplify the process of building and destroying OKD clusters on vSphere. It utilizes the govc command and the oc and kubectl tools that come with OpenShift — govc is a separate project that this tool uses, and oc and kubectl are provided with the OKD and OpenShift releases. Here are the command line arguments; I won't go through all of them, but basically it lets you automate all of the stuff Joseph was just talking about in terms of having your configuration file, and it also has a bunch of extra features. I've got a list of the functions here. For example, the tool checks whether you have oc installed, and if you don't, it pulls a version of it down to a bin folder in your working directory. This was mentioned before by Vadim, and I'll expand on it a little: the oc binary that you use to manage an OKD cluster can also be used to pull down the installation tools for different versions. So if you're running a 4.6 cluster and you happen to have the oc tool, you can use it to download the installer and the oc binary for a particular version, like 4.7 or a nightly release. That's something a lot of folks aren't really aware of, but it lets you test very efficiently, just by pulling the installation tools and the matching oc.
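As a rough sketch of that trick — the release tag below is only an example pattern; pick a real one from the OKD releases page:

```bash
# Assumes you already have some working oc binary on your PATH.
RELEASE="quay.io/openshift/okd:4.6.0-0.okd-2021-02-14-205305"

# Extract the openshift-install and openshift-client (oc/kubectl) tarballs
# for exactly that release into the current directory.
oc adm release extract --tools "$RELEASE"

# Inspect the release, including the machine-os (Fedora CoreOS) build it targets.
oc adm release info "$RELEASE"
```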
The script also checks whether you have the govc tool, which lets you work with vSphere from the command line, remotely or locally — you can create VMs, import OVA templates, and do all sorts of things very simply with govc — so my script builds on that. There are a couple of functions within the script that do the heavy lifting. The first one is install cluster tools, which installs oc, kubectl, and the openshift-install binary for the version you want. Then there's one called launch pre-run. Launch pre-run does the work Joseph was just talking about: you have that configuration file, the installer generates the ignition config files, and those embed the SSH account that was created — the core account — which you use to go in and trigger things. Launch pre-run basically generates those files for you and modifies them, for example inserting what they call a pull secret; it takes care of all that. Once you've created a template of the config file, it copies the template into a fresh one, fills in the necessary information, and gets everything set up so that you can do a deployment. The deploy node function does the part of generating a VM in vSphere for each of the node types you need — worker, control plane, and bootstrap — inserts the appropriate ignition config into it, and can also boot it up. That's for an individual node; build cluster is what calls deploy node. Build cluster takes all of the information and calls deploy node for each node that you need.

Deploy node can also be used for deploying standalone Fedora CoreOS nodes. So if you want to play around with Fedora CoreOS — we talked about that a little in the main opening session — this tool can automate that process as well. Then there's destroy cluster, which lets you very easily tear down your cluster; manage power, for bringing nodes up or down; and also clean. What clean does is clean up the files we were talking about before that get generated when you go to do an install — the master ignition, the metadata JSON file, and all of that. So I think what I'll do is demonstrate that right now, then do a destroy, and we'll go from there. Here is a cluster I have running. It's called logos, and it's got a bootstrap, three master nodes, and two worker nodes. First I'll clean up this mess that gets created: I call the script with clean, and there you go — all of the files that were created and needed by the installer are removed. This lets you quickly redeploy and start fresh if you run into problems. I'm also going to delete the bin directory, which has the installer, oc and kubectl, just so we have a completely clean environment. OK, so I have this append-bootstrap and the install-config template. This script is being run on an installer machine that has Apache running on it — for UPI vSphere, you're often going to be pulling the bootstrap ignition off a web server, and the web server is actually running on this node. But let's first destroy the cluster I had before. So I do destroy, master node count three, worker node count two — and whoops, what did I miss here? For some reason it's not working; let me see what I missed. Ah, that's why — destroy needs the cluster name, I forgot. The script lets you work with multiple clusters, so I need to pass the cluster name. So we do this, and now you can watch these nodes actually disappear as they get deleted. This is again using the govc command line tool to connect to vSphere and delete those nodes. It just takes a second. And Joseph, let me know if there are any questions in the chat, since I can't see it. So now we've got a fresh environment. All we have is the append-bootstrap, which we can reuse because it doesn't have any unique data — just the web address and basic parameters — and we've got a template here, which I won't cat out, but it's basically the template you see in the install instructions, with the SSH key and credentials in it. The script knows to copy that template into a fresh version of the config file and use that. Now I'll show you the wrapper script I created that calls all the different functions for those stages. I have a script called build logos — it's sort of a wrapper script — and as you can see, it calls all of the different functions.
It allows you — I'm disabling this part right now because it's like watching paint dry, watching the import — to import the OVA template you want to use for your OKD install from a URL into your vSphere. You set your basic parameters up here: how many masters and workers, the template URL you want to use (or the template name if you're going to skip that step), your library within vSphere, your cluster name and where you want to put the cluster in vSphere, which network it's going to use — I'm using VM Network, the default network — and then the folder for the installs. This source line just reads in my credentials for the tasks. And then again, here's the import — I've commented it out — which imports the template into the library you want. This installs the tools; I'm going to install the tools for a 4.6 OKD installation. Then pre-run with auto secret. Auto secret is a flag that I created that inserts a sort of dummy pull secret, which saves you from having to go to the OpenShift portal to get a generated pull secret. You don't really lose any functionality by using the dummy secret. Eventually there is talk in OKD of making use of the functionality the pull secret provides, but right now there aren't many ramifications to just using the dummy one. And here we have the call to build, where you provide the cluster folder, cluster name, node counts — all of those things we discussed. This next bit is a little trick I do so that I can use reserved DHCP: I have a script that holds the MAC addresses, and I call it to set the MAC addresses on all the VMs, so I know which name and IP number each node is going to get without doing a static IP installation — static IP installation is a little less flexible. Then this turns the nodes on, and this runs the OpenShift installer. So, in my installation folder, I'm going to run the script we were just looking at, build logos. I happen to have it in my path, so I go like this, and now it's running all of those steps: it's downloading the cluster tools for 4.6, which will take just a second. While this is running, I can mention that there are some new features I'll talk about at the end of this session that will make this even more automated. So here it's creating the manifests, which is a step you would do manually in UPI. It's done that, and it's modified the manifests to make sure the control nodes are unschedulable for worker pods and basically set up the control plane, so you don't have to do those manual steps.
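Done by hand, that manifest step is roughly the following — the directory name is a placeholder, and the sed edit assumes the default UPI manifest layout:

```bash
# Create the manifests, mark the control plane unschedulable for application
# pods, then generate the ignition configs from the modified manifests.
./openshift-install create manifests --dir=install-dir

# On UPI the scheduler manifest defaults to mastersSchedulable: true;
# flip it to false so workloads stay off the control plane.
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' \
  install-dir/manifests/cluster-scheduler-02-config.yml

./openshift-install create ignition-configs --dir=install-dir
```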
It copied the bootstrap ignition to my /var/www/html for the web server, and now it's deploying the individual nodes. As you see here, the bootstrap is getting deployed, and it will go through each one, adding that ignition metadata so that when the nodes are booted up, they automatically start performing their tasks: the bootstrap node will be available for the installer to pull down the initial containers, and the worker nodes will boot, do their Fedora CoreOS update, and then restart. That's a process folks may or may not be familiar with: when your nodes first boot up, they install the most recent version of Fedora CoreOS and then boot into that again. So you can see all of these nodes getting built, and we'll wait just a minute for those to complete. And Joseph? Yeah — you could show the console window when it starts, so we can see the bootstrap process. Yeah, I'll stay on the bootstrap and switch over as soon as it powers up. It waits until all of the nodes are created before it powers anything up; I wrote it that way because there's a flag you can set if, for example, you wanted to build the cluster but not turn it on or run the installer yet. It's done as components so that you have that flexibility. So it'll probably take about two more minutes — are there any questions at this point, or anything that seems unclear?

Yeah — a question from Mark Delaney: do all of the components get installed onto a single datastore? Is there any way to assign more than one, so that the three masters, two workers plus bootstrap aren't all being built on a single datastore? It depends on how your vSphere is configured — it will automatically put them in the best place for resources if you have that enabled — but there's another route: my script doesn't do this now, but you can do it very easily, and an old version of my script used to, where you set the datastore on each individual node, and it makes no difference to the cluster. I can share that code with you — was it Mark Delaney? If you share your email, I'll send you the old code that lets you select the datastore on a per-node basis. That's something some folks might want to do if, for example, they anticipate their workers being larger than their control plane.

OK, this is going a little slower than I hoped — are there any other questions? Here's one from Larry: are the vSphere role permissions well documented? He suspects that most of these installations are using full admin accounts. I can assure you, Larry, we don't have full admin accounts — I think those permissions are documented. Yeah, I believe it's at the very top of the vSphere UPI 4.6 documentation; let me see if I can find it. If it's not, I'll find it and I'm happy to post it — if you leave some contact info, we can put it in the blog post we're going to do after this session, covering all the stuff we went through. We also ran into situations where we had to figure it out by trial and error a few times, but we started with OKD 4.2, I think, a year and a half ago, and the documentation was not as good then as it is now. So I'll find that for you; I'm sorry I can't find it right this second. Hold on one sec.

OK, so we've got our bootstrap done and now it's going to power on. We may miss the ignition fetch on the bootstrap, but on the masters we will see it clearly. Yes, it'll take a bit of time here. OK, it's finishing up the workers, and as soon as that's done we'll be able to see it. Maybe we could SSH into the bootstrap node — I think it's interesting to see what it does. Yeah, well, let's watch the bootstrap process first and we'll see how folks feel about watching things scroll by, but let's start with the bootstrap.
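If you do SSH into the bootstrap to watch it, the usual way in is the core user plus journalctl. The hostname below is a placeholder, and the wait-for command is run from wherever the openshift-install binary and install directory live:

```bash
# Follow the bootstrap services over SSH (core is the user from install-config.yaml).
ssh core@bootstrap.c1.homelab.net 'journalctl -b -f -u release-image.service -u bootkube.service'

# The early control-plane containers on the bootstrap run under podman:
ssh core@bootstrap.c1.homelab.net 'sudo podman ps'

# Meanwhile, from the installer machine, this blocks until bootstrapping
# finishes (or times out) and logs progress hints:
./openshift-install wait-for bootstrap-complete --dir=install-dir --log-level=info
```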
Could you create a snapshot, maybe? So we could see it here in the UI, and afterwards, after rolling back the snapshot, we could SSH into it — we've done that before. What's the first step? Yeah, hold on. OK, so now it's powering all of the nodes up, going through them. So here's the bootstrap — here is that first run of the bootstrap node, and it always takes a few attempts; on the sixth try it's able to get the ignition config off my server. Now you see it reading that ignition config in, configuring the network and all of that, resizing the disk. Right, exactly. OK, so in the background — well, you can actually see it here on the screen — it's performing the task of updating the FCOS image, downloading the latest Fedora CoreOS version, and then it will reboot into the updated version. Let me bring up a master — ah, the masters are already running, and they are waiting patiently for the bootstrap to complete. OK, so it's going through that process, and then you'll see it reboot again, and a couple of seconds later the master nodes will be pulling their info. Again, this is hitting the control-plane pool that's set up in the F5 that I have — for Joseph it would be the HAProxy. Yep, here we go, rebooting into the latest version. And you can see down here that the installer is waiting, for that initial 20 minutes, for the bootstrap to start and make itself available. Once the bootstrap is done — a couple of seconds after it reboots and brings everything up to provide the machine configs — this will change, and you'll see it switch to waiting for the control plane, for 30 minutes I think. We won't wait for all of that. But you could use your kubeadmin credentials to call oc get pods or namespaces, so we can watch what it does — that should be possible now. Maybe we can watch the cluster operators as they get into running state. Let's see — how do I do this again? You can set an environment variable. Yeah, I have a script that does this; let me just remember what it is.

Jesper is asking which Fedora CoreOS version you have to use for OKD. It depends — you can normally start with the FCOS version that is mentioned in the OKD release. If you go to the download page — I always Google for "origin CI release", and the first hit is the page where all stable releases of OKD are listed — and you click on a version, you will see which FCOS version is installed by that release, and you can always start with that one. Right, so if you click on a particular version like this, you can see — let's see, where is it? Hmm, not here, but normally... Oh, you know what, it's on the GitHub page. If we go to Get Started and then to the releases on there — the releases page on GitHub — you can see the different components in the release, and that includes the OS version used for the installation. And that is — where is it? I'm not sure it's always there; I think that's room for improvement. Sometimes it's up there. On 4.6 it normally is — in almost every release you see it, though there are a few recent releases where they didn't write it down. Yeah, it's strange, because usually they do have it there. Ah, I see it — Machine OS, yeah.
So this is the one you want, Machine OS — and were we missing it earlier, or is it really not up there? OK, that's strange. Anyway, Machine OS tells you which version of Fedora CoreOS the release works with. There are some bugs, though: for example, with the most recent version of Fedora CoreOS you don't want to start fresh — you want to start with a previous version, and that will work; there's an issue with Podman on the most recent Fedora CoreOS that doesn't work with OKD. In the repo there is an issues section where you can find things along those lines, and there's a known-issues page — let's see if it's in there. No, it's not, but there's an article about it on the blog; I think you put something up about that.

So, as we can see here, the installation is now waiting for the control plane to configure, and we see one master booting up — and if we look at the others, those are booting up as well. Now these are going, and your workers are still going to be waiting, because they can't get anything until the control plane is up — that's why you see this internal server error. And if we go here, you won't see any nodes yet, until they've joined the control plane and the installer has finished. OK, so now they're rebooting and about to join — yep, there you go, booting into the latest version of Fedora CoreOS, and then the etcd cluster will configure itself. Do you have a chance to set the kubeconfig environment variable? Maybe we could have a look. Yeah, I did. You can export KUBECONFIG — uppercase — pointing into the install directory's auth folder. Yeah, I've got it, I've loaded it in. And then, let's see — what's the flag to use it for each command? You don't need one: if you set the environment variable, you simply run oc. Right, it is set. So, oc get nodes — ah, there you go. Can you do an oc get pods, all namespaces, maybe in a watch? Which flag is it — all namespaces? It's --all-namespaces, with two hyphens. There we go. Yep, those are all of the operators getting installed. You can also do oc get co, which gives you the list of cluster operators as well. And let's see where we are with the nodes — OK, they are not ready yet, but they will soon be joining the cluster. The first thing is that the network sets itself up; that's normally the most critical phase. And here's a step I need to automate — I should put this in my script: once the installation is done, you have to approve certificates, and the process of approving them can take a while, so it's handy to have the certificate approval handled. There it is — you have to approve all the certificates. There won't be any yet, I don't think, because the installation hasn't completed. You have to approve the workers: once the control plane is set up, the workers start to provision themselves, and after a while — after the first reboot, once the kubelet starts — you will see the CSRs, the certificate signing requests. And there's a handy little command you can use once that happens; well, we don't have to wait for it.
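For reference, the commands being used here look roughly like this — install-dir is a placeholder, and the approve one-liner is the one from the installation docs:

```bash
export KUBECONFIG=$PWD/install-dir/auth/kubeconfig
oc get nodes
oc get pods --all-namespaces
oc get clusteroperators

# List pending certificate signing requests and approve them all in one go;
# --no-run-if-empty keeps xargs quiet when there is nothing to approve.
oc get csr
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve
```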
Essentially that one-liner gets all of the pending CSRs and approves them. They added a nice little flag to the documentation recently, --no-run-if-empty — before that you would get an error if there were no CSRs, and with it xargs simply doesn't run if it gets nothing back. So that is good. Anyway, now we're waiting for the install to happen — are there any questions on what we've shown? Have you written the script on your own? I love it. Yeah, I wrote all of this, over a period of about a year and a half, because I was doing the installation process over and over again for testing. I've got a production OCP, a production OKD and a testing OKD, and between rebuilding those and having disaster recovery — essentially being able to get them back up quickly — I wanted something that simplified the process. This is the actual script code here, and if there are any features folks would like to see, I'm happy to write those in. There are a lot of folks doing UPI, so it made sense to me to write something like this and share it with the community.

There's a question from Larry: do you use the script for both OKD and OCP? Yes, it works for both; there's no difference. The only thing is that for OCP, to get support, you want to have the real pull secret. So in the code — and I'm sorry for flipping through really fast, I hope I'm not making people dizzy — if you look at the pre-run, basically if you don't say to use the dummy pull secret (or auto pull secret, as I call it), it says "please enter your pull secret", and you paste it into a dialogue in the script, so it isn't completely automated. I think what I'm going to do is add another else to this, where it reads your config file, and if there is a pull secret already there it just uses that and duplicates it. Pull secrets are good for — what is it, 24 hours or 48 hours? I can't remember.

Jesper says awesome work, and he has a question: he's a bit confused about why you are not using IPI. Sure — what's the difference? The reason I'm not using IPI is that we wanted to have the F5 front-facing for the cluster, for a couple of reasons: we can route requests through its load balancing and have things like monitoring notifications even if the cluster goes down entirely, redirect those URLs to outage pages, and use the other functions we have in the F5 that are a bit superior to the internal load balancing within vSphere. There's also, compared to IPI, the ability to add more network customizations, and some benefit in terms of portability: because we have this script and the F5, we can duplicate this in other places that maybe aren't vSphere, or that have slightly different infrastructure. Not relying on everything being internal to vSphere was the way we wanted to go right now. And also, at the time, it wasn't clearly documented.
When I first started this, it wasn't clearly documented how to set the subnets for your cluster and so on. They've added a lot of functionality and documentation recently about setting the subnets for the cluster and for the pod network; a lot of that wasn't clearly defined early in the 4.x releases. So I hope that answers your question. OK, and it's done — it completed in 17 minutes. Now we'll see the workers booting up, and if we go here and check, there are no certificates yet. We see that the masters are ready and running the most recent version of OKD 4.6 and Fedora CoreOS, and now the worker nodes are going to come up. One thing I didn't mention: in your call to the script, when you put the release version, you can give the whole release version — copying that string for one very particular release, like from the nightlies — or you can just put a major.minor string like 4.6 or 4.7, and what you get back is the most recent version for that major.minor release. That's great. Yeah, it gives you a lot of flexibility.

Larry says he's still confused about the cluster API subnets compared to the application subnets. Sure. Your OpenShift cluster makes REST calls to what's called the API address — api.clustername.domainname — and whether you do UPI or IPI, that name maps to a load balancing pool. The two sides don't necessarily have to be different IP subnets, but you want different pools, because requests to the API address are meant for controlling the cluster: adding nodes, spinning up pods, and generally cluster management or development tasks. The other side is where the applications actually run: the pool for that hangs off another address, *.apps.clustername plus the rest of the domain name. Any time you spin up an application, it gets a route of the form application-name.apps.cluster-domain-name, and you can do further mapping on top of that, but essentially that's a separate load balancing pool — whether you're doing IPI or UPI — because you want those requests to go to your worker nodes, which handle the traffic for the actual applications. Does that explain it and answer the question? No answer yet — OK, we'll see. So now when we do get nodes, those are ready; we'll check for certificates — none yet, because the workers are still rebooting into the OS. They are rebooting right now — yeah, here we go, the worker nodes are switching into the new version of FCOS, and then you'll see some certificate requests that need to be approved; once that happens, the worker nodes are added. Was there a response on whether my answer made things clearer or muddier? Did that answer Larry's subnet question? I think so. OK — so it's pools, basically: two different pools, which may or may not map to different IP subnets, but different pools for different tasks.
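For Joseph's HAProxy-based setup, those two pools look roughly like this — IPs and names are placeholders, and the machine-config backend on 22623 and the plain-HTTP ingress backend on 80 follow the same pattern:

```bash
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
# API pool: bootstrap + control plane (remove the bootstrap line after install)
frontend api
  bind *:6443
  mode tcp
  default_backend api
backend api
  mode tcp
  balance roundrobin
  server bootstrap 192.168.1.30:6443 check
  server master-0  192.168.1.31:6443 check
  server master-1  192.168.1.32:6443 check
  server master-2  192.168.1.33:6443 check

# Ingress pool: the workers, for *.apps traffic
frontend ingress-https
  bind *:443
  mode tcp
  default_backend ingress-https
backend ingress-https
  mode tcp
  balance roundrobin
  server worker-0 192.168.1.41:443 check
  server worker-1 192.168.1.42:443 check
EOF
```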
OK, so now this is up, and there we go — we just approved the CSRs for the two worker nodes, and here's the other one. Then another set of certificates pops up — there they are, and I just approved them. So now, if we do get nodes, in a couple of seconds — usually within about a minute — the worker nodes become available. Here they are. So with the script, just a couple of clicks, and a bit of patience while approving the CSRs, you can have an OKD cluster running within an hour. Let's do get nodes — still not quite yet. Any questions about any of this? I hope this was helpful. The thing is that you can customize pretty much everything with UPI: you can export all the YAML manifests with the installer, patch everything, and create the ignition files from the patched manifests, so your cluster is completely pre-configured right from the start — and I don't think that's possible with IPI. Right, and here we go: we've got one ready, and I bet when this returns the other one will be ready too and we will have a working cluster. Now it's going to take some time — if you do oc get co, you'll see that the operators are not done yet; there's still one that needs to finish here, and ingress always needs to finish, and monitoring. That will happen over a period of a few minutes; they are getting configured right now.

Larry — sorry? Yeah, no, Larry, you only have to sign the certificates during the first installation. I've also asked why they do that: they say it's for security reasons — they don't want Fedora CoreOS machines wildly joining an existing cluster that accepts everything that shows up. But as I said earlier, if you use machine sets in a running cluster, the approval of new machines is done automatically — if that's what you mean by automatic cert signing, Larry. Right.

Do you have any examples of manifest patching via ignition files? We can show that, maybe in a separate directory. Well, here's a brief one: if we go to the code — yeah, right in pre-run — here's an example of some things happening with the manifests. There's a manifest that gets created when you run the create manifests call of openshift-install. Could you zoom in a little? It's very small. Is that better? Yes, thank you. So when you do create manifests, there are some manifests that actually need to be deleted for UPI, and my script deletes the ones that need to go. Then there's also one that needs to be changed — this one here, cluster-scheduler — where the setting that says whether the masters are schedulable for application pods needs to be changed to false. That's something you want to do on a standard install. Could you do that live in a directory? We'd only need an install-config.yaml file and an empty directory for that. Oh, you know what — when you run the installer, it deletes the openshift directory and the manifests, and it also consumes the install-config.yaml file, for some reason. Here's what I can do — let me use my tool to do this quickly: OCT clean. OK, my bin is still there. Is the authentication... I mean, I'm keeping that, because if I do create manifests, I'm pretty sure it doesn't... let me copy the auth directory. Larry asks... oh, I actually accidentally deleted the auth, sorry.
But in short, yeah — they're JSON or YAML files for the manifests, and you basically just change the YAML. Do you still have credentials to log in, or is it lost? Well — is it still loaded? No, it's not, so I lost that. But the YAML files are standard YAML files, as you would expect, and the modifications are pretty straightforward. The docs even cover some modifications you can make, at the bottom of the installation customization page. You will get a directory with lots of YAML files, numbered from zero upwards, and the lower the number, the earlier the manifest is normally applied to the cluster. But did I destroy your cluster? It's my fault. No, no — all that happened is that the password, the access, is gone, but that's fine, I can redo it. So, the installation customization docs — yeah, it's this one — at the end there are examples of all sorts of modifications you can make to your config files to do things at install time. What was the example they give? Oh, setting your NTP server: there's an example of using a config to set the NTP server on the nodes when they boot up, and things like that. There are a lot of examples in the documentation for that. I think the cluster operators should be ready by now — I have a strong feeling. Well, I lost the config stuff when I cleaned it. Oh, OK. It may or may not have finished anyway, because I've noticed it can take a long time, and I don't think we want people to sit through that. So, any other questions? I'll stop screen sharing here, and Joseph, if you want to log in and show your stuff — because you've got some things to show.

Yes, I can show an upgrade. Let me take a look first. OK — after the first installation, I'm on OKD 4.5. I have a default machine set; I always delete it. This one is left over from a question someone asked me, so it's not really a useful machine set. You should normally only start an upgrade if nothing is degraded. In this example, I think the OpenShift samples operator in the older versions of OKD sometimes gets degraded, but the upgrade should succeed nevertheless. And I will do an upgrade now. If you have an internet connection like I have, you can choose the next version you can upgrade to — in my case I can go directly to version 4.6, so I will choose that one. But first — there was a problem with OKD 4.5, and I have to check something. I will SSH into one of the masters and check it; I'll write a blog article on OKD about this, because if you are still on the old version, you will need it. I hope I don't — oh, sorry, scrolling — no, everything is fine, I can upgrade. On OKD 4.5, some repositories on the Fedora CoreOS nodes were left enabled, and if they are still enabled during the upgrade process, it tries to pull packages from the internet, which can sometimes be a problem. So you have to disable them before you upgrade. Yes — all the repos are disabled.
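A sketch of that check without SSH, assuming the repo files live under /etc/yum.repos.d on the node (the node name is a placeholder):

```bash
# oc debug starts a helper pod with the node's filesystem mounted at /host.
# If grep finds nothing, no rpm-md repos are enabled and this particular
# 4.5 upgrade gotcha does not apply.
oc debug node/master-0.c1.homelab.net -- chroot /host \
  sh -c 'grep -l "enabled=1" /etc/yum.repos.d/*.repo' \
  || echo "no enabled repos found"
```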
Now I can go to the update button, and then switch to the overview. You see here that it tells me the cluster is working towards a newer version — currently we are at 4.5 from last year. If you want to see more, you can go to Cluster Operators; I always sort there. OK, it has not downloaded anything yet — it takes a little bit. I can see that it's creating a new cluster version operator, a new version — OK, done — and this should be the first operator that gets updated. So if I go to Cluster Operators and sort by version... it's still not there; it takes a moment. It's at one percent — let's give it a few minutes. What it does is download the new version of the cluster version operator, and the cluster version operator contains the manifests of all the operators that have to be upgraded; it then updates one operator after another in a certain order, starting with the API server. The last thing that gets upgraded is the machine config operator, and this operator upgrades the operating system at the very end. In that phase you will see that always one master and one worker become unschedulable as the nodes are automatically drained; then the new OS version is applied and the node automatically reboots into the new Fedora CoreOS version. In this case I have Fedora CoreOS 32, released last year in June, and after the nodes are upgraded we will have Fedora CoreOS 33 from, I think, January of this year. And you have to do nothing — it runs completely on its own. Let's have a look at the operators now. OK, the first operator is already on a newer version: the etcd operator. You can have a look at what it does — OK, it's already through; it installs a new version of etcd and automatically does the migration from the old etcd to the new version. If you watch it like I do here, whenever something pops up you will see all the pods that are spawned to do the upgrade: here we see a new etcd, and some quorum guards are created. The kube-apiserver operator has been upgraded and is now upgrading one API server pod after another. I always switch back and forth to the overview to see what state the pods are in — I do it just out of interest; normally you can leave it alone. Cluster operators — let's have a look at what happens here. etcd is still updating, as it should be. In the right column you see that two nodes — it's talking about masters here — two masters are already on a newer revision. What does a revision mean? A revision means a new state. It does not mean that this etcd is version 4 or that a Docker image was tagged 4; it means that something has changed. Even if you just delete an etcd pod, you will see it "updating", because the operator always tries to get back to its desired state — even when it's not upgrading but only reconciling, you will see it move to a new revision. Now we have three nodes. Sorry.
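As an aside, the same upgrade can be driven from the CLI instead of the console's Update button — the target version string below is only an example:

```bash
oc adm upgrade                     # show current version and available updates
oc adm upgrade --to=4.6.0-0.okd-2021-02-14-205305
# ...or simply move to the newest version available in the current channel:
oc adm upgrade --to-latest=true

# Watch the cluster version operator roll the operators one by one:
oc get clusterversion
oc get clusteroperators
```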
We have a question from Jesper: when you have installed with UPI and have the vSphere template and credentials specified in the machine set, as discussed earlier for scaling, would an upgrade roll new VMs or upgrade the existing VMs? Good question — I have to think about it. I think it will first create a VM with the older OS version, and once that VM is created and has joined the cluster, it automatically upgrades itself to the target OS version. In the end, all nodes have the same version regardless of where you started. Does that answer your question, Jesper? Let me add this: a machine set is used at two different times — for the initial building of the cluster, and then for adding nodes later. If you're adding new nodes, those new nodes are brought up to the same version as the rest of the machine set. They would be new VMs — that's why you have your vSphere credentials: you always get new VMs. And if you have a newer version of a template, the old VMs keep running in the cluster, because replacing them would disrupt the cluster; but if you upgrade the cluster itself, normally all VMs get upgraded in place. Jesper says: as I understand it, for IPI, upgrades happen by creating new VMs, and if they join successfully, the old VM evicts its containers, which are then spun up on the new VM, and it continues. Good question — I don't think so. I think for IPI it's the same: when the OS is updated, the same VM is rebooted with the new OS and any new changes from the machine config; it's not spinning up a new VM, I don't think, even in IPI. This will take some time — I don't think we can wait for all of it. But one trick: it takes rather long until all VMs are upgraded, and you can set how many nodes are upgraded in parallel. The default is one — always one master and one worker at a time — but you can increase this number and then the upgrade advances much faster.
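That knob is maxUnavailable on the machine config pool (default 1); raising it for the worker pool looks roughly like this:

```bash
oc patch machineconfigpool worker --type merge -p '{"spec":{"maxUnavailable":2}}'

# Watch the pools drain and reboot nodes during the OS part of the upgrade:
oc get machineconfigpools
```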
Which one is it? It's in the openshift folder. There we go. Oh, no, actually, you can't do it in the manifests, you have to do it in the Ignition files that get generated afterwards. I'm not sure why. But basically you can modify the timeouts that the nodes use when trying to fetch the machine config from the API server. I can show something similar on a running cluster with the machine configs; we use that a lot. Jamie, what do you want to do? Do you want to go ahead and log into yours? I'll stop sharing. That would be nice, we can switch back and forth. Now we have a workflow. Okay, so do you see my screen? Yes. Okay, so here we have one of the coolest features that has to do with CoreOS. These are the Machine Configs, under Compute, Machine Config. You find some manifests that tell OpenShift or OKD what you want to configure on your hosts. Formerly, if you wanted to install something or write a file or change a file, you had to SSH into the node; on OKD 3 it was very common to use Ansible to change something on all nodes. You don't have to do that anymore with Fedora or Red Hat CoreOS. You can change almost everything with MachineConfig manifests. Let me show you an example. I am searching for a random one... Yeah, here we have an example. This is a MachineConfig. It's applying configuration to all workers; you can set the role here. You can also do it for single workers if you want, you just have to label them. Then you have a spec field, and this spec field in fact has an Ignition structure. Here in the storage section I can say: write this base64-encoded content into this file at this path, and the owner of the file, the user ID, is root. If I decode that... I don't know what this is, it's the content of some file. And if you apply that MachineConfig, which is only a short Ignition snippet, you can write anywhere on the host where you have write permissions. Normally that's /etc, and the other one was /var, I think; you can write into those with this method. And what happens when you apply it is that you get, I will sort by date here, a new rendered file. We have here a rendered-worker and a rendered-master file, and I always sort by date because every time you apply something here you get a new rendered file. These files contain all configurations: the base configurations and the additional ones are merged together into one big Ignition file. Let's have a look into it. Here you see we have some SSH files, we have lots of files that are written with content, lots and lots of stuff. In fact, it is an Ignition file that is served to new or existing Fedora CoreOS nodes, and you see that it's a long list. Somewhere here you can also find systemd services; these are not base64-encoded. You can create your own systemd services, sometimes you need that. You see lots of services; this is the hyperkube service. And you can change what new nodes will see by applying MachineConfig objects. One thing we have done in my company: we have an air-gapped cluster and we wanted to pull the images not from the internet but from our internal registry. Our internal registry has a name, let's say something like this, and then the name of the image. If I want to use that, I could use some proxy.
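A minimal sketch of the kind of MachineConfig Joseph is walking through here: one that writes a small file into /etc on every worker. The name, path, and file content are made-up examples, and the Ignition spec version depends on the OKD release (newer 4.x releases use the 3.x spec, older ones used 2.2.0):

```sh
# Hedged example: write an illustrative file to /etc on all workers.
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example-file
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
        - path: /etc/example-motd
          mode: 420                 # octal 0644
          overwrite: true
          contents:
            # base64 of "hello from machine config"
            source: data:text/plain;charset=utf-8;base64,aGVsbG8gZnJvbSBtYWNoaW5lIGNvbmZpZwo=
EOF

# Each applied MachineConfig produces a new rendered config for the pool:
oc get machineconfig | grep rendered-worker
```

Applying it hands the change to the Machine Config Operator, which merges it into the rendered config and rolls it out node by node, up to the pool's maxUnavailable at a time.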
I could patch all my manifests to use this registry, which is awful. Or you can use a mechanism of Podman, and this mechanism works like this. I will open a terminal on the host. Normally you never have to SSH to your nodes; you can use this mechanism here. It's upgrading right now. I have a terminal, and now I enter a chroot command and I'm on the host. I'm not using SSH to get to my master zero, I'm using this method here. I'm now on the host, which is upgrading at the moment, a nice example. And what we have done is change this file here, /etc/containers/registries.conf. This is a configuration file for Podman, where you tell Podman which mirrors to use. This is a very short one; I think during the installation we will get more entries here. It always kicks me out. You can tell Podman that whenever someone wants to pull an image from Quay or from Docker Hub or wherever, it should in fact go to a different registry, and this process is completely transparent. So in this way, with this Podman configuration file, you can tell Podman to pull images from a completely different registry than the one that was addressed in the manifest. And we write this file with machine configs. Now maybe you ask: but during the cluster installation, this file will be overwritten. How do you make sure that at the very end of the cluster installation this file gets written? This can be done with these numbers. If you apply a machine config with a higher number, those configs will be applied later in the process, and the lower the number, the earlier the configuration will be applied. It's a kind of prioritization you can do here. We have done lots of configuration this way on the hosts, and we absolutely do not use Ansible for changing existing clusters. We don't even have to go to the nodes with SSH, because normally you don't have to do that. If you want to change your kubelet parameters, I think you even have a configuration object for that, a KubeletConfig, I think I've seen that previously. You can change everything with custom resources or other configuration mechanisms in OpenShift and OKD. That's what I love about this Fedora CoreOS approach: you can control everything with Kubernetes manifests or Ignition files. OK, that's what I wanted to show. I can show the progress. Let me see how far our upgrade is; we're at around 70 percent. How many operators are left? It still has a bit to do. OK, we will come back when it is upgrading the nodes; machine-config is always the last operator to do that. So we have a few more questions from folks. Hey guys, how are you doing? It looks like you're getting there. Yes, it's running fine. I closed something and it deleted his credentials because of me; it's not supposed to work this way. It's all good. All right. So for the folks who are listening in, I hope this is still really helpful. I think it's great that we've got these two guys here doing this. Are there other things that you would like to see us document better? I can see some of the questions in the chat. Are those things that we should be updating in the documentation guide? Yes, let's take some notes based on some of the things people have asked. Some of these are things that should be addressed, I think, in our documentation, and if not, then in the actual OKD documentation for sure. Yeah.
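A hedged sketch of the two pieces Joseph is describing here: getting a host shell without SSH, and what a Podman mirror entry can roughly look like. The node name and registry hostnames are made-up examples; the actual mirror configuration in his cluster is written by a MachineConfig as he explains.

```sh
# Open a host shell on a node without SSH (interactive):
#   oc debug node/master-0
#   chroot /host
# Or run a single command non-interactively, e.g. to inspect the Podman registry config:
oc debug node/master-0 -- chroot /host cat /etc/containers/registries.conf

# A mirror entry in /etc/containers/registries.conf (v2 TOML format) can look
# roughly like this; writing it from a high-numbered MachineConfig (99-worker-...)
# makes it land late in the rollout, as described above:
#   [[registry]]
#   location = "quay.io"
#   [[registry.mirror]]
#   location = "registry.internal.example.com/quay-mirror"
```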
I know, because in a couple of the other sessions there were things that are in the OpenShift documentation that are as clear as mud too. So we may end up with better documentation in OKD and then kind of move some stuff there. So, Larry, if you find those references, whether they're in the OpenShift docs or OKD, just log an issue or make a pull request against that chunk and we will make sure we get it merged and get it in. We're really trying hard to get other eyeballs on this documentation. And with that process, if you're not familiar with which repo to use or whatever, feel free to reach out to any of us; we can direct you to the right place and help you. I'm definitely game even for grammar checks from our friends, because, as we all do, I type too fast, and for other people English is a second language. And if the documentation were in German, Joseph, you'd be rocking it. So we've got some issues with that. We're doing pretty well so far, I think, but we have a couple more stubs from the HomeLabs group that we're probably going to get put in; we'll have Vedin put a stub in for his, and Craig for his, and Shree for his. The HomeLabs are pretty tricky because they're really specific to whatever hardware people have, but they're still helpful. How far along are you? Are you just waiting for something to complete? Yeah, I think we're wrapping up unless folks have any more questions. We've covered a lot of ground and got some great questions, and this was great; I had a great time. Cool. Jesper is asking another question of Joseph: he saw a blog about a newish Quay feature for transferring images to offline networks. Do you have a link, Jesper, to that blog post? A new feature? I think this was not a feature but more a workaround for a bug. If I understand you correctly, Jesper, there is a bug that affected one of the core libraries that deal with Docker images, and that library is used by lots of tools: by Skopeo, by Podman, by whatever. Because of this bug, which has existed for a long time and which showed up on Quay, the image manifest was written in a wrong version, and therefore company mirrors, company registries, were refusing to store these images. More and more images, including the OKD ones, were affected, and in my company the registry refused to store the images we mirrored. But in the end a few things happened: the problem was fixed, I had help from a few guys who showed us how to convert the images so they work again, and the third thing is that I could turn off the switch in my registry that refuses to store such images. So in the end it was not so bad at all. But I would not call it a feature; it was more a bug. Man, it could get turned into a feature, productized, and we could sell it back to somebody, I don't know. But anyway, if you're still waiting for something to complete, is that in the background? Yes, we are still waiting for an upgrade to finish so that we can show the OS upgrade. I don't know when it will start; it can take 10 minutes, maybe. Well, let's let people know what it'll be like at the end. Maybe we shouldn't make people sit through another 10 minutes. Yeah, sure. Yeah, I think it's been kind of a long day.
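For anyone who hits the same manifest-version problem Joseph describes, a hedged example of one way to convert an image while copying it between registries; this uses skopeo's copy command with its --format option, and the registry and image names are made-up. It is not necessarily the exact workaround his colleagues used.

```sh
# Copy an image into an internal registry while rewriting the manifest to the
# Docker schema 2 (v2s2) format, which strict registries generally accept.
skopeo copy --format v2s2 \
  docker://quay.io/someorg/someimage:tag \
  docker://registry.internal.example.com/mirror/someimage:tag
```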
And we believe you that the upgrade will work. Even though the time left says five hours, I only set it up that way in case somebody wanted to do a complete live deploy on their own hardware, so you're not obliged to stay another five hours; I know you can see the clock. If you guys don't have any more questions, and no one else does either: next up, we're going to try to do these things quarterly. I think it feels right to do this. So Larry, your team, and this is a chess game, you're in checkmate here, your team is on the hook for doing some sort of deployment demo next. And Jesper, we will get you in on a Saturday, since Saturday is better for you; I don't know what time it is where you are, I think you're in Finland or someplace like that. So we'll make you guys do some of the demos too, and maybe have fireside chats as well. I wanted to really thank both Joseph and Jamie, Jamie especially for working in the background and helping with organizing stuff this week. I had some family stuff going on, so it was really wonderful to have someone have my back. And Joseph, for all the work you do around getting the blog posts done: it's getting late there, and it's a Saturday, and I know you guys took all this time to make this happen. So thank you very much.