The first cut is here, so I didn't bother with a proper video; I've got a bunch of screenshots and a lot of talking, but here's what we have. We'll show an IPI install for GCP, show a few known issues, and take a quick peek at what the installer is actually doing. The UPI flow would be pretty different, because there you have to set things up yourself, and in our case we want the installer to take care of that.

So here's the install-config template I'm using. It's not much different from other providers, actually. The differences are, of course, platform gcp, your project ID, and the region you're using; the rest is pretty much the same. It's in fact a Jinja template, because I'm templating those with Ansible and creating clusters on demand. I wrote my own wrapper around the installer where I can do make gcp or make aws, and I can have multiple clusters running at once. Note that it's not designed to keep a long-term cluster alive; it's just to quickly spin them up and destroy them. So I just run make gcp and it does the thing. Here is the link to that; you can dig into my terrible shell and Makefile skills later.

What it does, in fact, is pull the latest installer: we pull from origin and verify that it's the correct version and that the release image matches the desired one. Next, it templates an install-config from the GCP template, using the base domain for our GCP account; it's literally calling an Ansible command to template it. We also copy it to a temporary folder for my cluster and save a copy, because the installer consumes the install-config.yaml. Later on there's this huge, terrible command which starts the installer from the image we just pulled. First of all, we mount the folder with our cluster, because you can have different ones running at the same time, and we make the installer output to that particular directory.
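A minimal sketch of what such a templated install-config might look like; the field names are the installer's real install-config schema, but all the values and Jinja variable names here are hypothetical placeholders, not the ones from the demo:

```yaml
# Hypothetical install-config.yaml template for GCP. The installer consumes
# this file on `create cluster`, which is why the wrapper saves a copy first.
apiVersion: v1
baseDomain: "{{ gcp_base_domain }}"     # templated by Ansible/Jinja
metadata:
  name: "{{ cluster_name }}"
platform:
  gcp:
    projectID: "{{ gcp_project_id }}"   # the GCP project to install into
    region: us-east1                    # placeholder region
pullSecret: "{{ pull_secret }}"
sshKey: "{{ ssh_public_key }}"
```

Compared to, say, the AWS template, only the `platform` stanza really changes; that is what makes a single Jinja template per provider workable.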
Next, we mount the Google credentials. Finally, we run the create cluster command, because the entry point in this image is openshift-install. We also override the release image, just to be sure, and currently you have to override the OS image used by the installer to point to our local copy of Fedora CoreOS. Here's the output from the installer while it's running: basically it notifies us that the OS image has been overridden, and so has the release image, but that's what we expect.

After about five to ten minutes to create the necessary resources, we'd see that instances in our console have started creating and the bootstrap instance got assigned an external IP. If we SSH to that IP, we'd see that the initial Fedora CoreOS image has booted and there is a bootkube service there which does all of the work, and we can watch it using journalctl. The first thing the bootkube service does is upgrade us to the latest image from our release payload: it extracts machine-os-content, pulls it, applies it as ostree content, and then finally reboots. After the reboot we'd see that the Fedora CoreOS version has changed to the latest one (as of March, when I ran this) and the bootkube service continues. That's basically a huge difference from RHCOS, where the initial image does not pivot into itself. During the process, on the bootstrap node, you can already export the kubeconfig from /opt/openshift and start watching what's happening in the cluster once it assembles the API server.

Oh, this is pretty small; that should not work. Let me fix it. Sweet, that's much better. Can you all see that? Yeah? Great. So what we see here is that the first operator, called version (it's the cluster-version operator), started progressing, and it has started the network operator, which is also progressing. This is why our three masters are not yet ready.
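The "huge terrible command" from the wrapper looks roughly like this. It's a sketch, not the exact Makefile recipe: all paths, image references, and the credentials mount are hypothetical stand-ins, while the two `OPENSHIFT_INSTALL_*` environment variables are the override knobs the installer recognizes. `DRY_RUN=1` (the default here) just prints the command so the shape can be inspected without podman or GCP credentials:

```shell
#!/bin/sh
# Sketch of the wrapper's installer invocation (placeholder values throughout).
RELEASE_IMAGE=${RELEASE_IMAGE:-registry.example.com/okd/release:latest}
OS_IMAGE=${OS_IMAGE:-my-gcp-project/fedora-coreos-image}
INSTALLER_IMAGE=${INSTALLER_IMAGE:-registry.example.com/okd/installer:latest}
CLUSTER_DIR=${CLUSTER_DIR:-$PWD/clusters/gcp-demo}

# With DRY_RUN=1, print the command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run podman run --rm \
  -v "$CLUSTER_DIR:/output:z" \
  -v "$HOME/.gcp/osServiceAccount.json:/creds.json:ro,z" \
  -e GOOGLE_CREDENTIALS=/creds.json \
  -e OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE="$RELEASE_IMAGE" \
  -e OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="$OS_IMAGE" \
  "$INSTALLER_IMAGE" create cluster --dir /output
```

The mounted cluster directory is what lets several clusters run side by side, and the entry point of the image being openshift-install is why the trailing arguments are just `create cluster --dir /output`.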
So the network configuration is being installed on them, and we have tons of pods hanging in Pending because the nodes are not yet ready. When the network is finally installed, those masters report that they are Ready, other operators like machine-config start progressing, pods move into creating state, and so on.

The difference from the UPI flow here is that there are no workers yet; they physically don't exist. That's expected, because we use the machine API to create them dynamically: we define three machinesets, each in a different availability zone, and we want one machine in each. After some time, when the machine-api operator starts progressing, it creates the machines and they get status Provisioning. That one is not yet properly processed here, but later we'd see new workers appearing in our GCP console, and eventually the network configuration is installed on them too, the necessary files are copied, they become Ready, and more operators start progressing. The most critical ones are, of course, authentication and kube-apiserver. There are lots of CrashLoopBackOffs in other operators at this point; that's pretty much expected, because authentication has not yet created all the necessary certificates. But in the end we'd see that the install has completed. The OOM-killed pods are irrelevant; that's actually a bug and should be fixed. All machines are ready and we're done; we can run oc status, and that's pretty much it.

Two known issues, again. You have to use the installer's OS image override (or a similar override) because the image is not yet uploaded; once that's done, we'll update the installer to point to the correct location. And in my case, bootstrapping never actually completed. It did complete, but openshift-install on my side never noticed that and never asked to destroy the bootstrap node. I'm pretty sure the problem is on my end, because everything else looks like it should work.
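For reference, one of the three compute machinesets might look roughly like this. This is a trimmed, hypothetical fragment: the API group and `GCPMachineProviderSpec` kind are the real machine API types, but the names, project, zone, and machine type are placeholders, and required fields like the label selector are omitted for brevity:

```yaml
# Hypothetical compute MachineSet for one availability zone; in this demo
# there are three of these, one per zone, each with replicas: 1.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: demo-abc12-worker-us-east1-b    # placeholder name
  namespace: openshift-machine-api
spec:
  replicas: 1
  template:
    spec:
      providerSpec:
        value:
          kind: GCPMachineProviderSpec
          machineType: n1-standard-4    # placeholder instance size
          region: us-east1
          zone: us-east1-b
          projectID: my-gcp-project     # placeholder project
```

The machine-api operator reconciles each machineset into Machines, which is why the workers only appear in the GCP console once that operator starts progressing.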
So if we have this confirmed, we'll dig into that a bit deeper.

Hey, Vadim, go back two slides. Two slides, that one. Yeah, those out-of-memory-killed pods: I don't know if I see it on etcd, but I see the same thing when I'm doing the UPI install in my lab. It doesn't seem to adversely affect anything, because the correct pods do appear to be running. But if that's a bug, I might be able to provide some additional information, because it's happening on the UPI side as well.

Yeah, it also affects OCP; there is a Bugzilla for that. Those pods update the kube API schema, and once they're done they are using way too much memory. We're limiting them, and they get OOM-killed because they've used all of their limits. It shouldn't affect anything. It can, though, so it's a bug which needs to be fixed, of course. And since OKD is just a fork of the installer and MCO, the rest is the very same as what we have in OCP. So every single bug that's outside of the installer's field, or MCO's field of putting the proper files in place, goes straight to OCP, of course. But we'd like you to notify us that you found something, and file a bug in the OKD repo as well, so that we'd know how bad things are. Any other questions?

Hey, Vadim, it's Danny. I just have a quick one. I might be off a bit, but I think a while ago we were discussing moving to the etcd operator and having it used as part of bootkube and all that. Has that been done, or is it still on the plan?

Yeah, it is. I barely know how it works, honestly.

Right, but it's there. So bootkube is basically using the operator nowadays, versus when we used to be on, I don't know, 4.0 or 4.1?

Yes. Previously we asked MCO to template the etcd members' static pod definitions; I think the etcd operator is doing that for us nowadays.

Okay. But before the official OCP 4.4 communication goes out,
I think we will have a proper description of the process.

Okay, yes, let's find out what's happening. Any other questions?

How long did it take you to set this up?

The whole process, about 30 minutes. At maximum, the installer will allow 20 minutes of bootstrap and 40 minutes to set up the cluster, and I think it's about 20 minutes for the infrastructure, so it cannot take more than an hour and a half; it will fail in the middle if it would take longer than that.

And this is set up IPI-style, right? So that means that in the console you can do things like grow the cluster and whatnot, and everything just kind of magically happens correctly? Or are there caveats there too?

Once I run create cluster, I'm not touching it at all, just watching it fail or eventually succeed. And wait, how did I do that... oh yeah. And yes, due to machine API being supported here, I can scale machines and things, and I won't have to provision them manually myself.

That's the cool part. Well, any chance you might remember, Vadim, whether we support the machine API on VMware? Because I remember in the past it was not ready for that.

Yes, we have a machine API provider for VMware, and I think there is work underway to make VMware IPI. I don't think we have this in the Fedora CoreOS installer yet, because we didn't rebase, and I'm a bit concerned about that. But yeah, it's totally possible.

Thanks. It's Neil; in the lab work that I've been doing, even though it's UPI with libvirt, if you provision another machine with virt-install, pointed at booting off of the worker ignition config, the only thing you have to do is approve the CSR, and it will join. Cool. So before I left for the weekend, I got a cluster running at home with the usual three masters and six workers, and you can just keep adding workers to it till you run out of CPUs and RAM.

That's cool. And terrifying, but it's awesome.
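The CSR approval step for joining UPI workers can be scripted; here's a minimal sketch, assuming `oc` is logged in with admin rights. The `OC` variable is just an illustrative indirection that makes the helper easy to test or dry-run, and note that each joining node typically submits two CSRs in sequence (client cert first, then the serving cert once the first is approved), so you may need to run this twice:

```shell
#!/bin/sh
# Minimal sketch: approve all listed kubelet CSRs so UPI workers can join.
# OC defaults to the real `oc` binary; override it to test or dry-run.
OC=${OC:-oc}

approve_pending_csrs() {
  # List CSR names and approve each one. A more careful script would
  # filter on Pending status first; this sketch approves everything listed.
  $OC get csr -o name | while IFS= read -r csr; do
    $OC adm certificate approve "$csr"
  done
}
```

With a machine-API provider (as in the IPI flow shown earlier), this step disappears entirely, since the machine approver handles the CSRs.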
Yeah, and you can have this as well. For libvirt, what you can do is install the cluster-api provider for libvirt (the libvirt actuator, I think), and you'd be able to create machines in a similar fashion, using machinesets and machines, and without approving CSRs, because that's taken care of by the machine API. You'd get the very same experience, basically.

Oh, I didn't know that. That's cool. That would be very cool.

All right, I guess that's all. Thanks for sharing that, Vadim, that is awesome. That was fantastic.