Let's start off with: good morning, everyone. We're actually in Australia (I'm in Sydney, Nick's in Canberra), so it's a very early start for us. If we stumble a little, excuse that. And I'm sure that'll be no problem. But let's get straight into it, because as you know, some of the installations we've looked at today have taken some time, and ours is no exception. So I'm going to quickly introduce the architecture, and then we're going to jump into getting the installer kicked off and start to see the good stuff. To be open, I come to the OKD space from OpenShift; I work for Red Hat, so I will probably mix the terms up. To put OKD on OpenStack, we have to look at some key integration points, and in the middle of all of that is the use of the OpenStack APIs. We don't ever want to interact with our underlying infrastructure without going through those APIs, so that we can ensure a good sense of control over the environment. When we do this installation, we're going to utilize different components of the OpenStack environment, which we'll dive into a bit later once we have it running and questions come up: Glance for image storage, Cinder for block and disk storage, and some object storage too; a bit of everything. And of course, during the talk, Nick will break in with his insights; please add anything based on the experiences we've all had. So we're going to try to implement essentially what's sitting here, to demonstrate how OpenShift, or OKD, can actually sit on top of OpenStack. In this example, you can see I'm setting up on an OpenStack overcloud, so we're going to be using Red Hat OpenStack Platform, but it will work the same with most OpenStacks. As you've heard a bit today, there are multiple ways to install OKD. There's the full automation, the installer-provisioned infrastructure (IPI) model: a guided install, easy but very prescriptive. We're going to use that today so we can dig right into the way OpenStack handles OpenShift. There is, of course, a UPI model, and that's fully supported on top of OpenStack as well. That's where we use pre-existing infrastructure; it's flexible and can be more complicated, but we supply plenty of helper scripts, and the community is writing more. There's lots of really cool stuff going on. We don't require OpenStack admin privileges for either of these installs, and to really reinforce that today: I did not install this OpenStack cloud. I'm using an internal cloud, I do not have admin privileges on it, and I just wanted to see what it was like as an actual user of the installer. So let's go straight to the install, because, again, it's going to take a bit of time, and we'll see how far we get. I have two environments. I've got one I prepared already, and hopefully we can use that to show a simple demo of something. But we're going to install on this current environment, called Meculin. So let's go ahead and do that. All right. Hopefully this is large enough on screen. We have a running OpenStack cloud, right? Nothing too exciting. It's got Cinder for volumes, and a basic setup where we have an external network presenting our external connectivity.
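Before we dig in, here is a minimal sketch of the kind of pre-flight checks a plain, non-admin tenant can run against a cloud like this. These are generic OpenStack CLI commands, not anything specific to this demo cloud:

```bash
# Rough pre-flight as an ordinary tenant; no admin privileges needed.
openstack network list        # expect the external net plus any tenant networks
openstack flavor list         # pick something with enough RAM/CPU for the masters
openstack quota show          # room for several instances, volumes, ports, security groups
openstack floating ip list    # the two pre-allocated VIPs (API and *.apps) live here
```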
I've gone ahead and created a simple tenant network, because what I want to do is show that we can actually plumb some of our OpenShift nodes into that network. What else have we got sitting in here? Some object storage. In this case we're backed by a Ceph install, and this goes through the RADOS Gateway (RGW) to access it. What else? A couple of floating IPs that need to be pre-allocated and set up in DNS. But to really understand how that works, let me switch back to the installation window. How we talk to the cloud for the installer is through the usual clouds.yaml file, and in this case I've got two clouds defined here. I'm going to be using the one I've called "openstack", with my demo user. What we'll see in a minute is that when we run the OpenShift installer, it's going to communicate with the cloud directly and prompt me for the various pieces it needs from that cloud. That's how we then create our install-config file. Now, while it's an IPI and it will generate most of that on its own, we will add some extra pieces to it. So let's get right to building the install. We're going to do a create install-config, and as I mentioned, I'm going to do this off a specific cloud. The first thing it asks me for is a public key. Again, this will be seeded inside the Ignition file to allow us to SSH to the CoreOS (Fedora CoreOS) nodes; you've seen that already today. So now it's actually reading the clouds.yaml file, and it's asking me which cloud I want to install on; I'm going to choose "openstack". It asks me what my external network is. You can see the same stuff I just showed you a moment ago: we've got a custom tenant network and an external one, so this will be my external network. I need to have two IPs set up prior to the installation. One is for the API VIP, and we'll see how it sits in front of the tenant network that gets created. This one I've preset in DNS, in Route 53, just to make it easy. The other IP required is for the Ingress router: same situation, it's the wildcard domain for apps, and it's also been set up in Route 53. So we'll select that. Got to choose a flavor; these are being presented to me as a tenant in the cloud, and this one looks good. The usual stuff from here: I've got a base domain prepped, a cluster name, and the pull secret is the same fake one we saw before. That will generate my install config, but I actually have to make some alterations to it. I have one ready here, alongside the one we just created. A couple of things we're going to do to make this IPI install a bit more customized; you'll see them on the left in red, and the fragment below sketches the same changes. I have reduced the number of worker replicas down to one. In theory I'm hoping to speed up the installation time; in practice, maybe not. Additionally, I'm able to add that additional network ID. This block actually belongs to the worker nodes, not to the controllers, so the installer will automatically attach the workers to that tenant network you saw. And for the control plane nodes, I'm going to add a root volume, a block volume out of Cinder, the volume pool provided by OpenStack that I can actually carve volumes out of. Again, I'm doing this mostly to demonstrate the features, but you might use it because you need to back etcd with something fast.
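A hedged sketch of those install-config tweaks. The keys are real installer fields for the OpenStack platform, but every value here is a placeholder, not the demo's actual IDs:

```bash
# Fragment to merge by hand into the install-config.yaml generated by
# `openshift-install create install-config`; all values are placeholders.
cat <<'EOF'
compute:
- name: worker
  replicas: 1                              # demo-only: one worker to speed things up
  platform:
    openstack:
      additionalNetworkIDs:                # plumb workers into the pre-built tenant network
      - aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
controlPlane:
  name: master
  platform:
    openstack:
      rootVolume:                          # back the control plane (and etcd) with Cinder
        size: 30
        type: standard                     # whatever volume type your cloud offers
platform:
  openstack:
    externalDNS:
    - 8.8.8.8                              # seeds the installer-created subnet's resolver
    clusterOSImage: fedora-coreos-raw      # pre-uploaded raw image (prep sketched below)
EOF
```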
We might prefer to install this way just to have the options. And then, what else? I've added an external DNS, so when the OpenShift installer creates the subnet, it can seed it with a name server. It doesn't have to; many clouds may offer that directly. But it's a convenient way. And, yeah, Nick, if you've got anything to add, I absolutely hope you jump in. I was just going to say, with the volumes, having centralized storage like that allows you to do live migration much more easily between computes on OpenStack. So additionally, I'm specifying the cluster OS image. Those who have done bare-metal installs are probably familiar with this. I'm doing it because, while the IPI installer does pull down a QCOW2 on its own, as you saw in the previous install, I'm backed by Ceph, so I want to use a raw image. I'm able to create my own, place it in Glance, and still use it from the IPI installer. So again, in the interest of time, I'm going to add that config file into my directory so that I'm now actually installing off of that one. All right, so again, those are the pieces I just spoke about. And as the guys mentioned, we're using OVN-Kubernetes on here; you might have seen that previously. It's available in OKD, but in OpenShift it's tech preview at the moment. So anyway, let's get the install going. We're going to watch the environment as we do this; I've set up a watch that shows the various components as instances are built, images are used, that type of thing. Let's get this going: the same create cluster, off our directory, and I like to add the debug flag. Okay, so this should go ahead and get that install going. Right, so it's loading off of clouds.yaml, and very quickly, as the install progresses, we'll see the infrastructure begin to appear in OpenStack. Yeah, I was going to say, one of the reasons you might want to attach multiple networks is if you want to implement something like Multus later and attach pods to multiple networks, to give pods direct access to the outside world. Yeah, Nick's playing with some really exciting stuff where we can start putting OpenShift into telco environments, so it's really cool. As you can see, the build has begun; the installation has started. One point to notice is that the Ignition file has been created. In an IPI install this is stored automatically in Glance; in UPI you can store it wherever you want, but for IPI we get it for free in Glance. Additionally, the volumes have started to be built for the control plane and bootstrap nodes. Now, we didn't actually specify anything for the bootstrap, but because it's part of that initial cluster doing the bootstrapping, it defaults to the same settings as the control plane. What else can we say about this? The master ports are being created and allocated against that internal tenant network you're seeing. Yeah, and the actual DHCP control of that machine network at 10.0.0.0/16 is handled by OpenStack as well. Right. Let's see what else. The floating IPs have now been matched to the internal network, and I believe trunks are created for each of the instances as well. In a telco world where you may want a jumbo MTU on the system, you would set that all up through your OpenStack Neutron settings, the default MTU it uses when it creates networks. And the router, sorry, the DHCP server, the OpenStack DHCP server, will tell the OKD VMs what the MTU is and they'll set it accordingly, so they'll come up with jumbo frames on the node network as well.
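Circling back to that clusterOSImage override from a moment ago, here is a hedged sketch of how one might pre-stage a raw image in Glance for a Ceph-backed cloud. File and image names are placeholders:

```bash
# Assumes you've already downloaded the FCOS OpenStack QCOW2 from the
# Fedora CoreOS download page; names below are placeholders.
FCOS_QCOW2=fedora-coreos-openstack.x86_64.qcow2
qemu-img convert -f qcow2 -O raw "$FCOS_QCOW2" fedora-coreos.raw
openstack image create \
  --disk-format raw --container-format bare \
  --file fedora-coreos.raw fedora-coreos-raw   # matches clusterOSImage above
```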
All right, now, what you'll notice here in an OpenShift-on-OpenStack install is that a couple of other things have appeared. We have another floating IP; this one is actually chosen at random, not preset, and it's attached to the bootstrap node to allow us to jump on there and, I guess, troubleshoot, look around and see what's going on. And that same SSH key was added to that node, so in theory, once it comes up, we can connect to it. So let's go ahead and take a look at the bootstrap. Terrific, the bootstrap is actually on its way up, and hopefully we can get onto it. You might want to show the console of the controller that's continually trying to get the Ignition file from the bootstrap; when it's ready, it'll get it and boot itself, yeah. All right, so I can probably get this out of the way. We can see what OpenStack has done and what's being built up, and as Nick said, the controllers, there we go. So this is the two-phase installation: the first phase is completed on the bootstrap, so soon it will be able to give this controller its Ignition config. Now we should have crictl and all of that on there, and it's just a case of waiting for the installation to begin. And I've set this up, so hopefully we should start to get some life out of things; a bit too early, I think. Yeah. So probably the more exciting thing to watch here is the journal. It's downloading the images live, so depending on your internet speed, it could take a bit of time. But we can see it coming up. All right, you can see it's all beginning to start up. Perfect. Eventually that cluster forms; I mean, this is the same install process you've been seeing all day. As Nick was pointing out, the images have just been grabbed by the controllers, and now we should start to see the cluster form. And that was our first piece. Now, with the installation, if you notice, there's no floating IP placed on any of the control plane nodes, and ideally, right, you're not going to jump onto them. But we can actually get over to them if we want by using an SSH jump command, and then we can watch what the masters are doing. They're given the same key, and so the same process is going to be happening. Sorry? You can do it the lazy way like me and assign another floating IP address to them via Horizon. Yeah. You can see we don't have any of the tools yet, so this one hasn't gone through the first phase, but you'll see it's happening there. There we go; in a minute, it'll kick us off. And what you're seeing is the same installation process you've seen across all the other platforms, except we're sitting here inside OpenStack. What else can we add? It's not the most thrilling; I have more slides. There we go. So we're switching over to the second phase in a minute; we'll get kicked off and then the cluster will go ahead and build up. And once that's come on, I want to jump back onto a machine and have a look at some of the networking aspects being set up on there. There we go.
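If you do want to poke at the bootstrap the way we just did, it's roughly this; the floating IP is whatever random one the installer attached:

```bash
# The bootstrap's floating IP shows up in `openstack server list`; the address
# below is a placeholder. The login user on Fedora CoreOS is 'core'.
ssh core@203.0.113.10

# Then, on the bootstrap node itself:
journalctl -b -f -u bootkube.service   # follow phase one of the install
sudo crictl ps                         # containers running under CRI-O
```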
As you can see, the installer has also set up all our security groups for the cluster. And again, they're all unique to this one cluster, so as a tenant in OpenStack, I could run multiple clusters in the same space; you just have to have your quotas set up for things to work. If I didn't have Swift, it would place the registry back onto block storage in Cinder, but we're going to talk about that in a minute. I just really did want to get the installer running, in the hope of actually hitting that final line. So if we now go back to the masters. And one thing to note is that the reason you can have multiple clusters as a tenant is that every resource deployed for a specific cluster is tagged with the cluster ID. So if you were to go into OpenStack and do a show on a network it created, you'll see that it's tagged with the unique cluster ID, and when it comes to tearing the cluster down, it doesn't go and tear down every resource you've got deployed in OpenStack. BlueJeans is hurting my machine. So here's what Nick's, hopefully I'm still with you guys, yeah, here's what Nick's talking about. We've got the tag, we've got the unique network set up there. If you click on the actual name of the network, August. Yeah. It doesn't show you the tags in the GUI, but if you were to do an openstack network show on the CLI, you'll see a tags property as well. On this guy? Yeah, if you do a show on that guy, you'll see the tags down there, where it marks them specific to this cluster ID. And it does it to the VMs and everything as well. Let's hop back over onto our control plane before the bootstrap disappears. Again, before, I ran crictl, and you can see some of the functions and components we have. So CoreDNS is running, and this is handling all the internal name-server work for the cluster. You can see the settings that were created when we did the installer, where I was able to set the forwarder. And then we can see how our internal network has a set of VIPs created, and these are going to be balanced across the entire cluster. So this is all handled internally with the IPI install. Obviously with UPI you can do the different things you need to, but as a way to get OKD running very fast on OpenStack, without having to fiddle with too much external networking and DNS and such, this is quite convenient. And this is managed, again, all by keepalived, which sets up the VRRP communication. And again, Nick's the networking expert here, so he'll correct me when I say the wrong things, but we set up various different VIPs inside. This one is for the API. We set up a DNS VIP so that all the internal nodes can resolve against it, as well as the Ingress VIP, which we'll go and set up in a minute. Now, if your OpenStack is within a private network and these IP addresses clash with that private network, you can renumber them through the install-config file. Also, because we're going to lose the bootstrap soon: HAProxy is running there as well. Again, this is all set up by the IPI installer, and we get it all for free. And as Nick was saying, in that install config you can actually be very specific, and I know you've run into issues where that does happen, right? Where the CIDR is being used elsewhere. Especially if you're installing on an OpenStack that's inside an enterprise network, which will typically be using 10.x addresses itself; you might want to change it to 172.16 or some other private addressing, as in the fragment below.
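A hedged fragment of that renumbering; depending on the installer version, the key may be networking.machineCIDR rather than the machineNetwork list, and the CIDR here is just an example:

```bash
# install-config.yaml fragment (placeholder CIDR): move the machine network
# off the default 10.0.0.0/16 when it clashes with the enterprise network.
cat <<'EOF'
networking:
  machineNetwork:
  - cidr: 172.16.0.0/16
EOF
```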
Now, one other piece of setup that we need to do manually (I think automation for it is being added in an upcoming release) is that we still have that extra floating IP, and we want to go ahead and attach it to the Ingress port so that the apps URL, and then all the apps, can resolve. So at the moment, we do that manually with just an OpenStack command. So this is a floating IP: I'm going to set the port to use the floating IP we designated and set up in DNS. So now, if we look up any name under the apps wildcard for the cluster, we're actually going to get resolution against that IP. So now is where we get to sort of the boring part of the install. Everything is rolling, but what's going to happen in a minute, as everyone knows, is we're going to have that bootstrap... oh, there it goes, actually; that's nice timing. So the bootstrap is currently being removed, and that means the cluster is actually self-sufficient. Great sign. So we've got our three masters up and ready, and we are now moving into that second part of actually getting the cluster and the operators going. Actually moving along quite nicely. And then, finally, everyone's favorite: we can see that the various operators are starting to come up. So what I thought we might do while this is running is quickly look at a couple of slides. Before we go: the timing here is actually accelerating, and I love it. What we have going on here is that the bootstrap's been removed and we have our control plane now established. You'll notice that the Ignition image has been removed as well. That's wonderful, because obviously there's sensitive material in there and we don't want that sitting around, so the installer removes it. Obviously, if you're using another method, you need to do that yourself, or expire it or protect it or whatever you need to do, but the installer is looking after that. We are still, as you remember... I've got one worker coming up, and we're going to connect it to this extra network, but we won't see that actually happen until the workers come up. So let me quickly go back to these slides, because I think it helps to visualize a bit of what's going on while it happens in the background. So essentially, this is the IPI deployment flow on OpenStack. It's similar, obviously, to other cloud providers, but I found it helpful just to outline it, right? We run the installer, and then this bootstrap node grabs our Fedora CoreOS (or Red Hat CoreOS) image from Glance, grabs Ignition from Glance, and then sets up the bootstrap cluster; that cluster can then pull the containers and maintain itself as normal. These are all running through Nova, and they can be backed by Cinder; we'll talk more. Once that bootstrap cluster is established, the keepalived VIPs move back over to the three master nodes and the cluster is established, and then we are able to go ahead and add workers through a MachineSet, which is what the installer will actually build for you. So let's dive quickly into the integrations while the installer runs in the background. We've visited this already; you should be familiar with what's happening there.
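For reference, that manual Ingress step from a few minutes ago looks roughly like this. The port name follows the installer's cluster-ID-plus-ingress-port convention, and both the name and the address here are placeholders:

```bash
# Attach the pre-registered *.apps floating IP to the ingress port the
# installer created; port name and IP are placeholders.
openstack port list | grep ingress
openstack floating ip set --port mycluster-abc12-ingress-port 203.0.113.20
```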
So where are the integration points for OpenStack when you're putting OKD on there? One is the image service, Glance, where we store a CoreOS image for the base install, and the Ignition payload. We can't pass Ignition directly through cloud-init because it's too big, so what we actually do is give the instance just a URL to go retrieve it. It does mean that the instances in that tenant will need access to Glance. The next thing to talk about is networking. So to try to summarize what we saw happening there (and please, Nick, add more when you have it), we stick OpenStack floating IPs in front of these internal VIPs, and they're all managed by keepalived to balance across the cluster. There's an API VIP on the bootstrap until the cluster is up, which shouldn't be unfamiliar, and it's then handed back to the masters. Those VIPs, as we talked about, have to be established in DNS prior to installation. The masters will then run everything that's needed for the cluster: HAProxy, CoreDNS with its plugin, the mDNS publisher, and keepalived. The Ingress VIP is then held by the workers and not automatically fronted by a floating IP; in our case, we had to assign that manually. The machine network is the Neutron tenant network that was built. This is what Nick was talking about: you can actually change it to a different CIDR if you need to, or a different network ID, but it is managed by the installer in an IPI. You can customize it somewhat, as Nick said. Let's see. As a tenant on OpenStack, you can use the same IP addressing across multiple clusters, and what distinguishes them is the NAT address, the floating IP address, that you assign to them. So each cluster that you deploy in OpenStack as a tenant needs its own unique floating IPs and DNS names registered, but in terms of the internal IP addressing, they can be the same. Are we seeing a lot of people doing one cluster per cloud, or what are you seeing out there? In enterprises, I mean, you know, I see more multiple clusters: a dev cluster, a pre-prod cluster, a production cluster, say, on OpenStack. Or you may even go down to specific development teams; a development team may want their own cluster, and as their own tenant on OpenStack, you can easily spin up their own cluster for them. Another aspect of networking which we didn't talk about here is that this is running the OKD SDN on top of the OpenStack SDN, so in a way, we've got double encapsulation with VXLAN. But with OpenStack, you've got the option of deploying Kuryr, which allows Kubernetes to interact directly with the OpenStack SDN itself. It removes the OKD encapsulation layer, and the infrastructure networks that OKD requires get created natively on OpenStack itself. And in that kind of deployment, if you were to enable Kuryr, there would be many, many more networks created on OpenStack, not just the main node network. At the moment, that's abstracted from us, and all the OKD infrastructure networks are being encapsulated across this orange network that you're seeing here on OpenStack.
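If you did want to try Kuryr, it's chosen at install time through the network type. A hedged fragment; whether it's available depends on your OKD and OpenStack versions:

```bash
# install-config.yaml fragment: swap the default SDN for Kuryr.
cat <<'EOF'
networking:
  networkType: Kuryr
EOF
```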
But if you were to integrate the OKD and OpenStack SDNs through Kuryr, you'd see multiple networks being created here: one for each infrastructure network, and one for each namespace or project in OKD as well. Yeah. I didn't play with Kuryr on this one, partly because of the version of OpenStack I'm using; it's on OVS, Kuryr can be a bit heavy, and my resources are limited, so I wasn't able to do it. So, this is quite good: the worker node has actually come up, so we're progressing quite well with the installation. As you can see through my watch command, we've connected to both networks, my BYO OKD network as well as the OpenShift network that was created by the installer. Additionally, we now have containers created. Not those containers; a different kind of containers. These are object storage containers, and they're going to be the backing for the internal registry. Now, it bothered me a lot that there were multiple names here, so I asked the developers, and it's a bug; there's only meant to be one, and they're working through that. Not sure how that happened, but these containers will be created. If we had been using Cinder for our registry, we would see another volume attached to one of the workers. Obviously that's not a best practice for HA, whereas an object store isn't pinned to an instance. So far the installation is going quite well. All the components are coming up, and as you saw in here, we've actually attached into that network. So let's see the status of the install. All right, the worker's up; I might even be able to do a scale on here. And the installation seems to be moving along pretty nicely. This should complete shortly, so let's quickly jump back to the slides to finish what we were talking about. Someone I work with at Red Hat, Robert Heinzman, a colleague of mine, did this incredible drawing to try to capture what's going on with all the different components of the installation, so I wanted to reproduce it here. It's all his, but it really helped me to understand how it holds together. You can see where we have our floating IPs sitting in front of our tenant-based VIPs, balanced with keepalived, and how the whole thing is held together. Again, just to talk through these integration points: in one case, I showed an example where we set a root volume. I set it to 30 GB in the actual demo, and we saw those volumes connected. And as I mentioned, Cinder can be the registry backend, but that's not the preferred method. The installer will actually test to see if it has access to object storage; if it doesn't, or gets some kind of no-access errors, it will go ahead and set the registry up in Cinder. Something that's coming soon, and something that's near and dear to Nick these days, I think, is the addition of more storage support for OKD and OpenShift. Right now we're a little bit limited, in that there's no RWX support when using Cinder for volumes. Of course you can use an NFS server and set up a storage class, but Manila is being brought in, and that's what brings a lot of the RWX functionality. Nick, if you want to add anything about Manila. Well, I mean, what Manila will give OpenShift is the ability to basically utilize NFS on, typically, a Ceph storage cluster that's deployed with OpenStack.
And the way OpenStack does it, or I should say Ceph does it, to present an NFS front end to OpenStack resources, is by using the Ganesha project, which provides an NFS-to-CephFS gateway for you. Manila's responsibility is to establish and secure the file shares. So what that allows you to do is create persistent volumes that can be shared across pods running on multiple workers, because they'll all be able to use standard NFS mounts to communicate. And I think the Manila support is certainly improving, but any work people are interested in doing to assist with this is welcome; it's a large piece of work, and it takes a good deal of testing and environments to back it up. Some other, I mean, obvious stuff here: Nova for all our compute. It can be ephemeral, it can be block. The usual requirements apply; we need fast disk, and this isn't a public cloud, so you need to be working with an OpenStack admin to understand what kind of storage is supplied underneath. Also, there's improved support for availability zones, so you can actually start to place the workers where you want them inside the cloud. That's evolving as well, between the Cinder support and Nova, to get it right. I know, Nick, you've had some battles with that. Yeah. I mean, one thing to note is that the deployer will request anti-affinity for the masters and things like that, so it will ask OpenStack to try to place them on different compute nodes. It avoids putting, you know, two or three masters on the same compute node, because obviously then you're not really running HA if your three masters are running on the same physical machine, which may die. But if you're only running a small OpenStack cluster with only two or three compute nodes, then you may see multiple masters on the same compute; it will try, though, to spread them across different physical machines. And then, in theory, you should be able to get away with something like live migration, some of the built-in capabilities of the platform. I don't know that it's 100% perfect, though; I'm sure it has its ups and downs. Yeah, I mean, I've played with it quite a bit, if you do have central storage like Ceph RBD and you create the instances using volumes, or ephemeral storage backed by Ceph on OpenStack. And that's where you start needing to talk to your OpenStack administrators about how they've deployed things and what's available. But live migration certainly does work, even on masters; I've done it many times. As long as your storage backend can handle it, you can move a master from one physical compute to another. Yeah, and I suppose that's the nice thing about being able to get something like OKD straight onto an OpenStack platform: you get all that on-premise benefit of infrastructure as a service. Well, exactly. OpenStack is a private cloud. Yes, indeed. So our install is still progressing. The final piece, as far as integration points go, is the object storage, Swift. It's the preferred registry backend for HA and the default choice of the IPI installer, and we saw that. One thing I'd like to do is actually show a scaling demo. When the IPI installer runs, it creates a MachineSet for the workers to make scaling simple. I'd like to demonstrate that it works, and how it works across OpenShift and OpenStack; roughly the commands sketched below. If the cluster hasn't finished, I've got another one that I pre-built to show the demo on. So let's see where we are.
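The scale-out itself is just a MachineSet operation. A sketch, with a placeholder MachineSet name; the IPI installer creates one per cluster for the workers:

```bash
# Scale the installer-created worker MachineSet; the name is a placeholder.
oc -n openshift-machine-api get machinesets
oc -n openshift-machine-api scale machineset mycluster-abc12-worker --replicas=4
oc -n openshift-machine-api get machines -w   # watch the OpenStack instances appear
```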
We're not quite there with the final installation, so we're going to go ahead and bring up this second one; hopefully we'll be successful there. Okay, so this is a different OKD install, on a different OpenStack. Make sure we're all logged in. And guys, if it takes a little bit longer, don't worry; you're the last talk, so you can go a little bit longer. Yeah, no problem. So yeah, we can keep an eye on that. But as always... so here we have another OpenStack environment, and this one has got an OKD install with a number of workers. In this case, we have two workers and our three control plane nodes. You can see I built this previously; the actual control plane was built days ago, and then I've scaled a few times. And this is an IPI install: it creates a default MachineSet for the workers, as you'd expect, which means that, because it's using the OpenStack cloud provider, we can actually scale out easily. So why don't we add a couple of workers, and now we can simply watch. And hopefully the two platforms are speaking to each other. In the meantime, I'm looking at the console, and your other stack has completed deploying. Oh, excellent. We can do even more machine builds. So what we have happening here, right, is you can easily see that once we scaled out the MachineSet, OpenShift (or OKD) has relayed that information back down to OpenStack, and we get matching naming, so that you can really see how integrated the two platforms are. And I know it's not surprising for those on AWS and such; it's how it works every day. But in an OpenStack space, it's really helpful that we can see the same type of integration that's happening on all the other platforms, so simply done. And what I'll show you, as Nick said, because we finished the other install: this is what comes right out of the box. So literally we now have the machines provisioning. There may not be nodes yet, because they're still building up, but we can see that the new workers are being added to the topology and just being automatically plumbed into the right networks. If we do this on the other cluster, we'll actually get the extra networks. We can also see that we have the ability to create the autoscaler, the custom resources, so that if we want to go ahead and hit it with load, it will automatically autoscale. And that constant communication and integration between OKD and OpenStack means that the two pieces work almost seamlessly, right? You're able to just scale out without much effort. Or so Nick tells us, anyway. The big positive of this is it's running on-premise, where, I'd say, it would normally be running. It's not in the public cloud, which matters depending on the type of workloads you're dealing with, and security requirements, government regulations and things like that. Yeah. I mean, it's awesome, right? It's an open-source, on-premise cloud running an open-source container platform, perfectly integrated. Yeah, it's wonderful. So, Nick, you're saying the other one finished? Yep. We'll put those away. That's cool. But yeah, look at that. So I know Diane loves this. So here we go: we've had a successful installation of the live demo on our cloud. So we're going to go and see if we really did. The proof-of-life moment. Here it comes, yeah. So obviously relying on the DNS I set up previously. Here it comes. Voila. Can we see that dashboard? Hey, that's great. Yeah, that is great. The demo gods love it. And good.
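The autoscaler piece we're clicking through is just a couple of custom resources. A hedged sketch of the MachineAutoscaler half; it also needs a ClusterAutoscaler resource, and the MachineSet name is a placeholder:

```bash
# Create a MachineAutoscaler targeting the worker MachineSet.
oc apply -f - <<'EOF'
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-autoscaler
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: mycluster-abc12-worker   # placeholder
EOF
```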
This is terrific. A couple of things I want to point out that have been done here. A storage class was created. I haven't mentioned much about it, but the Cinder storage class is created by default by the IPI installer. Again, you can modify that, you can change it, you can do what you want, but you get that one built in. And so you're getting that immediate integration with OpenStack, where you can use it for persistent volumes and other pieces. If we had, let's see, what else did I go on about... If you had integrated with Manila, as in the other deployment that has it, you would see an additional storage class; the Manila operator would automatically create it for you. Oh, right. Okay. So again, the installer is handling all that for you. Here's our autoscaler; might as well scale this while we're here. We've got the autoscaler ready. There we go, ready to go; you can see it appear there shortly. And overall, I mean, I probably couldn't have asked for more with that installation and that timing, to actually have been able to go through all that. I don't know if I had any more slides... No, see, I prepared this slide. This is my just-in-case slide: "Success." But it's a different one, obviously. So that was just to sort of end it off. Jeff, that was the just-in-case one. That was the just-in-case, yeah. To say, you know, look, I really can do this; it does actually work. But I don't need it, because it really does actually work. And, you know, working within Red Hat with a lot of the upstream guys and gals to see how this comes together, it's just getting better and better with each new cut, and they add features to it like you wouldn't believe. But the integrations have become so clean. I mean, literally, even the UPI install, which used to really terrify me, is perfectly documented, and there's a bunch of helper scripts to make it work. I know Nick loves to bash about on it and find all the various issues, but it works right out of the box. You know, these two technologies just work so nicely together. So here we go: I've added those two workers. Remember, we had an extra network, so that's not been forgotten; they've been plumbed into that network. I haven't had to do any kind of complicated setup on the hosts, I'm not PXE-booting; it's all being taken care of. It feels like I'm on public cloud. The integration between the two is becoming so clean that I feel confident with my OpenStack cloud running OKD, OpenShift, whatever, because it's just built so nicely. I mean, I'm just clicking buttons here, and of course you can do all this with the CLI, but that wouldn't be as fun to watch. Or maybe it would. Maybe. Yeah. So what else can we add, Nick? I mean, the install is coming up. Now, this is just OKD; there's nothing special to show about how these nodes are added to the cluster. But as you can see with this screen here, it's all managed through those OpenStack APIs, meaning our OpenStack admins are aware of what's going on. They're able to control quotas and access to resources, and ensure that tenants have what they need; and the tenant running OKD is able to do what they need to do. They're able to scale, access things externally, run multiple clusters.
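On that storage class point, a sketch of how a tenant would consume it. I'm assuming the class is named "standard" here, so check oc get storageclass on your own cluster first:

```bash
# Request a Cinder-backed persistent volume via the default storage class.
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
  - ReadWriteOnce                # Cinder is RWO; RWX is where Manila comes in
  storageClassName: standard     # placeholder; verify with `oc get storageclass`
  resources:
    requests:
      storage: 5Gi
EOF
```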
All of this pretty much gives you a public cloud-like experience, but on something much cooler than public cloud: OpenStack. Nick, I don't know if you want to say much more. I can mention what I've been playing with: if you had an OpenStack that had bare metal as a service as well, then through the UPI you can actually achieve a scenario where OpenStack deploys bare-metal workers for you too, dynamically, attached straight into the OKD cluster. Just as we've scaled up now with virtualized workers, you can have bare-metal machines deployed and joined directly to the OKD cluster. Pretty cool. I love how you say that the private cloud is the coolest, and I have to agree for lots of different reasons, but I think it is one of the neatest things to see a full stack done with all open-source pieces. It's really pretty awesome. So I'm really grateful you guys got up so early; you're the only ones with light in the room so far today. We had people up at midnight in Saudi Arabia, too. So I'm very grateful to have you guys on, and I look forward to collaborating a lot more with the OpenStack community and bringing this to the forefront. There's an Open Infrastructure Summit, I think in October, where we're going to try to do an OpenShift Commons with an OpenStack theme. It was the event that was supposed to be in Berlin, so I'm negotiating how to co-locate virtually with the OpenStack Foundation, but hopefully we'll get you guys back on stage, hopefully before October, to continue on the live stream with a lot of OpenStack content, as I think there's a good number of our end users deploying OKD and OpenShift on OpenStack these days, and we'd love to hear from you all as much as possible. So kudos to you guys for getting up. Are there any last words, or a slide or anything you wanted to end on, in case we want to get a hold of you? Fair point; I didn't actually prepare that. The more we hear from people the better, so I'm easy. I'm august@redhat.com, and, I don't know if we shared it, my Twitter handle is @SilentOg. Just reach out and ask questions. There's a bunch of stuff that we've produced: Nick works on blogs, and I've got some blogs on OpenShift. We want to share the OpenShift-and-OKD-on-OpenStack experience, because it is growing, and as you can see, in 36 minutes we've got a container-enabled cloud. That's pretty awesome. And I guess, to end: the more we talk the better, and I just can't wait to hear more. I'm so excited to have been given the opportunity to actually do this; I was terrified, but I was super excited, so I'm thrilled. We love to scare people. The fact that it's all 100% open source really excites me: there's no proprietary cloud there, there's nothing. And if you go for the commercial version as well, there are hundreds and hundreds of companies already using this stuff out there. And as I say, it's open source turtles all the way down, so it's really pretty cool.