Good afternoon, folks. Thanks for joining our session. We'll do a brief intro here; Andrew's here. Just to clarify what we're going to do today: the description of the session says bare metal UPI, but what we're really showing is OKD on KVM. Andrew is going to use a Linux box with libvirt and KVM and show how to get OKD 4 installed on it. "Bare metal" often gets used for OKD and OpenShift installs to mean a platform-agnostic install. So this is agnostic just in the sense that you can run it on any Linux host, but it's not bare metal in the way people usually think of it, actual hardware, pizza boxes. Just to clarify. So our goals today: we will show the actual install, all on a Linux box. Andrew has Fedora 33, so we'll show that. And then there are lots of recommendations and gotchas; Andrew was chatting with me today, and he found a lot of things getting this working that he will share with you. And then we'll wrap up with some Q&A about your questions on spinning up OKD this way.

So Andrew, do you want to do a brief bio intro? Sure. Do you want to use the slides that we have for that, or... I'm not seeing my slides progressing. You're not? I'm still looking at the title slide. Really? Oh shoot. I asked if it was progressing. Let's try this again. It's a good thing we're not a technology company. Okay, let me just share. So folks, this is the first time; we're kind of learning as we go. Let me try this again. How about now? John, I agree, this is a whole new experience for sure. I'm just seeing a black screen, Justin. This platform has a lot of potential. I like the separation of the stage and the sessions, I like how everybody can break out and there's chat for everything. But I don't know if it's just unfamiliar or slightly glitchy, but it's been an exercise, to say the least. I can see that, Justin. You can? Yeah, your browser is showing now. Okay, so present, see what happens. Do you see yourself now? Give it a second to load. Still a loading screen for me. But I know what I look like. Do you mind trying to share? Because yours seemed to work. Yeah, I'll go ahead and share, so if you want... Yeah, I'll stop. Let's do this again. Of course, I do not have these issues on all the major ones, the major culprits like WebEx and whatnot. All right, I see your screen. Hopefully you see a picture of me now. Yep.

All right. So apologies for the rough start; such is life sometimes with technology, and especially with streaming platforms. I am Andrew Sullivan. I am a technical marketing manager with the Red Hat cloud platforms business unit. My day job is really focused around OpenShift, and broadly speaking, OpenShift on various virtual platforms as well as OpenShift being a virtual platform, so OpenShift Virtualization, that type of stuff. There's some contact information there; anybody is always welcome to reach out to me, andrew.sullivan@redhat.com, or via Twitter. I also have a weekly live stream on openshift.tv; it's the Ask an OpenShift Administrator office hour. That happens Wednesdays at 11 a.m. Eastern time, and anybody's welcome to stop by. We cover a whole bunch of different topics: last week we talked about etcd, this week we talked about OpenShift on VMware. So always a good time, and please don't hesitate to stop by and chat if you'd like. And I'll hand it over to you, Justin. Okay. Thanks. Justin here again, the guy that has the glitchy screen share.
By the way, Andrew's demos are really awesome. Do you have a direct link to those streams on your Twitter feed? Can people get them there? I usually don't link them on Twitter; they're all on the OpenShift YouTube. So if you just go to openshift.tv, you'll find links to Twitch as well as to YouTube, where you can get them on the OpenShift YouTube channel. We have an intern who will be starting in the next few weeks who is going to work on automating the process of creating playlists for all of our different streams and all of the other metadata and stuff that's going on there. So hopefully it'll get a lot easier to find them in the future. Awesome. Because you mentioned that, I had to follow up so people would know. Good thing John is already familiar with them. They are really extremely beneficial, just to see you and others, because there's a bunch of others who also go on there too, demoing the technology live.

So I'm Justin Pittman. You can find the links there on my LinkedIn. I've been in tech for a while. What I do for Red Hat currently is enablement for our partner sales. So if you know any of our partners, it could be F5 or Dynatrace or Dell, I'm usually involved in those types of opportunities for our technology. And then at home I use a bunch of stuff that's not necessarily Red Hat productized but might be upstream, like I'll use oVirt or something like that in my home lab. Anyway, good to be here with you all.

Do you still want me to go through a couple of things, Andrew, if you want some last-minute prep? All I need is the... I think I'm good. Whenever we want to go and demo; if you want to cover any of the other material around the overall flow or anything like that, that's completely up to you, and I can talk to it as well. Let's do that. So if you go one slide forward... oh, people don't need that. The next slide. Let's do the next slide. Oh, it's animated, I forgot that. Here we go.

So we thought it would be good to give a visual. If you're familiar with OKD or Kubernetes, this may look somewhat familiar to you, but let's just describe it for anyone watching live or in the recording. OKD, and Kubernetes in general, do have certain external dependencies. Say you're trying to access an application that's hosted inside the cluster and you need that application to be routable: that traffic first goes through, let's say, a load balancer, and the client can only get to it through DNS resolution. A lot of those components, especially at scale, will be provided externally from the cluster. So to help us today, Andrew and I talked about how best to demonstrate those external dependencies. One way to do it is what we call a helper node. Actually, one of Andrew's peers, Christian Hernandez, initially came up with the idea of this helper node that provides a bunch of these external dependencies as services. It comes with a load balancer, HAProxy. It has a DHCP service installed on it. It has a DNS server installed on it. And all of this is very quickly built out through Ansible playbooks, so you don't have to build out and install each of these services by hand just to get the external dependencies in place. We'll have a link at the end so that you can get to the Ansible playbooks for the helper node. It really simplifies the process of building this out. Andrew, did you want to mention that libvirt does have a subset of these services now?
Yeah, I'll talk about that when I start going through and showing how everything is configured in my lab, and talk about some of the alternatives as well. Okay, all right. So after the helper node, the basic workflow. If you could go forward one slide, Andrew. Thank you. The basic workflow for what we call user-provisioned infrastructure, which is a nice way to say "bring your own infrastructure," is that you use the installer, and you have to feed that installer several different pieces of information about how you want the cluster to operate. A big one is the operating system for the nodes. In this case, for OKD, Fedora CoreOS is the operating system for the nodes. So there needs to be some image available that the installation can ingest, basically, or have available. This gets back to the helper node: where do you put these images if, for example, you can't download them directly, maybe you have a somewhat restricted network, or you need to configure the boot-up process like we're going to do here, as Andrew will show you? Well, you can store them on a web server. So the image gets stored on a web server that is on the helper node, accessible from the bootstrap node, which will then download that Fedora CoreOS image. That's just one part of the flow. This diagram is meant to help you visualize one piece of how a node gets its initial operating system.

Other pieces of the installation are pretty detailed and documented elsewhere; we have some links to those documents. But basically there are a couple of things like the ignition configs, which describe how those machines are supposed to boot up inside the cluster. Those can be manipulated, but the installer generates those ignition configs as well, which is why this is presented visually here. The big thing to remember, for now, meaning OKD 4.7, is that a bootstrap node will be spun up first. You'll see in the demo today that the bootstrap node comes up first. It pulls down things like the ignition config, any kernel boot parameters, and the operating system image that's needed. So I'm just trying to give people a workflow here. The first time you see an OKD or OpenShift install without any visual diagram of what's going on, it gets rather confusing, or at least I personally find it easy to get lost. I don't know your opinion on that, Andrew; you've done so many of them, maybe it's easy for you now. But for newbies, I think it's helpful to understand: okay, you have the installer binary, but there are some other things that are needed, like a node OS image, and there are other things generated by the installer that need to be ingested later, like ignition configs. It's just helping people visualize that process. Any other thoughts about that, Andrew?

Yeah. For people who aren't familiar with the OpenShift install process, it's definitely non-traditional, or unconventional if you will. And really the biggest thing to understand is that when we do an OpenShift install, when we do all these other things, basically we're setting up the prerequisites: all the things the cluster is going to need, like DNS, and an HTTP server so it can access the resources it needs. Then we create those ignition configs, and when we turn on the nodes, they read in that ignition, and then basically the cluster instantiates and self-configures itself.
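(As a rough sketch of the flow just described, assuming a working directory named "cluster" and a helper host at helper.okd.lan with its web root at /var/www/html; these names are placeholders, and the exact steps are shown in detail later in the session:)

```
# 1. Generate the install artifacts from an install-config.yaml you wrote beforehand:
mkdir cluster && cp install-config.yaml cluster/
openshift-install create manifests --dir=cluster
openshift-install create ignition-configs --dir=cluster

# 2. Stage the ignition files (and the Fedora CoreOS image bits) on the helper's
#    web server so the bootstrap and other nodes can fetch them at first boot:
scp cluster/*.ign helper.okd.lan:/var/www/html/ignition/

# 3. Boot the bootstrap node first, then the control plane and workers, and wait:
openshift-install wait-for bootstrap-complete --dir=cluster
openshift-install wait-for install-complete --dir=cluster
```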
So once you get to the point of running the openshift-install wait-for commands, basically you're just waiting, checking in on its status and seeing what's going on. So, yeah, John, it's both better and worse than Ansible. I like it better because... Is that a reference back to OKD and OpenShift 3, the Ansible install scripts? Yeah, I assume so. So, for people who don't know, the previous version used Ansible to do this, but 4 does not. Yeah. The OpenShift 3 or OKD 3 install process was basically a huge set of Ansible playbooks, which was really nice because Ansible is familiar to most everybody at this point, and you could easily go in and troubleshoot and look at the nodes and find logs and see what's going on. And they were RHEL nodes, so you could easily connect to them and see all of that. With 4, it switched to CoreOS, so Fedora CoreOS, which is harder to connect to, if you will. There's only key-based authentication, it's only there after Ignition has run, a lot of stuff like that. And they dramatically reduced the scope of the installer. With 3, the installer was responsible for, basically, you handed it the RHEL nodes, and then it would go through and deploy Docker at the time, and then it would deploy Kubernetes, and then it would deploy on and on and on, including things like the logging service and the metrics service and all of that. With 4, they changed the installer so that it's only responsible for getting the cluster up and running, and everything after that is day two. So the installer is much more successful, or more frequently successful, I should say, but it can be more difficult to troubleshoot if you're not familiar with it. We've done a couple of streams about that, and I'll touch on some of those things here. Fingers crossed, it will go well and we won't have to do a lot of troubleshooting.

But I'll show... Andrew was troubleshooting late into the evening. I saw those messages, but this is the nature of it sometimes if it's the latest build, right? Which I think is what you bumped into last night, and a couple of others in the other sessions here today bumped into similar issues with the latest builds. But that's what happens when it's Fedora CoreOS instead of the downstream, like RHEL CoreOS.

You know, speaking of the demo, Andrew, maybe we jump to that. The next slide... I don't want to bore people with slides; the next slide we can always come back to later. Actually, you know what, the recommended sizing. Do you mind... let's go to the recommended sizing first, just so people understand. Yeah, so there was some discussion, if people didn't see it in chat earlier, that you could get a single-node OKD on 16 gigs. Now, I have tried single node on my laptop with 16 gigs. You'd better not have anything else running, basically. So it is possible. But what you're seeing now are the recommended specs for a fuller cluster. And I kind of just threw this together and added it up for you; this is like double 16 gigs, right, if you're going to have more than one node. And Andrew's box, I believe, is even bigger than this. I think you have an even bigger box. I'll talk about that in a moment, but effectively it's a desktop PC. It's running a Ryzen 5 2600 with 64 gigs of RAM on Fedora 33.
The good news is it doesn't have to be ginormous. So you can get by, and the demo that Andrew's going to do is a fully featured cluster, and there have been inroads to reduce the footprint over time. Probably another thing to think about is storage. And if you can... what is that term? It's not sparse provisioning, but... Thin provisioning. Thank you. Thin provisioning, so that you don't bump into errors with KVM or QEMU complaining about your storage growing beyond what you have locally. Vadim in chat also recommended that you at least have an SSD. Andrew and I have been in many conversations about how the etcd instances inside OKD just expect a certain amount of response time, low latency, a certain amount of IOPS. They just expect that from the disks. So don't try this with old spinning rust. Those are just some basics for what kind of box you need to get this up and running.

Yeah, I would definitely recommend at a minimum an SSD, and if you have one available, an NVMe disk. A week and a half ago, so not this past week but the 10th, I believe it was, March 10th, I spent my live stream hour just talking about etcd with the product manager. And I have worked with the support team, I've worked with the engineering team, and all of that, and really you want low-latency storage for etcd: the lower the latency, the better and the happier it will be. For a lab, which is more or less what we'll be deploying today, everything is going to be in one box, deployed with libvirt. You'll see I'll be using Virtual Machine Manager for some parts and the CLI for some parts. But you can probably get away with less, especially because you're not going to be scaling. At least for what I'm doing today, you wouldn't necessarily want to use it for production, and you wouldn't want to deploy massive applications scaling to hundreds of pods on there. But definitely take storage and storage latency into account if you're doing any sort of production-type deployment.

Okay. We're about 30 minutes in, or 25, because we were dealing with that glitch. So I think it's time to let Andrew show us the remarkable demo that you all have waited for. Ready, Andrew? I am. Hopefully you're able to see my window here. I actually run a Mac desktop, but I have a VNC session into my other desktop there. One of the things I wanted to do here was grab this link; I'm going to paste it into the chat. All the stuff that I've been working on and am going to be showing here, I documented and shoved into this gist. So if you want to see the exact steps that I took, it's all there, because I am going to skip over some things. For example, my helper node is already set up, so if you wanted to see how I did that, it's all inside of that gist. Now I need to hide one of these windows so I don't distract myself by seeing myself moving the mouse around.

All right. So let's start at the beginning. I'm going to use this gist as a bit of a reference and walk through the different parts. I'll start by saying that I'm going to be deploying more or less a five-node OKD cluster: three control plane nodes (you can see I've mixed up the naming there, but that's okay), we will temporarily need a bootstrap machine, and then I have the helper machine. As Justin just mentioned, the helper is what is providing several different services.
So today that's going to be a web server, because it's going to be serving the ignition config files that we'll need as well as the rootfs image that's needed to install Fedora CoreOS. It will also be providing the load balancers, just a simple HAProxy config. And then, let's see, what else does it do? DNS. And that's really it. And as Justin said, we can use libvirt for some of that.

So let me bring up Virtual Machine Manager here. I've got a number of other machines; these are just laptops and other stuff that are sometimes powered on and that I sometimes use in my lab. If we look at the details for this machine, however, all I did was create a new virtual network. So if we click down here on the plus: give it a name, whatever we want to call it; I am going to NAT it; and then pick a subnet that we want to use for it. I used 192.168.110.0/24. And then we don't need DHCP, because we're going to be using static IPs in this particular configuration. If we were to click finish there, what we would end up with is this network here. You can see it's very straightforward. Now, libvirt effectively uses dnsmasq behind the scenes as well, and you can go in and (I don't remember which section it is) configure it to give out static DHCP leases, configure it to do DNS resolution for all of those nodes, and all that other stuff inside of there. I didn't do that today just because I wanted to show the helper node. So pretty straightforward there.

The only other thing that I did (where'd it go?), the only other thing that I did here: I created a separate storage domain. You can see our libvirt OKD images pool. This is actually an NVMe device; I just have a second NVMe device in the box, and I mounted it at this location. I think this one's running ext4 instead of XFS, because Fedora seems to be shying away from XFS for some reason. But yeah, pretty straightforward. So I need a network, and I might need storage if I need to dedicate something, if I don't have enough space or my default boot drive doesn't have the performance that I need, which was the case here. But otherwise, pretty standard libvirt. Go ahead, Justin. I was going to ask folks, while Andrew shows us this: if you have some input on what you were targeting, we'd be interested to hear it. Are you targeting an install on Ubuntu, or is it for your home lab, or is it for dev? We were talking about this while prepping; we'd like to hear your feedback as we go through this about what you're thinking. John says general OKD knowledge, which is also great. That's all. I'll trust you to keep an eye on the chat, because I can't really see that whole part of the window.

So the other thing to be aware of here is that I'm using Fedora 33. Fedora 33 is one of the operating systems that now uses systemd-resolved as the default resolver. The result of that is, basically, I want to be able to have my box, my workstation here, resolve names for the machines that we're going to be deploying OKD onto. I already know that I'm going to call that internal LAN, that temporary one that I'm using for my OKD deployments, okd.lan. So, effectively, remember over here I created my new network, so I have my virtual network "okd". If I were to do an nmcli con show (probably ought to do it with sudo... oh, I'm on the wrong node, that's why), I have this virbr1, and I can do an ip a and show that virtual bridge.
virbr1 here is my 110 network. So essentially I need to tell systemd-resolved: hey, when you see an okd.lan domain name, I want you to send those DNS queries to this location. And the same thing with reverse DNS and all of that. That's what this set of commands here is going to do, so I'm literally going to copy and paste those and have them execute. And then we can check with resolvectl. Sorry, I can't talk and type at the same time. So we can see here that whenever it sees those domains, it is going to send them across that virbr1 interface. We can do the same thing with DNS: you see it's saying anytime I need to resolve something on that interface, I'm going to point it to this host. And that host is actually incorrect; that host should be .39, which is the IP address of my helper node virtual machine. So we can fix that, and we can see that we're good to go there. So now, anytime I do an nslookup, let's do an nslookup of api.cluster.okd.lan, it will, theoretically, resolve over there. And I'll have to see why that's not working; it could be that my DNS service isn't running over here on the helper. So we'll take a look at that now.

So here I'm connected to the helper. This particular terminal (I guess I should have changed the colors on these so they're easier to tell apart): beret is the name of my Fedora machine, and helper is the name of my helper machine. And if I check the status, we can see that, yep, dnsmasq is currently failing on there. So I'll have to figure out why that is. It says it doesn't like the interface name. How strange. So, my helper node here, and if we follow along in the gist, you can see step two is "create and configure the helper node": I am using podman for the HTTP server, httpd, and then I just installed regular HAProxy and dnsmasq. You can deploy those into pods; it's a little more involved (I tried to run these as non-root containers, and that's a little more involved), so instead I just went ahead and did a dnf install of those two services. Justin mentioned before that Christian Hernandez created the helper node. So if you go to the helper node repo (let me dig up that link real quick, I'll share it in the chat), if you change the tag on that, there is a helper node v2 beta, and that whole thing is containerized. Christian has gone through and taken it from services deployed as normal systemd services to everything running in pods, and they can all be used that way. He even has a helper node control type of thing going on in there.

All right, so let's come back over here. I'll check on dnsmasq first. I have this okd.conf for dnsmasq, and for whatever reason it's not liking this particular item; I'll take a look at that in just a moment. Essentially, what we're doing here is: these first three entries are prerequisites for OKD. It needs to be able to resolve the API and API-internal load-balanced endpoints, and we can see that those are pointing at this host, at the helper node, which is running HAProxy; we'll look at how HAProxy is configured in just a moment. And then we need a wildcard for apps. So if I do, say, test.apps.cluster.okd.lan, that should resolve, same as the others, to the helper node, which has that load balancer on it. The other thing that we need is our node entries. I said before that we're doing static IP addresses, so I went ahead and added those IP address and name resolutions into the dnsmasq config. And that's really all we've got.
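(For reference, the dnsmasq file being described might look roughly like this minimal sketch. The file path, interface name, and node addresses are placeholders; only the helper's 192.168.110.39 address and the cluster.okd.lan naming come from the demo itself:)

```
sudo tee /etc/dnsmasq.d/okd.conf >/dev/null <<'EOF'
# API and API-internal endpoints resolve to the helper node, which runs HAProxy
address=/api.cluster.okd.lan/192.168.110.39
address=/api-int.cluster.okd.lan/192.168.110.39

# Wildcard for application routes: anything under *.apps.cluster.okd.lan
address=/.apps.cluster.okd.lan/192.168.110.39

# Static name/IP entries for the cluster nodes (IPs here are illustrative)
host-record=bootstrap.cluster.okd.lan,192.168.110.50
host-record=control-plane-0.cluster.okd.lan,192.168.110.51
host-record=control-plane-1.cluster.okd.lan,192.168.110.52
host-record=control-plane-2.cluster.okd.lan,192.168.110.53

# Only answer on the interface facing the OKD virtual network
interface=enp1s0
EOF
sudo systemctl restart dnsmasq
```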
So let me double-check why this isn't enp1s0 and why that isn't working. Hey, Diane, I saw that you joined. Welcome. Do you know, in Hopin, is there a way to make Andrew's screen share bigger as an attendee or viewer? It really doesn't, because it has our faces down below, so they're seeing it at 75%. So if you can make your font bigger in the terminal, that would probably make people like me, with my glasses on top of my head instead of on my nose, happier. I figured you might say that. Yeah, I know. Well, plus I wanted to clearly see what it was saying as we look at this DNS issue. By the way, Andrew, you will get a chuckle out of a comment from John: John said systemd-resolved was a PITA. Yes, yes, it was. The ranting that I was doing on my team's Slack channel the other day about having to figure out how it was working and why... it took me probably two hours of searching the internet and testing things to finally figure out those three silly commands. And, you know, the Fedora team did a great job when they first announced Fedora 33 of saying, hey, we have this new thing and it's great for split DNS when you're connected to VPNs and all this other stuff, but there's nothing about how to configure it. So yeah, I feel your frustration.

And I cannot... I think it's because I'm in... yeah, I'm in a... oh, there we go. There it is. I can do Ctrl-Shift-Plus from inside of a VNC session. Yeah. Every time. So hopefully that's easier to see. It is. Okay, good. So I did get dnsmasq working. I don't know why it didn't start by default. If you were paying attention there, I just did a simple systemctl start dnsmasq. And if we do the same thing here: I did that nslookup and it goes right over and resolves my test. I can also do a test.apps.cluster.okd.lan, and it resolves right over to our helper node. So a couple of things to note here. You saw when I first set everything up, I'm using this okd.lan subnet, or domain name rather. "cluster" is actually going to be my extremely creative (because, you know, I'm a TME, not a designer) extremely creative cluster name. This will be unique to each OKD deployment that you have. And then after that we have these different names for the different functions inside of there, with apps being a wildcard, so I can literally put anything in front of it and it will always resolve to the load balancer it's pointed at.

So that covers DNS, or DNS through dnsmasq. The next thing we want to talk about is HAProxy. If you're looking at that gist that I posted, you'll see the full contents of this file. Other things that I don't like about Fedora 33: they made Nano the default editor, which throws me for a loop. For shame, for shame. So coming down to the HAProxy config here, there are a couple of things to pay attention to. I do turn on stats, and that's just a convenience thing, especially in a lab; especially if you're testing, it's nice to be able to see when the API servers, when the endpoints, pop in and out. After that, we start getting into the actual load-balanced endpoints. The first one is the API server. This is what api.clustername.domainname points at. You can see it's 6443, and on the back end we're passing it to our bootstrap, because the bootstrap comes up first, and then to our three control plane nodes.
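(A minimal sketch of what that API frontend and backend pair might look like in HAProxy; the server names and IP addresses are placeholders, not the exact contents of Andrew's file:)

```
sudo tee -a /etc/haproxy/haproxy.cfg >/dev/null <<'EOF'
frontend api
    bind *:6443
    mode tcp
    option tcplog
    default_backend api-be

backend api-be
    mode tcp
    balance roundrobin
    # The bootstrap only matters until the control plane is up; the health
    # check stops sending traffic to any server that stops responding.
    server bootstrap       192.168.110.50:6443 check
    server control-plane-0 192.168.110.51:6443 check
    server control-plane-1 192.168.110.52:6443 check
    server control-plane-2 192.168.110.53:6443 check
EOF
```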
Being HAProxy, I don't need to modify this configuration as the bootstrap node goes up and down and as I redeploy clusters, because it will automatically say, hey, you're not responding (the check here), you're not responding, I'm not going to send traffic over there. Moving down, we have the front end and back end for the machine config server. Same as above, that points to the bootstrap and the control plane nodes. One interesting thing to note here: the machine config server is how the machine config operator serves up the configuration, the machine config, to the nodes. This is an unauthenticated endpoint. Basically you can go in and pull that HTTP traffic (or HTTPS traffic, but it is not authenticated) and see that machine config. So this is effectively the api-int, the API internal endpoint. If you are separating your traffic, if you want maximum security protection (again, this is a lab, I'm not concerned about it), you would want to have this either limited to just the other nodes in the cluster, or even possibly on an internal-only network.

Moving on down, we then have our ingress endpoints. These two are going to be the, excuse me, the star-dot-apps wildcard. So we have one for HTTP and one for HTTPS. And notice that the mode is not in here, because I set it up at the top. I apologize, I'm going to page up, so it's going to jump around on you. You can see my default mode here is TCP, so it's doing layer four load balancing. It's just passing things like all of the SSL encryption straight through to the routers that are running inside of OKD, so that they can do the TLS termination and all that other stuff themselves. You can do layer seven; I have not done it, but I've been told multiple times that it is possible. And just to check on that, we'll go ahead and go to helper.okd.lan, and we'll go to port 9000. Yes, I know, the HAProxy stats page? Yep. That's assuming that I put in a DNS entry for the helper, which I don't think I did. Yeah, it's not up here, so we'll go by IP. It should resolve, but I didn't put that name in there. And here's our HAProxy stats page. You can see there are no nodes that are up at the moment.

So we took care of DNS via dnsmasq, we did HAProxy, and you can see here, as we scroll through this gist (let me make this a little bit bigger for you all as well), the last thing is an HTTP server. So if we do a podman ps -a... well, did I create it as root? There goes my thing about creating, I guess I did, creating non-root containers. Make a liar out of me. Well, Docker used to require some root; was that just an old leftover habit you had? So Docker does, and what I probably did was sudo -i over to root, because I was probably modifying some config and wasn't paying attention to who I was logged in as when I created this pod. Anyway, it works, super easy. I'm just redirecting port 8080 externally to port 80 on the pod. That's important, because remember, port 80 is being load balanced for the cluster that we're building; that's why we're using port 8080 for the HTTP server. Aside from that, I had gone in, and inside of my web docs (I just linked it here, we'll bring up that page again), you see I attached a volume, /var/www/html, to the pod's htdocs, its document root. And inside of here, you see I've got two different directories. I'm going to do a quick tree in here.
So the first one is ignition. We'll use that in just a moment; this is where we'll dump the ignition files that our nodes are going to need when we install them. And then there's the install directory. These are the three bare-metal boot files that we need for our deployment. So why is that, and what do I mean by that? With libvirt, there is no UPI, there is no IPI type of integration. That means we have to do what Red Hat and what OpenShift call a platform-agnostic, non-integrated deployment. Essentially, this is going to be the same type of deployment that you would do to a physical server, even though these are VMs. And it means that there's no integration: no cloud provider integration, no CSI or in-tree storage integration with the underlying infrastructure. Now, if you're deploying to oVirt or something like that, there is that integration: you can deploy an installer-provisioned infrastructure (IPI) cluster, and it will connect to your oVirt Manager, it will talk to the APIs to provision the VMs, it will configure the CSI provisioner, it will do all that stuff out of the box. But that's not what we're doing here, because this is libvirt and we don't have that integration... except when we do, and I'll talk about that in just a moment, too. So I'm jumping ahead by showing you this, but a little bit further down (and I'll revisit this in a moment) I'll show the links to download those files, and how to make sure the permissions are correct so they can be retrieved by the nodes when we need them.

Which is the next step. There we go. So, from my libvirt host (that means I'm on this beret host; you can see I'm in the OKD directory), we want to pull down a couple of different things. The first thing to pay attention to here: notice that I'm using a slightly older version of OKD. Justin alluded to this a little earlier. I tried doing it with the very latest bits, the latest 4.7 bits, and was having issues with the bootstrap. If you happen to be on the GitHub page looking at the issues, you'll see there are some comments from me regarding that. So I'm using 4.6, from back in mid-February, because that's the one I was able to get to work. We need to pull down two files: the openshift-install and the OpenShift client. When we pull those two down, pretty straightforward, all we're going to do is unpack them and then move the three files that are within (openshift-install, oc, and kubectl) into /usr/local/bin. You don't have to put them in the path; I just find it easiest to do that. So if we come back over here and do an openshift-install version, what we should see is openshift-install version 4.6, and the same thing for our oc: 4.6. So I've got my CLI tools in place.

Now, if we switch back over here, I need to move over to our helper node and download those three binaries that I was just showing you: our rootfs, our initramfs, and our kernel image. Download those (here's where we create those directories), put them in the right place, make sure the permissions are set correctly, and now we can access them. Just to make sure, we'll browse to it... not HTTPS. If you do just a basic curl from the... no, you know what it is: my pod isn't running. Oh, yeah. So that's up. So why aren't you serving me any files? Well, I had to anger the demo gods at some point.
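(For reference, the client download step just described would look roughly like this; the release tag below is only an example, so pick whichever tag you are actually targeting from the OKD releases page on GitHub, and the Fedora CoreOS URLs and file names come from the gist or the FCOS download page rather than from this sketch:)

```
OKD_VERSION="4.6.0-0.okd-2021-02-14-205305"   # example tag only
curl -LO "https://github.com/openshift/okd/releases/download/${OKD_VERSION}/openshift-install-linux-${OKD_VERSION}.tar.gz"
curl -LO "https://github.com/openshift/okd/releases/download/${OKD_VERSION}/openshift-client-linux-${OKD_VERSION}.tar.gz"
tar xzf "openshift-install-linux-${OKD_VERSION}.tar.gz"
tar xzf "openshift-client-linux-${OKD_VERSION}.tar.gz"
sudo mv openshift-install oc kubectl /usr/local/bin/
openshift-install version
oc version --client

# On the helper node, the Fedora CoreOS live kernel, initramfs, and rootfs get
# staged under the web root (exact URLs/file names elided here):
# curl -L -o /var/www/html/install/<fcos-live-kernel>    "<kernel-url>"
# curl -L -o /var/www/html/install/<fcos-live-initramfs> "<initramfs-url>"
# curl -L -o /var/www/html/install/<fcos-live-rootfs>    "<rootfs-url>"
```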
Sometimes (I don't know about the containerized httpd, but sometimes) it's the permissions. Is that pod running as root, and does it have permissions to all those files? It should, but let's check. Yeah, I agree. But John, usually when this happens it's some kind of permission issue. Well, theoretically we have access to all of those. Who's that? Are you inside the pod now? I can't see it; the Hopin thing is obscuring it. No, I just did a podman logs against the pod to see what the logs are saying. The logs seem to think that it's fine. How very strange. Well, just like with the previous error, where all you did was restart dnsmasq, maybe just restart that pod. I did that. Oh, well. Always a difficult one. Something's always got to break. That's okay, we can work around that, because I happen to have a separate web server available and we'll just point to that. So let's see... sorry, I'm probably mispronouncing your name, I cannot pronounce your name. Is firewalld running on this box? Nope. I think it would just be non-responsive. When I do demos, I try to turn everything off that could interfere, even though, yes, it's horrible, terrible practice and you should never actually do that; for demos I try to make it as easy on myself as possible. So what I'll do instead is use my other web server, and it'll just require changing a couple of things later on. To make it easier on myself, I also reuse IPs, just different subnets. That does make it much easier.

While you do that, I think it's good to rehash something. Andrew, you're probably about to get to this, so I'll jump the gun and just mention it... Wait a minute. John says in chat: 503, is that web server behind the proxy? Yeah, you know what I'm doing? Here. Thank you, John, you are absolutely correct, because that is not going through HAProxy. Oh. Perfect. That's not embarrassing at all. Yes, you were absolutely correct, John: port 8080 is the web server that we want to use here. So thank you, I appreciate your help there, John. Live troubleshooting, that's awesome. I know. Never a dull moment. All right, so our web server is up, we're good there. I have pulled all of our install files; if I go here to the install directory, we see we have our three files. So from our libvirt host, now we need to... and let me get out of here.

I want to wrap up my previous thought, because I think this is still a good point to mention it. Some folks (maybe not here, because we seem pretty techie, but in the recording) will say: this is a lot of heavy lifting. I've got to download all these files, I've got to set up all of this configuration. Isn't there a push-button way or something? I know you were going to hint at this. Can we mention it now? Yeah, I'll cover that in just a second; I'm going to create, or walk through, this install config first. So the next step here (I'm on step four) is to create the install config. From our libvirt host, which, remember, is the one that we downloaded openshift-install to, we're literally just going to create the install config. It's pretty straightforward. First, we want to make sure that we have an SSH key in here so that we can connect over to the VMs once they've been deployed; remember, there's only key-based authentication, with the core user. And then the pull secret is actually just a dummy secret. You can see here we're just using this non-useful text. Aside from that, there are a few interesting, or a few important, parts.
So one, we want to set the base domain, the same domain that we configured on our helper node with dnsmasq. The worker replicas we're going to set to zero. We will have workers, but we still want to set this to zero; this is basically indicating that the installer is not responsible for provisioning any of them. We'll have three master replicas, three control plane replicas. And then down here, these will be defaults. The cluster network is what it uses to assign pod networks to each one of the nodes. It'll take this /14 and carve it up into /23s, and assign a /23 to each node in the cluster; that's where the pod IPs get assigned. The service network is the set of IPs that are used for services, so when you create a service, the IP that gets assigned comes from here. Now, this networking.machineNetwork.cidr is very important. We want to make sure this is the subnet we're deploying our virtual machines to; 110.0 is what I'm using here. The reason this is important is that OKD, CoreOS, looks for an interface on this subnet, and that is the interface it will configure, for example, the SDN on. It's also used, for example, if you're using a proxy: it will automatically add this subnet to the nodes' noProxy settings, and in a few other places. A lot of times we'll see people forget to set this, or not set it at all, and there are just random failures or occasional things that go wrong and it's hard to figure out why. Those fields all come together in the install-config.yaml, roughly like the sketch below. And the reason I said I'll get to the faster version of this is that, regardless of whether we're doing this kind of non-integrated, what used to be called bare metal UPI, deployment or an IPI-type deployment, we always want to create an install config.
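(As a reference, an install-config.yaml along the lines just described might look roughly like this; the pull secret, SSH key, and exact values are placeholders rather than the actual file from the demo:)

```
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: okd.lan
metadata:
  name: cluster
compute:
- name: worker
  replicas: 0            # workers are provisioned by hand, not by the installer
controlPlane:
  name: master
  replicas: 3
networking:
  clusterNetwork:        # pod network, carved into /23s per node
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:        # service (ClusterIP) addresses
  - 172.30.0.0/16
  machineNetwork:        # the subnet the VMs actually live on; must match the libvirt network
  - cidr: 192.168.110.0/24
platform:
  none: {}               # platform-agnostic ("bare metal UPI") deployment
pullSecret: '{"auths":{"fake":{"auth":"placeholder"}}}'
sshKey: 'ssh-ed25519 AAAA... user@host'
EOF
```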
So let's look at what that looks like. If I do an openshift-install create ignition-configs now (the ignition config, the install config, right), using IPI I can use the interactive installer and step through each of the things happening here, and I can choose the infrastructure that I want to deploy to. And actually, I need to pull the most recent version of the CLI tool, so let me do that real quick, because if you look at the 4.7 installer (and I want to go to here), if we look at the installer for 4.7 and we do an IPI install, it actually lists libvirt as one of the options. So we'll pull this down real quick; you see when I untar that, I get my openshift-install. So if we do an openshift-install create ignition-configs with 4.7, what I'm going to get is this libvirt option, and it will ask me which libvirt I want to connect to; this happens to be the local host. If we browse to (and I will share this link in just a moment) github.com/openshift/installer, and we go to docs/dev/libvirt (let me take this link and paste it here), this is effectively the how-to for setting up and using this libvirt IPI side of things. The important thing here (and I've done this with the host that I'm working with, so we'll skip past all this) is configuring libvirtd to accept TCP connections. Effectively, what we're doing is telling libvirt to accept an unauthenticated connection across TCP, and that's how the installer connects. So if I copy this connection string here, and I go to Virtual Machine Manager and do an Add Connection, I can do a custom URL, paste that in, and you notice it didn't ask for any kind of authentication. These two are the same host: if I were to go here and disconnect this one and then connect it again, it prompts me for authentication. So this one is unauthenticated, and that is how the installer communicates with our host to be able to create and connect to those virtual machines. So you do need to configure that before using it.

As we come down through here, it walks through all those different things, and eventually we get down to setting up the NetworkManager DNS overlay. This is the same sort of thing that we were talking about with Fedora 33 and systemd-resolved; it's the equivalent if you're using other operating systems that use NetworkManager with the dnsmasq overlay. This is how you tell NetworkManager to configure your resolver to point to that local network. And note that it deliberately uses 192.168.126.1, so make sure you use that IP address.

So let's walk through this process. The libvirt connection URI is the local one; the base domain, we'll call it okd.lan again; my cluster name is cluster2; and the pull secret is that same empty pull secret that we had from before. I should really store this somewhere other than right here. And what it's going to do is go through and (I should have turned on the debug logging) behind the scenes create the resources that it needs. Oh, I forgot: I created the ignition configs (if we look, here are our ignitions and all of that); I meant to create the install config, not the ignition configs. So if I just do a simple create cluster now, it's going to pull down the Terraform provider, it's going to do everything that it needs to get started. And if I check over here, you can see that it's automatically created a new network, and this is where it's using the internal, essentially round-robin DNS "load balancer," quote unquote, to resolve all those things.
Here in a moment it will start up a virtual machine and do all of that other stuff. It also created a new storage pool, so we have this cluster one; you can see it's underneath openshift-images in this instance. And it's going through and doing its thing. The problem is: this doesn't work. There's a bug in the current one. In a moment it will finish and it will create two virtual machines: a bootstrap and a control plane node. And the bootstrap only has four gigs of RAM, which is not enough. So it turns on, and when the OKD bootstrap process goes through, it pulls down a new image and then uses rpm-ostree to switch to that image, and that image is larger than the temporary space, the /var/run (or /run, rather) space that is available. So it runs out of space and it never succeeds. You can get it to go further (and I think I almost had one running earlier today) if you catch it before it boots. Basically, if you immediately turn off those virtual machines, edit both of them to give them at least 16 gigabytes of RAM, then turn them back on and wait long enough, you should be able to get a cluster at that point, and it will be IPI. I would recommend, again, creating an install config and setting it to have more than just the default single control plane node.

So let's pause for a moment just to recap, because that was a lot to digest. There are two methods. The method that you're showing us now is the automated, integrated IPI method against libvirt, where the installer, the OKD installer, is calling libvirt and creating the VMs; it already downloaded everything it needed. Oh, there it is, yep, the VMs. The previous method that you had started to take us down was the non-integrated, non-automated one, where you basically have to download all the images and set up all the configuration yourself. So just to restate: these are two different install methods, and the reason not to use the automated one is that it's brand new as of 4.7, and there is that memory bug that you were mentioning. Correct? Yeah. And the goal (I'm sure we'll get there before long) is to have IPI, and especially a single-node IPI on something like this, where you just say openshift-install create cluster and it points to that local libvirt, deploys a virtual machine, and gets everything up and running inside of that single node. That would be great. I would love for that to happen, and I actually intend to keep seeing if I can get that to work on my node even after this particular event.

So John has a question for you in chat, Andrew. Yeah: can you set the memory in the install config? Unfortunately, no, and the reason is that it's not actually defined as an option in there. If I do openshift-install explain installconfig and we just step through this, we see we have our platform here. If I do installconfig.platform, and then we have libvirt: we have the URI to connect to, and the defaultMachinePlatform, which, honestly, I don't know what that is, but as an object it is empty, so I haven't figured that one out yet; it's just a resource object. And if we switch to, for example, the control plane, installconfig.controlPlane, and then we go to platform, same as before. If you're familiar with the install process, this is usually where you would set those values. For libvirt, you see we don't have any options in the install config to set them.
For the worker nodes, you could do it by creating the install config, then doing a create manifests, and then going into the manifests and modifying the machine set that it creates to set the right amount of memory. But the control plane nodes are created by the installer, and the installer has no... right, we just walked through that tree, there are no parameters in there for adjusting it. We have seen hidden parameters before, but my cursory searching through the GitHub repo did not turn up any of those, so it might be worth asking on GitHub or somewhere if anybody knows of any. But this has been my cheat so far: just power it off immediately, adjust it, and test it out. So, for better or for worse, that's where we are right now. Oh, no, I want to go into that directory. Yeah, John, I think you were making some comments about vSphere; you can set the sizing there. I think with most of the other platforms you can. I was about to say oVirt has that too. Yeah, the libvirt IPI feels a little neglected sometimes, so it's not there, but definitely with oVirt/RHV, definitely with vSphere, definitely with OpenStack, as well as, I believe, the hyperscalers, you can change all of those things. With the hyperscalers like AWS you would change the instance types that it's using.

So, I just destroyed those resources. That is one of the nice things: it does clean up after itself. If we come back here, we see our network is gone and that extra storage pool is gone. All right, so back to our regularly scheduled program. I have an install config that I created, very straightforward. I was using okd.lan, and I'll do a cat on this guy just so we can look at it as I talk. Oh, I say that and, look, I have it set up for my other one, so I should go ahead and modify these. We're going to use our okd.lan domain name. Coming down through here, we need to make sure that our machine CIDR is set correctly, so this is the 110 subnet, and then everything else we can pretty much leave at the defaults. Again, definitely make sure that you have your public SSH key in here so you can connect over to the nodes. And the last thing we want to do here is set the name correctly. The domain name you end up with will be a combination of this name value plus the base domain, so cluster.okd.lan, which, remember, is what I set up in dnsmasq.

With our install config ready, I'm now going to copy it into this cluster folder. It's just an empty folder I created as a holding spot. The reason I always do this is that when you do the next step, it will, quote unquote, ingest the install-config.yaml and delete it. So if you want to go through multiple iterations without constantly recreating that file by hand, keep it someplace and then copy it in. So we'll put that guy inside of there, and then the next step: at this point we have created our install config, we've staged it in our folder, and remember we already have the bits that we need (the kernel, the initramfs, all that other stuff) staged on our web server. So now we need to create the manifests, and we're going to point it at the cluster folder that we just put our install config into. If I didn't do that, it would simply consume the one that's in the current directory. One thing to be aware of: sometimes we'll see folks who try to do an openshift-install and it keeps hitting really weird errors. Look for hidden files in the directory, like the openshift-install log and state files, because the binary will look at
those files and it will make decisions for you. You're like, "but it didn't ask me that," or "why did it do this? I didn't tell it to do that." So make sure you remove all of those files, and I'll show you what they look like in just a minute. So there we see it consumes the install config from the target directory. If we go into our cluster directory here and do an ls -la, see these two files? Those are files that it uses to make decisions for you, more or less to pick up where an install left off, or something like that. This is why I always recommend: create a sub-folder, copy your install config into there, and then do everything inside of there. When you're done with it, just rm -rf that whole folder so you can clear it out and start completely fresh each time, or do what I do and create a new folder each time and switch over to it.

Okay, so real quickly, I'm going to set our masters to not be schedulable. Why do I have to do this? Remember, in the install config we set the number of replicas for the control plane to 3 and the number of replicas for the worker nodes to 0. The installer assumes that if there are 0 worker nodes, the control plane needs to be schedulable. We don't want that in this case, so I am telling it through the manifest file: make the masters non-schedulable. It's a hard word to say. This is part of the regular install documentation; I should have linked to it from the gist, maybe I'll update it. All of the install docs are linked off of docs.okd.io, and you can go in and look. Here, I'll bring that up real quick. I put it in the chat earlier.

I do want to take the time, and this is actually a good time to ask you, Andrew: sometimes it gets a little confusing which of the installer steps to execute. You showed us generating the install config, generating the ignition configs, generating the manifests, and sometimes it's not clear when to do which. The manifests you just showed us, do you edit those, for example, when you need to change a master to be schedulable? Are there general recommendations? Do you usually always generate the install config? That's a really good point, and it's something that even the OpenShift documentation does a terrible job of pointing out. If you're doing IPI with a hyperscaler (AWS, Azure, Google, etc.), then you can absolutely go in, do an openshift-install create cluster, and just go; you don't have to worry about anything. If you're doing an on-prem IPI deployment, I always recommend doing a create install-config first and answering all the questions. So, on-prem IPI: openshift-install create install-config. It'll ask you all the questions. What do you want to install to? I want to install to oVirt. Okay, what's your oVirt Manager endpoint? What are your oVirt Manager credentials? Which cluster do you want to use? Which storage domain do you want to use? And so on and so forth, and it'll spit out that install config at the end. Then you want to modify that install config to make sure the networking machine CIDR is correct, because by default it'll be, I think, 10.0.0.0/8 or something like that, so if that's not right for your environment, you absolutely need to change it, because otherwise it can lead to those random problems down the road. Now, if you want a compact cluster, three nodes with a schedulable control plane, at that point you can go ahead and do openshift-install create cluster; it'll read in that install config and begin the deployment process.
If you want to, let's see, if you want to modify things like the machine sets, so change the amount of CPU or RAM or something like that, and you forgot to do it in the install config, all of those things you can do by generating the manifests and then going in and modifying them. One of the things that we've shown is using the manifests to automatically create an infra machine set, so that it will deploy infra nodes; it doesn't add the workload to them, but at least deploys the infra nodes right from the start. Now, if you're doing on-prem UPI, it varies, but almost always I end up creating the install config by hand. Sometimes I'll kick it off with openshift-install create install-config (I keep wanting to say ignition configs, which is wrong: install config), which gives me that template, and then I can go in and edit it by hand, to remove the platform section, for example. So: create the install config, then create the manifests, specifically because we want to mark the masters as non-schedulable, unless you want that three-node compact deployment. I very rarely do that, just because my default config is often for testing Kubernetes and stuff like that, so I want to have dedicated physical nodes. That's why I have all of those other random old laptops and stuff like that; I turn them into OKD nodes for Kubernetes. And then from there, go on with the rest of the process. So yeah, we don't make it easy in the documentation to know when to use each one and why. It's still much clearer than how it's usually documented. Thank you.

Okay, so all I did here was mark our control plane nodes as non-schedulable, and then I'm going to do an openshift-install create ignition-configs (the one that I actually want this time), and we'll specify that it's in the same subdirectory as before. At this point it will consume all of those files that we saw inside of there and generate the ignition configs. One thing that a lot of people don't realize is that the ignition configs, and specifically the bootstrap ignition config, are more or less all of these files, base64 encoded and laid out to be put into the right places throughout that bootstrap setup. So if you were to look, if I look at my cluster folder, I have this bootstrap file; see how it's 283 kilobytes? If we were to look at it, it would have all of those files base64 encoded in there, along with the certificates and all the other things.

Okay, so let's switch back over to the gist I was using. Here's our ignition config (I keep saying install config). Here's what we just did: we created our cluster folder, we copied our install config inside of there, we created the manifests, we set our control plane to be non-schedulable, and we generated the ignition configs. So now I need to put those ignition configs onto our helper node, onto the web server. This is so that when our nodes boot up, they're able to reach out and pull down that ignition config so they can do that initial configuration. We're getting better, with the various IPI installs, at being able to attach those without having to host them on a web server, but with the non-integrated method we do still have to do it. For example, if you were doing an agnostic deployment with VMware, you can go in and set a VM property that will attach all of that data. So we'll copy this command and paste it
Here you can see I'm just copying all of those ignition files over to our helper node into the right place, but we need to do it as the correct user, not as that user. If we switch back over here, this is our helper node web server, and if we check the ignition directory we see we have our ignition files. I need to set the permissions correctly; I did note over here that you need to adjust the permissions, so I'll do that real quick. Last time it was a permission issue, so we'll set those, and now we're able to download our bootstrap file. Okay, so we're good to go there. Now we are on to step six, and this is where it gets interesting with libvirt; I had a lot of fun automating this and playing with some new things that I didn't know about libvirt until yesterday and yesterday evening. The first thing we're going to do is create the disks that we need for our virtual machines. I am not on the helper node now, I am on my libvirt node, and I'm going to paste that command in. All we're doing here is a simple loop where, for each one of the nodes, we do a qemu-img create of a 120 gigabyte qcow2 image inside of, remember, my mounted storage pool. All right, so now we have all of those files in there; if, just for giggles, I go over here, look in the storage pool, and do a refresh, you can see we now have all of these qcow2 files inside of there. 120 gigs is usually my default size for these; I would say it's the minimum size you'll want for any kind of production or long-running cluster. Remember that this 120 gigabytes will be used for everything: Fedora CoreOS itself, any container images that are downloaded, any scratch space, any emptyDirs, any logs that are generated. All of those things go in there, and effectively if that fills up it's a bad day; it means you've got a lot of work to do to recover. The next step after that: we'll set those permissions correctly, because right now they're owned by root and we want them owned by the qemu user. Then I'm going to create a working directory for our virtual machine definitions; I've already done this step. This is pulling the kernel and initramfs files from our web server; you can see I'm just pulling them down and putting them into /var/lib/libvirt/boot. If I look inside of that directory, and I really miss being able to do a keyboard copy and paste, but VNC doesn't let me do that, we have our files in here, available to us. The reason I want them locally on this box is that in the next step we're going to tell libvirt to do a local kernel boot of each virtual machine to do the install. We're going to use a couple of variables here. The second reason, from my brief headaches with libvirt, is that /var/lib/libvirt has certain permissions, I don't even remember if it's just standard or extended attribute permissions, and if you try to use a different directory libvirt will start to complain; a VM might boot once and I've seen it not be able to reboot after a while. So that's a special directory for things like storing images for libvirt. Yeah, I've had the same thing, and permissions always seem to be an issue. So, my three variables here: kernel is pointing to the file we just described up here, same thing with the initrd, and kernel_args is important: remember I said we needed to point it to where the rootfs is, hosted off of our helper web server.
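A minimal sketch of the staging and disk-creation steps being described (the helper hostname, web root, pool path, and node names are assumptions, not the exact values from the demo):

    # copy ignition configs to the helper node's web root and make them readable
    scp okd4/*.ign helper:/var/www/html/ignition/
    ssh helper 'chmod 644 /var/www/html/ignition/*.ign'

    # create a 120G qcow2 disk per VM in the libvirt storage pool
    POOL=/var/lib/libvirt/images/okd
    for node in bootstrap master-0 master-1 master-2 worker-0 worker-1; do
      sudo qemu-img create -f qcow2 "${POOL}/${node}.qcow2" 120G
    done
    sudo chown qemu:qemu ${POOL}/*.qcow2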
We're also telling it that it does need networking to continue, and we're telling it which disk it's going to install CoreOS to. In this instance we're creating the disk on the virtio controller, so that's /dev/vda; if you use SCSI, make sure that's set to sda. This is where things start to get interesting. I'm doing static IP assignments instead of DHCP, and what that means is that I need to pass a kernel argument string into each one of the virtual machines that tells it what its IP address is going to be, and that string looks like this. Let's break the string down; all of this is in the documentation, in that agnostic "installing on any platform" doc, so don't worry about having to remember what I'm saying here. The first thing is the IP address we want this node to use, then a double colon, then the gateway for this subnet, the netmask, the hostname including fully qualified domain name that we want to assign to it, the interface we want to configure with all of this information, then "none", I don't remember what that field is but it's always none, and then the DNS server we want it to use. Because all of this information is the same except for two bits, the node IP and the node name, all I did was create that one string, and then you can see I'm just going through and doing a sed-style replacement for each one of my VMs and storing the result in another variable. So our bootstrap IP address is going to be .60 and the node name is bootstrap; our worker-0 is going to have the IP .65 with the name worker-0, and if I echo the worker-0 variable we can see it has done exactly what we wanted: here's our IP of .65 and our hostname of worker-0. Now, the last variable I need right now tells the node where to get its bootstrap ignition, and we'll save this as a variable as well for the next step. All I'm saying is: your bootstrap ignition file, coreos.inst.ignition_url, is on our helper node, port 8080, ignition/bootstrap.ign. So this is where it gets interesting, in my opinion. What I've done is used virt-install; we see we have a disk here, which is the bootstrap disk we created, and the interesting part is this install line, where I'm telling it to boot the virtual machine from those files we just configured. I'm going to do this one real quick and then copy and paste the rest of them to execute all of them at once so we can move on to the next step, but we're just repeating that for each one of the control plane nodes and worker nodes. The one thing to note is that I did a little bit of, you see I used the term jiggery-pokery up here: I'm doing bash variables by reference. Essentially I determine what the node name is without the dash, because the dash throws an error, I create basically the same variable name we created up here, and down here where I actually want to use it I reference it by name with the bang in front of it. I had some fun doing research on bash last night, because I had never used that method before. You simplified it for a bunch of people, because they could technically do this through the virt-manager GUI, correct? And that's what we're going to do right now. So the last thing we're going to do here is a virsh define of each one of our virtual machines.
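A rough sketch of the kind of kernel argument string and per-node substitution being described (addresses, hostnames, interface name, and the helper URL are illustrative, not the demo's exact values):

    GATEWAY=192.168.100.1; NETMASK=255.255.255.0; DNS=192.168.100.10
    KERNEL_ARGS="coreos.inst.install_dev=/dev/vda"
    KERNEL_ARGS+=" coreos.live.rootfs_url=http://192.168.100.10:8080/fcos/rootfs.img"
    KERNEL_ARGS+=" ip=NODE_IP::${GATEWAY}:${NETMASK}:NODE_NAME.okd.lan:ens3:none:${DNS}"
    BOOTSTRAP_IGN="coreos.inst.ignition_url=http://192.168.100.10:8080/ignition/bootstrap.ign"

    # per-node copy with the IP and hostname substituted; stored under a name without
    # dashes so it can be looked up later with bash indirect expansion (${!var})
    for entry in "bootstrap 192.168.100.60" "worker-0 192.168.100.65"; do
      name=${entry%% *}; addr=${entry##* }
      var="ARGS_${name//-/_}"
      args=${KERNEL_ARGS/NODE_IP/$addr}
      printf -v "$var" '%s' "${args/NODE_NAME/$name}"
    done
    echo "${ARGS_worker_0}"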
And then, I don't know why, I don't know if it's a bug or if there's something else going on, but if I just leave it alone it doesn't use the right path; actually, let me exit out of this. If I look at this bootstrap XML definition, even though I told it the kernel file is in /var/lib/libvirt/boot, I have no idea where that name comes from or why. So essentially all I'm doing, and here's the output of that whole big kernel line with all of those variables, I don't know why it does that, but this little bit of virt-xml editing basically forces it to be what we want it to be. We'll go ahead and do that. "Could not find", oh, I bet I forgot a sudo in there somewhere. No. Do you have to be in a specific directory? Because it's looking for those XML files named in node-configs. So first, what's going on here: it defines from node-configs/bootstrap and then says it can't find it, but it's there. Let me bring up a text editor real quick; what I'll do is edit my loop here and just remove the define from it, so instead of doing the virsh define and then editing the XML, I'm only doing the virt-xml edit, which looks like the statement here. Yeah, because it's reading the file name in, that's why. Forgive me for editing off the display, but all I'm doing, as you'll see in just a moment, is adding each one of the nodes to a for loop; this last little bit of the virt-xml edit is to get rid of that weird kernel text. There we go. All I did was use the VM names; the issue was that I was doing an ls of node-configs and it spits out the node name dot xml, which I was then reusing down here. So there we go. Okay, so the end result is that now, here in virtual machine manager, I have all of my VMs. Let's look at what I just did on the command line, how that looks, and how you can do it in the GUI. The first thing I'll do is open up this VM and peruse through the different settings inside of here. CPUs: I'm giving the control plane and bootstrap four vCPUs and 12 gigabytes of memory each; that's just to make sure they have more than enough to do the things they need to do. Then the magic that's happening here is under the boot options: I'm doing a direct kernel boot, and I'm having it boot off of that kernel and that initramfs image with all of those kernel args we built, to do the static IP assignments, tell it where to install to, and all that other stuff. After that it's pretty standard: the disk we created, the network adapter, and we don't really care what MAC address it was given because we're using static IPs, not DHCP; everything after that is standard. The one thing to note here is that we are using Fedora CoreOS stable; if you search here for just CoreOS you'll note there are three that come up, and I happen to be using the stable one. So you can create these manually if you so choose: right click, create new, give it a name, we'll do a manual install; you could also do, I think, import existing disk image if you pre-create the disk. We'll pick the same CoreOS here, CoreOS stable, give it the amount of memory we wanted it to have and the CPUs. I'm just going to let it default-create a disk here; we can put it into an alternate pool if we choose, and give it some sort of name. One thing to note: we want to make sure we put it onto the right network, and I'm going to customize the configuration before install, and this is where I would go in and make sure that my boot options are set correctly.
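A minimal sketch of the define-then-fix loop being described, assuming the per-node kernel-args variables from the earlier sketch; the node-configs directory and the kernel/initramfs file names are assumptions:

    # define each VM from its XML, then force the direct-kernel-boot settings with virt-xml
    for node in bootstrap master-0 master-1 master-2 worker-0 worker-1; do
      var="ARGS_${node//-/_}"
      sudo virsh define "node-configs/${node}.xml"
      sudo virt-xml "$node" --edit \
        --boot kernel=/var/lib/libvirt/boot/fcos-kernel,initrd=/var/lib/libvirt/boot/fcos-initramfs.img,kernel_args="${!var}"
    done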
So we just click that, and then we browse to our /var/lib/libvirt/boot, and if I knew my alphabet this would be much easier. What happened there? It searched, which is not what I was expecting it to do. /var/lib/libvirt/boot, yeah, I know, permissions there again. But we would want to set that, do the same thing for our initrd, same thing for our kernel args, hit apply, and then begin installation, and it'll kick off that install. So if you're creating them manually, it's the exact same thing, just provide those values in there. The reason I'm doing it this way is because it's much faster. You can host those files, you can have it PXE boot, for example; there are a number of different ways of doing it. I find that, especially with a single libvirt host like this, hosting those on the local filesystem makes the install process tremendously quicker. So at this point we've just finished defining all of our virtual machines, and now we can install CoreOS. I'm going to do this part manually just so we can see what's going on. I'm going to start with our bootstrap machine and turn it on. When I turned it on it did that kernel boot: it read in the kernel, it read in the initrd, and now it's going through, and there are all of our kernel command lines being read, and all of that other stuff. It reached out, it is pulling down the rootfs and the ignition file, and it is installing that to the disk for our virtual machine. Now's a good time to say that the reason to watch that boot console is that if you fat-fingered something and it couldn't pull down one of the images, you'd see it there. Yep. So, Neil, we have gone through and staged everything that we need on the helper node; we've created our ignition configs and staged those. At this point I've just installed the bootstrap, but we're getting ready to install CoreOS to the rest of them as well. Very much to Justin's point, it's nice to have the console up so we can see if something goes wrong. You'll also notice that once it finished, instead of rebooting, it shut the machine down; that is quite convenient, and one of the things that's nice here. I'm just going to turn the rest of these on and let them do their thing. I think I note in the gist that I have automation over here that just does a virsh start, and then optionally adds a sleep if you're using slightly slower storage. This is an NVMe device, but you may want to consider staggering these because they will eat up a lot of IO, and I can hear the box that's right on the other side of my monitor spinning up and begging for mercy. Nice. And Neil, we are using static IPs here. So we'll let that one go through, and here in just a moment it shuts down. Now we see that all of them have shut back down, so we can now assume that CoreOS has been installed to all of our nodes. Remember at the very beginning I said that the install process seems super complex, and we've spent, what, an hour now going through all these different steps of staging everything and making sure all of the CoreOS installs are set; well, now all we do is turn these VMs on and the cluster sets itself up, with one exception, which is approving CSRs. I do need to go in and remove the config that we just set up. The reason it didn't restart when it was told to reboot is that, by default, when you set a direct kernel boot, libvirt sets on_reboot to destroy, so the VM shuts down instead of actually rebooting.
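The staggered start mentioned above looks roughly like this (the sleep value is just an example; on fast NVMe storage you can start them all at once):

    for node in bootstrap master-0 master-1 master-2 worker-0 worker-1; do
      sudo virsh start "$node"
      sleep 30   # optional stagger for slower storage
    done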
And if we didn't come in here and remove these kernel parameters, the next time we boot it up it would go through and reinstall CoreOS, and we don't want that; we want it to boot to the operating system it just installed. So we'll do a quick round of edits on our virtual machines to take care of that. Now, if we look at our VM definition here, you can see our boot options have been removed, and if I check the XML, our on_reboot is now set to restart instead of destroy, like it would have been before. So now we simply power them on. We can check over here, so virsh starts; I'm going to stagger these just a little bit because they can be painful, although you don't technically need to, you can start all of them at the same time. I'm going to start these four, and we'll leave the bootstrap console up just to see what it's doing, but what we really care about is a couple of things. One: at this point we can use openshift-install to monitor the install progress. All I'm doing is saying hey, wait for the bootstrap to be complete, give me all the logs, and reference the cluster that is currently being deployed from this particular directory. What this will do is sit here, ping the various APIs, make sure it's able to connect to things, and we more or less wait. I did say we wanted to see what was going on inside the bootstrap, so we know what it's doing, so I'm going to open a new window here. Remember I said it was important that we have an SSH key so we can connect to the node, because that is exactly what I want to do at the moment. I'm going to connect to our bootstrap node as the core user, using the SSH key I provided in the install-config, and the bootstrap drops me at a command prompt where it very helpfully gives me the name of the command to use. It just booted me off, I think. Yeah, so what happened here: I was connected to core@bootstrap and then the connection terminated. That's because it went and rebooted itself while I was busy running my mouth. What happens on the first boot is that it uses the release image: it pulls down the newest rpm-ostree image, switches to that image, and then reboots the node. That's what we just missed. Go ahead and paste this guy, oops, wrong one, so we'll paste this one in there. I'm back on the bootstrap machine to look at, in particular, bootkube, which is what's doing all the work right now, and it's doing its thing. So at this point, go ahead, Justin. While we watch the bootstrap, a couple of points have been brought up. One is from Neil, who just joined; his question was about UPI and why it's required to use static IPs. You talked a little bit about this earlier, but just to repeat while we're doing it this way: it's not required to use static IPs, but I find it easier, because the alternative is DHCP reservations. Why is that the case? Because our load balancer has to know which nodes are which and how to direct traffic over to them. We work around this with IPI because, with IPI, when we provision a new node, whatever IP address the node happens to get from the AWS DHCP server, the cloud controller will update the load balancer with that node's information. We don't have that with UPI; there's not that same level of integration with the infrastructure, so instead we need predictable addresses for the backends. Why not use DNS? I'm not sure what you mean, we do use DNS.
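The monitoring commands being described are roughly these (the directory name and hostname are examples; the journalctl unit name is the one the bootstrap's login banner points you at):

    # watch the bootstrap phase from the install host
    openshift-install wait-for bootstrap-complete --dir=okd4 --log-level=debug

    # in another terminal, ssh to the bootstrap node as the core user and follow bootkube
    ssh core@bootstrap.okd.lan
    journalctl -b -f -u bootkube.service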
I think some of this may also go back to why we're doing this install method: OKD 4.7 has the new automated IPI that will automate a bunch of this, versus what we are stepping through with you, which is essentially the non-automated, non-integrated installation against libvirt. I don't know if that's part of what Neil was asking, but just to give you background about why we're doing it this way. Neil, with DHCP, hostnames that have IP assignments, that's expected, but remember we need a load balancer for the API and the internal API endpoints, which is really the machine config server, as well as for the ingress, which is *.apps. Essentially, if we were to do a UPI and just let DHCP do its thing with DNS name resolution, we would have to rely on the load balancer to resolve the backends by name. Maybe that's possible; I don't think it works with HAProxy. That is a good point, Neil: some of the more enterprise load balancers, I don't know if Citrix or F5 or something like that can do a DNS lookup to find the machine on the back end. So, to your question, can load balancers do DNS-based backend selection: HAProxy, I don't think, can; the last time I checked, which I'll admit was about a year ago, it could not. Maybe that's changed, I should probably look into it, but that is why I've always done either static IPs or DHCP reservations. That is a good point. Justin, I see you kind of scowling at the screen, you're trying to find out if it's possible. Yeah, probably my fault; I should be checking periodically, or even pinging our partners to see if it's something they're working on or can do, instead of having checked once a while ago and just assuming it's still the way it was. Yeah, I'm looking at the HAProxy site to see what they say about integration with DHCP. Just to give everyone background, Andrew and I went back and forth on this. We did want to show the most simplified way to install at first, but when we ran into that first issue with the automated, integrated installer, we decided okay, let's do the UPI method, and it provides a more under-the-covers view, wouldn't you say, Andrew? You really get to see the mechanisms at work. Yeah, definitely. I mean, there are all kinds of different ways to do this; let's not even get into PXE booting, because even though that's possible and it would definitely get a fleet of nodes operational, we figured this would be a small group that probably doesn't have enterprisey stuff like PXE servers and enterprise-level load balancers. So we wanted to say: if we have just one Linux box hosting OKD, what is a streamlined way to get it working? That's what we're showing now. Yeah, as Justin and I were preparing for this I kept telling him, you know, we have this world of possibilities, and this is one way out of about 300 ways of doing it, and I switch between them because I try to stay familiar with a lot of the different ways, so it was hard for me to narrow it down. So at this point, in the background here, we're literally watching paint dry, waiting for this bootstrap to complete. If we switch over to watching the bootstrap logs, we see all of these pod status messages going by. These actually come in groups of four that repeat: cluster-version-operator, kube-apiserver, kube-scheduler, and kube-controller-manager.
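To make the load-balancer point concrete, a static-backend HAProxy fragment for the API might look something like the following; this is purely illustrative, with example IPs and names, not the config used in the session:

    # illustrative only: static API backends in HAProxy
    sudo tee -a /etc/haproxy/haproxy.cfg <<'EOF'
    frontend api
        bind *:6443
        mode tcp
        default_backend api-backend
    backend api-backend
        mode tcp
        server bootstrap 192.168.100.60:6443 check
        server master-0  192.168.100.61:6443 check
        server master-1  192.168.100.62:6443 check
        server master-2  192.168.100.63:6443 check
    EOF
    sudo systemctl restart haproxy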
What we're waiting for is all four of them to be in a ready status. Here we have running-not-ready, running-not-ready, so two readies and two running-not-readies. It typically takes a couple of minutes for all four of these to go through. What's happening is: the bootstrap stands up, turns on, reads in that ignition and the machine config, and, among other things, it uses the cluster etcd operator to instantiate a single-node etcd and then instantiate the machine config operator. The control plane nodes then come up; those three nodes come online, they look to that machine config server instance, they get their configuration, and after they configure themselves they start talking to the bootstrap. The bootstrap then, using the cluster etcd operator, increases the etcd member count to three, adding two control plane nodes, decreases it to two, removing the bootstrap, and then increases it back to three, bringing it up to the three control plane nodes. The bootstrap then hands over all of the Kubernetes control plane operations to those three newly instantiated control plane nodes. That's effectively what's happening in the background; that's what we're watching with all of these pod messages scrolling by: it's handing over all of that information and waiting for the control plane nodes to take ownership of it. We can see it's basically done; it's going to do a few more things in here, and what we expect to see after a minute or two is this bootstrap registering complete. Knowing that it's almost done, I'm going to turn on our two worker nodes. In a moment, this wait-for will end, it will say bootstrap completed after x number of minutes, and over here we'll see this log end with bootkube complete or something like that; at that point we can power off the bootstrap and delete that machine if we want. At this point, what's happened, and you see all of these different YAMLs being applied: on that new control plane, so it's a Kubernetes control plane, the cluster deployed Operator Lifecycle Manager, OLM, and it is now using operators to instantiate all of those OKD services. There, the bootkube service has succeeded, so at this point I'm just going to do a simple shutdown. And if we switch over here, see, it dinged at me, so it took about 10 minutes for the bootstrap to complete. We'll shut that guy down, and now I want to do a wait-for install-complete. Same as before: this isn't actually taking any action against the cluster, just like the bootstrap wait wasn't; it's not saying okay, now you need to do this; it's just monitoring and reporting back the status of what's going on in the cluster. We can actually get this information, this "working towards, 5% complete", by querying the cluster directly, and that's what I want to show over here. Inside of my cluster folder, remember, this is where I placed the install-config file, this is where we created the manifests, this is where the ignition files were generated, and, very importantly, we have this auth directory. Inside of it are two things: one is the kubeadmin password, so if you walk away, the kubeadmin password is stored in this file, and the other is the kubeconfig file that we can use to connect to the cluster as a system admin. All I'm doing is exporting my KUBECONFIG environment variable to point to that file.
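Roughly, the commands in play at this point (the okd4 directory name is an example):

    # monitor the rest of the install; no action is taken, it only reports status
    openshift-install wait-for install-complete --dir=okd4

    # in another terminal, use the generated admin credentials
    export KUBECONFIG=okd4/auth/kubeconfig
    cat okd4/auth/kubeadmin-password
    oc get clusterversion
    oc get nodes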
Now I can use it; remember I have oc in my path, so I can do an oc get node against our cluster, and it reaches out and you see it finds the three newly deployed nodes. So at this point the control plane has been handed over. If I do an oc get clusterversion we can see "working towards", the same thing we see over there: it's at 88% here, it's at 88% there. The important thing right now is that with UPI I have to manually say "yes, I recognize that worker node, please let it into the cluster", and we do that by looking for CSRs, certificate signing requests. You can see I have one pending, two pending, right here. Let's switch back over to our gist, and you can see I have this "two CSRs pending, approve them" shortcut that queries for them and pipes them into xargs to do the oc adm certificate approve. You can do that manually; I'm lazy. What's that saying about spending three hours automating something you could do manually in three minutes? So we'll do that. I don't know who came up with this, I think it was Christian who originally discovered it, and I think he literally did spend two or three hours figuring out the go template to extract this specific string. Wow. The first round will request two of these, one for each of the nodes, so here's the line requesting one per node, and once those are approved there will be two additional ones that come up. Here's worker-0, so we need to execute that same line again and it'll approve those CSRs. Now, if I do an oc get node, what I should expect to see is at least one worker node show up; you can see it's in the NotReady status, and that guy is shut down. I'm half wondering if I didn't mess something up over here; 10...66, that's right, because I thought I saw both of them have the same node name up here, which seems odd: worker-0, worker-0, that seems strange that they're both the same. All approved. So I think I must have done something wrong, maybe in my DNS or something like that, so there are just two references to the same worker node; there should be one for each node. I'm not sure why that's not coming up; it could have been something that went wrong, I don't know what's going on there, but the masters are up. Yeah, that's a strange one, I've never had that happen before. The cluster will deploy with a single worker; what we'll see is that the ingress operator is angry, because it wants two replicas by default, and we can fix that by just changing it to one replica; of course then it's not highly available, but that's okay. If we do an oc get clusteroperator at this point we can see all of the different services that OLM is deploying, and we're just waiting. I'm kind of curious, because we have two minutes to the top of the hour: what does resource utilization look like on your host box, the libvirt KVM box? Yeah, so this system monitor here: usually, in an idle state with this five-node cluster deployed, it sits around 40% CPU utilization; you can see it going up and down, and it's still deploying services, and right now it's at 45 gigabytes of memory used. Not too bad. I mean, it's not as nice as a single-node cluster that fits in 8, 10, 12 gigabytes of RAM, but it's certainly usable on a modest-size machine. And again, the largest of these are the control plane nodes; I use 16 gigabytes of memory, but I don't think you actually need 16 gigabytes, that's a habit from OpenShift.
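The CSR-approval shortcut being described is along these lines; this is a commonly used variant rather than the exact line from the gist:

    # approve any pending certificate signing requests for joining nodes
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
      | xargs --no-run-if-empty oc adm certificate approve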
If we do an oc get node and then an oc describe node on one of our control plane nodes, way down at the bottom we have this allocated resources section, and you can see that the requests-only total is about 5 gigabytes of memory. So that tells me that the slide you shared earlier, with 8 gigabytes of RAM, would work, right? Because usually the biggest issue is not that the node is actually using 5 gigabytes of memory, but rather that it needs at least 5 gigabytes available to allocate to pod requests, and 8 gigabytes should be more than enough to satisfy both the requests and the real resource utilization on those nodes. What's interesting is that you were already up to about 45 gig, so that exceeds, you know, people can get an idea of the floor here if they're doing multi-node; we have 4 or 5 nodes depending. I would suspect at least 2 of those gigabytes are Firefox. Firefox, here, we can find it. It depends on where I'm at; remember I'm sharing a VNC session into my Fedora box and my desktop is macOS, and typically for work-related things Red Hat uses the Google suite, so I use Chrome, just because Google likes to play inside baseball with their stuff and it just works better. So now that the nodes are booting up and the operators are coming online, oh, I see Diane joining back here, I think this is a really good time to take a moment for a recap and any Q&A that we have. How are things going, Diane, in the other sessions? We have had success in the home lab groups: they got three different home lab deployments walked through and demoed, amazing, and, which one was the one that just ended, you guys are bare metal, so it was the vSphere folks; they finished theirs and cut out a little bit early because it was going to be another 10 to 20 minutes of staring at the screen watching an upgrade. So those two are done, and you know, just keep on keeping on as long as you want to. What I'm curious about is whether there are things that people listening in here think we should be adding to the documentation, besides your documentation, Andrew, in your gist file, which I'm going to get you to make a pull request against elmiko's repo some time today so we can get that in. So how has it been going over here? I've probably been running my mouth too much explaining various nuances. It's funny, because if I just go through and deploy a cluster in my lab it takes about 40 minutes, and you'll notice we're at 2 hours, and that's Andrew running his mouth, that's explaining. This is awesome again, Andrew, because I've noticed that for first-timers, whether it's OKD or OpenShift, the docs can really lead you astray. So, Diane, I think this is a great effort to have maybe more streamlined or more verbose documents on deploying OKD, but you know, you explaining the process is what's missing in some of the documentation, Andrew. Yeah, I'll also say that, it's funny, my team, the tech marketing team, has a bi-weekly meeting with the UX group where they go through and say hey, this is what we've been working on, you guys interact with a lot of customers, what do you think? One of the things we've been trying to set up is basically the same thing with the docs folks. I tend to specialize in the install process and the day-2 administrative process, so I have a lot of feedback for them about how to better organize the install docs. It would be great to get that better organized. Yeah, nobody's going to argue with that, our documentation is always a work in progress. I think we're going to try
and do these hop-in sessions, with some variation, I know, like once a quarter, and I think we're also going to try, on Thursdays, to have open office hours for OKD that are all community driven. This has been really useful for those of us in the working group, and I know I'm probably speaking to the converted, because I think all of you I can see online here now are in the working group. And if you're not, Kareem, if you're here and you haven't joined yet, do so; it's usually a time zone issue, I think, we have people from all over the world trying to come in and see this stuff, but the working group has been great in terms of giving feedback and stepping up and doing things. In my heart of hearts I would love to get one product manager from the OpenShift team to come to these on a regular basis, just to hear some of the things people are doing, and I think a lot of the PMs are also deploying home labs and things like that; I think I heard a threat that all of them have to manage the internal cluster for a period of time, each PM. It's not a threat, it is a reality. We're actually going through a process right now of pairing product managers with tech marketing managers, because I love our product managers, they are extremely knowledgeable and deeply technical in their focus areas, and having them branch out and learn more about OpenShift as a whole is nothing but amazingly positive. I've already seen some benefits there: they've already experienced some pain trying to set up authentication, and they've already created some JIRA issues themselves saying this is way too hard, we should make it easier. And I will make all of the recordings of these sessions available to everybody; it does take 24 hours to render all of these videos, so if you're anxious, and my internet got restored as of Monday morning at my house, I'll be able to upload them Monday afternoon or Wednesday, so look for them there and I'll make a post to the mailing list when they're up. This stuff is hugely useful for everybody, and I'm really grateful for your time. So I'm going to leave you to finish this, pop into the other session to see how they're doing, and we'll keep going back and forth until you finish your deployment here. Okay, thank you, Diane. So, while we were chatting, if you were watching my screen, I found out where the issue with the single worker-0 name came from: up here, when I was doing this variable assignment, I did not replace that one, which then propagated down into the others. So they do have distinct IPs, they just have the same hostname. I don't know how that affects OpenShift, or OKD rather; it may not like it, it may be okay with it, so if we end up not fully deploying, that could be one of the reasons why. Yeah, John, you're right, you can't register the same hostname twice, so what I need to do is an oc get node; I need to see which one it actually is. So worker-0 is .65, which I think means I can shut down the other one, which would be worker-1 running with .66. We'll do that and see. What's interesting is that the CPU usage is up there, so it feels like it's doing something even though it shouldn't be doing anything; I don't know what's going on there, but we'll turn it off and see what the cluster does. Yeah, I do remember when they were all Fedora. Yep, just waiting, watching this thing go through and do its thing.
A lot of the time you'll see the authentication operator; that one worries me. That could be because we only have the one node. I know what I could do: I could reload that node, because all I need to do to reload it is come in here to the boot options and repopulate these, and that would work. I mean, you'd pass it those parameters on boot, but that would work. So I'm going to set the node name correctly this time. The issue is, remember, we did the static IP config and set the node name here, and because you can't really change the IP address or hostname of a CoreOS node easily, the recommendation is to treat it like an appliance: blow it away and reload it. That's exactly what I'm going to do. Rather than doing it all automated from the CLI like before, I'm just going to do it manually from here, replace this guy, and then jump back; no, I can't, control only jumps by one at a time, so we'll set this to 1, and that should be the only change we need to make on that particular line. What was the typo again? Yeah, so if we come back over here, in step six of the gist, where I set the variable for worker-1, I did not substitute the correct node name. Okay, got it. So we'll set that there, where'd you go, hit apply, and then, where are the preferences, I can never remember. You've got to go to details. What preferences are you looking for, sorry? So I can edit the XML. Oh yeah, because I want to set it to do the destroy-on-reboot again. So we'll turn it on, and this time, instead of booting to the already installed CoreOS, it's going to boot with those kernel parameters we provided and reload CoreOS on this node. You can see it's now going through the process of writing that out; it'll take just a moment. Let's check on this one while we're here. Oh look, it finished: 17 and a half minutes, and it spits out our kubeadmin password. Very nice. Remember this okd.lan domain doesn't exist outside of this box, and remember I set that up using systemd-resolved. So let's paste that in there, and we get our lovely Firefox warning. I like the Chrome one where you have to type "this is unsafe"; have you seen that with Chrome, when it doesn't like the certificate it forces you to literally type "this is unsafe" into the browser instead of clicking the I-accept-the-risk button? That's a change since I stopped using Chrome. Okay, there we go, there's our cluster; it is a 4.6 cluster, so an upgrade is available, but we're good to go. Let's check on our other worker node while we have a moment: you can see it started, installed, and then stopped. We'll flip this back to restart, apply, and then on our boot options we uncheck this, apply, and then start the node. Just like during the install process, it'll turn on, come up, it already has its network config and everything, and then it'll go to the machine config server, that API endpoint, you can see it trying to reach it from here; so it pulled down its ignition config, and it'll reconfigure itself here in a moment. It will reboot again, because it pulled down a new rpm-ostree image, so it's going to flip over to the new one and reboot, and at that point is when we should have to approve the CSRs for it to join the cluster. That was just a typo; the bigger thing is the console is up. Yeah, the console's up. We do have a degraded operator, and that's the ingress.
Remember I said that with only one worker node the cluster will deploy, but the ingress will be angry, because by default it wants to be scaled to two, and that means there need to be two worker nodes available for it. So this is great; I totally get that you wanted to see that second worker node up, but you successfully showed us the deployment of OKD, and I think we had some good questions so far. Any questions from folks who just joined us, or who have been sticking with us from the get-go, about the install? We used Fedora CoreOS here. Any questions that still don't make sense? Yeah, John, you're correct: I could create a brand new VM, do the exact same process, and add as many worker nodes as I need into the cluster. And you can use DHCP instead of static IPs, with DHCP reservations: after creating the VM, check the NIC's MAC address and make a reservation. I can show you an example of that. This is my self-contained OKD libvirt lab; I actually have a bigger, quote-unquote, lab, a separate one where I do PXE booting and use DHCP reservations. If we switch to here, this helper node is running bind, it's running dhcpd, all those other things, instead of dnsmasq, and inside of here I can see, because I was also prepared to do this from there. In this demo what I did was connect it to that internal libvirt-only network; I could also bridge it to my external network. If you were sharp-eyed when I was reviewing my stuff under the virtual networks, I have this br0 that connects directly out to the same network as the rest of my hosts. If we had wanted to do DHCP reservations for each one of them, then, just as with the other setup, I have an HAProxy config for that, and if we go to /var/named I have a bind zone for those particular hosts in there. Same exact thing in that instance, and, which host do I need, this host, there we go, TFTP boot. If you're familiar with PXE, you can have it automatically boot nodes based off of the MAC address. Normally, because I churn through clusters, especially OpenShift clusters, sometimes three or four a day when I'm creating demos, all I have to do is turn on my VMs, hit escape to PXE boot, walk away, and come back half an hour later to a fully new cluster that's been deployed. So you can absolutely have it do that. On a final note, because we're going through the details of adding a node an alternate way, PXE booted, I'm kind of curious: you also said that you would need to approve a CSR, so it's not just that the node boots, someone, some operator, has to approve it. Yeah, so with UPI, regardless of the platform you're using, you always have to approve the CSRs for newly added nodes that you want to join. So that would be the third point, John, to your question. You can see I have a pending one here; being quick with my worker-1 here, see how it's only been up for, my cursor disappears, but it's only been up for a minute or two, because it booted, got the new rpm-ostree image, applied that and rebooted, and then came up, and now it wants to join the cluster. So now I need to approve this CSR; let me find my xargs one-liner here, and I'll approve it. We can see the pending one here; this node-bootstrapper one is the first thing that will request a CSR for each new node.
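As an aside, the single-replica ingress fix mentioned above can be done with a patch along these lines; this is a sketch, not the exact command used in the session:

    # scale the default ingress controller down to one replica (not HA, but fine for a lab)
    oc patch ingresscontroller default -n openshift-ingress-operator \
      --type merge -p '{"spec":{"replicas":1}}'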
Give it a second and we should see another pending one show up, and these CSRs do disappear; after, I think, 24 hours they'll automatically drop off the list. There, here's our pending one, and we see it's for worker-1 this time, so we'll do the same approve, and now if I do an oc get node we have our worker-1. Awesome. It'll go through and deploy all the various services eventually; the ingress controller, right, the operator will say hey, wait, there's a new node, and it'll deploy another ingress router, and then, that's an error, let me make this bigger, and then this operator being angry, now that one's updating because of the new node, the ingress will eventually stop being angry and it'll have two nodes to lay its stuff out on. And if we look here, yes, this one is also upset; that's because we just added the new node, so all of those operators are going to adjust for it and deploy additional services, and once that new node has fully joined it'll go back into a healthy, happy status and be ready to go. One thing to note with 4.7: if your machine config pool is unhappy, for example in this instance where there's a node that's not ready, it won't allow you to do updates. So, back to the deployment documentation that Diane mentioned: it sounded like you wanted to make a few edits or changes to your deployment doc before you submitted it. Yeah, I do need to make one edit. Over here, this is a bug in my documentation; this is what led to that one worker node not being valid. Other than that, I don't know if it can be added as-is or if we should incorporate it into a larger thing; I'm happy either way. Well, let's do this: let's get you to make a pull request on Mike's stuff today, so I can at least get one new chunk of docs in there, and then we'll get people to comment on it. It would be wonderful if we could get that one in, and it would make Diane's day. There we go, all the operators are happy. If we come back down here to our cluster settings, we can see, I said that it would allow me to update, it will after a minute, but we're ready to start testing. Now that all of them, one last check on the resources on your box, now that the second node is really building out? You can see the idle state right now: 48 gigabytes of RAM in use, somewhere between 40 and 60 percent CPU, and this is an older CPU, a Ryzen 5 2600, so that's three or four years old now. And then the network, I know it's super hard to read, it's tiny even for me, is right around 2 megabytes per second, so that's 16 megabits. So with 5 VMs running a cluster, I mean, this is a decent size; like you said, older hardware, someone can spin this up wherever they work or in a home lab. You don't have to break the bank to spin this up on a Linux box. Yeah, really the only investment, quote-unquote, that I made is an NVMe device; I think I spent $80 on a 512 gigabyte PCIe NVMe device, just make sure your motherboard is capable of taking one, and it solved all of the deployment performance issues. You saw it took us less than 30 minutes total to deploy the cluster. Before that I was trying to use an LVM device with LVM cache, two spinning drives in a RAID 1 with an SSD cache, and it would deploy, but you couldn't really do anything with it, and the deployment took the better part of an hour instead of half an hour.
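The checks being referred to as the second worker joins are roughly these; nothing exotic, just the standard status commands:

    # watch new nodes join and verify the machine config pools are healthy before upgrading
    oc get csr
    oc get nodes -o wide
    oc get mcp
    oc get clusteroperators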
So yeah, I asked, and my wife was very kind and said sure; if it saves you time, that's good authorization to purchase. That's great. I don't see any more questions in the chat, and the cluster is up, you showed us that. I think we had a great chat back and forth, and I totally understand about the document, wanting to get it up so that other people can run through this. It would be great, because I think it's very clear and well written, and now they have this recording to step through as well if they have a question. Yeah, I think this was excellent; this is the perfect way to end the day, with this up and running, and I think you guys nailed it on the head here. So thank you so much for taking the time today; I know it's a Saturday, we may all be in lockdown and all that, but you still need to go out for a walk and get some fresh air, wherever you are. I think you're east coasters, both of you, Andrew and Justin, is that right? Yeah. So there's still a little light out there in the background, even though you're probably in your basements, as I am. I'm not seeing any other questions. John, thank you for joining us and sticking with us all day, and for all your wonderful feedback; love this. So when we do this again next quarter, John and other folks, we're going to get you to do walkthroughs like this; you don't talk too much, that's why we put you in chat. Between you and Neil, John, both of you are going to be on the hook for the next one. I'd like to do these about once a quarter, depending on the release cadence for OKD, because this really is very, very helpful. And if people have any issues they want to file against the documentation, that would be great; if there was a platform or a target that we didn't hit, open an issue that just adds a stub, or make a pull request for a stub in the repo, and we will endeavor either to make you do it or find people who will collaborate with you and make it happen. It's really been great; the feedback people are giving is just wonderful, so thank you both for doing this today. Thank you, Andrew, especially, because he was up late last night with a couple of issues with the installer; I saw in my chat pretty late that he was banging away at the keyboard getting a couple of things fixed. I don't mind, it's a lot of fun, and like I said, I learned a few things today, so it's worth it for me. Thank you. Awesome, well, you're getting thunderous applause and clapping in the chat, so well done, and we will see you all at the next OKD working group meeting.