At that time — not that they don't put up a fight regardless — you would think that they don't get enough screen time playing Fortnite and whatever other nonsense. My son has been streaming Terraria, which is its own amusing and hilarious set of things. I can hear an echo of myself, but a few seconds later. Andrew is audible. Yes, so that is good. All right. Am I audible, audience? Can you hear me? "Sing for us." I saw that, and no, I'm not going to sing. Nobody wants to hear me sing. No one wants to hear singing. All right, I did not change anything. I literally just turned things on and off and it came back. So yeah, you applied the standard Windows fix. I did. I don't know what happened there. You're back to being out of just my left ear, but I'll manage. Okay, but now there are some speakers running, so it's a bad echo. Want to move the monitor further away from the mic? I would say you hear the echo first and then your loud, clear voice. Yes, we still hear the static background. Yeah, there is a staticky background. I don't know what that's about. Is your fan running, Andrew? Your laptop fan? Yes. Is your mic near it? Like four feet away. Weird. I do hear us, a few seconds delayed, very faintly in my headphones. How would you hear us very faintly in your headphones? I mean, okay. That's probably my monitor. Sorry. Andrew has a great mic echo. I can mute and see if that causes it to go away. Yeah, mute it for a second. No, I definitely still hear a fuzzy background noise. Muting it made it louder instead. Oh, good. That's impressive, that muting it makes it louder. I really don't know what to make of this, folks. I really don't. So let me mute myself and we'll see if that helps. Does this mean I have to act like I'm crazy and have a conversation all to myself? No. Background noise is still there. I don't know how to fix that. Possibly a laptop mic, for sure. Let me just make sure that this is disabled over here.
Did this the other day without issues — well, without major issues. Google Hangouts noise — use Google Hangouts. Okay, it could have been my laptop. So that laptop mic was unmuted. Folks, do you still hear it? I disabled the laptop mic from OBS. There is no way to do that. You still hear any background noise? Yes, it's still there. Wow, this is really, really, really crazy. All right, so if we both mute, it's still there. All right, audience. Yeah, the background noise is still there. I'm not sure. Testing, testing, one, two, three. All right, got it. Perfect. Now we can get started, I think. I believe we are all set. What was the issue? So for whatever reason, there are multiple audio monitors in OBS, because I think Eric uses the output from his sound card, or the audio coming out of his headset, in the setup. So there are four different inputs you have to deal with. Some of them are local, like desktop audio and mic, and then some of them are scene-specific, like headset audio mic and headset monitor capture, which are things Eric is having to do on his rig that are not what I'm used to. So yeah, part of it's familiarity — you know, this being our third stream officially ever — and the other part of it is just good documentation. So we'll get this fixed, folks, don't you worry. Right now we have good sound and good picture from what I'm seeing on my monitor. So Eric — or not Eric, Andrew — could you reintroduce yourself for the audience as if we have just begun? You mean we haven't been doing this? No. So to introduce myself, I am Andrew Sullivan. I'm a technical marketing manager with Red Hat's cloud platforms business unit. While I have not been doing the whole live streaming thing for very long, my son is much more experienced in this than I am. He streams Fortnite and Terraria and various other things in his copious amounts of spare time these days because, you know, homeschooling is just wonderful.
It's lots of fun. I can't imagine. I have a four-year-old, thankfully, so we just have to home-preschool, which is a lot easier. I am strictly tech support. I give my wife 1000% full credit — she's the one who handles all three kids actually doing the school-at-home thing. So the fun part is, every once in a while I'll get a message from her because Pi-hole is blocking something that the kids need. But yes, I am beginning that journey of the household child-protection-on-the-internet kind of deal. The next rainy stretch we get, which is coming up, Max is getting his first computer, which is a very, very old MacBook Air. So we'll be playing on Lego.com, and maybe one day he can teach me how to actually live stream. That'd be great. Well, see, I only use it for ads and for the tracking protection, because I'm one of those weirdos. Yeah, I actually just flipped over to the Cloudflare malware protection thing, and that seems to help. There's a bunch of other stuff that I have done in the past — like I've done Pi-hole, but it blocks weird things in the Facebook app and does weird stuff that my wife doesn't like. See, there's your problem: you're still using Facebook. Well, I don't use Facebook. Other users that are of significant importance to me in this household use Facebook, so I have to support them. I will say, if you want low effort, use NextDNS.io. I actually set my mother up with them and it has been zero effort to get the ad blocking and stuff working. Nice. Yeah. So I'm using the similar Cloudflare malware-protection DNS servers, and it was super, super, super easy to do. And a lot of the pop-ups and, you know, nonsense has just gone away. All right. So anyways, enough of me rambling about internal house IT stuff. Back to the introduction a little bit. So, my background: a few years ago, I was a customer.
As a customer, I ran — either as an administrator or as an architect — virtualization infrastructure, storage infrastructure, all kinds of other things. So that gave me a good background coming into Red Hat, where I came from a storage company, along with the virtualization piece — especially now that OpenShift Virtualization has been publicly re-announced, or re-re-re-re-announced, as, you know, the feature formerly known as container-native virtualization. So it gives me the ability and the chance to really explore a lot of different aspects of technology: OpenShift and Kubernetes with containers, and then virtualization kind of blending old and new, and all of that other stuff. But I'm not here to talk about OpenShift Virtualization. We're actually going to do that on Thursday with Rease. It's Thursday morning Eastern time, Thursday afternoon UK time, where Rease is. Yeah. No, I can't wait for that show either, because all the virtualization stuff we have going on right now is really, really exciting to me. Who would have thought that it would be exciting to be a part of virtualization again? Right. Exactly. Like, containers were all the rage, and now it's like, whoa, we still have all these VMs lying around. Now what? Yeah. So, as I've been chit-chatting here a little bit: I'm going to talk about a couple of different things. The first one that I'm going to bring up here is the documentation, because I want to play along with it — I don't want to do this from memory. I would rather look at the documentation so that we can walk through it together. I also understand that we added a new section to the documentation around what we're looking at now, which is creating a custom virtual machine template. So I specifically want to walk through that so that we can customize the virtual machine that is going to be used for all of the CoreOS nodes, be they master or worker, as a part of the deployment.
And then the other thing that I brought up here was cloud.redhat.com, right? Because I will need to go in and grab my pull secret — I don't ever copy it anywhere, I just log in, because that's easier. And then the other thing that I've got is my Red Hat Virtualization Manager. So this is my home lab. I've been using this on and off for various things for the last couple of years, actually even before I worked for Red Hat. It's a relatively modest home lab. I have two servers; I think both are running Ryzen 5 2600 CPUs. Yeah. Not bad — pretty good. It does what I need it to do. And then I have that separate from the stuff that runs the Pi-hole and all of that. The only place it touches is my home server. So I have the home lab, but the home server runs the storage, which I've actually been having a lot of fun with. I have an NVMe drive with VDO — you know, compression, deduplication, et cetera — on top of it, and then that provides NFS storage out to the home lab. It's worked really well. It allows me to set up, tear down, mess around, and kind of do everything that I need to do for the things that we do — it seems like I'm setting up OpenShift clusters at least once a day nowadays. At least. Yeah. And it gives me a chance to not have to rely on AWS or Azure or any of those other services, which are sometimes flaky. In addition to which, internet bandwidth these days is always a challenge to come by, shall we say. So, a quick tour. If you haven't seen the RHV interface before, this is the RHV 4.3 interface. I think I'm running 4.3.9. Yeah — 4.3.9. Pretty straightforward. If you weren't aware, if you didn't catch the various news, the RHV 4.4 beta is available today. I actually got the email saying it was available like two hours ago. Yeah, I just got it too.
So if you're interested in the RHV 4.4 beta, definitely go and check that out. There's lots of interesting stuff happening there, but again, that's not the topic of today's stream. So I'll do a quick tour of my home lab here. Virtual machines: I've got a few things inside of here. You see, I actually have a powered-off cluster that I manually provisioned — this is what I do a lot of the OpenShift Virtualization testing from; I'm doing nested virtualization with it. And then I use Christian's extremely awesome helper node, although I don't use the straight Ansible playbook version — I do some modifications to remove things and change things to suit my particular deployment. Like, setup-specific things? Yeah. Because of the way that I have my house set up, I actually have three separate networks. I have all the home stuff, which has the Pi-hole doing the ad blocking and stuff like that. And then I have a work network, which is where my work laptop sits, along with the management interfaces for the home lab and all that other stuff. That one has no blocking or anything like that, because I don't want to have to worry about it getting in the way. Yeah — I've had repos getting blocked before. Yeah, and it's just one extra layer of things: if the company wants me to have access to it, I'll have access to it. And if you have good eyes, you can see that one of these things up here is a virus scanner and all that other fun stuff. And then the third network is the lab network. As a part of that, the helper node runs BIND, which handles DNS, but it also has DHCP, and I've configured dynamic DNS updates from DHCP inside of there. So we can pull all of those addresses, and it just has a lot of churn. The only thing that sits inside of there is, well, lab-related things. So it makes it pretty convenient without having to worry about, you know —
You know, I don't want to break my work stuff, because then I'm unproductive, and it turns out that when they pay me, they want me to do productive work. It's amazing how that works. I know. So with that being said: I have the helper node — I don't remember if that's running CentOS or RHEL, but either way it uses Christian's playbook, and it does a phenomenal job of providing all those services. And then there's simply the hosted engine. This lab started out as a single-node deployment: I had one server that I deployed self-hosted engine onto, and then I remounted NFS back off of the local drive to host virtual machines. I actually have a gist out there somewhere that showcases that. Let me see if I can bring up something that's actually logged into something. Something that's logged into something — yeah. Well, I started an incognito tab so I didn't have to worry about any of the other things popping up, although I only use Firefox for lab-related stuff anyway, so I don't know why I was so worried about it. It looks like I'm not signed in to GitHub on Firefox anyways. Oh, fun. Oh, I closed it out. Usually I use Brave for my primary browser, but Brave, being Chromium-based, loves the RAM, and my laptop hurts enough. I just closed that; we'll let it be. Good call. Anyways — self-hosted engine. It's running off of... if we come over here to the storage domains, we can see I have a single storage domain hosted off of NFS. Funny enough, it says 750 gigabytes; it's actually only a 250, maybe 300, gigabyte drive, right? VDO — deduplication, et cetera. Really helpful feature, especially for a lab. Yeah, no kidding. Networks-wise, I've got a couple of networks defined in here. You can see this is my primary management network, and then VLAN 101 is the lab network. Where I'm going with all this is: this is the information that we'll need in order to do the OpenShift install.
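As a rough illustration of the space savings mentioned above — a drive of roughly 250 GB presenting as 750 GB through VDO — the apparent multiplier works out like this. These are the approximate numbers from the stream, not a measurement; real savings depend entirely on how compressible and duplicated your data is:

```shell
# Approximate numbers from the stream, not measured values
logical_gib=750    # capacity the RHV storage domain reports
physical_gib=250   # actual drive capacity behind VDO

ratio=$(awk -v l="$logical_gib" -v p="$physical_gib" 'BEGIN { printf "%.1f", l / p }')
echo "effective space multiplier: ${ratio}x"
```

On a live system, the `vdostats` utility is what reports actual physical-versus-logical usage for a VDO volume.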
So the last thing that I'll need here is my handy-dandy terminal window, which I'm going to increase the font size of a little bit. Thank you. Is that better? I can keep going. Let me look at the Zoom window to get a better view. Go a little bigger. Wow — okay, yeah, that is very readable to me. All right, good. So this is all information that we'll ultimately need once we get to the point of creating our install-config and everything else inside of our environment. Last but not least — oh, actually, I did need that window — I have our install tools. If we look at, for example, oc version, you can see that I have 4.4.3. I pulled these yesterday, because I always want to check and make sure that everything has a reasonable chance of success. It did, by the way. Good. So I just pulled the new GA versions of the OpenShift tools yesterday. FYI for anybody watching: I'm not one of those people who can talk and type at the same time. Oh, that's totally fine. Now that I know that, I can help you as you're typing away and hacking on things. Some people can do that without an issue. There's one of the product managers that I work with, Steve — oh yeah, I have seen him hold a microphone while talking, while typing one-handed, all simultaneously, and it completely blew my mind. That's a skill — my brain doesn't work that way. Yeah. We had Eric on his walking treadmill yesterday, and I was pretty mind-blown. So if you can talk and type at the same time, intelligently, without saying a bunch of ums and uhs, that's very impressive. The walking thing I'm okay with; I find that it affects my typing accuracy, so I have to be super careful and slow down. So all I'm doing here is just cleaning up my usual working directory. Sweet.
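If you'd rather script the "did I pull the right client" check than eyeball it, a quick sketch. The `Client Version:` line format here is what the 4.x clients print, but it does vary between releases, so treat the parsing as an assumption:

```shell
# Hypothetical captured output; on a live system you would use: out=$(oc version)
out='Client Version: 4.4.3'

# Pull the version number off the Client Version line
client=$(printf '%s\n' "$out" | awk '/^Client Version:/ { print $3 }')
echo "client: $client"
```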
So, with all of the information that we have — oh, I wanted to do one other thing, which was to remove the cached credentials file. Oh yeah. Essentially what I'm trying to do is mimic doing this for the very first time, as if I had never done it before: encountering all of the various prompts, all of the different errors, issues, and everything else that you might see as you're going through this process. With all of that done, we can come back over here to the documentation. So the first thing that we want to do is pull down our release image. Somewhere down here... Azure — nope. GCP images... There we go. We're going to use the OpenStack image. I can't — thank you, appreciate that. That might be just a smidge too big, but that's fine; leave it as is. All right. Remember, RHV, much like OpenStack — or maybe OpenStack, like RHV; I think RHV predates OpenStack — anyway, OpenStack, like RHV, is KVM-based. So we can use the same image for both of them. What I've done is follow the first step in the directions, which is to go look at this JSON file and then use the URL built from it in order to pull our image. So I will switch back over here. Yeah, the image pull is very important, because we need everything that is contained within it. See how I did that? You were typing. I did — it was helpful. The Zoom window gets in my way here. And then I also need the base URI. Is that base URI specific to the region that you are in? Correct — actually, I don't know if it is or not; looking at the JSON file, it's at the root level. I thought that was interesting. So yeah, I'm not sure, to be honest. Yeah, I don't need it. Okay. Grab the base URI — do not press Enter. Under the OpenStack image, copy the value of path. So what we're doing is coming here and pasting that in, so that we build up the full URL for our image. We'll see just how good my gigabit internet is. It's way better than my 120-meg internet, I'll tell you that much.
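The build-the-URL step described above can also be scripted with jq instead of copy-pasting values by hand. The stand-in file below mirrors the metadata layout as discussed on the stream — baseURI at the root, per-platform paths under images — but the field names and values here are assumptions for illustration; check them against the real rhcos.json:

```shell
# Stand-in for the real rhcos.json (hypothetical values; field names assumed)
cat > rhcos-meta.json <<'EOF'
{"baseURI": "https://example.com/rhcos/4.3.8/",
 "images": {"openstack": {"path": "rhcos-4.3.8-x86_64-openstack.x86_64.qcow2.gz"}}}
EOF

base=$(jq -r '.baseURI' rhcos-meta.json)                # root-level base URI
path=$(jq -r '.images.openstack.path' rhcos-meta.json)  # OpenStack/RHV qcow2 path
url="${base}${path}"
echo "$url"
# curl -LO "$url" && gunzip "${url##*/}"   # the actual download, then unzip
```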
You are a very lucky person to have gig internet. That was the one downside to leaving the Raleigh area when we moved up here to Michigan: there was definitely not going to be gig internet in our future. We're waiting on 5G here. Yeah — or what's the one that Elon Musk and company are doing? Starlink. That might actually be an option for us, because we're at a high enough latitude. So — I'm a nerd, I love space, and satellites and all that stuff really fascinate me, so I've been keeping up with it. He's actually opening up Starlink at the higher altitudes — or rather, higher latitudes — because that's where the company has satellites positioned right now. So we might be able to get that, and it'd be faster than our terrestrial internet, but it would still be high latency, which would not work for live streaming. I don't know how they're going to get around that bit. We'll see. I know enough to know that I don't know very much about it. Well, I used to have to control and point stuff at satellites for a living, so that's part of it. Just aim upwards, right? Yeah, that's what we do — we just aim upwards. With enough power, you can connect to anything. So, while we were chatting, all I did in the background was gunzip the qcow2 image, and I'm going to import that. For anybody who hasn't experienced RHV in a good long while: there is no longer the need to have a dedicated ISO or image domain or any of those other things. It's all one big happy storage domain, which makes things a lot easier. Oh, thank you. Yeah, the various storage things that you could put together in the older versions always confused me as far as where to pull which asset from, because someone would always create something for some bespoke project.
And then it would just snowball as far as where things ended up. Yeah. Well, and it just complicated administration in general — you know, you needed a block storage domain for VM disks, but an NFS storage domain for ISOs, as well as templates. It's just much easier doing it this way. So all I'm doing is importing now. I do have the terrifyingly quick speed of one gigabit going between all of these machines, so it takes a little bit. And I skipped over the part of using Ansible for this. Did I skip an important step? No — well, I gunzipped it, but I haven't gotten to the part in the docs where we need to import the disk yet. Got it. The only thing that this part of the documentation goes through is seeing what resources are available in your RHV cluster, the number of virtual machines, and stuff that we already kind of did at the beginning. Nice. So here — this is the step we're going to get to, which is attaching the disk that we just uploaded to a new virtual machine that we will turn into a template. Let's see — it's finalizing... and complete. Now we can come back over here to our virtual machines and we'll create a new one. Sweet. So you already have names picked out and everything? No, I go by the incredibly creative and unconventional approach of naming them by use. I know — my son's desktop is named Jackson-desktop, my daughter's desktop is Lily-desktop, and so on. For my project stuff I use the Looney Tunes naming convention, but for the home stuff, it's definitely Julie's iPad, Max's MacBook Air, you know. My laptop and the stuff I use for work, I still name after weird things too — Looney Tunes. This one's actually called Michigan J. Frog. Yeah. I will sometimes, too — for example, the last time I did a demo video, I think I used Star Trek characters.
So I used the original Star Trek and the Next Generation characters. I think I did one where I used sandwiches — I was showcasing labels and affinity/anti-affinity, and I used bread types, cheese types, meat types, because, let's face it, I like food. We're on cheese now — but you wouldn't want your pods running on American cheese, right? You would want them on something fancier. So all I'm doing — and I know I haven't looked at the docs, but I'm relatively sure this is what's in them — is creating the template virtual machine that we're going to use. I attached the disk that we just uploaded, and I set the network adapter to be my lab VLAN. We come back over here: New Virtual Machine — leave the template unchanged; that's referring to this guy up here. Okay. Oh, I didn't set the operating system; I should do that. Leave it optimized for desktop. So we will set this to CoreOS. Did I skip it? Is it abbreviated? There it is. Optimized for desktop. So you can create different templates inside of RHV that have different characteristics. There's also this instance type, which is like t-shirt sizing. You can use your own sizing for things — you know, a small VM represents, I don't know what exactly off the top of my head, we'll say one CPU and two gigs of RAM — and you can easily pre-create those. The "optimized for" setting is an interesting one, in that it pre-tunes KVM, right? When it launches the QEMU instance, it pre-tunes based off of what you select here. So if we select high performance, for example, it will do CPU pinning, it will create IO threads, it will do a number of other things in order to facilitate that profile. Interestingly, prior to RHV 4.3, the high-performance settings — things like CPU pinning — would have stopped live migration from happening. With 4.3, live migration can still happen and all of that, even in that performance mode. Yes. Yeah.
IO threads in particular can make a difference if you have a lot of IO happening, or if you need ultra-low-latency IO. But you can also set that without having to set the high-performance profile — you can see here, "IO Threads Enabled." Oh, cool. Nice. Most of these I'm going to leave at defaults; I'm not going to walk through every setting inside of Red Hat Virtualization and talk about it. Please don't. You don't want to hear a dissertation on "VirtIO-SCSI Enabled." Yeah, no, I am good, thanks. So here I attached the drive that we just uploaded, and I'm going to make the disk bootable. Continuing down: "Instantiate a network interface for each vNIC profile." Okay, we did that — that was me associating the network interface. So: 16 gigabytes of RAM with a guaranteed size of eight gigabytes, and four CPUs. Oh — set virtual cores per socket, or cores per virtual socket, to four. So we'll come back up here to that: we want one socket with four cores instead of four sockets with one core, and click OK. So now we have our fancy-dancy template machine. Nice. One thing that we want to do — actually, let me make sure we're following along on our steps here. Yeah. One thing that I do want to do, if you were paying attention — cancel this — is fix the disk size, which is only 16 gigabytes. We want to make sure that it is at least 32 gigabytes, and preferably something like 120 gigabytes in size. So I'm going to come over here and edit our disk and extend the size — I'm going to bring it up to, like, 60 gigabytes. So that's... extending it by 44. Golden rule number one: never do arithmetic in public. Yes. So if the docs say 120 gigs preferably, but you expanded it to 60 — any particular reason why you didn't just go the 120 route? Because, off the top of my head —
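Since the RHV edit-disk dialog asks for an amount to extend by rather than a target size, the "never do arithmetic in public" step is just target minus current — a trivial sketch using the sizes from the stream:

```shell
current_gib=16   # size of the imported RHCOS disk
target_gib=60    # size chosen on the stream (docs: 32 GiB minimum, 120 preferred)

extend_by=$((target_gib - current_gib))
echo "extend the disk by ${extend_by} GiB"
```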
I'm not sure if it does thin or thick provisioning, and I don't want to take the risk of running out of capacity in the middle of the demo. Yeah, that would be bad. So it is important to make sure that you allocate enough space, because of course there are the CoreOS logs, container images, all the graph storage — is it still called graph storage? That was a Docker term. No — what is it, layer storage? I don't know what the current term is. But we want to make sure that we have enough capacity that it's not going to cause an issue. I think early on, in the tech preview for IPI, we encountered some issues where the drive would fill up and it would cause the cluster to go offline because of not enough capacity, and that's bad, obviously. So: our template is created, and our disk size has been expanded, much like you would expect with anything else. It's not running cloud-init, right? It's running Ignition, but it'll automatically expand to fill up all the available space. So, from the chat — who is this? Rolf. He says 32 gig minimum, 120 recommended for production environments. Yep, and I think that's in the documentation. It's definitely in the documentation for VMware and the other UPI bare metal installation methods, so I'm going to assume it's in this documentation too — there's usually a sister page somewhere. So the last step in the documentation down there was to turn this virtual machine into a template. I'm going to creatively name this "RHCOS template v1," keeping all of our storage stuff the same — we're not changing the storage device it's on or anything like that. And we do not want this "Seal Template" box to be checked. Why is that? Because that will attempt to run — what is it — there's a command that it attempts to run that basically resets the system ID and all of that other stuff inside of there.
All the UUIDs and everything. Yeah — like Sysprep does for Windows and all that other stuff. Got it. And I believe that we want this in cluster two. Can I move you? Yeah, we do want it in cluster two. You're kind of misbehaving. Oh — hit cancel and then just redo it. Just the Escape key or something. There we go. Oh, there you go, that works. Technology is hard. Always. If it was easy, we'd all be doing it. So I do have two clusters. If we were paying attention to our hosts over here — one of these, this is where I run the self-hosted engine, is an old, old Lenovo W530 Intel-based laptop. Is it a big, heavy one? I feel like I had that one. Yeah — yes, I had one at a previous job. It rivals the 15-inch MacBook Pro in weight. Right — I think the power brick makes it heavier, actually. And you can get two of them, just for fun. So this one runs the helper node; it runs basically the things that I keep running most of the time, because I'll take the two primary hosts and turn them off on the weekends, stuff like that — trying to be environmentally conscious or something. Yeah. I'm waiting for that to become a lot more automated — like, out of the box, have something manage power everywhere kind of deal for my home. That'd be great. And it's funny, because if we ran a true data center scenario, you can do things like that, right? RHV has the ability to schedule based off of using as few hosts as possible, and with fencing you can power hosts on and off automatically. There's a lot of stuff inside of there that can all work to help. But I'm not at a data center — I'm at a house. I can turn my head like this and I can see one of my hosts over here. So — the message came up saying it was done with the conversion. I'm going to go over here to the templates, and we can see we have this RHCOS template v1.
It's on cluster two and everything; everything's going according to plan. All right, so at this point we've created our custom virtual machine template, so we will just walk through the documentation around this. Blah, blah, blah, telemetry... So, requirements. We saw we have 4.3.9. We do have a data center whose state is up — that is a good thing. Yeah. We do have at least one RHV cluster — also a good thing. A minimum of 28 vCPUs — I don't have that many, but we'll chance it. Yeah. And I think 112 gigs of RAM. So one thing to note — in my... where is it... in my cluster configs: by default, RHV is ultra-conservative, right? It doesn't do things like turn on memory deduplication, et cetera. Or overcommitment. Right. So you can go in here on your cluster and do things like set memory optimization so that it'll overcommit to 200%, count threads as cores, balloon optimization, and KSM — which is kernel sharing... kernel memory sharing... kernel security module? No, it's memory deduplication — I don't remember what KSM stands for. I'm assuming somebody will chime in with that. Someone who wrote the docs might chime in. Maybe — we'll see. So we're good to go there, not only because I think I might have that much actual RAM, but also, due to overcommitment, it'll take care of it all there. So, RHV storage. This is an interesting one. etcd has some fairly strict performance requirements — it really wants less than 10 milliseconds of latency. For whatever reason — and I don't know if this has always existed or if it just came to my attention once the RHV IPI process started becoming more prominent — it can cause issues, particularly with deployment. So if we follow these links out: this one will take us to this page, which we're going to quickly log into. And once we get to this page — you can close that tab now —
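To make the 112-gig RAM requirement concrete against that 200% memory optimization setting: with overcommit, RHV will schedule VMs whose total defined memory exceeds physical RAM. The physical figure below is a made-up example for illustration, not the actual lab hardware:

```shell
physical_gib=64      # hypothetical physical RAM; substitute your own
overcommit_pct=200   # the cluster memory optimization setting discussed above
required_gib=112     # documented minimum RAM for the RHV IPI install

# Memory RHV will allow you to schedule across VMs at this overcommit level
schedulable_gib=$((physical_gib * overcommit_pct / 100))
echo "schedulable: ${schedulable_gib} GiB (need ${required_gib} GiB)"
```

Whether the VMs then run comfortably depends on how much of that memory they actually touch, since ballooning and KSM only reclaim what isn't truly in use.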
Once we get to this page, we will end up finding a link to this article from IBM, which discusses using fio in order to gauge whether or not your storage has enough performance for etcd. Cool. So I'm going to copy this command, and let's see if I can find a host that has fio on it. We'll go to the helper node and see what happens. Oh, helper node — this must be running CentOS, because it didn't come up with the normal subscription-manager stuff. Yeah — if only there was a fast way to check that. So this is running on the same storage, just from a different host — same gigabit connection and all that other stuff — so it should give us an idea of the expected performance. And I need to create a directory called test-data. And if you read through that IBM article, it specifically calls out that they tuned this: a size of 22 megabytes, a block size of 2,300 — or I think that's in bytes, I don't know. But they specifically tuned these parameters for etcd, not for "what's the lowest latency" or "the maximum throughput" or all these other things associated with your storage. What we're shooting for here is a minimum of 50 IOPS and a 99th-percentile latency of less than 10 milliseconds. So this will run for a minute or two. So — coming from a storage background, 10 milliseconds is both a little and a lot, right? All-flash storage — SSDs, for example — should be consistently under a millisecond, whereas hybrid storage — hard drives fronted by flash — is usually less than 10, depending on cache. And hard drives are typically in the 20-ish-millisecond range. But all of that is completely pointless depending on your storage system and how it does caching and how it does RAID and how it does 99,000 other things.
So you can't just say, well, I'm using all hard drives, so therefore it's going to be less than 20 milliseconds, because it might be more, it might be less, any number of things. So, you see that didn't take long. What we're looking for is underneath here: you see my 99th percentile is 8,717. These are in microseconds, so divide by a thousand and we get 8.7 milliseconds. That's actually higher than it normally is for this system. It could be because I'm running from a different node, one that's already got other stuff going on. It could be any number of things. So, after that slight tangent. No, that's helpful information, I didn't know you could do that. A relevant tangent, I guess. Yeah. So we'll get rid of all that stuff. So my RHV storage is on the borderline, but it seems to be good enough. 230 gigabytes or more for storage; again, that's going to depend on how big you make those drives. And we must have access to an internet connection. This is pretty normal. Makes sense. Yeah, it's IPI. We do. Do we have offline installs for this available yet or not? I think so. I personally have not tested it. I haven't either, obviously, so that's why I'm asking. Yeah. So we typically try to; it might not be at first. Yeah, I don't remember if IPI supports offline. I never do offline installations, it's just not my thing. So, yeah. The last one that I have highlighted here feels obvious, but at the same time I actually encountered this initially: whatever network the masters, so the control-plane VMs, are being deployed to needs to have access to the RHV Manager API endpoint. The cloud-provider code inside of OpenShift needs to be able to talk to RHV to be able to manage those virtual machines. So if it can't talk to them, bad things happen, and by bad things I mean it just doesn't work.
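The etcd disk check walked through above can be sketched like this. The fio job parameters are the ones from the linked IBM article (synchronous writes with an fdatasync after each one); the directory name mirrors the stream, and the 8,717 µs value plugged into the conversion is the one read off the output on stream.

```shell
# fio job tuned for etcd's write pattern (per the linked IBM article):
# small synchronous writes, fdatasync after every write.
mkdir -p test-data
if command -v fio >/dev/null; then
  fio --rw=write --ioengine=sync --fdatasync=1 \
      --directory=test-data --size=22m --bs=2300 --name=etcd-perf
fi

# fio reports fsync percentiles in microseconds; etcd wants the 99th
# percentile under 10 ms. Using the 8,717 us value seen on stream:
p99_usec=8717
awk -v us="$p99_usec" 'BEGIN {
  ms = us / 1000
  printf "p99 fsync: %.1f ms -> %s\n", ms, (ms < 10 ? "OK for etcd" : "too slow")
}'
```

The target from the article: at least 50 IOPS and a sub-10 ms 99th-percentile fsync latency.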
So, if you can, and I don't see it linked here, the quick start guide, which is linked from the blog post on openshift.com. Yeah. See if I can do RHV and OpenShift on that site, see if it comes up. Probably not. I've never had great luck with DuckDuckGo and technical searches. Anyways, that worked out great. You even put in your own name. Google did even worse. Well, you've got the site-colon-blog thingy there, I don't know if that helps it. No, oh, no, it's not blog.openshift.com anymore. It moved to openshift.com/blog. Maybe that's the problem. Hey, there we go. There you go, you've got to use the right one. Ironically, it's easier for me to remember this and then click the link for the quick start guide than it is to just remember where the quick start guide is. You know, there's this thing called bookmarks. I know, you and your technology. I know, my fancy technology. Anyways, the quick start guide here has a handy-dandy command that you can run. It's a curl command somewhere down in here. Why am I scrolling when I can search? So, you can see there's a curl command that will walk you through testing whether you can access the API endpoint for RHV Manager. That should obviously be executed from the same network as you intend to deploy the control plane on. Yes. So I'm not going to test it, because I know that mine works. Oh. High confidence. I'm joking. Yeah, every time I say that, I'm probably going to get bit in the butt now, right? Well, you know, it's live, so if you haven't made your sacrifices to the demo gods, now's a good time. I did only have three cups of coffee this morning, and that was five hours ago. Whoa. Okay. I just finished my, like, ninth cup of coffee. I make my coffee weak, but yeah. Yeah. So when I used to work in an office and we had coffee available all day, I would drink, yeah, like eight, nine cups of coffee a day.
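A hedged sketch of that reachability check, assuming a hostname (the stream doesn't show the real one). Run it from the network the control-plane VMs will live on; a reachable Manager typically answers with an HTTP status rather than timing out.

```shell
RHVM=rhvm.lab.example.com   # example hostname, not the one on stream
API_URL="https://${RHVM}/ovirt-engine/api"
echo "checking ${API_URL}"
# -k because the Manager usually presents a self-signed CA; a reachable
# endpoint answers quickly (e.g. 401, auth required) instead of hanging.
curl -sk --connect-timeout 3 "${API_URL}" -o /dev/null -w 'HTTP %{http_code}\n' || true
```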
And then, you know, you start getting the caffeine jitters and the caffeine sweats. Yeah. I had to consciously back off. No, actually, I weigh my coffee now to keep the caffeine under control. So yeah, I actually drink three liters of very light coffee. I'm sure Eric will have some rationale about how the amount of coffee and the amount of water can affect the flavor, because he knows more about coffee. He's probably forgotten more about coffee than I've ever known in my life. Oh, well, okay, I'll have to bring that up with him next time I talk to him: hey, you know, I do a pour-over method, how would you do that? He is an aficionado, without a doubt. There's always one on everybody's team, I feel like. So, I skipped over the part that was just re-verifying all of the requirements, so you can see what you have. Yeah. Oh, look, there's the curl command, it's actually in here too. Oh, cool. So, preparing the network environment. Again, I've already done all of this because I just reuse the same thing over and over again, but we can take a look at it. So I'm over on the helper node, and we can see the DNS directory. Yep. One of these zone files here. So here's my reverse zone, which we can see has nothing in it, which is perfectly fine. And if we look at our forwards. So I'm going to be using, where are they? Oh, I know why, because I'm looking at the wrong zones. That'll do it. Yeah. Every time. Because it actually needs its own subdomain. So anyways, here's one of my other domains, CNV. We can see we have all of our entries in here. And actually I don't even need that one. This is the one I'm looking for. I'll get there eventually, I swear. Well, you know, it took us a while to get sound going, so why not DNS too? If only Christian were online to help us with this DNS problem, right? He is.
I see him in chat. I know he's in chat, that's why I said it. So now that I've eventually got to the right zone, you can see I have api, which is my .209, and then I have my wildcard for apps, which is at .211. The third IP address, which will be .210, we don't need a DNS entry for. And additionally, if you saw in the reverse zone, we have our pointer record in there, and we don't need one for our wildcard. So we can test this. Dig +short. Yeah. So we can see, if we do our dig command for a test hostname in the wildcard domain against the apps entry, it comes back with the right address. And you can do the same thing for our api, and it comes up correctly. Cool. And we'll test our reverse on this one, and it comes up. So theoretically we should be good. And if we really want to, we can test it from over here too. Increase the size of this one. And it comes up correctly. So we've verified our DNS is set up correctly. I'm not going to do the ARP thing, because I know my IPs are not in use. We tested DNS, so I'm going to skip doing this for the moment. So what we're talking about here is setting up the CA certificate for RHV. And I say that because, at least on my laptop here, I've actually already done this. Yeah. So you won't get the... yeah. So what this is: if we go here, it's just walking through the command-line version of doing this. But if you browse to the main window for Red Hat Virtualization, you'll have this CA certificate that you can pull and then trust. And you can see it's already installed as a certificate authority, so I don't actually need to do that again. Yeah. And you can see it's just walking through adding that as a trusted certificate. I don't need to generate an SSH private key; I've got that one covered. Obviously. Yeah. So, pulling the installation program. That's the one that's over here. I'm already authenticated in here. So if we come to the snazzy cluster manager.
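The DNS verification above can be sketched like this; domain and addresses are examples matching the layout described on stream (API VIP ending .209, wildcard ingress VIP .211, internal DNS VIP .210 needing no record).

```shell
DOMAIN=rhv.lab               # <cluster>.<basedomain>; names assumed
API_VIP=192.168.1.209        # example address matching the .209 seen on stream
if command -v dig >/dev/null; then
  dig +short "api.${DOMAIN}"         # should return the API VIP (.209)
  dig +short "test.apps.${DOMAIN}"   # any name under the wildcard -> ingress VIP (.211)
  dig +short -x "${API_VIP}"         # reverse lookup -> api.<cluster>.<basedomain>.
fi
```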
Now all I'm doing is looking for Red Hat Virtualization, and you can see it offers me to download the installer. I did this yesterday, I'm not going to do it again. You can select which operating system you want, blah, blah, blah. Good news is you only need it once. And then I do need the pull secret. So there is, I don't know how much it lags, but brew does have the OpenShift tools on there. Yeah, you can do a brew install. I want to say that it's part of the build process that they push those out, but I do not know for certain. Well, I would update it now, but it would take far too long, and my laptop would try to take off, thanks to throttling. Question in the chat: what version of RHV is this on? It is 4.3.9. Sorry. And 4.3.7 or 4.3.8, it's in the docs somewhere, is the minimum version that works. Got it. So at this point, we're ready to create our install-config.yaml. So let's come over here. Yeah. So, standard openshift-install invocation; you see I'm just passing --dir=rhv, our cluster name, in there. So it's going to ask the normal set of questions: which public key do I want to use? I'm installing to ovirt. Note that oVirt is the upstream for Red Hat Virtualization, so we're using ovirt. It will only ask me this information once. After it asks the first time, it creates that file that I deleted at the very beginning, which is ~/.ovirt/ovirt-config.yaml. And we can see, if we look at the help, it hopefully gives us the template to use here. So, is it trusted locally? Yes, it is. Certificate bundle. So this is, if we come back here and we download our certificate and then copy it, this is what it's asking for. The rationale here is that even though my laptop trusts the certificate, once we deploy OpenShift and the cloud provider, right?
So basically the pods that are interacting with RHV are up and running, and they don't trust the same certificates as my desktop, right? So we have to provide that bundle and then tell them to trust it, which the installer does automatically, but we have to provide the certificate to it. So that's exactly what we're going to do here. Let's provide that. What just happened here? Did it do some of it and then stop? I don't know. Ctrl-W. Do anything? macOS, you broke. Did it fail you? Wow, I've never seen that happen before. You just pasted it, right? Like, nothing fancy. Yeah. All right. Thanks, macOS. That's weird. We'll go with this one, then. So yeah, the odd paste problem. Never seen that one before. Good stuff. Let's try this again and see what happens. There we go. Yay, the paste works there. One thing to note: if you saw, it says two blank lines to end, so you have to hit return three times at the end. So, oVirt engine username. I am super secure and sophisticated: it is admin. Good. If you don't tell us your password, we'll only get half of it right. All right. Oh. Certificate signed by unknown authority. But I just gave it to you! Fine, we won't trust the certificate then. Geez. I know. Well, wait, this is running on your laptop, though. That is weird. So this is basically saying ignore the error when it tells you it's untrusted. Yeah. So if you did this right, or if for whatever reason this thing were valid, and I don't know why it's signed by an unknown authority, then you would not get a series of prompts that say, hey, this is untrusted. Yeah. So, as I'm sitting here thinking about it: it is a self-signed certificate that I have chosen to trust. So I think what it is, is I need to pass it the signing authority's certificate rather than the certificate itself. Got it. So it's more effort than I'm going to go to right now. That makes sense.
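For the bundle the installer is asking for, oVirt/RHV publishes the Manager's CA at a well-known `pki-resource` URL; fetching that (the CA, not the leaf certificate) is what avoids the unknown-authority error above. The hostname below is an example.

```shell
RHVM=rhvm.lab.example.com    # example hostname, not the one on stream
# Well-known oVirt endpoint that serves the Manager's CA certificate:
CA_URL="https://${RHVM}/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA"
echo "fetching Manager CA from ${CA_URL}"
curl -sk --connect-timeout 3 "${CA_URL}" -o rhvm-ca.pem || true
```

The saved `rhvm-ca.pem` is what you paste at the installer's certificate-bundle prompt (and, optionally, add to your OS trust store).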
So, yes, it's not trusted, I don't care that it's not trusted, thank you for the warning that it will be insecure. Yeah. So at this point it asks for the username, it asks for the password, and it actually connects to our cluster, or our RHV Manager. And at this point it's asking what resources do you want to use. Pick a cluster, any cluster. So cluster two is our destination. We're going to use our one and only storage domain. We're going to use our lab network. So, our internal IP. This is the API virtual IP, right? Which you got from your zone file. Yep. Remember, this is the one that we want to resolve to api, so the .209. Then the DNS virtual IP; this is the one that doesn't need a DNS record. And then our ingress virtual IP, which is the wildcard apps. And then our base domain, which is simply lab. Again, super creative. And the cluster name is rhv. And my pull secret. Paste. One of these has what I need. There we go. So now, if we look inside of here, let's get rid of the... yeah, get rid of that thing. Now if we look inside of here, we have our install-config. So all I did was tell it to redact the... Secret? Yeah, the pull secret. Not that anybody would actually sit here and copy out all, like, 300 characters of that thing. It's way more than that, but yeah, good luck. Pretty straightforward. It's like every other install-config that's out there, the difference being our platform down here. Right. You can see it uses the UUIDs here, uses our network name, basically everything that we need in order to get the cluster up and running. One thing to note, as always with your networking: if you need to, change the CIDRs so they don't conflict, et cetera, et cetera. Yes. And then I think we're good to go. So, as always, I will make a copy of this, because when you run the cluster create, it consumes it. I like how we call it consuming.
It eats it, and it does not give you anything back. So I'm going to make a copy of that to have for posterity's sake. And then we'll kick it off. So one thing to note: this time I am inside of the directory, the rhv directory, so I'm not going to specify the directory. Yeah. So I am going to use --log-level debug. This is mostly so that we have something to look at for the next few minutes, instead of just staring at a screen doing absolutely nothing. But staring at screens doing nothing is what we do all day. Sure. It's like watching paint dry or grass grow. Actually, I do need my grass to start growing here soon. So, oh, you know what I didn't do? Let's go ahead and get out of that, because I stopped following the directions and just started doing it. There is, down here somewhere... admin dot internal, blah, blah, blah, deploy the cluster. So, on one of these pages... key. Yep. Yep. I think it was on the previous page. So, way down here at the bottom: they are exporting the environment variables. This is important. Who would have thought, right? So this environment variable is what's going to tell the installer which template to use when we're doing this installation. Got it. If we had not done this, the installation still would have gone through, it still would have done its thing. But what it would do is reach out and pull down the image, then create the QCOW, then create a new template, and then use that, without the customization for things like the disk size that we had added. Which is vitally important given the nature of the environment we're in. So yes, please use these variables. So we are going to go ahead and do it the correct way. And our template name, if I remember correctly, is... there you go. Copy and paste. All right.
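As a sketch of that step: the variable name below is the one I recall from the RHV IPI quick start of this era, and the template name is a stand-in for the custom template built earlier on stream; verify both against the current docs before relying on them.

```shell
# Tell openshift-install to reuse an existing RHV template (with its
# customized disk size) instead of downloading and uploading the RHCOS
# image itself. Variable name per the quick start; value is hypothetical.
export OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE="rhcos-custom-template"
```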
And just to make sure, it didn't get far enough to try to create anything over here. So now we will do it again. Remember how I said it was a good idea to create that backup of the install-config? So in the logs, someone says you could feed a backup copy to the installer to keep it from being gobbled up, just as a tip. Oh, that's an interesting tip. Yeah, that'd be helpful. Six of one, half a dozen of the other. I mean, whatever you want to do. That's one command instead of my, what, three? So yeah, it's up to you. So, where were we? So, openshift-install create cluster --log-level debug. We have our install-config in here, so now we should be good to go. Hopefully. Sweet. So we'll let it go for a minute, and then once it gets to the point of cloning virtual machines, we'll switch over. Yeah, we'll see some fun. So normally, if we were just letting it run without setting that environment variable to use the template, we would see it pause as it downloaded the image locally. It would then go through the process of uploading that image, so we'd go through the process of creating a template like we did manually, creating a virtual machine and then cloning it to a template. Instead, because we did all of that stuff for it, you can see here, down below in the logs, it's relying on Terraform to create the various virtual machines. So master one, master two, master zero, and bootstrap we have over here. Let me make this a little bit bigger as well. Thank you. So you can see here we have our master zero, one, two, and bootstrap. Keep it going. So I'll delve a little into what's going on in the boot process here. Just for sheer sake of convenience, I use noVNC to connect to the consoles of these, because it will open in the browser. And when we look at this, it'll boot and it'll tell us what our IP address is here. So .110. So now I want to come here. You're still frozen. Watch.
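The backup-before-create habit mentioned above can be sketched like this; the directory name mirrors the stream, and the file here is a stand-in so the copy step is visible on its own.

```shell
# The installer consumes install-config.yaml and gives nothing back,
# so keep a copy before kicking off the create.
mkdir -p rhv && touch rhv/install-config.yaml   # stand-in for the real file
cp rhv/install-config.yaml rhv/install-config.yaml.bak
# Then, from inside the directory as on stream (so no --dir flag):
#   openshift-install create cluster --log-level debug
```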
It's going to start. It's just going to finish pasting any moment now. I know, right? Core. Oops. I'm going to have to... oh, is this one of those things you log into all the time? Yep. So all I'm doing is SSHing in as core. Remember when we did the create install-config, it asked for the SSH key? It associates that SSH key with the core user on the hosts. And when I connect in, yes, I do, it drops me in as core. And now I can use journalctl, for which it helpfully provides the command, to sit here and watch it do its bootstrap thing. And we can see over here, we're now at the stage of waiting up to 20 minutes for the Kubernetes API to come up. Right. Awesome. So bootstrap is bootstrapping. Another thing that we can look at, and I see it's going about it all over again. So one thing that some people don't know, and I didn't know this until the last few weeks, actually: you can sudo over to root, and then you can use crictl to look at the pods, or the containers rather, that are running on the host. So RHV IPI uses the same technology as the bare-metal IPI process when it comes to the load balancer and all that other stuff. You notice we didn't configure HAProxy, we didn't configure SRV records or any of that. It uses keepalived to pass around, right, to keep those IPs on the nodes they're supposed to be on, up and working, as the cluster does its thing. So if you're having issues as you're installing the cluster, you can simply sudo and then use crictl to look at the logs for these. Nice. And keepalived is that one. So you can see it's doing its thing. Or I could look at etcd, you know, if I want to look and see what etcd is up to. DNS: CoreDNS is what's being used for the mDNS responder, as a project. Yeah. So keepalived is basically maintaining the IP address that then points to the CoreDNS service; at least I think that's how it works. And etcd then looks at that to get its information.
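The watch-the-bootstrap steps above can be sketched like this; the address is an example standing in for the one read off the VM console on stream.

```shell
BOOTSTRAP=192.168.1.110   # example address read off the noVNC console
# Follow the bootstrap service logs over SSH as the core user
# (the key is the one given to `create install-config`):
ssh -o BatchMode=yes -o ConnectTimeout=3 "core@${BOOTSTRAP}" \
    'journalctl -b -f -u bootkube.service' || echo "bootstrap not reachable"
# On the node itself, inspect the static-pod containers
# (keepalived, coredns, etcd) as root:
#   sudo crictl ps
#   sudo crictl logs <container-id>
```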
So one thing that I found out, actually thanks to Christian, is that with OpenShift 4.4 the bootstrap process changed. It used to be that the bootstrap would start, then wait for the masters to come online; the masters would start etcd, and then bootstrap would connect to that etcd and do its thing, right? It would stand up all the services against that etcd and then just hand the services over to the masters. With 4.4, it uses the etcd operator, and it instantiates a single-node instance of etcd on the bootstrap itself. Cool. Then when the masters come online, it uses the operator to scale that to three, so it adds two master nodes. Then it scales down to two, removing the original bootstrap member, and then it scales back up to three, adding the last master. Brilliant. So what we're doing now, and you can see this is just scrolling by, these numbers, as it's going through and doing this: it's waiting for the masters to finish booting. What's happened at this point, and if we're fast about it we can come over here and see it, is that the masters are rebooting. Yep. Basically the masters started, they looked at bootstrap, they got their Ignition config, and now they're rebooting to basically do their thing. When the masters come back, we'll stop seeing this scroll by. Depending on how fast your network or your environment is, this can take anywhere from a few hundred tries to, well, on mine at one point I had it way overloaded and it got up to like 4,000 tries of this or something like that. But eventually it'll proceed. But yeah, it's a timeout, right? Like, what did this say, 20 minutes? Yeah, actually we should have moved beyond that point. Okay, never mind. Sorry. Found the right one of these things. So the Kubernetes API came up. Basically that's indicating that bootstrap has at least created the initial Kubernetes API. So now we're waiting for bootstrap itself to finish.
So you can see this went relatively quickly. Yeah. A testament to your home lab. Yeah, and that's despite it all being local. Yeah, and it's funny, right? Because, you know, I know there are a lot of people who say, oh, I can't do a home lab because I don't have 10-gigabit networking, or I don't have all these resources, or anything like that. Literally all of this, from a storage perspective, is running remotely over one-gigabit NFS to a single NVMe drive. Nice. Right? So yes, it is NVMe, but it's also kneecapped, because it has VDO on top of it. And that box is an old desktop; it's running an ancient Xeon. An ancient Xeon, yeah. I think the PassMark score is something like 8,000. It's five years old. So in the background here, notice it stopped scrolling all of those messages, essentially indicating that it's moved on to the next step. The masters have rebooted, and it's now trying to deploy all the various services underneath. And what we'll see is this stanza of four will start repeating itself here in a little bit, and we're waiting for all four of them to say ready, ready, ready. Yeah. And they bounce up and down; I think it's the controller manager and the API server that bounce back and forth a little between pending and ready, does-not-exist, running, not-ready. So this usually takes a few iterations as it's going through. But if we look over here: eventually, even while it's still doing this, so it isn't fully up, it will go ahead and deploy the worker nodes. Yeah, there's the first one. There you go. So it's just in the process of doing its thing. Notice that the bootstrap is now gone, or it should be in the process of going. Okay, so we're waiting for bootstrap to complete. In order to prevent yet another console window from freezing, I usually try to disconnect from it before it deletes the VM out from underneath it, because macOS for some reason holds onto those sessions forever.
It doesn't time out. Yeah. Probably something I'm doing, but... well, no, macOS networking is one of those things where it's like, are you punking me right now, or is this for real a problem? I feel like sometimes I can't tell. Like, I have, you know, a wired network that has priority over my wireless network, and I can't tell you how many times it's all of a sudden just swapped over to a different interface for no reason. Yeah. Like, it's the same network, but it's wired versus wireless, and yeah, it's annoying to me, but that's how it goes sometimes. It's funny to me that, so, I've been in IT for 20 years now, and this is the first time I've ever not used Windows for my primary desktop. Oh, really? I've just always, you know, I worked with the government, where Windows was the... actually, that's not true. I used Solaris. I used Solaris back in the early 2000s. Solaris and Windows. You know, Solaris 8, that was lots of fun. You remember the SPARCstations? I had an Ultra 60.
Nice, you know, way back when. So you notice it moved on over here; it's going through, this is bootstrap deploying all the various services, adding in all the various configs, et cetera. So you can see here: sending bootstrap-finished event, tearing down temporary bootstrap control plane. So we'll go ahead and exit out of that guy, and if we switch back over, in just a moment we'll see it register that, and then a moment later we'll see it destroy the bootstrap node. Bootstrap served its purpose; it did good, valiantly. We will ship it off, like a Viking funeral. Not quite. There was a series of Tom Clancy books I read when I was in the military, during deployments, where cyberspace became something that you actually interacted with physically, kind of thing. It's not like a Matrix thing, but, like, is that close? What's the one, Ernest Cline, they just made a movie out of it, why can I not think of the name of this, it's the one that's set in the 80s... Ready Player One. That's right. No, it's not like that, well, maybe a little bit, but yeah, like you could actually go into this environment, and you were physically attached to it somehow, basically, and it was a world within a world, essentially. And the books were based on, like, a police force; they policed this cyber world. And, you know, it's like, when something dies there, where does it go, right? Yeah. So that was always an interesting question I had in the back of my head: like, oh, they're gone, so what does that actually mean, is the person gone too? That was never clear. While you were telling me about that, the bootstrap completed. Yep, and now it's scrolling through. And a question in the chat: there's only one core on this laptop, with NVMe storage? So it's not a laptop, actually, it's a Dell something-or-other. Now there's a whole bunch of talk about Ready Player One and the movie. Yeah, so you can't treat the book
and the movie as if they have the same premise; they are very different entities, and you can't watch the movie expecting it to be like the book, or vice versa, at all. Yeah. So you can see this is a Dell Precision T1700. Yeah, I think I found a coupon for, like, 40% off or something at Dell Refurbished a few years ago, and that's where I ended up with it. I need to find one of those. So it's, let's see if it'll tell me on this, an E3-1271 v3, so nothing spectacular by any stretch of the imagination. This is literally a repurposed desktop. It does have an NVIDIA K2000, I think, in it, which Plex uses for hardware transcoding. Okay, cool. And it does have 32 gigs of RAM. But if we look at the storage side over here, we have our VDO device, and you can see it's 103 physical gigabytes on disk, and it's exposing over a terabyte, or right about a terabyte. So, you know, the whole VDO deduplication and all that other stuff, it works really, really well. And then inside of there it's just using LVM to create my logical volumes. I actually have two logical volumes: one is SHE, the self-hosted engine, and the other one is one I use for OpenShift Virtualization. I see your comment about using Cockpit, Christian. I use it for storage stuff; I find it to be easier. Networking, it doesn't always do what I want it to do, so I usually have to go in and redo it, but for the storage stuff I find it pretty easy. I find the virtual machine and the Podman stuff to be very, very handy if you have a bunch of random pods that you just have running in the background somewhere. Like, on one of the boxes behind me I have a Fedora Raspberry Pi 3, and it's just running a handful of pods for me. I don't need the full-blown Kubernetes experience sitting on a Raspberry Pi, I just need to run a few pods. Setting it up through Cockpit was easy enough, and off you go. So, while it's finishing the deployment, let me see, it's at 98%
deploying the cluster, all I'm doing is connecting into the cluster to take a look around. So all I've done here is this export command, which is literally the same export command that it'll spit out at the end, telling you to use the kubeconfig to connect in. Use the correct kubeconfig or you'll be in someone else's cluster. And... I don't know. Or not, really. There we go. So we can see our six nodes here, three workers; it's still going through its thing, only been online for 70 seconds. We can see our various CSRs. I guess I could approve that one, although it should get approved eventually, automatically. There, we'll help it out. So we've got all our nodes added; it's just going through and doing its thing, still at 98% complete. So we can do oc get clusteroperator. We're waiting on monitoring and samples. That one may or may not work. Yeah, that one doesn't come up until after all the others; it's one of the last ones, I think. And the registry won't come up, because I don't feel like looking up the command to patch the registry operator. Never mind, I found it; the registry will come up. If you save your history long enough. It's Catalina, so it's zsh; it actually keeps it, 2,788 entries back. So I'm still getting used to zsh. It's case-insensitive for tab completion, so that still throws me for a loop, because I have the habit of using double-tab as a crutch. Yes, same. So yeah, same problems, I feel you. If we check our cluster version, it's still doing its thing. It takes a few minutes, depending on who knows what. Internet, CPU, and storage, I'm sure. I always get a kick out of the times when it says 100% complete but is still waiting on things. Like, do you not understand what 100% means? 100% is a relative term, you know. 100% of what? What do you need to be 100%? Just this?
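The connect-and-poke-around steps above can be sketched like this; the kubeconfig path assumes the `rhv` asset directory used on stream, and the commands are guarded so the sketch stands on its own.

```shell
# The installer prints this export at the end of the run:
export KUBECONFIG="$PWD/rhv/auth/kubeconfig"
if command -v oc >/dev/null; then
  oc get nodes
  # Pending CSRs are normally auto-approved; to help them along:
  oc get csr -o name | xargs -r oc adm certificate approve
  oc get clusteroperators
fi
```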
Okay, cool. Full completeness is appreciated. I do like the fact that it does tell you we're waiting on some operators here. It's not just telling you it's still at 98%; it actually tells you what's going on, so you can go look. So if you notice the output of that: if we do oc get clusterversion, that's the same as what this thing is scrolling by. Ironically, once it gets to this point is usually when it takes the longest. If we switch back over to the Red Hat Virtualization Manager here, we can see all of our nodes are up and working, doing their thing. You can see, now that it's at this stage, my disk I/O and my network have settled considerably. Again, all of this is over gigabit. It's not ultra fast; latency is decent, and that's what's important. This is CentOS; I guess it predates me working at Red Hat, so CentOS makes sense. So, while we're waiting on this paint to dry, how's the weather there in Michigan? It's been up and down. We had a nice-ish Saturday, and then the bottom fell out. We haven't had snow this month, which is encouraging, but last month we had snow, like, the last week of the month. And we're only five days into the month of May, so there's still time. There's still time. It's 51 right now, high of 52, and cloudy. It's like that pre-spring niceness that we've already had, like, twice already this year. If it would just go full spring on us now, that would be great. Yeah, while we were testing audio, I don't know if you heard me rambling about the weather here: it was almost 90 degrees over the weekend. Oh my god. Wow. That's too much, too soon. My son, who is an avid outdoorsman, was not happy about it. Oh, I bet. Yeah, no. I worked in the desert for a very long time; I really appreciate air conditioning, not heat. Yeah, that's part of the reason why we moved up here, because the summers were getting brutal, and it was no fun. So now we don't have to worry about that; we just get a bit of snow every once in a while. It does get old, though, like
April late April snows that gets old alright look at those guys yeah open shift samples I'm less concerned about kube api server that's important so one trick I also learned about troubleshooting these things is you can you can find the github repository that is the source for the operator which the developers will often include a lot of information about how to troubleshoot and check on and look into and configure even what's going on with it so it's OCAADM release info dash dash commits I think is the and then if we do for our specific operator you can see this is the github repo where it came from and if you really carry the specific commit that was used for this build and now I can come over here that is that's what I get for highlighting something else thanks for helping me out macOS well you know that pastel will eventually happen sometime so now we can see our cluster cube api server operator and if we scroll down in here you can see a bunch of stuff about that particular operator so I found this to be a particularly helpful shortcut so here let's see what's going on inside of there that worked out well okay yeah so clearly here we go controller controller degraded so it's still doing whatever it does hey it finished okay good awesome alright so let's browse to our I don't need that I know what that is so I need our kubadmin password and an unused tab and I will browse to the incorrect URL to watch people like oops did the wrong thing and like when you think about it how many times it actually happens every day like oops pasted the wrong thing or oops went to the wrong site or something alright always amazes me how many mistakes that experts make just naturally and are just like oh yeah yeah well what's the the it's not done in Kruger but it's a trough of disillusionment or something like that of only beginners think that they're experts experts think that they're beginners yeah exactly so as you can see even though it's done deploying there's still a few 
things in here that seem to be settling; not all the pods are completely active. No, that's kind of normal. So if we come down here to our machines, you can see they're all provisioned as nodes, so they're all up. Yep, it's still doing its thing a little bit in the background. Here we can see our machine sets — we have our worker in here. I can dig in a little bit; I can do things like add a new one, which you can see popped up right away over here. If I switch back over, it should appear momentarily — I might have to force the refresh — there you go. So yeah: easy, straightforward, boom, does what it's supposed to do. We'll let that do its thing for a few minutes, but yeah, it's OpenShift — OpenShift on RHV, up and running. I see that — Dunning-Kruger, acknowledged. Yeah, I have no doubt. I am at best an armchair psychologist, which means that I'm the worst kind of psychologist. So yeah, how long would it take for just adding one node — 10, 15, 20 minutes tops? No, it shouldn't be that long, right? I mean, in my environment, literally all we're doing is creating a new VM based off of an image. There it is. Yeah, so it basically finished creating the VM based off the template; now it'll boot it, it will talk to the control plane — it'll talk to the Machine Config Operator after it boots — so it'll probably go through another reboot, and then it'll join the cluster, just as you would expect. So the Machine Config Operator — is it you that loves that so much, or someone else? Christian and Eric, that's their specialty. Yeah, that was like a 300-message chat trying to explore some of the nuances in there. Yeah, it's a big one, but it does a whole lot of cool stuff. A whole lot of cool stuff. Provisioned, reboot in progress — man, look at that. Yeah, that thing is plugging along, doing its thing. It's largely dependent on the virtual infrastructure underneath it, which, again, you said is meager, right? Yeah. I mean, NVMe is really nice in that, sure, it's ultra low latency, but you also saw at the beginning that because of the added layers on top, mine isn't ultra low latency. NVMe, if it were just being used as an OS disk for a RHEL machine or whatever, would probably be a couple hundred microseconds of latency at most. But with everything added on top — network latency, virtualization latency, NFS latency — it's a couple of milliseconds. So it's not great, but it's adequate. Adequate? He just stood up a whole cluster and expanded it in a matter of minutes. It doesn't take long. So — that was 14:40; if we scroll all the way back up here — let's do this the smart way, work smarter, not harder — so if we scroll back up here: 14:17. So 23 minutes. Wow, dang. I'll take that. Yeah, no, that's good time, considering everything you just did. So Chris, what are we going to do for the next hour and 13 minutes? Well, there's a wild turkey walking outside my window — distracted me, sorry. We could hear you sing. I sing? No, I don't sing at all. There's all our nodes: Ready status, role worker, all added in. Dang, that's awesome, man. So what else can you do? Now it's all just OpenShift from this point forward, right? Yeah, yeah — so you can drive your RHV through OpenShift, exactly. It's literally like any of the other IPI experiences. I have not had the time yet to explore and experiment with creating additional machine sets, etc., to see if we can do things like — can I create customized node types with a different template? So maybe I want to use RHV's GPU passthrough feature and create some GPU-enabled nodes. I haven't gone through and explored all of those things; I work under the assumption that they'll just work, because that's the way it's supposed to be. But yeah, it's just OpenShift at this point. It does what it's supposed to do, which is the beauty of it, right? Indeed. Yeah, so the power of OpenShift
here is that Machine Config Operator and just the wealth of knowledge we put into the product, right? We learn from all of our customers and pass that knowledge along — and we also pass it upstream when we can, to help upstream projects as well. What are those two busted pods? I don't know; they must not be important. Something with the console? That's strange, that's very weird. The console seems to be doing its thing — yeah, that's a normal-looking console. So I see a few comments in the chat over here: what's the difference with VMware? So once VMware IPI is a thing, it should be basically the same. Yeah — when you saw the list of places you could install to in the installer, VMware would just be another option. Yeah. So one thing to note: one important feature that did not get deployed as part of this is the dynamic storage provisioner. You can use the oVirt storage provisioner — see the ovirt-openshift-extensions project — so you can deploy the dynamic storage provisioner, and it will create the disks inside of the storage domain. That is definitely an option to add in, but otherwise, yeah. You know, with VMware UPI, the dynamic storage provisioner it creates is the quote-unquote old one — the non-CSI version. What VMware lovingly calls the Cloud Native Storage provider is the CSI-compliant one, and I don't think there are any other versions yet. I haven't tried it with 4.4, so it may, but it didn't before. I don't know what's going on here with this thing — "preempted in order to admit critical pod." All right then. Well, that's weird — I wonder what critical pod is being preempted, even though the console is still working. So if we look — CPU 64%, memory 97%. So one thing to note again about the defaults here: if we look at our master nodes — that's not what I wanted — if we look at our master nodes, we are going to end up with VMs that have 8 gigs of RAM. So again, for production we would want 16 gigs of RAM there, because otherwise we end up with things like, well, our memory is basically exhausted on our node here — or if you had an operator that's memory intensive, game over. Yeah, so you see 85%. If I had to guess, it's a memory thing that's causing those pods to have issues. So remember those variables — those things here — yeah, those are important. Very important. You can see the master memory as it's set; you'd be sad if you left that, and we wouldn't want that. So if you're deploying into production, and not into your home lab for a Twitch stream, make sure you set the environment variables to use the correct amount of memory and the correct number of CPUs. Yeah. Okay, cool, man. I'm trying to think what else we can show people that's cool. Like, could you manage any of the network pieces through OpenShift? I don't think so, right? So I guess that depends on what you mean by network pieces. It is not aware of the underlying RHV network pieces, no. So we're not going to be able to create additional VLAN networks, we're not going to be able to create additional vNIC profiles — that type of stuff. Oh, another thing to note: there is currently an outstanding bug — a BZ-slash-RFE — where, as it stands today, the network name and the vNIC profile name must match. So if you have a network with more than one vNIC profile and you don't want to use the default one, you have to create a profile whose name matches the network. Someone is asking in chat: can RHV see the containers on the nodes? It cannot, not to my knowledge — although I'm going to check, because I honestly haven't. That'd be interesting. I believe that comes down to the guest integration — the guest agents, right. But from a virtualization perspective, we should probably point out that there's Red Hat Virtualization and now there is also OpenShift
Virtualization — which, to me, doesn't muddy the waters that much, but for others it might. Do you want to talk about that a little bit: where you would use one versus the other, how they complement each other, and how you would make the determination of when to use what? Yeah. So if you didn't see the announcement, or any of the chatter that was going on during Summit — or here at the very beginning of the session, I think we talked about it a little bit — the feature formerly known as Container-native Virtualization is now OpenShift Virtualization. Aside from the rebranding, it's basically exactly the same. It is, again, a feature of OpenShift — both OKD and OCP — that enables you to deploy virtual machines to your OpenShift cluster. So at the low level, the technical level, what we're talking about is KVM virtual machines running in pods, running in containers. At the level that, generally speaking, our developers and application teams care about, it's VMs and containers running side by side inside the OpenShift cluster. And at that level, to your point earlier, yes — you can do things like manipulate the underlying network config through Multus; you can change those, you can add those as needed for your virtual machines. So the use cases here — I usually bucket it into two things, although it is certainly not limited to those two. One is: I have an application that is deployed as containers, or much of it is, but I still have some VM dependencies. I usually pick on database servers — DBAs were hard enough to convince to go virtual in the first place, never mind convincing them to try and go containerized. So now I can bring the virtual machine itself into a container, and have it deployed and managed on OpenShift and consumed just like any other container. The other one is for application components that are virtual machines but are really already being treated like containers, right? You could land this app on any machine — just give it the right class of machine and off it goes. Yeah. So — and I'll probably show my naivety and my ignorance-slash-inexperience with OpenStack — I always think of this as being kind of similar to the OpenStack experience: OpenStack is quota-enforced, API-based consumption of virtual machines, and Kubernetes is the same thing for containers. So if I'm treating that virtual machine just like a container anyway, and I want those two application components to directly integrate on the platform, I can use Container-native — excuse me, OpenShift Virtualization. Eventually, someday, I'll stop doing that. I think it was Reese the other day in our chat who was like, yeah, I finally hit the day — I now type dnf by default. Right, yeah, I saw that the other day. It's going to be hard. For me it was — remember, I came from the community side into Red Hat — so it was KubeVirt; no, now it's CNV; now it's OpenShift Virtualization. For me it's just this constantly changing thing. It's still based on KubeVirt? Yeah, KubeVirt is still the upstream. It's still the upstream thing, but we've changed the product — or project — name enough times that I know it's just OpenShift Virtualization now; it was referred to as CNV, and it's still all KubeVirt underneath. Yeah, it's ever-changing, and it's interesting to me. I don't have a cluster up where I could walk through bits and pieces — actually, I shouldn't do that, because Reese will be going through a lot of this stuff — but there are some really cool aspects of it. The containerized data importer does some cool stuff: you can point it
at a URL for a virtual disk and it'll ingest that disk — including, if it's coming in as qcow2, converting it to the format it needs. So there's lots of stuff it can do there, and I can pull in a disk image from S3, from HTTP, from any number of other things as well, all with just an annotation on a PVC. Yeah. So there's a statement here in chat: it feels weird to put VMs into pods, especially if the pods are running in VMs too — double-virt, etc. — and then, oh yeah, there are metal nodes too. So how do you think about that in this world? I can put this VM really kind of wherever I need it, wherever I want it in my infrastructure; if I have OpenShift and I have Red Hat Virtualization, I've got virtualization in two places. Yeah — so the way I usually talk about it — I see the comment that it feels weird to put VMs into pods — is: what is a container? Well, a container is kernel-level process isolation of, well, whatever that process is. Maybe that process is Python, or Java, or bash — or, in the case of KVM, QEMU. Literally all I've done here is SSH into one of my RHV nodes, and then I just did a ps -ef and grepped for qemu. This is my virtual machine — this one is one of the worker nodes. It's just a process, right? So there's nothing that says I can't take these tools — qemu-kvm and libvirt and the other things I need to instantiate and manage these VMs — put them into a container image, and then instantiate the same process inside a container and have it scheduled. It makes sense to a lot of people when you break it down like that, but still, there's going to be some kind of determination on the user side that has to say: well, we've moved some stuff into containers, maybe we should put the VMs into containers — or, we've got everything we want out of our container space, what else can we do with it? And then that's when you're like, well, wait a minute — I've got this bespoke thing that is horribly out of date, but I can't touch it because of how it's been written, and it's doing all of its fancy stuff in the background, and it's one of those special snowflake nodes where, you know, these things happen. So now I could take that node, make a disk image of it, and put it in my OpenShift cluster — or put it in RHV — and then I can worry about it on that platform, not on the aging physical hardware sitting in the corner of my data center. That, to me, is the benefit: you just take that image, put it into OpenShift, and then it's already on this modern platform where you can move it around and do as you see fit. Yes, you have to take that bespoke snowflake down to get that disk image — potentially; maybe you don't, if it's already a VM — but you know it's safe, because it's the whole disk image. You're not going to lose anything, you don't have to reconfigure anything, and you know you can now move from one platform to another. It doesn't matter if it's VMs or containers. For me it also comes down to — like everything we do — you have a choice. You don't have to use OpenShift Virtualization; you can use Multus and connect pods directly to the same networks the virtual machines are running on today. So if you're happy with your virtualization platform and you just want to deploy Kubernetes — OpenShift — on top of that, and then connect containerized application components and virtualized application components together without worrying about ingress, egress, SDN, and all that other stuff, that's possible too. It ultimately comes down to: how do you use them, how do you want to interact with them, how do you want to manage them, and which model best fits? I always like to highlight skill sets first, because people are expensive — and yes, there's a comment in the chat: sysadmins cost more
than CPU cycles — so that kind of ties in. That being said, the cost of people can be eclipsed by inefficiencies, etc. I used to talk about — and I think we, the industry, IT as a whole, are going through this learning exercise — cloud is really great, but more instances are needed to achieve the same scale and availability, which leads to sprawl, more administrator time, as well as more automation, etc. It's a balancing act. It's all about finding the right tools to match the skills your people have, or skills they can pick up easily. It's rarely about "this is the best tool to run containers" — if you're worried about that, then you have a different set of problems. If it looks stupid but it works, it's not stupid. I can't say I've heard that one, but okay — it's true, it's very true. "Move their knowledge into operators" — yeah, that's the big one. If you can take stuff you do all the time, operationalize it as an operator, and have Kubernetes do it for you whenever there's an event, that makes a lot more sense. Well, especially — yesterday afternoon, the livestream that y'all did with the Ansible-based operator. Ansible, especially for Red Hat administrators, is something that is, or should be, already very familiar, and it's super easy to bring that in and deploy it as an operator. You learn some new concepts around how Kubernetes does things, but ultimately Kubernetes seems to be pretty prevalent and not going anywhere, so that seems like a good way to expand your skill set. Yeah — we all had to learn virtualization at some point; now we have to learn containers and this new cloud-native landscape we're walking across, essentially. And I don't think there's anything wrong with up-leveling your skills. It's a great opportunity for people to take what they know, expand upon it, and really shine. There are some really interesting use cases that come out of that as well. I think our RHPDS team is exploring some of those, and I talked to a customer recently who's exploring some of those: basically creating an operator — an Ansible operator, for example — to implement custom resource definitions to then manage other aspects of your infrastructure. The example I like to use is storage. My storage maybe has a set of Ansible modules, so I can create an Ansible operator where, when I create a new LUN object in Kubernetes, it results in an Ansible module in a playbook running that then reaches out to my storage and creates that volume for me. So yeah, you can add in a lot of other aspects if you so choose — if you want to adopt Kubernetes, or the Kubernetes paradigm, as your interface. Well, and it seems, from my perspective, that the Kubernetes paradigm is definitely not going anywhere — it's expanding, and expanding rapidly. The Ansible operator is something that I truly love, because my background is ops as well, and stringing disparate systems together to do something like application deployments has always been something I've had to do. But the second I learned Ansible and latched onto it, it became this thing where I now have this in my toolbox. And then, joining the Ansible team in 2018, we were working on the Operator Framework bits and getting that out the door, and now we're seeing people actually starting to build these operators and use them in production. I think our metering operator in OpenShift is based on Ansible, which I thought was cool — there was something going on there that I needed to look at to see what they're doing specifically. But it's one of those things where, if you have a sysadmin or a DevOps engineer — or whatever you're calling them these days — and they know Ansible, they're a couple of steps away from being able to put some Ansible in Kubernetes. And that, I think, is the true power of bridging the gap: knowing there's something you can do in this environment when you learn just a little bit more. And then once
you wrap your mind around the idea of VMs versus containers versus bare metal, everything kind of clicks together, and it's cool. I've done a couple of the workshops on making Ansible operators in Kubernetes. Unfortunately, at the Summit one I didn't get to see people's faces, but at the last one we did at AnsibleFest in Atlanta, people's eyes lit up when the dots connected. You could see the expression on their faces: oh my gosh, I could actually say, network, I need more — and just add it to Kubernetes. So it's funny, because I have what I'll call a passing familiarity with Ansible — I'm familiar with it, but I never got really involved with it. Remember, I came from the Windows world, so PowerShell was always my thing. I actually wrote a book about it — it's behind me somewhere over here. You wrote a book about PowerShell? I didn't know that. I was a co-author. Good for you. So it has the same concepts — and while I never got familiar with Ansible's operational model in depth, there's still the option of doing declarative configuration with PowerShell and with others, and it's a hugely powerful tool. Ansible operators are a low barrier to entry for administrators. I have low knowledge, so that works out well for me — I have low knowledge when it comes to a lot of things, and Ansible actually helps me figure out how to manage load balancers. When you fully embrace Ansible, you kind of see the world through the lens of Ansible modules, which I think is fun: what can this module do, what can't it do, how can I interact with this thing? And if you learn it like that, you get a great operational knowledge of that device — maybe not exactly how to run the device while you're on it, but you can certainly Ansible that device into existence and out of existence if you want. That's the whole purpose of abstraction layers: do you need to know, or do you just need it to do what it's supposed to do, and trust that whoever implemented the system implemented it with best practices? It's the whole purpose of operators — we codify that knowledge, etc. And that's why it's important for your operators to be idempotent, because if they're not, you're going to have these weird issues of things failing, or things not starting when you thought they should have. And Ansible enforces that idempotency for you — if you don't write a good playbook, it won't run right. So the two pair together well — it makes a good story, and it makes a lot of people happy when they first pick it up and start kicking the tires on Ansible operators. Absolutely. Cool. So I don't have anything else — I could sit here and chat all day — but, oh, there's a question here in chat: how fast does cluster destroy happen on RHV? Is there time to do a destroy? We can absolutely show that, because it actually goes pretty quick. So what I'll do is rearrange the windows a little bit so we can see both of these at the same time. And so you can see it's going through and stopping — this takes a second to refresh; you can force-refresh it — so you can see the VMs are already down, you can see it removing VMs. Dang, this thing's plugging along. Dang, it goes fast — it plays no games. You want this thing gone?
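The teardown being demonstrated here is the same installer binary run in reverse — a sketch, assuming the install artifacts are still in the same (hypothetical) directory used for the deploy:

```shell
# Destroy everything the installer created in RHV -- worker and master VMs,
# the template, and related resources -- using the installer's saved state
openshift-install destroy cluster --dir ~/ocp-install --log-level info
```

The installer relies on the metadata it wrote into that directory at create time, which is why keeping the install dir around until the cluster is retired matters.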
There — it's done, and the resources are gone, vanished. Do we have time? Yes, we have plenty of time. Okay, so, more questions in chat — let's see. Why does the RHV icon for OpenShift nodes show as a desktop? Christian, if you remember, we had to select a desktop profile when we were first standing these up in RHV — I think that's why. Is that correct? That is correct. This is the little desktop icon; this is the little server icon. So when you select that desktop profile — why not server? Is that explained anywhere? I honestly don't know; I should ask. I assume that when we did the testing and validation, we didn't need whatever settings are associated with server, but I don't know the answer. All right, so here's a question for you: what's the advantage of virtualizing a VM in a container? "In my understanding, it would be great to set up OCP on bare metal and then use OCP virtualization — I would love to see Microsoft AD and Exchange containerized." So there are a couple of different things I'll talk about here. The advantage of virtualizing a VM in a container: aside from the management plane, it is exactly the same. It's literally the same KVM, the same QEMU, literally the same libvirt — so the VM itself, the implementation, the execution, all of those things are the same. The real question is which paradigm you want to use: Kubernetes and OpenShift, or RHV and whatever your old-school data center virtualization interface is. That's basically it, from my perspective. A lot of people — especially developers and app teams — like the Kubernetes thing: I submit objects, it's a desired-state engine, it makes everything happen. And the VMs behave as expected in OpenShift Virtualization. By default it will create the VMs using the live-migrate eviction policy, so if I cordon and drain a node, it will live-migrate those VMs; or I can set it so that it doesn't live-migrate — it just terminates the VM and then reschedules it and restarts it somewhere else. There is a video — I'll see if I can find it. I feel like one of those old people: how do you find YouTube? I go and Google "YouTube." I can never remember our YouTube channel for the Kubernetes community. I actually worked in the infrastructure working group for a little bit just to create a shortcut to the YouTube channel — they do all their DNS, all their management, all their URL shortening in Kubernetes, and I had to, just to get yt.k8s.io working, because I was tired of youtube.com/user/KubernetesCommunity. Christian already took care of this for us: openshift.tv — that actually works if you go there. There is a video I created and released April 30th — it says the date — that shows a machine using OpenShift Virtualization, and then managing it from both the Red Hat Virtualization interface and the OpenShift Virtualization interface. Cool. So again: which management interface do you want? Because at the lower level it's the same technology; you shouldn't care that it's in a container or anything like that — it just works. The second thing I'll talk about: OCP on bare metal. You can do emulation, you can do nested virtualization — so if I have a virtual OpenShift cluster and I want to deploy OpenShift Virtualization onto that, you can do that. Obviously you're going to have the normal nested-virtualization performance penalty — it's not going to be fun. That being said, if you weren't paying attention, or didn't hear when I first started the stream: this cluster right here, in my RHV environment, is OpenShift Virtualization — I'm running a nested cluster inside my own home lab in order to do testing and validation and stuff like that. It works fine with the Fedora cloud images and the like; when I start trying to boot heavier-weight operating systems — anything with a GUI — it gets painful. But for creating videos, doing demos, and stuff like that, it works. And maybe for your
application or your environment that's fine, but generally, for performance reasons, you're going to want to use bare metal. Yeah — I don't know what support's policy on nested virtualization is; support might say no, I don't know the answer to that, but it is technically possible. Well, yes — I could run OpenShift Virtualization on VMware, right? It's technically possible. I could run VMware inside OpenShift Virtualization, potentially. You turn on nested virtualization and it works fairly well — and if you're not familiar with Red Hat Virtualization, it's pretty straightforward to turn on. If I go to my host configuration here and then go to the kernel tab — normally, when it's unlocked, there's a checkbox here for turning on nested virtualization, and you can see it just adds the kernel command line. So set that checkbox, click OK, and then you have to do a reinstall — and it just reinstalls the packages and resets it; it doesn't reinstall the OS or anything. Okay, yeah, I was about to say. So then you do that; on my host it takes like five minutes for it to install packages and reboot, and then you've got nested virtualization enabled. Wow. So yeah, you can go full Inception. Yeah, and like I said, it works well enough — at least for my utilization. That being said, there's also this concept — it was shown at KubeCon in Europe, I want to say in 2018, or 2019 — Loodse, L-O-O-D-S-E, did a session with KubeVirt where they had a Kubernetes cluster running on bare metal that would spin up KubeVirt virtual machines to host nested Kubernetes clusters on demand. So essentially, if you think about it, I could create an operator that creates OpenShift or Kubernetes clusters by creating virtual machines — you could end up with this on-demand thing in that manner as well. A spiderweb of everything. Yeah, that's cool. Awesome. Well, so you said Reese was doing a livestream about OpenShift Virtualization — I believe that's Thursday, Thursday morning Eastern time. Yeah, we'll be demoing, deploying, and using OpenShift Virtualization itself. Andrew put all the RHV bits together for us and installed OCP on top of that; after that, we'll dive into the aspects of using OpenShift Virtualization on a clean cluster, basically from-the-get-go kind of deal. And his setup is what I used to create my cluster in here as well. So yeah, the expert will be coming on — not the expert, but an expert — to talk about OpenShift Virtualization, Thursday morning, 9 o'clock Eastern time. That is 6 a.m. Pacific — sorry — and we'll give you UTC, because that's how we roll in Kubernetes land: it is 1300 UTC. To be fair, he's over in Europe — in London, I believe — so it'll be 2 p.m. his time. Wales, yes. Is that GMT or UTC at that point? I don't remember. I don't know. I see somebody asking: why not LXD/LXC? I honestly don't know the answer to that — I am not smart enough to understand the differences between the various containerization technologies. I do know who we can ask, though — so reach out to me (I don't know if that person is an employee or not), Andrew.Sullivan at Red Hat, and we can find out the answer. Yeah, there you go. Cool. Andrew, any parting thoughts, words, anything else before we jump off of here? Nothing relevant. All right — well, please join us on Thursday. There's also a stream tomorrow afternoon at 1300 Eastern, 1700 UTC, about playing with Prometheus — Eric Jacobs and Josh Wood, who also wrote a book on operators, will be joining the stream to tinker with Prometheus. And then on Thursday morning, again, the OpenShift Virtualization demo. So please subscribe to the channel, stay tuned on social media, stay tuned to Twitch — there'll be more sessions like this, and we'll get sound figured out, all in due time. We really appreciate
everybody joining today. Thank you very much, and until next time: stay safe out there, folks.