Thank you. Historically, we've done a lot of work in the virtualization space. We've got a lot of experience with OpenStack, OpenShift, and now, as you said, this new, very interesting and exciting topic: OpenShift virtualization. Awesome. And for the audience out there, sorry, I did not hit the transition button fast enough, so my intro was completely skipped — but luckily Rhys's was mostly there. So just a fresh reminder: I'm Chris Short, technical marketing manager. Rhys Oxenham is joining us today — you just caught his intro — and also Andrew Sullivan, my fellow teammate. Are we wearing the same shirt today? Possibly. Is it the — no, just red. The "collaborative to the core" one? Yes. Okay, I'm the odd one out — where's my red shirt? Well, I'll send you one. Thank you. I'm just here for moral support for Rhys. He certainly doesn't need my help with anything technical; Rhys is incredibly good at this stuff, and I'm blown away by the script and everything he's created — which I hope he walks through a little today — for creating the nested OpenShift virtualization lab. Appreciate it, Andrew, but let's save the praise for when the demos work. I built this cluster about an hour ago and haven't really tested it thoroughly before we run through this — I wanted a completely fresh, clean cluster — so hopefully everything will work out just fine. If it doesn't work, we'll just say it's not your fault, it's OpenShift's fault. Or, I don't know. Yeah, there's also the fact that we haven't launched this yet. There you go — it's because it's in beta. Yeah. But this is a fresh OpenShift 4.4 cluster; OpenShift 4.4 was only made available very recently, so it's a completely fresh environment. It's actually all running on this one machine — all of these nodes are completely virtualized — just because it's much easier for me to build up demos and labs when it's running on my system here, with plenty of resources available to me. So, hopefully this is coming through okay: I've got five systems here — three masters, the sort of standard highly available configuration, and two workers. Now, for those of you who don't know what OpenShift virtualization is, you can think of it as a feature, or an extension, of OpenShift Container Platform to run virtual machines. We're delivering on the notion of a single platform to run both simultaneously: you can have sets of nodes that run containers and virtual machines side by side, all orchestrated with the same APIs, all running on the same hardware, all utilizing the same networking back ends and the same storage back ends. So you're no longer maintaining multiple silos of technology just to do virtual machines and containerization — that's what we're really trying to deliver with OpenShift virtualization. What I'm showing you today is OpenShift virtualization 2.3. We're going to go through an actual deployment of it; we're going to set down some networking and storage configuration, deploy some virtual machines, and poke around the API and what you can currently see in the UI. But just to point out: we are very much still in beta with OpenShift virtualization. We announced it last week at the Red Hat Virtual Summit, and it is available to play around with today — anyone can go and do what I'm doing today.
But it's not currently supported by Red Hat — we hope to have that very soon. Keep me honest here, Andrew. You're doing great; I'm just moral support. Just to be clear to everybody listening out there: this is not something you have to add in separately — this is going to be baked into OpenShift, right? It's in OpenShift Container Platform, it's in OpenShift Kubernetes Engine; it's going to be there for everyone to use. Correct. As you say, Chris, you don't have to deploy any additional hardware or make any drastic changes to your environment. It's opt-in: if you want to use virtual machines alongside your existing infrastructure, you simply enable the extension. Now, I just want to rewind a little. When I said no additional hardware — to run virtual machines on top of OpenShift, we really do recommend you run it on bare metal, for obvious reasons. Inside this environment I'm showing you, I'm doing nested virtualization. That works just fine in a demo or lab environment, but it's not recommended, nor will it likely be supported, for any production usage. So if you don't already have bare metal OpenShift in your environment and you want to take a look at OpenShift virtualization — or, as you'll see it named in a few places, CNV, container-native virtualization — then we'd recommend you attach bare metal machines to your cluster, or deploy a dedicated bare metal cluster for that purpose. So can I pause for a second and ask a couple of questions there? A lot of questions come up internally, both from internal folks and on behalf of customers, around mixing cluster node types. It is fully supported to have a virtual control plane and physical worker nodes; the caveat is that you can't deploy using UPI or IPI — you use the quote-unquote "bare metal" installation method. Does that hold true with OpenShift virtualization as well? It does. Just to add a bit of color around the terms you used there: with the new OpenShift installer for version 4 and above, we've really worked on what we call platform integration. The idea is that you run the OpenShift installer binary and it asks you a few questions about where you want to deploy your cluster — do you want it on Amazon, on OpenStack, on VMware, whatever it might be. You answer some questions, away it goes: it deploys all of the infrastructure to get the cluster up and running, but crucially it also ties in the underlying platform integration. So if you tell OpenShift you want new worker nodes, it connects to those platform APIs, provisions them, and away it goes. One of the big things we're currently working on is getting that same capability for bare metal as well — commonly referred to as bare metal IPI, installer-provisioned infrastructure. The idea is that if you want to do things like scaling, or provision OpenShift all the way from bare metal to a running cluster, the OpenShift installer will give you the ability to do that. The problem with this configuration, when it comes to the original question about mixed clusters, is that it's kind of assumed the entire cluster is of one type.
So if you start running on top of OpenStack, it's assumed that all of your worker nodes and all of your infrastructure will always be on OpenStack, and the same with running on VMware, or on bare metal, or whatever it might be. So what tends to happen is that if you want to break from that mold, there's a little bit of handcrafting or manual deployment of some of those bare metal machines, because realistically the OpenShift installer was set up, and is originally configured, to deploy against one particular type of infrastructure. So it is possible, and it is supported, and the caveat you mentioned, Andrew, is absolutely right: it just requires a few more manual steps to get some of those bare metal workers up and running. It's fully documented; it's just not part of the actual installer today. Yeah, and I also want to point out that UPI specifically doesn't necessarily work as-is — I'll pick on vSphere. The vSphere UPI installer deploys the dynamic storage provisioner, which isn't going to work with a bare metal node. So you have to take that into account. Yes, you can work around it with careful labels, or taints and tolerations, and so on, so that only VMs with a dynamically provisioned PV from that storage platform land on those nodes — that type of stuff — but that's an awful lot to track, and if it goes wrong it's a hassle. So from the Red Hat perspective: just always use the bare metal, no-integration installation method if you're going to mix node types. Yeah, absolutely agreed. All right, so let's go ahead and show how easy it is to get OpenShift virtualization up and running. Aside from platform integration being one of the greatest things about OpenShift 4, the next best thing is the integration work around Operators. Operators make the deployment and lifecycle management of additional tools, components, features, and value-add software much, much more powerful — you're essentially handing the knowledge of how to manage those components, and their lifecycle, directly to Kubernetes and therefore to OpenShift. So it's incredibly powerful, and we've done exactly the same thing with OpenShift virtualization. Turning this on is literally as easy as deploying a new Operator. You go into Operators, then OperatorHub, and this is a list of all the various components you can deploy as part of OpenShift, with varying methods depending on where they came from: some are provided as community open source bits, and some of course require additional licenses from the respective vendors. All I'm going to do here is search for "virtualization", and you can see we have Container-native virtualization — or OpenShift virtualization, as it will be called in the final product. You can see it has a particular version, 2.3, and it has what we call capability levels. Operators have various levels of feature support and maturity: some can literally just do a basic install; some have the ability to do rolling upgrades, so if you start on 2.3, add it to the cluster, and 2.4 comes out in a few months, the Operator you've already installed can manage its own upgrade path.
It will get you to that next version without you, as an end user or an OpenShift administrator, having to worry about how to do all of that and get everything back up and running. Then you have things like full lifecycle — scaling up and down, recovery from potential errors, fault tolerance — and beyond that, deeper metrics and insights into what's going on inside the environment. We're pretty mature with the 2.3 version of OpenShift virtualization here, and as I said, this is fully available as part of OpenShift 4.4. You can install it on older versions too, of course, but it's now fully available through the marketplace as 2.3. And just to repeat, for anyone who's joined since we started: this is not fully supported by Red Hat yet; it's still very much in the technology preview slash beta realm. So I'm going to hit Install. It asks which channel I want — 2.3 — it's going to put it in a specific namespace for me called openshift-cnv, the approval strategy I'm going to leave as Automatic, and I'm going to hit Subscribe. Now what's going to happen is it goes ahead and deploys some additional components for me — it lays down the Operator I asked for. Here you can see the install is ready, so I'll go in, and now that I've simply deployed this Operator, what I need to do is create a new instance of the HyperConverged deployment. So I hit Create Instance. It asks me some additional questions via this YAML file — bare metal platform I'm leaving as false, because I'm doing nested virtualization here — and I hit Create. Now what this is going to do is deploy all of the pods I need: all the services that provide me with the API capabilities.
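(For reference, the clicks above map to roughly the following YAML — a sketch rather than the exact manifests from this cluster; the channel and namespace come from the demo, while the subscription name and source names are assumptions based on the CNV 2.3 defaults:)

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: hco-operatorhub          # assumed name
    namespace: openshift-cnv
  spec:
    channel: "2.3"
    name: kubevirt-hyperconverged  # the CNV operator package
    source: redhat-operators
    sourceNamespace: openshift-marketplace
    installPlanApproval: Automatic
  ---
  apiVersion: hco.kubevirt.io/v1alpha1
  kind: HyperConverged
  metadata:
    name: kubevirt-hyperconverged
    namespace: openshift-cnv
  spec:
    BareMetalPlatform: false       # false here because the lab is nested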
It's probably going a little bit too fast — can I interrupt you for a second, Rhys? Yeah, of course. Go back to the installed Operators and look at the CRDs inside there. One of the new things with OpenShift 4.4 is you'll notice how much less crowded the screen is. It used to list all of the CRDs; now it only shows the important or critical ones, which is super convenient from an administrator perspective — I only have to see the things that actually matter, although I can dig in and find all of the others if I want. Yeah, absolutely. So now, all being good, you're going to see all of these pods eventually running. These are the respective services I need to then go ahead and provision virtual machines on top of my OpenShift infrastructure. You've got the node controllers; bridge markers, which enable things like bridge networking; various components around CDI, the containerized data importer, which is about importing data — if you have existing disk images you want to use for your virtual machines, it handles that; hostpath, if you want to use local storage; CNI, so you can get your networking in as well. And there's NMState, which we're going to go into in more detail — a really cool operator that lets you set your host networking configuration, through NetworkManager, directly from OpenShift. So if you want to make changes to bridges on the underlying infrastructure to support your virtual machines, you can do that through here as well. And there are lots of other bits and bobs here to help us with that. So all those pods are now looking like they're running, with just a couple likely still coming up — this kubevirt node labeler, yes, these are the last ones. These look at my nodes to make sure they're capable of running virtual machines. You'll see, for example, that the masters in my environment aren't schedulable, so virtual machines can't run there; only my workers can. So it brings those machines up, checks that /dev/kvm is present so it can actually run virtual machines, and then we should be relatively good to go. You'll see on the left-hand side that this has dynamically changed: I now have a new entry for Virtual Machines. "No virtual machines found" — and we can do a bunch of things from here. There's New with Wizard, which runs you through a set of questions about your virtual machine — we'll go into that in more detail shortly. We can Import — if you have an existing, say, VMware cluster you want to pull virtual machines from, you can do that directly through this. Or you can go straight to YAML: you're an expert in OpenShift and Kubernetes and want to paste your YAML in directly, you can go ahead and do so. But I just want to make sure this deployment has gone ahead successfully first. I think they're all running — yep, every pod is Running; nothing pending, terminating, or failed. So I'm confident my OpenShift cluster is working just fine. Now, on the terminal side, let me check I'm logged in — yep, I'm logged in here. You'll also see that we now have additional API resources and custom resource definitions we can use directly from the command line as well. I can do oc get vm — no resources found — but that alone proves it understands what that vm resource is, instead of erroring that the resource type doesn't exist. There's vm; there's also vmi, the running instance; and you can define replica sets of instances with a count, a bit like you used to be able to do in OpenStack. So now that OpenShift virtualization is deployed — that was literally three or four clicks to enable the feature through OperatorHub, and I have the ability to drive it through the CLI and the API — I need to make some very minor additional changes inside my environment to support running virtual machines, namely around networking and storage.
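(A minimal CLI spot-check along the lines Rhys runs here — a sketch, assuming the openshift-cnv namespace from the install:)

  # all of the CNV pods should settle into Running
  oc get pods -n openshift-cnv

  # the new API resources are registered even before any VMs exist
  oc get vm
  oc get vmi
  oc api-resources --api-group kubevirt.io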
The first thing I want to do is set up my networking. By default, OpenShift virtualization supports pod networking out of the box. Just like your containers, VMs will essentially sit on a masquerade-based implementation: they hide behind a NATed interface, you can create routes to them, and you can use all the standard OpenShift networking capabilities directly for your virtual machines. But in a lot of cases, that model — which works for containers — doesn't always fit for virtual machines. Sometimes you might want to enable direct network attachment of your virtual machines onto existing networks. That can be over something like a bridge, or over SR-IOV, or indeed — as we're working on within the engineering department — more of the fast data path stuff. So we need to make a small modification inside this environment to suit my needs. I want to demonstrate that I can attach a virtual machine directly to a data center network that I have — well, I say a data center network; it's a network on my physical machine — but it'll enable me to secure-shell directly to that machine without going through the OpenShift networking implementation. Yeah, and I want to point out that the pod network will work just fine, right? When the VM gets deployed, it effectively uses the IP address assigned to the pod. The challenge with that is if the VM changes hosts. Yes, absolutely. The IP will change, because all of the IPs are sort of localized to a particular host. Yeah. So if the application can tolerate an IP change, great. If not, then you want something like a traditional connection to a Layer 2 network, or — I'm struggling to remember at, you know, 9:22 AM Eastern time. Wait, you're not fully up to speed at 9 AM? No, I'm one and a half cups of coffee in. So there's macvlan; there are several different types of networking that I assume you'll touch on, at least. Yeah, absolutely. In the example I'm going to show, we're going to use just a standard Linux bridge, but you're absolutely right, there are lots of different network attachment types we can use: there's SR-IOV, there's the Open vSwitch bridge, there's macvlan — provided you've got supporting hardware, which is pretty much anything nowadays, you can absolutely use that. So all it really comes down to is defining that configuration so OpenShift knows how to attach and how you want to do it. The great thing about OpenShift 4 is that it leverages Multus out of the box, so you're not limited to just one network attachment for your container — or now, of course, your virtual machine. You can just as easily run with the standard OpenShift SDN pod network plus an additional network used directly for connectivity, or just one of each; it's completely flexible. Yeah, that's important, right? Because I can still use the pod network and define a Service inside Kubernetes to connect to that virtual machine or virtual machines, as well as provide that external data center connectivity. Exactly.
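(A sketch of what Andrew describes — exposing a VM over the pod network with a standard Service. The app: my-vm label is hypothetical; in KubeVirt you would set it in the VM's spec.template.metadata.labels so the virt-launcher pod inherits it:)

  apiVersion: v1
  kind: Service
  metadata:
    name: my-vm-ssh
  spec:
    selector:
      app: my-vm      # label carried by the VM's launcher pod
    ports:
      - name: ssh
        port: 22
        targetPort: 22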
So, since this is just a completely vanilla, out-of-the-box OpenShift 4.4 cluster, I need to set up the underlying networking interfaces on my machines — each one of my workers... Let me pause you, Rhys — there's a question: what about behind a load balancer? Yeah, so if you're behind a load balancer and you're just using standard OpenShift networking out of the box, it'll follow the same path as if it were a container; there's no difference there whatsoever. Obviously, if you're using additional network interfaces — provided by Linux bridge, or macvlan, or something else OpenShift doesn't have control over — then you'd likely use some kind of external load balancer, as you would with your existing virtualization platform, or if you're using bare metal, or what have you. Yeah — I'm thinking of Citrix: Citrix has a certified Operator to integrate with their ADCs for things like external load balancing, and that should absolutely work as expected. Yeah, absolutely right — I have no reason to believe it wouldn't. All right, so what I need to do is this: I have a file here I need to make some changes to. I know it's small right now; I'll blow it up in a second — I don't want to click on that. So let's go into my terminal. I'm going to create nmstate.yaml, and into this file I'm going to paste the following. This is a NodeNetworkConfigurationPolicy, and it uses NMState. Now, you'll have seen in this deployment that we deploy something called the nmstate-handler onto each of these machines — all five of them, the three masters and two workers; every node gets this small pod, and it handles all of the NetworkManager configuration for that machine. NMState lives in the KubeVirt community — KubeVirt being the upstream name for OpenShift virtualization — and it allows us to define what the underlying network configuration should look like. So what I'm doing here, in this NodeNetworkConfigurationPolicy — I've named it after exactly what it does: creating a bridge, adding a particular interface to it, for the workers. We use a standard node selector: I only want to make the changes on nodes that are workers, because they're the only ones that will run virtual machines, and they're also the only ones with the additional network attached. I have a desired state, and that desired state is a Linux bridge with the name br1, state up. I am not attaching an IP address to it — and this is deliberate: all I want it for is Layer 2 connectivity. If I only had one interface on this entire machine and I also wanted to use it for the rest of my OpenShift networking, then I'd obviously want an IP address on that bridge, but for this I just want to provide connectivity. I don't want spanning tree, and all I want to do is add this physical interface to the bridge. Now, enp2s0 is specific to my particular environment — yours may be very different — but enp2s0 is the interface on the Layer 2 network that I want anything attached to br1 to sit on.
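(The policy itself looks roughly like this — a sketch matching the description above; br1 and enp2s0 are from Rhys's lab, the policy name is illustrative, and the apiVersion may differ between releases:)

  apiVersion: nmstate.io/v1alpha1
  kind: NodeNetworkConfigurationPolicy
  metadata:
    name: br1-enp2s0-policy-workers
  spec:
    nodeSelector:
      node-role.kubernetes.io/worker: ""    # workers only
    desiredState:
      interfaces:
        - name: br1
          type: linux-bridge
          state: up
          ipv4:
            enabled: false                  # Layer 2 only, no IP on the bridge
          bridge:
            options:
              stp:
                enabled: false              # no spanning tree
            port:
              - name: enp2s0                # physical NIC on the target network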
So I want to pause for a second to talk about networking in OpenShift, because as far as I know, with OpenShift virtualization there are now three different ways to achieve this configuration, right? If we're just talking about basic-level bonds or interfaces, you could configure those when you install Red Hat CoreOS — using the dracut command line, the kernel parameters that get passed at install time. You could use the cluster network operator: at the cluster level — if you're still connected to your CLI, it's oc get network, I think, and if you do a -o yaml on that — in there you can define an additional CNI network definition to create these things. And then third, and specific to OpenShift virtualization, is NMState. Absolutely — so ultimately it comes down to which one you're most familiar with, which one you're most comfortable with, I think. And something that's been bubbling away in the back of my head for some time is helping either the documentation team or somebody with some common networking scenarios: maybe I've got four physical network adapters and I want to create one LACP bond, or two mode-1 bonds — walking through those kinds of common configurations. And theoretically it should just work, right? It's RHEL 8 underneath the covers, so it's going to understand default routes, and we shouldn't have to do too much extra config. Yeah, absolutely — and NMState, I think, is a great way of just expressing the desired configuration and having it go out there and set it. LACP, creating bridges — it does all of that stuff right out of the box. It's certainly how I like to do it, but you're absolutely right, there are a number of different ways of achieving that configuration. Do you know, off the top of your head, whether CoreOS supports bonds and — the name escapes me — what's the other one? There are bonds and there are teams. Teams, thank you. As far as I know, yes — there shouldn't be any reason why it wouldn't. At the end of the day it's still a RHEL 8 kernel, it's still systemd, it still has the vast majority of libraries and tools you'd want; it's just stripped down and immutable, and it being immutable, you have to make the vast majority of configuration changes through the Machine Config Operator and things like that. But I have no reason to believe it wouldn't support bonds, teams, or any other sort of link aggregation. So that's a good point — I guess that makes a fourth way to configure: the Machine Config Operator, where you're laying down config files using MCO. Yeah, indeed. The good thing about NMState is that it applies immediately, instead of laying the ifcfg files and any additional requirements directly onto the file system — because every time you make an MCO change, it needs to cycle the machines. So, a question for you: my rule of thumb has always been that if I'm going to create something using MCO, it's something that needs to be applied basically before the node is able to join the cluster. Yeah, and that's a very good way of thinking about it — whereas this applies immediately, and will likely then reapply after the machine has come up. So what you may want to consider doing — and I need to look into this, because I've never actually tested it this way — is deploying with NMState first, to make sure it all works as expected, and then setting the configuration through an MCO so it comes up that way.
Because I know NMState doesn't invoke MCO — the machines stay up; they don't reboot or anything — but whether there's a way of making it persistent, as in: once you've enacted it, set it through an MCO so it always comes up without you having to apply it every time the machine boots... let me show you what happens. So I apply the nmstate.yaml, and it creates this new node network configuration. oc get nncp — you can see the configuration progressing; change the "p" to an "e" and you get the enactments, and you'll see it's already configured. So the nncp is the policy — you can have various different policies — and then you have an enactment per node. That's the node name, and this is the particular policy. For the three masters the selector obviously didn't match, because I only want this applied to the workers; then you have "successfully configured" for the various other ones. And if I do oc get nnce on this one, for example — oops, let's get some more data — you'll see the successful configurations and the dates and times it did them. What I suspect would happen is that if I rebooted this machine, that configuration wouldn't be there — or if it is, it's ephemeral: anything I write to that file system is gone when it reboots; that's just the nature of CoreOS, right? So to make it permanent, I'd need to look into whether there's a way of making it persistent through NMState, or whether the most appropriate way, after you've validated it works, is to do it through an MCO. I don't know; I need to look into that. Yeah — I would think that if it's basic, very low-level networking that's required just to boot and get connected to the control plane, that should ideally be done at install time using kernel parameters for dracut, and as a secondary option after the fact using MCO; and then other networking — enabling additional pod networks, SR-IOV, NMState, et cetera — gets applied after the node has joined back into the control plane. Yeah, absolutely. All right, so that should be enacted. We can actually quickly check: ssh core@ocp4-worker1.example.com... and remember, direct secure shell access is not recommended — with OpenShift 4 we really recommend changes are made through MCO, and we frown upon SSHing into the box. Exactly — but all I'm doing is checking the bridge was configured properly, not making changes. So: ip link show dev br1 — what am I doing wrong — ip link... just a second... there we are. br1 is there, so that's fine. And ip link show enp2s0... has master br1. So we know it created that bridge just fine for me. Now that that's happened, what I next need to do is create a networking definition for what I want my machines to attach to. I'll close this and create net-attach.yaml, and insert this file. This is a NetworkAttachmentDefinition — I'm calling it tuning-bridge-fixed, and it's just a bridge network. The idea here is that this is a standard Kubernetes NetworkAttachmentDefinition: it essentially tells OpenShift, from a CNI perspective, what to do — and how to attach — when I specify this particular network. What I have in here is a plugin called cnv-bridge, so it knows there's a particular type of bridge attachment for CNV. Now, these are a little bit different from pods. Just remember that when we launch a virtual machine on top of any system, it is just a binary, and we have to attach a virtual NIC into that virtual machine to get networking through to it; when it's a pod, it's just a case of putting an interface into a namespace. So cnv-bridge is slightly different in that it then has to link a virtual interface inside the virtual machine directly to the namespace — but the functionality ends up being very similar. The difference here is that the bridge I'm specifying is br1, which we know exists because we created it, so it knows how to do that particular attachment.
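(Roughly what that NetworkAttachmentDefinition looks like — a sketch built from the names in the demo; the exact CNI config string, including the cnv-tuning entry implied by the "tuning" in the name, is an assumption:)

  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: tuning-bridge-fixed
  spec:
    config: '{
      "cniVersion": "0.3.1",
      "name": "tuning-bridge-fixed",
      "plugins": [
        { "type": "cnv-bridge", "bridge": "br1" },
        { "type": "cnv-tuning" }
      ]
    }'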
So I'll just save that, and I can apply the net-attach.yaml... there we go: our bridge attachment is created. So that's networking for my particular environment pretty much set up: I've configured the underlying host networking, and I've created a new network attachment for connecting virtual machines to it. And those NetworkAttachmentDefinitions are namespace-specific. In other words — and I learned this the hard way — if you create the NetworkAttachmentDefinition in one namespace, it's not accessible from other namespaces. Which I think is a good thing, because it means you can, from an administrator perspective, control which network resources your projects or your users have access to. Yeah, it's a good point to bring up, because if you want the whole cluster to have access to this network for whatever reason, you need to make sure you create it in the right namespace — or in every namespace. Absolutely. Okay, so that's storage done — sorry, that's networking done; maybe we should talk about storage now. With OpenShift virtualization there's a wide variety of storage you can integrate with. Our preferred mechanism, of course, would be OpenShift Container Storage — with OpenShift 4 that's built around the Ceph project, and it's all deployable via an Operator. Because I'm doing all of this in a sort of nested virtualization environment, I've kind of run out of memory, so I don't have OCS running in this environment — but I do have NFS. Now, NFS is a really quick and dirty way of setting up shared storage, and I do have an NFS server running inside this environment, so I was just going to use that directly. So I need to set up — because I don't think I set this up out of the box — storage classes. No, I don't have any storage classes. So I'm going to create a storage class, and I'm just going to paste my YAML in here, because that's easier for me to do: an NFS storage class — copy that, paste that in. Yeah, so this is all standard storage class stuff: the metadata name is nfs, and this is a no-provisioner class — as in, I cannot set this to be a dynamic provisioner, because plain NFS doesn't do any dynamic provisioning. That's one of the biggest drawbacks of using NFS this way: you have to have all of the various PVs already pre-created. If you're utilizing something like OpenShift Container Storage, you just set it up with the Operator, you spec it, it generates the storage class for you, and everything is dynamic — you don't have to worry about creating volumes, creating partitions, and doing all of the manual PV creation; it's all automatic for you. But NFS is cheap and easy for my requirements.
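(The storage class in question, roughly — kubernetes.io/no-provisioner is the standard way to declare a class with no dynamic provisioning:)

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: nfs
  provisioner: kubernetes.io/no-provisioner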
Yeah — while you're doing that, I'll touch on, and be fair to, our storage partners: some of them can do dynamic NFS provisioning — I'm thinking of NetApp specifically — and there are also a number of storage partners with certified Operators to consume local storage, so you can take advantage of whatever resources are attached to your hosts, if you so choose, to create a dynamically provisioned shared storage pool. And last but not least, you mentioned the hostpath provisioner earlier. The hostpath provisioner is exactly as it implies: you provide it with a path to a local storage device — could be an individual disk, could be a local hardware RAID device — and it creates the folders and files as necessary to serve whatever your pods are doing. Obviously the downfall there is that it's local to that node, and it doesn't move around. Exactly — yeah, that's very fair. I was just talking about NFS out of the box with Linux; you're absolutely right that some of our partners in this area that do NFS, like NetApp and various others, can absolutely do dynamic provisioning, as can many other storage integration partners, completely dynamically. All right, so I've just created a really basic NFS storage class, so I can go ahead — I need to define a new persistent volume. I'm going to get rid of this, and I have a definition to copy and paste here, and I'll again show you what it looks like. I'm calling it nfs-pv1, and it has various access modes — ReadWriteMany, which is of course going to be very important if I want to do anything like live migration, or data import. And data import is important because, if I have, for example, an existing disk image I want to use, I'm going to need multiple pods to be able to access that volume simultaneously: the import pod can attach to it, and as soon as that's done, it can be attached into the virtual machine pod — I'm going to show you that shortly. So, capacity is 40 gigabytes — that's just the maximum size of the volume I require. Its path is the nfs-pv1 export, and it's on this particular server — again, this is just an NFS server inside my environment. So I will create that. It's now Available inside my environment — but of course, it being available, there's no claim on it yet. So next I'm going to create a persistent volume claim, and this is where it starts to get a little more interesting, and more relevant to the CNV use case. I'm going to go into YAML again, because it's a little easier for me to show. This is the PVC definition I have, and this is where we can start to add some KubeVirt annotations. I'm creating this persistent volume claim called rhel8-nfs, and it carries the containerized data importer annotation. What this does is the following: as soon as I create this PV claim, it's going to look for an available PV — again, I had to create the PV beforehand, because there's no dynamic provisioning — with a size of 40, with a storage class named nfs. And as soon as it's found one, it's going to run something called the containerized data importer, which essentially fills the persistent volume with the data it finds at this particular disk image URL. Now, this is just a RHEL 8 cloud image, but it can be whatever I like — Linux, Windows, whatever I want it to be.
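(A sketch of the pair — the NFS server address, export path, and image URL are placeholders; the CDI import annotation on the PVC is the key piece:)

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nfs-pv1
  spec:
    capacity:
      storage: 40Gi
    accessModes:
      - ReadWriteMany
    storageClassName: nfs
    nfs:
      path: /nfs/pv1              # export on the NFS server
      server: nfs.example.com     # placeholder
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: rhel8-nfs
    annotations:
      # tells the containerized data importer to fill this volume
      cdi.kubevirt.io/storage.import.endpoint: "http://www.example.com/rhel8.qcow2"
  spec:
    accessModes:
      - ReadWriteMany
    storageClassName: nfs
    resources:
      requests:
        storage: 40Gi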
And so, as soon as I hit Create on this, it's going to notice it's got a persistent volume claim flagged for the containerized data importer, and it's going to pull that content in. So let me show you: straight away, Bound. Going into persistent volumes, it has this particular claim. And if I then go into my pods — let me just change this namespace... where's that gone... created, there we go: importer-rhel8-nfs. You see that this pod has automatically been started, and this is the one that's going to do the data import. If I look at the logs, you'll see it shows a percentage output as it downloads the image from that HTTP link, and then it's going to resize the volume to the size of the persistent volume. So I now have, inside my environment, a PV with the contents of the RHEL 8 disk image. If I go back into Storage, persistent volumes, you'll see this nfs-pv1 — and it's not currently in use by any owner. Nobody is actually owning it at the moment, so I'm free to use it, and it has the contents of a RHEL 8 image. So: networking is set up, storage is set up, and now I can actually show you the creation of a virtual machine inside this environment — remember, right, we started with a fresh OpenShift environment with no virtualization whatsoever. So I'm going to go into Virtual Machines. Now, I could do this via the YAML — I've got a definition here — but I just want to show you the wizard as part of this. So: Create Virtual Machine, New with Wizard. I don't have any templates — you can make templates if you want to. For the source of my machine: I can PXE-boot these machines; I can point it at a URL — and this is important: I could have skipped a little of what I just did by pointing directly at that qcow2, and it would have created the volume and attempted the import all for me; a container disk image, if you want the source to be just a container, run ephemerally; or a disk. Now, I've already created the disk and I just want to boot from it, so I'm just going to hit Disk. Operating system: well, this is a Red Hat Enterprise Linux 8.1 machine. Flavor: it's going to be small — this is nested virtualization. Workload profile — this just sets a few additional parameters inside libvirt — I'm going to say Server. This is going to be my rhel8-server-nfs; we'll go with that. So Rhys, I know you have a strong OpenStack background — how does this equate to OpenStack? Because in my head I tend to map things: if I do the source URL, that's a lot like creating a VM based off of a Glance image. Now, I fully admit that OpenStack is not my forte, so I might not be using the right terms here — but conceptually, is that right or wrong?
Yeah, you're absolutely right. So URL — you can think of that as being kind of like a Glance image: use this image URL and build a disk image based on it. Disk is a bit more like a Cinder volume: I've already got the volume there, just use it. Container is more specific to OpenShift. And PXE — well, OpenStack never really supported PXE; the easiest way of doing PXE inside OpenStack was to attach a CD-ROM with a PXE image, so it booted from that and you kind of got PXE that way. So yeah, that's the difference. And then flavor — flavor is very much an OpenStack slash public cloud term: these are presets for resource requests — CPU, memory, and various other things — and you can adjust them. I don't remember the name of the object inside OpenShift, inside Kubernetes, off the top of my head, but you can customize those, create new ones, remove them. And I think if you select the operating system as well — like if you were to choose Windows versus Linux — it customizes those additionally, right? It sets different libvirt options? It does, yeah — if I were to choose Windows, it changes some of these options as well; there are additional things we can do with Windows. And that's a really important point: this isn't just Linux on Linux — we can do Windows on top of Linux as well. Pretty much anything KVM can support will run just fine. Obviously, in the supported product we'll have a somewhat restricted list of operating systems that we support, for obvious supportability reasons — we have to provide a support SLA — but this is just KVM at the end of the day. I've been at Red Hat eleven years next month — congratulations — and in that time I've seen RHV and OpenStack and all of these various virtualization platforms, and all the enhancements we've made on the underlying stack: RHEL, KVM, libvirt, and all the work we've done there around security and networking and storage. We're able to leverage all of that with OpenShift virtualization — we're not throwing all of that away and starting from scratch. All we're doing with OpenShift virtualization is teaching Kubernetes, and of course OpenShift, how to manage those objects, how to define all of them, and how to extend Kubernetes to give you access to those resources. Yeah — and I think OpenShift virtualization is using the same KVM build that RHV does, because there are two different KVM builds, right? There's the KVM that ships with RHEL, where the only supported guest operating system from Red Hat's perspective is RHEL; and then there's the KVM used with OpenStack and RHV — excuse me — and now OpenShift virtualization, which adds more RHEL versions as well as Windows, et cetera. Yeah, absolutely. So the reason behind that — I won't go into too much detail, but Red Hat has a firm commitment to never break API and ABI compatibility across the lifecycle of RHEL. What that essentially means is that if our customers deploy their application, or their virtual machine, on top of our infrastructure on, say, RHEL 8.0, it should work as expected — and not require recertification — throughout the entire ten-year lifecycle of RHEL.
I think I've only ever heard of one time where we accidentally introduced a regression — which absolutely got fixed — such that a customer's workload or application wasn't working as expected. That is a huge commitment from an engineering perspective; we're one of the only vendors that really works to prioritize not introducing changes that would interrupt that, and that's incredibly important for the organizations out there. I think it's particularly impressive with Kubernetes, right? There are a lot of alpha APIs in Kubernetes, and alpha kind of implies it's going to change, so supporting those things is big. Sorry — great interruption. No, no, you're absolutely fine, please do; I certainly don't want to monopolize this. So yeah, the problem with that is that RHEL lasts a very long time — it has a ten-year lifecycle. Customers that adopt RHEL 8.0 today have around about nine years or so worth of lifecycle on it, and it becomes more and more challenging for us to introduce new features and new hardware enablement as the operating system ages — it's more and more code that we have to backport, and it becomes much more difficult. We realized this was increasingly difficult when it came to virtualization. So we wanted to both provide our same guarantee — keeping that stable API and ABI compatibility on RHEL for the qemu-kvm binary — and also allow ourselves, without breaking it, to be a little more aggressive with the additional features and hardware enablement we put into QEMU, KVM, libvirt, and the various additional products that are targeted as virtualization platforms: RHV, OpenStack, and now of course OpenShift virtualization. So we sort of created a bit of a fork. There are now two types of QEMU-KVM binary you can install on RHEL. One is just standard qemu-kvm, which does have limitations — I think you can only deploy four virtual machines on it, and it has limits on the amount of memory and other hardware it supports. Then you have qemu-kvm-rhev — the binary is the same across OpenShift virtualization, OpenStack, and of course RHV, which it's sort of originally named after — and that's where it has a lot more features, is a lot more powerful, and is a lot more bleeding-edge in its capabilities and the code base it runs on. Yeah — and I know the engineering team puts a lot of work into that as well. Oh yeah — as well as working with the upstream to make sure everything stays in line and works; the typical Red Hat way, we do everything open source and upstream. Yeah, exactly. All right, let's crack on with this wizard. I've filled in these details, so hit Next. Networking interfaces is the next option. By default it wants to put me on pod networking. Now, I could leave this, I could add an additional one, but what I'm going to do is delete it, because I want my machine to be directly on my bridge network. So you'll see in here: type, network definition — ah, exactly what you said was going to bite us did: I'm in the wrong namespace, because I put my network definition in the default project. Let me quickly go in there and do that again: New with Wizard, quickly do this — Disk, this is RHEL 8.1... I was recording a demo once, going through the same thing, and spent a solid half hour banging my head on the desk wondering why it wasn't showing up before I finally figured it out.
All right, there it is: tuning-bridge-fixed. The model is just virtio — if you have any other guest operating systems that don't support virtio, then of course you can use some of the more legacy models, but virtio is just fine; it's just going to be a RHEL guest. The type here is bridge — you can do SR-IOV as well, but the type of this one is bridge. MAC address I'm just going to leave blank, so it'll automatically generate one for me. So there we go: I should have just one NIC, attached directly to my br1 bridge in my implementation. Hit Next. Disks: "No disks found" — okay, I need to add a disk. The source can be blank — I can literally have a completely blank one and then go in and specify some additional things — but I want to attach an existing disk that I have: Attach Disk. It's going to ask which persistent volume claim I want to use — oh, did I put my PVC in the wrong namespace as well, didn't I. All right — so I'll see if you can multitask: what about CDs or ISOs? Can we attach ISOs? — is what I'm going to throw at this time. It is possible, yes. All right, I'm going to have to quickly do that again... where's my openshift-cnv... here — alphabet race, you know how this is... there we go. Now, I don't think I can go in here, edit this, and change the namespace, can I? Let's try — yeah, it's going to fail, isn't it: PVCs are immutable after being created. Okay, then let's try that again. Always remember the project you're in — always, always, always remember the project you're in. Yeah, location is important. It certainly is. I'm also going to have to go into here — pv1 — and delete that; that should allow me to reuse that directory. Okay, so let me get that PV back up — in the default namespace this time, which is good. I need to do my PVC now. Before I do that — my persistent volume in here shows Failed, because I deleted the claim; that's okay, I can delete that one. So let's create that persistent volume again — okay, there's my PV. And I'll make sure I'm in the default project — now I'm in the default project, I'll create that PVC. Okay, that's Bound again, and we should see that importer pod being spawned... yeah, okay, that's what I wanted to see earlier, with me being in the default project. So that's going to re-import my data; let's have a look at the log file. That's the good thing about having NVMe storage: it is amazingly quick. Yeah, it is seriously, seriously quick. All right, so that's done. In my network attachment definitions, I have it in the default project; my persistent volume claim is there, and it's Bound. Okay, we should be good to go — let's try that again; sorry folks, I'm just going to run through this quickly. Christian is complimenting your semi-transparent terminal window. Oh yeah — translucent terminal windows are a gift and a curse, right? If you have any kind of corrective lenses or something, I hear they give you problems, so you kind of have to have that nice vision — but it is very, very pretty. That's all right. Okay, so we've managed to get back to where we were: Select Persistent Volume Claim — and this is the point where we actually have a rhel8-nfs persistent volume claim, so I can hit that. The name here is fine. Interface: now, you can be specific if you want it to show up as, you know, sda or vda or what have you; I'm going to go with virtio — it's RHEL 8 again, and virtio is definitely the way forward for that.
I hit Add, and it's attached. Now, don't ever forget to set the boot source: I want the boot source to be disk-0 — remember, you could have the boot source be something else, right? It could be PXE or something like that. So I'm going to say disk-0 is my boot source, and hit Next. Here you can add some additional cloud-init configuration — we've only got a limited set of parameters in the UI today, but you can put it all in a script if you want, so you can expose all the capabilities of cloud-init should you want to. I don't need to worry about that: I've pre-customized this RHEL 8 image with my root password and so on. Virtual hardware: attach a CD-ROM — if you wanted to, you could actually attach a CD-ROM here. Review — pretty simple, really: source is a disk, it's a RHEL 8 machine, small flavor, server profile, and this is the name of the machine. And here's an option I sort of glossed over earlier: "Start virtual machine on creation". You don't have to do this — it's very much like what we have in RHV, and in some other things like OpenStack, where you don't have to start it first; you can go in and make sure everything's configured before you ever try to start the machine up. NIC-0 is my bridge network, which I called tuning-bridge-fixed, and my storage is the NFS disk we created a little bit earlier. So: Create Virtual Machine — successfully created virtual machine. Going to the virtual machine details: just like any other screen inside OpenShift, it looks exactly the same, but there are of course additional custom resource definitions being exposed here, showing some additional insights into the machine when it's up and running. We can go into the console of the VM — I'll show you that in a second — details, events (good for troubleshooting), and of course network interfaces and disks. I'm going to start the virtual machine. So Rhys, real quick, let's look at the YAML for the virtual machine, because there are a couple of things that might be important. One: you can literally copy this out and save your virtual machine definition into your source revision control system — so it's very easy to recover at least the VM definitions. And you just started the virtual machine, so if you do oc get vm and then oc get vmi — they are two different things: the VM definition versus the instantiation, the running instance, of the virtual machine. Exactly, exactly right. And I showed it through the UI because I think the wizard in the OpenShift console is pretty cool, but you can absolutely do everything I've just done through the command line as well, of course. Now, you'll see that it's still showing no IP. The main reason here is that the network definition I configured uses a bridge network purely as a Layer 2 network: the VM will just DHCP on its own, and OpenShift doesn't have any IPAM control over this particular machine.
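(For reference, a heavily trimmed VirtualMachine definition of the general shape the wizard generates — a sketch, not the exact YAML from this cluster:)

  apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachine
  metadata:
    name: rhel8-server-nfs
  spec:
    running: true                       # "start virtual machine on creation"
    template:
      spec:
        domain:
          resources:
            requests:
              memory: 2Gi               # roughly what a "small" flavor presets
          devices:
            disks:
              - name: disk-0
                bootOrder: 1            # boot from the imported disk
                disk:
                  bus: virtio
            interfaces:
              - name: nic-0
                bridge: {}              # bridge attachment, not masquerade
        networks:
          - name: nic-0
            multus:
              networkName: tuning-bridge-fixed
        volumes:
          - name: disk-0
            persistentVolumeClaim:
              claimName: rhel8-nfs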
What you might find is — if I refresh — hey, I've got an IP address now. The reason it does that is because the guest agent is installed in the VM: the guest agent is able to report the VM's IP address up through OpenShift virtualization, and then it updates inside OpenShift. So that IP address is now shown — if I go into the overview, the IP addresses, IPv4 or IPv6, are all there and good to go — and it'll also be able to show you some utilization information once it's able to gather it. Now, you can go in here and see the console — just like OpenShift... sorry, OpenStack, and RHV, you have direct access to the console, and I can get directly into this just fine. Or, just to prove that networking is working — it's .62, I want to say — I can get into this machine directly, and it can get out, without going through OpenShift networking... .62, yeah, there we go. I'm still hung up on your eight-millisecond ping to Google. How'd you do that? Well, I have gigabit fiber to my house; that might help. Is it Google Fiber, or can you throw a rock and hit their data center? Like, where are you relative to their data center? I think it's at Heathrow. Oh, okay — that's close-ish. It shows here — no, that one's going to Frankfurt, so I don't know. I'm impressed. Jealous, kidding — very jealous. Yes — we're very fortunate in Europe with our network connectivity, that's for sure. All right, so that machine is up and running. I don't think I automatically expand this disk — no, I don't. By default this qcow2 image is just a 10-gig volume, but if I were to extend that partition, the 40 gigabytes that's on that NFS share is available for me to use. Yeah — so the importer pod, at the very end, goes through the percentages as it imports the disk, and then there's a line in there about expanding the image. That's different from the operating system actually recognizing it: I assume if you do an fdisk -l it'll show the full capacity; it's just that the partition hasn't been expanded. Yeah — there you go, that's 40 gigs, right? Yeah, it's a 40-gigabyte disk. So I could go in here and — yeah, it's cloud-init: I think in your image customization, if I remember, you removed cloud-init, and I think cloud-init would automatically expand it. Right — so now if I try to grow the XFS file system it might need a reboot — yeah, needs a reboot — but it's still a 40-gigabyte disk. Now, how does all of this link together? Well, if I do oc get pods, I now have this virt-launcher pod. Now, this is important, because remember: Kubernetes, whilst it's able to understand what VM objects are and how to associate and bind it all together, still launches a pod to spawn that virtual machine. A virtual machine is just a binary, and that binary has a libvirt configuration that defines how it comes up. So what we can do is oc exec -it against that pod and get bash inside that launcher container. So I'm now inside it: if I do virsh list, there's my VM. I can do virsh dumpxml 1 — pipe it to less... we'll just look at it there — and this is the libvirt definition for that particular virtual machine. You can see it all: it is literally just a standard QEMU-KVM process, and you can see here the qemu-kvm binary that's running. Yeah — KVM VMs are just a process, and containers contain processes, so it works out really well.
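(The same poking-around as commands — a sketch; the launcher pod name is whatever oc get pods shows for your VM:)

  # find the launcher pod wrapping the VM
  oc get pods

  # shell into the launcher container
  oc exec -it virt-launcher-rhel8-server-nfs-abcde -- /bin/bash

  # inside, the VM is an ordinary libvirt domain on a qemu-kvm process
  virsh list
  virsh dumpxml 1 | less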
Exactly, yeah. We're certainly not rebuilding anything from a virtualization standpoint here, we're just teaching Kubernetes and OpenShift how to deploy virtual machines and manage them. Yeah, and I think it's important to point out that we can kind of geek out and dig into all these things and prove that yes, it's all the same, but from a user's standpoint, if I'm an application team, if I'm the virtual machine administrator, I don't really care, right? You just showed that I can use the OpenShift console if I want to, to go in and manage it just like any other virtual machine, access to the console, all that other stuff. And if I'm deploying things programmatically, it's just a YAML definition of what that virtual machine looks like. Yeah, and you could define, as a sort of overall workload, a workload that comprises virtual machines and containers, and deploy all of those resources in almost one push. Some of them just happen to be virtual machines, some of them are containers, and OpenShift will still do all of its magic on the networking front, so it's pretty cool. Yeah, I saw Christian made a comment about GitOps, because he loves GitOps. I love GitOps, everyone should love GitOps. All right, so, just to prove something here, I'm going to create one more PV and one more PVC, but this time I want to paste it in here. I'll call this Windows 2019, and I think I have, let's try this, win19, win_19. Now, you can import a qcow2, but I know that there was a bug in the previous version, and I don't know whether it's fixed, so let me just quickly make sure that we're not going to run into any problems. So let's do this, qemu-img convert: the format is qcow2, the output is raw, win19, win19.img. How long is the command? I'm just finding the syntax for qemu-img convert. I was going to be impressed that you remembered all that. I have to look it up every time, yeah, I know. Capital O, of course, classic. So then I'm just going to specify that as a raw file instead of a qcow2. I know that they were working on that, but I just can't remember whether they fixed it, off the top of my head, so we should be good to go on that. All right, we'll make that a 50 gig disk, ReadWriteMany, and that's the win19 containerized data importer claim. Should be good to go, yeah, let's try that. All right, that's bound instantly, that's good. Now we should see another pod running to import the Windows image, and this one is slightly bigger, unfortunately. ContainerCreating. So, whilst that's importing, I'm going to go a little bit off the reservation: do you have, or are you able to show, any of the live migration stuff? Yeah, I don't see why not. I'm a little short on memory, so let me show this particular VM running just to prove to everyone that we can do Windows virtual machines as well, and then I'll kill one of the VMs and live migrate one of them. There's a question in chat: are those sparse raw images, like, what kind of images are those? Give me a qemu-img info, so, I'm 99% sure that my computer is just going a little slow, this is just a raw file, and yeah, it's sparse. Okay, cool, thank you. You can always use virt-sparsify as well, if needed, on the disk before you upload it. And qcow2 does work, yeah, absolutely. You were pointing out that with the Windows qcow2 there was a bug, or a glitch, in the image that you were using.
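The PVC Reese pasted isn't legible in the recording, but a minimal sketch of a CDI import claim looks something like the following; the name, endpoint URL, and size here are illustrative, and it's the annotation that triggers the importer pod:

```bash
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: win2019
  annotations:
    # The Containerized Data Importer watches for this annotation and
    # spawns an importer pod to pull the disk image into the new volume
    cdi.kubevirt.io/storage.import.endpoint: "http://example.com/images/win19.img"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
EOF
```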
Yeah, for some reason, I think when I imported it as a qcow2 it didn't come up, I can't remember. I have a new version of that out there, if you want to redownload it; I fixed that, I think. Oh, you did? That'd be fantastic. Yeah, let me find you the link real quick, actually. All right, so I'm going to create a virtual machine now. Let me check how much memory I've got on this machine, free -m. All right, so we've already used all of my memory, and we're using 11 gigabytes of swap already, so we're doing quite well. Yeah, I mean, this is a pretty fast machine. All right, let's see what we can do then, let's create a new machine, and do this really quickly. Its disk is going to be the Windows 2019 media, this is going to be a server, win19-test. We'll just start this one; I have trust that it will work. I'll leave this on pod networking, just so you can see the pod networking and how that works. I'm just going to add my disk, and this is the attached disk, it is the Windows disk, virtio. The big disk is the disk I just selected. Again, on cloud-init, there is a Windows equivalent; well, I won't say it's supported, but you can do it, there are ways and means of achieving that. So this is cool as well: by default, with Windows, it's going to attach these virtio Windows drivers, so that if you need them, if for example your virtual machine comes up and it can't access networking or storage or something like that, you can get to them this way. Let's say you're maybe provisioning an old version of Windows that doesn't have virtio support: it will attach this as another disk, or CD-ROM, so that you can access those drivers directly from the CD-ROM interface and install drivers while Windows Setup runs. It attaches this by default; you can of course turn it off. All right, create virtual machine. The virtual machine is creating, and it's starting this machine now, assuming I don't have some out-of-memory killers getting invoked. We should be good to go, we hope. Yes. All right, we have an IP address straight away, of course, because remember, this is pod networking. Consoles, oh, there we go, hey, look at that, it's booting. So far so good. But let's realize the enormity of what we're doing here, right? Like, we are inside the OpenShift cluster console with a running Windows box. Yeah, absolutely, right, there it is. We could log into it as admin, and so forth, so on. Uh-oh. Maybe we can log into it. Is this the one you got from me, or somebody else? Somebody else. There we are, okay, fourth time, or first time lucky; I was typing it incorrectly. But yeah, there you go, right? And this is just, it's really smooth, it's easy to use when you're directly in the console, it's not slow or anything, it's absolutely fine. Yeah, despite all the swapping and everything else going on. All right, a fully working VM, networking, and it's still eight milliseconds, yeah, nested virtualization. So, I mean, this is just using the standard pod networking now, right? If I go into this virtual machine here, you'll see that it has just a standard pod network. I'm not going to be able to get access to it from here, because it's on the pod network, but if I was to go to one of the machines that should have access to it, 10.0.2.2, no, it's on worker 2, isn't it? It might be a firewall not allowing it, but either way, you can access it. Oh yeah, the Windows firewall is probably engaged.
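Reese drove this through the UI, but the object it creates is roughly the following sketch; the names, memory size, and the drivers image path are assumptions on my part, and the apiVersion reflects the KubeVirt API of this era, so check yours:

```bash
cat <<'EOF' | oc apply -f -
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: win19-test
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: win19-test
    spec:
      domain:
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: virtio-drivers
              cdrom:
                bus: sata
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: win2019
        - name: virtio-drivers
          containerDisk:
            # the virtio-win drivers CD the UI attaches by default;
            # this image path is an assumption, check your cluster
            image: registry.redhat.io/container-native-virtualization/virtio-win
EOF
```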
Yeah. You know, that's just on a pod networking interface, so anything you want to do, exposing that via a route or a service or a load balancer or whatever, that you've been doing through your normal OpenShift day-to-day activities, you can absolutely do with this. Set the port you want to use, wherever it's listening, whether it's a database or a web server or whatever it might be; it's kind of irrelevant that it's a virtual machine, and you can administer it in any way that you want, right? It can just be a normal Windows VM inside of your inventory, or a Linux VM you've managed with Satellite or something like that. It's kind of irrelevant how you do it, or if you continue to use the same sort of tooling. Someone's requesting that you install WSL inside that Windows box. Yeah. And there's also a question, "so why is that special?", referring to accessing Windows in OpenShift, on Linux, and I think you kind of just addressed that: it's a virtualization environment in OpenShift that is just as capable of doing all of the things that you would expect from any other virtualization environment, this one just happens to be Kubernetes based. Right, so you have your Kubernetes environment and your VM environment living in the same place, and that really lowers the overhead and all the operational complexity of having disparate systems spread out throughout your data center. Now it's just OpenShift. Yeah, you can scale OpenShift and you can manage OpenShift, and you don't necessarily have to worry about this virtualization platform and this container platform and this hardware platform; it's the hardware and OpenShift, and off you go. Yeah, Christian's making a comment about OpenShift on OpenShift. Yes, you could technically deploy CoreOS virtual machines into OpenShift Virtualization, and either deploy distinct OpenShift clusters or, if you really wanted to, definitely not supported, you could create worker nodes on your worker nodes. Sure, that would be a bit Inception-like, but it's technically possible, you know, just to prove a point. Yeah, you absolutely could; whether it would make sense to do so, I'm not so sure. Well, I could see it being a potential thing where it's like, hey, everybody gets their own VM, because they can just tear it down and spin it back up, but then everybody wants to test inside that VM too. Maybe that's a potential case, right? There's all kinds of ways that you can slice and dice your compute, and your security standards and operational standards, to do whatever you need to do. And if people want a real machine, or they just need a Windows desktop for some reason, you could just spin one up for them: here's a Windows desktop. You can get licensed, unlicensed, or temporarily licensed versions of Windows for testing purposes all day long, so this is a fantastic example of how you could take any kind of work environment and say, oh, you need a Windows box for testing, here you go. And you can add that to your CI as well, right? You can spin up this Windows box, run your tests on it, and spin it back down. As Christian points out in chat, you could spin up Windows to test the Edge browser for your app. There you go, that's a fantastic example. Yeah, precisely.
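On the exposing point from a moment ago: because the VM ultimately lives in a pod, a plain Kubernetes Service can front it. A minimal sketch, keying off the label from the VM sketch above (port and names are illustrative; `virtctl expose` can generate something similar for you):

```bash
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: win19-rdp
spec:
  selector:
    # the VM template's labels are carried by the virt-launcher pod
    kubevirt.io/domain: win19-test
  ports:
    - name: rdp
      port: 3389
      targetPort: 3389
EOF
```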
And also, if you're just using pod networking, then obviously every pod can contact every other pod on the cluster, so you don't have to worry about, well, I need to get my VM that's running wherever connecting through; it's just there already. All right, so I'm going to delete this Windows virtual machine, because I think we've proven that that works, and there we go, it's gone. So let's look at doing some live migration, shall we, just to prove that it does work. Remember, you have to be doing this on something that has shared storage, for obvious reasons. Yeah, specifically an RWX PV, correct? Right, correct, it has to be a ReadWriteMany PVC. Now, there is some work ongoing to do live block copy, as in, not using shared storage. You won't do a classic live migration; it'll do the copying of the bits, and eventually, when it gets to a point where it's transferred all of the bits and it can do an immediate switch-over, you're good to go. Obviously, if the rate of data change is higher than your network bandwidth or whatever, it'll never migrate; there's no way we can stop that easily. But yeah, this is just on shared storage. So there's an object you can create, where is it, the most simple object here you can create, which looks like this. VirtualMachineInstanceMigration is a custom resource type, so it's a migration job; the name can be whatever you want it to be, and you specify the name of the VMI that you want to move, rhel8-server-nfs. Or you can just go in here and do it through the UI. So, Reese, if you create that object, the one that you just defined, basically that just requests rescheduling through the Kubernetes scheduler? Yes, but it will ensure that it doesn't land on the same machine. Okay, but you're basically not saying "go to this host", you're just saying "go to any other host". It'll reschedule to any other eligible host, of course, right, but it won't turn this one off and start it on another one; it will do a proper live migration. Yeah. So, I mean, we can do it through this if you want. No, I'm just highlighting that you don't have to say go from host A to host D, you can say just leave host A. Yes, oh yes, absolutely, and there is a way you can expand on the definition and specify the destination host.
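For the record, the migration object Reese showed is about as small as custom resources get; something like this, with the VMI name matching the demo machine:

```bash
cat <<'EOF' | oc apply -f -
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-rhel8-server-nfs
spec:
  # the running VMI to move; the scheduler picks any other eligible node
  vmiName: rhel8-server-nfs
EOF
```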
So I'm just going to keep pinging it so we can see that it works. Let's say migrate, yeah, migrate. Yes, it's migrating; you see in the background it's on worker 2, and if everything's set up nicely and everything works, if you hover over that, now it's worker 1, and you'll see there was just the one five-millisecond blip. So that's now running on worker 1, and we can just verify: oc debug node/ocp4-worker1, dot, something, example.com. I love oc debug, by the way; anywhere that you've got a route to the, sorry, to the OpenShift API, you're good to go. There you go, there's my VM, rhel8-server-nfs, on worker 1, so we know that it migrated just fine, and that machine shouldn't have even noticed that it was migrated: still up, same IP address, good to go. So live migration works. You can also do node maintenance, so if you want to take down a machine, you want to drain it of all of its pods, the critical thing here is you do it through Kubernetes, so that it doesn't just terminate the pods like it typically would when you drain a node; it'll actually live migrate the virtual machines first. And that's in the VM definition, right? I think that's the migration strategy, or something like that? Yes, you absolutely can do that, but you can also just specify the maintenance, so I'll quickly show you this, just copy that, so there's this here. There's a question about live migration with SR-IOV devices and whether or not that works. I don't believe we have that capability today, because we've only just got those sorts of capabilities with OpenStack. So, as with everything, the base underlying infrastructure will support that type of capability, but it's about ensuring that we can also do it through OpenShift Virtualization. So I don't know the answer to that question, but we can absolutely find out. Okay, so real quick: node maintenance, which I see in the API is a KubeVirt CRD, how is node maintenance different than, for example, cordoning a node? As I understand it, this is going to ensure that the migration happens first. Okay, well, we can kind of verify. Let me just check the definition of this, what have we got here, running, true, I don't know whether it has any affinity set, or, yeah, so let's try it, let's see what happens. So it's running on worker 1, and I've got a node maintenance file here, which is just worker1-maintenance; the node name is this particular one, worker 1. So, oc get nodes, I have those; oc apply -f this file, and then in a few seconds, yeah, scheduling disabled on that worker. There we go, it'll migrate that workload directly back onto, if you hover over that, migrating. Does it give a status, or, click on it maybe? It might happen a bit too quickly. I was wondering that earlier with the running virtual machines. Well, it's running on worker 2 now, so it already happened, basically. Yeah, even though they're virtual machines, this is, well, I spoiled myself with a new machine relatively recently, and so it's pretty quick. You deserve it, Reese. Well, my previous machine I had for about eight years as my daily driver. Yeah, exactly, and so I build them to last, and this one is particularly nice. What are you doing? ps aux, and grep for qemu-kvm; you see, there's the binary, and this is now on node 2, so we know that it moved. And real quick, to highlight: when you do a node debug, you're logging in more or less as root on the host, and as root on the host you can see and access all of the processes on that host, even though this qemu-kvm process is running inside of a container. Correct, yeah. I could have gone into the container, but I just wanted to prove that I'm on that particular host and the binary is running there, because I could connect into the pod, but the pod doesn't necessarily show me which host it's actually on. Yeah. So yeah, that's node maintenance, and live migration.
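The maintenance object from that demo, reconstructed as best I can; note the NodeMaintenance API group and version have moved between releases, so treat this as a sketch and check `oc api-resources` on your own cluster:

```bash
cat <<'EOF' | oc apply -f -
apiVersion: kubevirt.io/v1alpha1
kind: NodeMaintenance
metadata:
  name: worker1-maintenance
spec:
  nodeName: ocp4-worker1
  # cordons the node, live migrates the VMs off, then drains the rest
  reason: "demo maintenance"
EOF
```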
What does everyone else want to see? We can start doing some hostpath stuff, if that's of interest. I would be interested. Yeah, if Andrew's interested, I am. All right, no worries. So, I can get rid of this machine now, I think; we can always build a new one. Okay, so we need to do hostpath, and the first thing we need, I'll blow this up in a second, is mco.yaml. Oops, so, this is a little bit bigger. This is a MachineConfig, and the reason why you have to do a MachineConfig is because we need to create directories on the underlying host, because it's using local storage. With hostpath, we are literally using a path on the underlying file system of our worker nodes to store those disk images, so we're no longer using NFS or anything like that. Now, you can do data migration: if you've already got a hostpath volume and you want to move between it and NFS, back and forth, you can absolutely do that, and you can absolutely move between various other non-local storage should you want to. So we're going to apply a MachineConfig to these machines, and we're just going to add a new systemd unit file. This is going to do two things: the first thing it's going to do is make a new directory on the root file system called /var/hpvolumes, and then it's going to relabel it so that we don't have any SELinux issues, and it's set to run on system boot. So, oc apply -f mco.yaml, and my machines are now going to reboot.
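Reese's mco.yaml isn't fully legible in the recording, but based on his description it's along these lines; the unit name and exact contents are a sketch, the directory and the SELinux relabel are as described:

```bash
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 50-hostpath-provisioner
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 2.2.0
    systemd:
      units:
        - name: hostpath-provisioner.service
          enabled: true
          contents: |
            [Unit]
            Description=Create and relabel the hostpath provisioner directory
            Before=kubelet.service

            [Service]
            Type=oneshot
            # make the directory on the root file system, then fix the
            # SELinux context so containers are allowed to use it
            ExecStart=/usr/bin/mkdir -p /var/hpvolumes
            ExecStart=/usr/bin/chcon -Rt container_file_t /var/hpvolumes

            [Install]
            WantedBy=multi-user.target
EOF
```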
Let me enable that node again so it's schedulable, and let me go into virt-manager, ocp4-worker1 and ocp4-worker2, and we'll just watch those two for a minute. I know they're small, but I just want to see them rebooting as the MachineConfig is applied. It might take a minute. I'm assuming you could go into the machine config section of the administrator console and see it applying? So yeah, there's your file, and if you look at machine config pools it'll show that it's, if you were to look at Updating, True, yeah, there we are. So it is applying those, and we should see the reboots relatively quickly. All right, whilst that's doing that, let's have a look at some of the other files that we're going to apply pretty quickly in a minute. So the first thing we do is apply the MachineConfig; once we've applied that, we need to apply the configuration of the hostpath provisioner, which is just the resource definition for it, and I'll show you that file, hpcr.yaml. So this is basically going to create the hostpath provisioner: there's the kind, HostPathProvisioner, and the path config, so we're telling it to use /var/hpvolumes. I might have lost that because, yeah, there we go, it's because our nodes have been rebooted, and I have a very small cluster and my routers are running on my workers. Out of hardware, or indeed out of memory, one of the two. Yeah, I think that's the pod disruption budget: because it's a small cluster, it doesn't exceed the disruption budget to reboot both nodes at the same time, so it'll just go ahead and do those. I think that's okay, pending is okay; grep -v -i running, yeah, so we have a bunch of pods that are pending, waiting to come back up. Yeah, that's just OpenShift and Kubernetes recovering services after a node reboot, pretty standard. Pretty aggressive node reboots, where I've got a very, very small cluster and I have no regard for services staying up. It's like in the early days of virtualization, when network admins hadn't quite caught on that IP storage means you can't take the network down: we'll just reboot the router, it's just a couple of dropped packets, the app will be fine, big deal. Yeah, there's a lot of pending pods at the moment. Oh yeah, we were looking at some of those files. So then I'm going to apply that custom resource definition, and then I need to create the actual storage class for hostpath. This is the storage class, and the big difference here is it actually has a dynamic provisioner, right? So we can actually say there's a provisioner type called hostpath-provisioner in the new storage class, and we'll create that. And then, when we're ready, we can just create a PVC, so I'll just create all these files so we're ready to go; again, we can of course do all of this via the UI as well. And all I'm going to do is create another VM based on RHEL 8, we're going to call it rhel8-hostpath, and we're just going to run the same containerized data importer to import that RHEL 8 image directly for us, but of course we specify the storage class name as hostpath-provisioner as well.
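Those other files, again roughly reconstructed from what's described: the HostPathProvisioner custom resource pointing at /var/hpvolumes, the storage class that uses it, and a CDI-importing PVC for the RHEL 8 image (the endpoint URL and sizes are illustrative):

```bash
cat <<'EOF' | oc apply -f -
apiVersion: hostpathprovisioner.kubevirt.io/v1alpha1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    # must match the directory the MachineConfig created and relabeled
    path: /var/hpvolumes
    useNamingPrefix: "false"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-provisioner
provisioner: kubevirt.io/hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rhel8-hostpath
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "http://example.com/images/rhel8.qcow2"
spec:
  storageClassName: hostpath-provisioner
  accessModes:
    - ReadWriteOnce   # local disk, so no RWX and no live migration here
  resources:
    requests:
      storage: 40Gi
EOF
```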
This seems to be taking its time to apply the MachineConfig. I'm sure it doesn't help that your physical host is swapping. Yeah, that's very true, let's see. Yeah, so we've used all the memory, and 14 gigs of swap now. That's almost as much as in my laptop, right? Yeah. So, I know that we're getting towards our time here, we've only got about 20 minutes left; were there any other questions whilst we wait for this to invoke? Any other questions from chat, any questions from out there in viewer land that you want answered? Right, like, when did we release this as a tech preview? Yeah, so originally in 3.10 or 3.11, I think. And now it is GA in 4.4? No, so in 4.4 it is still tech preview, but we do expect it to GA this year. Right. And I think at the very beginning we talked about doing emulation versus nested virtualization. Both work; with emulation, of course, you're going to have an even more substantial performance penalty. And if I remember correctly, with the operator it's as simple as, when you initially deploy it, changing that false to a true, or do you still have to create a config map? I don't remember. Yeah, you do. So, before we had the operator install, you had to run a deployment script that would deploy all the bits you needed, and you could set a parameter there, which is kvm_emulation. That works just fine for doing a little bit of testing, but the performance is pretty bad; you'd certainly never use it for production. But so long as you have a relatively modern machine, you can enable nested virtualization just fine. I think libvirt even enables it by default, especially when you do copy host CPU model, it'll pull the nested virtualization extensions through, so you can do it there. All that the node labeller part of it wants to see is that it has /dev/kvm available; if it has /dev/kvm available, it knows that it'll work just fine. Yeah, and it doesn't care if it's first level or second level, right? Absolutely, it doesn't. So, there's a question in chat: does GA mean it's open source slash free? Not sure how licensing works. So I'll let you answer that one, Reese, because it's the normal value-of-a-Red-Hat-subscription stuff, right? Sure. So, everything that Red Hat does is open source, and part of our mantra is absolutely upstream first, so we always develop all of our new features, security fixes, enhancements, whatever they are, in the community first. So everything that you saw here is available today, and it's all based on open source. I've deployed the Red Hat supported operator, but there is an equivalent upstream project, a community project, called KubeVirt. But what you saw today is technology preview: you can install it, we provide all of the bits alongside OpenShift so you can try it out, it's just not fully supported. We can't provide our standard support level agreement for it, so if you raise an issue, a bug, we'll do our best to help you with it, we'll never put the phone down on any customer, ever, but there's obviously a limit to what we can actually do in terms of being able to support it. We provide it with no sort of guarantees that it's going to work, just yet. And to be clear: GA, tech preview, or none of the above doesn't change the licensing, whether it's open source or not? Doesn't change, yeah. What sort of changes, depending on whether you want supported versus not supported, is the location that you're getting it from: if you're going completely upstream, in the case of OpenShift Virtualization you're really using the KubeVirt project; if you're using the not-yet-supported version, you get it from Red Hat as tech preview; and when it goes GA, you get it from Red Hat, the difference being we will fully support it at that point. Duck Hunt? Yeah, Duck Hunt is just the app that I was showing earlier to prove that the OpenShift cluster was running; it's just a really basic game that I deploy to validate that a cluster is up and running. It sounds like a great way to waste some time. Yeah. So, my team does a lot of enablement, that's internal training for our technical resources, but we also do a lot of labs at Red Hat Summit and various other conferences, and we try and make it a little bit more fun. Once the attendee of one of our lab sessions has got their cluster up and running, how do you prove it? And I refuse to go down the path of deploying WordPress or something like that, that's the most boring thing in the world. If I can get a game running instead: the way that I deploy this game, we do a proper source-to-image. It downloads the code from a git repo, builds it using the standard build pipelines, spits it out into the internal registry, deploys it in a pod, scales it out, attaches a route to it, and you expose it through the ingress. That's using almost all of the features of OpenShift right there, to run a game, so it's pretty cool. It's all based on open source stuff, and it's just a bit more entertaining when you're running a lab or a demo or something like that. And doing the WordPress install isn't entertaining? Well, when was the last time you installed it? I mean, they make it really easy. I'm not knocking the WordPress install at all, it's really easy, but it's an overused example. Exactly, in a sense, yeah.
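For the curious, that cluster-validation pattern is just standard source-to-image; something like this, with a placeholder repo URL rather than Reese's actual game:

```bash
# Build from source in a git repo, push to the internal registry, deploy a pod
oc new-app https://github.com/example/duck-hunt-js --name=duck-hunt

# Attach a route so it's reachable through the cluster ingress
oc expose service/duck-hunt
oc get route duck-hunt
```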
So, there's a question: does OpenShift Virtualization, KubeVirt, make any use of the libvirt APIs? Absolutely it does, yeah. That goes back to what I was saying earlier: we're leveraging all of the work that we've put into virtualization on Linux for at least the 10 or 11 years that I've been at Red Hat. All of that engineering, all of that effort, is literally being reused, so we call libvirt to instantiate that virtual machine. When I was doing the debugging earlier on that virtual machine, to show you behind the scenes, I was using virsh; we dumped the libvirt XML. If you joined a little bit later, go back into the recording when it's available after the stream and you can see us go into that. It's using libvirt underneath the covers, it's using qemu-kvm, it's using everything from RHEL that you'd typically use in a virtualization environment; the big difference is that it's orchestrated using Kubernetes, and not OpenStack or RHV or just the standard virt-manager tools that you're seeing here only because I've got a nested environment. I think that's important to point out, right, because even with RHV and OpenStack, KVM is the hypervisor, KVM is the part that's actually executing the virtual machines. All of the other bits and pieces on top are really focused around two things: one is getting the resources that those virtual machines need available to whatever host they may be running on, so storage, network, etc.; and two is actually scheduling them, so whatever policies you put in place, I want high availability, I want anti-affinity, I want X and Y and Z, using the scheduler to actually make that decision. But at the core it's still just KVM, it's the same hypervisor; we're just changing the management plane, if you will. Yeah, exactly. So, did we break your host? Yeah, I think I ran out of memory. Like, it's truly dead? I believe so. I mean, we could try and troubleshoot some of this if we like, but this is not how anyone would really run OpenShift in reality; I've really pushed the boundaries of not only the product but also this system, so I think something may have fallen over somewhere. That happens when you're doing it live. Doing it live, oh yeah, live demos, absolutely. So, a lot of things are pending, so let me clean this up: for i in, oc get pods -A, grep -v -i running, no, and print, yeah, no, you know, normally my rule of thumb is never do arithmetic in public; I might have to add bash scripting to that. Yeah, unless it's copy pasta. So these failed, and, yes, it's just the revision pruner, okay. Let's see, oc get pods -A, egrep -v, now add -v running, so these are all the ones that are not wanting to come back up for some reason, and I suspect it's because, help me with the syntax: if I want to look into the machine config, is it MCO? No, if I just want to look at the machine config pool, I think it might be MCP. Yeah, there we are, so it still says it's updating, hmm. You might be able to look at the nodes and see if they're suffering from an out-of-memory condition, although I wouldn't expect that. Oh, the VMs themselves? Yeah. Can I run debug? If the OOM killer is rapidly killing things off, this is going to get degraded real quick. Yeah, the problem is, this machine's barely using anything, but that's just virtual memory at this point; my physical machine may not be liking things. Yeah. I mean, I could take some drastic action if we want to actually see this hostpath stuff come up, but I realize we've also got about 10 minutes left before the stream's meant to end.
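Cleaned up, the sort of triage Reese was typing looks roughly like this (node name is a placeholder):

```bash
# Everything that isn't happily Running or Completed
oc get pods --all-namespaces | egrep -vi 'running|completed'

# Is the MachineConfig rollout still in flight? Watch the UPDATING column
oc get machineconfigpool

# Any memory pressure or OOM hints on a worker
oc describe node ocp4-worker1 | grep -iA8 conditions
```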
Yeah, I think it's fine. I think we know conceptually what happens, and you've walked through this already: the machine config that was created is specifically for SELinux, because it's RHEL 8, or, let me rephrase that, it's CoreOS, but CoreOS is built on RHEL 8, which means that all of the normal features like SELinux are there. So we have to take that into account when we want to use local storage, i.e. storage from a local physical storage device, to host virtual machine disks; that was the genesis for all of this with the machine config operator, to do that SELinux relabeling and allow it to happen. Yeah. And theoretically, once the nodes come back up, we just use the hostpath provisioner: one, we deploy it; two, we define that it is a storage class; and three, you simply start creating PVCs using that storage class, and it will result in folders and files being automatically created for the PVs under whatever path is specified in the hostpath provisioner configuration. And those PVs don't have to be used with virtual machines either, they can be used for anything. Oh yeah, exactly; the hostpath provisioner is certainly not virtual machine specific, we simply utilize it to host our virtual machine disk images. And the beauty of having a provisioner is that it does all of that dynamically: you say, I want this claim, at this size, and you can attach some annotations to it, they could be KubeVirt annotations, pull this disk image into that volume automatically for me, and it'll go away and create all the PVs and do everything dynamically for you. That's really powerful. And how is the hostpath provisioner different than, like, an emptyDir definition? That's a good question. I think it has something to do with the persistency, but I'm probably not the best person to ask there. I know that there's a big difference from the local storage operator, which is slightly different in that it doesn't have dynamic provisioning; the beauty of local storage, though, is you can use entire block devices instead of just a file system location. But emptyDir is, I would say, a little bit more of a hack, in that it's just, use this as scratch space. And I think the answer to that question, because I was trying not to ask you a question that I didn't maybe already know the answer to, is that if you create an emptyDir, essentially you are using the standard graph storage, right, what is it, /var/lib, wherever it normally stores the ephemeral data for container image layers; whereas with the hostpath provisioner, I can have a completely separate storage device, maybe I attach, or I have, some physical devices in a local RAID array or something like that, great, I can specify that as my path, as my location for those volumes to get created. Yeah.
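To make that contrast concrete, this is all an emptyDir is: scratch space carved out of the node's own storage, with no PVC and no provisioner involved (a generic Kubernetes sketch, not specific to OpenShift Virtualization):

```bash
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      # lives under the node's ephemeral storage and dies with the pod;
      # the hostpath provisioner instead hands out persistent volumes
      # from whatever path (and device) you pointed it at
      emptyDir: {}
EOF
```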
So, I firstly rebooted those two workers to see if the MCO had actually run, but it just hadn't run on them yet, so something's getting stuck in my particular environment. But as you say, this would never happen on a proper deployment; I just haven't set up those particular thresholds, so it was more than happy to take down both of my workers, where the majority of the infrastructure parts that keep this thing up and running were also sitting. So yeah, I might have got myself a little bit stuck here. Well, I think that's fine. You've done a phenomenal job of showing us all the ins and outs of virtualization so far, and the fact that your home rig is falling over is completely understandable; I totally get it. Andrew pointed out earlier this week that things are very possible on home labs, but you're going to have certain limitations, and here's one. Absolutely, oh yeah, running out of memory, like if you don't have the hundreds of gigs of memory that you need. So just go ahead and file an expense report for a small data center; I'm sure it'll be fine. My manager would love that. All right, well, I don't think we're going to recover from this. Not in five minutes, no. I know for sure that we could recover from this, but I'm not sure people really want to see that happen in the next five minutes. We do have a very important question in the chat, though, which is: why isn't Reese wearing a red shirt? That is a very important question. I didn't get the memo. I have a red shirt, I could change, but, no, it's probably not on camera. It's funny, I think I have one company-branded red shirt; most of them are black, with the red hats. Right, so it's funny that, despite Red Hat, very few of our shirts are red. Yeah, that's funny. My wife, well, I'm terrible at dressing myself; I spent too long in the military. So I had a new pair of pants, and I was like, they're blue, what color can I wear? And she's like, oh, light gray. And I look at all the Red Hat shirts and I'm like, well, there's one, it's the volunteer one I got, that's it, a light gray or a red. There we go. All right, with that, I think we're done here. Thank you, Andrew, and thank you so much, Reese, we appreciate your time. Any time. Coming up on the schedule later today, we are having an OpenShift Commons simulcast, I guess you'd call it, a multi-stream simulcast. OpenShift Commons will be doing a session with Andrew Clay Shafer, the DevOps luminary that he is, talking about transforming your environments, your work environments, your systems, the way things work in your company. So join us today at noon for that simulcast. And tomorrow at two o'clock Eastern, I'm sorry, I don't know, I do have UTC, hang on, it is 1800 UTC, there will be some deploying of OpenShift on bare metal happening, so check that out. Eric will be running that one tomorrow while I am off doing other behind-the-scenes work for the stream itself. So yeah, thank you all for joining us today. Have a wonderful day, evening, night, week, weekend, the whole nine yards. Reese, again, thank you so much for joining us today. Thank you. And without any further ado, I will send us out.