This video will highlight some of the functionality provided by the 2.0 release of container-native virtualization (CNV). We will deploy CNV using operators and use it to run virtual machines on an OpenShift 4.1 cluster. For this, we will show how to prepare some of the VM images. We will deploy VMs in different ways, including in batch mode using Ansible modules. Some of those VMs will have access to multiple networks, and we will show this by using an additional network to connect to a Windows virtual machine over RDP. Finally, we will put a node into maintenance and see how this triggers the live migration of the VMs running on that node. We will carry out most of the tasks using the CNV web user interface. Let's get started.

The first thing we will do, of course, is check the documentation. Here we find a section that explains how to install CNV. I have already gone through the prerequisites: I have an OpenShift 4.1 cluster, and I have already prepared access to the operator catalog and subscribed to the HyperConverged Cluster Operator for KubeVirt. This is the HCO, an operator that deploys the different operators that together form the CNV 2.0 system. The only remaining step is to deploy an instance of the HCO, which we do through the web UI under Installed Operators (an illustrative HyperConverged custom resource is sketched at the end of this part). This is my cluster, and if we go to the catalog and check the Installed Operators, as mentioned, I already have this installed. I only need to select the HyperConverged Cluster Operator and create an instance of its custom resource. This triggers the deployment of the different operators. We can see that containers are being created, and the documentation explains that once this finishes deploying, we can access the web UI and start working with CNV. After a while, we can see that the different components that are part of CNV are up and running, including the operator that takes care of deploying the web UI. So now we can switch to the web user interface, which is a modified version of the OpenShift console with a few virtualization-specific additions.

To help us do some tests, we will define a virtual machine template for a very simple test VM. Templates are an easy way to deploy multiple virtual machines with a similar configuration. Here, we will create a template for a CirrOS-based virtual machine. We will use a virtual machine image that is stored in a container registry, and we will keep the default settings so that we have a very simple template for deploying multiple VMs (a sketch of a VM backed by such a container-registry image also follows below).

Let's create a test virtual machine from the CirrOS template that we created. We will use the wizard to deploy the virtual machine and select the CirrOS template, which populates most of the settings for the virtual machine. We will name our machine TestVM and ask it to start upon creation. The rest of the settings are already there, and we have the VM created. We can see that it is already starting, as we specified. If we go to the details of the VM, we can see which node it is starting on, and once it is running, we will be able to access the console. The VM is already running: it got an IP address from the pod network, and we can go to the console tab and see it booting. Once it finishes booting, we will be able to log in.
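For reference, creating the HCO instance through the UI amounts to creating a HyperConverged custom resource. A minimal sketch is shown below; the API version and target namespace vary between HCO releases, so treat these values as assumptions rather than the exact manifest used in the demo.

```yaml
# Minimal HyperConverged custom resource (illustrative; API version and
# namespace depend on the HCO release installed on the cluster).
apiVersion: hco.kubevirt.io/v1alpha1
kind: HyperConverged
metadata:
  name: hyperconverged-cluster
  namespace: openshift-cnv
spec: {}
```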
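Also for reference, a VirtualMachine that boots a small test image directly from a container registry, similar to what the CirrOS template produces, could look roughly like the following. The names, image, and resource values are placeholders for illustration, not the exact objects created by the wizard.

```yaml
# Illustrative VirtualMachine backed by a containerDisk image from a registry.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: true                     # start the VM upon creation
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo   # example image
```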
Okay, the VM finished booting, so we can log in. And that's it for this initial test. Let's go to the list of virtual machines, stop the VM, and delete it, because we don't need it anymore.

Now, suppose that we have a VM image stored locally and we want to create a virtual machine from it, instead of downloading an image from a registry or from the internet. For this we have virtctl, a command-line utility that helps us upload the image and creates a persistent volume claim (PVC) to store it. It is very simple to use: we just specify a name for the persistent volume claim, the size we want, and the path to the image. Here I have a compressed CirrOS image, and I will use virtctl image-upload to send it to the server (an illustrative invocation is sketched at the end of this part). Again, I just specify the PVC name and size and point it at the image that I have locally. Note that this already created a persistent volume claim, and now it is waiting for the pod that will handle the image upload. We can check the logs of the pod and see that it is already writing data. Notice how it auto-detected the endpoint for the upload, and data is being uploaded here. We check the logs again: the upload finished. Here we can see that the image has been expanded to the desired size, and if we check the persistent volume claims, we will see the PVC that was just created.

Using that mechanism, I have already uploaded VM images for two other operating systems, CentOS and Windows. Let's provision a CentOS virtual machine that uses the CentOS persistent volume claim created by uploading the image. Previously, we deployed the test virtual machine interactively using the VM wizard. Now we will take a different, non-interactive batch approach based on an Ansible module called kubevirt_vm. There are a couple of blog posts on the KubeVirt website that provide an introduction to that module and pointers to several additional resources. For now, here we have an Ansible playbook that deploys a CentOS VM using the kubevirt_vm module (a minimal playbook is sketched below). Notice that this VM will have two network interfaces: one on the default pod network and another on a network that we define, which is accessible from outside the pod network. Also notice that the storage for this VM is the persistent volume claim we uploaded the image to. We just run this playbook, and it creates the virtual machine for us. Okay, we see that it finished, and if we go back to the web UI and look at the virtual machines section, we see this new CentOS VM already starting.

Now that the CentOS VM is up and running, we will create a Windows VM. We will use yet another option to deploy VMs, which is to provide their YAML definition (an illustrative definition is also sketched below). Notice that this VM also has two network interfaces, one on the pod network and one on the externally accessible network. Notice also that this VM has an eviction strategy of LiveMigrate. This means that if the pod that runs the VM needs to be evicted, this is done by live migrating the VM to another node. The VM has been created and is already starting. Okay, now the Windows VM is up and running. As it has two network interfaces, it has two IP addresses. The first one is attached to the pod network and behaves as typical container workloads do.
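As a reference for the upload step, a virtctl image-upload invocation along these lines creates the PVC and streams the local image into it. The PVC name, size, and image path below are placeholders, and the exact flags can differ between virtctl releases.

```bash
# Illustrative virtctl image-upload call; names, size, and path are placeholders.
virtctl image-upload \
  --pvc-name=cirros-image \
  --pvc-size=1Gi \
  --image-path=./cirros-disk.img.gz \
  --insecure   # skip TLS verification against the upload proxy, if needed
```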
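The playbook used in the demo is not reproduced in the transcript, but a minimal sketch of one built around the kubevirt_vm module might look like the following. The VM name, namespace, memory, and PVC name are assumptions, and the exact module parameters (for example, how interfaces and networks are declared) should be checked against the kubevirt_vm module documentation.

```yaml
# Illustrative playbook creating a PVC-backed CentOS VM with the kubevirt_vm module.
# Names and sizes are placeholders; check the module docs for the exact schema.
- hosts: localhost
  connection: local
  tasks:
    - name: Create and start a CentOS VM from an uploaded PVC
      kubevirt_vm:
        state: running
        name: centos-vm
        namespace: default
        memory: 2Gi
        disks:
          - name: rootdisk
            volume:
              persistentVolumeClaim:
                claimName: centos-image     # PVC created with virtctl image-upload
            disk:
              bus: virtio
```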
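The additional, externally accessible network that both VMs attach to is typically defined as a NetworkAttachmentDefinition consumed through Multus. A hedged sketch using a bridge-type CNI plugin is shown below; the plugin type, bridge name, and resource name are assumptions for illustration.

```yaml
# Illustrative NetworkAttachmentDefinition for a bridge-backed secondary network.
# Plugin type, bridge name, and resource name are placeholders.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: external-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "external-net",
      "type": "cnv-bridge",
      "bridge": "br1"
    }
```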
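The Windows VM's YAML definition is described but not shown in the transcript; a sketch of a VirtualMachine with two interfaces and a LiveMigrate eviction strategy, along the lines described, follows. The names, PVC, memory size, and the reference to the external-net attachment above are placeholders.

```yaml
# Illustrative VirtualMachine with a pod-network interface, a secondary bridged
# interface, and a LiveMigrate eviction strategy. Names and sizes are placeholders.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: windows-vm
spec:
  running: true
  template:
    spec:
      evictionStrategy: LiveMigrate       # evict by live migrating to another node
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}              # default pod network
            - name: external
              bridge: {}                  # externally accessible secondary network
        resources:
          requests:
            memory: 4Gi
      networks:
        - name: default
          pod: {}
        - name: external
          multus:
            networkName: external-net     # NetworkAttachmentDefinition sketched above
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: windows-image      # PVC uploaded earlier
```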
We would be able to access it, for example, through exposed services, which we haven't defined, so we don't have one. The other network interface is attached to an additional network to which we have layer 2 connectivity. This allows us to do things like accessing the console. We have already seen console access through the web UI, but in this case we have an additional way to access the console: RDP. If this demo were being run from a Windows system, we could use that option directly, but as we are running on a RHEL 8 system, we will instead use the manual connection approach. We will use an RDP client to connect to the Windows VM; notice that we are using the IP address on this additional network. And here we have an RDP connection to our Windows VM.

Okay, let's head back to our cluster and check where we stand. We have two virtual machines running: a CentOS VM and a Windows VM. Let's go back to the details of the Windows VM. This VM is running on this particular node. So, what happens if this node requires hardware maintenance? Well, there is a maintenance mode for nodes that will help us here (an illustrative maintenance resource is sketched after the closing below). Remember that when we created the Windows VM, we defined an eviction strategy of LiveMigrate, meaning that if the pod where this VM is running needs to be evicted, a live migration of the VM to another node is triggered. So let's do this: let's start maintenance on this node, giving "demo purposes" as the maintenance reason. We start maintenance, and if we go back to the list of virtual machines, we see that the Windows VM is already migrating. Checking its details, we see that it is now running on a different node. Remember that we had an RDP connection open to this Windows VM; if we go back and check, we see that the connection has stayed alive during the migration and we still have access to it.

And this concludes our video. We have seen highlights of the functionality provided by the 2.0 release of container-native virtualization, including operator-based deployment, different ways to create virtual machines, access to multiple networks, live migration on node maintenance, and more. Thank you for watching.
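For reference, node maintenance in CNV is driven by the node-maintenance-operator through a NodeMaintenance resource. A hedged sketch is shown below; the API group and version have changed across releases, and the node name and reason are placeholders, so verify against the documentation for your version.

```yaml
# Illustrative NodeMaintenance resource; API group/version varies by release,
# and the node name and reason are placeholders.
apiVersion: nodemaintenance.kubevirt.io/v1beta1
kind: NodeMaintenance
metadata:
  name: maintenance-worker-1
spec:
  nodeName: worker-1
  reason: "Demo purposes"
```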