Hi everyone, I think it's time to start. So let me introduce myself. My name is Alicia Frauzi, I'm from Red Hat, and today I'm going to tell you what happened over the past year in KubeVirt. First of all, this is not going to be an introduction to KubeVirt. If you had a chance to attend our virtual office hours, you already got an introduction. But if you are completely new to KubeVirt, you can think of KubeVirt as a Kubernetes extension, designed to be Kubernetes native, that runs virtual machines using libvirt and KVM inside containers. In KubeVirt we focus on many different areas: we add new virtualization features, we focus on improving security, and we also try to be part of the Kubernetes ecosystem, so we integrate with various projects in order to offer a more robust and scalable platform. KubeVirt was recently promoted to a CNCF incubating project, and this shows how you can effectively run virtual machines using Kubernetes.

One of the strongest use cases for KubeVirt is running GPU workloads, which usually cannot be run effectively with regular containers. One example is the slicing of a physical GPU: with KubeVirt you can partition a GPU into mediated devices, and then you can assign those virtual devices to different VMs. Initially, the creation of these devices needed to be done manually. Recently we added automation, and KubeVirt will now create the mediated devices automatically based on the cluster configuration; this has been one of the recent improvements in this area. To run virtual machines effectively, we often need to pass very low-level information to the pod where the VM is running. An example is the CPU topology, and in this direction we recently added NUMA affinity for devices assigned to virtual machines. This is particularly useful if the VM is using SR-IOV devices or, again, GPUs.
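The mediated-device automation mentioned above is driven by the KubeVirt custom resource. This is a minimal sketch, assuming an NVIDIA vGPU setup; the mdev type and resource names are illustrative examples, not taken from the talk:

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    mediatedDevicesConfiguration:
      mediatedDevicesTypes:        # mdev types KubeVirt should create on capable nodes
      - nvidia-222                 # illustrative vGPU profile
    permittedHostDevices:
      mediatedDevices:
      - mdevNameSelector: "GRID T4-2Q"          # illustrative device name
        resourceName: "nvidia.com/GRID_T4-2Q"   # how VMs request the device
```

A VM can then request `nvidia.com/GRID_T4-2Q` as a GPU in its device list, and KubeVirt takes care of creating and assigning the mediated device.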
One of the goals of KubeVirt is to abstract the workload definition from the different options used to tune your VMs. One example is real-time workloads. As a user, you simply specify that you want to run a real-time workload; that is just an option in the declaration of your VM. KubeVirt then automatically picks the best options, and it also schedules the VM on a node whose kernel has real-time support.

We focus on different areas, and storage is one of them. In the storage area we made a lot of improvements, but we also integrated with other projects in the storage ecosystem, and we are going to see many examples along the way. Usually when we talk about storage, we have two levels: the storage that we pass to the pod, and the storage seen inside the guest. This can be particularly problematic if there are, for example, file system changes. One example is the case of snapshots. We rely on CSI in order to take snapshots; however, if the VM keeps writing to the file system while the snapshot is taken, the snapshot can end up inconsistent. So recently we added coordination between CSI snapshots and the QEMU guest agent, a process that runs inside the guest. While we take the snapshot, the QEMU guest agent freezes the file system with fsfreeze, and this way we can guarantee that the snapshot is consistent. The same kind of problem occurs when you try to expand a PVC. Certain storage classes support PVC expansion; however, initially this expansion was not visible inside the guest. Recently we added support for online resize: we notify the VM that the disk has changed, so you can see the expansion inside the guest as well. Backups are one of the strongly needed features for virtual machines; unfortunately, CSI doesn't offer this feature yet.
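The real-time option described above is declared on the VM itself. A minimal sketch, assuming the `realtime` field under the CPU section of the domain; companion settings like CPU pinning and hugepages are typical, but the exact requirements may vary by release, and the names here are illustrative:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rt-vm                         # example name
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          dedicatedCpuPlacement: true   # real-time guests need pinned CPUs
          realtime: {}                  # ask KubeVirt to apply real-time tuning
        memory:
          hugepages:
            pageSize: 2Mi
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/example/rt-guest:latest   # illustrative image
```

The point is that the user only declares the intent, and the scheduler plus KubeVirt translate it into the right node placement and low-level tuning.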
One of the most popular tools for taking backups is Velero, and in the KubeVirt organization you can find a plugin that helps you integrate Velero with KubeVirt VMs. You can use Velero in order to back up the disks of your VMs. One of the traditional sets of tools in virtualization for manipulating and customizing VM disks is the guestfs tools. Recently we added a new command to virtctl, the KubeVirt client, that helps you set up a guestfs pod with a PVC attached. Basically, you are able to use the guestfs tools in a containerized environment and, for example, rescue a faulty partition.

Another area is, of course, networking. Live migration is one of the top features implemented by KubeVirt, as Kubernetes doesn't foresee pod live migration yet; it is implemented by KubeVirt and relies on libvirt. Unfortunately, not all virtual machines are migratable, and one of the causes can be that certain devices cannot be automatically detached from the node where the VM is running and reattached on the node we want to migrate to. That was the case for SR-IOV devices, and recently we overcame this limitation by unplugging the virtual function device, performing the migration, and then creating an equivalent virtual function device on the target node. We have another example of integration with the Kubernetes ecosystem: Istio is a very popular tool for service mesh, and today KubeVirt VMs can participate in service meshes, so you can, for example, inspect the virtual machine network traffic using the Kiali dashboard. One recent feature we added is support for single-stack IPv6, which is particularly needed if your cluster supports only IPv6. Security has been one of the key areas where we focused our effort in the past year. Initially, in order to run a virtual machine in a pod, we needed to grant additional capabilities, so we worked to improve this and remove the requirement.
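The virtctl guestfs command described above can be used roughly like this. The PVC name is illustrative, and the path where the disk appears inside the pod depends on the PVC's volume mode:

```shell
# Launch an interactive libguestfs pod with the PVC attached
virtctl guestfs broken-vm-disk

# Inside the pod, the usual guestfs tools are available, e.g.:
#   guestfish --ro -a /disk/disk.img run : list-filesystems
#   virt-rescue /disk/disk.img     # drop into a rescue shell on the disk
```

This gives you the classic disk-surgery workflow without needing direct access to the node that holds the volume.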
So today, you can run a virtual machine in a pod with the standard security profile; we don't need the extra capabilities anymore. Also in the security field, we added initial support for TPMs and virtual TPMs. This is particularly needed if you want to run Windows virtual machines. One of the emerging areas in virtualization is confidential computing, and we have added initial support for AMD Secure Encrypted Virtualization, or SEV. So today you can run a KubeVirt VM with encrypted memory; however, the VM is not attested. This is something we are still researching; there are open PRs, and it is coming in the near future.

We want to integrate with many projects and be part of the Kubernetes ecosystem. In the KubeVirt organization you can also find a collection of Prometheus alerts and Grafana dashboards in the monitoring repository, and these can help you understand KubeVirt alerts and issues. Kubernetes is all about scale, so we want to be able to easily create a group of VMs and also be able to manage it, and this is exactly the goal of the VM pool: a VM pool allows you to define and manage a group of VMs based on a template. One very popular tool in the Kubernetes ecosystem is Tekton, for CI/CD, and in the KubeVirt organization you can find a collection of Tekton tasks that let you perform operations on KubeVirt VMs, for example creating VMs, customizing disks, executing commands inside the guest, or waiting for a certain VM state, and there are many more. As already said, we want to be part of the ecosystem, and we focus on many different areas: supporting different platforms, integrating with popular tools in the Kubernetes ecosystem, and also with other projects born within Kubernetes, like some Kubernetes SIG projects. Here I have a couple of examples. This year, we added initial support for ARM64, shipped as multi-arch images.
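The VM pool mentioned above can be sketched like this. The API version reflects the alpha pool API, and the names and sizes are illustrative, not taken from the talk:

```yaml
apiVersion: pool.kubevirt.io/v1alpha1
kind: VirtualMachinePool
metadata:
  name: web-pool              # example name
spec:
  replicas: 3                 # desired number of identical VMs
  selector:
    matchLabels:
      pool: web
  virtualMachineTemplate:     # every VM in the pool is stamped from this template
    metadata:
      labels:
        pool: web
    spec:
      running: true
      template:
        spec:
          domain:
            devices: {}
            resources:
              requests:
                memory: 1Gi
```

Scaling the group is then just a matter of changing `replicas`, much like a ReplicaSet for pods.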
Among the Kubernetes SIG projects there is something called Cluster API, which allows various cluster providers to create Kubernetes clusters on top of a Kubernetes cluster, and this is actually a perfect use case for KubeVirt. So among the Kubernetes SIG projects you can also find a KubeVirt cluster provider, and we are going to see that in a demo at the end of the presentation. Again, we want to integrate with as many projects as possible, and we also added a health check for the KubeVirt status in Argo CD, another very popular CI/CD tool.

So what's next? Confidential computing will probably be an area we focus on in the next year: we want to be able to deploy fully confidential and attested virtual machines. As already mentioned, we want to separate the VM options from the workload definition as an abstraction, and this is exactly the goal of the flavor API: you define the kind of workload that you want to run, and KubeVirt picks the best options to suit and match the user's use case. I already mentioned the KubeVirt cluster API provider; we also plan to add more features there, for example a better drain mechanism for when the underlying node is drained. One of the traditionally expected features of a virtualization platform is hot plug of various kinds of resources, and this is actually one of the main challenges we face with Kubernetes, because of pod immutability: if you want to change the resources assigned to a pod, you usually need to restart it, and this is particularly problematic for virtual machines. We already have an implementation of volume hot plug, and we plan to do the same for other kinds of resources, for example adding and removing network interfaces from running virtual machines. So that was a quick overview of what happened in the past year. You can reach us in many ways: we are on Slack, and we have a mailing list.
And we meet weekly on Zoom, so feel free to join us. Now I would like to show you a demo, and then there will be some Q&A. Here I'm going to show you the KubeVirt cluster provider: I'm going to create a cluster on top of another Kubernetes cluster using KubeVirt, so you can see KubeVirt in action and get a feeling for how it looks. Here I have a Kubernetes cluster with two nodes, one master and one worker, and I have already deployed KubeVirt there. You can check the status of KubeVirt using its CRD, and we can also check the various KubeVirt infrastructure pods. Everything is up and running. Now I want to create a cluster, and this is the definition; I will go through it very quickly. It is based on a template that is available in the KubeVirt cluster API provider repository. Basically, I define a cluster; I have something called a KubevirtMachineTemplate that defines the template for my VMs; there are some configurations for KubeVirt; and then we are also interested in the MachineDeployment, a CRD that controls the number of workers in the cluster. So I apply this YAML, and various resources are created. First of all, we can check which KubeVirt machines have been created: I have a cluster with one control plane and three workers. We can also check the VMs: there is one VM starting up. As already mentioned, we deploy virtual machines inside pods, so this starting pod is the pod where my VM will be started. It's taking a while to start, so we can check the reason: it is pulling a container image. With KubeVirt, we use container images to deliver VM disks, which is a very handy way to ship VM disks with containers. It's going to take a while to pull this image, but once it's present on a node, the creation of other VMs from the same image will be very fast, because we are basically creating a snapshot of this disk.
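The container-disk mechanism just described can be sketched as a VM volume definition. This is a minimal sketch, assuming a scratch container image that carries the disk image; the image name and the cloud-init payload are illustrative, not from the talk:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                 # example name
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
          - name: cloudinit
            disk:
              bus: virtio
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/example/fedora-disk:latest   # scratch image carrying the qcow2
      - name: cloudinit
        cloudInitNoCloud:       # optional first-boot customization
          userData: |
            #cloud-config
            packages:
            - nginx
```

Because the disk ships as a container image, the usual registry machinery (caching, tags, pulls) applies, which is why the second VM on the same node starts so much faster.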
So yeah, it will take a while to pull, and then the VM needs to boot; we can simply watch the pods. The container disk is quite large. Now I have a first pod up and running, and after a while I will end up with four pods. Let's check it again: I have four pods because I have four nodes. I can also check the VM status, and everything is up and running. As already mentioned, all the VMs are created from templates, so we can check them. I have two kinds of templates: one for the control plane and one for the workers. We can inspect, for example, the control plane one; it's a very simple VM. We can also inspect the one for the workers. They look pretty similar in this example, but you can customize them and assign different resources.

OK, so far we just checked that the VMs are up and running, but now I would like to access the deployed cluster. With the cluster creation, different secrets have also been created, and we are interested especially in two of them. First we are going to get the SSH key: the SSH secret contains a private key that has been injected into every VM, so I can inspect this secret, save it to a file, and use the key later to access the VMs that have been created. This is, again, a simple Kubernetes secret, encoded in base64, and here we can see the private key. I will simply save it to a file and set the permissions SSH requires on the key. Then I have to start the SSH agent, of course, and add the key to it. Together with the key, I also want to save the kubeconfig; here you can see the kubeconfig of the deployed cluster. Again, I'm going to save this, and it will be useful later. OK, so I got all the secrets I needed, and now I want to access the control plane.
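The secret-handling steps above might look roughly like this. The secret names and data keys follow common Cluster API conventions but are assumptions, not taken from the talk:

```shell
# Save the injected SSH private key (secret name and data key are illustrative)
kubectl get secret demo-cluster-ssh-keys -o jsonpath='{.data.key}' | base64 -d > vm.key
chmod 600 vm.key                # SSH refuses keys with open permissions

eval "$(ssh-agent -s)"          # start the SSH agent
ssh-add vm.key                  # register the private key

# Save the kubeconfig of the deployed cluster ("value" is the usual data key)
kubectl get secret demo-cluster-kubeconfig -o jsonpath='{.data.value}' | base64 -d > guest.kubeconfig
```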
As I already mentioned, we have virtctl, the KubeVirt client, and it has different commands that help you interact with the virtual machines. For example, virtctl has an ssh command that helps you access your virtual machine. So I can simply get the name of my VM again, use the key that I saved from the secret, and access the control plane. OK, now I am inside the control plane. For example, I can inspect which containers are running on it. If I check what I have inside my home directory, I don't have any .kube directory, so I'm basically missing the kubeconfig. That's the reason why I also saved the kubeconfig to a file. virtctl has another command called scp that allows me to copy a file from my local host to the remote one, so I can copy the kubeconfig I saved previously onto the control plane. In this demo I'm just going to access the control plane, but you can access any node. We can check it: yes, we copied the kubeconfig successfully. Now I can use it to access the deployed cluster. For example, I can get the nodes, and I have a cluster with one control plane and three workers. We can also check the pods running on it; these are the boring pods that you get with any Kubernetes installation. Cool, so everything is up and running. Now what I want to do is try to scale. We can get the MachineDeployment, which controls the number of workers in my cluster; it works a little bit like a ReplicaSet. We see that we have three replicas, and now I'm going to edit this CRD, modify the number of replicas from three to four, and see what happens. OK, this has been updated successfully, and if we get the VMs, I have a new one starting up. If I get the pods, I already have a pod running; this is because the container image has already been pulled and is present on the node.
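The virtctl ssh and scp steps earlier in this part of the demo can be sketched as follows; the VM name, user, and paths are illustrative:

```shell
# SSH into the control-plane VM using the key extracted from the secret
virtctl ssh -i vm.key capk@vm/demo-cluster-control-plane

# Copy the saved kubeconfig into the guest so kubectl works there
virtctl scp -i vm.key guest.kubeconfig capk@vm/demo-cluster-control-plane:.kube/config
```

Under the hood these wrap standard SSH, but virtctl resolves the VM's address for you, so you don't need to know where the pod is running.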
So it's very fast to create the new virt-launcher pod. OK, and I already have a fourth worker. What we can try now is to access the control plane again and see what happened in the cluster that I deployed. If you look now, there is still no fourth node, and this is because it takes a while to boot the VM, and then the node needs to join the cluster. But if we try the command again after a while, we can see that we have a new node there; it is only 20 seconds old. So everything is up and running, and we have managed to scale very easily using the Kubernetes Cluster API.

OK, so now I want to play a bit with storage. Let's create a new volume, a new PVC, and I will try to hot plug this storage at runtime. virtctl, again, has a command called addvolume. First of all, before adding anything, let's check which volumes I have in the control plane: I have simply two of them, vda and vdb. So let's exit. As I already mentioned, virtctl has the addvolume command that allows hot plugging volumes. I can simply use it: I have to specify the VM where I want to add the new storage and the claim name, and then we will see what happens. The request has been successfully submitted, and we can see that we have a new pod here. This pod is actually the trick that we use in order to attach the hot-plugged volume, because volume hot plug is not transparent to Kubernetes, and we want to protect the PVC from being accidentally deleted, for example; that is the reason for this hot-plug pod. When this pod is up and running, it means we have successfully managed to add the volume. So we can log into the control plane again over SSH and check the disks: now you can see that we have a new disk, sda, and the volume has been added successfully. OK, so now let's try to increase the size of this volume.
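The hot-plug step, and the volume resize shown next, can be sketched with these commands; the VM name, PVC name, and sizes are illustrative:

```shell
# Hot plug an existing PVC into the running VM
virtctl addvolume demo-cluster-control-plane --volume-name=extra-disk

# Later, grow the PVC; the storage class must allow volume expansion
kubectl patch pvc extra-disk \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# The counterpart command detaches the volume again
virtctl removevolume demo-cluster-control-plane --volume-name=extra-disk
```

With online resize support, the size change propagates into the guest without a reboot, which is what the next part of the demo shows.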
So I can basically modify the size of this PVC, say from 10 to 20 gigabytes, and check whether the change has been applied to the PVC. That's the case. So let's log in again and check the disk: you can see that it is now 20 gigabytes in size. So this is everything I wanted to show you. Hopefully you saw KubeVirt in action; with the KubeVirt cluster API provider, for example, you can create additional clusters on your bare-metal servers, maybe one for development, one for testing, and one for production.

OK, are there any questions? I don't know if you need a microphone.

Hi, great presentation. A question: is KubeVirt now fully supported by OpenShift, so that I can run OpenShift on bare metal and run virtual machines on it?

Yes. You can have OpenShift on your cluster and install KubeVirt there. But the KubeVirt cluster API provider is still at a very early stage, so deploying OpenShift in OpenShift is something that is not supported yet.

OK, thank you.

So basically, your nodes were bare-metal nodes. And can they be provisioned with Metal³?

Yeah, I mean, it's a question of where you install KubeVirt. On bare metal, let's call it that.

Basically, can I have Cluster API with Metal³ as the provisioner, and then on top of that bare metal, can I use KubeVirt?

Yeah, if you have KVM, yes. Or you could even use emulation, of course. If you have nodes with KVM and virtualization capability, yes.

OK, thank you.

Thank you for the presentation. What are the steps for making a custom KubeVirt VM image?

Sorry, can you repeat the question? I missed the first part.

What are the steps for making a custom KubeVirt VM image?

What do you mean, custom?

Yeah, how do you build an image, a custom VM image with some stuff pre-installed on it, for example?

You mean for KubeVirt or an operating system?

For KubeVirt. So I can have a VM image with my application pre-installed on it, and deploy this with KubeVirt?

Yes. So I mean, KubeVirt is just an extension that helps you deploy the virtual machine.
What is running in your VM is up to you, so you can of course prepare a VM disk with something already pre-installed. For example, when I mentioned the Tekton tasks, that's exactly one very good use case: you can use the guestfs tools, for example, to add additional packages to the disk. Otherwise, we also support cloud-init. This is something you can specify in your VM definition, and when you boot the first time it is going to install the packages and apply some configuration. So it really depends on how you want to configure it, but it is of course possible.

Thanks.

Sorry, I think you need a microphone. In my imagination, we cannot use just any image without preparing it for KubeVirt, like having specific binaries on it or some prerequisite to be used as a disk.

No, you can use simply a VM disk. You can also prepare it locally, maybe even with virt-manager, for example. Then you can copy that disk into a scratch container image, and it can be used, for example, as a container disk. The container disk is basically what I showed in the demo while the image was pulling. Actually, that's a good example: what I was pulling during the demo was already a Fedora VM disk with Kubernetes installed, so when I started it, it was already up and running with all the packages there. It had, for example, CRI-O and all the things needed for a basic Kubernetes installation.

Thank you.

So yeah. I think... Can we have that online? Oh, OK. Yes? Can you repeat on the microphone? You're saying? Yeah, I mean, you need to support nested VMs if you are not on bare metal, or you can use emulation. But of course, that is not the suggested way; nested is, of course, the better option.

Hi. In your Cluster API demo, I noticed that... Sorry, where are you? Right here. Oh, hi. Hi.
In your Cluster API demo, I noticed that you copied the kubeconfig into the control-plane node and accessed the Kubernetes API server from the control-plane node. How would you have exposed the API server outside of the control-plane node, so you could have run it from the same session where you were running virtctl?

It is, of course, running in a VM, so you need to expose the port. There are various ways you can do this. For example, virtctl also has a command called expose, and you can forward the traffic. So there are ways you can do that in KubeVirt; it really depends on the network solution you are deploying with KubeVirt, and it really depends on the use case. But it is of course possible to expose a port from the VM and make it accessible outside the deployed cluster. That was just a demo, and it allowed me to show a couple of things, but yeah, that's possible.

What does that expose look like in Kubernetes terms? Are you creating a service, or how are you exposing it?

You're just creating a service. For example, for SSH we create a service in order to make the SSH port inside the virtual machine available. That's another example of how we use Kubernetes services to reach network endpoints inside a VM. The SSH case was exactly the same.

Thank you.

So we still have three minutes. Does anyone have other questions? I think so. No? OK. Done. Thanks for your attention, and enjoy KubeCon.
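For reference, the virtctl expose flow mentioned in that answer might look like the following sketch; the VM name, service name, and port are illustrative:

```shell
# Create a Kubernetes Service in front of a port of the VM
virtctl expose vm demo-cluster-control-plane \
  --name control-plane-api --port 6443 --type NodePort

kubectl get service control-plane-api   # shows the allocated NodePort
```

The resulting object is a plain Kubernetes Service selecting the virt-launcher pod, so anything that can route to a Service (LoadBalancer, Ingress, port-forward) works for VM traffic too.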