Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind Cloud Native. I'm Annie Tavastro, I'm a CNCF ambassador as well as leading marketing at Vision, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with Cloud Native technologies. They will build things, they will break things, and they will answer all of your questions, so you can join us every Wednesday or Tuesday to watch live. This week we have two amazing speakers, Kevin and Saad, here with us to talk about making VMs a first class citizen in Kubernetes with KubeVirt. As always, this is an official live stream of the CNCF, and as such it is subject to the CNCF Code of Conduct, so please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, please be respectful of all of your fellow participants as well as the presenters. With that done, I'll hand it over to our speakers to kick off today's presentation.

Thank you, Annie, and thank you everyone for joining us today on our webinar, KubeVirt: making VMs a first class citizen in Kubernetes. My name is Saad Malik, and I'm the CTO here at Spectro Cloud, where we focus on Kubernetes management, and joining me is Kevin Reeuwijk, principal architect at Spectro Cloud. Kevin? Hi, welcome all.

So this webinar is not intended to be a deep dive into all the advanced use cases of KubeVirt, but rather a basic introduction to the common operations and workflows for end users. We'll start with a few slides about the KubeVirt technology and then transition to the main event, where Kevin will do the demo. After the demo, we'll jump into the Q&A section.

So before we talk about KubeVirt, why are we doing VM management on Kubernetes? Even though we're seeing massive adoption of container-based workloads, the reality is that VMs are here to stay. For most organizations, refactoring all of their existing applications to containers is a significant amount of effort. Even today, if you look at the number of VMs running on VMware, it's over 85 million, and that number is only going to grow over the next few years. At the same time, many organizations are expressing concerns about the uncertainty around Broadcom's acquisition of VMware, so they are also looking at alternative hypervisors, hedging their bets and de-risking their operations. And there are additional benefits, like potentially saving on some of the more expensive hypervisor costs.

However, for me, one of the biggest advantages of running the hypervisor inside of Kubernetes is that your Kubernetes footprint is already expanding. Your platform engineering teams and your ops teams are already learning how to use Kubernetes and manage the infrastructure. And if that's the case, why not have the same platform manage both your container-based workloads and your virtualized workloads? One, all of the training, support, and operations are unified. And two, as your workloads migrate towards containers over time, as you start refactoring your applications, it's the same shared infrastructure that you're using; there's no additional hardware cost added into the picture.

Now, looking at the technology behind the scenes, KubeVirt is what provides the VM management. It brings Kubernetes-style APIs and manifests to drive both provisioning and management of virtual machines using simple resources.
The technology was started in 2017 by Red Hat and was adopted into the CNCF in 2019; today it is an incubating project. The underlying hypervisor is KVM, but KubeVirt leverages a technology called libvirt, which acts as an abstraction layer between different hypervisors. By the way, the OpenStack platform itself is also built on KVM and libvirt, so the technology has been mature for many, many years. Some of the really cool capabilities that KubeVirt provides are powerful hypervisor features: not only does it provision the machines, it also automatically restarts VMs when they crash. It enforces the VM configuration, everything from your CPUs to your RAM, networking, and storage. And then of course there are many different networking options. For quick POCs and simple environments you can use the pod network with popular CNIs like Calico and Cilium, and we'll get a little bit more into the details of that, but you can also leverage Multus for more advanced configurations, whether you're exposing VLANs or bonded interfaces to your VMs directly.

So, looking a little bit at the architecture of KubeVirt: a user interacts, using kubectl or k9s or their favorite tool, with the Kubernetes API server. From there, they essentially provision what's called a VirtualMachine or VirtualMachineInstance. There is a controller running called the virt-controller, which watches the VMI and launches an actual pod called the virt-launcher. That pod gets scheduled to a node, and the VMI is updated to say, hey, the virt-launcher pod for this VMI is on node A. There is another component called the virt-handler that runs on each node and watches for that VMI update. Once it sees that a pod has been scheduled to its node, it communicates directly with the virt-launcher to create a domain in libvirt, and that domain essentially is the actual KVM instance. Now all the operations are, of course, managed directly by libvirt and the virt-handler on that node, until the user deletes the VMI object, in which case the reverse happens: the domain is deprovisioned and then the pod itself is terminated.

Now, just one last aspect before we jump over to Kevin: what does it take to run KubeVirt? What does the entire stack look like? Well, you obviously have to start with some physical bare metal boxes. These servers become part of the underlying host clusters. The first question is, who manages the lifecycle of these servers? There are many ways of orchestrating bare metal servers. One of the more popular projects is Canonical MAAS, which is a bare metal management interface; MAAS allows you to provision and manage the lifecycle of bare metal machines like a private cloud. If you have a small KubeVirt cluster and don't need big data center use cases or self-provisioning capabilities, there is an open source project called Kairos that provides tamper-proof, immutable operating systems you can install onto these boxes, based on Ubuntu, Fedora, SUSE, or whatever operating system you may have. Once you have the actual bare metal boxes, you obviously need storage. Storage comes in two different flavors: you may have a software-defined storage solution, whether a commercial product like Portworx or something like Rook Ceph, or you can use direct access to storage area networks, whether you're using flash arrays, NetApp, or EMC systems.
On the networking side, there are many different CNIs; by the way, this is just a representative selection. There are many different choices, whether you go with Calico or Cilium. Again, like I mentioned, if you have more advanced networking requirements, exposing different VLANs, you can use a project called Multus. You go ahead and provision Kubernetes on top of that, and then you obviously have to install the KubeVirt piece. For monitoring you can use something like Prometheus and Grafana. For backup, you have a couple of different solutions like Velero or Trilio. And then once the cluster is up, you can provision your different namespaces and provision your containerized applications along with your virtualized VMs.

And then the last slide is just some specific requirements relating to KubeVirt. You do need a Kubernetes cluster that is 1.17 or later. The host nodes ideally run on bare metal, with virtualization support enabled in the BIOS or firmware and passed directly down. If you are testing KubeVirt, whether in a private data center like VMware or on a public cloud, you can also enable emulation support. For storage, if you're just looking to provision virtual machines on individual nodes without any live migration support, you can use regular ReadWriteOnce kinds of volumes. But if you do need the ability to live migrate VMs across multiple nodes, then you do need a CSI driver with ReadWriteMany capabilities. And again, for networking, keeping things simple, you can use the pod network, but if you have more advanced use cases with VLANs, you can use Multus. So Kevin, what are we going to be seeing in the demo today, and what does the environment look like?

Yeah, thank you, Saad. So the environment that we have to work with today actually looks like this. Let's see, we had a slide there, let me grab it back up here. It's a number of bare metal nodes: we have one control plane node and five worker nodes, all deployed by Canonical MAAS, on top of which we are running Kubernetes 1.26. In this particular environment, we're using Portworx Enterprise to provide a distributed storage solution, and Cilium for pod overlay networking, so that runs on top of here. We have an additional NIC in these machines that gives us access to particular VLANs in the network, so I can show putting a VM directly on one of the VLANs, for which we use Multus. And then we have a couple of regular components like MetalLB and NGINX to provide load balancing and ingress services on a bare metal cluster, which KubeVirt and Prometheus/Grafana can then use. And then using KubeVirt, we can run virtual machines, and of course we have this side-by-side of containers and virtual machines running next to each other.

So what I'll show is these steps: first creating an ephemeral virtual machine, which is what you often see in other demos, and then what we can do to put such a virtual machine on an existing VLAN. Then how you can create a persistent virtual machine using something called data volumes, so that the machine actually lives on a persistent volume and you can make changes to it. Then we show live migration, and then backup and restore via a snapshot. All right, let's get to the demo. So all of this is available on a GitHub repo; if you want to check out any of the manifests here, you can get it at kreeuwijk/cncf-kubevirt. The link will be posted in the channel as well.
So we have these manifests here, and let's first start by looking at what one of these things looks like. A virtual machine manifest is just a spec, similar to an OVF definition, of what the VM should look like. It has very similar items: how many CPU cores should be in there, which disks should be part of it, which network interfaces should we use. Then there's a translation to what that means inside of Kubernetes. So that same number of CPU cores also means that we want to reserve that amount of compute capacity, and you can play with overcommitment here. The interface inside the VM we can translate to what that is on the Kubernetes side. So in this case, we'll start by putting this VM on the regular pod network as if it were any other container, and then we'll look a bit further at what it requires to put it on a VLAN. And in this case, we're just saying that the volume for this is a container disk, which means that it's not persistent. If we shut this down and start it back up, we just get the same image all over again. But it's a great way of starting a VM from a known image that is ephemeral, so it can be really useful for runners, for example. And then we can do stuff like cloud-init, for example, to provide additional steps that automatically need to run as the VM starts up.

You can also do this via a CLI. If you install the virtctl plugin, for example with kubectl krew install virt, you will get this CLI, which gives you access to a lot of the common operations. But when it comes to creating a VM, you'll see that that is actually quite basic: you can provide some options and it will spit out a basic manifest. It's usually better to just use the KubeVirt user guide to write the manifest directly, as we have here.

So let's apply this. There we go. And then we'll see inside of the cluster that we now have a virtual machine. It got the configuration that we wanted and it is starting. And that will create a virtual machine instance, which is another object with essentially the same configuration, but now this is the desired state of an instance that should actually exist in the cluster. And so if we look at pods, we now have a virt-launcher, and this is the pod that actually runs QEMU to run the virtual machine with the resources allotted to it. So this pod will have these, let's see, resource limits applied to it here to specify how much it can actually use from the cluster. And you can see here that network information automatically gets added to this pod depending on how we configure it. So currently this is on the pod container network. And if we look at the VM, at the VMI, I believe, we can see that it's been given an IP address on the pod network, which is a /18 range here, and it's running on host AC300. If we wanted to connect to that, we could just say virtctl vnc and say we want to connect to the VMI. Actually, what was the name that we had for this? This machine right here, we want to connect to that, which is in the cncf-webinar namespace. And it'll give us some information on what to connect to. We can say that we just want proxy-only, and now it's listening on port 42121, and then I can spin up a VNC console and connect to it. Actually, let me do that.
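For reference, a container-disk VirtualMachine manifest along the lines Kevin just walked through might look roughly like this. This is a sketch only; the name, namespace, label, and image are illustrative, not the exact manifest from the demo repo:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm               # illustrative name
  namespace: cncf-webinar     # illustrative namespace
spec:
  running: true               # desired state; the virt-controller creates the VMI from this
  template:
    metadata:
      labels:
        app: demo-vm
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}   # regular pod network, like any other container
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/ubuntu:22.04   # ephemeral: resets on every restart
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              password: changeme
              chpasswd: { expire: false }
```

Applying it with kubectl apply -f creates the VirtualMachine object; with running set to true, the virt-controller creates the VMI and its virt-launcher pod, and the masquerade interface puts the VM on the regular pod network.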
So we'll take a VNC console right here and we can connect to, let me adjust that, actually not this one, ah, yes it is.

And there's a quick audience question as well. Hey, I have a question: why do we need VMs in Kubernetes? Why do we need VMs in Kubernetes? Because not all workloads are easily convertible. For example, if you have a Postgres server, trying to containerize Postgres is actually quite a bit of work if you want to do that reliably, since you'd have to do a multi-container, distributed-database kind of setup. And for a lot of workloads, it might not really be worth doing all that refactoring for every part of the application. So it can be really useful if you can take some parts of the application that are hard to containerize, bring them into the Kubernetes cluster, and run them as a regular VM, still have the live migration capabilities, and just hold off on the conversion of that particular piece while you work on other components of the application that are much easier to containerize. That way you can move the whole thing forward and not be blocked on these kinds of things. So that's why it makes sense. And I would just add that KubeVirt is the cloud native approach to being able to manage virtualization, with a complete hypervisor built as a single package. And obviously it can run in any environment, whether it's bare metal, cloud, or data center, though ideally it runs on bare metal so that you get maximum performance.

Great, and they had an extra question as well: what's the difference between these VMs and worker nodes? So the worker nodes are the bare metal nodes that are actually running the cluster itself. The virtualized VM workloads run as containerized pods inside of the cluster, right? So you might have, I'm just throwing out an example, 10 worker nodes that are comprised of the bare metal nodes. And maybe Kevin, you can show the nodes here in your k9s. Notice that Kevin's environment has five worker nodes and one control plane node, but if he looks at all virtual machines across all namespaces, you might have many different VMs. So if we just show all, now we see different VMs across different namespaces. Can we run non-Linux kernel VMs? Anything that KVM supports, including Windows, Linux, FreeBSD, they're all supported on KVM. So you see Windows running here, different Windows versions, yeah.

So let's see, I just stopped this particular VM, so we can see the status is stopped. I did that by adjusting the running state to false here, and of course in GUI tools you would have buttons to do that for you. And let's take a look at what happens if we want to put this on a VLAN, so that it's externally accessible, or at least easily externally accessible. It's the same manifest, but now what we're saying here is that we want to use a bridge interface inside of the VM, and we want to bridge it to a Multus network, which in this case is a network called vlan0. So if we take a look inside the cluster and look for network attachment definitions in the cncf-webinar namespace, these are per namespace, so it can be really useful to actually lock this down on a per-namespace level. We can see that we have something called vlan0, which will give access to, in this case, just the native VLAN. But of course you can put different VLAN IDs here; I think we have a different one here which will give access to VLAN 128, for example.
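For reference, such a NetworkAttachmentDefinition might look roughly like the sketch below. This is illustrative only: it assumes the bridge CNI plugin and a host bridge named br0, which are assumptions rather than details from the demo environment:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan0
  namespace: cncf-webinar
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br0",
      "ipam": {}
    }
```

A VLAN 128 variant would add "vlan": 128 to that JSON config to tag the traffic. On the VM side, the interface section then uses bridge: {} instead of masquerade: {}, and the networks section points at multus with networkName: vlan0 instead of pod: {}, which is exactly the change being described here.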
And then when you associate the VM with such a network name, it will make that VM accessible on that particular VLAN. So let's apply this manifest, which will update our VM. There we go. It should now be linked to, there we go, there we go. So now it is linked to the Multus network and we will start it back up again. And so we'll set, oh, running is already true? That's great. So we'll see in the virtual machine instance that this is running again, and it will take a little bit longer, and we'll see something similar to this, where it will get an IP address that is actually on VLAN 130, which is the native VLAN for this particular port. So vlan0 gets translated to 130 on this particular port that the machine is on, and it gets an IP address directly on that network. And so this can be used to make sure that VMs can be migrated from an existing solution like Hyper-V or VMware onto a Kubernetes-based virtualization cluster without changing anything in the VM. You just convert the VM, you bring it here, you can keep the same name, IP address, everything, and it runs as usual.

All right, so once we have that, let me create this first and then let's take a look at what this VM is. So this is a different one, where we see something called a data volume template. And so here we specify that we don't want to run this VM ephemerally. We want to grab the same container image that we had for the other one, but push that onto a ReadWriteMany persistent volume of a size that we can control, and then run the VM from that volume. So everything else here is the same, and instead of a container disk, now it says a data volume here with the name of that template. And so what happens is that inside the cluster, let's bring it into our webinar namespace just to make it a little bit easier, there's now a PVC for the data volume. We'll see it get created. There's a scratch DV here to initially land the container image on, and then there was an import process that already ran and copied over all the data into this persistent volume. And now the pod that gets created with this, or actually the VMI that will get created in a moment, will then be linked to that persistent volume. So the container disk one that we have is not linked to any persistent volume here, but the, here we go, the virtual machine instance that is tied to this data volume is automatically linked to, let's see where we are, a persistent volume claim. And so this automatically gets translated. It will see if that data volume exists; if it doesn't, it imports the data and gives it to that VM, and if it already exists, it gives that existing PVC to the VM so that everything stays persistent. Now we have another VM here, which is again on the pod network, but this one is fully persistent and you can save state to it.

So once we have something that we actually care about, we want to make sure that we don't lose it when we upgrade Kubernetes or a node goes down for maintenance, for example; we want to be able to move those workloads around. Let me exit out of this. There we go. So what we can do is live migration, and we'll do that for the second, persistent VM. And what this manifest is, is just a VirtualMachineInstanceMigration resource that tells Kubernetes, or KubeVirt, to migrate a VM of this name in this namespace. And so if we apply this manifest, then we can see that another virt-launcher pod is started on a different node.
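For reference, the two resources from this part of the demo might look roughly like the following sketches. The names, size, and image are illustrative, not the exact manifests from the repo. First, a VirtualMachine whose dataVolumeTemplate imports the container image onto a ReadWriteMany PVC:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm-persistent          # illustrative name
  namespace: cncf-webinar
spec:
  running: true
  dataVolumeTemplates:
    - metadata:
        name: demo-vm-persistent-rootdisk
      spec:
        pvc:
          accessModes: ["ReadWriteMany"]   # needed for live migration
          resources:
            requests:
              storage: 20Gi
        source:
          registry:
            url: docker://quay.io/containerdisks/ubuntu:22.04   # imported once onto the PVC
  template:
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 2Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          dataVolume:
            name: demo-vm-persistent-rootdisk   # refers to the dataVolumeTemplate above
```

And the live migration that was just applied is driven by a very small resource of its own:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migrate-demo-vm
  namespace: cncf-webinar
spec:
  vmiName: demo-vm-persistent     # the running VMI to move to another node
```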
And we are migrating this information over, and we can actually see that by looking at VirtualMachineInstanceMigration objects. And it's already happened. Of course, this is a small VM, so this happens quickly, but you can actually track that data here. And there's a lot that KubeVirt will automatically do for you to make sure that these live migrations can cope with bigger workloads. So by default...

There's an audience question. Deepak asks, what would be the reason some of these companies still use VMs rather than containers? It's the same reason that we still have bare metal servers, that we still have mainframes here and there. In reality, what happens is that new technology just gets added and becomes something like 10 times more ubiquitous, but the previous iteration of that technology typically never fully goes away. And so there will always be, or for a long time there will still be, VMs around. They are running some really important workloads and it will be very difficult to get rid of all of them. And I think just another data point to add: most new organizations obviously will start new projects, greenfield applications, all containerized or serverless. But there are many legacy organizations that have had applications for many, many years. To that data point, VMware alone has 85 million workloads running on VMs, and they're estimating that by 2025 that number will be close to 120 million. So we're still going to see an increase of VM-based workloads, whether it's different types of load balancers or firewalls. Somebody put a comment about pfSense as a different solution; there are appliances that vendors are providing to customers as virtual appliances, and many of them are still VM-based. So we're still going to see adoption of VM technologies, even though over time it is going to start slowing down as containers and serverless become ubiquitous everywhere. You need that bridge between the old and the new. Exactly.

And so that's why it's important to make sure that all of the core capabilities of running VMs at scale can be done in Kubernetes, like live migration and snapshot capabilities, so that you can confidently move those workloads over, because there are definitely benefits to doing all of this inside of a Kubernetes cluster. For these workloads here, I now have control over how I publish them. So this could be a VM that's running like it is a container inside of the pod network, so it's fully shielded from the network; it runs in an overlay, there's no attack surface, until I create a Kubernetes service to expose one or more of the ports on that VM, for example. And then I can control how that happens, whether that's via a cluster IP so that other workloads in my cluster have access, or via a load balancer so that it can be externally accessed, or even on an entire VLAN so that the whole VM can be accessed, depending on how far you are in your migration towards a Kubernetes operations process. So it gives you a lot of flexibility in creating those hybrid applications without having to stitch your legacy VM platform and your new cloud native Kubernetes platform together. And the other really big advantage: just think about the 1800-plus integrations available in the CNCF landscape. Everything from your logging, your monitoring, your ingress solutions, all of these technologies for the most part will work not only with container-based technologies but also with virtualization, with KubeVirt itself.
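As a quick aside on that service-based publishing: labels set on the VM template end up on the virt-launcher pod, so an ordinary Kubernetes Service can select the VM. A sketch, reusing the illustrative app: demo-vm label from the earlier example manifest (virtctl expose can generate something equivalent):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-vm-ssh
  namespace: cncf-webinar
spec:
  type: LoadBalancer       # on a bare metal cluster like this one, MetalLB hands out the external address
  selector:
    app: demo-vm           # label from the VM template, propagated to the virt-launcher pod
  ports:
    - name: ssh
      port: 22
      targetPort: 22
```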
So for example, I don't know if Kevin, you're going to show later maybe a backup solution or a Prometheus example, but all these different capabilities work the exact same way, even for virtualized workloads. Indeed, yep. And so, where's my... There was an extra audience question, I think, and after that we'll get back to the demo, to the regular program. Okay, here I have another question: do VMs in Kubernetes mean nested virtualization or not? No, this VM is actually running on bare metal, so directly on the hypervisor. What happens is that essentially all of the containers are running natively as containerized processes on the Linux host; it's Ubuntu running on the bare metal machine. And then there's KVM as a kernel module that provides hypervisor capabilities, and all the VMs run on that. So it's not nested virtualization. All that the virt-launcher does is kick off the QEMU process that triggers the hypervisor to spin up a VM, but there's no nested virtualization happening. For testing, if you wanted to test on a cloud, or on VMware or another private cloud like OpenStack, you can of course launch the Kubernetes clusters on those VM technologies and do nested virtualization, even though it's not recommended in production. Yeah, yeah. And KubeVirt provides an emulation mode for running without nested virtualization, but that would be even slower.

All right, let's look at the two last steps. One is to make a snapshot of a VM, before we do some maintenance to it, for example; we can ask Kubernetes to take a snapshot of the machine. This natively integrates with the CSI, so it requires that CSI snapshot support is enabled, which is something we automatically enable when we provide this for customers. The CSI will do the heavy lifting of creating the snapshot, but it gives you an easy-to-use resource called a VM snapshot to maintain the state that you want. So if we apply this, then we can see that a VM snapshot now exists of this VM, this persistent VM that we have, and that will also show up, if we look at volume snapshots, as a snapshot of the underlying PVC. So all of those dependencies automatically get resolved, and it finds that it needs to use the Portworx snapshot class to make a snapshot of the PVC that belongs to this particular VM. In this case, we just have the snapshot, but what happens if we restore this snapshot? First, to be able to do that, we have to shut down the VM. So we will take our VM, edit it, and set it to a running state of false. Shut it down, there we go. So there it stops. Actually, it takes a little bit longer before it fully stops because the launcher has to terminate, but we can already ask it to start restoring the snapshot, and this looks very similar. It's a VM restore, which will again look for the VM, look for all of the persistent volumes that are associated with that VM, and then return it to the state of the snapshot that we choose. So in the snapshot that we made here, we gave it snap-vm2 as a name, and that is also the snapshot that we call out here to restore. So we apply this one, and what we'll see is that we now have a VM restore object which has completed. So again, because this offloads to the storage integration directly, this is really quick to do. And what we'll see is that if we now take a look at the virtual machine, and let's set it to start, the data volume has now changed to a restore.
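For reference, the snapshot and restore objects behind these steps are short manifests, roughly like the following sketch. The API version may differ by KubeVirt release, and the names here are illustrative:

```yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: snap-vm2
  namespace: cncf-webinar
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: demo-vm-persistent          # the VM to snapshot
---
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineRestore
metadata:
  name: restore-snap-vm2
  namespace: cncf-webinar
spec:
  target:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: demo-vm-persistent          # the (stopped) VM to restore into
  virtualMachineSnapshotName: snap-vm2
```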
So a new PVC has been created. If we look for PVCs here, we see that our original data volume is still here, which was the state before we restored the snapshot, so you still have the ability to go back to that as well. And now a snapshot has been restored, creating a new PVC with the content of that particular volume at the time the snapshot was created, and the virtual machine is now connected to this particular PVC. And so here we go: we have our new pod here, and we can see that it is now linked to our restore PVC, which has the content of the snapshot that we restored.

Great, and there are a few audience questions as well. So Sejan asks, does KubeVirt support horizontal slash vertical scaling? Let's see. It depends a little bit on the technology used. So for example, if you use Portworx, then it can do automatic scaling of the underlying persistent volumes if they start to run out of space, and many of the CSIs will provide that kind of capability. I haven't, I don't know if you've seen it. Yeah, so KubeVirt does support it for the virtualized workloads. So on the CSI and storage side you can do that kind of scaling, but for the VMs themselves, there is a manifest resource type called a VirtualMachineInstanceReplicaSet. Essentially, you can think of it like a Deployment or ReplicaSet that we're all used to with containers. And you are able to attach a horizontal pod autoscaler to that VirtualMachineInstanceReplicaSet, so you can drive it based on CPU and memory and it will do horizontal scaling automatically. Now, it's really interesting you asked about vertical scaling, because Kubernetes 1.27 just added support for being able to change the CPU and memory requirements of a running pod without having to shut it down. And KubeVirt just yesterday implemented support for hot plugging CPU and memory, so the next release of KubeVirt will also see vertical scaling for live VMs. Yeah, or at least the ability to adjust your VM to a new spec, and then probably you'll see automatic vertical scaling implemented after that, yes.

Great, and then there was another question: is there any integration between KubeVirt and Ansible, for example, to manage the inventory? Yep, Ansible does have, I mean, KubeVirt is a very mature product, obviously, with seven years in the making. There is an Ansible module called kubevirt_vm that allows you to manage all the lifecycle aspects, from provisioning VMs to day-two operations to deleting VMs. So absolutely you can use Ansible for that. Yeah, essentially that just makes these steps more straightforward to do. When you're talking about managing the software inside of the VM itself, you're still free to use whatever traditional OS and software management tool you want inside of that: Ansible, Puppet, Chef, those kinds of options.

Great, and then there was another question from Deepak: what would happen if an alternative to Kubernetes comes along, what would these companies do, continue with Kubernetes or get into the new technology? Is it always scalable to change tech every time? Well, once you've converted this, essentially you've converted it to KVM, and KVM is used in essentially all of the major cloud platforms that are not Azure or, what's the other one, the Hyper-V and Xen ones. Yeah, so everything else, Amazon, Google, Nutanix, they are all running versions of KVM.
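As an aside on the horizontal scaling answer above, a VirtualMachineInstanceReplicaSet with an HPA attached might look roughly like this. This is a sketch, not something from the demo; the names, image, replica counts, and threshold are illustrative:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: vmi-pool
spec:
  replicas: 3
  selector:
    matchLabels:
      pool: vmi-pool
  template:
    metadata:
      labels:
        pool: vmi-pool
    spec:
      domain:
        cpu:
          cores: 1
        resources:
          requests:
            memory: 1Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # ephemeral image, typical for replicated VMs
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: vmi-pool-hpa
spec:
  scaleTargetRef:
    apiVersion: kubevirt.io/v1
    kind: VirtualMachineInstanceReplicaSet   # the replica set exposes the scale subresource
    name: vmi-pool
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```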
So, coming back to the orchestrator question: if some other container orchestrator were to come along, then yes, you might have to move your VMs to that other platform, but it's very, very likely that that hypervisor will be KVM as well, because container orchestration doesn't really have anything to do with the hypervisor. And so that technology will probably just stay the same.

Cool, and then there was a last question so far, from Arshkan: can device plugins be effectively utilized within KubeVirt to enable integration and management of GPU resources? Yep, that actually is natively supported. So if you want to pass through a GPU, or other hardware devices for that matter, then that is something this totally supports. Yeah, device plugins, absolutely. Great. All right, that concludes my demo. Cool, let me go back to sharing my screen, one moment, please. Yeah, perfect. Cool, any other questions from the audience? I think there were a lot of fantastic questions. There have been a lot, for sure. So far I didn't see anything new there, but I have questions as well. Do you have anything else to finally say in the thank-you portion, or should I get to my questions?

I mean, for my closing comments, I would just want to say that I think there are lots of advantages to running virtualization in Kubernetes, right? Obviously, being able to use a cloud native approach and framework: many of the capabilities that Kubernetes provides, from service discovery to auto healing to secrets handling, are natively supported, whether it's for containers or for virtualized workloads. And then, like we mentioned, many of the integrations that exist in the CNCF landscape will continue to add value for both virtualized workloads and container workloads.

Good, and if the audience has any questions, please ask them now; now is the perfect time, so go ahead and type away. But while we see if the audience types any questions in, I would like to ask you: are there any prerequisites for running virtual machines on Kubernetes? Kevin, do you want to talk about storage and networking? Yes, the biggest one is that if you want live migration, which you probably want, you don't want VMs to be stuck on a single node, you must use a storage solution that supports ReadWriteMany persistent volumes. They can be block or file system, but in most cases it has to be a file system type to have ReadWriteMany support, depending on the storage solution that you use. That is, I think, the big one. And then on the networking side, it is just recommended to make sure that you have a good network design that separates out data from management and gives you enough NICs; we would recommend at least four NIC ports to separate out different streams reliably, so that you get a proper, reliable cluster to work with.

Great, and then there's a question from the audience: can we create an Android OS VM via KubeVirt? I think the question there is basically ARM64 support. I think that is in development, but not yet available. Okay, so coming at some point. So also, Hermit asks, can I run KubeVirt on any Kubernetes cluster in any cloud, or only in certain environments? Well, the challenge is that if you run KubeVirt in the cloud, the cloud providers typically don't give you access to the low-level networking, and so it might be difficult to publish the VMs directly on some of the virtual subnets that you have, as that technology tends to conflict. So if you run it in the cloud, it's recommended to keep the VMs running in pod networks, in overlays, and then publish them as services.
If you want to go further and really expose VMs directly on VLANs, as you would do in your own data center, we would recommend running your own bare metal hardware cluster.

Great, and then there's a question from Jesus: can snapshots be exported, and in which format, is S3 supported? The backup technologies typically do that. So it kind of depends: if you use Velero, then what happens is that the backups get converted into a CSI snapshot, and then it depends on the snapshot provider setup where these go. So if you use a typical Velero layout with, for example, Portworx Enterprise, the quick setup is to have a snapshot stored inside of the existing Portworx storage, but you can also set up Portworx CloudSnaps, which will then actually move those snapshots off to either S3 or something like a Pure Storage FlashBlade; any NFS- or S3-compatible storage solution lets you move that off. If you use something like Trilio, then you can do all of that in one shot, where it will take a snapshot of the VM to get a readable copy of the data while the VM continues, and then it will offload the snapshot data to an S3 or NFS location of choice, and then you can restore from there. So it's typically the backup solutions and the storage solutions that provide options there.

Good. And then there's an audience question, which is a pretty big question, or a large one: what would the role of AI be in Kubernetes in the future, or maybe in DevOps or cloud? Any thoughts there? Yes, that's a great question. Obviously we're seeing generative AI taking over the world; every single person is using it. I feel there are going to be two different aspects. One is going to be in terms of the workloads: being able to make it easy to develop workloads, optimize them, and run and place them. But the other aspect is at the infrastructure level: being able to provide intelligent placement across different nodes, being able to schedule different workloads across different clusters without a human intervening and specifying where to run the workloads; it's going to be more AI-driven and automatic. I think there's going to be a lot more work we're going to see, depending on which part of the stack people are looking at.

Good. And then from my side, a question came to mind: is there something like vMotion for KubeVirt? Yes, that's live migration, we just showed that. So yes, you can migrate your VMs from node to node without taking them down. Good. And you can trigger the live migration not just for individual VMs; like Kevin showed, you're also able to drain an entire node, and all the VMs on it will essentially live migrate over to other nodes. Yeah, KubeVirt has a setting that you can configure, an eviction strategy, and what we do by default is set that to live migrate, so that whenever Kubernetes decides that it needs to evict a virtual machine, it automatically converts that into a live migration to a different host.

Good. I have a few more questions in mind, but if the audience has any questions, please feel free to type them in and we can get to them. So from my side, is there something like DRS for KubeVirt? There is. There is something called the descheduler, which is an existing project for Kubernetes that helps you evict workloads from nodes and let Kubernetes reschedule them on other nodes that are not as busy. And it has built-in logic for how busy a node has to be before that eviction process happens.
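As a side note on the eviction strategy mentioned a moment ago: on the VM side, that is just one field in the VM's template spec. A minimal sketch, with an illustrative name and image rather than anything from the demo:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm-evict            # illustrative
spec:
  running: true
  template:
    spec:
      evictionStrategy: LiveMigrate   # an eviction (e.g. during a node drain) becomes a live migration
      domain:
        cpu:
          cores: 1
        resources:
          requests:
            memory: 1Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```

With that set, draining a node for maintenance turns the evictions of its VMs into live migrations instead of shutdowns, and the descheduler discussed above is one of the things that can trigger such evictions.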
That descheduler can be leveraged for KubeVirt because it essentially does the same thing as DRS. You just configure it the way that you want, what the thresholds are for node utilization and underutilization, and then it will automatically decide to evict certain VMs from nodes, which will cause live migration to nodes that are not as busy, and that way you essentially have your DRS at home.

Good. Final question from my side, and also, other members, if you have any questions, please go ahead: how does networking work for virtual machines on Kubernetes clusters? So the quick answer is that there are two options. One is running a VM as if it were a container workload, and so it just shows up as another pod alongside the containers on the regular CNI overlay network that you typically use, like what Calico or Cilium provides. And then, like any container on Kubernetes, you don't directly have access to that particular pod, but you can publish access to ports on it using Kubernetes services. And so you can create a service and then, via MetalLB, give it an external load balancer address, for example. And then whatever particular port you decided to publish, that port lands in the VM and you can access it. So if that were port 22, you get SSH access; if it were port 5432, you get Postgres DB access, if that's running on the VM. And then the second option is to use something like Multus, which allows you to define additional networks that could be direct VLAN access, for example. And that's what we showed in the demo here, where you can give a VM access to a specific VLAN, and then Multus will give you the network configuration that lands it directly onto that VLAN, and it can either get an IP address there or you can statically configure one. And then it will be running just as if it were plugged in directly on the network, similar to a VMware cluster where you give it a port group that is on a particular VLAN.

Good. Final call for questions for now; if you have anything, please send it in. And then Jesus asked, can we use these VMs as endpoint targets of a Kubernetes load balancer service? Yeah, yeah. So you can just use regular service resources and it works just the way that you would expect it to. Okay, great. So if no one has any more questions, do you have any final words, Saad or Kevin?

Yeah, yeah, absolutely. So I think, everyone, thank you for attending the webinar. If you want to see and play around with all of Kevin's workflows, the link is available directly on this page here; I believe Annie, you also published that link in the chat. Kevin has a webinar next week or in two weeks on bare metal Kubernetes with MAAS. That's also going to be done with Kevin and an expert from Canonical MAAS, so please do scan this QR code if you're interested in attending that. If you're more interested in bare metal clusters, how to provision them, how to maintain them, please do take a look at our blogs. And also, Spectro Cloud is invested in this area, managing the complete stack, everything from bare metal to virtualized workloads. If you have any questions or any thoughts or comments, feel free to reach out to Kevin or me on LinkedIn. We're very happy to help answer any questions, whether it's bare metal Kubernetes, whether it's KubeVirt, or anything else in general. Absolutely. Perfect. Those were really good closing words for today. But as always, from my side as well, thank you everyone for joining the latest episode of Cloud Native Live.
It was great to have a session about making VMs a first class citizen in Kubernetes with KubeVirt. We also really loved the interaction and questions from the audience; it's always lovely to see that. And we bring you the latest Cloud Native code every Wednesday or Tuesday, and in the coming weeks we have more great sessions coming up, so stay tuned for those. Thank you for joining us today, and see you all next time.