Is it visible at the back? Am I audible? Awesome. Good morning, all. This talk continues from the previous one about VMs, libvirt, and the KVM world. It is in the platform track, but it also covers a lot about Kubernetes. How many of you have heard about Kubernetes? OK, I guess almost everyone. The bottom line of the talk is running virtual machines on top of Kubernetes — or rather, OpenShift. One of the interns on my team is here, and when I was pitching this project to him, his first reaction was: why would someone want to do that? Running VMs on top of Kubernetes? Kubernetes, at its core, is meant to run containers, and it came from the cloud-native world. So in this talk we will go through why this makes sense, and cover the parts that support this functionality. Let's start. A bit about me: I am Vatsal Parekh. I work at Red Hat as an associate quality engineer on the container-native virtualization team, based in Pune. Let's first talk a bit about the world of virtual machines. Coming from the bare-metal world — go back to 1998 or 2000, when cloud computing started — virtual machines changed the world. They were the fun part. Coming from pure bare metal, they gave us isolation, flexibility, and security. They were scalable: you could scale your machines up and down. And they gave us all the other virtualization features we just heard about in the previous talk. So virtual machines are fun, right? Well, kind of — because VMs feel like 2014. Containers are the new world; that is what we have been hearing. How many of you know about containers — not just Kubernetes, but the difference in layers between VMs and containers? So, on this slide: this is a typical virtual machine installation.
You have your infrastructure and a hypervisor — KVM, VMware, whatever you run. On top of that sits the guest OS, then the bins and libraries and the app: the whole nested stack. In the new world, you have the infrastructure and the host OS, then a container engine, for example Docker, and then the containers with your bins and apps directly on top. That is much closer to the host. Containers are very easy to run, quick, and lightweight — there are many plus points to containers. You might have seen the slogans: containers, containers everywhere; in the future, everything is containerized. If anyone wants to deploy a new web application today, will anyone still stick with the VM approach? Probably not. Google, Amazon — everyone is embracing the Kubernetes world. Containers were already there, but the real game changer was Kubernetes. You can do a docker run and get a container, but to run containers at scale, at a production level, you need an orchestration tool. When Kubernetes arrived in the 2013–2014 era, it became the game changer, and people started adopting it and embracing containers more and more. So the slogan that I feel should come next is: run everything on Kubernetes. But what about the old workloads? We have been running VMs all along — 2014, 2016, 2018. VMs have gotten us this far; they have been just fine, and they work for us. Now you suddenly say containers are the next thing — but I still need my old VMs. How do you do that? Enterprises can't just go and convert everything to containers. I mean, they can — it takes time — but they still need VMs. There are no Windows containers that I have heard of, as of now. You still need your old infrastructure to run. So, presenting KubeVirt: the emerging path to the best of both worlds.
You need VMs; you need containers. KubeVirt helps you run VMs on top of containers. But why VMs on Kubernetes? Let's look at a few use cases that make sense for this project. Kubernetes was a game changer because it gave great orchestration abilities to containers, to the Docker world. Bringing VMs to Kubernetes gives us the abilities of both: VMs are great, Kubernetes is great, and if we can orchestrate VMs the way Kubernetes orchestrates containers, that is a big plus. It also brings VMs to the cloud-native world, in the sense of integrating with other tools like Istio or GlusterFS — GlusterFS as plain storage already exists, but here I mean the container-native storage (CNS) version. And it provides a consistent migration path toward an infrastructure that is purely container-based: you can let go of your old VMware or other virtualization stack and move to OpenShift or Kubernetes, where everything is containerized — even your VMs. That might sound strange, but we will see how it works. And there are more use cases you can find on your own. Does it make sense to bring VMs to Kubernetes? Maybe yes. But how? Replace VMs with containers? No. The simple answer is: run libvirt inside a container, running inside a pod. How many of you know about pods? I guess everyone has tried their hand at Kubernetes at least once. So that is the simple answer; let's look at the architecture now. Here is the API server, which kubectl commands contact. Alongside it, we have our own controllers. For each VM there is a virt-launcher pod, which eventually runs libvirtd. At its base, a virtual machine is a libvirt process: you pass the virtual machine image to libvirtd and it runs the VM. So to bring this to the Kubernetes world, we simply pass it the image and let it run inside a container.
And that container is inside a pod, so the pod backs a VM object. For that — you might have seen on the previous slide that I said everything is a CRD — Kubernetes introduced a new API called Custom Resource Definitions. You can define your own object types inside Kubernetes, whatever you want; it lets us define objects that are not core Kubernetes, external objects, which is why they are called custom resources. So a virtual machine becomes an object inside your Kubernetes cluster: you can simply do oc get vm and list the VMs you have. Virtual machines have their own kind, and you have the ability to express all the parameters you want for a VM, just as in the old bare-metal VM world. And since this eventually goes down to libvirt, the things you can express in libvirt you can expect in the Kubernetes world as well. Here is a spec of a VM that you can create once you have KubeVirt installed on Kubernetes. I have a new kind, VirtualMachine — if you were defining a pod you would write kind: Pod, but here I write kind: VirtualMachine — I give the name in the metadata, and in the spec I define what I want; there is also the networking part, which Kubernetes handles. It works just like a pod: you had pods, replica sets, and every other Kubernetes object, and now you have one more Kubernetes object, the virtual machine. How do the VM and the pod sit together? A VM is actually a pod — behind the scenes it runs as a pod — but as a high-level object the virtual machine is available to everything else in Kubernetes: metadata, labels, monitoring, and the rest of the Kubernetes ecosystem.
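The spec described above can be sketched as a manifest. This follows the shape of the public KubeVirt demo examples, but the schema has shifted across API versions, so treat the exact field names and the demo image as assumptions rather than a fixed contract:

```yaml
# Sketch of a KubeVirt VirtualMachine object; field names vary by API version.
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: false                # the controller boots the VM when this is set to true
  template:
    metadata:
      labels:
        kubevirt.io/vm: testvm
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio     # expose the disk over virtio, just as libvirt would
        resources:
          requests:
            memory: 64Mi
      volumes:
        - name: rootdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo   # demo image published by the project
```

After a kubectl apply -f (or oc apply on OpenShift), the object is listable like any other resource with kubectl get vm.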
So the main KubeVirt components are the core KubeVirt controllers themselves. Apart from those, we have the Containerized Data Importer (CDI). If you want to bring your VM data in — to import disks into KubeVirt — you can use this tool. It is basically a controller running on the Kubernetes cluster: you define the location of your disk as an annotation on a persistent volume claim, the controller sees that there is a disk with this annotation, fetches the data from the location you defined, and dumps it onto that disk. Then you can attach the disk to a VM object: the VM pod will have the disk attached, and you get a VM with the disk you originally had in the old cluster. CDI was made mainly for disks. You can also import a fresh image — a fresh CentOS, Ubuntu, any image you have — put it in a persistent volume, and attach it to a VM object; it eventually passes to libvirt, which starts the virtual machine. Then we have one more interesting component: V2V. I mentioned migration paths toward a more Kubernetes-centric world — say you want to decommission everything you had on the old infrastructure. How do you bring it all over to Kubernetes? That is where V2V comes in: you give it credentials and the location of your VMware cluster, select the virtual machines, and it brings every virtual machine disk over to KubeVirt, to Kubernetes. On disks and storage: as the storage backend, KubeVirt uses persistent volumes, and the good part of being in the Kubernetes ecosystem is that you get many options. If you are running KubeVirt on Google Kubernetes Engine, you can use GCE persistent disks; if you are on Amazon or Azure, you can use their disks as well.
And if you are on OpenShift, you can use GlusterFS or whatever you want. There are many storage options, and that is why bringing VMs into the Kubernetes ecosystem gives a lot of plus points. The persistent volume is one-to-one mapped with a VM, and you can define whether it is mutable or immutable. The benefit is a wider range of options, rather than being bound to one vendor-specific or product-specific storage. With CDI you can fetch the disk image from an HTTP endpoint; passing it from the local host is also in the works, and we are working on an option to simply upload an image you already have. On the networking side, we are working on Istio integration, so you can put your VMs behind services just like the other cloud-native applications you already run. You can also SSH into VMs, just as in the old bare-metal world: you expose the VM using a Kubernetes service — apply a label and expose the service — and then, using the cluster IP, you can SSH directly to that VM. We are also working on integrating KubeVirt with other providers. For example, you can have it as a provider inside ManageIQ — that is already there — and we have an Ansible module for it, so you can write playbooks that create VMs or work with the VM objects inside KubeVirt, which eventually runs on Kubernetes. Foreman and Terraform integrations are in the works; the ManageIQ and Ansible modules are already there, so you can go and check them out. To learn more, we have a user guide, an API reference, and the docs. The KubeVirt project is quite new — it has been around for about eight months now.
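The SSH flow described above — label the VM, expose it with a service, then connect via the cluster IP — can be sketched with a plain Kubernetes Service. The label key and all the names here are illustrative assumptions:

```yaml
# Sketch: a Service that selects the VM's launcher pod by label and exposes SSH.
apiVersion: v1
kind: Service
metadata:
  name: testvm-ssh
spec:
  type: ClusterIP
  selector:
    kubevirt.io/vm: testvm     # label carried by the VM's pod (illustrative key)
  ports:
    - name: ssh
      port: 22
      targetPort: 22
```

From inside the cluster, ssh user@testvm-ssh (or the service's cluster IP) then reaches the SSH daemon inside the VM.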
We are still ramping up on new features: device driver integration, CSI storage support, high availability, and more. These are the API reference and the docs; we are in the #kubevirt channel on Freenode, on the Google group, or you can just tweet at @kubevirt to say hi. Thank you. Questions? And again — does it make sense to anyone to run VMs on Kubernetes? Okay, awesome. Question and answer — anyone?

Q: What about high availability of VMs? Say I need replication, two replicas. You said there is a one-to-one mapping with the storage, for the legacy systems. Would you need to replicate the storage as well for that, or is the storage mountable one-to-many?

A: There is a new object in the making, the VM replica set. We will make an OpenShift template for it, and from that template you can create VMs, replicated across whatever nodes or cluster you want; it will run multiple VMs from the same disk image. I am not sure how the one-to-one mapping will work there, but the VM replica set as a separate object is already in the making — or it may already be there; I believe it is.

Q: Okay, thank you. If you don't mind, can you explain how IP management will work for these provisioned VMs?

A: For the network, you can define a network policy on the VM object. If you don't define one, the pod will be assigned an IP, and you can expose that IP through whatever services or applications you want. In the Kubernetes world, you typically want to run VMs when you are running a stateful application inside the VM. You can also define the MAC address externally — I mean, you can manually define the MAC address as well.
Or you can manually define the IPs as well. And no, it is not something like infrastructure as a service.

Q: Can you hear me? Okay — we have a similar product that we deployed on Kubernetes, something like infrastructure as a service. So when you say we are going to spin up a VM inside Kubernetes or OpenShift, it should be something like infrastructure as a service, right? People go and spin up their VMs?

A: Yes — if you don't define anything, the VM object, which eventually runs inside a pod, gets an IP the same way a pod gets an IP inside Kubernetes. But this is not targeted to be an infrastructure-as-a-service offering. This is for when you want to move to a Kubernetes or cloud-native world and still have your VMs the same way you had them before, while getting the larger Kubernetes ecosystem and all the good parts of Kubernetes along with the VMs. We are not targeting VM provisioning as a service; we are targeting the integration of both worlds.

Q: Okay, so what kind of service type are we going to use here — I mean, when I want to access these VMs from outside the cluster?

A: There are mainly two things: we expose the VM on a cluster IP or a node IP. Is that your question? You can expose both. We also have our own CLI: just like kubectl, you can use virtctl, and it will do the work for you. Or you can manually create a service with kubectl create service and choose what type of IP you want exposed, and it will do it for you.

Q: Thank you. To continue — that was my first question. The typical booting of a VM takes time.
A: I have usually seen it take a similar time running inside a container as well. We are not doing anything special there; it just runs as it did before. There are certainly some features we provide, like graceful restart or restart policies, but there is no difference in the way you boot, shut down, or restart a VM. It is just as it was.

Q: Right, but the container ecosystem generally has a theme of just restarting containers, a lot more freely...

A: Containers are just fine — the larger theme is that people want to move to containers but may still want to keep their VMs, for the stateful apps. That is where this helps: you move to containers for the fast, stateless things, but you still have your VMs, so you can use this project for those. Yenev would like to answer as well — one more question, and then I'll give it to him.

Yenev: One of the main guidelines of KubeVirt, as opposed to other projects, is that we extend Kubernetes; we don't change Kubernetes for the purpose of VMs only. If you think about containers, when you move to stateful containers it doesn't really make sense to say "oh, I want more memory" and just kill the container and rerun it — that is the stateless way. When you move to stateful, you really need to change the runtime configuration, and Kubernetes doesn't have that at the moment. We are working with the Kubernetes community to actually allow runtime configuration changes. It is a major architectural change for Kubernetes, but it is something they need to do as well — or we as a community need to do — for stateful containers. And that is how we are going to do scale-up and hot plug, for example hot-plugging memory or CPU: by changing the runtime configuration of the container, which has the VM inside. It is a main principle of our development on Kubernetes.
We extend Kubernetes; we contribute to Kubernetes. There was a question earlier about high availability: Kubernetes doesn't have fencing support, for example. If a node is disconnected, we have no idea what is going on there. For stateless containers, fine — you run the container somewhere else. But what happens for a stateful one? What if there is a PVC with a Postgres database? You can't allow both to access it — that is corruption right away. So we extend Kubernetes, and that happens to work very well for stateful sets too. That is what the initial slide meant by bringing the best of both worlds: extending Kubernetes and bringing VMs in, in a way that VMs just stay VMs, while you still use the parts of Kubernetes you like.

Q: One more question I have: when you run a pod for a VM, do you require some kind of privileged container?

A: Initially we did, but I am happy to announce that is no longer the case — you don't need privileged containers any more in the latest release of KubeVirt.

Q: Can you explain how that works? KubeVirt, for example, would require some kind of privileged permissions. The KubeVirt container itself is running in a namespace...

A: I am not quite sure; I can take it offline, or you may answer that.

Yenev: This is actually one of the amazing features we got out of extending Kubernetes. You need /dev/kvm, right? So essentially you need a device plugin. We want containers to have device plugins — we want to connect devices to containers. If you think about GPU workloads for containers, they need to get at the GPU. By using device plugins, we actually got this feature into Kubernetes, and we use it back to pass /dev/kvm to the container, for example. There was a question earlier on networking — same thing, right? You get a single IP for your pod, but what if you need multiple interfaces? VMs need multiple interfaces. How would you do that?
If you wish to have SR-IOV for fast networking, for example, a device plugin — which, again, we contributed to Kubernetes — is what you need, the same way it is used to get /dev/kvm. There is also work in progress to make libvirt less controlling. Today, libvirt likes to control the whole host. But these days I don't need it to control the host — I have Kubernetes doing that. I just need libvirt to run the VM. So there is an ongoing, very interesting architectural change in libvirt to do just what it needs to do for a single VM, and not control the whole world.

Q: One last question — a quick one. Like I was saying earlier: right now, all our VMs are libvirt- and KVM-based, and I assume KubeVirt is Kubernetes plus libvirt. Will I still be able to use all those existing VMs, with Kubernetes managing them?

A: There are no changes that we are making to libvirt just to be able to run KubeVirt. KubeVirt is about extending Kubernetes toward libvirt, connecting libvirt with Kubernetes. So whatever libvirt supports will be supported on the KubeVirt side.

Q: So it will be reading from the same existing XML files?

A: I guess yes.

Q: Okay. And is it already public?

A: Yes — these slides will be shared, and it is already on GitHub. The simple way to run KubeVirt inside a Kubernetes cluster is just a kubectl apply of the manifest, and it will create all the KubeVirt controllers on the Kubernetes cluster.

Q: So the product is already available to use?

A: It's already there — I mean, the project is already there. The product is still to be GA in some time.

Q: Right, right. Thank you.

I think that is it. Thank you. There is a tea and coffee arrangement at level eight at the same venue, and we will be getting back at 11.45. Thank you.