So, as I was introduced, I'm Michael and I work at Kinvolk. I am currently a Kubernetes contributor, but before that I contributed to OpenStack, mostly to the OpenStack Kolla project, which packages the OpenStack components into Docker containers and runs OpenStack in a container environment. Kinvolk is a company based in Berlin. We do general Linux-related development, but most of Kinvolk's work is visible in the rkt project, and also in systemd or Weave Scope. You can check our activity on our blog, you can check our GitHub, as everything we do is open source, or you can just write us an email if you have questions about the company.

I will start by explaining some basic concepts behind my presentation. First of all, container versus virtual machine. I think most of you know the difference, so I will not focus on that too much: a container does not use a new kernel inside, while a virtual machine runs a separate operating system and simulates the hardware. A container just isolates several things in the Linux system. By cloud, I mean any kind of service which is provided over the network to the user, where the user does not have to know where the service is actually provided. A container-based cloud is a cloud environment where the user requests some containers and does not have to know where they are physically located; they are scheduled automatically. There were a lot of talks about Kubernetes today; that is the most popular open source container-based cloud system, but there are also Mesos and Docker Swarm. Then there are virtual-machine-based clouds. OpenStack is the most popular open source project among them. Among the non-open clouds focused mostly on virtual machines, there is AWS with its EC2 service.

What is the problem I am trying to address? The problem is that these clouds, container-based and virtual-machine-based, are separate. For running a cloud consisting of virtual machines booted from qcow2 or raw images, you use OpenStack. For running containers, you use Kubernetes. It is very hard to maintain a single environment, a single infrastructure, which provides both of them to the user. That is the problem I would like to address: how to create a homogeneous cloud environment which offers both VMs and containers. One of the answers, which is implemented in several ways that I will show, is putting a virtual machine inside a container. It sounds crazy, but it works, it makes sense in my opinion, and I will explain it in this presentation.

But first of all, let's begin with the question: what needs to be done to run a virtual machine inside a container? What characterizes a container which is able to run virtual machines? First of all, it has to be privileged, so we need to give it most of the Linux capabilities. It needs access to cgroups because, for example, libvirt uses cgroups for resource management of the virtual machines, that is, of the QEMU processes. We also need to provide access to all the devices we would like to share with the VMs, and if we want to use KVM, we need to share the /dev/kvm device as well; it is just a device node in the /dev directory.
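To make that concrete, here is a minimal sketch of launching such a container by hand, assuming a hypothetical qemu-container image with QEMU inside; the tun device stands in for "all the needed devices", and the projects mentioned later each do this differently, so the exact flags may differ:

```sh
# Minimal sketch only; "qemu-container" is a hypothetical image with QEMU inside.
# --privileged                    -> grants most Linux capabilities
# --device /dev/kvm               -> hardware-assisted virtualization
# -v /sys/fs/cgroup:/sys/fs/cgroup -> cgroup access, used for VM resource management
docker run -it \
  --privileged \
  --device /dev/kvm \
  --device /dev/net/tun \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  -v "$PWD/images:/images" \
  qemu-container \
  qemu-system-x86_64 -enable-kvm -m 1024 \
  -drive file=/images/guest.qcow2,format=qcow2
```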
And here comes the question: does this idea of putting VMs inside containers improve security somehow? The obvious answer is no. That's because the container is privileged, it has access to devices, it has access to cgroups, and that's why, if someone gets inside the container running the virtual machine, we should assume that they have access to the node, to the host. So it doesn't provide any security. The idea of packing a virtual machine inside a container is only for simplifying things and creating a homogeneous environment; the security of the VMs stays the same, and we should still care about bugs in any software which manages or runs our virtual machines.

So how to do that, and how to use the concept of containerizing virtual machines in cloud environments? There are two most popular ways. The first is to put every QEMU process in a separate container; the second is to put a libvirt daemon inside a container and have many QEMU processes inside that one container, together with libvirt.

In the case of QEMU in a container, we have some hosts, some nodes, and two or more QEMU containers which run the virtual machines. The two best-known examples of cloud systems using that approach: the first is Borg. Google Borg internally uses containers for virtualization, putting each virtual machine inside its own container and scheduling them just like the other containers. RancherVM also provides a control plane for virtual machines and uses exactly the same approach. They have a Docker image with QEMU; you can even just pull it, write docker run rancher-vm something, something, and you have a running virtual machine. The advantage is that we don't rely on any other tools for managing the lifecycle of the VM: if we, for example, shut down the VM, the QEMU process simply exits, and for Docker or any other container runtime that is just the shutdown of a container. Kubernetes and the other container cluster systems see that event in the same way. But there are two disadvantages. The first is that you have to manage the images somehow: if you have a Kubernetes environment in which you would like to run containers with VMs, you need to provide the qcow2 or raw images somehow, and if you are developing such a solution, you need to provide an image service for that. You also have to put your own effort into providing the external storage and into playing with QEMU options.

In the case of libvirt in a container, we assume that every node in the cloud runs one libvirt container; in the case of Kubernetes, it could be a DaemonSet (sketched below). There are a lot of QEMU children of that libvirt, and libvirt manages their lifecycle. The best-known example of that is the OpenStack Kolla project, which I mentioned in my introduction: it is a project which containerizes OpenStack, and they also have an option to run OpenStack on top of Kubernetes. There is also Virtlet, which is a project that aims to make a VM a native citizen of Kubernetes, so it implements the VM pod feature. And there is the KubeVirt project; the people developing it gave a presentation yesterday in the virtualization and infrastructure-as-a-service track. The main advantage is that libvirt provides an abstraction for managing images and it manages the remote storage, which is much easier than dealing with QEMU directly. On the other hand, you need to interact with libvirt somehow, and libvirt is itself a layer of abstraction. So it is not very easy to decide whether to go with QEMU or with libvirt.
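As an illustration of that per-node layout, a rough sketch of what such a libvirt DaemonSet could look like; this is purely illustrative, not taken from Kolla, Virtlet, or KubeVirt, and the image name is hypothetical:

```yaml
# Illustrative sketch: one privileged libvirt container per node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: libvirt
spec:
  selector:
    matchLabels:
      app: libvirt
  template:
    metadata:
      labels:
        app: libvirt
    spec:
      hostNetwork: true
      containers:
      - name: libvirt
        image: example.org/libvirt:latest   # hypothetical image
        securityContext:
          privileged: true                  # most Linux capabilities
        volumeMounts:
        - name: cgroup
          mountPath: /sys/fs/cgroup         # cgroups for VM resource management
        - name: dev
          mountPath: /dev                   # /dev/kvm and other devices shared with VMs
      volumes:
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: dev
        hostPath:
          path: /dev
```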
There are some projects which use the first approach, and there are some projects which use the libvirt approach, and we may see in the future which approach was better and which layer of abstraction caused more problems.

So how exactly does this relate to the cloud? As I mentioned, Virtlet is a project which brings VMs into Kubernetes. How does it do that? It uses the Container Runtime Interface. There was a presentation today explaining what it is, but I will explain it quickly. CRI is a mechanism in Kubernetes which allows you to write your own server that provides a runtime service to Kubernetes. By default, Kubernetes uses Docker, so if you run some pod on Kubernetes, you get a bunch of Docker containers running somewhere in the cluster. With CRI, you can replace that with any kind of runtime you want. This is how it looks: on the nodes in Kubernetes there is the kubelet, which is a daemon managing the lifecycle of the containers on the node; it only receives from the Kubernetes scheduler the information about what it has to do. The best-known example of a CRI service is rktlet, which uses rkt. But you can also use CRI for virtual machines: just take the definition of the pods, run virtual machines, and interact with libvirt instead of rkt.

So these things work. But do we really need such an inception? It may sound crazy: why do we need to run virtual machines inside containers? I think we need this inception because the goal of Kubernetes and container management systems is to stay as small as possible and not implement more complicated logic; instead, they want to give people the opportunity to create that logic themselves. A good example is the concept of operators; there were a bunch of talks in this track about operators. They use Kubernetes, but they prevent Kubernetes itself from getting too big. That's why the Kubernetes community doesn't implement the logic of upgrading stateful applications beyond the concept of StatefulSets, and that's why people move more complicated deployment logic into outside things like operators. I think we should see any solution for running a virtual machine inside a container as a solution of that kind: we just use the simple logic of Kubernetes to achieve something more complicated, and we add one layer of abstraction to achieve something which gives a profit. Because I think the separation between VM clouds and container clouds is a huge problem, one which may even prevent some people from thinking about using Kubernetes if they use a lot of VMs and have an infrastructure built around virtual machines only.

So, unfortunately, I cannot show a live demo because I have no adapter for my laptop. Yes, USB-C. We tried it before the talk; unfortunately, I cannot show a demo from my laptop. We have some time, so I can show one: there is a demo of Virtlet on GitHub showing how it works. OK. Do you see everything? OK, maybe that would be better. So in this demo, first of all, we run the Virtlet server and then start a local Kubernetes cluster. After that, we have a definition of a pod; I will try to stop it here. Yeah, so this is just a usual pod where we define a container named fedora. It uses the Fedora image, which is served by some HTTP server; it's just a qcow2 image.
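Roughly, that pod definition looks like this sketch; the image URL convention here is illustrative, and the real Virtlet example may use a different naming scheme or extra annotations to mark it as a VM pod:

```yaml
# Rough sketch of the kind of pod definition shown in the demo.
apiVersion: v1
kind: Pod
metadata:
  name: fedora-vm
spec:
  containers:
  - name: fedora
    # the "image" points at a qcow2 served over HTTP instead of a Docker image
    image: example.com/images/fedora.qcow2
```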
Oh, maybe that will work. This DisplayPort... no, it's the USB-C. It's the HDMI, and I'm dubious; this didn't work before. Oh, it's just DisplayPort. Oh, it's a DisplayPort, and it's a full-size DisplayPort, which we don't have. OK, so yet another try at saving my demo.

OK, so that's the definition of the pod, and we can just create this pod with kubectl create, and it will work. Let me continue the demo. It takes some time to run the virtual machine; that's why for some time the container is in the creating state, but after 14 seconds it becomes running. Now we can get into the container: that's why there is the docker-compose exec libvirt virsh list here, where libvirt is the name of the container which runs libvirt. We can also access a console of the containerized libvirt. OK, here it's just slow typing.

OK, and I can also show you a project of mine called docker-libvirt, which is a very tiny Docker environment for putting libvirt in a container. Just to keep the docker run commands from getting very long, I put this definition inside a docker-compose.yml file. I expose the port of libvirt here; there are the necessary mounts I mentioned in my presentation; and there are also volumes for libvirt, where the actual instances and disks for libvirt are stored, so we can just use named volumes for that. I also have a start script here which wraps libvirtd. In it I do some magic to detect which type of processor we have and which KVM module I have to load, and also some necessary chmods and chowns for the configuration files. That's because if you mount a file into a container in Docker, there is no way to define which user it should belong to and what the permissions of the file are, so if we mount these configuration files here, we need to chown them inside the start script. The configuration of libvirt and QEMU is very small: for libvirt, we just want to make it listen on a socket, so we can contact libvirt from outside the container without having to enter it every time, and just use virsh on the host or even a graphical virt-manager on the host. And we also have the QEMU config, which defines the user and group.

And that's that example. I also mentioned the KubeVirt project. The difference between Virtlet and KubeVirt is that Virtlet, as you saw in the demo, uses a pod definition for running virtual machines, while KubeVirt uses third-party resources for that. I'm not going to explain it in detail, because KubeVirt was explained in yesterday's talk. Let's go back to the presentation. That's unfortunately all I wanted to show you; I'm sorry again for not showing you a live demo from my laptop.
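Putting together the pieces described in the demo, the docker-compose.yml of such a setup looks roughly like this sketch; the image name, port number, and paths are illustrative rather than copied from the docker-libvirt repository:

```yaml
# Sketch of a docker-compose.yml in the spirit of the docker-libvirt project.
version: "2"
services:
  libvirt:
    image: docker-libvirt                # hypothetical image name
    privileged: true                     # most Linux capabilities
    ports:
      - "16509:16509"                    # expose the libvirtd TCP socket to the host
    devices:
      - /dev/kvm:/dev/kvm                # hardware virtualization
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup    # cgroups for VM resource management
      - ./libvirtd.conf:/etc/libvirt/libvirtd.conf
      - ./qemu.conf:/etc/libvirt/qemu.conf
      - libvirt-images:/var/lib/libvirt/images   # named volume for VM disks
      - libvirt-qemu:/etc/libvirt/qemu           # named volume for domain definitions
volumes:
  libvirt-images:
  libvirt-qemu:
```

The start script wrapped around libvirtd would, along the lines described above, load kvm_intel or kvm_amd depending on the CPU and chown the mounted configuration files before starting the daemon.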
Do you have any questions? Yes? So the use case is... OK, so the question was: why would I even want to run virtual machines in containerized environments, and what is the use case for that? The use case, I think, is the migration between virtual machines and containers. Some company which has been using virtual machines for a long time is thinking about Kubernetes, does not really want to manage two separate clouds for the long term, and wants to be able to use Kubernetes while at the same time offering something to traditional virtual machine users, without the necessity of maintaining a virtual-machine-oriented cloud. I realize that it's not a problem in the case of AWS or other clouds which we don't manage ourselves, but I think that using Kubernetes as the main infrastructure, without managing a separate OpenStack (for example by running it on top of Kubernetes, or by using something like Virtlet for the machines), really simplifies things, and that is my assumption. Maybe I am wrong, but that's the idea I have behind it.

Yeah, this is fairly near and dear to our hearts at CoreOS, and a lot of this work fits in with our work on rkt, our project, and I spoke earlier today about operators; I think this kind of dovetails with that as well. You touched on this a little, but I want to make sure we draw it out. One of the reasons why you would want to package virtual machines in containers has little to do with what you think of as the execution isolation and has a lot more to do with the convenience of packaging and distribution, the ability to do verification on that package, that discrete containerized package, and, most importantly, to schedule it dynamically around grid compute resources with an orchestration system like Kubernetes. The unit of scheduling, the thing we know how to move around between computers in these systems, is a container, not a virtual machine, right? So that is a whole other set of reasons why we would want to package virtual machines inside containers: to give orchestration systems a handle on legacy applications that already exist in virtual machines. You get the container around them as a package, and now you can schedule them dynamically on your cluster resources the way you can containerized applications. He certainly touched on that, but I wanted to make sure to bring it out and put a nail on top of it. There you go, sorry.

No problem. So what's the next question? It looks like...

Does your concept, your implementation, support fancy virtual machine stuff like live migration?

For now it doesn't, but both Virtlet and KubeVirt want to address it somehow. Next question? Oh, I'm sorry, again, I keep forgetting about it: the question was whether we want to address more complicated operations for virtual machines, like live migration, for example. That isn't implemented yet, I think, in any of the projects, but most probably we will need an external controller, similar to an operator, which will consume a third-party resource called LiveMigration or something like that and call the libvirt API underneath.

What about scaling? Another fancy virtual machine feature.

So the question was: what about other operations, like scaling of virtual machines? To be honest, I haven't thought about that yet. Probably live migration will be the first more complicated virtual machine feature to be touched by the projects; I think it is being designed now and may be implemented in the near future. Okay?

This is the first question, first answer, then that one second.

So the whole use case is: you have an operator who has deployed a Kubernetes cluster, let's take Kubernetes, some minions, hundreds of minions, and then they say, okay, now we need a VM payload. Then this comes as a solution, but that operator either sticks to OpenStack or throws OpenStack out of the window, because unless you throw OpenStack out of the window, instead of simplifying things you add Kubernetes, and afterwards you still need your libvirt XML managed by Nova, managed by Heat. So you insert it underneath a restricted thing, and the whole definition thing, unless you throw out Nova and Heat, doesn't go away.

So the question, the thesis, was that putting OpenStack inside Kubernetes doesn't simplify things, because you still have Nova, you still have Heat and a lot of components, and it doesn't really simplify things for the operators. I only gave Kolla as an example; the Virtlet project isn't using OpenStack at all.
So I just wanted to make this presentation, like, objective and not promote a single solution; but if you want to throw out OpenStack, you are very free to do that.

Okay, so part of why we, with friends of ours in the community and contractors and other folks we've worked with, invested a lot of time and effort specifically in porting the OpenStack control plane into containers and running it as a Kubernetes application is actually that our findings are quite contrary to what your suspicions are. By unifying around a single management interface, that is the Kubernetes API, by deploying OpenStack, which is just a bunch of applications: as wonderful and as magical as it seems because it's a VM management system, all it really is is a big stack of Python apps. So we put them in containers, we run them on Kubernetes, we schedule them with Kubernetes, we recover from failures, which are quite frequent in those Python apps, with Kubernetes. And actually, I think there's a fair amount of stuff on the coreos.com blog about the project between CoreOS, Kinvolk, Intel, the rkt open source project, and all of the pieces that fit into the OpenStack port for Kubernetes. The finding actually is that we reduce the administrative overhead by unifying around a single cluster management interface instead of trying to deploy OpenStack applications in an OpenStack silo outside of Kubernetes. So that's the aim there, to answer the speculation that this adds complexity and would only make things more difficult. All I can do is encourage you to grab the stuff, try it out, and see whether you find that that's true.

I have concerns about containerizing the control part, the control plane of OpenStack. I have concerns about containerizing QEMU with the application payloads of the tenants.

Okay, so tenant application payloads. The folks you're serving still consume OpenStack APIs and schedule their virtual machines through the OpenStack facilities.

But their payloads are running inside these containers.

Yes. So that is a concern. Well actually, let me back up for a minute there. It is not necessarily true that customer VMs are running inside containers; we're kind of talking about two separate things here and we've blended the issues together a little bit. The talk is about running VMs inside containers. The OpenStack work, which relates to it and encompasses it, running parts of the control plane in containers, does not necessarily imply that your end users', your customers' VMs are packaged in containers. They are VMs consumed from OpenStack, scheduled with OpenStack.

Running on top of the hypervisor, at least, being virtual machines.

Yes. So is that a better answer?

Yes. And then that gets more at the... now I don't know what you're asking about.

Yeah, but that has nothing to do with those containers. The QEMU processes, or the libvirt ones, are not under the control of these containers, because they are tenant VMs.

No, no, I think that doesn't relate. If you want to use the concept of tenants for VMs, like in OpenStack, you can run OpenStack on Kubernetes, provide just the OpenStack control plane to the end user, and treat the whole Kubernetes stack running underneath as a thing only for your internal operators. So, yeah.

Thank you. Thank you. Yeah, that's why I wanted to keep things neutral: if someone needs OpenStack, I think they should be free to use OpenStack, and if not, then not; different people have different needs. Thank you.