Hi, welcome to this presentation. My name is Carlos Venegas. I work at Intel as a Kata Containers developer, and today we're going to talk about the latest integration of Cloud Hypervisor and Kata Containers.

So, containers: why are containers so popular? Well, let's take a look at this command: `docker run nginx`. With that simple command we get a lot of power. Why? We have a command to run an application that is not installed on our system; it is going to be downloaded for us and executed in an isolated environment. For isolation, Linux containers use two kernel features: cgroups and namespaces. Namespaces limit a process's interaction with certain subsystems. For example, the PID namespace limits which processes can be seen. There are other namespaces, for example network or mount, just to name a few. So keep in mind that when you use a tool like Docker or Kubernetes, your workload is going to be isolated by mechanisms like cgroups and namespaces, and probably others.

So why use Kata? Well, when using cgroups and namespaces, the kernel is shared between all the containers. If one container can find a vulnerability in the host kernel, it can potentially get access to the whole system. Fortunately, the container ecosystem has evolved to delegate container creation to a replaceable component. This was helped by creating API interfaces, so any program that wants to be responsible for creating a new container can implement those interfaces and innovate on the container isolation level. Some of these interfaces are OCI (from the Open Container Initiative), the containerd shim interface, and the CRI (Container Runtime Interface). These interfaces are for different container solutions, but Kata tries to follow all of them. By using Kata, the workload is isolated by another mechanism: virtualization. Every workload is going to run in its own virtual machine.
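As a quick illustration of the PID namespace mentioned above, here is a sketch, assuming Docker is installed; inside the container, `ps` can only see the container's own processes:

```shell
# Run a throwaway Alpine container and list processes from inside it.
# Because of the PID namespace, only the container's own processes are
# visible -- the entrypoint runs as PID 1, not as a high host PID.
docker run --rm alpine ps aux

# Compare with the host view, where the same workload appears among
# all the other host processes, under a different PID.
ps aux | head
```

The same process looks like PID 1 from inside and like an ordinary host PID from outside, which is exactly the kind of restricted view a namespace provides.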
Even if the container finds a vulnerability in the guest kernel, it won't affect the rest of the containers. Okay, then what is Kata Containers? Kata Containers is the combination of virtualization and traditional containers. The Kata stack is a set of different components: a runtime that interacts with the container ecosystem and also talks to a program (the agent) running inside the virtual machine to create containers inside of it, plus an abstraction layer for hypervisors or virtual machine monitors. In this presentation we are not going to focus on the whole stack, just on the hypervisor layer.

Kata Containers supports different hypervisors or virtual machine monitors: QEMU, which was the first supported virtual machine monitor, then ACRN, Firecracker, and most recently Cloud Hypervisor. All these solutions provide different sets of features and are focused on different use cases.

Before we talk about Cloud Hypervisor, let me start with rust-vmm. rust-vmm is a project that provides a set of components written in Rust for building your own hypervisor. Many companies are working together on the project right now, and several projects already share its code: Cloud Hypervisor, of course, Amazon's Firecracker, crosvm, and Dragonball from Alibaba. The idea is to share functionality for building new hypervisors without duplicating code, and to take advantage of that. One of the projects taking advantage of it to build a hypervisor for cloud use cases is, of course, Cloud Hypervisor.

Cloud Hypervisor is a virtual machine monitor that runs on top of KVM. It is designed to run the kinds of applications that usually run in a cloud provider. Some of Cloud Hypervisor's goals are minimal emulation, low latency, a low memory footprint, and safety.
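To give a feel for that minimal device model, here is a sketch of launching a VM with Cloud Hypervisor; the kernel and rootfs paths are placeholders you would supply yourself:

```shell
# Boot a small VM on top of KVM with a direct-kernel boot:
# no BIOS/UEFI emulation, just a kernel, a disk, and a serial console.
cloud-hypervisor \
    --kernel ./vmlinux \
    --disk path=./rootfs.img \
    --cmdline "console=ttyS0 root=/dev/vda rw" \
    --cpus boot=1 \
    --memory size=512M \
    --serial tty --console off
```

Direct kernel boot is one reason the emulation surface stays small: there is no firmware or legacy device model to carry along.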
It pursues safety by using Rust crates and the Rust programming language, and by keeping a small attack surface through covering only specific use cases, cloud use cases.

About a year ago, Samuel Ortiz and Andreea Florescu, from Intel and Amazon, presented what they thought was the list of features that Cloud Hypervisor, a hypervisor for cloud applications, should have. They took the CRI specification, looked at the functionality required by the container specifications, and mapped that functionality to hypervisor features. The features they extracted from this analysis were: virtual sockets, virtual storage, virtual networking, device assignment, shared file systems, and resource hotplugging. All of these already exist in other hypervisors such as QEMU. Let's see what we use in the integration of Kata and Cloud Hypervisor.

A virtual socket is used for two purposes. First, for communication between the runtime and the agent, so the runtime can send commands to execute functionality inside the virtual machine. Second, to get the I/O streams of programs running inside the container, which is inside the virtual machine. The solution we use for Kata and Cloud Hypervisor is hybrid vsock. This is a socket that is implemented as a Unix socket on the host side and a virtio vsock socket on the guest side. It is a replacement for using a multiplexed serial socket. The advantage of the hybrid approach is that the host kernel does not need vsock support; only the guest depends on it.

Virtual storage. Virtual storage is used for at least two use cases. The first one: because we are using a virtual machine, we need a way to boot our OS, and the default way to do it is with a virtio-pmem device. The other use case is hotplugging block-based volumes when possible.
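Putting those two pieces together, a sketch of a Cloud Hypervisor launch using hybrid vsock and a virtio-pmem boot device might look like this (paths, CID, and port are placeholders):

```shell
# Hybrid vsock: a Unix socket on the host, virtio-vsock in the guest.
# virtio-pmem: the guest rootfs is mapped in as a persistent-memory device.
cloud-hypervisor \
    --kernel ./vmlinux \
    --pmem file=./rootfs.img \
    --cmdline "console=ttyS0 root=/dev/pmem0 rw" \
    --vsock cid=3,socket=/tmp/ch.vsock

# From the host, reach a guest service listening on vsock port 1024:
# the hybrid protocol starts with a "CONNECT <port>" line on the Unix
# socket, and the VMM forwards the connection into the guest.
printf 'CONNECT 1024\n' | socat - UNIX-CONNECT:/tmp/ch.vsock
```

Note that only the guest kernel needs the vsock driver here; on the host side this is an ordinary Unix socket any client can open.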
This gives great performance, because we hand the volume directly to the virtual machine. But this solution has some issues, especially when files need to be shared between the host and the guest: once we give the block device to the guest, there is no way to synchronize what is happening. So for multiple reads and writes between host and guest, it is not the solution to use. Anyway, for virtual storage, what we use today, as I mentioned, is virtio-pmem to boot the guest OS, and virtio-block to attach block-based volumes.

Shared file system. A shared file system is used when it is not possible to use devices like virtio-block, and especially in cases where the host and the container need to share information via shared files. Kata uses the virtio-fs protocol with a local daemon, virtiofsd. This protocol was created by Red Hat, and as of now it is the replacement for the legacy protocol, 9pfs. One use case, for example, is Kubernetes using shared files to pass variables or metadata to the container.

Device assignment. Containers sometimes need to use a specific device that lives on the host, but because we are using a virtual machine, it is not possible to simply share devices. One example is taking full advantage of some specific hardware. To achieve that, we use VFIO (Virtual Function I/O). It is a kernel framework that exposes direct device access to user space. So once the kernel has exposed a device to user space, we can provide that device to a virtual machine, and the virtual machine takes full control of the device. Other use cases are sharing network cards or other PCI devices. As an additional note, the Kata kernel used for that specific container may need additional drivers to identify the device and take full advantage of it.

Device hotplugging.
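As a rough sketch of how a host PCI device ends up inside the VM (the PCI address `0000:01:00.0` is just an example), the device is first rebound to the `vfio-pci` driver and then handed to Cloud Hypervisor:

```shell
# Example PCI address -- replace with the device you want to assign.
BDF=0000:01:00.0

# Detach the device from its current host driver.
echo "$BDF" > /sys/bus/pci/devices/$BDF/driver/unbind

# Bind it to vfio-pci so user space gets direct access to it.
echo vfio-pci > /sys/bus/pci/devices/$BDF/driver_override
echo "$BDF" > /sys/bus/pci/drivers_probe

# Hand the device to the VM; the guest now controls it directly
# (and the guest kernel still needs the device's own driver).
cloud-hypervisor \
    --kernel ./vmlinux --disk path=./rootfs.img \
    --cmdline "console=ttyS0 root=/dev/vda rw" \
    --device path=/sys/bus/pci/devices/$BDF
```

This requires root, an IOMMU enabled on the host, and assigning the whole IOMMU group the device belongs to.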
Some operations that containers support change resources over time. That means they can change the limits on CPU utilization, memory utilization, or network utilization, and these operations are done with cgroups. In the case of Kata Containers, we also have an additional layer limiting resources: the virtual machine itself. So in order to resize the amount of CPUs or memory, we need to hotplug or unplug those resources. To achieve that, in the case of Kata Containers plus Cloud Hypervisor, the mechanism used is ACPI. That way, we are able to hotplug these devices.

Just to summarize how well Kata fits with the Cloud Hypervisor design, let me show you a mapping between the APIs of Cloud Hypervisor and Kata Containers. On the left side, you can see the hypervisor API of Kata. It is an interface, so if we want to add a new hypervisor to Kata, we need to implement these functions. On the other side, you can see the list of API calls we need to make to perform each of those tasks. Something that is really nice is that most of the Kata API functions map to just one simple call to the Cloud Hypervisor API.

This is the final conclusion of the current integration of Cloud Hypervisor and Kata. As of today, we have support for Docker: the use cases are running in our CI, and it is fully working with the use cases Kata supports. We have support for Kubernetes, with end-to-end testing working with Kata, and we have tests for containerd, CRI-O, and Podman.

Thank you very much for watching this presentation. I think that's it. Please feel free to ask any questions via IRC or Slack. Here is my username in case you want to ask me anything, or feel free to contact the Kata community via the mailing list. Thank you and bye.
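For example, with Cloud Hypervisor's control socket, resizing looks roughly like this (socket path and sizes are illustrative); `ch-remote` is the small client tool that ships with Cloud Hypervisor:

```shell
# Start the VM with an API socket so it can be controlled at runtime.
# max=4 and hotplug_size reserve room for CPU and memory hotplug.
cloud-hypervisor \
    --api-socket /tmp/ch.sock \
    --kernel ./vmlinux --disk path=./rootfs.img \
    --cmdline "console=ttyS0 root=/dev/vda rw" \
    --cpus boot=1,max=4 \
    --memory size=512M,hotplug_size=2G

# Later, hotplug resources; the guest discovers them via ACPI events.
ch-remote --api-socket /tmp/ch.sock resize --cpus 4
ch-remote --api-socket /tmp/ch.sock resize --memory 2G
```

From the container runtime's point of view this looks just like updating a cgroup limit; the extra hotplug step is handled by the Kata runtime underneath.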