Hi everyone, we had some technical problems, but we are ready. My name is Wainer; I have worked at Red Hat in the virtualization team for a while now. This presentation was actually proposed, and most of the slides were made, by Treva Williams, who is the technical manager at the Open Infrastructure Foundation, where the project is hosted. She could not make it here, so she asked me to present it. I have been engaged in the project I will be talking about, Kata Containers, since more or less 2020. So basically I'll be talking about Kata Containers, and this is the mantra of the project: the speed of containers, the security of VMs. That is what we are seeking in this project.

OK, a little bit of history. The project actually began in 2015, when Intel launched Clear Containers; I don't know how many of you remember that project. Then in 2017 it was merged with other projects, and now it is under the umbrella of the Open Infrastructure Foundation. In May of 2018 came the first release. It was not a perfect release, but it was a workable version of Kata Containers. In 2019, more or less one year later, we had the first adopters running Kata Containers in production, such as Alibaba and other players. In October of 2020 there was a major bump, the 2.0 release (I will explain a little of each of those releases later), and two years later, another major release. And guess what? In 2024 we will probably have the next major bump. I don't think it's intentional; it's just a coincidence. And basically we sit here: the next generation of containerized workloads, and so on.

OK, but what exactly is Kata Containers? In a traditional containerized system, you basically use namespaces and cgroups to isolate the process, that is, to create the containers.
Of course, there are other layers of software and features being used, but the main ones are the namespaces and the cgroups. Then you can add other technologies, like seccomp, to enhance the security of your containers; seccomp, for example, filters out syscalls that you don't want your application to execute. And you have capabilities, and so on and so forth. The thing is, if your workload, the application in, say, container B, finds a hole, it can escape and get access to container A, or even get access to the host kernel. This is basically how a worm moves from one container to another.

In order to, not exactly solve that problem, but add a new layer of security for the containers, we have Kata Containers, with an approach where your container is isolated within a VM; it is running inside a VM. You now have this layer between the host kernel and your application. Even if your application is able to escape from the namespace, for example, it will only have access to the guest Linux kernel, not the host. This is basically what Kata Containers is.

Moving on: when I joined the project, we used to integrate Kata Containers with Docker, so you would run something like `docker run --runtime ...` and spawn a container with Kata Containers. We dropped that support, and nowadays we only support Kubernetes. So this is basically how a container is initialized in Kubernetes. At the top you have the Container Runtime Interface (CRI); underneath Kubernetes, the container runtime can be many things, but typically you have CRI-O or containerd (or nerdctl). When you need to spawn a container, one of those container runtimes is going to invoke, for example, runc or crun, depending on how you configure your environment.
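To make the isolation primitives mentioned above concrete, here is a quick way to look at them on a Linux host (a sketch assuming standard procfs paths; the exact namespace list varies by kernel version):

```shell
# Every Linux process runs inside a set of namespaces; a traditional
# container is built by giving a process its own copies of these.
ls /proc/self/ns

# cgroups bound the resources (CPU, memory, ...) the process may use.
cat /proc/self/cgroup
```

A container escape means breaking out of these kernel-level boundaries; Kata Containers adds a guest kernel in between, so escaping the namespace still does not reach the host kernel.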
But basically, runc is in charge of actually running the container: starting it, providing its environment, and managing its life cycle. With Kata Containers, this is the workflow: the Kata runtime sits between the container runtime and the container within the VM. So now CRI-O and containerd call the Kata runtime, and the Kata runtime is in charge of creating the VM, spawning the container process inside the VM, and so on. It's important to say that you don't need to change your application: the same application that you have, you don't need to recompile it or anything like that. It works seamlessly. You just need to change a property in your configuration file when you deploy your application on the cluster, and it is going to work automatically.

OK, here is an example use case for Kata Containers. In this example, you have a sensitive workload. Traditionally, how do you protect, or separate, those workloads? Basically, you run the sensitive workloads on node three and the other ones on node four, so you are physically separating the workloads. With Kata Containers, you can even spawn the two sensitive workloads on the same node. This allows you to increase the number of pods and containers that you can run on a node. I'm running a little bit fast, sorry.

This is the product that we made out of Kata Containers: OpenShift Sandboxed Containers. There are some blog posts where we explain more use cases for Kata Containers in OpenShift Sandboxed Containers. We are in the Red Hat catalog, so it's very simple to just click and install it.

Now, a little bit about the releases. In the second major bump, we improved the agent. Let me go back a little: there is an agent inside the VM that communicates with the runtime; there is no container runtime inside the VM.
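The "property in your configuration file" mentioned above is the pod's runtime class. A minimal sketch of what that looks like (the handler name `kata` is a common convention, but the exact name depends on how CRI-O or containerd is configured on your cluster, and the image here is just a placeholder):

```shell
# Register Kata as a RuntimeClass, then request it per pod.
cat <<'EOF' > kata-demo.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata              # must match the runtime configured in CRI-O/containerd
---
apiVersion: v1
kind: Pod
metadata:
  name: kata-demo
spec:
  runtimeClassName: kata   # the only change: run this pod inside a VM
  containers:
  - name: app
    image: nginx           # the application image itself is unchanged
EOF
# kubectl apply -f kata-demo.yaml   # run against a Kata-enabled cluster
```

Everything else in the pod spec stays the same, which is why no recompilation or image change is needed.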
And we improved and changed the agent: it is now written in Rust, and we changed the protocol, just to lower the amount of memory that we use in the VM, because that overhead on the system is a concern, right? Also, virtio-fs is now the standard shared file system, because in Kata Containers the container image is not pulled inside the VM; it's not even Kata that is responsible for that, it's CRI-O and the upper layers. So we share the container file system with the VM, now using virtio-fs; that was the first time we used it. We also improved some things for security, and we introduced support for Cloud Hypervisor; Kata supports Cloud Hypervisor, Firecracker, and QEMU as the virtual machine monitors used to spawn the KVM VMs. Plus a bit of stability work and some integration with Kubernetes.

Version three. We have further enhancements in terms of performance. We have the single-binary Dragonball hypervisor, where the Kata runtime and the VMM, the virtual machine monitor, are combined into a single process, so you don't need two processes. We could do this mostly because we rewrote the runtime, again in Rust, but this was not just a rewrite for the sake of rewriting, because Rust is nice. No, we changed the architecture a little bit. We now have support for cgroup v2, and we migrated to the version of virtiofsd written in Rust as well. We started the Confidential Containers spinoff; there will be a workshop tomorrow if you're interested in knowing more about Confidential Containers. And we bumped the versions of QEMU, the kernel, and so on, because, of course, the guest kernel and the rootfs have to be there; they are maintained and tested in the project.

What's coming next week? Sorry, next year. The presentation from Fabiano Fidêncio and Kevin Forron this week mentioned some of the things we are working on, just listed here.
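Choosing among the supported VMMs (QEMU, Cloud Hypervisor, Firecracker) happens in Kata's `configuration.toml`. A hedged sketch of what the QEMU variant can look like (section and key names follow the upstream configuration layout, but the file paths shown are placeholders that vary by distribution):

```shell
# Kata ships one configuration file per hypervisor; the runtime is
# pointed at whichever one you want to use.
cat <<'EOF' > configuration-sketch.toml
[hypervisor.qemu]
path = "/usr/bin/qemu-system-x86_64"                       # placeholder path
kernel = "/usr/share/kata-containers/vmlinux.container"    # guest kernel, maintained by the project
image  = "/usr/share/kata-containers/kata-containers.img"  # guest rootfs
default_memory = 2048     # MiB for the guest; the Rust agent keeps overhead low
shared_fs = "virtio-fs"   # the standard shared filesystem since Kata 2.x
EOF
```

Swapping hypervisors is then a matter of pointing the runtime at the Cloud Hypervisor or Firecracker configuration file instead.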
Yeah, so the project is getting rusty. I think only the runtime is still in Go now. And if you want to contribute... unfortunately, I'm out of time, sorry for that. If you have questions or comments, you can find me outside the room. Is there time for at least one question, maybe? I don't see any question... go ahead.

The question is about the isolation through the VMs: does it mean that you lose the flexibility of containers? No, the limits still work. We use hot plug, for example, for dynamically increasing and decreasing limits.