In 2017, we introduced the idea of pilot projects to foster development in strategic areas where we identified a gap or an opportunity for open infrastructure. Over the past six months, we've been working on defining the criteria and the process for graduating pilot projects to the next phase. We did this in an open, collaborative fashion: we started with a big, messy, anything-goes public brainstorming etherpad, condensed that down to a first draft, and then reviewed that draft through a series of iterative refinements with the board, the pilot projects, the OpenStack Technical Committee and the User Committee, and on the public Foundation mailing list. The result is much better than anything one individual or small group of people could have produced. What we ended up with, which we call the confirmation guidelines for open infrastructure projects, really captures the fundamental essence of what makes open source projects successful and sustainable over the long term. Some of its elements are strategic focus for the project direction and keeping an eye out for opportunities for cross-project collaboration, best practices for project governance, technical best practices, and the active practice of open collaboration and engagement with developers, users, and the broader ecosystem. Pilot projects are the projects that are still learning these open collaboration practices, while confirmed projects are the ones that we believe have demonstrated a level of maturity in applying the concepts of open collaboration. And on that note, I'm pleased to announce that Kata Containers is the first pilot project that the OpenStack Foundation Board has approved for confirmed status. Kata Containers has been at the center of some really interesting opportunities for cross-community collaboration and integration.
To tell you a little bit more about that, I'm going to bring up two collaborators from Kata Containers, Firecracker, and the RustVMM project. Please welcome Andreea Florescu and Samuel Ortiz. Good morning, everyone. My name is Samuel and this is Andreea. Today we're going to talk about how we're evolving the container landscape through open source projects like Kata Containers, Firecracker, and RustVMM. So containers: everyone loves them, they're everywhere. But as people move container workloads closer and closer to production, they're realizing it takes a lot more than just fetching a Docker image and running runc to actually run a container in production. And when it comes to security, workload isolation is a key attribute of any container infrastructure in production. As a project, Kata Containers has been focusing on this specific aspect of the container software ecosystem: workload isolation. We've been doing this by adding lightweight virtual machines and hardware virtualization as yet another container isolation layer. Over the past year, we've been really busy making sure that our architecture and our implementation really fit the needs of the industry. Since our first stable release back in May 2018, we've made six major releases, and a new one is in the pipeline, hopefully coming in a few weeks. This has brought a lot of improvements to the Kata Containers project. First of all, we improved our performance. We did that by adding features like VM templating, which allowed us to improve boot time latency. We also added support for TC mirroring, which helped us with networking performance and compatibility. And finally, we're soon going to be integrating a brand new implementation and technology called virtio-fs, which is going to allow us to improve our shared filesystem performance and, again, compatibility. We also focused a lot on operability and simplicity.
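For readers who want to try the features just described, they are toggled from Kata's configuration file. The excerpt below is a sketch, assuming a recent Kata 1.x release: the key names (`shared_fs`, `virtio_fs_daemon`, `enable_template`) appear in Kata's configuration.toml, but the paths are illustrative and should be checked against your installation.

```toml
# /etc/kata-containers/configuration.toml (excerpt, illustrative)

[hypervisor.qemu]
path = "/usr/bin/qemu-system-x86_64"        # illustrative path
# Switch the shared filesystem from virtio-9p to virtio-fs
shared_fs = "virtio-fs"
virtio_fs_daemon = "/usr/libexec/virtiofsd" # illustrative path

[factory]
# VM templating: clone new guests from a pre-booted template VM
# to cut boot time latency
enable_template = true
```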
We now have distributed tracing enabled across all Kata Containers components. We can do live updates and live upgrades between major releases of Kata Containers. And finally, we simplified our overall architecture by using a technology called virtio sockets, or vsock, which allowed us to remove some of the Kata Containers components. Our industry support also got better. We integrated a bunch of hardware-vendor-specific contributions, and that allowed us to enable more architectures than x86, like ARM64, PPC64, and S390. And finally, we also worked on our security architecture. We made it even better by, for example, enabling workload isolation inside the virtual machine. So we now have isolation outside the virtual machine through hardware virtualization, and also inside, by using common container isolation techniques within the virtual machine. But our security architecture also got improved by adding support for more hypervisors. And this is an aspect of our work that I want to focus on today, because it led to a very fruitful collaboration across the industry to create a new project called RustVMM. But first, hypervisors. From day one, Kata Containers has been running on top of the most performant, reliable, and versatile open-source hypervisor, QEMU. In 2018, we tried to reduce the QEMU feature set, and potentially also the QEMU attack surface, by enabling NEMU inside Kata Containers. And finally, in December 2018, we added support for a brand new hypervisor coming from Amazon: Firecracker. So let's look a bit at Firecracker. Firecracker is an open-source, lightweight virtual machine monitor written in Rust. It leverages KVM to provide isolation for cloud-based, multi-tenant workloads like containers and functions. It currently runs in production: AWS Lambda and AWS Fargate are using Firecracker today. And now that we've briefly introduced Firecracker, let's look at some properties that make it such a special project.
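As a rough illustration of the vsock mechanism mentioned above: vsock gives the host a plain socket channel into the guest with no networking stack in between, which is what let Kata drop components. The sketch below only builds the address; the guest CID and agent port are made-up values, not real Kata defaults, and actually connecting requires a running VM with a vsock device.

```python
import socket

# vsock addresses are (CID, port) pairs instead of (IP, port).
# CID 2 always means "the host"; each guest gets its own CID.
# GUEST_CID and AGENT_PORT below are illustrative values.
VMADDR_CID_HOST = 2
GUEST_CID = 3
AGENT_PORT = 1024

def agent_address(cid=GUEST_CID, port=AGENT_PORT):
    """Address a runtime would connect to, roughly:
        s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
        s.connect(agent_address())
    socket.AF_VSOCK is Linux-only (Python >= 3.7).
    """
    return (cid, port)

print(agent_address())  # (3, 1024)
```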
So first, it boots blazingly fast. The time from receiving the start command up to the first user space process being executed is just under 125 milliseconds. Each Firecracker process has a low memory footprint, which is one of the reasons we are able to achieve high densities with Firecracker. But the main enabler for running thousands of microVMs on a single host is actually oversubscription. Each Firecracker vCPU is scheduled as a regular thread, and we rely on on-demand paging for oversubscribing the host memory. But in a multi-tenant cloud environment, the most important thing is actually security, and properly enforcing isolation of workloads is a hard requirement. Firecracker has two security boundaries. First, it uses KVM to separate guest workloads, and then the Firecracker process is sandboxed by the jailer, which uses standard isolation techniques like namespacing, cgroups, and deprivileging. In 2019, we've been working on adding platform support. Firecracker currently only runs on x86_64, and we are working towards adding ARM and AMD as well. We also did a bunch of refactoring in order to create standalone virtualization components that can be used by other projects as well. In the container space, we are focusing on providing a production-ready vsock implementation, and the main focus here is to replace the vhost backend currently used by vsock with a Unix domain socket. We're also actively working on firecracker-containerd, which is an open-source container runtime on top of Firecracker. So we enabled Firecracker in Kata Containers back in December 2018, and it got official support with Kata Containers 1.5. As we were working on integrating Firecracker support into Kata Containers, we found it to be a very interesting project. It's much, much smaller than any hypervisor that we support: much smaller than QEMU, and even much smaller than NEMU. But still, it can run almost all container workloads.
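To make the boot workflow just described concrete, here is a sketch of the request bodies a runtime sends to Firecracker's REST API, which is served over a Unix domain socket, to define and start a microVM. The endpoints and field names follow Firecracker's documented API; the kernel and rootfs paths are placeholders, and real code would PUT each body over the API socket (e.g. with curl --unix-socket).

```python
import json

def microvm_requests(vcpus, mem_mib):
    """Request bodies to define and boot a Firecracker microVM."""
    return {
        "PUT /machine-config": {
            "vcpu_count": vcpus,
            "mem_size_mib": mem_mib,
        },
        "PUT /boot-source": {
            "kernel_image_path": "/path/to/vmlinux",   # placeholder
            "boot_args": "console=ttyS0 reboot=k panic=1",
        },
        "PUT /drives/rootfs": {
            "drive_id": "rootfs",
            "path_on_host": "/path/to/rootfs.ext4",    # placeholder
            "is_root_device": True,
            "is_read_only": False,
        },
        # InstanceStart boots the microVM once it is configured
        "PUT /actions": {"action_type": "InstanceStart"},
    }

# Print each endpoint with its JSON body, as a runtime would send it.
for endpoint, body in microvm_requests(2, 512).items():
    print(endpoint, json.dumps(body))
```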
And when I say almost, I mean it has some limitations. For example, if you want to run a workload that needs direct device assignment, if you want to directly assign your GPU to your container workload, that's something you're not going to be able to do with Firecracker. If your container workloads need dynamic resizing, if you want to add more CPUs or more memory, that's the kind of thing Firecracker is not able to do. So we started to ask ourselves: what would it take to extend Firecracker to support all container workloads, to fill those gaps? Or rather, we started to ask ourselves: what would it take to build a container-specific hypervisor? And this led to the creation of the RustVMM project. RustVMM is about building and providing shared, standalone virtualization components written in Rust. The idea is for people to come and select which virtualization features they want and build a customized, specific hypervisor for their workloads, for example, a container-specific hypervisor. If you want to know more about this, please come to our talk on Wednesday, where I'm going to go deeper into the RustVMM project, and you can also join the RustVMM sessions at the PTG on Thursday and Friday. And if you want to know more about the projects in general, anytime, you can check out our GitHub repositories, or if you have any questions, let us know on Slack or through GitHub issues. Thank you.
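The standalone components described in the RustVMM discussion are published as individual crates under the rust-vmm GitHub organization, and a custom hypervisor assembles only the ones it needs. A sketch of what a consumer's Cargo.toml could pull in (the crate names are real rust-vmm crates; the version numbers are illustrative):

```toml
[dependencies]
kvm-bindings = "0.3"   # Rust FFI bindings to the KVM API
kvm-ioctls   = "0.5"   # safe wrappers around KVM ioctls
vm-memory    = "0.4"   # guest memory abstractions
linux-loader = "0.2"   # kernel image loading
vmm-sys-util = "0.6"   # low-level utilities shared across VMMs
```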