Hello everyone, we're excited to be here at KubeCon EU, and we're going to tell you about SIG Runtime, some of the work that we've been doing, and a little bit about what cloud native runtimes look like going forward. So, our charter: the CNCF SIG Runtime is there to help enable the adoption and execution of all the different workload types you can run in cloud native environments. That could be latency-sensitive workloads, batch workloads, or any type of specialized workload. We have three TOC liaisons from the CNCF TOC. We currently have three co-chairs (I'm one of the co-chairs), and we also have a tech lead. We meet every first and third Thursday of each month, and we have an email list for communications as well as a Slack channel for chatting, asking questions, and getting involved.

So what do we do? We have three types of activities: outreach, supporting, and education. In the case of outreach, we try to reach out to new tech projects and people working on newer technologies in the cloud native ecosystem, to see what's out there, what the trends are, to advance the field, and to make sure we have something in the pipeline as far as how people are going to run these workloads in the future. We also support different community members and projects in navigating the CNCF ecosystem; it can be a little daunting with all the different projects. We help existing projects through the CNCF process, going through the different stages of sandbox, incubation, and graduation. We also interact with other SIGs and other people related to the CNCF. Finally, we do some education: we help the community understand some of these projects and where they fit into the whole ecosystem. All of this is in the hope of getting more contributors and more involvement from the community.

In the scope of the SIG, we have many different projects in different areas. Here are the logos of some of the projects that have presented to us. Some of them have applied for different stages in the CNCF. Some of them are in the graduated stage, for example containerd and CRI-O, and Kubernetes, the main project that started the CNCF. There are projects at the edge like KubeEdge or OpenYurt, and other projects that allow you to run WebAssembly on your system. So, a lot of different areas. More details about the scope: we have general workload orchestration with Kubernetes, Volcano, and Metal³ (metal3.io). We also have the specific runtimes like the WebAssembly runtimes, the container image registries, and the runtime shims like containerd and CRI-O. Another scope of the SIG is special-purpose operating systems; an example is Flatcar, an operating system meant just for running containers. We also look at MLOps, edge, and AI projects like KubeEdge, Kubeflow, and MLflow, projects that allow you to run at the edge and run your end-to-end machine learning pipeline. Finally, we also take care of working groups. Under the SIG, right now we have only one working group, but we are hoping to get some more in the future. Renaud will talk about the Container Orchestrated Device working group we currently have.
So, in the runtime scope, some of the projects that presented: we have this project Sysbox. This project allows you to run containers, but the containers look like VMs. Inside the container you can have something like systemd, or you can run Kubernetes or even Docker inside the container. The idea is that you are not running VMs, which can be heavy; you're just running containers, which are generally faster and more efficient, and they can also be more portable when you want to publish, say, your container image in a container registry. Another project that got involved in our SIG and gave a presentation is SSVM, and this is a WebAssembly runtime. The WebAssembly community created this standard called WASI that allows you to run WebAssembly on a system; initially WebAssembly was intended more for the browser, but with this standard you can run it as an executable with a runtime. Many different types of applications have come up as possibilities. Some of these are pretty early, but different types are possible: for example, embedding WebAssembly modules in SaaS applications, obviously sandboxed; running these modules at the edge; IoT applications where you run them on small devices; and blockchain smart contracts managed as WebAssembly modules. So lots of different applications, and this project aims to target all or some of them. Krustlet is another project that presented, and this is also in the WebAssembly space; it allows you to run WebAssembly on top of Kubernetes. You can create your WebAssembly module, compile it, create your Wasm file, and push it to a container registry. With this project, when you have all the components installed on your Kubernetes cluster, you can instantiate a pod with WebAssembly modules, so you run it like a regular workload, just as you would run a container. This is a very early project, but there are a lot of exciting things coming up in this space. Trow is another project involved in our presentations, and this is a container image registry. We also have other container image registries in the community, like Quay, and in the CNCF we have Harbor. One of the differences with this project is that it aims to be a lighter-weight image registry that can run all the time on top of Kubernetes, and possibly in the future allow users to distribute container images across all the nodes of your Kubernetes cluster in a faster way, maybe using something like a P2P protocol. This project is also written in Rust, which is part of the reason it aims to be lighter weight and faster. The rootless containers community also presented and got involved in our SIG. This is a mechanism that uses the user namespace facility in the Linux kernel, where you can run container-type mechanisms as root inside a user namespace, but on the host you will be a different user. Hence they call them rootless containers: on the host you might be user 1000, but inside that user namespace you might be root, and as root you can instantiate different containers in there. There are some limitations around handling the network and the file system, but the community is working on addressing those and tackling those challenges.
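To make the user namespace mechanism concrete, here is a minimal sketch in Go (my own illustration, Linux-only, not taken from any of the projects above): it runs a command inside a new user namespace, mapping the current unprivileged host UID to UID 0 inside the namespace, which is the trick rootless containers build on.

```go
// userns_sketch.go — a minimal illustration of the Linux user namespace
// mechanism behind rootless containers: the child process is root inside
// the namespace while the host still sees your normal, unprivileged UID.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Run `id` in a new user namespace, mapping the current (unprivileged)
	// host user and group to UID/GID 0 inside the namespace.
	cmd := exec.Command("id")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags:  syscall.CLONE_NEWUSER,
		UidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getuid(), Size: 1}},
		GidMappings: []syscall.SysProcIDMap{{ContainerID: 0, HostID: os.Getgid(), Size: 1}},
	}
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "run:", err)
		os.Exit(1)
	}
	// Expected output: uid=0(root) gid=0(root) ..., even though no
	// privileges were needed on the host.
}
```

Run as an ordinary user, the child reports itself as root, while the host still accounts the process to that ordinary user; rootless container runtimes layer image unpacking, networking, and storage workarounds on top of exactly this primitive.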
So rootless containers are another space to watch, and we'll see more of this in the future. In another scope of the SIG, we have the special-purpose operating systems. Broadtail is another project that we got a chance to see. It allows you to run very lightweight VMs that have a very lightweight operating system: essentially you define your operating system in what they call an init file, and you tell it exactly what to run inside that VM, with no need for daemons like an SSH daemon, a shell, or login. So it's something very lightweight, and the idea is that it's better protected and also lightweight enough to be more efficient and faster. A very interesting project, and we'll see more of it in the future as well. Also in the scope of AI and MLOps, Kubeflow presented. This project may be familiar to many people; it's basically end-to-end machine learning on top of Kubernetes. In essence, you can use machine learning tools like Jupyter, PyTorch, and TensorFlow to build and train your models, and with Kubeflow you can take them all the way to production. It allows you to create the full pipeline that goes from training and tweaking all the way to serving the model in a Kubernetes service, whether you want to run it in the cloud or wherever else you want to run it. OpenYurt is another project in the CNCF, currently in sandbox. This project is similar to KubeEdge: it has a main control plane that runs in the cloud and components that run at the edge, so essentially it lets you manage workloads at the edge, specifically the types of workloads that have become very popular now, especially with 5G. Some other projects presented in the past, and if you're interested in them you can reach out to us or to their communities; they'll be happy for you to get involved. We're also doing outreach and have some upcoming activities. There are a lot of awesome WebAssembly runtimes, and we've reached out to some of them; some are under the Bytecode Alliance, and there are projects like Wasmer, which is a WebAssembly runtime, and Wasm3, which is a WebAssembly interpreter. We also have projects that are currently in the CNCF that we want to get more involvement from, like K3s. And we're open to any future project or technology; for example, if you know something about quantum computing or something else that might be interesting, we're open to that too. I will hand it off now to Renaud, and he'll talk about the Container Orchestrated Device working group.

Hello, everyone. My name is Renaud Gaubert. I am a software engineer at NVIDIA, and I've been working for the past four years on the cloud. Today I'll be presenting the Container Orchestrated Device working group, or COD. To give you a brief overview of our group, we are a small group of device vendors, container runtime maintainers, and community contributors. Over the past five years, we've seen exponential growth in people making use of devices: in the AI field, for example, with machine learning and deep learning; for network data plane acceleration; or even for encryption and decryption offload. Devices such as FPGAs, GPUs, NICs, or even custom ASICs have become pretty ubiquitous in the data center. And so the goal of this group is really to improve device support in the cloud.
What that means is that for you as a user or as a cluster admin, we're trying to make your experience seamless. You don't want to have to SSH into your nodes to install custom runtimes or custom drivers, or to tinker with your distribution's internals. Being vendors, users, and administrators of clusters that have these devices, we are familiar with the problems, and we've laid out a roadmap to show the main issues we've been facing.

The first one is at the runtime level: it's about exposing the device to the container. It's actually a really hard problem, because the space is very fragmented. Kubernetes has a concept of device plugins, while Nomad has its own concept of device plugins. Docker has its own plugin mechanism, while Podman has a concept of hooks, and Nomad has its own concept of hooks. If you look at the other runtimes out there, it's pretty much the same: everyone has implemented their own mechanism. All these differences across the space make it very difficult for vendors to provide a uniform experience and the same set of features across the different projects, meaning that some projects end up with different features than others, and that makes the cost of supporting all of them pretty high.

At the node level, when you are using an external device, your main goal is to perform, on that device, a computation that is quite slow on a CPU. Typically, if you don't choose the right combination of memory, CPU, device, and NIC, you might end up taking a very slow path when, for example, transferring data from your CPU's memory to your device, which ends up nullifying the benefits of using that device. So choosing the right device is really important, and we've been doing a lot of work at the Kubernetes level through the CPU manager and the topology manager.

Finally, at the cluster level, when your application needs to use multiple nodes, whether you are deploying a small application, a large-scale application, or running on a supercomputer, you typically need to make sure that your nodes are actually close to each other, so that when your applications start talking to each other over the network you're not incurring a really heavy penalty. So finding the right policies, extension points, and general mechanisms to indicate to the orchestrator which nodes are closer to each other is a really important and challenging problem.

One of the solutions, one of the projects, that I'm going to talk to you about today is focused on the runtime. It's called CDI, the Container Device Interface, and it's a unified plugin architecture for runtimes. Three things you need to know about it: it's based on the CNI model, the Container Networking Interface; it describes the devices that are available on the machine; and it describes the operations a runtime must perform. On this slide you can see a very simple example, where a vendor has created a JSON file at /etc/cdi/vendor.json describing that there is a device kind, vendor.com/device, on the node; it has a single device called myDevice, and the only operations a runtime must perform are to mount /dev/card1 and /dev/renderD1 into the container. This is the vendor's side; a rough sketch of what generating such a spec file might look like follows.
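As an illustration only (this is not the slide from the talk, and the field names follow an early draft of the CDI specification, so treat them as assumptions rather than the authoritative schema), a small vendor-side Go tool could emit a spec of roughly this shape:

```go
// cdi_spec_sketch.go — a rough sketch of producing a CDI-style spec file.
// Field names are approximations of the early CDI draft, not a definitive API.
package main

import (
	"encoding/json"
	"os"
)

type deviceNode struct {
	Path string `json:"path"`
}

type containerEdits struct {
	DeviceNodes []deviceNode `json:"deviceNodes"`
}

type device struct {
	Name           string         `json:"name"`
	ContainerEdits containerEdits `json:"containerEdits"`
}

type spec struct {
	CDIVersion string   `json:"cdiVersion"`
	Kind       string   `json:"kind"` // e.g. "vendor.com/device"
	Devices    []device `json:"devices"`
}

func main() {
	s := spec{
		CDIVersion: "0.3.0",
		Kind:       "vendor.com/device",
		Devices: []device{{
			Name: "myDevice",
			ContainerEdits: containerEdits{
				// The runtime's only job for this device is to expose
				// these device nodes inside the container.
				DeviceNodes: []deviceNode{{Path: "/dev/card1"}, {Path: "/dev/renderD1"}},
			},
		}},
	}
	// Print to stdout; a real vendor tool would write /etc/cdi/vendor.json.
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	_ = enc.Encode(s)
}
```

The output is the kind of JSON a CDI-aware runtime picks up from the CDI spec directory to learn which device nodes (and other edits) to inject into the container.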
As a user, you'll mostly be interacting with your container runtime, and it's actually not that different from the normal experience: you just invoke your runtime with the flag --device vendor.com/device=myDevice, and then your image and your command. From a Kubernetes perspective, we've created a proof of concept that shows what we intend to do. It's based on the same model as volumes in Kubernetes. The general idea is that when you, as a cluster administrator, provision machines that have physical devices on them, you also tell Kubernetes that you have what we call a device class. A device class is just a representation of that device, typically with the model information, for example the PCI ID, so that Kubernetes can identify which devices on which nodes match that device class; think GPUs and ASICs. As a user, you submit to Kubernetes a device claim that claims a set of devices, for example three GPUs or half a GPU, and Kubernetes will internally match that claim to an actual device or set of devices. Finally, as a user, you submit a pod that references that device claim. From there, your pod gets scheduled on a node and you'll be able to see that device inside the container.

We've also created a roadmap for CDI. By the time you see this presentation, we'll probably have a first implementation publicly available in Podman. We've been working with different runtimes, for example containerd and Podman, and with different groups, for example the HPC Advisory Council, on what this solution and the specification should look like, and we've been showing some POCs. You can also find the formal spec on our GitHub. We've been working on a Kubernetes KEP as well, and you've seen in the previous slide what this would look like. We are also working towards having the first version available. The criteria we have defined are: support in at least two major runtimes (it looks like those will be Podman and containerd, which are going to support CDI), support in Kubernetes, and making sure we have at least two different plugins. From there, we'll tag the version as v1.0, so that users understand that their plugins will be supported over time, and we can lay the groundwork for an ecosystem to build on top of this.

COD is a new and very exciting group. If you want to contribute, or if you have ideas, we'd love to hear from you. Feel free to drop into our bi-weekly Tuesday meetings. If you're working with our specification or our projects, let us know, either by dropping into a meeting or by sending an email to SIG Runtime. We'd love to hear from you. That's it for the SIG Runtime presentation. You can read a lot more about what we're doing for SIG Runtime and COD on the CNCF mailing list. You can reach out to us on the SIG Runtime channel on the CNCF Slack. You can also find the main information on our GitHub. Feel free to also join the SIG Runtime meetings, bi-weekly on Thursdays, and give us some feedback. Thanks for attending.