Welcome, everyone, to the TAG Runtime update lightning talk. I'm Taylor Thomas; I work at a company called Cosmonic. And I'm Alexander Kanevskiy, though most people know me as Sasha. I work for Intel, I'm a tech lead for TAG Runtime, and I co-chair the Container Orchestrated Devices working group. So we're both co-chairs of various working groups that are part of this TAG. TAG Runtime is made up of a bunch of different projects and initiatives, constantly ongoing, all around the runtime space. The main groups are Cloud Native AI, Batch System Initiatives, Container Orchestrated Devices, IoT Edge, Special Purpose Operating Systems, and WebAssembly. This QR code leads to that list and to links with all the relevant information about the different working groups.

Just as a general point, these working groups are completely open to the public. For example, in the Wasm working group we run a meeting every two weeks; it's on the calendar, and you can come join. That's pretty much standard practice for all of these groups. So if you've never joined one, a working group is a great way to get started and show your ability to contribute to the community. Keep that in mind.

The purpose of this talk is to go over two of the working groups we're involved in and give you an overview of a lot of the emerging technology that's happening. First up is WebAssembly. Quick show of hands: who has heard of WebAssembly? Okay, now lower your hand if you have not used it. Okay, there we go, that's about everyone. So it's still emerging, but there's a lot going on. I just came down from Wasm Day upstairs. Wasm is showing up in a lot of places in the cloud native ecosystem: in Docker and the container runtimes, and in Kubernetes via runwasi and other extension points. If you look across the ecosystem, you'll see it in places like Envoy, Kubewarden, and OpenFunction.
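As an illustration of the Docker integration mentioned above, here is a sketch of running a Wasm module with Docker Desktop's beta Wasm support (the containerd shim name follows Docker's documented convention; the image name is just an example and assumes an image containing a WASI module):

```shell
# Run a Wasm workload through Docker's containerd Wasm shim (beta feature).
# Requires the containerd image store and Wasm support enabled in Docker Desktop.
docker run \
  --runtime=io.containerd.wasmedge.v1 \
  --platform=wasi/wasm \
  secondstate/rust-example-hello:latest
```

The key idea is that the `--runtime` flag swaps the usual Linux container runtime for a Wasm runtime shim, while the rest of the Docker workflow (pull, run, logs) stays the same.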
You'll also see Wasm-native application platforms. wasmCloud is one that I'm a maintainer of; there's also WasmEdge. Those are the two that are part of the CNCF. Then there are plugins and extensions; just go to the CNCF landscape, search for Wasm, and you'll find it in some really interesting places. So that's a quick update on where we're currently at, and hopefully it's some motivation for people to come take a look at what we're doing.

We're working on standardizing an OCI artifact definition. This will become the standard way to store WebAssembly modules in an OCI registry and to fetch and read data from them. That might not sound like much, but once you start to learn what WebAssembly does, and particularly the component model, it's really cool. Then we have the ongoing work on wasi-cloud. wasi-cloud is a group of interfaces (if you don't know what that is, it's okay, you can go learn more) that help us interface with different cloud technologies: things like key-value stores, blob stores, and so on. We've also been doing a lot of curating of Wasm knowledge and learning. If you're a Wasm project in the CNCF and haven't presented to the group yet, we'd like to hear from you. We record those meetings, so go check them out; we have people presenting on every project we could find that's doing Wasm work in the CNCF. And soon after this conference, we'll be starting work on a white paper on platform engineering and how WebAssembly affects it. So that's what we're up to.

All right, switching gears a bit. I come from the working group that works on accelerator devices. The work isn't really emerging in the sense of being new; it started several years ago with all the preparation. But now, with the rise of AI and machine learning workloads, it's suddenly become a hot topic. Everybody talks about it.
Everybody's interested, and so on. We're a small group of people, mainly from Intel and Nvidia, but we have a lot of fellow travelers from Red Hat, IBM, the container runtimes, and so on. We're passionate about how to enable these devices. We started looking from the very bottom at how devices, how accelerators, can be exposed, and the main outcome of the working group is the Container Device Interface (CDI). It's something that happens at the runtime level of the CNCF stack, and it describes what "I want GPU zero" actually means for the container: which libraries, which hooks need to be executed, which permissions need to be given, and so on and so forth. The main discussion of how it all works happens in our TAG's CNCF repository and everything around it. We can't fit a detailed description into this talk, so if you're interested, or if you're curious how devices are exposed in the Kubernetes stack and the rest of the CNCF, look at the several sessions at KubeCon with keywords like DRA, GPU, or CDI. All of those lead to something our working group is working on. And we have a TAG Runtime booth, so we can talk through more details there.

I have one and a half minutes, so here's a crash course on what CDI is about and how it helps. CDI, as I said, is a low-level specification of what an accelerator means. We're using it as a basis for DRA, dynamic resource allocation, where you can say, "I want a GPU with particular properties, I want X amount of RAM, here's how to share it," and so on. At the same time, we're trying not to teach the kubelet all those internal details of accelerators. To the kubelet, a device is seen as a simple string that identifies it, and we have a protocol between the kubelet and the runtime that says this workload requires this particular device.
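To make that concrete, here is a minimal sketch of a CDI spec file, following the CDI specification's JSON format (the vendor name, device node, mount paths, and environment variable are all made-up placeholders for illustration):

```json
{
  "cdiVersion": "0.6.0",
  "kind": "vendor.example.com/gpu",
  "devices": [
    {
      "name": "gpu0",
      "containerEdits": {
        "deviceNodes": [
          { "path": "/dev/vendor-gpu0" }
        ],
        "mounts": [
          {
            "hostPath": "/usr/lib/vendor",
            "containerPath": "/usr/lib/vendor"
          }
        ],
        "env": [
          "VENDOR_VISIBLE_DEVICES=0"
        ]
      }
    }
  ]
}
```

Higher layers only need the fully qualified device name, here `vendor.example.com/gpu=gpu0` (the `kind` plus `=` plus the device `name`), which is exactly the "simple string" the kubelet passes down; the runtime expands it into the device nodes, mounts, and environment edits above.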
And inside the runtime, in containerd or CRI-O, that gets expanded into what actually needs to happen for the container at the level of the kernel, drivers, and so on and so forth. We have a fellow traveler, the node resource interface (NRI), which can help you fine-tune native resources like CPU and memory to complement your accelerators, but that's a bit of a side story. Besides Kubernetes, we've implemented the same support in the most popular command-line tools. Podman has had it for several years already; in Docker it was recently added; and Singularity works in some modes. So whether you have command-line experience with containers or an HPC kind of container experience, I think we have it covered. If you see a scenario that's not covered but where you still want to use accelerators, talk to us, and we'll help you with that. See you all at the TAG Runtime booth.
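For the command-line tools mentioned above, requesting a CDI device looks like this (a sketch: it assumes a CDI spec in a standard location such as `/etc/cdi/` that defines the made-up device `vendor.example.com/gpu=gpu0`):

```shell
# Podman accepts fully qualified CDI device names directly via --device.
podman run --rm --device vendor.example.com/gpu=gpu0 fedora ls /dev

# Docker added CDI support more recently; the flag is the same once
# the CDI feature is enabled in the daemon configuration.
docker run --rm --device vendor.example.com/gpu=gpu0 fedora ls /dev
```

In both cases the user only names the device; the runtime consults the CDI spec to inject the right device nodes, mounts, and environment into the container.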