How's everyone doing? I hope everybody's having a great KubeCon. I'm excited to be here to talk about CNCF TAG Runtime and some of the work that we've been doing.

A little bit about our charter: we are here primarily to help the adoption of different types of workloads, whether those are batch workloads or latency-sensitive workloads, but all of them in the context of cloud native environments. We work closely with the CNCF Technical Oversight Committee, the TOC, and we have liaisons. Currently we have three liaisons, who are part of the TOC, and we have three chairs. We also have a tech lead, and we're looking for more tech leads and folks interested in participating and joining, so feel free to reach out. Our meetings are the first and third Thursday of every month, and our communication happens over Slack and through our mailing list.

We primarily do three things. First, we do outreach to projects. For example, we go to GitHub repositories, see what's coming into the CNCF, and try to engage those projects and see if they're interested in presenting in our meetings. Sometimes we talk to TOC members to see if some of these projects are good fits for the CNCF. Second, we support existing projects in navigating the whole CNCF ecosystem, including the different TAGs; there's TAG Observability, there's TAG App Delivery, there's TAG Storage, and we help projects figure out how they fit into those TAGs. Finally, we go out and educate: we educate the community on what the TAG is doing, what some of the projects are about, and how people can get involved.

The scope of the TAG is primarily around projects like these. You have things like WebAssembly, Kubernetes itself falls into the scope of the TAG, and container runtimes like containerd and CRI-O.
You have things like K3s, which is Kubernetes at the edge, and Tinkerbell, which allows you to provision bare metal machines. So a lot of different projects, but all within the theme of how you run workloads.

We have these different scope areas. There's general workload orchestration, with Volcano and Kubernetes fitting there. Then we have VMs and runtimes: CRI-O, containerd, the WebAssembly runtimes. We have the container image registries, like Quay and Harbor, and other projects like rootless containers. Another part of the scope is special-purpose operating systems, meaning operating systems meant to run something unique, in this case just containers. Flatcar, for example, is an operating system that exists just to run containers. And then there's the AI, edge, and MLOps space; some projects there include SuperEdge and KubeEdge, and things like Kubeflow and MLflow, all in scope for machine learning. Finally, we have working groups, and we're open to expanding these. The current working group is Container Orchestrated Devices, which I'll talk about in the last part of the presentation.

So now for runtimes; what are some of the things we've been doing? We had the wasmCloud project come into our meeting and talk about what they do. If you want to learn more about this, there was a Cloud Native Wasm Day earlier on Tuesday, and I encourage you to check out some of those talks. Essentially, what they want to do is have a model of capabilities and actors, where both capabilities and actors are WebAssembly modules. A capability can be something like the ability to connect to a different system, for example a Redis database or a MySQL database, all through WebAssembly. And an actor is the logic that takes some action through those capabilities; for example, if the capability is a database, inserting some data into the database.
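The capability/actor split described above can be sketched in ordinary Python. This is only an illustrative toy, not wasmCloud's actual API (real wasmCloud uses Wasm modules and a host runtime; the class names here are hypothetical):

```python
class KeyValueCapability:
    """A capability provider: abstracts access to an external system
    such as Redis or MySQL. Here it's just an in-memory dict."""
    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)


class CounterActor:
    """An actor: business logic that only talks to capabilities,
    never to the backing system directly."""
    def __init__(self, kv):
        self.kv = kv

    def increment(self, key):
        current = self.kv.get(key) or 0
        self.kv.put(key, current + 1)
        return current + 1


kv = KeyValueCapability()
actor = CounterActor(kv)
actor.increment("visits")
print(actor.increment("visits"))  # 2
```

The point of the split is that the actor's logic stays the same whether the capability is backed by Redis, MySQL, or an in-memory store.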
If you're interacting with a device at the edge, you would be reading some data from that device. So those are some of the examples. A lot of interesting things happening here, and I'm excited to see what happens next with this project.

WasmEdge is another project that got involved, and this is primarily a runtime for WebAssembly. If you build your WebAssembly binary, you can use this to run it, and it provides capabilities like sandboxing; they're also working to comply with the WASI specification from the Bytecode Alliance. Use cases are running WebAssembly on edge devices or IoT-type applications; you can even use it with web applications or embed it into SaaS applications. This project is currently in the CNCF sandbox, and we'll see a lot more of it.

inNative is another project related to WebAssembly, and essentially it's an ahead-of-time compiler for folks who want native binaries in some specific cases. You may not want to run WebAssembly through a runtime, and you may not need something like WASI, the WebAssembly system interface. With this, you compile the module down with the C libraries and C linking and make everything possible through just a native binary. So that's inNative.

Quark is another project; it's another runtime, OCI compliant. It's a pretty early project, but essentially what they're trying to do is this: you have runtimes that are based on virtual machines, like Kata Containers, and you have things like Firecracker from Amazon, and they're trying to trim that VM layer down. The VM has some performance implications; you have to bring up the VM layer, and it's just that extra layer. What they're building is a hypervisor from scratch at the bottom, called QVisor, written in Rust, and on top of that a custom kernel. The main purpose is having something higher performance.
This project is pretty early; it's not in the CNCF yet, but still in progress.

Another initiative that we got folks involved with is QoS for container runtimes. Right now you have things like the CPU manager in Kubernetes, where you can pin a CPU if you want a workload to use up that full CPU. With this initiative, different Kubernetes nodes may have different capabilities: higher or lower memory, different latency, or a device that's faster than another, for example one disk drive that's faster than another. You would be able to select these classes on different Kubernetes nodes depending on how you have them configured. That's the end goal; the project is still going on. There's a pull request in containerd, and it's currently supported in CRI-O. So we'll see more of this as well.

Another project we got to see is Sysbox. Essentially, this gives you a Docker container in which everything acts like a VM: you have systemd, you have an init process. Use cases for this are people running CI/CD systems who want that VM experience, or people who have been using a lot of VMs and want an easy path to containers while keeping that VM-like behavior. It's faster than a traditional VM because it just runs as a Docker container.

Rootless containers is another project, from the NTT folks and the containerd community, and essentially this allows you to run a container without root privileges. The container actually thinks it's root, so it has access to things like /proc and /sys, the main components of the system; you're fooling the container into thinking it's root, but on the host you're running as a different, unprivileged user.
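The trick behind "the container thinks it's root" is the user-namespace uid mapping, which the kernel exposes as lines of (uid inside the namespace, uid on the host, range length) in /proc/&lt;pid&gt;/uid_map. A minimal sketch of how that translation works (the map values are just a typical example):

```python
# Sketch of the user-namespace uid mapping behind rootless containers.
# Each entry mirrors a line in /proc/<pid>/uid_map:
# (uid inside the container, uid on the host, range length).
def host_uid(container_uid, uid_map):
    for inside, outside, length in uid_map:
        if inside <= container_uid < inside + length:
            return outside + (container_uid - inside)
    raise ValueError("uid not mapped")

# Typical rootless setup: container root (0) is the unprivileged
# host user 1000; uids 1..65535 come from the user's subuid range.
uid_map = [(0, 1000, 1), (1, 100000, 65536)]

print(host_uid(0, uid_map))     # 1000 -- "root" inside, unprivileged outside
print(host_uid(1000, uid_map))  # 100999 -- an ordinary container uid
```

So a process that compromises "root" in the container only holds uid 1000 (or a subuid) on the host, which is exactly the hardening benefit described next.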
Their use case is that you want to prevent running containers that could compromise your host. We'll see a lot of progress on this as well, so I encourage you to check out the website.

Trow is another project; it's a container registry, but they're targeting something higher performance than Harbor or Quay, which are some of the existing container registries. One of the things they're doing is writing the whole thing in Rust, thinking that may give a lower memory footprint, and they're also thinking about a P2P mechanism so you can distribute your container images across a fleet of Kubernetes clusters in a more efficient way. When you start your workloads, you already have the container image locally, so they start right away. This is a pretty early project, not even in the CNCF sandbox yet, but we'll see more of it.

In the scope of workloads, what types of projects have been talking to the TAG? KEDA, which you've probably seen a lot of; there was some mention of it in the keynote. It's basically autoscaling your Kubernetes pods based on events. For example, you may have something like Apache Kafka or Amazon SQS, or you may have a file, or multiple files, uploaded to an S3 bucket; that's a particular event. KEDA will automatically detect that through metrics and scale up or down depending on the specific event. So this takes the traditional HPA in Kubernetes to the next level. This project is currently in incubation.

Karmada is a project that allows you to manage several Kubernetes clusters across multiple clouds, a private cloud, or edge clusters. It's a centralized control plane for managing all these Kubernetes clusters, and it builds on the initial ideas around Kubernetes Federation v2.
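The event-driven scaling KEDA does is built on the same calculation the HPA uses; for an external metric like queue depth with a per-replica target (the AverageValue form), the math is a one-liner. A minimal sketch, with the min/max bounds as hypothetical parameters:

```python
import math

# The scale decision KEDA drives: an external metric such as Kafka lag
# or SQS queue depth, divided by a per-replica target value.
def desired_replicas(queue_length, target_per_replica,
                     min_replicas=0, max_replicas=100):
    desired = math.ceil(queue_length / target_per_replica)
    return max(min_replicas, min(desired, max_replicas))

# 900 queued messages, target of 100 messages per replica:
print(desired_replicas(900, 100))  # 9
# No events pending -- KEDA can scale the workload to zero:
print(desired_replicas(0, 100))    # 0
```

Scaling to zero when the queue is empty is the part that goes beyond the plain HPA, which keeps at least one replica running.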
It started in the Kubernetes community, but the KubeEdge folks took on some of the code and some of the architecture and created this system that allows you to manage Kubernetes clusters in all these different places. The motivating factor is that you want to be able to manage Kubernetes clusters at the edge, with something like KubeEdge. This project, I think, is applying for the CNCF sandbox.

Volcano is a project currently going for incubation, and what they address is workloads where you need to schedule a lot of resources ahead of time, before you run the workload: very intensive workloads, and AI and machine learning workloads, qualify for this. For example, for processing a lot of data you may need to reserve a hundred cores, or a lot of memory, across multiple nodes, and this project allows you to do that. It integrates with things like TensorFlow, Spark, and PyTorch, some of the really popular big data and machine learning frameworks.

Confidential computing is an initiative from folks in the Kata Containers community. You'll see in a lot of these projects that there's overlap with some of the other TAGs; with this one, you'll see overlap with TAG Security. In essence, what they're trying to do is let the end user create a workload that only that end user knows about. When the end user runs it on a cloud service provider like AWS, Azure, or Google Cloud, they know that the CSP owns the infrastructure underneath, but they don't want the cloud provider to know what's being run there. That's why it's called confidential. There's encryption at the container image level, there's encryption at the memory level, so a lot of different layers of security providing confidentiality for whatever you're running.
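Volcano's up-front resource reservation is essentially gang scheduling: a job's tasks are admitted all at once or not at all, so a big training job never starts with only part of its resources. A toy sketch of that all-or-nothing admission check (a hypothetical helper, not Volcano's actual scheduler code):

```python
# Gang scheduling sketch: admit a job only if every one of its tasks
# can be placed on the available nodes at the same time.
def can_gang_schedule(node_free_cores, task_core_requests):
    free = list(node_free_cores)  # don't mutate the caller's list
    for request in sorted(task_core_requests, reverse=True):
        # First-fit placement for each task, largest requests first.
        for i, cores in enumerate(free):
            if cores >= request:
                free[i] -= request
                break
        else:
            return False  # one task can't fit -> the whole gang waits
    return True

# Three 10-core tasks against nodes with 16/16/8 free cores: the third
# task has nowhere to go, so nothing is admitted.
print(can_gang_schedule([16, 16, 8], [10, 10, 10]))   # False
print(can_gang_schedule([16, 16, 16], [10, 10, 10]))  # True
```

The contrast is with the default scheduler, which would happily start two of the three tasks and leave the job deadlocked waiting for the third.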
K3s is a Kubernetes distribution, a lightweight Kubernetes for the edge. What they did is take the Kubernetes source code and package it up in a small footprint; it's less than a hundred megabytes. They created different components out of the main Kubernetes components. You have the K3s server, which includes all the control plane components, and instead of using etcd it uses a SQLite database, or you can connect it to something like MySQL. Then you have the K3s agent, which has the other components you typically have on a Kubernetes node, like the kubelet and the networking, and it also uses containerd. They're looking at embedding this into lighter nodes or ARM-type processors at the edge, with a lot of applications in IoT and CI/CD. You'll see the general theme is very similar to some of the other existing projects. They're currently in the CNCF sandbox and looking to go into incubation, maybe within the next year.

KubeVirt is essentially virtual machines managed by Kubernetes. Instead of using your typical OpenStack to manage your virtual machines, or something like AWS Outposts or Google Anthos to manage your own data center, you can use Kubernetes for that. You bring up your VMs with the Kubernetes control plane, and it interacts with the kubelet to manage your fleet of VMs. It's currently going for incubation, so we may see it in incubation in the next couple of months.

Krustlet allows you to run WebAssembly modules with Kubernetes. We looked at some of the other WebAssembly projects, like WasmEdge, but those don't necessarily run on top of Kubernetes. Krustlet makes that possible: you build your WebAssembly module and then run it on a Kubernetes node, which can be located anywhere, at the edge, in your own data center, or at a cloud provider.
In the scope of special-purpose operating systems, we had a project called Vorteil present, and essentially this is an operating system defined by a TOML file. You can create your own operating system for your specific workload; there's no SSH login, there's no shell, so it's very custom made. They also have a library of different operating systems, similar to the way you have a Docker registry, but in this case it's a library of micro-VMs with operating systems, so you can pick one up for your specific use case. The maintainers are also working on another project called Direktiv, which is a way to run serverless workloads using Knative.

Now for the machine learning, edge, and AI space. We had the TFX project come in and present, and this is end-to-end machine learning management: you create your machine learning model, you manipulate your data, you train it, and in the end you send it over somewhere to serve it as an inference model. This is from the same folks as the TensorFlow community, the Google folks, and it's very similar to something like Kubeflow.

Speaking of Kubeflow and TFX, MLflow is also a very similar project that manages end-to-end machine learning. In this case it allows you to track your models: you can have different versions of machine learning models, and you can do your CI/CD. Maybe you have a model at version 0.1, you move to 0.2, and for some reason 0.2 doesn't work, so you revert to 0.1. You can manage that whole machine learning lifecycle and take it to production.

KubeDL, which is another project, is also very similar to MLflow and TFX, but the difference is that you can run it on top of Kubernetes. You can tune your model within your cluster and, again, end up with something optimal for your machine learning workload.
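The version-and-rollback lifecycle described for MLflow can be sketched with a toy registry. This is purely illustrative (the class and method names are hypothetical; the real MLflow tracking API is different):

```python
# Toy model registry: register versions, promote one to production,
# and roll back when a release misbehaves.
class ModelRegistry:
    def __init__(self):
        self.versions = {}    # version string -> model artifact
        self.production = None

    def register(self, version, model):
        self.versions[version] = model

    def promote(self, version):
        if version not in self.versions:
            raise KeyError(version)
        self.production = version

    # Rolling back is just promoting a previously registered version.
    rollback = promote


registry = ModelRegistry()
registry.register("0.1", "model-artifact-a")
registry.register("0.2", "model-artifact-b")
registry.promote("0.2")
registry.rollback("0.1")    # 0.2 misbehaves in production; revert
print(registry.production)  # 0.1
```

Keeping every version addressable is what makes the revert a one-step operation instead of a retraining job.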
You can serve it in a production environment using a typical Kubernetes service. This project is currently in the CNCF sandbox.

Akri is another project looking at the edge computing space, and essentially what they're working on is having Kubernetes automatically detect devices at the edge. Typical use cases are sensors, like temperature sensors, cameras, or other types of devices you may want to put at the edge. Some of these devices can come and go; maybe something breaks, somebody trips over a cable, and this project will automatically detect that. It also allows you to have backup devices: if one of the devices fails, another automatically gets picked from your pool of devices, and if somebody out in the field plugs in a device, they don't need to do anything, it just auto-configures. There was an interesting talk earlier on Tuesday about using Akri with WebAssembly and Krustlet, so I encourage you to take a look at that. A lot of new developments in this space, and this project is in the CNCF sandbox.

SuperEdge is a project very similar to KubeEdge that allows you to run workloads at the edge using Kubernetes. This is something the CNCF is trying to figure out: some of these projects have a lot of overlap, and some organizations may prefer a specific project for certain reasons, maybe the configuration, maybe the source code. So the CNCF is working on ways for end users to navigate this ecosystem. As you can see, SuperEdge is very similar to something like KubeEdge, and it's currently in the CNCF sandbox. Then OpenYurt is another project that is, again, very similar to KubeEdge and SuperEdge.
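The device pool and failover behavior described for Akri can be sketched as a small state machine: devices are discovered automatically, and when an active one disappears a backup from the pool takes over. A toy model, not Akri's actual API:

```python
# Toy device pool with automatic failover, in the spirit of Akri's
# edge device discovery: plug-in auto-configures, loss promotes a backup.
class DevicePool:
    def __init__(self, capacity=1):
        self.capacity = capacity  # how many devices serve at once
        self.active = []
        self.available = []       # backups, in discovery order

    def discover(self, device):
        # A device was plugged in (or came back); no manual config needed.
        if len(self.active) < self.capacity:
            self.active.append(device)
        else:
            self.available.append(device)

    def lost(self, device):
        # E.g., somebody trips over a cable.
        if device in self.active:
            self.active.remove(device)
            if self.available:
                self.active.append(self.available.pop(0))
        elif device in self.available:
            self.available.remove(device)


pool = DevicePool(capacity=1)
pool.discover("camera-1")
pool.discover("camera-2")  # goes to the backup pool
pool.lost("camera-1")      # camera-2 is promoted automatically
print(pool.active)  # ['camera-2']
```

The key property is that both events, plugging in and unplugging, are handled without any operator action, which is the "it just auto-configures" behavior from the talk.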
You have your edge component, and then you have the centralized component that can run in the cloud or in your own data center to manage your workloads.

We did have some other projects present in the past and get involved. Some examples: we had talks from KubeEdge, Quay, and Talos, which is a special-purpose operating system for running containers. Lots of different projects, so I encourage you to take a look if you're in this space. We also have some upcoming projects. Inclavare Containers is a project working on a different take on confidential computing, from the folks at Alibaba; I'm excited to hear from them. Armada is another project that allows you to manage workloads across multiple Kubernetes clusters. Then there's k0s, also a Kubernetes distribution for the edge. And of course the WebAssembly projects; we'll see a lot more from them in the future, maybe some more progress at the next KubeCon or within the next year or two. If you have a project in this space, or if you know of anything, reach out and let us know; we want these projects to get involved.

Now for the Container Orchestrated Device working group. We have this single working group, but again, like I said, we're trying to expand and create working groups for other areas. One example could be something at the edge; another working group could be something related to machine learning. So there are lots of opportunities to work together and try to come up with standards. What's happening in a lot of these open source projects is that you see folks in different projects working on different standards, or different ways to identify something, whether in machine learning or for a workload at the edge.
We want these standards to come together into something common that helps the end user, because if you have too many things everywhere, end users can get really confused. For the Container Orchestrated Device working group, what they're trying to do is come up with a way to define container devices, a device specification for regular containers. The challenge is that today there are a lot of disparate definitions; it's very fragmented, and the team is working to bring that together. The use cases are runtime-specific things like deep learning and 5G, targeting specific devices like GPUs or specific processors.

That's all I have for the projects. We have the mailing list, we have the Slack channel, we have the repositories, so check those out. Feel free to reach out; we're happy to help, and we just want more people to get involved. That's all I have. Does anybody have any questions about the projects, or about how to get involved? I'm happy to take those. All good? Any questions online? Okay, so either it was very clear or there wasn't a lot of interest. Thank you.