Thank you for being here. I'll be talking about the CNCF TAG Runtime, some of the activities that we've been involved with, and some of the communities that we've been reaching out to. This session is mostly about helping the audience understand what we're doing, how they can help with the TAG and the different groups within the TAG, and also how they can maybe attract some folks who are interested in contributing. All right, so a little bit of what I'll be talking about today: an overview of the TAG and what it means, the activities we've been working on and the presentations we've had, and the different working groups within the TAG. We have the IoT Edge working group and the Batch System Initiative working group. We also have the Container Device Interface, Wasm, and the recently created Special Purpose Operating System working groups, and we have a Cloud Native AI working group for which we just created the charter, and we expect that to be approved in the next two weeks. Then we'll touch on some future work and activities that we're planning and how you can get involved. So I have a slide on the charter with ChatGPT that I showed at KubeCon Amsterdam, and I went to ChatGPT this time and it didn't actually have a lot there. It seems like they changed some of their models, so it doesn't say much about TAG Runtime, but I did go to Google Bard and it came out with a better answer, so just for the fun of it, this is the answer. It gives you a pretty detailed answer about the TAG. I guess it looks at the GitHub repository and the website, and it provides pretty good information, so it's good to understand some of these things that are happening out there, and I think TAG Runtime is no different. And just for the fun of it, I also went to a generative AI image logo creator, and it actually created three different versions of a logo.
It's not actually very close to what our actual logo is, but if you want to get started, it is something. So, a lot of talk about generative AI, but what TAG Runtime is there for is to help different users and communities use cloud-native technologies when it comes to workloads. These workloads can be batch-type workloads, like high-performance workloads that run across multiple machines, or they can be latency-sensitive workloads, for example microservices. All of these in the cloud-native context. We work with the TOC, the CNCF Technical Oversight Committee. We have three TOC liaisons, we also have chairs, and I'm one of the co-chairs. And we also have tech leads, and the tech leads help out with many of the same activities the chairs do, but they tend to focus a little bit more on the technology. We meet on the first and third Thursday of every month, and communication happens over email and Slack. So these are some of the sample projects within the TAG Runtime scope. You can see there's a variety of projects in different areas. You have things like containerd and its shims. You have Harbor, which is a container image registry. You have other projects like KEDA, which helps you autoscale workloads in an automated way based on different metrics. Unikraft is another project, one that allows you to run unikernels. So, a variety of different projects in different areas. And these different areas we loosely define as scope areas within the TAG. You have general workload orchestration, where KEDA and Kubernetes fit in, and you have the runtimes, including the VM-based ones, such as the containerd and CRI-O shims. Lately there have been a lot of conversations around Wasm, so the Wasm runtimes also fall within the scope. Serverless workloads are another area, and Knative is one example of those projects currently in the CNCF. Then another special area is special purpose operating systems.
So there are several projects there, like Bottlerocket and Flatcar. Recently, we just created a new working group that will tackle some of these areas in special purpose operating systems. Another area that is super exciting is AI and machine learning. We do have a lot of projects that fall within that scope, such as Kubeflow and KServe, which allows you to serve machine learning models. And then, obviously, we have the areas of the different working groups under the TAG that are described here and that I mentioned previously. So now, some of the activities and presentations that we've had in the TAG. We do have a website, and there are updates on the website; some of our community members are posting information there. And we ask that anybody in the community who has any feedback or information that they'd like to share, or would like to update on the website, feel free to do so: they can create a pull request on the GitHub repo and make the change to the TAG Runtime website. So one of the exciting things that we have going on now is the proposal to create a Cloud Native AI working group. There's been a lot of excitement, and hype if you will, about AI and generative AI and LLMs, and we feel that we need to address that area. So a lot of community members have gotten together and started creating a charter. As a matter of fact, they already created a charter, and that's being reviewed by the CNCF TOC, and we expect that to get started maybe about two weeks from now. Some of the sample deliverables for this working group are things like white papers on cloud-native AI. And we could have things like a landscape with all the cloud-native projects and how they can help enable MLOps and AI-type workloads, and some surveys that can be sent out to different community members or organizations to understand more of the ecosystem.
Reviews and recommendations are another area that the working group is interested in tackling, as well as reports on new trends in the industry that can help cloud native. So all of these in the context of cloud native: how AI can help cloud native, and also how cloud native can run AI-type workloads. Another thing that we have going on is that we recently created the cncf-tags GitHub organization. Nikhita from the TOC created the pull request, or the issue, to start this initiative, and right now this is available to all the TAGs. One of the working groups in our TAG actually created an artifact that is now hosted in this organization. This is just a home for these working groups to have a place for something that can be shared across the community, and it can actually be used by several organizations or by different community members. This is a sample of the presentations that we've had this year and last year in different areas. For container tools, projects like Unikraft and containerd presented, and some others. In terms of workloads, we had the KubeEdge graduation presentation; the KubeEdge project is pretty mature, so they wanted to provide feedback to the community and they presented in our meeting. Other presentations included Kubernetes-related projects that enhance the Kubernetes ecosystem, like KEDA, Eraser, or KubeStellar and so forth. And finally, we had quite a few presentations on operating systems; this will actually be developed more within the Special Purpose Operating System working group that I'm going to talk about in a little bit. And just to give you an example, we had the Eraser project presented in our meetings. Eraser is a project that allows you to remove non-running container images from the nodes of Kubernetes clusters.
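To give a rough idea of how Eraser is driven, here is a minimal sketch of an ImageList resource that asks Eraser to remove specific images from every node. This is an illustration only: the API version string and the image names are assumptions and may differ from the current release.

```yaml
# Hypothetical sketch: an Eraser ImageList requesting removal of
# specific non-running images from all cluster nodes.
apiVersion: eraser.sh/v1
kind: ImageList
metadata:
  # Eraser expects a single ImageList, conventionally named "imagelist".
  name: imagelist
spec:
  images:
    - docker.io/library/alpine:3.17   # remove this tag wherever it is unused
    - quay.io/example/old-job:v1      # illustrative image reference
```

Applying a resource like this is what kicks off the cleanup you'll see in the demo: Eraser's per-node workers scan for the listed images and delete the ones no running container references.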
And this is a demo of the project. You can see that here we're creating a sample cluster with kind. The idea is that it creates a DaemonSet that pulls in a container image; the DaemonSet runs as a job to completion and is then deleted, so the image is left on the nodes but is no longer in use. So you see the DaemonSet coming up, the job completes, and you can see the pods have been deleted. Then it installs the Eraser control plane, and the idea is that afterwards there won't be any container images left over from the DaemonSet that was created. And you can see the image got deleted and it's not there anymore. So another project that presented is the Flatcar project, which is a special purpose operating system built just to run containers. It's a minimal distribution of Linux. It helps the whole ecosystem create servers that are more secure and more automated, and it provides a declarative way of provisioning. And you can see that it's actually being used by a wide variety of different organizations: Azure, AWS, VMware, etc. Unikraft is another project that presented; sorry, it's KraftKit that presented, which is a Go-based tooling framework based on Unikraft. KraftKit allows you to create unikernels, so it's the tooling around unikernels created for Unikraft. It's Go-based and, as you can see, it's a CLI-based system. And here we have just a sample Hello World demo. You can see that we run a kraft package update, and we modify the main.c file for the unikernel we're building. There's also a Kraftfile that defines how that unikernel is put together. Right here we're building the unikernel with kraft build, and then we just do a kraft run.
And you can see your Hello World unikernel. So another project that presented was KEDA; they were interested in graduating, so they presented and showed us their progress. KEDA allows you to automatically scale deployments up and down based on different scalers. The scalers are integrations with different endpoints, like Apache Kafka messaging or AWS SQS, or different CI/CD systems, which lets you spin up many workers to run your CI/CD jobs. It actually provides over 55 scalers, 55 different integration points, and it can run on x86 or ARM. So the idea with this project is that you focus on scaling your apps, not on the scaling internals. And you can see the progress of KEDA towards graduation: in v2 they had about 20 scalers, by the time they went for incubation they were up to 40 scalers, and now that they've graduated I think they're over 60 scalers, so it continues to grow over time. And as you can see, there are a lot of end users using the project: about 11% of Kubernetes users are using KEDA, Azure Container Apps is using the project in production, and AKS and OpenShift are offering a managed version of KEDA. So, shifting gears a little bit, now going into the working groups: we recently created the Wasm working group. There have been a lot of conversations around how WebAssembly can help cloud native and how WebAssembly runtimes can be used to run cloud-native workloads. It initially started with a PR trying to reach out to the community members who were interested in the space. We actually got a lot of interest, we came up with a charter document, folks started collaborating, and they created the working group. And right now they're having meetings almost every week; these meetings are recorded and get posted on the CNCF TAG Runtime channel on YouTube. And there's a large lineup of different presentations, so we got a lot of participation, and it's a very exciting field.
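As a rough illustration of the KEDA model described a moment ago, here is a minimal ScaledObject sketch that scales a deployment based on Kafka consumer lag. The deployment name, Kafka endpoint, and topic are made up for the example.

```yaml
# Minimal sketch of a KEDA ScaledObject (all names are illustrative).
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-scaler
spec:
  scaleTargetRef:
    name: orders-consumer        # the Deployment KEDA scales up and down
  minReplicaCount: 0             # scale to zero when there is no work
  maxReplicaCount: 20
  triggers:
    - type: kafka                # one of KEDA's many built-in scalers
      metadata:
        bootstrapServers: kafka.example.svc:9092
        consumerGroup: orders
        topic: orders
        lagThreshold: "50"       # target lag per replica
```

This is the "scale your apps, not the scaling internals" idea in practice: the application knows nothing about autoscaling, and swapping Kafka for SQS or a CI/CD queue is just a different trigger block.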
So the other working group that we recently created was the Special Purpose Operating System working group, and there's a lot of interest in this space as well. You can see that we had some presentations on October 5th, and we have a full lineup of different operating systems. And there are a lot of conversations about standardizing the APIs that talk to these different operating systems, so I'm excited to see this evolve, and we'll probably see a lot more over the rest of this year and next year. Some of the things we're thinking about addressing in the Special Purpose Operating System working group are, for example, standards for running containers, common APIs to manage these operating systems, speeding up OS provisioning for Kubernetes, and different standards for running specific things like WebAssembly modules on top of these operating systems. So another working group is the IoT Edge working group. The scope of this working group includes many different projects that are about workloads at the edge, for example KubeEdge and K3s. But now they're also collaborating quite a bit with the wasmCloud and WasmEdge communities, or the Wasm ecosystem generally, and we'll continue to see progress in this working group as well. This working group is not recent; it's been there for maybe about two, two and a half years, so there are a lot of different deliverables that have actually come out of it. One example is the Edge Native Applications Principles white paper, which basically talks about the differences between what it is to be cloud native and edge native. With edge native, there are some differences in that the applications need to be aware of the different constraints at the edge. For example, they don't have large amounts of CPU and memory, they have limited power, and they need to be more resilient to network outages or to not being able to connect to a centralized location.
They need to be more secure, for example having a lockdown mechanism in case somebody breaks into the closet where the devices are actually stored. So there's a variety of different principles that need to be taken into account, and this working group has created this white paper to educate the community. And now they're working on the next step of this white paper, which is the application design behavior white paper: how you put these principles into practice. This is currently in the works, and we expect it to be published in maybe the next month or so. So that's quite a bit of work happening in that working group. Another one is the Batch System Initiative, and this one has been there for maybe two years. But right now, with the advent of AI, there's a lot of overlap with this kind of high-performance workload, so we might see collaboration between this working group and the Cloud Native AI working group. Right now they're working on a batch landscape. As you can see, there are not that many projects; there are a few, but we expect this to continue growing. For example, there's one called Volcano that allows you to schedule batch jobs in Kubernetes, and some other projects that allow you to manage how these jobs get run across multiple sets of nodes in a Kubernetes fleet. But yeah, it's an exciting field, and I think we'll see more tiles popping up here in the next few months. They're also working on, or have created, a white paper on batch scheduling tools. The features and details around batch that they're looking at are scheduling policies, preemption, access to SDKs and APIs, and the different project characteristics within the batch space, like whether they're open source, whether there's active development, and whether these projects can become a CNCF project or already are one. And finally, we have the container orchestrator devices working group.
And this working group has been working on the Container Device Interface (CDI), a standard for all kinds of different container runtimes. The idea is to have support in all of them across the board, and as you can see, they've added support for containerd, they've added support for CRI-O, they've added support for Podman and Singularity, and they also have vanilla Docker support in the works. The artifacts that they've created, which I mentioned earlier, have been moved to the cncf-tags organization. Again, this is not specific to TAG Runtime, but this is the first working group, and group of people, who have actually moved artifacts to this organization in the CNCF. They're also converting some of the package names to tags.cncf.io URLs. So, what's next for TAG Runtime, and what are some of the things that we're looking at? Obviously we want more people to participate in the TAG. I think this is a common problem in a lot of open source communities, and it's no different in TAG Runtime. Some of the things that we can do are to revisit some of the old projects, have them present again and provide an update, re-engage them, or provide advice on how they can improve, or maybe pivot in different ways if they don't have a lot of traction with contributors. We'll keep on reaching out to the different projects in the different areas. For example, cloud-native AI is happening now; there are LLMs everywhere, so we'll continue to ride that hype. Another project that is interesting is OpenTofu. If you've followed some of the developments around HashiCorp and Terraform, their licensing was changed to the Business Source License. So a group of folks in the open source community decided to fork Terraform, and they've started this project called OpenTofu, which is an exact fork of Terraform, though it will evolve into something of its own over time. So we plan to engage them as well.
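Going back to CDI for a moment: the standard itself is essentially a small spec file that runtimes like containerd, CRI-O, and Podman read to learn how to expose a device to a container. Here is a minimal sketch; the vendor name, device name, and paths are hypothetical.

```yaml
# Hypothetical CDI spec (dropped into a CDI directory such as /etc/cdi/)
# describing one device.
cdiVersion: "0.6.0"
kind: vendor.example.com/gpu     # vendor/class pair, made up for illustration
devices:
  - name: gpu0                   # requested as vendor.example.com/gpu=gpu0
    containerEdits:
      deviceNodes:
        - path: /dev/vendor-gpu0 # device node injected into the container
      env:
        - VENDOR_VISIBLE_DEVICES=gpu0
```

The point of the standard is exactly this: a vendor writes one spec like the above, and any CDI-aware runtime can inject the device without runtime-specific plugins.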
We'll continue to look at the variety of different runtimes that may come up in the ecosystem; the Wasm runtimes are the ones that come to mind, or some of the ones that are used at the IoT edge. We'll also continue looking at different tools in the Kubernetes management ecosystem: how you manage multiple Kubernetes clusters as they grow and as they become more of a challenge. And obviously we'll continue to post updates on our TAG Runtime website on a regular basis. Yeah, with that, just feel free to reach out and talk to me, or talk to anybody in this room. Feel free to come up to me, and I'm happy to answer any questions. Thank you.