Okay, so a little bit about what we're going to cover today. We'll give you a brief TAG Runtime overview. Then we'll talk about some of the activities that the TAG has been involved with, some of the projects that have come and presented in our meetings. Then we'll dive into what the working groups are doing in the TAG. We have three working groups: the IoT Edge Working Group, the Batch System Initiative Working Group, and the Container Orchestrated Device Working Group, which works on the Container Device Interface. Then we'll talk about some of the future things we're thinking about in terms of work, how you can contribute, and how to get involved. Finally, we'll talk about how to reach out to us and how to start a conversation.

All right, so what is TAG Runtime? Out of curiosity, I went to ChatGPT and asked: do you know what TAG Runtime is? As a matter of fact, it did, and it provided a pretty good description, a few paragraphs. If you're interested in the natural-language response, you can go there and find out a little bit more. In essence, we do have a charter. TAG Runtime is there to help improve support for, and engage the community around, the different workloads in the cloud-native ecosystem. These workloads can be latency-sensitive workloads, like high-performance microservices; they can be batch-type workloads; or they can be low-power workloads, like the ones you run at the edge. Again, everything in the cloud-native ecosystem and context. We work closely with the CNCF TOC; we have TOC liaisons, and Nikita, next to me, is one of them. I'm one of the co-chairs, so we have chairs and we also have tech leads. This is a slide that shows some of the logos of the projects that have presented in our meetings. As you can see, there's a wide variety of them: operating systems, runtime shims, projects that allow you to run workloads at the edge, and so on.
We decided to loosely categorize these projects. We have the general workload-orchestration type of projects, like Kubernetes and Volcano. There are also projects that allow you to run things at the edge. Then we have the projects that relate more to runtimes and VMs; you can find containerd and CRI-O there. Additionally, there are the special-purpose operating systems, things like Flatcar and Talos, that help you run container images; basically, each is a lightweight operating system that makes it easy for you to patch. Then there's the serverless workload space, and Knative is one of the projects there, right now in incubation, an exciting space. There's also a wide variety of projects in the MLOps, AI, and machine-learning space, with a lot of excitement around large language models. There are several projects there, and you have things like KServe that allow you to serve machine-learning models. Then there's a working group category.

Now we'll talk about some of the activities. We got an overview of what TAG Runtime is; let's see what the TAG has been up to. We now have a shiny website of our own, thanks to support from the CNCF. If you head to tag-runtime.cncf.io, you can find all the information about the TAG: all our meetings, Slack channels, and so on. We got this website done last week or the week before, I guess, so it's very new and young right now, but we plan to add more information. If you want to add something, please feel free. We especially plan to add more around what the working groups are up to and the artifacts generated by the working groups, like white papers, which we will cover later in the presentation. Our hope is that we will get more contributors this way. I know other TAGs are probably more popular and have more folks, but our hope is to get more contributors to TAG Runtime.
The newest and shiniest thing on the radar recently has been a proposal for a Wasm working group. Wasm has been a pretty hot topic in the community recently, but there's never been a dedicated space for it. Heba, who is from Microsoft, recently got involved in the TAG, and she's been amazing at driving engagement as well as starting new initiatives. She proposed that maybe we should start a working group for Wasm, and we were like, let's do it, that sounds really cool. There's a GitHub issue open; if you scan that QR code, it will take you to issue 58 in the CNCF tag-runtime repo. Feel free to plus-one it if you are interested, and if you have specific ideas or opinions about what the working group should do, or topics that should be discussed there, please feel free to comment on the GitHub issue. I think Heba is working on a draft charter for the working group, so this is the right time if you have ideas that you want to include in the charter itself. I just want to call out that Heba started getting involved maybe a month or so ago, and she's been phenomenal at helping drive new initiatives like this. I don't think she was really involved in the runtime space specifically before, but she's been involved in a lot of other CNCF projects. So whether you're very new to the runtime space or already a veteran, if you want to get involved in a particular idea or topic, where you want to set up a working group or just a space where you can talk to others about it, please feel free to reach out to us, and we're happy to chat more about it any time. And if you know someone who wants to get involved, too, feel free to reach out to them, and they can reach out to us; that also works.

So in the last year alone, like Ricardo said, we've had a bunch of projects present at our meetings. First of all, I just want to say, there's been such massive interest recently that we've had to...
So we had bi-weekly meetings before, but we had to schedule ad hoc weekly meetings, and I think we might just move to a weekly cadence now. So the interest has been great. We have a bunch of projects present at our meetings, from container runtimes through to edge and so on. For example, we invite projects that are already CNCF projects in the runtime space, like containerd or CRI-O, or even Co, which was a Sandbox project, or OPCR, the OPA container registry, which is also a Sandbox project. And then we also invite projects that want to join the CNCF. Maybe they are in the runtime space, they're already open-source projects, and they're interested in joining the CNCF, but they're not quite there yet. So we ask them to present in our meetings and give them suggestions on how to improve things: how they can get more diversity of contributors from multiple companies, increase their adopters, and things like that. That's one of the ways we engage with both CNCF and non-CNCF projects. Another reason projects present at our meetings is if they're moving between levels. For example, CRI-O and KEDA have both applied for graduation, so they've presented at our meetings. And Flatcar has applied for incubation, to join the CNCF, so they presented recently as well. So we engage with projects when they want to move levels, and this is actually a great way to know what's really going on in the CNCF ecosystem. I won't go into detail on the past presentations, but one specific project I wanted to call out is Clusternet. I don't really recall whether they applied for Sandbox or to move levels to incubation; that was around last year. They couldn't get through on the first try, but based on the feedback from the TOC as well as TAG Runtime, they reapplied, and this time it got through.
So they finally joined the CNCF, too. Now we want to talk a little bit more about two specific projects: one is Flatcar and the other is KEDA. They presented at our meetings because they are moving levels. Flatcar has applied for incubation, to join the CNCF. If you're not familiar, Flatcar is a Linux distribution for container workloads, built with high security and low maintenance in mind. But let's first try to understand what a container Linux really is, and how it differs from a regular Linux. The first thing is that when we're talking about running workloads in containers, all of the dependencies are embedded within the container itself, so you only need the packages on the host that are required to run the containers themselves; it's a minimal distribution for containers. Second is the focus on security. The actual partition that hosts the operating system files is immutable, so security threats that try to modify the operating system won't really affect you. Then there are the automated updates: the way updates work is atomic, so you're either updated or you're not, and there's also a provision to roll back if your update doesn't go through. And finally, there's the concept of nodes and provisioning, which follows a paradigm of declarative configuration. If you say your node is on version X, then it's on version X, and you also know which CVEs are fixed in it and which bug fixes come with that version. So overall, it's about simplifying secure operations, with security and scalability in mind. I won't go into detail on all of this; I just want to call out a few things: they do have an LTS channel with 18 months of support, there's new GPU support, and you can also run it in FIPS mode.
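To make the declarative-provisioning idea concrete, here is a small, hypothetical Butane config of the kind Flatcar consumes at first boot (after being transpiled to Ignition JSON); this is only a sketch, and the user key, hostname, and file contents are made up for illustration:

```yaml
# Sketch of a Butane config for Flatcar; all values are illustrative only.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... user@example   # hypothetical key
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: edge-node-01                 # hypothetical hostname
```

Because the config is applied once at first boot and the OS partition itself is immutable, the node's state matches what was declared, which is the "if your node is on version X, then it's on version X" property described above.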
They've also been working with other projects like Fedora CoreOS, because they share certain upstream projects. One of the bigger things was the Cluster API support, because it also involved adding Ignition as the provisioning mechanism. And there's a bunch of other stuff; I'll try to keep it quick. There's active support and integration with various platforms, including major cloud providers and Kubernetes installers. One major thing was also that service providers like Giant Swarm use Flatcar as a base OS. So they have a PR, PR number 911, which proposes Flatcar for incubation. We're working on all the due-diligence documents and checks right now, but if you'd like to show your support, or if you have questions or ideas, please feel free to comment on that PR.

Okay, so a little bit on the project KEDA, which also presented; they're applying for graduation. Some key statistics. KEDA is a project that allows you to scale your deployments and jobs; they want you to focus more on the scaling and less on the internals of the scaling, making it easier for people to just scale based on metrics. One of the things they're doing is adding more scalers. These are the interfaces that allow KEDA to talk to something like Apache Kafka, AWS SQS, or a specific metrics server, and identify whether a pod needs to scale up or down. Right now they have 50-plus of these scalers, and when they applied for Sandbox, they had about 19 or 20. They're also thinking about adding production-grade authentication, which is really important for real-life production workloads. And they're adding support to scale your deployments to zero, or to pause autoscaling, which might be useful for edge workloads. They're also adding support for ARM, which is used in a lot of edge devices. And this is a diagram that shows the growth of KEDA's scalers.
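To give a flavor of how this looks in practice, here is a hedged sketch of a KEDA ScaledObject using the Kafka scaler; the Deployment name, broker address, and topic are hypothetical:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-consumer-scaler      # hypothetical name
spec:
  scaleTargetRef:
    name: orders-consumer           # hypothetical Deployment to scale
  minReplicaCount: 0                # scale to zero when there is no work
  maxReplicaCount: 20
  triggers:
    - type: kafka                   # one of KEDA's 50+ scalers
      metadata:
        bootstrapServers: kafka.example.svc:9092
        consumerGroup: orders
        topic: orders
        lagThreshold: "50"          # target lag per replica
```

KEDA watches the consumer-group lag and drives the Deployment's replica count from it, so you declare the metric and thresholds rather than implementing the scaling mechanics yourself.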
Looking at that growth chart, you can see that before Sandbox they had maybe 15 scalers, and now, applying for graduation, they have about 55 or 60. It makes you think about some of the things the CNCF is looking for in a project in terms of growth and adoption. Some other key statistics: they've had about 280% growth in the number of users, which is 42 listed end users. And these are only the listed end users, which means there are probably a lot more that haven't been accounted for. Additionally, about 12% of Kubernetes users are using KEDA, which is a pretty large number, up from 4.7%. Azure Container Apps runs KEDA in the backend, so it's at the core of a service from a major cloud provider. And AKS, the Azure Kubernetes Service, and OpenShift are creating managed KEDA offerings; these are in preview right now, and we expect them to land pretty soon.

So now let's dive into some of the working groups that fall within the TAG. The first one is the IoT Edge Working Group. This working group has been interacting with a lot of different projects in the CNCF ecosystem, and there's a lot of excitement around WebAssembly; you heard from Nikita that we're creating a WebAssembly, or Wasm, Working Group. Some of the projects they have been interacting with include SuperEdge, KubeEdge, and K3s, for running Kubernetes at the edge. And there are other projects like WasmEdge that allow you to instantiate and run WebAssembly modules at the edge. Additionally, they're talking to some other WebAssembly runtimes that are part of the Bytecode Alliance. They're also integrating with a project called Akri that allows users to automatically detect devices at the edge. For example, you could have a camera or a sensor at the edge, and if that camera goes faulty, this project would automatically detect it.
Or if someone plugs in a new device, this project allows you to detect that device as it comes online. Another exciting thing is that the IoT Edge Working Group is working on an edge-native application principles white paper. This white paper talks about the similarities and the differences between cloud-native applications and edge-native applications. You can see there are a lot of similarities in terms of observability: both can use things like OpenTelemetry, so they're very similar there. For manageability, both types of applications can run in containers, under Kubernetes or maybe another orchestration system, so they're very similar in that aspect, too. The differences are mainly focused on aspects of being in a constrained environment at the edge. Maybe they're in a box in a remote location, so sometimes the applications need to be more resilient; they need to be aware of that. They need to check for network connectivity, and some of those network connections may not be very reliable; sometimes they have high bandwidth, sometimes low bandwidth. You cannot rely on scaling up and down just like you do with a centralized application. Those are some of the things the white paper talks about, and I encourage you to read it if you want more details. They also describe nine edge application principles, mainly focused on the applications' awareness of things like the hardware, as I mentioned before, but also general awareness of where they're located. Additionally, there's the aspect of being aware of at-scale management, because you may have hundreds, if not thousands, of different edge devices that need to be aggregated into a central location. There are projects like KubeEdge that help with that, but these types of applications need to be aware of it. Cool. Thanks, Ricardo.
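That resilience point, an edge application coping with unreliable connectivity, can be sketched in a few lines. This is a generic retry-with-exponential-backoff pattern such an application might use, not anything prescribed by the white paper; the function and parameter names here are made up:

```python
import time

def with_retries(op, attempts=5, base_delay=0.5, max_delay=30.0, sleep=time.sleep):
    """Run op(), retrying on connection failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            # Back off 0.5s, 1s, 2s, ... capped so a long outage
            # doesn't grow the wait without bound.
            sleep(min(base_delay * 2 ** attempt, max_delay))
```

An edge agent would wrap its calls to the central control plane in something like this, instead of assuming the network is always there the way a data-center application often can.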
So I want to talk a little bit more about the Batch System Initiative Working Group. To give some context on why this working group was started: a lot of users right now are trying to migrate their batch workloads to cloud-native environments, but the level of support for batch workloads varies across the batch systems, and migration between them is really painful. So this working group was started to create a specification for batch workloads, so that the projects in the batch ecosystem today can interoperate and migration across batch systems becomes pretty seamless. What they're currently working on is a white paper, a work in progress, on the various batch scheduling tools available in the ecosystem today. If you start looking at the CNCF landscape or other blog posts, there are a lot of projects out there, and there isn't really detailed information on which project you should choose if you're, say, a system architect trying to decide which one to pick up. So this is what the white paper looks at. For example, what does the scheduling policy look like? Is it priority-based? Is it first-come-first-served? Is it user-configurable, and so on? Is preemption supported? Is it single-cluster or multi-cluster? What kind of SDK or API support does it have? It also looks at the project characteristics: is it open source, is it a CNCF project, and at what level, what level of Kubernetes integration does it have, and so on. There's a Google Doc that has already started to look into this, so if you're interested in contributing to this effort, I highly recommend reaching out to the Batch System Initiative Working Group. Along similar lines, there's also the COD, the Container Orchestrated Device Working Group, which works on the Container Device Interface, or CDI.
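To give a flavor of what that CDI specification looks like, here is a minimal, hypothetical device spec of the kind a vendor might drop under /etc/cdi/; this is a sketch, and the vendor name, device name, path, and environment variable are made up for illustration:

```yaml
# Sketch of a CDI device spec; names and paths are illustrative only.
cdiVersion: "0.6.0"
kind: vendor.example.com/device
devices:
  - name: gpu0
    containerEdits:
      deviceNodes:
        - path: /dev/vendor-gpu0        # device node to expose in the container
      env:
        - VENDOR_VISIBLE_DEVICES=gpu0   # env var the vendor's stack reads
```

A CDI-aware runtime can then inject the device into a container when asked for the fully qualified name vendor.example.com/device=gpu0, without the runtime itself needing any vendor-specific logic.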
What they essentially work on is this thing called CDI, the Container Device Interface, which is a specification to make it easy to expose third-party devices to different container runtimes. It also looks into device plugins and how to make them easier to use. Sorry, let me move ahead in the interest of time. The main thing they've been working on is CDI support: they're looking to add CDI support to multiple container runtimes. They've added it to containerd and CRI-O, and Docker is currently in progress and under discussion. The other main thing they're looking to improve is the CDI integration in Kubernetes itself. Right now it's mostly a bunch of device vendors, runtime maintainers, and Kubernetes SIG Node contributors working on this, but if this is something you're interested in, I also highly recommend joining.

Now, with regard to what's next for TAG Runtime. All right, thank you. So what are we thinking about doing next, and how can you help? Some of the activities that I think will be helpful: we want to recruit more contributors. We have some co-chairs and some tech leads, but we need more. There's a lot of activity and a lot of projects. If you were at the keynote today, you could see the number of projects that the CNCF is working with and that are trying to join the ecosystem, and there's a lot of excitement around the different technologies. So if you're excited about how to run cloud-native workloads, we want to hear from you. Just one quick thing around that point: the TOC is also looking to change how reviews work. There are Sandbox projects and there are incubating projects, and what we have right now is, I think, annual reviews only for Sandbox projects, but we want to include incubating projects as well, and we want to have the TAGs involved in driving these. But we just don't have enough people in TAG Runtime to help with the number of projects that we have in the runtime space.
So we really, really need people if we want to make the CNCF landscape scalable. Yeah, and like she said, traditionally the TOC has been doing these annual reviews of Sandbox projects, and they've been holding those meetings on a monthly basis. But now some of this work is actually going to be done by the TAGs, and TAG Runtime is one of them; there are a lot of other TAGs, too. So there will be lots and lots of new incoming projects. Additionally, we want to revisit some of the existing projects. One example we mentioned before: containerd presented in our meetings. containerd is a graduated project, so they're very mature, with hundreds of adopters, but they are working on new things. They have a new component called runwasi to run WebAssembly runtimes. Just because they're in a graduated state doesn't mean they stop, so we want to continue revisiting these projects, helping them, and engaging with the community. Additionally, we want to continue working in these very exciting project areas: the ML and AI space, and the edge. There's a lot of talk about LLMs now, large language models, and that's closely related to how you run machine-learning workloads with these huge models, end to end, in cloud-native environments. We also want to continue to reach out to runtimes. There are container runtimes, and there are new runtimes related to WebAssembly, and there could be something else in the future, so we just want to be out there looking for new and exciting projects. We talked about Wasm, and we talked about IoT and edge; that's another interesting area. There's also an area around Kubernetes tooling: projects that help you manage large fleets of Kubernetes clusters, or manage applications that run on a large fleet of Kubernetes clusters, and how you place them in different locations and different data centers.
How do you make it more redundant? There's a lot of tooling around that, and a lot of exciting projects there, like Clusternet, Open Cluster Management, I think, and Karmada. So we'll continue to reach out to these projects. And finally, we want to continue working on the TAG Runtime website. We have our first version up, which we just showed you, but there's a lot more we can add: there's a blog section we can develop, things like an About page, all those little things to keep improving.

So with that, we'd like to thank you for joining, and if you have any questions, feel free to reach out to any of us. We are on Slack and Twitter. I encourage you to join the TAG Runtime mailing list and the TAG Runtime Slack channel. Just ask any questions; we're happy to answer, happy to get you involved, to get you started, and to point you in the right direction, and we can go from there. Additionally, we have meeting notes, so if you want to go back and look at videos of some of our previous meetings, you can also do so. So with that, we'll open it up for some questions. Thank you.

Yeah, we meet on the first and third Thursday of every month. We're actually thinking about expanding that, because we've been getting a lot of requests from different projects to present. You saw, again, the number of projects that are part of the CNCF, and there are other projects that are not even part of the CNCF that are requesting to present. So we might increase it to a weekly cadence, but we haven't done that yet. We also need more folks to help us out with contributions; for example, we need people who are interested in being a meeting scribe, to take notes during the meetings.

Is there any chance of a recording? Because yesterday's TAG Runtime meeting was a really good discussion. Oh, yeah, we had a TAG Runtime meeting here yesterday, and unfortunately there wasn't any recording.
But it was more of an open discussion; I think some of you might have been there. For the next KubeCon, we might make it a little more organized, with an agenda. But in any case, it was really good, and a lot of folks interested in the space were there, including folks that are not even in the CNCF; some of them work with the OpenStack Foundation, in the Kata Containers community. So it gives you an idea of the breadth of different projects and people interested. Well, thank you very much.