Hi everyone. As TAG Runtime maintainers we're used to not having a huge turnout, so this time around we got mostly speakers, just to keep ourselves company in case we feel lonely. So over to you, Alexander. All right, let's start, welcome. I know it's been a long day; we are the last in the row of sessions for today. We would like to talk about TAG Runtime, the very interesting and very wide range of things that we are doing. On stage we have myself, Alexander Kanevskiy, plus Rajas, Ricardo and Daniel. All of us are part of TAG Runtime, but we are doing different things, and we are going to speak about the parts we are most involved in. Very briefly: an overview, a few things from our working groups, and then how to get involved with all of those. Let's start with TAG Runtime. You probably know how the scope of the SIGs in Kubernetes is defined, and you've probably seen some of the TAGs in CNCF, which are also scoped to a particular subject. On one hand we are a standard TAG: we have Slack, we have regular meetings, we have chairs, we have technical leads, all the standard things. What is different for us is the scope of the projects we are covering. What you can see from the mission statement is that it covers the whole range of workloads and how to get them running, not only on Kubernetes but across the CNCF ecosystem. Just to glance at what it can be about: you see projects of very different scale. You can see monsters like Kubernetes, you can see something small like CRI-O, you can see something very different from that perspective, like Confidential Containers, and you can see things like KubeEdge, Akri and so on. All of this can be summarized in a few particular areas we are looking at: general orchestration, like Kubernetes; VMs and runtimes, so again CRI-O, Kata Containers and similar; and edge, all the projects related to it.
Then, what is happening on your host: specialized OS images like Flatcar, Talos, and many other projects specific to cloud providers or different companies, which are also part of the CNCF landscape. And AI and machine learning workloads; we all have an interest in those. So far it's some working groups, but looking at the hype we currently have around all of this, maybe we will soon see a TAG AI or something like that. With that huge scope, you can understand why we have working groups that concentrate on the specifics of each of those areas. So, a little bit about some of the project presentations that we've had in the TAG. The big one. We have a variety of different tools and projects. In terms of containers, we've had presentations from projects like CRI-O and containerd. For workloads, you have things like the KubeEdge graduation, or OpenTofu, which is an open source fork of HashiCorp Terraform. We also have Kubernetes-related projects like KEDA for Kubernetes autoscaling, KubeStellar for Kubernetes management, and KubeClipper, also Kubernetes management. So these are some of the examples. Operating systems, the special-purpose operating systems like Alex was mentioning, with examples like Flatcar or Kairos. And finally the interesting topic that is very relevant now, with a lot of people talking about AI: projects like KubeRay, which helps you scale AI workloads such as training, or k8sgpt, which allows you to monitor your Kubernetes clusters using natural language, connected to an LLM. One example project that presented is KubeRay, and this is, again, an interesting project in the AI space. It's composed of different parts. You have the core, with the RayCluster and RayJob resources; that's the main part of the project. And there are also some community-managed optional components that are external, like the KubeRay API server, the KubeRay CLI and the Python client.
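To make those core components a bit more concrete, a minimal RayCluster manifest for the KubeRay operator might look roughly like this. This is a sketch, not from the talk: the image tag, replica counts and names are illustrative assumptions.

```yaml
# Hypothetical minimal RayCluster for the KubeRay operator.
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: demo-cluster
spec:
  headGroupSpec:            # one head node serving the Ray dashboard/API
    rayStartParams:
      dashboard-host: "0.0.0.0"
    template:
      spec:
        containers:
          - name: ray-head
            image: rayproject/ray:2.9.0
  workerGroupSpecs:         # one or more autoscalable worker groups
    - groupName: workers
      replicas: 2
      minReplicas: 1
      maxReplicas: 4
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-worker
              image: rayproject/ray:2.9.0
```

A RayJob resource then references or creates a cluster like this and submits a job to it; the API server, CLI and Python client mentioned above are optional front-ends on top of these CRDs.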
Another project that presented is Flatcar. As mentioned before, it's a container-optimized Linux, which means it's a minimal distribution, a minimal operating system for running containers. It has a minimal attack surface, it's immutable, it uses A/B partitioning, which means updates can also run automatically, and you can declare your configuration with an Ignition file before and during provisioning. But what I want to show is not focused on Flatcar itself, but how it fits into the bigger picture with other CNCF projects and other working groups. Flatcar contributed to Cluster API (CAPI). Currently, the way you create an image in CAPI has some complexity around it, because you need to create an image that has three dimensions of complexity: the Kubernetes version you want to run, the cloud provider you want to run on, and the OS image. All three of these parameters together create a specific image that you need to maintain, and when you update, you must replace the node itself. That's kind of painful, so we started to work with systemd-sysext. What's interesting is that it really simplifies things: instead of having a matrix of all these different things working together, you get linear complexity, because you take the base OS from the cloud provider and on top of that add the bits that are interesting for you, like the Kubernetes version and so on. You can also create updates that don't require creating a new node; you can do in-place updates as well. It basically provides an overlay for /usr or /opt that can be used.
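For context on the Ignition provisioning mentioned above: Flatcar consumes Ignition JSON, which is typically written as a Butane YAML file and transpiled with the `butane` tool. A rough sketch, with the SSH key and unit contents as illustrative placeholders:

```yaml
# Hypothetical Butane config; transpile with `butane` to Ignition JSON
# and pass it to the machine at first boot.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA-example-key
systemd:
  units:
    - name: demo.service
      enabled: true
      contents: |
        [Unit]
        Description=Demo workload started at provisioning time
        [Service]
        ExecStart=/usr/bin/docker run --rm --name demo nginx
        [Install]
        WantedBy=multi-user.target
```

The point is that the whole machine configuration is declared up front, which fits the immutable, automatically-updating model described above.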
Now, it works well on Flatcar because it's an immutable operating system, and sysext requires that /usr be read-only. We then collaborated upstream, and now systemd-sysext has a feature that allows operating systems that don't have a read-only /usr partition to also enjoy this functionality. So it has created an interesting story upstream with CAPI and systemd. Another story that comes out of systemd-sysext is a collaboration with Wazel. There's a link here to a library of different baked sysext images. Basically, in the declaration file I mentioned before, you can point to a link to pull the image from, and run whatever distribution of the software you are interested in. So it simplifies things a bit. So, a little bit about our working groups. We recently created the Cloud Native AI working group, and we're very excited about this. AI is the conversation happening everywhere. Generative AI made a big splash last year with the release of ChatGPT. One of the things that we're very excited about is that we created the Cloud Native AI white paper. This is our first deliverable, but we're thinking about doing many more things. So check it out; it was just published two days ago. If you have any feedback, send us a Slack message, or if there is any way we can help, just let us know. Another thing that we're working on is the Cloud Native AI landscape. As you may be aware, there's a big landscape in the cloud native ecosystem, and I think sometimes it's hard to read. What we're trying to do is constrain that landscape to AI workloads in the cloud native ecosystem, to make it easier for people to find the different projects and to get started. So that's something that's at the top of our minds. Additionally, we have a repository that we're thinking about as a place to store artifacts or things that the community is working on.
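To illustrate how a sysext image plugs into the host: an extension image carries a /usr (or /opt) tree plus a release file that systemd-sysext validates before merging. A rough sketch of the layout, with the extension name and contents as illustrative assumptions, not from the talk:

```text
# Hypothetical layout of a sysext extension providing kubectl:
kubectl-sysext/
└── usr/
    ├── bin/
    │   └── kubectl
    └── lib/
        └── extension-release.d/
            └── extension-release.kubectl

# extension-release.kubectl must match the host's os-release,
# or use ID=_any to apply on any distribution:
ID=flatcar
SYSEXT_LEVEL=1.0
```

Once such an image (or directory) is placed under /var/lib/extensions, `systemd-sysext merge` overlays its /usr tree onto the host's /usr; this is the mechanism that turns the Kubernetes-version/cloud-provider/OS image matrix into a linear set of add-on layers.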
Examples of that are small tutorials, or things like a workshop that can be created for people to learn AI in an easy way, for example the full lifecycle management of AI, which includes training and serving machine learning models. Another thing we're thinking about is starting a reference implementation, to make it easy for people to understand AI across the board, including the machine learning lifecycle. Finally, we are thinking about reaching out to open source projects related to the cloud native AI ecosystem, and these include AI-specific projects that may be part of LF AI & Data. What we're trying to do is create collaboration across both foundations so that we can work together and people can understand each other from both sides. And with that, think about the problem in organizations where you have data scientists on one side and, on the other side of the organization, developers or DevOps engineers or operations engineers, and they don't quite understand each other. The idea here is to bridge that gap and make things better and more efficient in the future. So, a little bit on another working group that we have, the Wasm working group. Daniel already alluded to some Wasm features in the Flatcar slides, but they're working on very exciting things. They've had presentations from projects like Wasm Vert, which allows you to virtualize Wasm components. One of the big things they're working on is OCI support for Wasm modules. You may have heard that the Wasm ecosystem has created a model to share components between different applications. It's called the Wasm component model, and OCI is a big part of that, because it means these components can be stored in artifact stores that are OCI compliant, just like containers.
And they're working on other things like the Wasm observability standard and the WASI cloud interfaces, which include Wasm modules that allow users to talk to a cloud provider, like S3 blob storage, or a way to talk to a messaging application, or a way to set up an HTTP service. Then there's another working group, the Special Purpose Operating System working group. We started it a couple of months ago, and as you can see, we invited anyone interested to show what they're doing: for the first presentations, any operating system interested in being part of the group just joined and demoed. There are links there to all the different meetings. If anyone in the crowd wants to join, you're always welcome; we're open for new attendees. I just want to share what we do and how to get involved. There are two QR codes. The small one is for a panel tomorrow, in case you are still here, because it's the end of the day, I must say. And if you are interested to join, there is a scheduled meeting talking about, well, whatever we want to talk about: any interesting standards, issues that we run into, what we consider a special-purpose OS for containers, and so on. The other QR code is just to get in contact, about the meeting and so on. We're kind of at a starting point, so if you want to get involved, it's a good time. So, a little bit about this other working group that we have, the IoT Edge working group. The scope of the IoT Edge working group includes a lot of cloud-native projects you may have heard of: KubeEdge, OpenYurt, K3s. It's a very relevant cloud-native space, where you run workloads in a constrained environment. And there are different requirements for these constrained environments, like smaller modules, a smaller footprint, or things like lower energy consumption requirements.
One of the example projects that is part of that ecosystem is Akri, which helps you discover devices automatically at the edge. So you plug in a device and it's automatically discovered; you remove that device and it's automatically removed. This working group is also working on the similarities and differences between what is cloud-native and what is edge-native. With cloud-native, a lot of these applications are based on observability at a higher level and are managed in a centralized location, whereas edge-native applications have considerations for being in that constrained environment. For example, you cannot scale as much as in a centralized location. Your security is different too, because maybe your Kubernetes or K3s cluster is located in a small box at the edge, maybe at a toll booth or some place where there's a camera and a little box. So the security considerations need some sort of lock there, or some sort of encryption on access to that mechanism. What we're seeing here is the differences between cloud-native and edge-native, and they're helping out with this. Another working group is the Batch System Initiative. This focuses on large-scale batch workloads, which are very relevant to AI now, especially for training workloads. They created a landscape for these types of projects. You have projects like Volcano, which was mentioned before. There's one called Armada and there's another one called Karmada; they're both related to orchestrating batch workloads. And they've also created a white paper on batch scheduling tools, looking at the different features and characteristics of these tools and how they can be used to run these workloads more efficiently. All right. The next one is the one I am personally passionate about.
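As a rough illustration of that Akri plug-and-play flow: a Configuration resource tells Akri's discovery handlers what to look for, and when a matching device appears, Akri exposes it and can schedule a broker pod against it. This is a sketch under assumptions, not from the talk; the API version, udev rule and broker image are illustrative.

```yaml
# Hypothetical Akri Configuration discovering USB cameras via udev.
apiVersion: akri.sh/v0
kind: Configuration
metadata:
  name: usb-camera
spec:
  discoveryHandler:
    name: udev                       # built-in udev discovery handler
    discoveryDetails: |
      udevRules:
        - 'KERNEL=="video[0-9]*"'    # match video devices as they appear
  brokerSpec:
    brokerPodSpec:
      containers:
        - name: camera-broker
          image: example.com/camera-broker:latest
  capacity: 1                        # how many pods may share one device
```

When the camera is unplugged, the corresponding Akri instance and its broker go away, which is the automatic add/remove behaviour described above.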
So, the Container Orchestrated Device working group. It's all linked to what was said previously. Nowadays we have a difference in the workloads: most of the new ones seem to be machine learning and AI, and obviously all of those require accelerators and access to those kinds of devices. This working group is also about collaboration, but a slightly different one: between the companies, between the hardware vendors, on how to get the best user experience for utilising accelerators. And actually not only accelerators, but hardware overall, in the CNCF ecosystem. We are not very good at producing white papers; instead, we produce specs and libraries. We were the first repository created under the new organisation in CNCF, cncf-tags, and thanks to our liaisons who helped us do it. We are trying to keep all the information there. We have several fellow travellers; you might see some talks here at this KubeCon, and maybe you can watch them later in the recordings. Search for talks about hardware, like CPU, DRA, GPUs; more or less it all converges with what we are doing in the background. We have a few updates. The new release of our spec is coming out. We are constantly trying to fix more corner cases and support more advanced usage scenarios, like user namespaces, where you are not running containers as root to access your accelerators, and so on. Soon it will be published. Where are we working? Where are these things used? Short answer: everywhere. Let's start with non-Kubernetes use cases. If you are on Red Hat systems with Podman, if you are on other systems with Docker, if you are in the HPC world with OCI runtimes, for all of those we work tirelessly to get it enabled. You can just ask, "I want GPU zero", and in the background it resolves what GPU zero is actually about. In Kubernetes, one thing you need to know is that we think what we are doing is the basis of DRA, Dynamic Resource Allocation; well, DRA is everywhere at this conference nowadays.
With CDI, we think we are producing the things that enable DRA from a lower layer of the stack. We also did the implementation for device plugins, so even the older accelerator device plugins can utilize this new approach of communicating what an accelerator is about. And we have several fellow travelers, as I already mentioned. We have NRI in the container runtimes, both CRI-O and containerd; those help to tune your native resources like CPU and memory, to get the best out of the rest of the system, like accelerators and other peripherals. We are working on some KEPs; have a look, you will get a link later on. Please join if you have some performance or hardware virtualization topics you might be interested in. All right. So we've looked at what TAG Runtime is up to and what's happening with all of the working groups. Now let's look at how you can get involved in contributing to TAG Runtime. One of the aspects where we would like more contributions is reaching out to other projects. Reaching out to other projects looks something along the lines of opening an issue or starting a discussion at an open source project's GitHub repository, which basically tells them: hey, we are from TAG Runtime, we would like you to present at our TAG Runtime meetings. What these presentations look like is basically an update or an overview of what the project is up to, and how it is relevant to the cloud native ecosystem. These projects are mostly focused on the runtime bits, on how to run workloads on cloud native infrastructure, and whether they are aligned to things like edge, artificial intelligence, container devices, Wasm, and so on and so forth. This also helps us see whether we can invite those projects to be part of the CNCF landscape. And this also applies to existing projects in CNCF, helping them move through the CNCF levels.
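To give a flavour of how "I want GPU zero" works with CDI: a vendor drops a spec file under a directory like /etc/cdi, and a runtime such as Podman can then be asked for the device by its qualified name. This is a sketch; the vendor name, device node path and environment variable are illustrative assumptions, not a real vendor's spec.

```json
{
  "cdiVersion": "0.6.0",
  "kind": "example.com/gpu",
  "devices": [
    {
      "name": "gpu0",
      "containerEdits": {
        "deviceNodes": [
          { "path": "/dev/examplegpu0" }
        ],
        "env": [
          "EXAMPLE_VISIBLE_DEVICES=0"
        ]
      }
    }
  ]
}
```

With a spec like this in place, something like `podman run --device example.com/gpu=gpu0 …` is roughly how the request gets resolved: the runtime looks up the name in the CDI registry and applies the container edits, so the workload never needs to know which host device node "GPU zero" actually is.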
So these are some of the avenues you can use to reach out to projects. One easy trick is to go to landscape.cncf.io, figure out projects which are not part of CNCF but are listed on the landscape, and then try reaching out to them, mostly from, say, container registries, orchestration, provisioning, things like that. The other aspect is interacting with end users. Most of the contributions in TAG Runtime are maintainer-specific right now, but a gap exists: we are looking forward to talking more and more with end users. For example, the working group's AI white paper that came out had a lot of maintainers co-authoring it, and we would like more contributions from end users. So if you can help us start conversations with end users, to see what's working out for them and what's not, that's another great avenue. The End User TAB is a way you can get engaged with end users and see how we can build synergies between the TAB and TAG Runtime. Moving on: staffing the TAG Runtime booth at KubeCon. Thanks to Stephen, who's over here in the crowd, we have a TAG Runtime booth. We started this at KubeCon + CloudNativeCon in Chicago last year, and now here in Paris we have another part-time booth. Staffing the booth, being there, maybe presenting an update from a working group, or just helping get people introduced to TAG Runtime, would be a great value add. So that's another avenue. The other aspect is the annual sandbox review. To give you some context around this, early last year the Technical Oversight Committee in CNCF came up with a proposal to have TAGs help review annual sandbox reports.
These are the reports presented by sandbox projects on what the project has been up to, what their roadmap looks like, what the adoption has been, and so on. As TAG members, we had to give recommendations on whether we want the project to continue as part of the CNCF sandbox, whether the project is ready to move from sandbox to the incubating stage, or whether the project has concerns and how we can remediate those concerns. So we had to work with the TOC liaisons that were part of TAG Runtime. As part of this, we got to know a lot of projects that are out there in the landscape. TAG Runtime had the most projects, and clearly we couldn't meet the deadline for reviewing all of them; we got help from other TAGs as well, which was great. But this was a great avenue for the community to come together and collaborate on all of these things. After this exercise we did a retro and figured out that it was a very resource- and bandwidth-intensive exercise, so we are trying to automate these reviews. But I just wanted to call out all of the contributions that went into this. So watch out for such avenues, which may come up later on, and you can help us with these or drive these efforts. Speaking of the Technical Oversight Committee, a lot of contributions are about helping the TOC in CNCF. To give some more context: TAGs are supposed to provide updates to the TOC, and as part of the process, these updates have moved from sync meetings to async updates. At TAG Runtime, what we've done is collate a document with all of the updates that go out every month. So you can help us bubble up these updates: all of the TAG leads bubble up updates for the month and then send them out to the TOC.
These updates may include what projects have presented to the TAG that month, what the working groups are up to, whether the TAG is working on something else like a white paper, any contributions that are forthcoming, whether we need help in any particular area, and so on. So just helping bubble up this information and sending it across can be a huge value add as well. Recommendations to the TOC about projects moving levels is another great one. In the CNCF ecosystem, the TAGs operate roughly as reviewers while the TOC operates as the approver for a project. Here's an example where we got pinged by the TOC asking for a recommendation on the Koordinator project. This basically means we get to have our say in what projects get into CNCF, what projects transition through the levels of CNCF, and so on. So if you want a say in projects moving levels, this is a great avenue as well. The other aspect is to start attending TAG Runtime meetings. But the TAG Runtime meeting is not just yet another meeting; it's the coolest meeting out there. This is the one where all the project maintainers in the purview of runtime come and present their projects; you get to interact with the maintainers, ask meaningful questions, and get involved with the projects as well. We meet on the first and third Thursday of each month at 8 a.m. Pacific. You can scan the QR code and navigate from there. We've talked about the working groups, so you can get involved in any of them; that's a great avenue as well. Another bit is the contributor ladder for TAG Runtime. Your contributions to TAG Runtime may roughly look like this: you join the TAG, you start attending meetings, you start reaching out to projects. Maybe you start contributing to white papers, and you get involved in some of the TAG deliverables.
Then you help with leading initiatives, and eventually you become a lead, maybe a co-chair or a tech lead, and so on. Your journey may not look exactly like this, but it's an overall summary of what happens here. This is an effort that we're trying to drive across all the other TAGs as well, to get to a contributor ladder that is cross-TAG and not specific to one; I just wanted to call that out. Another role that's coming up is TAG ambassador. These are CNCF ambassadors who can get involved with TAGs as well. If you are interested in this, feel free to chime in on the issue. This is what all of us would look like seeing all of you attend a TAG Runtime maintainer session at KubeCon, and people attending our meetings. So thank you for making it to this session; we really appreciate it. We hope to see all of you, or at least some of you, at our TAG Runtime meetings, helping us out in the TAG as well. Thank you, one and all. We're open for questions now. Yeah, thanks again for the talk, I learned a lot. I was just curious about the project outreach part. I got the impression that anyone who's starting to get involved can help out with that effort, but I'm curious what the process of doing so might look like. Do we reach out on the TAG Runtime channel saying, here is an XYZ project that I feel could be interesting if presented at TAG Runtime? Is there a template-ish thing, and what is the process in general? Yes. The question was: what's the process for project outreach within the TAG? It's pretty wide open. One of the things I do is scout GitHub repositories for projects. I connect with other community members, I look at newsletters or things that are out there, and I find a place where I can contact that project. Typically that's the GitHub repository of the project.
And what we do is actually open an issue or a discussion, and we encourage the project maintainers to join a TAG Runtime meeting anytime in the future. We have a fixed meeting time, but they're free to pick the date when they want to present and add it to the agenda. So it's very open, and they get the time to present. I think it varies between community members how they reach out to these different projects. Some community members might be more involved with other working groups or Kubernetes groups or other open source communities, and they have their own means of contacting projects. But the idea is to constantly be out there, reaching out to different technologies related to runtime, and seeing if they're interested in presenting. The idea is not to force them to present, but just to open it up and see if they're interested. We don't get presentations from all of the projects we reach out to, but we play the numbers and try to get as many as possible. I think the communities are very friendly, and overall we have a pretty high success rate in terms of projects presenting at the meetings. Yeah, I just had a comment, not a question. I remember that Stephen and Daniel are the new chairs of TAG Runtime; can we at least get a shout-out for them? Stephen's not on the stage, but Stephen, I think you got involved in TAG Runtime from KubeCon Chicago. I remember meeting you in the hallway, we were pitching TAG Runtime to you, and now you're here. Do you want to talk about your journey, how you got involved as a chair, and how it has been? That's going to be inspirational. Way to put me on the spot. Sure. Yeah, it's just a matter of volunteering, getting involved, and kind of following through the ladder that Rajas mentioned.
So I would say, first, to the question about contributing or suggesting projects: please post in the Slack as well, and we can add a template to the doc if you want to reach out to other projects. But it really just started from that project outreach, attending meetings, being involved. That's the most important part: showing up, contributing in whatever way makes sense to you, and then focusing on your area of interest. Clearly there are a lot of different areas of interest here, which is great. You don't have to be an expert in all of them; whatever you bring to the table is very important. Yeah, and you don't have to write code to be a contributor. Just look at, or think about, your strengths and how you can help. Everybody's different. We're big on diversity of opinions, diversity of backgrounds, of the type of career path you've followed. We're pretty open, and we understand that everybody has a different background and can help in many different ways. I just have a follow-up question. Of course, great topic, great discussion. You're mentioning projects; are we talking about related projects, or are we talking about CNCF projects? Just wondering what the definition of projects is. And the second part is: how do we get in? For example, if I'm interested in the Cloud Native AI working group, how do I find the related projects? Are they available on GitHub? Yeah, thanks for the question. If you're interested in the Cloud Native AI working group, the first step would be to join the WG Artificial Intelligence channel on the CNCF Slack. The other part: these projects are not necessarily part of the CNCF landscape, but they are relevant to cloud native and specifically relevant to TAG Runtime.
That means they're either orchestrators or projects that help with running workloads on cloud native infrastructure. With that in consideration, they can intersect with any of the other working group areas. So for artificial intelligence: projects which are related to artificial intelligence but are also relevant to cloud native. But in artificial intelligence we're also trying to get opinions from projects which are specific to AI and not very much intersecting with cloud native, because one of the deliverables of the working group is to figure out what artificial intelligence for cloud native looks like, and we need perspective from the other side as well: what are the gaps in those projects which can be filled by cloud native? So a rule of thumb: if there is any project you would like to reach out to, or that you think is a good fit for TAG Runtime to reach out to, just post on the TAG Runtime Slack channel and let us know, "here is a cool project that's out there." You may either want to reach out to it yourself or want help from us, and we are happy to extend help in any way possible. Does that answer your question? Thank you. Does anyone else have questions? Otherwise, I can take one more. Yeah. First, for the question you'll need the mic. Yeah. So I just had one question about the scope of the TAG. I did an unconference at the AI Hub yesterday about batch schedulers for AI, and what we need better out of Kubernetes in general. There was a maintainer from Kueue and there was a maintainer from Volcano there, and it turns out that they don't really work well together in some aspects, and no one's talking to each other. And I saw in the batch landscape there was Kueue and there was Volcano.
So is TAG Runtime an avenue for these people to talk to each other, or do they need to do it individually as projects? I'm just wondering how we can get these projects talking, because otherwise you'll end up with a similar problem, right? I mean, TAG Runtime can be an avenue where we can help build synergies, finding ways in which projects can collaborate better. Koordinator was an example where we found synergies between Koordinator and its Kubernetes SIG equivalent; it can help out in SIG Scheduling as well, and things like that. So yes, definitely. But if you want to add? Yeah, absolutely. TAG Runtime can be an avenue where they come together. I think one way TAG Runtime can help is to propose some sort of standard, or propose that the projects create a standard, so both projects can collaborate with each other. So not just "you guys talk to each other and see what you work out," but maybe create some sort of deliverable, a standard that both projects have agreed on and can work together on; do some work to bring them together. Another example is what we are doing in our working group: we are trying to make the experience exactly the same across the whole landscape. We are talking to both the CRI-O and the containerd maintainers about how to get one consistent thing out. So it's not that we are not talking to each other; we talk, and we help to drive features and be consistent across the landscape. And just one final comment on what a TAG can help out with: awareness of other projects. Multiple projects are doing the same thing, like all the workload schedulers, all that kind of thing.
So just having the presentations, having them come to a central place for awareness, or us promoting those project presentations, brings awareness, and then hopefully more collaboration across those duplications. We can also take the example of the operating system working group: we're sitting together at the same table, working at different companies, big companies, where you normally don't actually sit together at the table. So yes, I think it definitely creates an avenue to have an open discussion and see if you can find common goals to work on and contribute upstream together. So yeah, definitely. Thank you so much for joining us today.