Thank you for coming to this session. We're very excited to be here, and thank you for coming to KubeCon. Today we'll talk about the CNCF TAG Runtime. We have a great lineup of speakers here, each one of them working on a different area, a different aspect of the TAG, and I think it's going to be great to hear from them. Here is some of what we'll cover today. We'll give you a brief overview of the TAG. Then Alex will talk about the Batch System Initiative working group that he's working on. Then Kate will talk about the IoT Edge working group and her project, Akri, which she maintains. Then Zbyněk will talk about KEDA, a CNCF project in incubation, and how users take advantage of it for autoscaling. And finally, Samuel will talk about the Confidential Containers project, which is in the confidential computing space and is used to run workloads in a secure way.

So what is TAG Runtime? Or what are TAGs? Technical Advisory Groups work closely with the CNCF TOC to help projects go through the maturity levels and become more usable, become something that helps the cloud-native ecosystem. In essence, TAG Runtime specifically is about workloads and the different types of those workloads, whether they're latency-sensitive or batch-related, all in the context of cloud-native environments. We work closely with the TOC. I'm one of the co-chairs. We also have tech leads who carry out due diligence in the TAG, and they work closely with the TOC and with the community. We meet every first and third Thursday of the month, and we're happy to have anybody join these meetings. The more participation, the better; more participation helps the community and helps these projects grow. Our communication happens through email and Slack.

The scope of the TAG contains a lot of different projects, a wide variety of them. You have the runtimes. You have the projects that allow you to run workloads at the edge. You have things like Confidential Containers that allow you to run containers in a secure way. You have things like KEDA that allow you to do autoscaling based on different types of metrics. You have the traditional projects like Kubernetes and containerd that allow you to orchestrate workloads. More specifically, there are different areas: general workload orchestration, which is where Kubernetes sits; the VMs, container runtimes, and container registry type of projects; and the special operating systems, for example lightweight operating systems that are meant just for running containers. There's a large space around edge, machine learning ops, and AI, with a lot of projects that help users create those flows end to end for machine learning models, creating those models and taking them to production. We also have the space of serverless workloads. We have a working group on container orchestrated devices, and two working groups that are in progress: the Batch System Initiative that Alex will talk about, and the IoT Edge working group that Kate will talk about. We've also had a number of project presentations since the last KubeCon, so the TAG is continually expanding its set of projects, its scope, and its engagement. These are some examples of the different projects in the different areas.
We're expanding on cluster management: how to run multiple clusters in different clouds, in different locations on-prem, or in cloud service providers. We have different projects to help users run Kubernetes at the edge, like k0s. We have workload constraints and scheduling projects, to schedule certain Kubernetes pods with particular constraints around latency or storage or other things. And then we also have the serverless type of workload projects that have engaged with the community, like Knative, which is in incubation in the CNCF right now. And now I'll hand it off to Alex. He'll talk about the Batch System Initiative. Thank you.

Appreciate that. I'm Alex from G-Research, a quantitative research firm in the UK, and I'll talk to you about the Batch System Initiative. But I wanted to point out that it's Friday, the final day of a long conference, just before lunch. I didn't think I'd be able to engage and connect with you with just bare words on the page, so instead I'm going to try and keep it interesting with some animals. This is a batch of pandas. English has all these words for special groups of animals; this is actually an embarrassment of pandas. That's what we call this.

The history of the batch working group: early this year we came to the TAG Runtime group to discuss our own project, Armada, where we have a multi-cluster approach to batch scheduling. In the course of that conversation, it came up that the TAG had been chatting with a bunch of other batch projects, that we were all sort of saying the same thing, and that maybe we should have a separate conversation specifically around batch. At the same time there's another batch initiative that I'll mention a little more later; Aldo talked about it yesterday, and that's the Kubernetes batch conversation. Both of these conversations were happening. There's a Kubernetes batch working group, but it felt like there was space for this other conversation, where people who are creating batch projects on top of Kubernetes could come together and talk. So that's the history and, sort of, what this is.

And I'm wondering... oh, nope, nope. These are sloths, and this is a bed of sloths. This batch of sloths is a bed of sloths. The kinds of projects that we're talking about: I mentioned our multi-cluster approach called Armada. There's Volcano, which you saw on a couple of slides earlier. There's MCAD from IBM, and there's YuniKorn from Cloudera. Slurm is represented in the conversation as well. These are the kinds of projects that we want to bring together, all communicating about where we're going in this batch world, either to join forces, or just to compare notes, or maybe just for emotional support. We'll figure it out as we go.

Hedgehogs. Does anyone know what a group of hedgehogs is? It's a prickle of hedgehogs, just to make sure. And I wanted to point out here the distinction between the Kubernetes batch working group and the CNCF batch working group, because it can be a little confusing. The Kubernetes batch working group is more focused on lower-level things, features that we need to get into Kubernetes to be able to support batch more natively. They've been working on improving the Job API; you might help me out, Aldo, but I think all-at-once scheduling is next on the agenda.
Features like that, if we can get them into Kubernetes, would make all of these higher-level projects more capable of using Kubernetes well. Whereas in the CNCF batch working group we're talking more about the projects that have been built on top of Kubernetes, largely because we couldn't actually do batch scheduling effectively with the features Kubernetes has had until now. I'm just trying to make that distinction a little bit clearer. But please, come to both. Come to all. If you're interested in this space at all, you may be interested in both, and there's no problem with going to both.

Penguins? Where are my notes? Oh, yes. Penguins: they are a waddle of penguins. It's fun. And with this one, the flamingos, it's a flamboyance of flamingos. And really, the last bit is to just join us in conversation. We have a meeting about the CNCF working group every other week, on Mondays at 7:30 AM Pacific time; hopefully that lines up for everyone. There's information on how to get hold of us: there's a Slack channel, and there's a Zoom room every two weeks where we'll be chatting about things. Please feel free to reach out to us directly, either me, Klaus, or our third contact; any of us three can point you to how to get more involved in the conversation. And these are otters. And otters are... oh, what are otters? A raft of otters. Yes, thank you. The notes aren't coming up, and I needed the notes. But yes, a raft of otters. Hopefully that was as entertaining as I hoped it would be. I think we can now move on to Edge and IoT.

Lovely. Thank you. So I'm going to transition us to talking about the IoT and edge space. Our working group was previously part of the Kubernetes space, and now we're moving to TAG Runtime and the CNCF, to a bigger and broader scope, since there's a lot happening in the IoT and edge space. To start, I'm going to talk about some of the CNCF sandbox projects in this area, then I'll talk about one project in particular, and then tell you how to get involved if any of this interests you.

When we talk about the cloud, everything is homogeneous there. A lot of the servers are the same, and they're in a static environment. But when you move to the edge, there's a variety of devices with different amounts of compute. Starting from the larger compute, we have the servers that are on-prem at the edge, and more and more, people want the same level of orchestration for their workloads at the edge that they have in the cloud. Several CNCF projects aim to provide this. KubeEdge, OpenYurt, and SuperEdge enable you to have that Kubernetes experience from the cloud to the edge, where sometimes the environments are air-gapped and there are other unique things you're concerned with. Also, some of these edge servers have less compute, so they themselves are more constrained, and the Kubernetes distro you deploy there needs to take up fewer resources. Another project that handles this is K3s: a slimmed-down Kubernetes that you can run on some of these smaller servers processing all the data around the environment at the edge. Also, when we talk about the edge space, more and more there's a discussion of WebAssembly on the server, and that's because when you build your applications as WebAssembly modules instead of containers, they're smaller.
A WebAssembly module is a binary that's portable across different OSes and platforms, and it has quick startup time and better security, or at least just as good. There are a couple of projects in this space, wasmCloud and WasmEdge, that help you deploy those WebAssembly modules both in the cloud and at the edge.

Then as you move to smaller compute, you have the smart devices, the IP cameras. These devices have a little extra compute on them, where you may be able to embed some extra workloads. There's some talk happening in this space of maybe putting a WebAssembly runtime on these devices and adding some modules there. That work is being discussed and explored right now, so keep an eye out for it in the future.

And then finally, at the edge we get to the really small, constrained devices that have one fixed function. These are sensors, controllers, devices that have just enough compute to do exactly what they need to do. We can't put any extra workloads on them, so the question becomes: how can we easily get data from these devices and let the applications running on our edge servers leverage it? One CNCF sandbox project that I work on aims to solve this problem. It's called Akri, which stands for A Kubernetes Resource Interface. It's the interface that abstracts away the details of discovering IoT devices at the edge and represents them as native Kubernetes resources in your cluster, and it can then automatically deploy workloads to use those devices. Just as in your pod spec you can request CPU and memory, Akri enables you to declaratively request an IoT device: an IP camera, a USB thermometer, a robot arm. It's extending that declarative nature of Kubernetes to these IoT devices. And it's purely Kubernetes-native, so it runs on any Kubernetes distro, and you can learn more about it in the documentation.
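To make that idea concrete, here is a minimal sketch of the kind of pod spec request Kate describes, using the Kubernetes Python client. The Akri resource name, container image, and namespace are assumptions for illustration; real Akri resource names are generated from your Configuration and the discovered device instances, so check the Akri Instance resources in your cluster for the actual names.

```python
# Sketch: requesting an Akri-discovered device the same way you request CPU and
# memory. The resource name below is hypothetical; Akri generates names of the
# form "akri.sh/<configuration>-<instance-hash>" as it discovers devices.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

CAMERA_RESOURCE = "akri.sh/akri-onvif-8120fe"  # illustrative instance name

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="camera-frame-grabber"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="grabber",
                image="example.com/frame-grabber:latest",  # placeholder image
                resources=client.V1ResourceRequirements(
                    # Request one discovered camera alongside CPU and memory.
                    limits={"cpu": "100m", "memory": "64Mi", CAMERA_RESOURCE: "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Because the device is advertised as an extended resource on the nodes that can reach it, the scheduler only places such a pod where the device is actually usable, the same general mechanism used for GPUs.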
So if any of these projects interest you, please come join the working group. Like I said, we're transitioning right now from Kubernetes to the CNCF, so the locations of our Slack and our GitHub pages are changing. A good place to start would be to join our current Slack; all the conversation will happen there. We also have bi-monthly meetings that we're continuing to hold during this transition. As part of this, we're building out a new charter to draft out what area we're filling, and in general we're really just trying to build cohesion around all the heterogeneity of devices, compute, and workloads that exist at the edge. So if that interests you, please come join. And with that, I will transition us over.

Hello, everyone. My name is Zbyněk, and I'm from Red Hat. I work on the Knative and KEDA projects, and I will speak specifically about KEDA. Unfortunately, I don't have any animals in my slides, so I won't be quite as entertaining. So what is KEDA? What is the aim of this project? What we're trying to do is basically make Kubernetes event-driven autoscaling very simple and very user-friendly, so you barely have to configure anything: just a couple of lines of configuration, and that's all. With that, we can autoscale Kubernetes deployments, or we can even schedule new jobs, based on events happening in an external service. Imagine you have some external service that holds some messages, and you would like to process them. Depending on the type of workload, you might prefer to use jobs or deployments. One of the benefits of this approach is that we are able to scale the resources down to zero, so you can save resources on your cluster. And we have connectors to more than 50 different external services that you can scale on: Kafka, Prometheus, RabbitMQ, some AWS services, et cetera.

To describe the idea with a specific example: this is a very simple application that consumes messages from an external system, in this case a Kafka consumer application that consumes messages from a Kafka topic. We have this application deployed on our cluster, and we would like to autoscale it. What are our options if we're using Kubernetes and would like to use, let's say, the default tools? Then you need to use the HPA. But the problem with the HPA is that it can only scale based on CPU or memory. If you have these kinds of applications that are more event-driven, that might not be the best fit for doing the scaling, because CPU or memory consumption may not correlate with the actual need for scaling. You would like to scale the application based on the state of the external service, in this case based on the unprocessed messages in the Kafka topic.

This is the very same use case, the very same example, but now we're using KEDA. You don't have to change anything in the application or in the Kafka broker setup. You just set up KEDA and very easily define: scale this deployment based on the metrics from this Kafka broker, and it does the job. Basically, we're building on top of the HPA but extending its capabilities, and we're trying to keep it as simple as possible, because with the previous iteration of a similar approach, using custom metrics with the HPA, it's quite hard to actually configure this from the user's perspective. So we are really trying to make it very simple for users.
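To give a feel for what those "couple of lines" look like in practice, here is a minimal sketch of a KEDA ScaledObject for the Kafka example, created with the Kubernetes Python client. The deployment name, broker address, consumer group, topic, and lag threshold are all illustrative assumptions, not values from the talk.

```python
# Sketch: a KEDA ScaledObject that scales an existing "kafka-consumer"
# Deployment (assumed name) on Kafka consumer-group lag instead of CPU/memory.
from kubernetes import client, config

config.load_kube_config()

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "kafka-consumer-scaler", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"name": "kafka-consumer"},  # Deployment to scale
        "minReplicaCount": 0,   # scale to zero when there is nothing to do
        "maxReplicaCount": 10,
        "triggers": [
            {
                "type": "kafka",
                "metadata": {
                    "bootstrapServers": "my-kafka.default.svc:9092",  # assumed
                    "consumerGroup": "orders-consumer",
                    "topic": "orders",
                    "lagThreshold": "50",  # target unprocessed messages per replica
                },
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="default",
    plural="scaledobjects",
    body=scaled_object,
)
```

Behind the scenes, KEDA feeds the lag metric to an HPA that it manages for you, which is the "building on top of the HPA" part Zbyněk mentions.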
A bit about the project itself: it started as a collaboration between Microsoft and Red Hat a couple of years ago. Then we moved to the CNCF, and we're in the incubation phase at the moment. I'd like to say that moving to the CNCF really helped the project a lot, because we have all the support that we need covered. We do releases approximately every three months, and we meet every other week, so feel free to join the community and extend things. These are some of the users that are using KEDA, and they're happy with it. Thank you.

So, we're only a few slides away from lunch, so bear with me; it's going to be fine. Okay, I'm going to talk about confidential computing. I'm Samuel, I work for Apple, and I work on a project called Confidential Containers, which just recently got accepted as a CNCF sandbox project. I'm going to talk about Confidential Containers and another project called Inclavare Containers, which are the two CNCF sandbox projects that actually try to make confidential computing a part of Kubernetes. The idea with those two projects is really to take a Kubernetes workload, a pod, and make it run in a confidential enclave. It can be Intel TDX, AMD SEV, SGX, whatever. But really the idea is to say that we want each and every workload, each and every pod, to get its own dedicated enclave.

This is very different from what many CSPs are currently offering, which is confidential computing nodes: they give you a Kubernetes node that's running inside a confidential computing VM. What we're doing here is giving each pod its own confidential computing VM, so each and every one of your pods is separated by a confidential enclave.

Why do we do this? It's all about protection. It's not about making workloads secure; it's about making them more secure than what you currently have with your regular Kubernetes security setup. We're protecting workloads from the host. That's the most important promise of confidential computing. Basically, with Confidential Containers we're removing the host from the TCB. You're running your Kubernetes pod in your Kubernetes cluster, and you don't have to trust the CSP anymore. You basically don't have to trust what it's providing you with. Your kernel, your workload, your pod, your container images: everything is now running inside a confidential computing enclave and can be verified, attested, and measured by you as the guest owner, the workload owner. We're also offering protection from the other workloads. Each and every one of your pods now has its dedicated enclave, so pod-to-pod protection is guaranteed by the confidential computing hardware.

So as I said, we're looking at two CNCF sandbox projects that are working on confidential computing. The first one is Confidential Containers. Again, this gives you one confidential enclave per pod. It supports multiple hardware implementations of confidential computing: TDX, SGX, SEV, and IBM SE (Secure Execution). All of those come from different silicon vendors and are the main confidential computing implementations right now. The other project is Inclavare Containers, and it works at a different granularity: it basically provides an enclave per container. You can run a pod with multiple containers, and typically one of them will run inside an SGX enclave, because Inclavare doesn't support multiple hardware implementations like Confidential Containers does; it only supports Intel SGX. So it's a different approach. It's more granular, but it's less flexible.

Confidential Containers: that's the high-level diagram. Lots of buzzwords, a lot of boxes. I want to make sure you don't understand all of it, so you come back to me and ask questions. But really, the couple of ideas that I want you to take back are these. You have a pod, and it's running inside a confidential VM, so it's protected by confidential computing at the pod level. And this is a virtual machine; each confidential computing implementation here uses the virtual machine as its basic unit of instantiation. Right now this uses the Kata Containers runtime: you use the Kata Containers runtime to create a confidential computing VM and run your pod inside that VM. The big difference from plain Kata Containers is the right-hand side of this diagram, where we run the entire attestation. And last but not least, the container images in this architecture live in the guest, so the host never sees container images anymore. Kubernetes itself doesn't touch, pull, mount, see, or modify any of your container images with this architecture. Come back to me if you want more details.
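For a sense of how this looks from the user side, here is a minimal sketch of running a pod under a Kata-based confidential runtime using the Kubernetes Python client. The runtime class name depends entirely on how Confidential Containers was installed and on the hardware available, so kata-qemu-tdx, the image, and the namespace here are assumptions, not a definitive setup.

```python
# Sketch: asking Kubernetes to run a pod under a confidential, Kata-based
# runtime class. The rest of the pod spec stays ordinary; the runtime class is
# what places the pod inside its own confidential VM.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="confidential-demo"),
    spec=client.V1PodSpec(
        runtime_class_name="kata-qemu-tdx",  # assumed; depends on your install
        containers=[
            client.V1Container(
                name="app",
                image="example.com/my-app:latest",  # pulled inside the guest
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The RuntimeClass itself is installed ahead of time, typically by an operator or by whoever sets up the cluster; the pod spec only references it.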
Inclavare Containers is the second one, and it's slightly different. You can see that you can have multiple runtimes for your containers. You can have a runc container, which is just a regular container, the same thing that you would run in any Kubernetes pod. And then you have something different at the top right-hand side, where one of your containers, one of your applications, is running inside an enclave, and this is really an SGX enclave; it only supports Intel SGX. So this is quite different from Confidential Containers, but it leverages confidential computing to protect part of your workload with Kubernetes.

Join us. This is our GitHub. Every Thursday we have a meeting on Zoom. We're a CNCF sandbox project, so we have our own Slack room and everything. We're just very busy. So yeah, join us, help us, contribute, come if you have any interest. Thank you.

So we have about ten minutes for questions. Unfortunately, we don't have anything from online yet. So, yeah, one second.

This question is for Alex. You had a great bunch of groups of animals, so I'm really curious if we have an English name for a group of Kubernetes clusters yet. A mess? That sounds good. Maybe the name for a group of goldfish. Oh, there you go: a troubling of goldfish, which might apply to Kubernetes. I think we should take a vote on this. Thanks.

So for the working group and the runtime: are you attempting to keep up to date on the Kubernetes constructs like Kueue and others, and talk about integration with these? Sorry, this is for Alex. Sorry, is this for this working group, the batch initiative, as opposed to the Kubernetes one? Yes. Yes, so, sort of. The conversation is growing as we speak, so it's not yet very well defined, but in general I imagine that we'll be keeping up to date on what all of the various projects are doing and how we're reacting to the changes that the Kubernetes batch group is implementing in Kubernetes. If there are any ways that we can collaborate and join forces, that's how I imagine our conversation at the CNCF level taking place. Thank you.

Thank you. My question is for Kate. I think you mentioned the IoT Edge working group migrated from Kubernetes to the CNCF. Can you give us some insight into why, and what the benefits were, or what you were limited by within Kubernetes? Yeah, so the question is why we migrated from the Kubernetes working group to the CNCF, right? When we're talking about the edge, some of the projects that we mentioned aren't all Kubernetes-specific. In the working group presentation that we gave the other day we talked about, for example, Secure Device Onboard; there are discussions of how to onboard and provision servers at the edge, and that discussion isn't Kubernetes-specific. It applies to all the technology we're talking about at the edge. So it really was an opportunity to expand beyond the Kubernetes space as we talk about handling both constrained IoT devices and the larger servers running at the edge. Does that answer your question?

I have a question for Samuel. I wanted to know what the performance impact is of running your workloads in a confidential container, especially in terms of IO: memory, disk, network. I think it depends on the silicon implementation. So I'm... Samuel, we lost the audio. Okay, sorry. That's not really related to confidential computing itself; it's more related to hardware virtualization.
If your workload is heavily IO-bound, that is probably the worst-case scenario for running anything, not only containers but any workload, inside a virtual machine. If your workload is IO-bound and you don't do direct device assignment, meaning you don't have direct access to your GPU or your networking card or your disk, then you're going to do a lot of memory copies and VM exits, basically, and it's going to cost you a lot. But that's not related to confidential containers; it's related to hardware virtualization itself. Confidential Containers adds latency in terms of boot time, because you do have to do the remote attestation if you want to be able to verify what you're running, and that highly depends on the kind of attestation you're doing. It also potentially adds overhead when running with memory encryption, but this is typically, according to the silicon vendors again, very minimal. The memory encryption is implemented in hardware in the memory controller, so the overhead is very small, typically not measurable in a meaningful way. I hope that answers your question.

And we have time for one more question. You mentioned SGX, the Intel extension for implementing secure memory regions. Do other vendors of x86-compatible systems, and more especially ARM with version eight or version nine, have analogous extensions for secured regions of memory? I think what we see is that all the vendors are actually moving away from that kind of implementation, the SGX or TrustZone kind of implementation where you have these application boundaries. They're going toward SEV and TDX, and ARMv9 is really providing the same architecture: basically saying we're going to extend the hardware virtualization ISA, the instruction set, to support confidential computing through memory encryption, measurement, and attestation, and also protect the CPU state, your caches, your registers. So this is taking the current hardware virtualization implementation and extending it to support confidential computing. And this is very different from SGX, from TrustZone, from all those implementations. And this is where the industry is going. If you look at ARMv9, which is the next ARM design, it's not instantiated yet, but a big part of the feature set is just that: being able to run confidential computing with ARM, and being able to do that as a hardware virtualization extension. Sure.