Hey everyone, thank you for joining. My name is Peter Hunt. I'm a software engineer at Red Hat, working primarily on CRI-O, but sometimes on the kubelet and SIG Node, and sometimes on Podman and other container-related technologies. I appreciate everyone coming out right before the party; I hope to get you all pumped up and ready to go for this evening. We're going to be talking about CRI-O's senior year.

I'll start off with a quick introduction: what is CRI-O? CRI-O is an implementation of the Kubernetes Container Runtime Interface (CRI) that is compliant with the Open Container Initiative (OCI). That's a lot of jargon and acronyms; what you can take away from it is that CRI-O takes the spot in the stack that Docker used to occupy. It pulls OCI container images and starts containers and pods. It's responsible for all of the operations below the kubelet but above the OCI container runtime. Some design philosophies CRI-O takes on: it's a balance of stability and features, with a focus on security and performance, and it's purpose-built for Kubernetes. We'll describe some of the consequences of that later.

Here's a quick architectural diagram of what CRI-O is. On the left, the kubelet talks to CRI-O over gRPC: the kubelet asks CRI-O to pull an image, create a container, or start a pod. CRI-O's image and runtime services are then responsible for doing that. For the image service, CRI-O uses the containers/image library, which actually does the pulling. For the runtime service, it generates the OCI runtime spec for the container and then uses an OCI runtime to actually start it. For a pod, it uses CNI to provision the networking resources. Underneath, on disk, it uses containers/storage to allocate the disk resources, like the copy-on-write filesystem. CRI-O also has an entity called conmon, which we'll talk about a little later; it's a container monitor that pays attention to the lifecycle of the container.

Why would anyone use CRI-O, and why would we encourage you to? We aim to make CRI-O secure by default. We try to have a minimal attack surface by reducing the set of operations CRI-O is responsible for. Other, more generic container runtimes are responsible for a lot more operations than CRI-O, which only looks to satisfy the Kubernetes CRI and the things the kubelet asks of it. CRI-O doesn't need to build or push images, for instance, because all it cares about are the things the kubelet needs to do. CRI-O also prioritizes security features and tries to make them consumable through the Kubernetes API. We had experimental support for user namespaces for years before it was recently added to the actual CRI, and that allowed folks to test it out and work out some issues beforehand; it was specified through annotations, so it was actually consumable from Kubernetes. We also ship with a smaller default capability set, because we want the containers run in production, which are CRI-O's main priority, to be as secure as possible.

CRI-O also aims to be as performant as possible, specifically for Kubernetes. Because CRI-O's behavior is customized for Kubernetes, common operations are optimized for.
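If you want to poke at that same CRI surface yourself, crictl speaks the same gRPC API the kubelet does. A minimal sketch, assuming CRI-O's default socket path and an arbitrary example image:

```sh
# Talk to CRI-O over its CRI socket, the same gRPC surface the kubelet uses.
export CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/crio/crio.sock

crictl pull quay.io/fedora/fedora:latest   # image service: pull an OCI image
crictl pods                                # runtime service: list pod sandboxes
crictl ps -a                               # runtime service: list containers
```

Listing pods and containers like that is also exactly what the kubelet itself does constantly, which is where the next part comes in.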
So currently, with the generic PLEG (the pod lifecycle event generator), the way the kubelet maintains the state of its pods and containers is by frequently relisting: asking CRI-O "hey, what are my pods? what are my containers?" over and over again, fairly frequently, to maintain its state machine. While that's being improved separately, CRI-O has optimized for that use case, because we know the kubelet is going to be doing it a lot. I'll also talk about another set of optimizations CRI-O has made in a little bit.

Ultimately, what we want for CRI-O is for it to be boring. Even though we have plenty of these exciting features that optimize for Kubernetes, what we really want is for an admin to choose CRI-O as their container runtime after deliberating over the options and then promptly forget about it, because it's doing the job it needs to do and really nothing more.

And we're excited to announce that we are in our senior year, hopefully: we have just recently applied for graduation. CRI-O was born in the Kubernetes Incubator in 2016 and then moved to the CNCF, where it is an incubating project, in 2019. We have a lot of production users running CRI-O—OpenShift, SUSE, for a while Lyft—and we're happy with those relationships. We think CRI-O is very ready for production now; it has been for a long time, but I think we've really proven it at this point. We got a security audit in the spring, and they had mostly good things to say, plus a couple of bad things we've already fixed. So now it's all good things. We're excited for graduation and hope to have it go through soon.

Next up, we're going to talk a little bit about the container monitor I mentioned earlier, conmon, and about a rewrite of it. First off: what is conmon? conmon is a little helper agent that manages the lifecycle of a container. Specifically, it starts the OCI runtime process, watches for the exit of the container, and manages the container's logs, taking them from standard out and writing them to disk. There's one instance of conmon per container and also one per exec sync request, so if you have an exec probe, there's actually a conmon running underneath it. We'll describe why that's important later. conmon is called by CRI-O over the CLI, so it has a pretty large CLI flag set now, and it's currently written in C.

We have a couple of reasons for wanting to go through the process of rewriting conmon, mostly consequences of the points I just mentioned. One conmon per container and per exec session means there's a lot more process overhead than we'd ideally like. Everyone here knows that the standard unit of containers is now the pod—it's 2022, Kubernetes has largely won—so we're really thinking about containers in groups. conmon is also CLI-based, and we're looking for a more modern API mechanism than just passing everything through the CLI.
For instance, we have an API version flag that we used to specify different behavior for a change we made years ago, but maintaining API compatibility over the CLI is more difficult than it would be with a versioned, smarter IPC mechanism. And it's a bit of a clunky program that's a little tough to work with sometimes. It was written in C and started a long time ago; it has served us well and it's very stable, but we're looking forward and trying to do new things.

What we want out of our container monitor is really: one conmon per pod, because the unit of containers in 2022 is the pod; an IPC-based API to speak with conmon, because the CLI works but it's clunky and can be better; and a more modern language, because C has served us well but everyone knows its common pitfalls and the difficulties that can arise from using it, and we'd like to work around those with a language that works a little better.

So I'm happy to introduce a program that satisfies all of these constraints and more: conmon-rs. It would have been cool if we could have named it Podmon, because it's a pod monitor after all, but for some reason people said that would be confusing—I wonder why. So instead we settled on conmon-rs, a Rust implementation of conmon. It currently covers the scope of the existing conmon, with a couple of differences.

Some highlights of conmon-rs: we have a native Golang client API that wraps Cap'n Proto as the protocol. Cap'n Proto is an RPC framework used by Cloudflare and a couple of other folks. It basically advertises zero serialization, even between different languages, so we can speak between Go and Rust: it passes a block of memory over, and that gets mmapped on each side. So it's very fast, and it has less complexity than something like gRPC or ttrpc. conmon-rs runs at the pod level, not per container, which is exactly what we want now, and it also supports the exec sessions within the containers, which is beneficial. It aims to keep memory usage low: conmon used something around two megs per container, and we're aiming for something between four and six per pod. It would be better if it were lower, obviously, and we're working on that, but theoretically there should be no memory penalty for using Rust, and if you had a pod with two conmons, one for each container, plus maybe a couple of conmons for execs, we aim for conmon-rs to use much less memory than all of those would ultimately end up using. To support all the containers in a pod, conmon-rs is multi-threaded, where conmon is single-threaded. And we have some exciting features we're able to enhance conmon-rs with, because being written in Rust makes it a little easier for us to add new things.

To use conmon-rs in CRI-O, all you have to do is add a drop-in file to CRI-O's config directory, specify the runtime type as pod, point it at the location of conmon-rs, and restart CRI-O; something like the drop-in sketched below. CRI-O will come up using conmon-rs, and hopefully you can promptly forget about that choice too.
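A minimal sketch of that drop-in, assuming the runc runtime and default paths; the file name is arbitrary, and the exact key names and binary locations can vary by version and distro, so check the CRI-O and conmon-rs docs for your install:

```sh
# Drop-in config selecting conmon-rs: runtime_type = "pod" is what tells CRI-O
# to use a per-pod monitor instead of a per-container conmon.
cat <<'EOF' > /etc/crio/crio.conf.d/99-conmonrs.conf
[crio.runtime.runtimes.runc]
runtime_path = "/usr/bin/runc"      # OCI runtime binary (adjust to your install)
runtime_type = "pod"                # "pod" selects conmon-rs
monitor_path = "/usr/bin/conmonrs"  # location of the conmon-rs binary (assumed path)
EOF

systemctl restart crio              # pick up the new drop-in
```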
Right now, conmon-rs is passing the CRI tests, CRI-O's integration tests, and the Kubernetes and node tests—basically all of the tests CRI-O expects to pass to be fully compliant with Kubernetes. We're planning for integration into Podman; there are a couple more pieces Podman needs that we haven't quite gotten to yet, which I'll talk about in a moment. We have RPM packages available, and we also have static binaries available for each commit, so if you want to download and use it, you can today, and we're looking at adding more distributions in the future as we stabilize.

Looking ahead, I'd like to describe a world in which we have all of the pieces conmon used to have, because there are currently some gaps. We have all the functioning pieces we need for CRI-O, but Podman needs a little more. It needs attached exec sessions, because the Podman process goes away and there needs to be something holding that exec open; currently CRI-O does that. We need checkpointing for Podman, and actually soon for CRI-O as well, because we're adding checkpoint and restore functionality. We need support for seccomp notify, to be notified when a container uses a syscall it wasn't supposed to or wasn't expected to, and the journald log driver, which adds compatibility with what Podman currently uses.

Looking further out, we have some features we're pretty excited about. PID namespace holding is one I'm personally really excited about. Currently, if you have a pod-level PID namespace, you need the infra container to hold that namespace open, because PID 1 of the namespace needs to stay alive for the duration of all the processes within it. The infra container works—it's worked for a long time—but it adds process complexity and overhead, and it would be nice to just have some process that survived for the duration of the pod and was able to hold the PID namespace for it. conmon-rs satisfies those requirements, so I think it would be cool to have the PID namespace held by conmon-rs; then we'd be able to drop the infra container in all cases. We're also looking at IPv6 port forwarding support, which is required, or at least desired, by Podman: there needs to be a running process to keep port forward requests going over IPv6. We want log rate limiting, and additional log drivers in general: right now there's the Kubernetes log format, we're looking toward journald, and there's also the opportunity for a JSON logging driver, which Podman has wanted for years and never actually gotten. Being written in a more modern language will make it easier to integrate some of these new features.

And then OpenTelemetry tracing, which I'm excited to say we have experimental support for. We're still working through the details, and I didn't have time to put together a demo of it like I wanted to, but OpenTelemetry tracing is coming. Especially with the kubelet's support for OpenTelemetry tracing and CRI-O's, you'll be able to track the lifecycle of a container from being created in the API server, registered in etcd, created by the kubelet, created by CRI-O, and all the way down to conmon-rs. We think that's exciting and opens up some opportunities for tracking the lifecycle of your containers and pods.
So that's conmon-rs; we're pretty excited about it. And now for something completely different: we're going to talk a little bit about some load optimizations, and specifically some better reporting mechanisms, that CRI-O has added semi-recently.

If you've attended some of these talks before, you might have seen me talking about situations of load in Kubernetes, specifically between CRI-O and the kubelet, but I'll give a refresher for anyone who's new here. The problem is that under load, CRI-O and the kubelet get into this bickering match where they have trouble syncing up. The kubelet needs a timeout on each container and pod creation request, because it needs to know that a request it sends out didn't just disappear into the void. The problem with that is that CRI-O can take an undetermined amount of time to create a pod or container, especially under load—usually some entity is the bottleneck, often the SDN or maybe disk I/O—so CRI-O might not be able to create that container in time. Because of that, CRI-O and the kubelet bicker about creating the specified resource: the kubelet says "hey, please create this resource," CRI-O says "hey, I'm working on it, the name is reserved," and you get this awkward situation.

This has been largely solved in CRI-O since around 1.19, so it's quite old now, but I'll describe the solution because we've recently added a little bit to it. The solution is to fine-tune our behavior to the kubelet, which CRI-O is able to do because the only client CRI-O materially cares about is the kubelet and running containers in production. The basic idea is to finish creating the resource anyway and then save it until the kubelet asks again.

I'll walk through an example. The kubelet asks CRI-O, "hey, can you create me this pod?" and CRI-O says "sure, I'll reserve this name so that no one else—no crictl or anyone—can come in and try to take that pod from you." At some point in that process CRI-O gets stuck, maybe on the SDN because it's taking a really long time with a lot of pods being created at the same time; whatever the reason, CRI-O takes too long and the kubelet times out. The kubelet isn't sure whether the timeout is because the request disappeared or because it's genuinely just taking too long, so just in case, the kubelet re-requests: "hey, can you create me this pod?" The second routine is aware that the first routine is working on creating that pod, because it can tell the name is already reserved. So it waits, and basically tells the first routine, "hey, when you're done creating that, let me know, because I'm trying to give it to the kubelet." Eventually the bottleneck clears up—the networking finally un-gums itself—and CRI-O is able to create the resource. The first routine detects that there was a timeout and pings the second routine, saying "hey, I'm done with my thing now," and returns to the kubelet. The kubelet isn't paying attention anymore; it assumed the request disappeared, so this dotted line represents the kubelet not really listening. But what it is listening to is the second routine, which says "hey, I have this resource for you, but you need to request it again." The reason we do another round of requests, even though the resource is already made, is that there might be a race between the kubelet timing out and CRI-O sending the response, and we worry about having this resource disappear.
So instead we time out one more time in the kubelet and then await the inevitable kubelet re-request for that resource, which lands on another CRI-O routine. That routine sees that it has the resource and is able to return it, no problem. This was a good optimization, because it changed a situation where there used to be a lot of thrashing on the node: a container would be created, it would take some time, the creation would time out, and at the time we would actually remove the container because it had errored out—which was bad, because it made the resources even more constrained. Now we've improved the situation a lot: we're basically returning the resource just about as soon as it's actually done.

Another small piece of this is that we're throttling the requests from the kubelet. The way it used to work was that CRI-O, immediately after getting the duplicated request, would return an error to the kubelet saying "I'm already working on it," and the kubelet would propagate a whole bunch of "name is reserved" messages through the events API, because CRI-O returned those errors really quickly and the kubelet kept trying to create the pod as fast as it could. Since we know the kubelet is going to re-request this object as long as the error keeps being a timeout, what CRI-O can do is throttle the kubelet by waiting in that routine—"hold on, kubelet, slow down, we're working on it, don't worry"—taking as long as it's allowed. That reduces the number of events in the pod API, which is very nice: it reduces churn, and admins aren't as scared, because there aren't a bajillion messages saying "name is reserved."

A new thing we've done semi-recently, which I'm also very excited about, is that we now return where the pod or container creation is stuck. We used to just return a generic error—"name is reserved, we're working on it, kubelet, chill out"—but now we also keep track of where in the container and pod creation process that resource is stuck, so we can return that. When one of the CRI-O routines that isn't the original one returns an error saying "hey, we're working on it, but it's not done yet," it also says something like "currently at stage: sandbox network created," meaning that sometime after the sandbox network was created, we got stuck. Or it says something like "sandbox storage creation." That second error would indicate to me an IOPS throttling problem; if it said it was stuck on sandbox network creation, I would guess there's a slowdown or bottleneck in the SDN. Basically, this improvement lets an admin see the state of the container or pod creation process—why it's taking so long—and hopefully makes remediation faster. I'm very excited about it, because in the past it took a lot of intimate knowledge of CRI-O and the timing of things to tell what the problem was, and I ended up doing a lot of that work for the people I support. So I'm excited that it'll now be doing that for me.

And next up we have Rinal talking about some sigstore stuff.

All right, folks, everyone must have heard about sigstore. Well, I'm happy to say that CRI-O has support for sigstore-style signatures. The containers/image library that CRI-O uses for pulling images
merged support for verifying those signatures. Podman fully supports it, so Podman can be used to sign images and push them, along with their signatures, to a registry, and then you can use CRI-O to verify those signatures when you pull the images. However, we need to improve the UX of what happens when signature verification fails: today the CRI doesn't distinguish between the types of image pull errors, so we'll do some upstream work so the kubelet can distinguish them and give the right message to the user when signature verification fails, instead of just a generic image pull backoff. We also did work so that the CRI-O release binaries are now signed using Cosign, so you can verify those as well.

Next I'll cover some upcoming features that intersect CRI-O and the work we're doing in SIG Node. The first of them is user namespaces. User namespaces have long been supported in the Linux kernel, but Kubernetes has not been able to take advantage of them. Peter mentioned that we had annotation-based support in CRI-O, but finally, in 1.25, we got phase one alpha support for user namespaces merged into Kubernetes. Phase one supports stateless pods: any pods that don't use persistent volumes will work with user namespaces. The supported volume types are things like emptyDir, secrets, config maps, and so on. With user namespaces you get an additional layer of security in your pods: you can be root inside your pod while being non-root on the host. What that means is that if a process is able to break out of the container, it's not able to attack the host or other containers running on the node. So it's very useful, and one more advantage is that you're now able to run any random image from any registry that runs as root by default, without having to worry about changing it to be non-root—the kernel takes care of it for us. Here's a simple example of how you use this: you just add hostUsers: false in your pod spec, and that enables the kubelet and CRI-O to set up user namespaces, and you have a user-namespaced pod. Next up in this area, we'll start adding support for other types of volumes—persistent volumes—and that's something we need to tackle upstream first and then make available in CRI-O.

Then, checkpoint/restore. This is another feature that was merged into the kubelet. It basically uses CRIU to checkpoint container state. It's only a kubelet API at the moment—there's no Kubernetes API for it—so if you want to use this feature, you have to hop onto a node and directly hit the kubelet's endpoint to checkpoint the container. The current use case being targeted is forensic analysis. Say you're a bank and there's a bad actor who was able to break out of a pod and is trying to attack or do something bad on your node. What you can do is get onto that node, checkpoint the state of that pod, move that checkpointed state to another node, and start the pod back up again so you can analyze what was happening inside it. And this can happen without the knowledge of the attacker—they can continue being in that pod—so this is a security feature. There are other use cases for checkpoint/restore, like faster startup for pods such as JVMs: we know that Java takes time to start up, so potentially you can checkpoint a pod after that startup phase is done and then launch hundreds of copies of such pods, which would be a net improvement in startup time.
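Two quick sketches of those features as they exist in the 1.25 alphas; the names, image, paths, and credentials are illustrative, and the relevant alpha feature gates need to be enabled on your kubelet and API server:

```sh
# User namespaces, phase one: hostUsers: false puts the pod in its own user
# namespace, so root inside the pod maps to an unprivileged UID on the host
# (stateless pods only; behind an alpha feature gate in 1.25).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo                 # illustrative name
spec:
  hostUsers: false                  # the one line that enables user namespaces
  containers:
  - name: app
    image: registry.example.com/runs-as-root:latest   # illustrative image
EOF

# Forensic checkpointing: a kubelet-only endpoint, hit directly on the node
# (behind the kubelet's checkpoint feature gate; auth setup depends on your cluster).
curl -sk -X POST \
  --cert /path/to/client.crt --key /path/to/client.key \
  "https://localhost:10250/checkpoint/default/my-pod/my-container"
```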
This is also early—the checkpoint support was just merged—and next we'll start tackling restore support and a Kubernetes API for it.

And finally, evented PLEG. Today the kubelet uses the generic PLEG. What is the PLEG? It's the pod lifecycle event generator, and it's what the kubelet uses to materialize the lifecycle of a pod. Once the kubelet starts a pod, it needs to be aware when the pod dies or gets killed for some reason, because whenever it dies, the kubelet starts it back up for you, right? And how does it know that? Through the PLEG. The way the generic PLEG works is that the kubelet periodically relists all the pods and containers from the runtime over the CRI. The overhead of doing this is very high when there are lots of pods: imagine you're trying to push the boundaries of how many pods you can run on a node—you're at 600, 700 pods—and the kubelet is requesting this list of pods from the runtime very frequently. That adds a lot of overhead to both the kubelet and the container runtime. So we're working on a feature called evented PLEG that moves to a list-watch model, the way operators work today. With that, the kubelet will be able to list the pods far less frequently than it does now, and rely on events sent from the container runtime to generate the PLEG events. This will drastically reduce the overhead on the kubelet and the runtime, and hopefully we'll be able to run way more pods than we do today. This work is targeted for 1.26; there are PRs open and we're hopeful it will get merged.

So this brings us to the end of the talk. Thanks for joining us, and we're happy to take questions.

Also, in the tragic event where conmon dies before the container does, we don't catch the container's exit, because conmon is the parent of the container process—it's the one that can catch the SIGCHLD the kernel sends when the process ends. CRI-O can't at all: conmon actually daemonizes, so conmon is a child of systemd, not even of CRI-O, and CRI-O can't catch those. At one point we had a conmon monitor—conmon-mon, we called it—but it was racy and kind of difficult to work with. So instead we've worked hard to make sure conmon isn't going to segfault or otherwise exit before the container does, and it can't be OOM-killed either, because that would leave things in a bad state. We're looking at options: pidfds would help, since CRI-O could then keep track of the lifecycle of the conmon or conmon-rs instance. We could also maybe use eBPF to catch conmon exits, but we don't currently have anything; the ideal is that it just doesn't happen. Right, and as of now it does not. I agree that would be an even more problematic situation, and it is on our radar to figure out a way to actually pay attention to it. I really look forward to pidfds in general—conmon could use them too—but before that, maybe we'll come up with some eBPF thing so that risky situation doesn't come up.

As for why Rust: because I wanted to have a good resume. No—because it was a natural fit for this piece of the stack. We already had conmon in C, but it would have been clunky to add asynchronous behavior and all of these things to it, and the Go runtime uses too much memory for the number of conmons that we want.
So we felt like Rust was a good fit for that. And I kind of feel like, in general, at the layer of the stack we're at, we're going to start seeing more projects written in Rust, because it's a pretty natural fit for the space. I think Rust has the right mix of low level and high level: we can do all the low-level things we were able to do with C, while also having access to an RPC API like Cap'n Proto that we can easily integrate, without paying a huge memory or CPU penalty. That was the motivation for me. We have attracted some interest in it, but not really contributors yet—some people are looking at issues and such—so it's a little slow going, but we're hoping that the integration of Rust into this whole ecosystem will appeal to people.

So what was the motivation for shipping a CNI config with CRI-O? And what are your thoughts on that CNI config conflicting with others—say, any other CNI that works? I've seen this issue where the CRI-O CNI config conflicts a bit.

So are you talking about CRI-O shipping a default CNI config? Yeah. So I think that's just something we ship so it's easy for you to get started with CRI-O: you can easily start up a local cluster and see your pods. But it's something we never recommend for production use cases, and we make it easy for you to drop in any CNI conf and the binary, and it should just work; the default is more of a starter thing. Also, the packages we now ship in all of the upstream distros don't actually require the container networking plugins, they only recommend them. So if you're on Fedora, you're able to install CRI-O without installing the container networking plugins. It's not required, but we suggest it if you don't have another solution that comes out of the box.

I've seen that the PLEG issues you get with Kubernetes aren't very descriptive. How would you go about digging deep into those issues and actually figuring out what's causing the PLEG error?

I think there are two things, right? One thing Peter mentioned is improving what we log on the container runtime side, and the second part is improving what we log on the kubelet side. The whole kubelet code that manages sync pod and generates all of this is hard, and what we're hoping is that with evented PLEG—we have some ideas on how to simplify it and make it better. Folks in the community, folks at Google, are also working on some documentation so more people can come up to speed, contribute, and simplify that whole area. With the existence of tracing support in the kubelet, we also had the idea—we were talking about it today—of instrumenting sync pod or the pod workers that actually do the PLEG generating and managing. If we instrumented them with OpenTelemetry, maybe that could also give some better insight into what's going on there: you'd be able to watch a span and see its behavior. So that might help too.

So right now, CRI-O doesn't have OpenTelemetry support? It does have some preliminary support. I think we're still working through figuring out exactly the granularity of the spans; we have a number of functions that are emitting them, but it's in the main branch and hasn't been released yet.
I mean, conmon-rs is still only on 0.3.0 or something like that, so it's pretty young. There could have been a demo today—I didn't have time to put it together—where you could actually see the spans. You might be able to find an image I have, if I have enough time, or I can send it to you, of what that looks like for conmon-rs. So we're getting there, but we're not quite there yet. Thank you.

With the introduction of conmon-rs, are there plans to eventually deprecate conmon itself and replace it? Yeah, yeah. I would say long term, assuming we can make sure—and I'm pretty sure this can be the case, but I just want to confirm it—that the ultimate memory usage is not egregiously more than it was with conmon. Assuming that, which I think will be the case, then eventually we'll deprecate and remove conmon, and it'll be conmon-rs moving forward. And then obviously this will move into OpenShift and downstream? Yeah, yeah, we're working on pulling it down as well.

I wanted to say, very impressive work with conmon-rs, and I appreciate the description of why Rust might be the right tool for the job. Could you show the slide again on how to switch the CRI-O runtime, and then maybe elaborate on what future scenarios people might consider switching to conmon-rs for, to be an early adopter? Yeah, so this slide is the one you were looking for—there you go. So why would you want to adopt it early? One of the things that triggered our desire to do this was wanting better cgroup accounting of the conmon resources: situations where you want to keep the conmon on a separate CPU set from the one the container is running on, say if you have a real-time pod doing some network-latency-sensitive work. That was originally what got us thinking about having a pod-level conmon, to be able to isolate that. So that's one use case. And we'd be happy if folks wanted to try it out and let us know, even if you just want to be on the bleeding edge, because that'll help us iterate on making it better.

There was another thing I thought of—why else? Yeah, one more thing we wanted to tackle: right now, when we do an exec, we spawn a new conmon, and at steady state on a node, your execs are the most expensive operation a container runtime is doing. We want to reduce that process hop. With conmon-rs there won't be an additional conmon: we talk over the RPC to conmon-rs and it directly does a runc or crun exec, drastically reducing your CPU usage.

The other thing I just remembered is that it can actually help with CPU and memory accounting on the node. Because we have multiple conmons per pod today, it's hard for us to use the pod overhead feature, which was originally made for Kata, where you have a VM with some standard amount of memory and CPU. Pod overhead is a fixed per-pod number, and with a variable number of conmons we can't really guess how much memory and CPU they'll all use, because it grows with the number of containers and execs. With conmon-rs as one per pod, we can guess: okay, it's definitely not going to use more than eight megs. A rough sketch of expressing that as pod overhead follows below.
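A minimal sketch of what that could look like as a RuntimeClass pod overhead, assuming a runtime handler configured for conmon-rs; the handler name and the values are illustrative placeholders, not measured numbers:

```sh
# RuntimeClass with a fixed per-pod overhead to account for the single
# conmon-rs monitor; adjust handler and values to your own configuration.
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: runc-conmonrs        # illustrative name
handler: runc                # must match a runtime configured in CRI-O
overhead:
  podFixed:
    memory: "8Mi"            # roughly the per-pod conmon-rs budget discussed above
    cpu: "10m"               # placeholder CPU overhead
EOF
```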
So if that's the pod overhead, then you can more tightly fit your containers on the node, because there isn't this mysterious amount of memory being used by conmon.

Any more questions? We're over time—well, let's get out of here. Thank you, everyone. Thank you for the answers. Thank you.