Hi, everybody. Thanks for coming to this post-lunch talk. Please don't fall asleep; hopefully it's exciting enough to keep you awake. To introduce myself, I'm Chris. I work for Apple as a software engineer, and I'm joined by... Hi, I'm Eric. I'm also a software engineer over at Apple. Cool. As you can see, we're going to be talking about Kata Containers and VirtualCluster. We're not really going to touch on Cluster API, but it is the basis: everything here is being provisioned through Cluster API.

To get us started: we're in the multi-tenancy track right now, so I'm going to assume all of you are interested in running multi-tenant Kubernetes. If not, hopefully it's still interesting to you. As everybody here knows, multi-tenancy in Kubernetes is hard. It's not easy, there are a lot of steps involved, and realistically, out of the box, it just doesn't work: there are a lot of pieces that you want to use as a Kubernetes user that simply aren't accessible to a tenant. To make this even harder, hard multi-tenancy, actual isolation of workloads, is very difficult on top of that. That comes down to the number of attack vectors Kubernetes has, and the difference between the access level a data plane has versus a control plane, plus all the pieces attached to a control plane that you can reach once you get something like cluster-admin.

This talk is split into two sections. We'll first talk about control plane multi-tenancy, then data plane multi-tenancy, and then about some improvements and features we've added to Kata recently to push that multi-tenant story even further in Kubernetes itself.

To get us started, multi-tenant control planes. There are a bunch of tools in this space these days. We're only going to focus on one, which I'm a maintainer of, called VirtualCluster. There's another project called vcluster where you could apply these same technologies and techniques, and there are lots of other tools in this space as well. Before we get to that, let's first talk about the issues you hit when running multi-tenant control planes at all.

One of the biggest things you run into, and I hope a lot of you have run into it, is clumsy, thrashy clients. If you have a Kubernetes API server and expose it to a bunch of different tenants, they're going to do things you don't want. Things like not setting resourceVersion=0 on list requests and causing, basically, direct etcd hits — say a pod list in a 2,000-pod namespace, where the API server has to do the full JSON serialization, it's hitting etcd, it's causing a lot of churn and, in essence, causing problems for other tenants through those requests. In this example, three tenants doing that pod list are basically hammering etcd, causing problems for everybody else and, in essence, taking down that single shared control plane.

On top of that, users these days — now that Kubernetes has become this base platform we build platforms on top of — want access to all of it. And out of the box, you can't give them that. You can't hand namespace access to just any random tenant without a lot of controls around it, without figuring out how to implement custom APIs to make all of that possible — things like HNC that you could throw in here to do this.
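Just to make the clumsy-client point concrete, here's a rough sketch of the difference a single query parameter makes — the namespace name here is made up:

```sh
# Hypothetical illustration of the "clumsy client" problem.
# With no resourceVersion, the API server does a quorum read against etcd
# and serializes the whole pod list -- expensive in a 2,000-pod namespace:
kubectl get --raw '/api/v1/namespaces/tenant-a/pods'

# With resourceVersion=0, the list can be served from the API server's
# watch cache instead of going straight to etcd:
kubectl get --raw '/api/v1/namespaces/tenant-a/pods?resourceVersion=0'
```

On a shared control plane you can't force every tenant to be polite about this; on a dedicated per-tenant control plane, the blast radius is just their own etcd.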
But out of the box, giving tenants that kind of cluster-level access just isn't possible, and when you layer those technologies on top, you end up in a space that isn't exactly Kubernetes anymore. You end up in a place where you can't use normal cloud-native tooling — things like ordinary CI/CD tools that want access to the entire cluster or to multiple namespaces, or the ability to create a whole namespace to do a blue-green deployment. So you end up in this place where it's just hard to implement those strategies. I hope those are things you've all actually experienced as well.

Now, on isolating these control planes: again, this is only one strategy among many, but it's the one we've been working on for a while, which is running VirtualCluster. I'm going to stay high level here rather than go in depth, because that would in essence be rehashing the talk that Fei Guo — from Alibaba, now at Microsoft — gave at KubeCon in 2020. In essence, what we're talking about is taking Kubernetes — that pink box on the side — and calling it a super cluster. Then we take Kubernetes API servers, controller managers, and etcd instances that are dedicated to tenants and run those as pods inside that cluster, or even outside of it, but just those pieces — you'll notice there's no scheduler there. Then you deploy two more main components: the vc-syncer up at the top and the vc-manager. The vc-manager is what's actually orchestrating the creation of those tenant control planes, along with Cluster API. The vc-syncer sits there as a multi-tenant syncer that listens to all the tenant control planes, takes the pod-schedulable resources, and syncs them down to the super cluster. That's a massive piece of work, but it's in essence the same amount of work the normal kube-scheduler has to do, just fanned out across different control planes now.

On top of that, at the node level — because we're talking about data plane isolation in a bit — there are a handful of important pieces. That green box, or technically all the green boxes, is VirtualCluster, and there's a piece in there called the vn-agent, the virtual node agent. It's an on-node agent that acts as a proxy, taking in requests from each of those tenant API servers. There's a lot behind this, and I don't expect you to absorb all of it; if you're interested, Fei's talk goes really deep into it, and it's a fantastic behind-the-scenes deep dive. The rest of it — the blue boxes — are all the Kubernetes bits we all know and use: CRIs, kubelets, and so on, plus the workloads that come from those tenant control planes. At the end of the day, a user of this system talks to one of those dedicated control planes — tenant A, tenant B, or tenant C — and their workloads end up as pod-schedulable resources in that lower-level cluster.

What this actually looks like behind the scenes is something like this. Imagine you have a massive cloud of infrastructure, you want to leverage it, and you want to start bin-packing workloads into it. You now have tenants A, B, and C — and you can even have a malicious tenant and a clumsy tenant — all talking to their own API servers, with those workloads getting scheduled down into the super cluster. But all of that is isolated.
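A rough sketch of what that syncing looks like from the outside — the kubeconfig names and the namespace hash here are made up:

```sh
# A tenant creates a pod against their own dedicated control plane:
kubectl --kubeconfig tenant-a.kubeconfig -n default run web --image=nginx

# The vc-syncer picks up that pod-schedulable resource and syncs it into the
# super cluster, under a namespace derived from the tenant cluster's identity,
# where the real kubelet/CRI machinery actually runs it:
kubectl --kubeconfig super.kubeconfig get pods -n default-27e4ab-vc-tenant-a
```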
Any of those interactions — and we're only talking about the control plane here — are isolated. If somebody is the clumsy client and does a massive pod list, they don't affect anybody but themselves, because it's only hitting their own etcd instance; they're not causing problems for everybody else. The same goes for a malicious tenant trying to do something bad: we actually have pieces in the syncer that let us stop pods from being scheduled if we don't want them to be — for example, pod security policies that the tenant can't change, since again that's a cluster-level resource. Yes, pod security policies are going away, but there are other resources like it; I forget what the replacement is called.

This then lets you take your infrastructure and start bin-packing it much more densely. So if you look at this, that first node — or that first rack — now has workloads from the malicious tenant, the clumsy tenant, and tenant A. And this can be done with anything from runC to Kata to gVisor under the hood, but we can get real benefits out of actually working on the data plane and making it more isolated.

To go back to the issues we listed with multi-tenant control planes: when you inject something like VirtualCluster — or vcluster, since it has a very similar architecture — you solve the clumsy clients, because they no longer affect anybody else. Worst case, they hurt themselves, which is far less bad than hurting the rest of the system; that's really what we're trying to protect against. You also get access to cluster-level resources, because you have cluster-admin on the tenant control plane and none of those resources end up in the super cluster. And you can use all of those cloud-native tools like you normally would — any of the operators you want to deploy, things like Crossplane — because you have cluster-admin capabilities and can deploy cluster-scoped resources.

But that only solves half the problem. Now we need to talk about workload isolation. Sweet. So, pretend I'm an infrastructure provider for the rest of this conversation: basically, we're doing remote code execution as a service — and not just for me, but for multiple end users. At the most basic level, people want to run their workload on a machine, and because we're doing multi-tenancy, two tenants can end up on the same exact machine, each just a process running on the host Linux kernel, on one node out of many. We use containers because we want each workload to have the view that it's the only thing running on the machine, constrained appropriately, with no denial of service between workloads, and so on. These are great features of the Linux kernel: namespaces, cgroups, judiciously picking which capabilities are appropriate for the workload, filtering out syscalls, things like that. And it works pretty well. The concern from a security perspective is that this is all a single interface: every one of those protections is built on the host kernel. So if you do have a zero-day in the form of a privilege escalation, you're now root on the host. Me as a provider — that's concerning.
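For reference, here's a sketch of those single-kernel isolation knobs in pod-spec form; the values are illustrative, not a recommendation:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: constrained-workload
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      runAsNonRoot: true
      runAsUser: 65534
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]          # judiciously pick capabilities
      seccompProfile:
        type: RuntimeDefault   # filter out syscalls
EOF
```

Every one of those knobs still terminates in the same shared host kernel, which is exactly the worry being described here.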
And it's not just the provider: you, as maybe one of many tenants on that node, should be pretty concerned about this as well. Mutually distrusting tenants — yeah, a host-kernel escape is bad for them too.

So we have different options here. The first option is essentially YOLO: given the security profile we have, what is the cost of an escape? If it's low — we don't have very sensitive information, the tenants do trust each other a little bit — then it doesn't matter; don't pay the cost of extra isolation. But we're talking about hard multi-tenancy, so that's out. The other option is: don't run workloads from different tenants on the same node — give everybody their own node pool. That has challenges. One, you get fragmentation of resources, where one tenant uses a hundred percent of its capacity and another uses five percent. That's unfortunate. Two, if you do have an escape, as the infrastructure provider that's still concerning — I no longer have two layers of isolation. And yes, an escape leaves you stuck on a single Kubernetes worker node, but a worker node has a lot of capabilities, both in terms of authorization to Kubernetes APIs and in terms of whatever the network gives it access to. So for our purposes, that's not quite enough, and we're going to look at providing stronger workload isolation.

So, let's have two layers of isolation. In the Kubernetes and container world we'd generally call these sandboxed runtimes. gVisor is a good example, and the other one — since we're talking about Kata, you can guess it — is Kata Containers. Going from a traditional container on the left side to Kata Containers on the right: we launch a minimal virtual machine, so one layer is hardware virtualization. Then, within that, on a guest Linux kernel, we create regular containers using namespaces, cgroups, all the usual things — the second layer of isolation is the guest kernel. In a touch more detail, here's what that ends up looking like: the kubelet talks to containerd or CRI-O, which talks to a task shim below it — a Kata shim here, though traditionally it might be a runc one, or really anything. In Kata's case, the shim works with a virtual machine monitor — say QEMU or Cloud Hypervisor — to launch that minimally configured virtual machine. A little Linux kernel boots up a user space whose init process is the kata-agent, which actually manages the lifecycle of the container itself inside the guest. We'll talk about these little components a bit more — that's what I'm introducing at the bottom. There's way too much information down there that you shouldn't care about, other than to know that networking just works: you usually drop a veth into a network namespace, and then all traffic is piped directly into the guest so the container workload has access to it without any configuration necessary.

And then we can get into how we can leverage these things together. Cool, thank you for that. So, VirtualCluster introduces one thing that I left completely out until now. Out of the box, Kubernetes does something that's kind of odd: Services, because they're a virtual construct, have their IPs allocated from a CIDR range that's hard-coded into the API server's flags. You set that flag, and ClusterIPs basically get allocated from it.
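To make that concrete, here's roughly where those ranges come from — the CIDR values below are made up for illustration:

```sh
# Each tenant API server hands out Service ClusterIPs from whatever range
# it was started with, and nothing stops tenants from overlapping:
kube-apiserver --service-cluster-ip-range=10.32.0.0/16 ...   # tenant A
kube-apiserver --service-cluster-ip-range=10.32.0.0/16 ...   # tenant B, same range

# Meanwhile the super cluster's own API server has a completely unrelated range:
kube-apiserver --service-cluster-ip-range=10.96.0.0/12 ...
```

Nothing underneath the tenant API servers knows anything about those tenant ranges, which is exactly the problem we're about to walk through.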
Now, if you think back to that earlier diagram: you have, in essence, three tenants each talking to their own control plane above one super cluster — so from whose perspective are those service IPs supposed to be routable? That's where things start to break down. So we got together to figure out how we could make this all possible using Kata under the hood, and that's what we're going to dive into.

Again, this is that architecture. Imagine those three tenants: the service ClusterIP ranges in their API server flags are allocating the "routable" IP addresses for them, and those ranges can overlap — say both tenants use the same 192.168 range. We're punching that into the super cluster, which has a completely different, or potentially overlapping, range, and you can cause a lot of problems.

So here's a quick walkthrough of how all of this functions and what we've added to make it possible. Andrea, up in the corner, comes through and creates a pod against the tenant API server. Andrea is a decent client, not going to cause too many problems for us, but she's going to create a pod. At the same time, alongside that tenant API server, we run an instance of kube-proxy. It runs in a Kata container that's considered a privileged Kata container for the control plane, and it has one single sidecar alongside it — so we're not making any changes to the core kube-proxy code base — called the exporter. The exporter sits there and says: "hey, I got an update in my iptables, I need to do something."

Now, that's jumping ahead a little, because at this point VirtualCluster — the architecture I showed before — is syncing that pod object down into the super cluster's API server, and the normal Kubernetes bits happen: the kubelet gets a notification that it has a workload to run, it hits containerd or CRI-O, the pod gets created, and that whole sandbox comes up — in this world we're showing a Kata workload running there rather than something like runC.

Going back to kube-proxy over here: it gets that update, and we push the resulting iptables rules back up into the tenant API server. Some of this is not the most scalable architecture as it currently stands, and we'll talk about that in a couple of slides. But right now, what we're doing is taking those iptables rules, syncing them up to the tenant control plane, and saying: this tenant has this set of iptables rules, and I want them applied to every single workload it has. I talked about the vn-agent before, the on-node agent: it takes in those iptables rules and tells the Kata shim to set them, because the workload belongs to that same tenant. So now we have per-tenant iptables rules that get pushed into every single workload that tenant runs. Each time this process repeats, a new rule set gets pushed into those Kata containers, and that's what creates a routable service ClusterIP range. I forgot the last slide, which shows the rules going through the Kata shim to the kata-agent, which applies them inside the guest.
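Roughly, the kind of rule that ends up inside each Kata guest looks like ordinary kube-proxy NAT — this is a sketch, the chain names are made up, `TENANT_SVC_IP` stands for a ClusterIP allocated by the tenant API server, and `POD_IP` for the backing pod in the super cluster:

```sh
# Create a per-service chain and match traffic to the tenant's ClusterIP...
iptables -t nat -N KUBE-SVC-FOO
iptables -t nat -A KUBE-SERVICES -d "$TENANT_SVC_IP/32" -p tcp --dport 80 \
  -m comment --comment "default/foo cluster IP" -j KUBE-SVC-FOO
# ...and DNAT it to a real pod IP, which is reachable via plain host networking:
iptables -t nat -A KUBE-SVC-FOO -p tcp -j DNAT --to-destination "$POD_IP:80"
```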
And at that point, if you're in one of these tenants — here's a slightly deeper dive. Imagine we're back in that example with two nodes, and we have two different Kata workloads running, one at 10.0.0.1 and one at 10.1.0.1, on nodes whose IPs we don't really care about here. Say you're coming from that first one and you curl the foo service. One of the rules that got pushed into the guest's iptables matches that service address — the 192.168-something-20 one — and figures out where it needs to go: inside the Kata guest we can do an iptables DNAT to that 10.1.0.1 pod. From there it just hits host networking, goes across, and lands in the actual workload.

Yeah. So, a quick demo of how this all functions, and then I'll come back to that slide to give you a refresher. This is all recorded, because I am not going to do this live. It's going to go through and do exactly what we've been talking about. I'll show you the deployments here, just to show what it's like to create a cluster. If we look at this, this is what a VirtualCluster spec looks like. Am I doing okay on time? Okay. So that's the spec you create for a cluster: there's a cluster domain in there, which gets pushed into things like the CoreDNS you'll see deployed, plus some other flags that don't really matter for this demo.

Let's start. This uses a kubectl virtual cluster plugin, which writes out a kubeconfig for this specific tenant control plane. It brings up an etcd instance — remember, I told you it only brings up a couple of components — so: the etcd instance, the API server, and the controller manager. The cluster is now created and we have access to it. Just to show it, the kubeconfig for one of these tenant control planes looks something like this; we're redacting all the certificate data. Maybe I can speed this up from here — I'm still nervous even though it's recorded — but anyway.

Now let's look at what that virtual cluster contains. If you do `kubectl get all -A` — all namespaces — this is all you see. And I'm going to do it one more time, hitting the super cluster, just to show the difference. So the tenant control plane we were just talking to is running inside the cluster I'm hitting now, and this one has all of these resources. There are a bunch of important pieces here that I'll show later, but in essence we now have a couple of new namespaces: that "default" with the hash and "vc-sample-1" — the cluster name — as part of the prefixed namespace name. This is how we can create namespaces on behalf of users in the lower-level super cluster and isolate them: we prefix all of those namespaces. And — this question always gets asked — if you have long namespace names, it truncates them for you but still keeps them unique.

So let's clear that and get back up to the top. Now we're going to deploy something into that cluster. The first thing we'll deploy is CoreDNS, because who needs a cluster if you don't have DNS in there.
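As a hypothetical illustration of that prefixing (the hash here is invented), the tenant's `default` namespace shows up in the super cluster under a name built from the tenant namespace, a hash, and the virtual cluster's name — something like `default-27e4ab-vc-sample-1`:

```sh
# List the namespaces the super cluster created on behalf of this tenant:
kubectl --kubeconfig super.kubeconfig get namespaces | grep vc-sample-1
```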
And what's important about those namespace prefixes is that they're actual parts of the FQDN when you're inside a cluster. That's why we need this: we configure this CoreDNS instance as the nameserver on every single pod that comes from your tenant control plane, so everybody has their own isolated DNS server, and you can point it at whatever resolvers you want behind the scenes.

Again, I'll show the difference between the clusters. What's cool here is that there are ReplicaSets and Deployments in the tenant, but none of those objects from the tenant show up in the super cluster. This is why we deploy a controller manager per tenant: we only care about pod-schedulable resources. Once the controller manager in the tenant has created a pod for a Deployment, we take that pod and push it into the super cluster. The super cluster becomes a pod-scheduling domain, which lets us scale this even wider. (I missed calling it out, but it showed the pod up at the top with its "-default" namespace.)

Next, we're going to deploy a Kata workload. This doesn't actually call out Kata via a runtime class, because to make it extra secure we inject that automatically — you don't even have to set a runtime class. You deploy a pod like a normal Kubernetes deployment, and it ends up in Kata behind the scenes. What I'm deploying here is two applications — php-apache, because we all love PHP, right? Cool. — and a networking pod so we can run some tests, plus one Service so we can do routing tests in the cluster. I'll speed this up a bit.

Now we get the pods and services in the tenant control plane, to show that those things were created and to show the service ClusterIPs. You'll notice we have 10.32.0.1 for the Kubernetes service and 10.32.0.106 for the other one — those are allocated by the tenant control plane for its services. The pod IPs, on the other hand, are in that 192.168 range, which is exposed by the super cluster. So in this world we have routable pod ranges, and we expect something like the CNI to provide zero trust out of the box at that level. On the super cluster side you'll see the same IPs for the pods but different IPs for the super cluster's own services — those exist, but they're non-routable from the tenant's standpoint.

Now if we exec into this pod and run iptables-save, just to show what's happening — I'll use the default Kubernetes service as the example — you can see a rule was written into that Kata container for that 10.32.0.1 address, the tenant's range, not the super cluster's, with the comments kube-proxy normally injects. I'll run a couple more tests in here to show how things function. Using one of the php applications, we grab the pod IP and try to route to it, which goes through the same networking stack under the hood. So we exec into that networking pod again and grep for that service to see how it translates: you'll see the rule DNATs to a destination in that 192.168 pod range. I'm not showing the Service definition itself, but in essence the ClusterIP it matches on is the same 10.32 address that's exposed by the tenant control plane.
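On the runtime class injection mentioned a moment ago: what that amounts to, roughly, is the standard Kubernetes RuntimeClass mechanism. This is a sketch — the class name `kata`, the handler, and the image are assumptions — and the point is simply that in this setup the `runtimeClassName` is added for you rather than set by the tenant:

```sh
kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata             # CRI runtime handler that boots the micro-VM
---
apiVersion: v1
kind: Pod
metadata:
  name: php-apache
spec:
  runtimeClassName: kata  # injected automatically in this setup
  containers:
  - name: php-apache
    image: registry.k8s.io/hpa-example
EOF
```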
Continuing the demo, we're now going to curl things — starting with the Kubernetes service, hitting the health endpoint, since I'm unauthorized for anything else anyway. This resolves through the DNS server inside the cluster, because we configure the nameservers automatically for you on every single pod, and it hits your tenant control plane, not the super cluster — it never touches any of the services exposed at the lower level. Then we try one more thing: we grab the ClusterIP of that Kubernetes service and make the exact same request to the health endpoint. Again, this address is not routable in the super cluster — if I went to one of those nodes and tried to curl it, it wouldn't work — and I get the healthy "ok" back.

We'll do a couple more tests, because it gets fun. From the networking pod we curl the php-apache application on default, and it returns "OK!". Then we do it one more time, grabbing the service ClusterIP, and one more test after that just to show all of these pieces working. We've got the actual IP of the pod. Cool. So now we have that 10.32 address; curl it — "OK!". Sweet. So we're basically showing that this is now a new routable range for all of these pods. One more test with the IP address we grabbed above — that's the test networking pod — and again we get "OK!". So we can also hit the pods directly if we want to.

And then there's one other thing, which I think is an interesting layering of the architectures. I can take this LoadBalancer Service: if you noticed before, this virtual cluster is running on AWS, and we have the AWS cloud controller manager deployed and configured. Because we do dual services, when we deploy this we can leverage the platform to expose things at the platform level. If you wanted a platform-level ingress, for example, you could do that kind of thing — or, as in this case, a Service of type LoadBalancer that goes and creates an ELB, attaches it to the same pods in the cluster, and is routable as a platform-level load balancer, rather than every tenant having to own those kinds of tools. So you can layer this and build pieces that are essentially invisible to the user but act kind of like magic. I sped this part up like crazy, because if you've used AWS you know those ELBs aren't immediately routable. And I'll pause it right here, because this is the piece: we're now curling that ELB endpoint and getting the "OK!" back.

Going back to the slides — I think I'm getting close on time. This is that same flow end to end: behind the scenes, whenever you deploy any of those pods or services, this whole pipeline runs for you, pushing those iptables rules into the Kata containers and making everything routable.
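For reference, from the tenant's point of view that last piece is just an ordinary Service of type LoadBalancer — a sketch with assumed names; the interesting part is that the super cluster's AWS cloud controller manager underneath is what actually provisions the ELB and wires it to the synced pods:

```sh
kubectl --kubeconfig tenant-a.kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: php-apache-public
spec:
  type: LoadBalancer      # satisfied by the super cluster's cloud integration
  selector:
    app: php-apache
  ports:
  - port: 80
    targetPort: 80
EOF
```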
Now let me talk about the future, and this is where I was saying this isn't the most scalable architecture. If you've ever run iptables-mode kube-proxy at scale, you know the amount of network traffic generated every time one of these rule sets updates is a lot — an absolute lot. Luckily, the community has actually started on this; we learned about it after we'd pretty much finished this project. There's a new piece of work — still a KEP — called KPNG, at sigs.k8s.io/kpng. It's the next-generation kube-proxy, "proxy NG". It lets you set up a network control plane, and then every node runs a network agent — instead of something like the vn-agent, or the CNI's own machinery, or today's kube-proxy — that holds a gRPC connection back to that network control plane. Our long-term plan is to integrate with this instead, because it lets us do really interesting things like pushing individual rules through — a piece they've started to implement in the KPNG world — rather than pushing the entire iptables rule set every time. There's also a set of what they call sinks on the node-agent side, where you can push straight into eBPF and so on. It's a really, really cool project if you haven't heard of it: still in the KEP phase, but it could remove some of the big pains we have with kube-proxy in Kubernetes.

And one last thing. I've been calling this hard multi-tenancy: we're getting closer, but we're not entirely there. Those pods are still routable on the cluster, and you'd want something like a zero-trust network out of the box. That's the one piece we still want to get to, and that's going to come in another KubeCon talk. And that's the end. So yeah, please provide feedback, and if you have any questions, we're happy to take them — unless we don't have time. I don't know.

I have one question. Hey, this was really cool. You showed the routing with the iptables and everything — I'm just curious, service meshes are obviously huge; is this something that would work with meshes out of the box? And what about tenants who want to use meshes inside their virtual clusters?

Yep. They could technically run a service mesh in there, as long as they have the privileges to access that guest OS, and get some of those same functions. The place where service meshes can't do the same thing is in how their informers are set up to talk to a Kubernetes API server: out of the box, they'd be talking to the super cluster, and you'd have to break that apart, so there's a little bit of work to make that possible. But with this architecture you could layer a service mesh on top and let it function — you could have Linkerd and Istio in the same cluster at that point.

Thanks. Awesome talk. So the second part of the talk is mostly about the techniques you used to isolate the workloads in that big data plane, right? But that's only a problem you need to solve because you went with the big-cluster approach — the super cluster with the massive shared data plane. So you could have done something different.
Like the options you mentioned at the beginning — maybe hosted control planes in a managed cluster and then completely separate, dedicated data planes for each of them, whatever. So can you talk a little bit about why you chose the big-cluster approach, and how it's a good fit for your scenario versus the other options?

Yeah, absolutely. So, why virtual clusters? A good use case: control planes can be expensive, especially if you live in an on-premise world where you're deploying onto bare-metal hosts — that's a very expensive thing. Layering this architecture lets you build massive clusters that you support at that level, with smaller, more single-purpose control planes on top that you can easily isolate. That's really the big piece behind it. There are other reasons too — for the folks at Alibaba, where this originally came from, it lets them do this sort of isolation in their massive multi-tenant environments.

Hey, two questions, starting with the virtual cluster side. Once you set up a virtual cluster, you think all your problems are solved; you give it to your customers and say, "hey, you basically have your own Kubernetes cluster, do whatever you want." Two minutes later they're going to try to deploy the Prometheus stack, or the NVIDIA DCGM exporter, or something like that which requires a DaemonSet with host access, and they're going to be really annoyed and say, "hey, you didn't give me a true Kubernetes cluster." I wonder, both with and without Kata Containers, whether you've done any work around that — because with Kata Containers, you'd really need to schedule that pod inside the same sandbox the workload is running in to be able to extract metrics for it. The same goes for log collection, Fluent Bit, that type of stuff.

Yeah, I'm sort of struggling — can you help me a little with what you're trying to get to?

Sure. If users want to run their own, say, Prometheus instance, they're going to want to run their own node exporter, and you obviously can't let them run that on your bare-metal host, because that's where everything runs. So do you provide any facility for them to run that inside the Kata container — the same sandbox their other workloads are running in?

Not as of now. That's something you could potentially do; it's just something we haven't looked at. We are doing things we haven't pushed up yet, and we're actively talking with the rest of the folks on VirtualCluster about how to expose metrics better, but not node-level metrics — so not NPD or anything like that.

No problem. And what do you do about logging? How do people get logs if they can't deploy their own Fluent Bit DaemonSets and that type of stuff?

Yeah. So out of the box, you get access to logs through the normal path — through that vn-agent. It's basically a proxy that lets you get to the actual logs of an individual container; it injects the prefixes, which is what keeps you from being able to talk to another tenant's workloads. Running something like a DaemonSet with a logging agent inside the tenant — that's something we don't have yet.

Okay. Yeah, I mean, you can run your own logging agent, collect that on their behalf, and do some multi-tenancy there.
I guess that's the same thing we do. The next question is around performance in Kata Containers. I haven't looked at Kata Containers in a while, but both in terms of GPU passthrough and storage, it looked like it would have a lot of overhead — the Kata way of doing storage passthrough with virtio-fs seemed very unoptimized, at least a while back.

There's actually a wonderful talk on Friday afternoon — when almost everyone else is gone, a good slot right around then — about how to mitigate the cost of shared filesystem performance.

Okay, I guess I need to change my flight.

But basically, best case scenario, your provider has some kind of directly assigned storage.

Yeah. So you would mount that inside the Kata container, with something like direct passthrough?

Yeah — or you do direct assignment of the block device itself. A lot of times you end up creating a block device, attaching it to a host, mounting it, and passing that mount in, and that's just a little extra overhead; if you can do the mount inside the guest and deal with it at the block level, it's much better performance — but there is still some cost.

What type of applications do you use this for? Any high-performance applications, like neural-net training?

It's a variety — including, if you look at it, different database applications, things like that.

So you consider the performance acceptable for all types of use cases?

In our specific instances, yeah, I haven't had any issues in particular, but it really varies depending on your end user and what they're looking at.

Cool. Thank you. Yeah, great presentation. We have one minute left — any quick questions? Any last-minute thoughts? Comments? You can also file feedback right here. Awesome. Oh, we've got one more.

So I see that you're using Kata Containers for the workloads — but have you thought about using Kata Containers to isolate the control planes? Why would you, or why wouldn't you?

In this world, behind the scenes, those control planes actually are being deployed into Kata containers; we're just not talking about that piece. Under the hood, they're deployed into one or more of those super clusters that are configured with the default runtime class set to the micro-VM runtime, so it happens automatically — we just didn't call it out in the slides.

Thanks. And as well, there are some Kata stickers up here afterwards if you want them — I don't know if there's enough for everybody, but there are some. Cool. All right. Thanks, everybody.