I'm William Denniss, and thank you for coming to my session on building a nodeless Kubernetes platform. First, a little background about me. I'm a product manager in Google Cloud, where I work on Kubernetes Engine. In 2019 I co-founded GKE Autopilot, and that's what I currently work on. It's a new mode of operation for GKE, which I'll be talking a bit about today. I'm a very big supporter of the Kubernetes open-source project, and in 2017 I also co-founded the Certified Kubernetes Conformance Program, which is still used today to ensure portability between vendors and distributions of Kubernetes. I'm also writing a book published by Manning called Kubernetes for Developers.

A little bit about this session: I hope it's going to be useful for all of you as you build, or build on, Kubernetes platforms of your own. I plan to give you a behind-the-scenes look at the creation of GKE Autopilot, a fully managed platform for Kubernetes built by my team. Throughout the presentation I'll give some arguments for and against the idea of having nodes in a so-called nodeless platform, and at the end I'll talk a little about some of the future possibilities this design enables.

Firstly, let me do a quick audience poll. Raise your hand if you believe it's possible to have a nodeless Kubernetes where there is still technically a node object. Got a couple? Not too many? Who doesn't? Everyone else? Yeah? And third option: who believes I was just adding a bit of controversy to my KubeCon talk to spruce up the marketing a bit? All right. Well, the truth probably lies somewhere in the middle of those three. But let's dig in.

As we look at building a fully managed Kubernetes platform, I think it's worth taking a step back and looking at a traditional Kubernetes platform, and I'll be using GKE as the example here. A traditional Kubernetes platform basically consists of two APIs that you need to interact with as the developer. The first one, above the line, is the Kubernetes API. That's what you're here for, right? That's what you're here at KubeCon for. That's what you use Kubernetes for. You want to describe your stateless app in a Deployment. You want to maybe represent a Redis or MariaDB database as a StatefulSet. You want to create Jobs for your workloads, et cetera. And then you have this other API under the line, which is the platform API, whether it's GKE or one of the others. With that API, you have to configure the cluster in order to serve those Kubernetes objects: you might have to create nodes of a certain size or with certain capabilities. I believe that an ideal fully managed Kubernetes platform would just be the Kubernetes API, where you interact entirely at that level using kubectl and YAML files, basically.

So what does it mean to build a fully managed Kubernetes platform? When I was looking at this problem in 2018, I wrote a position paper with three headings. Let me quickly share them here. The first was that a nodeless, or fully managed, Kubernetes platform should still be Kubernetes. Secondly, the containers should be able to utilize unused reserved capacity; in other words, allow for bursting. One of the reasons people come to Kubernetes is to be able to pool their resources and potentially burst when needed.
And the third thing is that I wanted to make sure we priced this in such a way that it supported continued usage. I didn't want us creating a toy version, something you wouldn't want to run for 100% of your workloads 100% of the time. And on node visibility, I had a suggestion to just make the nodes visible, while still hiding certain bits like maybe the VM. I'll be digging into that in a bit.

Back to that first point, though: nodeless Kubernetes, or fully managed Kubernetes, should still be Kubernetes. Why is that important? When we approached this problem of building a simpler-to-operate, simpler-to-use Kubernetes, a lot of people came to me and said, well, William, Kubernetes is hard. There's a lot you have to learn to get started. There's a lot you need to do just to deploy an app. Maybe at the same time we should simplify that and create a cut-down or simplified experience so you can deploy things more easily. I think that would be a mistake, and the reason is that it misses the point of why people choose Kubernetes and why it's so successful. I believe the reason is that it's an orchestration layer designed for professionals. These are professionals; they might be running a massive website for a Fortune 500 company, for example. They need power. They need flexibility. One day someone might approach them and say, hey, I need you to run a Redis database, a stateful workload. Kubernetes can handle that. I believe the power behind Kubernetes is that flexibility and scalability. So simplicity of the Kubernetes layer was a non-goal; only simplicity of operating the cluster was the goal.

Now, the team had a couple of different operational models to consider when thinking about how to actually run pods on this fully managed product. Maybe I'll pause there and just say what this product does: our aim was that you just create a Kubernetes workload, you don't have to configure nodes or manage anything else, and we provision all the infrastructure for you. With that in mind, the team looked at a number of different models for actually handling that compute.

The first was to use Borg. I don't know how many people are familiar with Borg; it's a container orchestrator developed by Google and used internally, and it was the subject of an academic paper which you can find online. It actually formed the inspiration for Kubernetes itself many years ago. So one option would have been to run your pod containers as so-called tasks on Borg. One advantage of that system is that it's massively multi-tenant, and we would have had very rapid scaling and many other benefits.

The second option was to reuse more of GKE and run each pod in its own VM. The benefit is that by being a lot closer to GKE it would be more compatible with various different infrastructure. The downside of that approach is that it would limit the pod sizes we could offer to just the range of VM sizes we have; you wouldn't be able to do something like a 5.25-CPU pod on this system, because we don't have a VM of that size.

The final option was to just be like GKE Standard, the existing product:
Run multiple pods per node and basically don't change it too much. The benefit, when we looked at it, was that it would also support other Kubernetes constructs that actually rely on nodes, such as DaemonSets, pod affinity, node affinity, things like that. In the end, the team chose to make it just like GKE. One of the other nice things about this is that we were able to offer a wide range of pod sizes, scaling from a quarter of a core all the way to 28 cores, in quarter-core increments. We'd be able to support something like a 17.75-vCPU pod without having to scale it up to a predetermined size. It also provides for maximum compatibility. That was one of the first foundational decisions we made on the path of building this product.

The next question was around visibility. If we're trying to create a fully managed Kubernetes platform, one that's so-called nodeless, what does it mean when you run kubectl get nodes? Should it list all the nodes that are actually there? Should we group them together and just return one "Autopilot node", hiding all that detail? In the end we went with just listing the nodes: be transparent, show the user what's actually happening under the hood, even if most of the time they don't have to care about it.

The second decision we looked at was the actual inner workings of those nodes. Should we be transparent about what shape those nodes are? Should we tell you if it's an Intel or an AMD, this type of machine or that type of machine? Should we disclose how much of the allocatable capacity has been used? This was the subject of a lot of debate in the team. Some people said, no, we should hide it, because you shouldn't have to care about this, so why should you know about it? But in the end we went with full transparency. The reason is that we want you to trust us to do the right thing with your pods when you schedule them on this platform, but we're going to let you verify what we did as well. If you want to poke under the covers and see exactly how things landed, it's all there, fully transparent. I think we literally don't hide a single field compared to the GKE Standard product.

The other question was: can users access the VM object separately, outside of Kubernetes? At this point it's probably worth mentioning that the diagram I showed before, with two APIs developers use to interact with this system, actually has a third one right at the bottom: the VM API. The reason I didn't show it before is that people using Kubernetes typically don't have to care about the VM API. It's 100% managed for you, but it happens to be there and you can interact with it; for example, you can SSH into a node. So one of the questions we had was: should we allow access to that object? The decision was no. In this case, we completely eliminate that API, since it's not necessary at all for the developer.

So those were some of the key design decisions going into this. Now I'm going to cover a little bit about how we actually implemented it. We built this product using various components that already existed in GKE. The way GKE works, and the platform we were dealing with, is that nodes of the same configuration are grouped into a semantic grouping called a node pool. So, for example, up on screen there, I have a node pool with eight vCPU cores and 16 gigabytes of memory, and there's another one with four and four.
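To make the walkthrough that follows concrete, here's a minimal sketch of the two kinds of pending pods I'm about to describe. The names, images and sizes here are illustrative, not taken from the product:

```yaml
# Two illustrative pending pods. The first fits the existing
# 4 vCPU / 4 GB node pool; the second fits neither existing pool.
apiVersion: v1
kind: Pod
metadata:
  name: fits-existing-pool   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "2"
        memory: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: needs-new-pool       # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "12"
        memory: 24Gi
```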
And so what happens is, if there's existing space on one of those nodes, the pod will just get placed. So let's say we have two pending pods here, one that can fit on one of these existing node pools and one that can't. For the one that can fit, there's a component called the cluster autoscaler, which will look at existing node pools and extend a node pool with an extra node that can handle that pod, and therefore that pod will be scheduled. For the pod that didn't fit any of the existing node pool definitions, we used a separate component that exists in GKE called the node auto-provisioner, which is capable of creating a new node pool definition, in this case a larger one with 16 vCPU and 32 gigabytes of memory, so that the pod can fit. The final step is that the cluster autoscaler is then responsible for actually creating a node in that new node pool to run the pod. So under the hood, that is what is happening; that is how we built it.

The way this system actuates on user input is that, because we've pretty much eliminated the node API and the node pool API, there's no way for users to specify those things, so we derive everything from the pod spec. One simple example is the resources needed: things like the CPU and the memory we derive from the resource requests of the pod. Another example is node features, and this is an interesting one. In the past, if you wanted a bunch of nodes with a particular feature, for example spot compute, you would typically go and create a spot node pool and then target that node pool with your pods. With Autopilot, we flip the script: you specify the requirement just in the pod spec, and we actuate on that and provision a node that can handle it. I think that's actually a really nice design, because it means all the configuration of hardware properties, like "this should be a spot node" and in the future potentially other things like "this pod needs a GPU", is done at the pod level, at the workload level, right where the rest of your configuration is. You don't have to do a multi-pass where you design your pods, write those specs, and then go figure out the nodes that can run them. There are a couple of other components we used that I won't go into in much detail: release channels to keep the nodes updated, and node auto-repair to replace unhealthy nodes.

Okay, so that's how we provision the resources to manage the pods. Another aspect of the implementation was admission webhooks, and we had to achieve two things here. I mentioned that we built the system to support a very wide range of pod resources, but there are still some limits: the pod CPU needs to be between a quarter of a core and 28 cores, and there's a ratio of CPU to memory. So the first part is a mutating webhook that looks at the pod requests to ensure everything is within range, and if something is outside the acceptable values, we will actually just mutate the pod and fix it for you. When we do that, we emit a warning if you're using kubectl, and we also write an annotation into the pod spec that logs what we changed so you can audit it.
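Before I get to the second webhook, here's a sketch of what "derive everything from the pod spec" looks like from the user's side. I'm assuming the cloud.google.com/gke-spot node selector key here, and the workload details are illustrative:

```yaml
# Sketch: resources and node features are expressed purely in the
# pod spec; the platform provisions matching infrastructure.
apiVersion: v1
kind: Pod
metadata:
  name: spot-worker                     # hypothetical name
spec:
  nodeSelector:
    cloud.google.com/gke-spot: "true"   # assumed spot selector key
  containers:
  - name: worker
    image: example.com/batch:1.0        # hypothetical image
    resources:
      requests:
        cpu: "5.25"    # granular sizes like this are the point
        memory: 20Gi
```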
The second part is a validating admission controller, designed to enforce policies that prevent users from running admin-level workloads on the nodes. Why is that needed? The reason we need to restrict admin-level workloads, basically preventing root access, is that we want to offer a fully managed platform where Google's SRE team is basically responsible for running these nodes. That means we can't really offer you, the users, direct root-level access to those nodes, because then people could potentially go in, modify the kernel, change bits and pieces, and we'd essentially lose the confidence to manage that thing for you, because we wouldn't know what had happened to the node. So it's important for us that all the nodes look the same, or at least have very well-known properties, and we have an admission controller to enforce those policies.

What did we pick as our list of enforced policies? This is the list. One of the simple ones, which I already mentioned, is limiting privileged pods: a pod with a securityContext of privileged: true basically gives you almost root access on the node, so we reject that. We also reject some Linux capabilities, like SYS_ADMIN. Interestingly, though, many Linux capabilities are still offered in this product; ptrace was actually requested by one of our security partners, who wanted to use it to inspect running processes. We looked at that, felt it was actually quite fine to offer, and added it to the list of capabilities you can use.

Other things we clamped down on are things that relate directly to the node. The goal here is to build a nodeless product, so we don't really want people using hostPort and running a container on, say, port 80 on the host, because then if you try to schedule another container that also wants port 80, we can't colocate it on the same node; it breaks our bin-packing model and impacts the platform. So we had to limit that. We also limited host networking, which is fairly highly privileged. hostPath is also restricted, although you can mount /var/log in read-only mode, which means that as a user you can have, say, a DaemonSet scraping logs; that's totally fine. As far as node affinity keys are concerned, we restrict hostname, because again, we don't want users thinking about nodes or targeting specific nodes; but we allow many other node affinity keys, like the zonal or regional topology keys, things like that.

One thing we didn't restrict was the ability to run a container as the root user. You can restrict that yourself using the open-source Pod Security Admission. We didn't actually need to restrict it, because the security boundary of this product is still the VM; it's not actually a multi-tenant system at all. The VMs are still 100% your VMs, your nodes. So we didn't feel the need to restrict this, and if we had, then from a usability perspective something like half of all Docker images wouldn't run, so that would be a problem too.

Now, as I've been talking about this implementation, one of the interesting things is that you could have done every single thing I've described yourself. You can use the cluster autoscaler, you can use the node auto-provisioner, you can write your own mutating webhook and admission controller. You can literally build exactly what I just described yourselves, today, on GKE Standard.
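For instance, the registration for a DIY version of the resource-clamping mutating webhook might look something like the sketch below. The names, namespace, service and path are all hypothetical placeholders, not anything Google ships:

```yaml
# Hypothetical registration for a home-built pod-mutating webhook.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-resource-clamper     # placeholder name
webhooks:
- name: clamp.example.com        # placeholder webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: webhook-system  # placeholder service location
      name: pod-clamper
      path: /mutate
    # caBundle for the webhook server omitted here
  failurePolicy: Fail
```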
So if that's the case, why do we even need this other product, aside from the fact that it's a bit challenging creating these mutating webhooks and so on? Obviously one of the benefits of us doing it is that it's all preconfigured in a nice package, but that alone is probably not enough. There are a couple of things we add with this product that are hard to do yourselves. One is that the billing model is different: it's request-based, so we charge based on the pod requests rather than the nodes. Obviously that's not something you can change as a user.

Probably the biggest selling point is that by creating a fully managed platform with nodes in a known condition, for the first time we were actually able to add node SRE to this product. In the traditional GKE Kubernetes platform, the nodes were kind of a shared-responsibility model. Google would take a lot of responsibility for them, but the users would also be responsible, and that was because people could go in with root access and change them, and it was hard for us to know what had changed. With this product, since we've eliminated that, we can for the first time offer SRE. Essentially that's the bargain you make when you use this product: you give up that little bit of control, not being able to modify the node, which I think for most workloads is totally fine to give up (if you're running MariaDB, you shouldn't need root access), and what you get in return is a more fully managed system with us being the SRE.

The other thing is that, as I mentioned, we eliminated the VM API completely, and there is no visibility of these VMs. The VMs are actually still there; they're exactly the same, just with a different prefix. If you look at the kubectl get nodes output, you might notice the Autopilot nodes have the prefix gk3 instead of gke. The virtual machines are still there in the product, but the API is removed. For users, particularly security-conscious ones, that has the benefit that they don't have to worry about things like SSH, because there is no SSH into these nodes; it's more locked down. That's another advantage of the product.

Okay, so I've covered the design and the implementation of how we built it. Let's look now at the result. Where did we land? What does this look like? At the beginning I showed the diagram of the two APIs users essentially have to interact with in order to use Kubernetes, and then the version with that extra VM API at the bottom, which you typically don't have to use, but it's there and you might have to care that it's there. With Autopilot, we were able to shrink the entire GKE API surface area down to one command: create. You just create the cluster, you connect with kubectl, and from that point on you are 100% interacting using the Kubernetes API. I like to think of it as a very pure Kubernetes platform. The API you're using on this platform is just the Kubernetes API. There's no node pool API; you don't have to configure nodes, autoscaling, anything like that. You just interact with it through Kubernetes. From a UI standpoint, the UI representation of that API is also pretty nice: there are just three fields, and you can create basically a production-grade cluster (the name is arbitrary).

Okay. Now for the meat of this talk, I guess: what are the benefits realized from all this?
From this design of building a fully managed platform which actually looks a lot like traditional GKE, right? It has pretty much the same nodes under the hood, the same multiple pods per node, a lot of the same things. Other than the fact that that potentially made it a little faster for us to build, what's the benefit to the user? What's the benefit of this design? I hope this is relevant to all of you, particularly if you're building your own Kubernetes platforms.

I think the first benefit is that it enables really granular pod sizes. Because we're bin-packing pods onto machines, we don't have to shoehorn the pods into VM sizes. I covered that earlier, but you can basically create a 21.25-core pod and just slot it in there; we'll run that just fine. And then you can add a quarter of a core, and another one core, whatever you want to do. It's very, very flexible, and I think that matches what users want.

Another nice thing about keeping the node object, or the node scheduling concept, in this design is that things like pod affinity and anti-affinity continue to work. These are important concepts that come from the Kubernetes API, and if you remember my original proposal, I really wanted this to look like, and be, a fully capable Kubernetes platform. Well, it's not fully capable if you eliminate things like pod affinity and anti-affinity. Pod affinity might be used, for example, if you have a front-end pod and a back-end pod, and you want to say: I want these pods to always be together on the node. It's hard to do that if you don't have nodes; in a fully nodeless system, you can't do it. It works quite fine with our design, and we will ensure that constraint is satisfied. Similar story with node affinity: we offer zonal affinity, so if you have a zonal resource and you want your pods in that same zone, you can use that.

One slight drawback of this system is that occasionally you might want to separate workloads; in a one-pod-per-node system you never have to, because they're always separated. For that, though, you can just use the Kubernetes constructs of tolerations and node selectors. Again, the Kubernetes API already has the language, the syntax, to describe these things, and we can honour that in a managed product that still has nodes. Pod topology spread constraints work too.

And the last one here is DaemonSets. DaemonSets are often overlooked, I think, when it comes to fully managed platforms, but they're actually really important. DaemonSets don't make much sense if you only have one pod per node, because the whole point is that you want to run an agent on the node. The good thing is that with this design you can still have a DaemonSet, because there are still nodes. With one catch: DaemonSets are typically used to actually modify the node. So how does that work in a system that, on one hand, has nodes, so you can theoretically run a DaemonSet, but on the other hand limits some administrative functionality, which means some DaemonSet use cases no longer apply? The compromise we reached is that we looked at well-known products and solutions out there in the community and commercially available, and we decided to allow those specific solutions so they could continue to work on Autopilot.
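To give a feel for the shape of DaemonSet that keeps working, here's a hedged sketch of a log scraper that stays within the policies I listed earlier, using the read-only /var/log mount. The agent image and names are placeholders:

```yaml
# Sketch of a log-scraping DaemonSet within the allowed policy set:
# hostPath limited to /var/log, mounted read-only, no privileged mode.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-scraper                    # hypothetical name
spec:
  selector:
    matchLabels:
      app: log-scraper
  template:
    metadata:
      labels:
        app: log-scraper
    spec:
      containers:
      - name: agent
        image: example.com/log-agent:1.0   # hypothetical agent image
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```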
So many security, logging and monitoring solutions that are commercially available, like all the ones you see out there on the booth floor: most of those still work, and if they don't, the vendors can probably come and chat to me and we can make them work. The reason we could do that is, as I said earlier, that the security boundary for this product remains the VM. Even though we allow certain partner workloads to have elevated privileges, the security boundary stays the same; it's still the VM. This is not a multi-tenant product, so we didn't have that concern. The main design concern when we were building this was making sure those nodes remain supportable. We didn't want to just open the floodgates and let all kinds of node modifications happen, but if it's a known set of professionally supported software, we felt that was safe enough to offer from a supportability point of view.

So it still allows DaemonSets, and I think that's important, because if you have a workload that you're running on-prem, or on another cloud provider, or on GKE Standard, and you want to migrate it around, you probably have, in fact most of our customers have, at least one DaemonSet workload they just want to run. If we didn't offer this, you would essentially have to take that functionality, whether it's security or logging or whatever, and add it to every single pod, which I think is a really big burden on developers. So that's actually one of the really overlooked benefits, I think, of still supporting nodes and still offering a multi-pod-per-node system.

Other benefits, and these mostly help us as we develop the product further, come from the fact that the infrastructure is very similar to GKE, so a lot of things just work. One really good example is StatefulSets. Right out of the gate, on day one, we were able to support persistent volume claims using block storage resources, which means you can run MariaDB, Redis, any other stateful workload; it just works, because it's the same infrastructure. We didn't have to reinvent the wheel. When we look at adding features, things like new machine types, if you follow the Google Cloud naming pattern, like N2 or C2, the compute-optimized machines, stuff like that is going to be a lot easier for us to add, because we chose this model that shares the same infrastructure. Hardware like GPUs and local SSD: again, because these are just regular VMs under the hood, it's a lot simpler for the team to add support for those features.

And finally, this design makes it possible to offer burstable-class pods. I mentioned this right at the very beginning as one of my three objectives: the ability for pods to utilize unused capacity in the cluster. Why do I think this is important? If you have three or four pods all running on a node, it's likely that at some point some of them are completely idle. Let's say it's a web-serving application; you may have three pods that are completely idle. Now, if a request comes in to one of those pods, you ideally don't want to constrain it to just the resources that that pod requested.
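In plain Kubernetes terms, that's the Burstable QoS class: requests lower than limits, sketched below with illustrative numbers. To be clear, as I'll say in a moment, we don't offer this on Autopilot yet:

```yaml
# Classic Kubernetes Burstable QoS: the pod is guaranteed its request,
# but may opportunistically use CPU up to its limit when capacity is free.
apiVersion: v1
kind: Pod
metadata:
  name: bursty-web       # hypothetical name
spec:
  containers:
  - name: web
    image: nginx
    resources:
      requests:
        cpu: 500m        # the guaranteed (and, in this model, paid-for) share
        memory: 512Mi
      limits:
        cpu: "2"         # opportunistic headroom
        memory: 512Mi    # memory kept equal; memory bursting is riskier
```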
Now, certainly that pod needs to be able to handle that request within whatever SLO you have using the resources it requested, but wouldn't it be nice if, opportunistically, it could just burst, use the spare capacity, and serve that request a little bit faster, delight the user a little bit more? I feel like that's a really important feature, and it is possible with this design. In fact, it's really only possible with this design, because if you don't have multiple pods on the node, if you don't have the concept of the node, then you can't really pool that capacity for bursting. We don't actually offer this yet in the product we built, but it's something we're looking at, and I'd like to share a quick sketch of how it might work.

If you look at guaranteed-class pods today, the CPU requests equal the limits, which means there's no bursting. Traditionally in Kubernetes, you could set a much higher limit than the request and thus create a burstable pod that can scale up when there are extra resources. The problem with that model for us: take the example of three pods that together have requested five and a half cores and are running on an eight-core machine. In the traditional Kubernetes model, each of those pods could use up to eight cores, which could be a bit of a problem for us. It might allow some gaming of the system, where you could try to convince Autopilot to create an eight-core node, run a one-core workload on it, try to prevent other pods from landing there using some technique, which is potentially possible, and thus get something like 8x your resources. That would be a problem for us. So the ideal design would be: what if the user could burst within the paid-for capacity on the node at any given time? If we clamped the bursting, in this case, to five and a half cores, then we would be giving you the capacity you're paying for, which would be ideal. The only problem is that turns out to be a bit hard to do with the way the Completely Fair Scheduler works in Linux today. So one idea we're actively looking at is to round up to the nearest integer number of cores, by just turning off the unused cores and allowing full bursting within, in this case, six cores. That's just an idea we have; stay tuned.

What about the downsides? I did mention there would be pros and cons. One downside, I believe, of the design we came up with is the potential for allocatable inefficiency. If a user is creating and deleting a lot of pods of all different shapes and sizes, you can end up with very underutilized nodes, and that requires the team to build additional features, like defragmentation, to correct it. So that's a bit of extra work we end up taking on; it doesn't really affect the user, it's more a problem for the infrastructure platform to solve. The other potential issue, obviously, is that by running two pods on the one node you can potentially have resource contention, although we do have a way to solve that for users: they can still separate the workloads when needed. And there are a couple of downsides to using a multi-single-tenant platform, like we do, as opposed to a fully multi-tenant platform. One is that it's hard for us to add hot standby capacity.
One of the really nice benefits of a multi-tenant system is that you typically have a massive resource pool shared by everyone, so individuals can scale up and down very quickly. We don't have that ability with this design, so that is one drawback: adding a new pod, if it needs a new node, can take between 60 and 80 seconds. And there's a little bit of greater operational complexity on the platform side; any time you operate a multi-single-tenant system, there's a little extra ops complexity. But again, most of these downsides are kind of just a burden on us. They make a couple of problems a little harder to solve, but hopefully they're solvable.

Okay, so in summary, and this is the takeaway I'd like to leave you with: is this nodeless, and does that even matter? When I started this project back in 2018, nodeless was kind of synonymous with fully managed Kubernetes, and I guess the point I'm trying to make is that it's not a good synonym. I maintain that this design is operationally nodeless; it's nodeless in the sense that you don't care about the nodes, you don't have to think about them. But I do believe there is a benefit, as I've hopefully outlined, to having nodes exist as a scheduling concept when needed and when relevant.

So that was our journey. I hope it was useful, and that it's of interest to learn how the team went about building this thing. Maybe it can inspire you as you build your own products and services with Kubernetes. And with that, I'd love to take any questions. We have a couple of minutes, and I'd be happy to continue the debate on Twitter, over a beer, or however you want to do it. There are a couple of microphones. If anyone has a question, let me know. Thank you. Question? Can we have the mic go live, please? Testing. Okay, lovely.

Hi, I'm the moderator. I'll be doing questions online if there aren't any, but there aren't any at this point. I just wanted to let you know that we have some time until our scheduled end time, so we'll do a couple of questions; we can go just a few minutes over. Okay, sounds great. Please, take it away.

Well, I hope you can hear me. The company I work with is subject to a lot of audits, and they require us to split nodes into different subnets in order to segregate them at a network level. They also require us to install certain software and binaries on these nodes; basically, we have to be in control of the machine image that we use to spin up the nodes. I wanted to ask if you are capable of doing both of these things, actually.

I don't know if the subnet separation is possible; that's something I'd probably have to take offline and do a deep dive with you. With the security components that you're running, I would like to think that it is possible. We do work with a lot of security partners to make sure their solutions work, so unless it's a homegrown, home-built container, I would say the answer is probably yes: it either works or can be made to work with this system. The other thing, by the way, is that as you go to these auditors, hopefully you can position this, and we would help you, as a more secure platform to begin with, because things like SSH are completely eliminated from the nodes as well.

I don't know about that, because they were pretty reluctant to allow us to go to the cloud in the first place.
I'm not really sure how cool they will be with that. But then the other question that pops up is: is software like AV, for example, enabled through DaemonSets? Is that the way you do it?

Which software, sorry?

For example, antivirus software.

Right, yes, you would install that with a DaemonSet. Provided the DaemonSet uses privileged access, which as a virus scanner it probably does, it would need to be specifically allowed by my team, but we've already done about, I think, eight or nine solutions, and there's room to grow that. So yes, it would be through a DaemonSet.

All right, thank you. And apart from all the good things and benefits of GKE Autopilot, and we are active users, happy users, of GKE Autopilot, there are the restrictions: we've come up against not being able to deploy things like secret store and these kinds of drivers. Do you have any plan to overcome the issues that these kinds of designs implicitly impose? Because we want to use Vault, for example, for secrets, and we weren't able to do that using standard CSI drivers, instead creating direct connections to those Vault clusters and so on.

Okay. I believe Vault might actually work now, because I know I've tested it and I think I got it working, unless you're using a different configuration.

Your own, let's say, the Google Cloud Platform GitHub repository has a secret store driver, and there's a limitation because hostPath and privileged access are actively being used.

Yeah, I think when it comes to these restrictions, we basically have two options. If it's a kind of partner, a well-known container, a well-known workload, we can potentially allowlist it. The other option, if it's a technique we have to enable, is to essentially productize it. So if you need a driver, it might be the case that we just have to offer the driver as a feature, where you can simply turn that driver on. We should connect and continue this conversation, but I do believe Vault should actually work. I know when we first launched the product, we didn't offer mutating webhook support, which broke about half the types of workloads, like Vault and things like that, which need to mutate workloads. We did add that recently, so that was a restriction that was never really designed to be in the product; it was kind of collateral damage, and we fixed it. So yeah, I do believe Vault in particular should actually work, but we should follow up. Drop me a direct message. Please, please.

All right. Thank you. Cheers.

All right. Last question, I think, then we might have to wrap.

Yeah, hi, quick one. I saw a couple of times the restriction on the CPU increment being 0.25. Is that on purpose? And if so, why?

Good question. It's a decision I think we should actually revisit. The plan was to start a little conservative, I guess, because it's much easier to relax these restrictions over time. And I believe the theory was, and I was part of the decision, and I'm still trying to remember why we actually did it: I think it was just so that when we pack the pods onto the nodes, if they're known kinds of sizes, like the little Tetris diagram, we can slot them in a little better. In hindsight, I'm not convinced it's actually needed, to be honest. So yeah, we might take another look at that.

Yeah, I mean, would you like to just have anything in between? We probably need a minimum, but yeah.

All right. That was a thumbs up for the people online. Great.
Well, great questions. Thanks a lot. Like I said, let's keep the conversation going too. Thank you.