Thanks for joining our talk on how Mercedes-Benz is securing 900 Kubernetes clusters without Pod Security Policies. My name is Tobias Giese. I'm a software engineer working for Mercedes-Benz Tech Innovation since 2017. I'm a Certified Kubernetes Security Specialist and I've been involved in Kubernetes since, I think, 1.7 or so. I'm a former maintainer of the Cluster API provider for OpenStack, I love listening to and collecting records, and I also love optimizing my home office desk setup.

Yeah, my name is Tjark Rasche. I'm also working as a software engineer at Mercedes-Benz Tech Innovation. I've been working with Kubernetes since about version 1.13, I'm not really sure. I'm very active in the local Kubernetes community — for example, I'm the initiator of the Kubernetes meetup in our beautiful Swabian hometown of Ulm, and a lot of you are also here. And I love making and teaching music, especially playing the drums.

So we work for a company called Mercedes-Benz Tech Innovation, which is a 100% subsidiary and strategic partner of Mercedes-Benz. Our company has been developing software for Mercedes-Benz for 25 years. We have around 400 employees who work in different business streams — they're called customer, car, sales, and after-sales. And of course, from our point of view, the most important stream is technology and security, where colleagues provide cybersecurity services for all of Mercedes-Benz and where we, the platform engineering folks, provide cloud services and infrastructure for all of Mercedes-Benz.

Our team runs the main managed Kubernetes platform for all of Mercedes-Benz. We run roughly 1,000 Kubernetes clusters. We cannot give the exact number at any given moment because it's a fully self-service platform — every engineering team at Mercedes-Benz can provision or deprovision a cluster at any time — but it's around 1,000. They run in eight different zones, that is, data centers located in two geographic regions, one in Germany and one in the US, and they run on-premises and in the public cloud; we provide both.

Before we can go into the details of our security setup, we first have to explain what managed Kubernetes actually means at Mercedes-Benz. Our engineering teams don't have to think about Kubernetes operations at all. That means they don't have any access to the nodes — no SSHing into the nodes — but they also don't have to think about things like which operating system the node is running or what the network infrastructure looks like. The second, maybe even more important aspect is that we have a very strict security lockdown on all of our clusters, so our customers have to think less about Kubernetes security. For example, no engineering team can start root containers on our platform. They're not allowed to mount hostPath volumes. And — which is a bit more tricky to implement — we completely block system namespaces, so no engineering team but us can access, change, or edit any resources within kube-system, for example. Yeah. And how we actually implemented that before our migration to validating admission policies — that's Toby's part now.

Yeah, thanks. Okay, so let's talk a bit about the status quo of Kubernetes 1.24, which in our case was one year ago. We used PSPs, or Pod Security Policies, to enforce pod security in our Kubernetes clusters. We're also using Open Policy Agent.
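For those who never worked with them, a Pod Security Policy enforcing that kind of lockdown looked roughly like the following — a hedged sketch, not our actual policy:

# Illustrative only - not the real Mercedes-Benz policy.
# PodSecurityPolicy (policy/v1beta1) was removed in Kubernetes 1.25.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false                  # no privileged containers
  allowPrivilegeEscalation: false    # maps to no_new_privs on the node
  runAsUser:
    rule: MustRunAsNonRoot           # no root containers
  hostNetwork: false
  hostPID: false
  hostIPC: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                           # note: hostPath is deliberately missing
    - configMap
    - secret
    - emptyDir
    - projected
    - downwardAPI
    - persistentVolumeClaim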
The Open Policy Agent runs as a static pod manifest, because it has to run completely independently: we reference it via kube-apiserver parameters, since we're using the authorization webhook. It also handles validating webhooks and, like I said, the authorization webhook.

Let's take a deeper look at the authorization webhook, or more precisely at the flow of an API server request. Here you can see the complete workflow. A user submits a YAML manifest or a GET request. First, authentication and the authorization webhook come into play. Here we can handle, for example, denying impersonation, or denying the deletion of nodes, and so on. The next step is the mutating admission controllers. Here we can default or update things. The next step is the object schema validation. This must come after the mutation, because Kubernetes checks whether the schema is valid. And last but not least, the validating admission controller webhooks. Here we can validate whether the manifest is acceptable or not, whether we want to allow or deny it. I will talk about this later, but we are using validating admission policies, and they run in this same step. Finally, the object is stored in etcd and we're done.

Okay, so let's give you a quick overview of the complete journey of our replacement of Pod Security Policies. First, like I already said, we were using Pod Security Policies. As the next step, we tried Kyverno, Gatekeeper, and other policy engines. But this was not really feasible because we had performance issues. We did benchmarking — we will see this later — and we had response times of up to 11 seconds, which is a complete no-go. Last but not least, validating admission policies started to evolve with Kubernetes 1.26, and that's where we are now.

Okay, so — Tjark? Yes, sorry. You may have noticed there's one thing missing on the slides, and that's Pod Security Standards. So why didn't we mention them here? For those of you closely following the development of policy enforcement and pod security in Kubernetes, you may ask yourself: wait, Pod Security Policies were removed after 1.24, but they added Pod Security Admission and the feature of Pod Security Standards. And to be totally clear, Pod Security Standards are a really great and well-thought-out feature. Basically, you categorize your workloads — via namespace labels — into one of the available security standards, and then Kubernetes automatically enforces the same default security practices on every pod. That is pretty great, because you don't have to think about it at all. People who really know their stuff, the core maintainers, will always keep the Pod Security Standards up to date. So no matter what API or feature changes happen in Kubernetes, the standard will always be current, and you don't really have to think about pod security at all. And while this opinionated approach is very good for that reason, it's also bad for us for the very same reason: it is very opinionated. It goes down to very low-level details — for example, down on the capability level there are things we want to allow which the Pod Security Standards would block. A very concrete example is the NET_ADMIN capability. If you want to know more about that, my colleagues Toby and Mario — he's somewhere in the audience — once wrote an article on Medium about why we want to allow it. And we also have a few syscalls we want to allow, and stuff like that. So we're very detailed.
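To show how coarse-grained that alternative is: enforcing a Pod Security Standard is just a matter of labeling a namespace. A minimal sketch, with a made-up namespace name:

# Illustrative namespace using Pod Security Admission.
# With these labels, the built-in admission controller enforces the
# "restricted" standard on every pod created in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: team-example
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted

And that's exactly the opinionated part: you pick baseline or restricted wholesale — you don't get to carve out NET_ADMIN or individual syscalls.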
So let's talk real quick about what we would actually need from a custom policy solution. First of all — and I think this is already quite well explained — we need a very flexible way to define policies for pods. It would also be cool if we could define policies for other resources as well, for Deployments, CronJobs, stuff like that, but that's purely a user experience concern. Another thing that's very important for us is something we call cluster pollution. We're a very large organization; there are several thousand engineers working with our Kubernetes clusters, so there are several thousand opinions on which tools are best, and we want to enable customers to use the tools they want. If we implemented our custom policy solution with tools that need, for example, custom resources or other cluster-scoped resources, we would hinder the consumers of the clusters — the engineering teams — from using the same tool themselves. So we really want to stay out of the customers' space there. And last but not least, it's very important for us — but actually for everyone who wants to run really secure pods — to also be able to mutate resources. That sounds a bit weird at first, but the only reason we need it is that Kubernetes tends to have quite insecure defaults for some details of pod security. For example, most of you will probably know the allowPrivilegeEscalation flag on a pod. If it is not set, Kubernetes defaults it to true, which means the no_new_privs flag on the Linux level stays off. To be secure by default, we want allowPrivilegeEscalation to be false. That's a very concrete example of why we want to mutate when people don't set flags like that at all — which most users don't do, because most users who just want to write a deployment don't think about that stuff at all. Which is fine; that's why we, the platform team, are here.

Okay, cool. So now let's take a look at the main open source tools in that field, the ones we looked at and could use. The two big players in that space are Open Policy Agent and Kyverno. We already use Open Policy Agent as a static pod on every node, as Toby already explained, so that seemed like the logical choice for us. But a very big drawback of Open Policy Agent is that it's very, very complex to implement policies with it. It comes with its own full-blown policy language called Rego. And we're quite a small team — we're only six people running the clusters themselves — so we'd really like to introduce as little new technology, or technology we're not very familiar with, as possible. Also, used the way we already use it, Open Policy Agent is not really Kubernetes-native, because if we wanted to change policies, we'd have to change them somewhere on the node; we cannot do it via the API server. You can use Open Policy Agent with Gatekeeper — then it is Kubernetes-native and you can manage all policies via the API server — but in that case it pollutes the customer cluster, as I explained earlier. Yeah, the second really big player is Kyverno. It also checks almost all the boxes. You can mutate, you can write policies, and you can write them in a very Kubernetes-y way, is what I'd say — everyone who is experienced in writing Kubernetes manifests will be able to write a Kyverno policy in quite a short amount of time. It is Kubernetes-native.
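To give a feel for that Kubernetes-y style, here is a rough sketch of what a Kyverno policy looks like — hedged, modeled on Kyverno's public pod-security samples, not one of our actual policies:

# Illustrative Kyverno policy: reject pods that mount hostPath volumes.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-path
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: host-path
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "hostPath volumes are forbidden on this platform."
        pattern:
          spec:
            # '=(volumes)' means: if volumes exist, each entry must match,
            # and 'X(hostPath): "null"' means hostPath must not be set.
            =(volumes):
              - X(hostPath): "null"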
All policies can be managed via custom resources and via the API server. But it obviously needs custom resource definitions, so it would hinder users from using Kyverno themselves. But that whole pollution thing — it's not a hill we're willing to die on; it's just something we really strive for. So in our first iteration we actually settled on Kyverno.

So let's talk about what our experience with policy enforcement in Kyverno was like. It was actually a pretty great user experience. We were able to achieve really good readability and maintainability. It just reads like the normal Kubernetes manifests all of us know and work with daily. And we actually had all our policies — which is quite a large and complex set — implemented in a few days. We had to work a bit on our end-to-end tests, because they were quite Pod-Security-Policy-specific, but we had them passing a few days later as well. And at that point we thought: cool, goal reached, we're ready to go, let's go into canary deployment. Toby, what happened then?

Yeah, thanks, Tjark. So the first days of our canary deployments looked like this fellow on the slide — not really cool. It was pretty disappointing. For example, we have a canary customer who is using a development cluster and deploying a lot of controllers and operators into it, and these controllers are updating their pods every 10 seconds — they update annotations and labels. Because of this, and also because of a mistake on our side, Kyverno also enforced and mutated things during every pod update. That's not necessary, because nothing relevant can change on an existing pod — the spec is immutable, apart from the image. But yeah, that one was our mistake. Still, we had to benchmark the Kyverno controller somehow, so we had to stress-test it and take care of this.

So we started benchmarking. How did we do it? We're using k6 from Grafana Labs, and to be comparable to a real cluster we used kind and added a maximum quota of two CPUs, because our smallest control plane nodes also have two CPUs — so in that sense we're roughly equivalent. We used 100 virtual users as a parameter for k6 and 1,000 iterations, which means that during the stress test we're doing 100,000 API requests against the kube-apiserver. For each API request we used a simple, valid pod manifest without any special specifications — just an image, a name, a security context, and that's it. And we used dry-run for this.

Okay, so let's take a look at the benchmarks. Yeah, and this is pretty awful. Pod Security Policies responded under the stress load with a maximum response time of 0.4 seconds, and Kyverno is at 11 to 12 seconds. The median response time is at 4 seconds, and the minimum response time is longer than the maximum response time of Pod Security Policies. This is not feasible for our fleet — if you keep green IT principles in mind, this scales nowhere. So we had to look at alternatives.

So we went back to the drawing board and created a battle plan. We had three little teams, and each team did its own thing. One team worked on the Open Policy Agent implementation — it's already deployed, there's an existing setup, but someone had to write the policies for it to replace the Pod Security Policies. Next, we had a team to write a custom controller. We can do this because we have enough knowledge to build controllers.
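Such a custom controller would plug into the API server as an ordinary admission webhook. A hedged sketch of the registration object — the name, namespace, and path are purely illustrative, not how our controller is actually wired up:

# Illustrative registration of a pod-mutating admission webhook.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: pod-defaulter            # hypothetical name
webhooks:
  - name: pod-defaulter.platform.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail          # a security webhook should fail closed
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: kube-system
        name: pod-defaulter
        path: /mutate
        port: 443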
We're using Kubebuilder for this, and it's pretty easy to do, so this team built the custom controller. And another team improved our benchmarking suite — for this talk we used that improved benchmarking suite, but somebody had to implement it first.

Okay, so while this is a really good plan, we were running out of time, because we wanted to upgrade from 1.24 to 1.25, but 1.26 was already around the corner, and we don't want to fall too far behind. So we created a plan for this as well: we can easily upgrade our control planes from 1.24 to 1.25, and once that's done, we upgrade the control planes to 1.26. And thanks to the Kubernetes version skew policy, we are then able to upgrade all of our worker nodes to 1.26 in one step — which is pretty cool. While reading the release notes of 1.26, we then found, for our case, the holy grail: validating admission policies started to evolve in Kubernetes with 1.26. And we can adopt this alpha feature with confidence, because we have a really good end-to-end test suite, so we can test it and roll it out to production as well.

Okay, cool. But what are validating admission policies? Validating admission policies allow us to add custom admission logic and tailor it to our needs. They're also lightning fast, thanks to an in-memory abstract syntax tree. And, like Tjark said before, there's no additional controller and no CRDs needed, because we don't want to pollute our customer clusters. Okay, cool — that's the really high-level view of what a validating admission policy technically is.

Yeah, so let's just build some validating admission policies, right? By the way, the image you can see — that's how my grandmother imagines my job. I'm an engineer at Mercedes, right? Okay, so let's talk about validating admission policies. Very basically, what we want is: we have an input, the admission request; we want to apply some custom logic; and then we want to arrive at a policy decision. A cool feature which validating admission policies support — but which is optional — is that we can also define a custom resource as an additional input, so we can inject custom configuration values. The admission request, I guess, is pretty clear: it comes from the API server and just contains what the user actually wants to do.

Let's take a look at an example of a validating admission policy to see how we can configure the custom resource as input. This is a very, very basic validating admission policy, and if you have a detailed look, you can see the paramKind property — a custom resource is just referenced there. Our custom resource is called AdmissionException. We have lots of custom controllers running in all of our clusters as well, because we have cool cluster add-ons — it's basically my main job to develop them for our users. And sometimes these controllers need to do things which would normally not be allowed by our policies. So it's great for us to have the ability, per cluster, to manage a list of admission exceptions for our admission control. In our case, the custom resource is simply a list of service accounts and their namespaces. But if we have that configured, we also need a way to tell Kubernetes which custom resource to use as input for the validating admission policy. For that, Kubernetes also introduced a new resource called ValidatingAdmissionPolicyBinding.
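Putting the two pieces together, a hedged sketch of the pair — the AdmissionException CRD, its API group, and its field names are illustrative, not the real manifests from the talk:

# Hedged sketch only. API version shown is the v1 API (Kubernetes 1.30+);
# on 1.26-1.29 you would use v1alpha1/v1beta1 with slightly different fields.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-no-host-network
spec:
  failurePolicy: Fail
  paramKind:                                   # the optional custom parameter input
    apiVersion: platform.example.com/v1alpha1  # hypothetical CRD
    kind: AdmissionException
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: >-
        (has(object.spec.template.spec.serviceAccountName) &&
         object.spec.template.spec.serviceAccountName in params.spec.serviceAccounts)
        || !has(object.spec.template.spec.hostNetwork)
        || object.spec.template.spec.hostNetwork == false
      message: "hostNetwork is only allowed for excluded service accounts."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: demo-no-host-network-binding
spec:
  policyName: demo-no-host-network             # references the policy above
  validationActions: ["Deny"]
  paramRef:                                    # references the parameter object
    name: cluster-exceptions                   # a hypothetical AdmissionException
    parameterNotFoundAction: Deny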
And it's very similar to the other binding resource types in Kubernetes — you probably know ClusterRoleBinding or RoleBinding, very similar stuff. You have a paramRef, which references the parameter — in our case the AdmissionException custom resource — by name. And you have the policyName, which references the corresponding policy by name.

Okay, so we have both of our inputs done; now we need to implement the custom logic. We can take another look at our validating admission policy here and see the validations field down there. In this case we only have one expression, but you could actually add as many as you like — for the simplicity of this talk, we only add one. And you can see there's a very simple expression written in the expression field, in a language introduced by Google which is called Common Expression Language, or CEL for short. What is CEL? CEL was developed by Google explicitly to be used in critical code paths, so it is guaranteed to be evaluated in linear time. That's possible because it is mutation-free and not Turing-complete — very computer-science-y terms that basically just mean you cannot do things like unbounded loops and you cannot mutate variables. What it means for us is that it's very fast, and that's very good — and it's guaranteed to be fast.

Yeah, and if we take a very simple example like this one, I think most, if not everybody, in this room can already tell what it's doing, because it's just an expression, very similar to other languages. We take object, which references the input from the admission request itself — in this case it would be a Deployment — and check whether the referenced service account name is in the list of excluded service accounts. And params is basically referencing the custom input from the CR.

Yeah, this looks nice and tidy, but I have to warn you: if you want to write complex validations with CEL, this is a more realistic example of what policies look like. Quite an unreadable mess, but we can look at a few details which are actually good examples of the downsides of validating admission policies as well. If you look at that part, you can see that we have to repeat ourselves a lot, because we wanted the user experience to not only validate pods for security reasons, but also to reject Deployments, for example, if they contain forbidden pods. So we have to write things like this list of resources we want to check, stuff like that. And another, way more important — because security-critical — detail: you have to think a lot about the details of the sub-resources in pods. In 1.25, a feature called ephemeral containers went stable. Those are additional containers running on your nodes, and if you forget to enforce policies on them, you might as well not use policies at all. There are also, quite new in the newer versions of Kubernetes, natively supported sidecar containers. And if you're writing in CEL, you have to write all of that out, one list after another. The gist of it is: we started writing this stuff manually, but quickly realized that it's not fun at all — and it's also very error-prone. We realized that because we thought we were done, but our end-to-end tests were still failing. So if you want to build something like this, you will need something to generate your code.
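To make that repetition concrete, here is a hedged sketch of what a single rule ends up looking like once every container list is covered — the check itself (forbidding privileged containers) is just an example, not our exact policy:

# Illustrative validations list for a pod-level ValidatingAdmissionPolicy.
# The same check has to be spelled out for every container list.
validations:
  - expression: >-
      object.spec.containers.all(c,
        !has(c.securityContext) || !has(c.securityContext.privileged) ||
        c.securityContext.privileged == false)
    message: "Privileged containers are not allowed."
  - expression: >-
      !has(object.spec.initContainers) || object.spec.initContainers.all(c,
        !has(c.securityContext) || !has(c.securityContext.privileged) ||
        c.securityContext.privileged == false)
    message: "Privileged init containers are not allowed."
    # Native sidecars are initContainers with restartPolicy: Always,
    # so this rule covers them too.
  - expression: >-
      !has(object.spec.ephemeralContainers) || object.spec.ephemeralContainers.all(c,
        !has(c.securityContext) || !has(c.securityContext.privileged) ||
        c.securityContext.privileged == false)
    message: "Privileged ephemeral containers are not allowed."

Multiply that by every field you care about and by Deployments, CronJobs, and so on, and the need for templating becomes obvious.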
In our case, it was just a quite simple Helm setup, because all we basically have to do is repeat the policy we want to enforce for ephemeral containers and the like. But this is highly dependent on the policies you want to enforce. Yeah. So we've got our custom logic implemented as well, in CEL, with some code generation. And now we're done, it seems, right? We thought that before. But now we're done, right? Well, there is a detail missing. What about it, Toby — can you tell us more?

Yeah, thanks. So we also have to think about mutation — the mutation of pod resources. The validation is working, but what about the mutation? There's a feature analogous to validating admission policies coming, hopefully in Kubernetes 1.31, called mutating admission policies, but it's not available yet. So we had to think of something different. If we go back to the drawing board, we see: okay, there was a team building the custom controller. That's pretty handy, because we can drop the validation part and keep only the mutation part. We just have to mutate allowPrivilegeEscalation to false and runAsNonRoot to true — that's it.

Okay, cool. With this in mind, we have a final setup that is pretty nice: we have validation at lightning speed via validating admission policies, and we have the mutation via this tiny custom controller, which is really, really tiny. And that's it. So, is it really faster than Kyverno? Yeah, I think it is. Our maximum response time is at 1.2 seconds, and the average response time is at 0.4 — 0.37, to be exact. In comparison to Kyverno, it is much faster.

Cool, so we are done, right? Yeah, I don't think so. By the way, this is my cat, Clyde — it's a real photo. He's also a Kubernetes engineer; he once committed a file to our repository, but that's another story. Okay, but why does he have this skeptical stare? We have the validating admission policies in 1.26 and Pod Security Policies in 1.24 — but what about 1.25? We don't have anything there. So let's go back one last time to the drawing board and take a look. We also had a team implementing the Open Policy Agent policies, so we can use those for 1.25 with the admission controller, and we can use the validating admission policies for 1.26. That's perfect, so we're ready to go. Okay, cool. By the way, our future plan is to use validating and mutating admission policies together, to remove the HTTP overhead and have everything built into Kubernetes. That's very cool.

Okay, then let's take a look at the final benchmark comparison. Here we have our validating admission policies, with PSP right below, and we can see they're even faster than Gatekeeper and Open Policy Agent, and much faster than Kyverno — which is really good in our case. That is really cool. But let's take a look at what has happened since then. We implemented all of this in the first two quarters of 2023, but Kyverno has improved its performance a lot since, especially thanks to our colleague Robin Afflebach, who contributed the k6 benchmarking to Kyverno. With that, they were able to test their implementation, they saw that there really was a downside, and they improved it a lot. And just to note: Kyverno can also generate validating admission policies now — you can add CEL, the Common Expression Language, to Kyverno ClusterPolicies.
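A hedged sketch of what that looks like in recent Kyverno versions — the policy content is illustrative, and the exact field layout may differ between releases:

# Illustrative Kyverno policy using CEL; Kyverno can translate rules like
# this into native ValidatingAdmissionPolicy objects.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-network-cel
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: host-network
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        cel:
          expressions:
            - expression: >-
                !has(object.spec.hostNetwork) || object.spec.hostNetwork == false
              message: "hostNetwork is not allowed on this platform."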
And it will create the validating admission policies for us, and we also get reporting for it — which is quite cool. So let's take a look at the Kyverno 1.12 benchmarks. It's really, really better now: we see an average response time of only 1.2 seconds and a minimum response time of 0.1, which is great. And this was only possible because our colleague contributed all of that to the Kyverno team. Cool. But last but not least, we have a few lessons learned, and Tjark will talk about them.

Yeah. What have we learned? First of all: benchmark your policies. We mean it — really, benchmark your policies. We would have saved a lot of work if we had thought of this from the beginning. Another very important thing we learned doing this is that good, truly tool-independent end-to-end tests are crucial if you want to iterate quickly and if you want to build fast but also fail fast. I mean, we basically had to implement the same thing three times — you could argue even four times — but it wasn't that much of a time investment, because we could always just build it and run the end-to-end tests: if they're red, it's wrong. And we could be very confident that all of our policy needs were covered by the code we developed. Another very important thing: if you build your own policy solution — no matter how you do it — you will have to take care of the nitty-gritty details of the sub-resources on pods, and you will also have to take care of new features as they are added. So you have to monitor very closely everything that's happening in Kubernetes development. I talked about ephemeral containers and sidecar containers before; maybe new sub-resources will be added in the future, and you'd have to think about those too. So yeah, in the end you can say we paid a price for early adoption. We really did. But we really don't mind — I mean, we're called Mercedes-Benz Tech Innovation, not Tech Trail Behind or something. And the most important learning is that a pure Kubernetes solution is not only feasible and possible, it's also quite maintainable. We basically just have a Helm chart rendering a few policies — I guess everyone in this room has done that before. It's no rocket science; it's just Kubernetes. So we're actually really, really happy with our final solution. And we'll be even happier once we can replace that tiny controller that's still running with mutating admission policies — then we'll be doing all of our validation and mutation completely within the API server, with no HTTP overhead at all.

And that's already it. Thank you very much for listening to our talk. If you have any questions, don't hesitate to ask, but please use the two mics provided so that the people watching remotely can also hear you. And I have to say this, it's very important: we're hiring. If you're interested in working with us, there's a QR code you can scan, and I'd love to become colleagues with — well, probably not all of you, but maybe some.

Hi, thanks for the talk. I'm not really familiar with the validating admission policies feature inside of Kubernetes, but as far as I understand, you can only validate stuff that's in the spec of the resource that's being admitted, right? Is that correct? That's correct, yeah. So you might be familiar with this: in Gatekeeper, there's a feature called external data provider.
So it allows you to basically hand off some validation to a component that can explore external data that's not in the Kubernetes resource. Do you have any use cases like that, where you want to validate something — maybe something to do with the image itself — and need to pass that off to something else? And do you think there are any shortcomings of validating admission policies related to that kind of thing?

There are definitely shortcomings in validating admission policies regarding that, but I wouldn't really call it a shortcoming, because they run in-memory within the API server, so they're very much focused on critical-call-path performance. Stuff like that is not possible there, and also will never be possible. But complex validation like that — we do that for a lot of things, just not for things that happen all the time whenever someone changes resources on your API server. So you can still do that the traditional way, with admission webhooks, custom controllers, however you like. You just probably don't want to do it for every single request reaching the API server, because if you run something like Argo CD — or, like some of our teams, very custom controllers — they tend to update resources every few seconds, and then you get the problem of huge overhead. Thank you.

Thank you for the talk, it was really interesting. I have a question regarding Kyverno's slowness. Was it based on your setup, the load on your cluster, and the complexity of the policies, or was the framework in general slow, no matter what kind of policy you use or what load you generate on your cluster? I think it's a mix of different things. The talk is going back a lot of slides here... So basically, we only used a kind cluster to reproduce our problem with the high load — this is exactly the slide. I think this answers everything, because we just used a kind cluster without anything in it, but we added our complexity, our policies, to it, for sure. But I also think that Kyverno is lacking performance even with a single policy. We haven't tested that, because we implemented everything we had done with PSPs — so maybe that's something we could try. I mean, our policies are quite complex, but the important thing is that for these comparisons we implemented the exact same rule set of policies, so it is comparable. It's very possible that if you have a less complex set of policies, Kyverno is totally fine with that. It's not rocket science, but we have quite detailed and complex policies, and the important thing is comparability: we implemented the exact same rule set on all of them, and we guarantee that thanks to our cool end-to-end tests. We also tested removing some specific policies from the Kyverno setup, but it didn't improve anything — it was still slow. Cool, thank you.

Yeah, thanks as well for the nice talk. I know the pain of getting rid of PSPs, and in the end we adopted Kyverno. What I'm really curious about is: how does your end-to-end test setup look? Could you outline that in a few sentences? Do you want to? I can. How does it look? So basically — go ahead — we are using the Ginkgo framework for end-to-end testing, and we are just testing pod creation with different variations: we add a root container, we test that the mutation is working, we test the volumes, and so on. So it's just simple pod creation, and that's it.
So we are not end-to-end testing the Deployment and DaemonSet validation, because testing pods covers that. So basically, the setup is specific to the policies — those end-to-end tests are specific to testing the policies. Basically, we have a set of rules we want to enforce, and we have test coverage for every rule we want to enforce, but that test coverage is not dependent on some error message from some tool or anything — it just checks whether the request gets admitted or not. Okay, thanks.

Okay, thank you so much for your presentation, that was amazing. I have just a quick question about policy exceptions. I'm very curious to know how you manage them, because with close to 900 clusters, I'm sure you need exceptions, and sometimes temporary exceptions and sometimes permanent ones. I'm very curious how you manage that — and thanks again. Yeah, there are two ways in which we use the policy or admission exceptions in our case. We have controllers reconciling stuff — we have custom add-ons for clusters, you can for example deploy Istio or Datadog with a single CR, stuff like that — and these controllers, if they need to add privileges for some specific thing, are able to manipulate these CRs, which are deployed in every cluster anyway, automatically. And the second use case is when we have customers whom we can trust to know what they're doing, or who have very specific needs; then we can also edit this manually, and we have tooling we call Snowflakes, so those exceptions are applied automatically on upgrades and so on. This can happen, but we never allow a customer to use root containers. Oh, of course not. But we have very few of those. Yeah, thank you so much for your answer. Perfect.

Okay, so on our side we are also using Kyverno, and I would like to ask: with Kyverno we can also do some kind of image validation, right? For example, you can restrict pods to certain registries and even to certain versions of some images. Is that still possible with this? Sure, of course. You can validate everything that is inside the spec of a pod, and you can validate whether the registry starts with a certain string — you can do that as well, of course. And with Kyverno it's also possible, whenever you create a policy exception, to say that you want to keep it alive only for, I don't know, X amount of time. That's not directly possible. I mean, you could probably create a scheduled job or something for that — but we don't want to say that Kyverno is bad; it just doesn't fit our needs. Yeah, no, sure, sure. Kyverno has a really good user experience; that's basically the trade-off, right? No, stuff like that is not possible, and it also won't become possible, because we're talking about very core API server features here — they're not going to build some scheduling into that.

Hello, thanks for the talk. I was just wondering whether any of the policies, or maybe the Helm chart you use to render all the sub-policies for the sub-resources, is open source somewhere, or just internal? No, we're not allowed to open source that — we asked. But maybe we can do another round, so maybe that's feasible somehow. If you need it, maybe we can open source it, but currently it's not possible. Okay, thank you. Yep. Good. Thanks. Thanks for coming to the talk.