Hello, everyone. Welcome to This Week in Cloud Native, where we dive into the code behind cloud native. I'm Paolo Suonis. I'm a CNCF ambassador, and every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. Join us every Wednesday at 3 p.m. This week we have my friends Jim and Shuting from Nirmata here with us to talk about security as code. This is an official live stream of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to chat or questions that would be in violation of the Code of Conduct; basically, please be respectful of all your fellow participants and presenters. With that, I'll hand it over to Jim and Shuting to kick off today's presentation. Nice to have you both.

Thank you, Paolo. Great to be here.

Thanks, Paolo. Thanks for having us.

Amazing. Jim and Shuting, you're here to talk about security as code. So tell us, what is it, and how can you do that?

Sure. Security, of course, is a very broad topic; there are many different facets to it. But specifically with Kubernetes and Kubernetes configuration management, as we all know, there's a lot of power in how Kubernetes does configuration management. With that power, though, also comes some complexity and the need to manage configurations correctly. There's also always this question about which parts of the configuration developers should be able to update and manage, which parts operators and cluster admins should manage and enforce, and how that balance should be coordinated. To solve this in Kubernetes, we believe that policy management is necessary to secure clusters and to manage clusters at scale. We want to talk about how Kyverno, which is a CNCF sandbox project, addresses this problem of policy management. In fact, while we're chatting, I can share my screen and pull up the Kyverno website, and we'll take a quick look at some of the basics of Kyverno itself. Let me switch quickly to the website.

One of the reasons we created Kyverno is to deal with this problem of managing declarative configurations: making sure the right settings are in place, and validating for best practices, security, and compliance, even things like pod security policies and how to make sure those settings are in place. We want to make that very simple and make sure that all clusters can use a toolkit which is simple to install, simple to manage in Kubernetes, and designed specifically for Kubernetes. With Kyverno, policies are written just like any other Kubernetes resource. They are a custom resource which can be managed much like you would manage a pod, a deployment, or anything else related to your Kubernetes cluster and its configuration.

Jim, I think that's an amazing project, especially with pod security policies being deprecated in future versions of Kubernetes. One question: how does Kyverno compare to OPA and Gatekeeper?

Great question. OPA, as probably most everybody knows, is Open Policy Agent. Gatekeeper is a sub-project of Open Policy Agent which specifically adapts OPA to Kubernetes. OPA is a general-purpose policy management solution. It expresses policies using a language called Rego. Using this language, you can essentially write code which controls aspects of your policy configurations.
Rego is optimized for JSON processing, and there's a lot of power and flexibility in that. But it is a new language to learn. It's something that was created outside of Kubernetes and then adapted to Kubernetes through the Gatekeeper project. It is certainly an option for policy management. However, what we found when working with customers and real-world scenarios was that Rego was very complex to customize and to manage at scale. The other thing we saw was that it's not just about validating configurations; it's important to be able to also mutate and generate configurations for policy enforcement. A good example of this is a multi-tenant cluster, or even different teams within the same enterprise sharing a cluster, where you want to be able to easily manage policies. So we wanted to make sure there was a solution which was very native to Kubernetes and which allowed us to write these policies in a simple manner.

Like Gatekeeper, Kyverno works as an admission controller, and Kubernetes of course is designed to be very extensible. Leveraging the admission control and webhook capabilities, Kyverno can receive every API request, but the way you manage the policy, and here I'll show a quick example, is extremely simple. This is a Kyverno policy which enforces at admission control that a particular label is present. You don't need to learn a programming language or have any other external tooling to maintain these types of policies. In fact, you can use your existing CI/CD pipelines, you can use GitOps, and that's what we mean by policy as code. The same best practices you use to manage your Kubernetes resources with tools like Kustomize or kubectl, you can now leverage for policies without having to bring in anything external or add other complexity.

A quick question, interesting. You talk about GitOps, that you can apply GitOps. Does this kind of policy engine violate the principles of GitOps or not?

Yeah, that's an interesting question. The thing to think about is: if Git is the source of truth, and now you have a policy which can mutate or generate or change configuration, what happens? Is that violating some of the GitOps principles, because there are changes that happen at admission control? You want Git to be that source of truth, where you can go back and reproduce things for any particular version of your deployments or your cluster configurations. As long as that's repeatable and the operations are idempotent, in the sense that you can always go back and reproduce them, then with Kyverno and the whole policy-as-code lifecycle it is very much in line with GitOps. In fact, one thing I can quickly show, and this is interesting: I'll just Google "Flux and Kyverno". Flux, as a lot of folks know, is a very popular GitOps toolkit, right? And in Flux v2, to deal with multi-tenancy challenges, they're actually leveraging Kyverno and Kyverno policies to solve what happens when different tenants want to manage their own applications. In the example of the Kyverno policy here, let me see: it's actually installing Kyverno using GitOps principles, and then it's pulling in some default policies for Kyverno. So if we go back to the flux2 repo, I can show what policies are available here.
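For reference, the simple label-check policy Jim showed a moment ago, before the GitOps discussion, looks something like this. This is a sketch based on Kyverno's quick-start documentation; the label name and message are illustrative, not the exact file shown on screen:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  # "enforce" blocks non-compliant resources at admission; "audit" only reports them
  validationFailureAction: enforce
  rules:
    - name: check-for-app-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "The label `app.kubernetes.io/name` is required."
        pattern:
          metadata:
            labels:
              app.kubernetes.io/name: "?*"   # wildcard: any non-empty value
```

Because this is an ordinary custom resource, it can be applied with kubectl, versioned in Git, and delivered through the same CI/CD or GitOps pipeline as any other manifest.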
This, for example, is a Kyverno policy included as part of Flux, which checks a custom resource, a Kustomization or a HelmRelease, and makes sure that a service account is specified, right? So again, if you think about how policies can be used as resources, you can apply to a policy all the same GitOps best practices that you would apply to any normal resource. So one thing we can do, Paolo, is that Shuting and I have prepared a number of different demos, so we can demonstrate some of this in action too. You'll see, for example, for pod security and other aspects, how Kyverno can help.

All right, let's show the code, please. Show the code.

Yeah. Let me stop my share, and Shuting will pull up her desktop and we'll dive right in. We can start with pod security and then go into some multi-tenancy and some other fun policies.

Yeah, sure. Let me share my screen. Okay. So, everyone, this is Shuting, and to start off I want to show you Kyverno's mutation capability. As Jim mentioned before, I'll show it as part of a CI/CD process, where I apply a mutation policy to a deployment and we see how the mutation is patched into your resource. Here is a simple policy that injects the Vault agent as a sidecar container, and it also mounts volumes to the containers, to the sidecar container as well as the init container. It does nothing but patch those containers into the deployment. Here I'm using the match clause to match on the Deployment kind, and I'm checking for a particular annotation. The bracket here, the conditional anchor, means: if this annotation is defined in the deployment, then the policy applies; otherwise it's skipped. Okay, I think that's fairly straightforward to understand. Let me switch to my cluster. Just a sec.

Could you increase your screen size a little bit?

Sure. Does this look good? Is it zoomed enough, or do you need a bit more?

Okay, this looks good, right? Cool. Looks good. Thank you.

Yeah. I've downloaded this sidecar-injection mutation policy to my local directory, and I have an example deployment here which has the annotation defined and only a single BusyBox container inside. From the CLI, what I expect is that the Kyverno CLI will mutate my deployment with those sidecar containers and also mount the volumes into the containers. Let me pull up the terminal. I already have the CLI installed, but just to give you an idea, we have documentation on how to install the Kyverno CLI: you can download the binary directly from the repo, or you can install it with Krew, and we made a Kyverno kubectl plugin so that you can use it along with the kubectl command. Okay, so let's check. I have the policy here, this sidecar-injection policy, and the deployment. What I'm going to do now is use kubectl kyverno; we have this apply command which lets you apply the sidecar-injection policy, and if you specify -r, it takes the resource manifest, which here is the deployment. Once I run this command, it outputs the expected mutated resources.
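The mutation policy being applied here could be sketched roughly as follows. The container images, annotation key, and mount path are illustrative assumptions; the real demo file isn't reproduced in the transcript:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: inject-vault-sidecar
spec:
  rules:
    - name: inject-sidecar-and-init
      match:
        resources:
          kinds:
            - Deployment
      mutate:
        patchStrategicMerge:
          spec:
            template:
              metadata:
                annotations:
                  # conditional anchor: the rule applies only if this
                  # annotation is already present on the pod template
                  (vault.hashicorp.com/agent-inject): "true"
              spec:
                initContainers:
                  - name: vault-agent-init     # retrieves the secret before the app starts
                    image: vault:1.6.1         # illustrative image tag
                    volumeMounts:
                      - name: vault-secrets
                        mountPath: /vault/secrets
                containers:
                  - name: vault-agent          # sidecar that keeps the secret refreshed
                    image: vault:1.6.1
                    volumeMounts:
                      - name: vault-secrets
                        mountPath: /vault/secrets
                volumes:
                  - name: vault-secrets
                    emptyDir: {}
```

The offline test Shuting runs next would then be along the lines of `kubectl kyverno apply inject-vault-sidecar.yaml -r deployment.yaml`, which prints the mutated deployment without touching the cluster.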
Here I have the deployment in the output, and this is the original container, but you can see the Vault agent is injected, the init container is inserted as well, and there are volumes mounted into the containers. Right, so that's the mutation capability, and I'm doing it in the CLI, but I also have Kyverno running here: kubectl get pods shows the Kyverno pod. If you apply this policy to the cluster, it does the exact same thing, which is to mutate your deployment. So let's take a look at how the policy works in the cluster. First I create the policy, and then if you do kubectl get clusterpolicy, or use the short name cpol, you see there's one policy created, right? Then if I kubectl apply the deployment and get that deployment, you will see there's an init container and also another container injected into the pod. So this is mutation, and how the CLI works with Kyverno.

Next, I want to show you Kyverno's validation capability. As Jim mentioned earlier, we have a set of policies that restrict your pod security context. As we know, pod security policies are going to be deprecated in Kubernetes 1.21, I think, and will eventually be removed. As an alternative, Kyverno provides a way to validate your pod security configurations. On our website, we have this set of pod security policies defined. There are the default policies and also the restricted ones, which enforce best practices for your pod configurations. Today I'm only going to show you the default security policies, and these are all validation policies. Let's take an example of a validate policy. Similarly, I have a ClusterPolicy defined, and I have the validationFailureAction set to audit, but later I'll use Kustomize to change it to enforce, which blocks resource creation. In the rule, I have a validate block defined here which says: whenever hostPID, hostIPC, or hostNetwork is not set to false, block the resource creation, right? We also have a bunch of other policies to disallow added capabilities, disallow hostPath, host ports, and all the rest. Since I already have Kyverno deployed, what I'm going to do is use Kustomize along with kubectl to apply the entire policy set. Let's go back to the terminal to check the policies. I'll do kubectl get cpol; this is the only policy I have so far. And let's use that command to create the pod security policies. Okay, once they're created, let's check again with kubectl get cpol. Now you can see I have this bunch of pod security policies deployed to my cluster. Okay, what would be interesting next?

Shuting, just a minute please. We have another question here that is important before you continue: what does the sidecar policy do, exactly?

Oh, the sidecar policy. Let me go back to the mutation policy. This just injects the Vault agent, the HashiCorp Vault agent. As you know, it runs as a second container to automatically retrieve a secret, and that secret can later be used by your running application. So here the init container is responsible for retrieving the secret, and I mount this volume at this path.
And then I create the volume so the secret can be used inside the Vault agent or in any of your running applications.

Yeah, so this is an example of injecting a sidecar, but it could be anything you might want; the sidecar could be your own container. Here we have used the Vault agent as an example, but there are several use cases we have seen, like certificate management, security, and monitoring, where sidecar containers are required. So one interesting thing is that you can quickly inject all of these, and you can customize these policies. Also, if you're using something like Fargate or any serverless platform where you cannot run standard services like logging and monitoring externally, as daemon sets, one interesting pattern is to use sidecars to perform those services for your main application itself. And again, that's where Kyverno policies can be very useful.

Great, great. One more question before continuing. The question from Christopher is: can I apply this cluster policy only if the namespace of the object has a specific label? We saw what you did with labels, but can you explain where that is, please?

Yeah, sure. As I mentioned before, you can match on an annotation with the anchor, but also in the match block, and I think this is not in this release, but with the recent changes we added, we have the namespaceSelector that you can use to match a policy. We also have preconditions, with conditions that can look up labels and match specific values if you want. So yes, that's possible.

Okay, please continue to show the code.

All right. Let me switch back to the pod security example. Now that I've installed all those policies, what would be interesting next is to see how the validate policies work with Kyverno, right? I want to test what happens if I create a pod. I came across this repo called Bad Pods. It provides a bunch of pod manifests that you can use to validate the security context of your pod configurations: pods with privileged security contexts, host network access, et cetera. Here I'm going to use one of the Bad Pods examples; as defined there, it basically allows everything in the pod manifest. So here is a pod with all those sensitive security context fields set. If we take a look at the pod manifest, you can see hostNetwork set to true, hostPID and hostIPC set to true, I'm using a hostPath as one of the volumes, and I'm running the container in privileged mode, right? By default, what I expect is that the Kyverno policies will block this pod creation, because it violates the policies for those security context settings. Let's take the raw manifest and do kubectl apply -f with it. Okay, let me zoom in a little bit. You can see the pod creation here is blocked by Kyverno, and these are the policies that are actually blocking the resource creation. Let's take a closer look: it says this pod was blocked due to the following policies, and the first one says the fields hostNetwork, hostIPC, and hostPID must not be set to true, right? So here it blocks the pod creation. Now let's use another example: what if my pod has nothing disallowed, where I don't have anything sensitive set in the security context? In this example, I have nothing sensitive set in the pod manifest.
And then let's apply this resource. As you can see here, I have a standalone pod which has nothing sensitive set. If I kubectl apply this pod, you see one pod is created, right? The resource creation is not blocked. Okay, this is the pod security example I have, but think about this, you may have another question: in most cases... yeah, go ahead.

Sorry, sorry, Shuting, please continue.

Okay. In most cases, you may not create a standalone pod, right? You may manage your pods with a pod controller, for example Deployments, DaemonSets, StatefulSets, et cetera. So you may start wondering what happens when you create the pod controllers. Typically, with pod security policies in Kubernetes, everything goes silent: if you create the pod controller, the pod creation will be blocked, but you won't get any alarm telling you that your deployment actually has no running pods, right? But with Kyverno, we reject the creation immediately. Let's first take an example, and then I'll explain what happens behind the scenes.

Shuting, just a minute. Christopher explains a little bit more about his question: he's not only asking about the namespaceSelector, but about matching all pods if their namespace has a label, like apply-pods set to true. Does that make sense? Did you catch the question?

Yeah, I think I get it. The label is set on the namespace, and then on resource creation, I want to fetch that label on the namespace to match the resource, right?

Yeah, so the kind here, the type, is a pod. The policy is written on pods, but it matches across all namespaces. So it only applies to pods in namespaces with a particular label.

Okay, I think so. The other question specifically asked if you can provide a brief differentiation between a cpol and a global network policy.

Yeah, I can help answer that, right? Network policies are a Kubernetes resource themselves; they are part of the Kubernetes API, and if you have multiple pods, network policies are useful for controlling traffic between those pods within your cluster, between namespaces, or even any ingress and egress traffic for those pods. Network policies require a CNI which can enforce that network firewalling, whether it's ingress or egress. So you would have to run a CNI like Calico or kube-router to be able to use network policies. The cpol that Shuting was showing is a Kyverno ClusterPolicy, a custom resource in Kubernetes: you have to install the Kyverno controller, and with that you can use Kyverno policies. So they're two different types of things. Kubernetes has native policy objects, like pod security policies and network policies, but if you need more, like here, where we were talking about pod security enforcement: to enable pod security policies, you have to use RBAC and roles, and that's very complex, because, as Shuting was just showing, typically it's a pod controller, like a deployment, which creates the pod, and that role is not associated with a person, it's associated with the pod controller. There are many problems like that, which is why pod security policies are being deprecated, and Kyverno gives you a more flexible way to implement some of these security boundaries that you will want in your clusters. Otherwise, you're exposed, as the Bad Pods repo shows very well.
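For reference, the host-namespaces rule doing the blocking in the Bad Pods demo can be sketched like this, following the published Kyverno pod security policies (shown with enforce; the demo starts in audit and switches with Kustomize):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-namespaces
spec:
  validationFailureAction: enforce
  rules:
    - name: validate-host-namespaces
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Sharing the host namespaces is disallowed. The fields spec.hostNetwork, spec.hostIPC, and spec.hostPID must be unset or set to `false`."
        pattern:
          spec:
            # "=()" anchors: if the field is present, it must equal false
            =(hostNetwork): false
            =(hostIPC): false
            =(hostPID): false
```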
Thank you, Jim. Shuting, could you please go back to your behind-the-scenes explanation? Let's check this, it's very interesting.

Yeah, sure, let's check what happens with Kyverno if I create the pod controller directly. Okay, I'm going to use a bad deployment here, which has the sensitive fields set. It's just a simple deployment which has exactly the same configuration as the bad pod I showed before, but this time I'm creating a deployment, not a standalone pod. Okay, back to the shell. If I kubectl apply the deployment, you will see the deployment creation is rejected immediately, saying that you cannot set those fields, right? What happens behind the scenes is that, as I showed you, all the policies are written for pods, but if you have Kyverno running in your cluster, then when you apply the policy, Kyverno automatically converts those pod rules into pod controller rules. Let's take an example here. Remember, in the initial policy I only have one rule, which applies to pods and disallows the use of hostPath, but when I create the policy, Kyverno converts it into rules for the pod controllers. Here it converts this rule to match DaemonSet, Deployment, Job, and StatefulSet, saying that you can't use hostPath in any of these resources, right? And we also have another auto-generated rule for CronJob; the spec looks somewhat different, but it still disallows hostPath. With this feature, resource creation for the pod controllers is blocked immediately, so you know what happened, and it allows you to confine your configurations further.

Thank you. Leonardo Murillo, my friend, asks if Kyverno can validate Docker image signing.

Yeah, great question, especially of course with all the focus now, after the SolarWinds hack, on supply chain security and managing supply chain integrity, right? There are two parts to think about here. Every image has a digest, which is an immutable hash created to represent the image. But first off, even before you think about the digest, you want to make sure that the images coming into your cluster are from a trusted registry. That's very simple to do: there's a Kyverno policy for this in the best practices, so if you go to that link and go to best practices, it shows how to validate that the registry is one you trust. The second thing after that is, for a particular image, you want to make sure that the image you're running, if you're using a name and a tag, like a repo name, the image name, and the tag, translates to a digest which actually matches the digest in your registry. That's the second level of checking. And for that, there's a nice pattern that Kyverno implements in a Kubernetes-native manner, where you can use any data from a ConfigMap as part of a policy check itself. So you can write a policy that says: given the image name and the tag, if the digest is not in your ConfigMap, reject and block that particular image.
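The first-level check Jim mentions, validating that images come from a trusted registry, can be sketched like this; the registry hostname is an illustrative placeholder:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: enforce
  rules:
    - name: validate-registries
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Images may only be pulled from the approved registry."
        pattern:
          spec:
            containers:
              # every container image must start with the trusted registry prefix
              - image: "registry.example.com/*"
```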
The final piece of this has to do with signing images, and for that you would need something like Notary or Notary v2, which is being worked on in the community. What those projects do is verify not only that you have the valid hash, but that the contents are also signed by somebody you trust, right? So that's an external check, a third check, which Kyverno would not perform; you would need something like a Notary v2 client which can check the digital signature and enforce that. But yeah, a very good topic, and still something we're looking at very closely; we're interested in ideas on how we can expand what Kyverno does for end-to-end image and content trust.

I think there was another question from Nelson. Yes, okay. The question is: what does Kyverno still not do, and what are the plans for the near future?

Yeah, so we're constantly evolving. By the way, the Kyverno schema is v1 now, so we manage compatibility: all of the basics that you saw with these policies are well supported and not something we're going to change without backward compatibility management, right? But of course, there are always more features, more advanced things we want to do. The whole image-signing area is an interesting one to explore: can Kyverno and Notary v2 work together to solve a particular problem? That's something that could be a future development. As for examples of things actually coming out: we are at 1.3.2-rc1 right now, and in the next few days 1.3.2 will be out. In this release itself there are some very interesting features. Shuting mentioned the namespaceSelector. Going back to the previous question: what if I want to match pods in a namespace and check a label on the namespace? Today, Kyverno can check the namespace labels or annotations in the object being changed, all based on the incoming request to the API server. What this feature coming in 1.3.2 allows you to do is check an existing namespace for its labels, by selecting on those labels. So the policy will only match if there's an existing namespace with those labels; it's not just checking the namespace in the incoming API request, right? That's one of the use cases that had come up, like Christopher also asked before, and it's coming in 1.3.2. It's already available as an RC, and we will be releasing the final version after some more testing in the next few days.
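As a rough illustration of the kind of matching being discussed, a rule can select resources by the labels on their namespace with a namespaceSelector in the match block. The label key and value here are made up for the example, and exact behavior varies by Kyverno version, as Jim notes:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: match-by-namespace-label
spec:
  validationFailureAction: audit
  rules:
    - name: pods-in-labeled-namespaces
      match:
        resources:
          kinds:
            - Pod
          # only pods in namespaces carrying this label are matched
          namespaceSelector:
            matchLabels:
              policy-enabled: "true"
      validate:
        message: "Pods in policy-enabled namespaces must define an `app` label."
        pattern:
          metadata:
            labels:
              app: "?*"
```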
Another interesting feature in 1.3.2, and let me share my screen so I can actually show this, addresses a use case which often comes up: what if you want to write a policy which limits or manages things based on some existing API data? Sorry, I was trying to make my screen bigger. Okay, hopefully that's more visible. So this is a more complex policy where...

Wait a minute, I can't see your screen.

Are you sure? Hold on, let me try to share again. Maybe that didn't go through. Okay, better?

Yeah. All right, perfect.

So this is a policy which is checking within the cluster for a particular API object, right? By the way, one interesting thing, I don't know if you noticed when Shuting was doing her demo, but I'm getting help for my policy directly in Visual Studio Code. That's a feature of Kyverno; it's a small feature, but a pretty nice one, because you can get help for any part of your cluster policy directly, either using kubectl explain or, in this case, because I'm running Visual Studio Code, I can see help for all of the fields, and it does syntax checking and things like that directly in the editor. So again, another example of this whole policy-as-code approach and why the native approach is important: you can use standard tools, et cetera.

But going back to this policy, this is the interesting part over here, and we'll actually run this policy and make sure it works. If you look at what's happening, it says: I'm going to make an API call to the Kubernetes API server. For that API call, I'm going to check namespaces, using the incoming namespace of the object from my request, and in that namespace I'm going to check how many services exist. Once I get those services, I pass them to a JMESPath expression which looks for LoadBalancer-type services and determines the count. The result of these two lines, this API call, is stored in a variable called serviceCount. And what I can do is, if the service count is already one, say: don't create another LoadBalancer service, because I want to limit each namespace to a single LoadBalancer service. If you're running in a cloud, like Oracle or AWS or any other cloud, these things can get expensive, right? And you cannot enforce this using resource quotas, because with quotas you can only limit service objects as a whole; you cannot say "a service object of type LoadBalancer". So this is something that, with the combination of a simple policy, you can now easily enforce in your cluster.

The other cool thing is, if I go to my terminal, and let's make this bigger: I can test the same thing outside my policy. If I do kubectl get with the --raw flag and take that same path, /api/v1/namespaces, it's the exact same syntax that I would use within my policy. And if I want a count, I can use JMESPath: I have a command-line tool called jp which processes JMESPath expressions, and I can take the items and apply the length function. This is all documented on the JMESPath website, and JMESPath, by the way, is also used by kubectl, the AWS CLI, et cetera. So if I run this, it tells me that I have five namespaces in my cluster. The Kyverno policy is using the same familiar syntax, the same expressions, that you would use outside with kubectl itself. Here it's checking for the LoadBalancer type and then using the length function, but it's doing a call into the cluster. So let's actually see that quickly in action. Let me check. Yeah, I already have the LoadBalancer policy installed, and I have a pod, because we were testing and playing around before.
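Reconstructed from Jim's walkthrough, and close to the example in the Kyverno documentation, the policy would look roughly like this. Rule and variable names follow his description rather than the on-screen file, and the available condition operators may differ by Kyverno version:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: limit-loadbalancer-services
spec:
  validationFailureAction: enforce
  rules:
    - name: limit-lb-services
      match:
        resources:
          kinds:
            - Service
      context:
        # call the API server for the services in the request's namespace,
        # then count the LoadBalancer-type services with a JMESPath expression
        - name: serviceCount
          apiCall:
            urlPath: "/api/v1/namespaces/{{ request.namespace }}/services"
            jmesPath: "items[?spec.type == 'LoadBalancer'] | length(@)"
      preconditions:
        # only evaluate this rule for LoadBalancer services
        - key: "{{ request.object.spec.type }}"
          operator: Equals
          value: "LoadBalancer"
      validate:
        message: "Only one LoadBalancer service is allowed per namespace."
        deny:
          conditions:
            - key: "{{ serviceCount }}"
              operator: GreaterThan
              value: 0
```

The terminal check Jim runs would be something like `kubectl get --raw /api/v1/namespaces | jp "length(items)"`, using the jp command-line JMESPath processor.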
So if I do something like kubectl expose, let's say on pod nginx, and I say type equals LoadBalancer, what should happen is that Kyverno checks, sees that there's already one LoadBalancer service in this default namespace, and blocks it, right? So that's an example of using this policy in action to solve a problem like this, something you want to enforce in your namespaces. This feature is available in 1.3.2, and it's something new. Other new features and examples of things we're working on have to do with simplifying lookups, like for images: if, within a policy itself, you want to work with a particular tag, or even use regex expressions within policies, advanced things like that. Those are the types of constructs, simple but powerful things, that we're able to add quickly with Kyverno and Kyverno policies.

Great, Jim. We have two questions here. One is from Olga. I don't know if I fully understand what he means, but: will Kyverno be comparable with all three major clouds?

Yes, so perhaps the question is whether it's compatible with all major clouds, whether it will work with all control planes. Kyverno is all standard Kubernetes in terms of the machinery behind the scenes. It installs itself as a webhook server, and that is supported with EKS, AKS, GKE, OKE, all of these types of systems. As long as it's Kubernetes-compliant, Kyverno can install itself, and it acts as an admission controller within the control planes of all of these cloud providers. Or you can use it, like in my case, on my desktop; Shuting is using Minikube. So it works with any Kubernetes-compatible system.

Great, you already answered this question, but we have about five more minutes. We have a question about what your recommendations would be with respect to different use cases for Kyverno.

Yes, a very interesting question. Some of the solutions we're seeing, like the Flux use case I showed, are very creative in terms of how Kyverno policies are now embedded for multi-tenancy in Flux itself. We're also working with other open source projects like Cluster API, and with cert-manager, to do automated configurations and automated generation of defaults using Kyverno. One other quick example I can show, and we might not have time to go through this full example, but one very interesting thing with any multi-tenant system is that for namespaces, you want control over usage and you want the ability to securely share namespaces. So here I have a policy which adds access controls to a namespace, and it combines validate, mutate, and generate rules all in one. Those are the types of use cases that are most powerful. In this case, it checks who's creating the namespace; based on that, it sets certain labels, and it enforces that a naming convention is followed for the namespace. And then for that namespace, it automatically generates some defaults as well. Let me check and see if I have this policy.

Can you share your screen? I don't think you're sharing.

Not sharing again? Let me fix that.

While you do that, there is another question here, quickly: are CRDs and controllers supported by the auto-generation rules?
Yes, so the question is whether CRDs and controllers are supported. Yes: any Kubernetes resource is supported with Kyverno. We don't limit ourselves to the standard resources; all CRDs can be managed too. And those are some of the examples I was mentioning, like Cluster API or cert-manager. cert-manager uses certificates and certificate authorities, and all of those custom resources can also be managed using Kyverno.

And the really interesting use cases come when you start combining some of these policies. So if I look here, I already have these namespace policies and other things installed. What I also installed, just to complete the example, is a role for a user named Nancy, who is a namespace administrator, right? She has a role binding that lets her create namespaces, and she is assigned the admin role. Kyverno will automatically generate a role so that the user can only view or manage the namespaces that user creates. So if I do, for example, kubectl create namespace test, and I run it as user Nancy here, Kyverno tells me: hey, there's a naming policy, you have to put small, medium, or large in your namespace name, okay? So let's try that again with something like test-small, and now I was able to create that namespace. And then if I describe the namespace, let me fix that command, kubectl describe namespace: I see that the namespace is created, and Kyverno automatically added a label saying who created it. I have a quota already set on the namespace. And if I now do something like kubectl get networkpolicy, oops, in this case I need to specify which namespace, so let's give it the namespace: get networkpolicy. I see that there's a default network policy generated as well, right? So all of this is managed through that policy. Actually, one interesting thing: let's try to delete that network policy and see what happens. I'm going to say delete networkpolicy default-deny. It got deleted, but if we query again, you'll see that Kyverno immediately recreated the network policy, because that resource is managed through a Kyverno policy. Even though the user in this case had the RBAC permissions to delete the resource, Kyverno will always recreate or regenerate it based on the policy settings. So Paolo, to answer your question, those are the types of use cases we're seeing now, where users are combining validation, mutation, and generation to solve some very interesting problems in managing clusters and managing Kubernetes at scale in a secure fashion.
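A sketch of the generate rule behind this part of the demo, following the published add-networkpolicy sample; synchronize: true is what makes Kyverno recreate the NetworkPolicy when it's deleted:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-network-policy
spec:
  rules:
    - name: generate-default-deny
      match:
        resources:
          kinds:
            - Namespace
      generate:
        kind: NetworkPolicy
        name: default-deny
        # place the generated resource in the newly created namespace
        namespace: "{{ request.object.metadata.name }}"
        synchronize: true   # re-create the resource if it drifts or is deleted
        data:
          spec:
            podSelector: {}   # selects all pods in the namespace
            policyTypes:      # no rules listed, so all traffic is denied by default
              - Ingress
              - Egress
```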
Great question, great chat. We are really close to the end, but I should ask you: Kyverno is a very promising project, and it's starting as a sandbox project in the CNCF. Please show us how to contribute and participate in the project, how to become a member of the project. How can we join you?

Several ways, right? And thank you for mentioning that, on the community side, because Kyverno is a very open, flexible, and friendly community. So definitely, and I'll post our GitHub link: there are several ways users can start. If you're simply a policy user, you can submit ideas for new policies, and you can contribute by submitting sample policies to the repo that we shared. If you have an idea for a policy, something you're trying to solve, just reach out and we will help you create those sample policies and add them to our repo. Other ways, of course, are documentation and other things we're always looking to improve, so that's another simple way to contribute. And then, if you're interested in coding and developing Kubernetes controllers, there are several issues in our Git repo, and there's a link posted here, which are marked as "good first issue". Just look for those "good first issue" items in GitHub; they are very simple, very easy to get started with. The key is to get started. Also, the Kyverno Slack channel is very active, so if you're interested in policy management and Kubernetes security, make sure you join our Slack channel as well, reach out over there, say hello, and feel free to ask questions. All questions are good questions, so if something's not clear, don't hesitate to reach out and ask.

Jim, Shuting, thank you so much. It was amazing; I'm very proud to have been here with you. And I will ask everyone who wants to contribute to open source, and who likes policies and security controls, to please join this project, because it's open source in the CNCF sandbox, and with your contributions this project will move ahead very well. Jim, again, thank you so much. Thank you, Shuting. That's all, our time is up; it was amazing. Thank you everyone for joining us for this episode of This Week in Cloud Native. It was great to have my friends Jim and Shuting talking about Kyverno and security as code. We also really loved the interaction. Thank you so much, everyone. See you next week.

Thanks Paolo, thanks everyone. Thanks.