All right everybody, welcome again to another OpenShift Commons briefing. It's a wonderful Monday in the world, and we are doing AMAs with upstream projects. Today we're going to talk with the two folks behind Kyverno, a new CNCF Sandbox project: Ritesh Patel and Jim Bugwadia. I'm going to let them introduce themselves, we're going to have a nice presentation about what Kyverno is doing and why, and where it fits into the whole CNCF landscape, and then we'll have live Q&A at the end. So Ritesh and Jim, take it away.

Hey folks, I'm Ritesh Patel, VP of Products and co-founder at Nirmata.

Hi everyone, this is Jim Bugwadia, co-founder and CEO at Nirmata.

All right, so let's dive into it. Today we'll talk about Kyverno, an open source Kubernetes-native policy management solution that's now a CNCF Sandbox project. Here's the agenda: I'll dive deeper into why Kyverno, what it solves, and why we felt there was a need for something like it, and then jump into the technical details of how it works. We have some use cases we can go over to give you folks an idea of the kinds of problems you can solve using Kyverno. I'll also discuss some of the other tooling around Kyverno, specifically what Nirmata offers to help ease the management of Kyverno policies and reporting, and then jump into a demo followed by a quick roadmap and Q&A.

So let's jump right into it. You may wonder what "Kyverno" means: it actually means "govern" in Greek. And obviously the word Kubernetes also has its origin in Greek, so we thought it would be appropriate to use Kyverno as the name of this project, because Kyverno is essentially a policy engine for Kubernetes, and a policy engine is typically used to govern, or ensure compliance of, various aspects of Kubernetes. Kyverno is a Kubernetes-native policy engine, which means it understands Kubernetes resources and Kubernetes patterns. It is aware of Kubernetes constructs and is designed to be familiar to Kubernetes users, so there's essentially no learning curve.

Let's take a step back and understand why we need policies in Kubernetes: what are policies, and how do they help? If you're familiar with Kubernetes, you can configure lots of different resources, whether it's your infrastructure for Kubernetes or your applications and workloads running in Kubernetes, and all of that configuration is essentially done using YAML. It gets fairly complex; some configuration files could be 200 lines, and the challenge is ensuring that the configuration can be managed and is secure. There are also external tools, for example Helm and Kustomize, that tweak existing configuration to adapt it to environments, so the end configuration that ends up in the cluster may be different from the source depending on where your resources are deployed. Now, in Kubernetes there are ways to configure admission controllers, and as a best practice that's where validation of configuration, as well as any modification or mutation of configuration, can happen, and that's the best place where policies, if any, should be applied. What policies do is essentially let you govern, or ensure, that the configuration being applied to Kubernetes is secure and follows your best practices and standards. And Kyverno is an engine that does that without the user, the developer, or the cluster operator having to build something from scratch or build something custom for their environment.

That brings us to why Kyverno; I touched on this briefly in an earlier slide. Our goal when we started out was to enable policies for Kubernetes and make it really, really simple, make it really easy for users to understand and write these policies. Kyverno policies are declarative and very easy to manage. When these policies are applied, they generate policy results: basically, any policies that are violated are reported in the results, and that too is Kubernetes native. These policies can be of different types: you can have a validating policy, which can be in either audit mode or enforce mode; a policy can mutate an existing configuration by changing parts of it; or a policy can generate new configuration in cases where it's required or missing. That gives you a lot of flexibility to ensure that your Kubernetes applications and resources are configured correctly. Being Kubernetes native, Kyverno supports all of the Kubernetes resource types and can also support custom resources, which is a huge benefit: you can use Kyverno across the board without having to worry about which resources are supported and which are not. And again, being Kubernetes native, Kyverno leverages and understands common Kubernetes patterns like labels and selectors, annotations, owner references, and things like that, which makes it very easy for somebody to start with Kyverno and start using it with Kubernetes.

Now just a little bit of background on the project itself. Kyverno is open source and a CNCF Sandbox project. It was added to the Sandbox in November last year, and since then we've been tracking the downloads: it's almost approaching 2 million downloads in about three and a half months, which is huge.
It went to 1 million downloads about three or four weeks ago, and now we're almost at 2 million. We're seeing a lot of interest in the community; a lot of folks are looking at it and like the simplicity and ease of use of Kyverno. There are other projects in the CNCF and in the community, like Open Policy Agent, which do policy management, but Kyverno, because of its simplicity, has been gaining a lot of interest and a lot of users. So definitely do check it out if you're interested.

A question that comes up when we talk about Kyverno is: how is Kyverno different from Open Policy Agent? Here are some points of differentiation. Open Policy Agent actually predates Kubernetes; it was built as a general-purpose policy engine and then adapted to Kubernetes, whereas Kyverno itself is designed for Kubernetes. It's Kubernetes native. The policies are native Kubernetes resources in Kyverno, whereas in Open Policy Agent the actual policies are defined in a language called Rego, which is something a user would have to learn to write policies. There is a project called Gatekeeper which integrates with Open Policy Agent and tries to adapt Kubernetes resources for defining policies, but ultimately the underlying enforcement and all of the policy processing happens based on policies written in Rego. There are also a few other things about Kyverno which are not necessarily available or possible with OPA or OPA Gatekeeper. Kyverno is secure by default: you don't need to call out to any external systems for any processing, and because this is in the admission control path, that is very important.
A few other scenarios that Kyverno enables are around the "if this, then that" paradigm: you can check for a particular configuration and, if it exists, write policies to do certain things. These are very easy to do in Kyverno. And because Kyverno policies are themselves Kubernetes-native resources, they lend themselves very well to tools like Kustomize, and you can use GitOps to actually apply policies across your clusters. So really, in a nutshell, there are several benefits we get with Kyverno in terms of ease of use as well as these other capabilities.

Let's dig a little bit deeper and look, at a high level, at a policy in Kyverno versus OPA; we'll go into more detail on how Kyverno policies are defined later. This is just to show the difference between Kyverno and OPA. On the left-hand side, you see the Kyverno policy. If you are familiar with Kubernetes and have used resources like Deployments and Pods, this will be very familiar to you. It's very straightforward to create these policies; there are certain keywords like match and validate, which we'll talk about a little bit later, but overall this policy just validates and checks that the root filesystem is read-only. On the right-hand side, you see the same policy written in Rego, which is what you would use with OPA. So this is just to quickly show the difference between Kyverno and OPA.

Next, we'll talk about how Kyverno works. As I mentioned earlier, Kyverno is a policy engine that runs inside the cluster and registers as an admission controller. Every request that comes in through the API server is processed through Kyverno, and the policies configured in Kyverno are applied to those requests. Kyverno gets an admission review request and applies the policies; the policies, depending on their configuration, may generate certain events or a policy violation, and then the response is sent back. If a policy is configured as audit only, it's just in a kind of report-only mode, and the request itself is not blocked. But if the policy is configured to validate and block, then the request can also be blocked if the policy fails. So that's how Kyverno works in Kubernetes, at a high level.

Here is a quick view of the policy structure. When you're defining a policy in Kyverno, a policy contains one or more rules, and each rule contains match or exclude criteria, which can be mapped to Kubernetes resource kinds and names; you could also use label selectors and namespace selectors, as well as user roles, groups, and so on. Then you have the various actions that Kyverno can take: validate, mutate, or generate. These are the high-level blocks, and we'll talk about each of them in the next few slides. To map the structure we saw earlier, here we have a Kyverno policy showing the same structure: at the top, on line number eight, we have the match block, in this case matching all resources of kind Pod; and on line number 12 we have the validate block, which does the validation for every pod, in this case checking that the security context is set to run as non-root, et cetera. So that gives a quick idea of the policy structure. A validate policy essentially matches the fields defined in the validate block and determines whether your configuration matches those fields; it validates the configuration by matching all of the fields.
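As a rough sketch of the match/validate structure just described, here is what such a policy could look like; the policy and rule names are illustrative, not taken from the slide:

```yaml
# Illustrative Kyverno validate policy with the match/validate
# structure described above; names and the message are examples.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: enforce   # or "audit" for report-only mode
  rules:
    - name: check-run-as-non-root
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Containers must run as a non-root user."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
```

The pattern block mirrors the shape of the resource being validated, which is why it reads like any other Kubernetes YAML.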
In the validate block you can have patterns: you can match for the existence of fields, and with operators you can check whether a value is greater than or equal to, or not equal to, something. So there's a lot of flexibility in the types of matching you can do in these policies for validate.

Next, the mutate policy. When you want to mutate an incoming config, you can specify a patch, either a JSON patch or a strategic merge patch, to update the incoming resource. For example, if you want to add a port, or insert a secret name, you can do that by specifying the patch in the policy, and you can even do it conditionally: you can add configuration only if it's not already there, or you can have if-then conditions, for example adding a value only if a port name starts with a certain string, and so on. This again gives you a lot of flexibility to manipulate incoming configuration. It's actually very important in use cases where you want to add certain labels on every resource, or on specific resources; you can now ensure that's done by creating a mutate policy. A lot of times these labels are used by other tools for security, networking, monitoring, and so on. So Kyverno, through mutate policies, allows you to address some of those use cases.

The final type of policy is a generate policy, where you can actually generate certain resources. The example here on the right-hand side shows how you can generate a network policy to deny all traffic. This could be the default network policy for your namespaces, and it would be created any time a new namespace is created.
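A rough sketch of that default-deny pattern, a generate rule triggered by namespace creation, could look like this; the names are illustrative:

```yaml
# Illustrative generate policy: create a default-deny NetworkPolicy
# whenever a new Namespace is created. Names are examples.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-deny
spec:
  rules:
    - name: default-deny-ingress
      match:
        resources:
          kinds:
            - Namespace
      generate:
        kind: NetworkPolicy
        name: default-deny
        # Place the generated resource in the namespace that triggered it
        namespace: "{{request.object.metadata.name}}"
        data:
          spec:
            podSelector: {}      # select all pods in the namespace
            policyTypes:
              - Ingress          # no ingress rules defined, so deny all
```

The `{{request.object...}}` variable pulls the new namespace's name out of the admission request.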
So there is usually a trigger which kicks off the generation of these resources, in this case a network policy. You can also use generate to clone existing resources. This essentially gives you the flexibility to ensure that certain required resources, like network policies, quotas, and limits, are always available in your cluster or for your applications.

Once these policies are configured and are processed against incoming requests, violations can be generated. Here is an example of viewing the policy results in Kubernetes. On the top, you see the policy results for a particular namespace, in this case the default namespace, showing that nine policies passed and one failed. And you can get more granular details, like in the second screenshot here, to see which policies failed and exactly why, so that the violation can be fixed. Again, it's all inside your cluster; you don't need an external tool to retrieve that information.

So that was a quick overview of some of the features; there are lots of advanced features being built, and this is ongoing: as the community comes up with more use cases, we're building more and more features. We talked a little bit about anchors and operators, like comparators and if-then-else. You can also have variables: you can point to certain parts of your config using JMESPath. And you can look up external data from a ConfigMap or through API calls.
This lets you check for violations using data that may not be part of the incoming request or part of the resource configuration. You can also set up deny rules. One thing that's interesting is auto-generation of pod controller rules: if you have policies for pods, they automatically get applied to pod controllers like Deployments, StatefulSets, Jobs, and so on. And finally, Kyverno can also be run in offline mode: there's a command line for Kyverno which can be used offline, so you can check for violations in your configuration, say while it's still in Git, before it's applied to your cluster. That's definitely useful; you don't want to wait until the configuration is in your cluster, you can check up front and address those issues before applying your resource changes.

We talked about a few use cases, but more use cases are emerging as Kyverno gets more widely used. We talked briefly about security validation and enforcement. Kyverno also enables you to do fine-grained RBAC: for example, when namespaces are created, you can auto-create some of the roles, role bindings, et cetera, depending on the namespace name, labels, or any other criteria. Kyverno enables multi-tenancy in Kubernetes, again through some of this automation: by generating RBAC rules, by generating network policies, and it can generate quotas. And we're also working with various SIGs to enable more secure multi-tenancy within Kubernetes for your applications. Auto-labeling is a use case we talked about earlier.
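The auto-labeling use case could be sketched as a simple mutate policy like the following; the label key and value here are examples, not from the talk:

```yaml
# Illustrative mutate policy: add a label to every Pod via a
# strategic merge patch. The label key and value are examples.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-team-label
spec:
  rules:
    - name: add-label
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              # Tools for monitoring, networking, or cost allocation
              # can then key off this label.
              app.kubernetes.io/managed-by: kyverno
```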
We're also seeing patterns like sidecars being used with various tools. For service mesh, Istio sidecars are very common, and there are other scenarios, like if you want to mount certificates, or continuously fetch data from some endpoint, for example Vault certificates, and so on, where sidecars are being used. Because Kyverno can mutate existing configuration, you can inject sidecars using Kyverno, so you don't have to write a new operator to do that. And finally, you can have very conditional rules, "if this, then that" for Kubernetes, and that opens up a whole set of use cases in your Kubernetes deployment. So it's very flexible, and if you have other use cases that we have not covered or seen, it would be great to hear about those and see how we can help with using Kyverno.

One area where we really see a lot of interest today is pod security. In the past, pod security has been defined using PSPs, but PSPs have been marked for deprecation by the community. There are other options and alternatives being discussed, and we are part of those discussions. But today, if you are interested in pod security policies, Kyverno provides a very easy way of achieving that. We already have a set of policies on the Kyverno website which address all of the requirements for pod security, and they are organized into levels, like the PSPs and some of the discussions that are happening in the community.
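As one sketch of the kind of pod security control these policies implement, here is a baseline-style check that blocks privileged containers; the policy name and message are illustrative. The `=()` wrapper is Kyverno's conditional anchor: the field is only checked if it is present.

```yaml
# Illustrative pod security check: disallow privileged containers.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged
spec:
  validationFailureAction: enforce
  rules:
    - name: privileged-containers
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              # If securityContext.privileged is set, it must be false
              - =(securityContext):
                  =(privileged): false
```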
So while the community works on the next iteration of PSPs, if you're interested in alternatives, it would be great to check out Kyverno and let us know whether it addresses your security needs. A lot of folks in the community that we know are already using Kyverno for pod security, so it's something you can try out, and let us know if there are gaps we can address.

Before we jump into the demo, one thing I want to quickly touch on: as Kyverno gets used more widely in Kubernetes, we're seeing some requirements around managing policies at scale, across multiple clusters, ensuring consistent deployments, and so on. So we've created a cloud-based solution at Nirmata to help manage policies at scale. Through Nirmata, you can deploy policies across clusters using GitOps. There's a concept of policy groups: you can define different policy groups, say for pod security or for multi-tenancy, and these policy groups can be applied either automatically to new clusters or to specific clusters. So you can manage it very easily instead of having to deal with each cluster individually. The ongoing challenge of keeping your policies up to date and automating that deployment can also be handled very easily with the integration of Nirmata with Kyverno. And Kyverno generates a lot of violations: depending on the size of your clusters and the number of pods and applications running, you could have tons of policy violations and failures. Policy reports are available in the cluster, but through our integration with Kyverno, you can get a very granular report of the violations, and not just the report, but also an understanding of what the problem is.
And then you can either get help remediating that problem, or file tickets to get it remediated by the application owner, and so on. So through the integration, we cover the full lifecycle of managing Kyverno policies and getting visibility into Kyverno violations. At scale, if you have multiple clusters that you're managing, this becomes very, very important. Next, we'll jump into the demo, so I'll stop sharing and let Jim take over from here. Go ahead, Jim.

Okay, thanks, Ritesh. All right, is my screen being shared or not yet? Not yet. Let's see what's going on here. No worries, I can edit this little segue out if we use the video. Here we go. Okay, perfect. Now I'm seeing a black screen with your recycle bin. Okay, there we go.

Yeah, so what I'm going to do is demonstrate how you would get started with Kyverno as a new user and what you can do with it. We'll look at a few different examples. Like Ritesh mentioned, pod security policies get a lot of attention, so we'll start with those, see what Kyverno can do in that area, and then also look at some more advanced policies. Going to the Kyverno site, if I go to documentation and click on installation, we see there are a bunch of different ways to install; there's a Helm chart. But for this, I just want to install using the command line so we can take a look behind the scenes and see what's going on. So I'm going to copy this command, which is a single line to pull down all the YAMLs from the Kyverno Git repo and install into my local cluster. If I do kubectl get namespace, you'll see I'm just using Docker for Windows here, and on this I'm going to run the command line we copied and have it install Kyverno. So we see there are a bunch of CRDs, about six or seven of them, including policy reports.

One interesting thing I want to mention about policy reports is that these are not unique to Kyverno. In fact, the policy reports are being developed in collaboration with the Policy Working Group in the Kubernetes community, and the intent is to make these same CRs available to any other tool that wants to report policy results in a Kubernetes cluster. So this provides a standard way of reporting policy violations, independent of the engine or the tool that's producing them. There is some work being done to take things like kube-bench and adapt them to these policy reports; Falco is being discussed as another candidate, et cetera.

Once these CRDs are installed, we also see that Kyverno created a service account, and a few other things were done; what's also interesting is that some webhooks were created. So now, if we look at our namespaces again, we should see a kyverno namespace. If I do -n kyverno and, let's say, get pods, we should see that we have a single pod running, which is what we expect at this point. Now, I don't have any policies: if I do kubectl get cpol, which is short for ClusterPolicy, it says I don't have any resources. But because the CRD is installed, I can now use something like kubectl explain. Kyverno has both ClusterPolicy, which is cpol, and Policy, which is a namespaced policy. And because it's integrated with Kubernetes, kubectl explain shows me right here what I can put into a policy: I have a background option, which is a boolean; I have a set of rules in the policy, like Ritesh was explaining; and then I have a validationFailureAction, which can be set to either enforce or audit.
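A minimal skeleton showing the top-level spec fields just enumerated could look like the following; the single rule is a placeholder example:

```yaml
# Skeleton of a ClusterPolicy spec with the fields mentioned above.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: example-policy
spec:
  background: true                  # also scan existing resources
  validationFailureAction: audit    # "audit" reports, "enforce" blocks
  rules:
    - name: example-rule
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Pods must carry an app label."
        pattern:
          metadata:
            labels:
              app: "?*"             # wildcard: any non-empty value
```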
If you go further it's very straightforward, and of course, if you have Visual Studio Code or other tools integrated with Kubernetes, you get all this help and schema in your IDE now too, which makes it easy to write policies, check the syntax, things like that. Here I can see the description of rules. But so far I don't have any policies, I just have Kyverno installed, so let's go get some. I'm going to go back to the Kyverno site and click on Policies, and I see pod security best practices and others. Let's go to pod security.

Like Ritesh mentioned, the way these policies are organized is based on the pod security standards. If you're not familiar with those, they're great to review, because what the Kubernetes pod security standards do is organize the pod security controls into three categories. There's privileged, which is basically unrestricted; there's baseline, which is typically used as a default, so it's minimally restrictive; and then there's restricted, which is the highest level of security. And the standards list every control within the pod securityContext structures, the allowed settings, et cetera, for each level. Now, the KEP being developed, which is most likely going to be called pod isolation policies, is also going to be based on these pod security levels and standards, so it's important to keep them in mind; you'll be able to apply one of these levels to namespaces in that upcoming KEP. Anyway, going back to the Kyverno policies: as you see, they are organized into default, which is actually baseline (it got renamed from default), so these are the baseline policies, and this is restricted.
If I go into default, we'll see each policy that's part of this. If I click on any one of them, for example require default proc mount, it checks both init containers and containers: if a procMount is specified, it has to be Default; it cannot be changed from that setting. There are, of course, several other policies. If we go back to restricted and look at some of the stronger, tighter control policies, you'll see things like require run-as-non-root, which is pretty important for non-privileged pods, to make sure both init containers and containers always run as a non-root user.

So let's go back to this main pod security page. What we have over here is a one-line command: it uses Kustomize, pulls down policies from this Git repo, and then just uses kubectl to apply them. I'm going to take that and see what happens when we apply it to our cluster. It takes a few seconds to pull down all of the YAMLs, and once that's done, it applies them to our cluster. We see about 10 or so policies got applied, and if we go back and look at the cluster policies, we should see that these are now enforced.

Now, what's interesting: if you noticed when I was browsing through the policies, most of them are written at the pod level. But as Ritesh mentioned, one of the things Kyverno does by default, because it understands the relationship between pods and pod controllers, let's take a look at that run-as-non-root policy, is automatically generate rules for the different pod controllers. And of course you can control the settings.
So if I do kubectl get cpol on that policy with -o yaml, and, just to make the output easier to read, pipe it through kubectl neat, which is a really handy plugin for removing things you don't want to see, like all of the managed metadata and owner references, from the YAML. Notice what happened: when I applied this policy, the rule was written to match a Pod, but in addition to the pod rule, Kyverno automatically generated rules for most of the standard pod controllers. And again, this can be tuned; you can specify annotations to control which controllers it applies to, for example restricting it only to Deployments and not DaemonSets, or something like that. But here we can see all of these were generated: this one's for CronJob, and these others are for the more standard pod controllers. So it's pretty neat that I now have all of this installed.

Let me try to run a simple workload. I think I have an nginx pod, so I'm just going to say create -f, and let me check if this is in my temp directory. Yeah. Okay, so immediately, because these policies were set to enforce, Kyverno is saying that running as root is not allowed. It's telling me exactly what got violated: runAsNonRoot must be set to true, both for init containers as well as containers. Since that was violated in my YAML, it basically blocked this particular configuration.

So now let's try something a little bit more intrusive. I'm going to use this site, which is pretty handy if you haven't seen it: it's from Bishop Fox, and it's called Bad Pods.
And as the name suggests, these are pods which are misbehaving, right? So pods which are misconfigured to allow, you know, all sorts of, I guess, to basically be open where if you want to, you know, allow the host bit or access to host network or other namespaces, all of that is fairly open, right? So here I'm going to go into, let's check and see so this, the, you know, which one we want to do. Let's go into everything allowed, which is probably the worst you can do, right? And we'll take a quick look at the YAML. We'll actually go to the deployment just to see how that works. And, you know, we'll look at it. So this is basically allowing, you know, different, it's saying privilege true. It's mounting host, which is also a bad thing to do. It's all, and then allowing all of our, you know, host namespaces to true, right? So let's grab that. And what we want to do is we'll go to the raw YAML. And we'll run that in our cluster, right? So I'm going to now do kubectl create minus F and we'll give it that whole YAML and see what happens. So now we see a lot more warnings or a lot more, you know, errors coming back from our admission controller, which is what we expect. So it's saying that, hey, don't, you know, don't allow host namespaces, don't use hostpat. It's a don't privilege. So all of this got blocked, which is, and of course the other error we saw before, which is run as non-root. So this is how, you know, straightforward and simple it can be to, you know, to set these policies. These policies are very flexible. You can tune them based on, you know, like Ritesh was explaining. A lot of different selectors and, you know, I have some examples in here. So this is actually a different policy. It mutates or this is generating a network policy. But if you look at another policy, like, which also can mutate things. So let's, yeah, this is an example of a mutate policy. 
It also matches pods, but you can write your selectors based on several constructs, including namespace labels and so on. Very flexible, and very simple to apply. One other thing I want to quickly show is what Ritesh mentioned: the ability to select and use external data. Very often you want to write a policy that leverages data from things like ConfigMaps, which are very natural in Kubernetes. In fact, one of the policies I was just working on with somebody in the community makes sure that workload identities are protected based on the image name. What the author did, which is pretty nice, is use a ConfigMap to make sure a certain service account can only be used for certain images. They manage the data through the ConfigMap, but the policy itself stays generic. Another common example is making sure your ingress host names are unique within a particular cluster. Kyverno has the ability to use JMESPath and also to make API calls, and this policy is a combination of those two features. Going through the structure of the policy: it matches Ingresses, and then there's a context, where you can populate different data and variables. Here, it's making a call to the API server, through this apiCall construct, to get all the ingresses, and then it's applying a JMESPath expression to extract all of the hosts. Then it checks whether the host from the request object is already included in that list of existing hosts, and if it is, it denies the operation. So it's fairly complex logic, but pretty straightforward
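(A sketch of that unique-ingress-host idea, adapted from the Kyverno samples; the URL path and expressions here are illustrative, and for simplicity this version only checks the first rule's host:)

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: unique-ingress-host
spec:
  validationFailureAction: enforce
  rules:
    - name: check-host-uniqueness
      match:
        resources:
          kinds:
            - Ingress
      context:
        # ask the API server for all ingresses, then project out the hosts
        - name: hosts
          apiCall:
            urlPath: "/apis/networking.k8s.io/v1/ingresses"
            jmesPath: "items[].spec.rules[].host"
      validate:
        message: "The Ingress host name must be unique across the cluster."
        deny:
          conditions:
            # deny if the incoming host already appears in the existing list
            - key: "{{ request.object.spec.rules[0].host }}"
              operator: In
              value: "{{ hosts }}"
```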
once you get the hang of how this is structured. And the interesting thing is that all of this can be tested very easily from the command line with kubectl. If you do kubectl get --raw with that API path... I don't have any ingresses in this cluster, but you can see what the response looks like. And if you want to test the JMESPath expression itself, you can use jp, the JMESPath command-line tool. If I go back to my policy, let's grab the expression. Of course, if I had some ingresses, this would make more sense; applying it right now just comes back with an empty list, as you see. If I had any ingresses with hosts, it would show me a string list of those ingress hosts. And in my policy, I'm checking that the host coming from the request object is not already used within the cluster. So this is a simple example that again shows how you can combine some of these features. Another very interesting area, which I think Ritesh also mentioned, is policies for multi-tenancy; I can show a few examples of those. What we're seeing is, first of all, that adding labels to namespaces is a very common use case. And there's a session that I'm going to be doing along with Adrian Ludwin, who leads the hierarchical namespace controller (HNC), a project that's also being developed in the community. We're going to be doing this at the Cloud Native Security Day at KubeCon EU, where we'll talk about how Kyverno and HNC can work together to manage namespaces. HNC allows sub-namespaces within a namespace, so we'll show how you can apply those kinds of controls.
Anyway, this policy is pretty straightforward. It says that if a namespace is created, except by cluster admins or by the HNC manager, go ahead and inject the user info. And then you can do more complex things: you could say, I want three sizes of namespaces, small, medium, and large, and based on that, configure different quotas and different settings for my tenants. All of that can be done fairly easily. You can also generate very fine-grained roles and permissions, so that only the person who requested a namespace, the owner, gets permission to delete it. Things like that can be generated on the fly; these are Kyverno policies that do exactly that. We'll share this repo once we have it finalized for the session, but this is a quick example of leveraging labels and then automatically generating different configurations for those namespaces based on them. And of course, you can also add validation logic to make sure only the right settings are used. In this example, each namespace name was required to end with a certain suffix for small, medium, or large, and if it doesn't have one, it gets rejected. So you can combine validate, mutate, and generate to get exactly the behaviors you want for your cluster. And once these policies are set, they're very much data-driven, so the configurations themselves can easily be automated. Okay, one last thing I wanted to show before we switch: in Kyverno, as I think Ritesh mentioned, and as we were looking at, there's this ability to create policy reports.
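(As a sketch of that small/medium/large idea: a generate rule keyed off a namespace label could create a matching ResourceQuota. The label name and quota values below are invented for illustration; you'd have one such rule per size:)

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-ns-quota-small
spec:
  rules:
    - name: generate-resourcequota
      match:
        resources:
          kinds:
            - Namespace
          selector:
            matchLabels:
              size: small                 # hypothetical tenant-size label
      generate:
        kind: ResourceQuota
        name: default-quota
        # create the quota inside the namespace that triggered the rule
        namespace: "{{ request.object.metadata.name }}"
        data:
          spec:
            hard:
              requests.cpu: "2"           # illustrative values
              requests.memory: 2Gi
```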
And one of our community members created, and has made available as an open source project, something called Policy Reporter, which takes these policy reports and provides a nice graphical UI on top of them. It also has the ability to push the reports to Grafana Loki and Elasticsearch and to create other notifications. It's a really nice example of how you can visualize this graphically within a single cluster. So if you're looking for a single-cluster tool to visualize these policy reports, and even get notifications when they're created, definitely check out the Policy Reporter project. There's also work in progress, by the way, to take more Kyverno metrics and push them to Prometheus, which could then of course be displayed in various UIs and dashboards like Grafana. Policy Reporter is focused on the policy reports; the other work we're starting now is to push engine metrics and engine statistics out as well. All right, let me stop there. I know we have about five minutes left in the hour, so I'd love to see if there are any questions, thoughts, or comments we can help answer. And again, there's plenty of documentation, and if you want to reach out to us, feel free to do so on the Kyverno channel on the Kubernetes Slack.

One thing I was actually noticing is that your documentation is awesome. Congrats on that, because I think that's one of the things that drives communities and really helps people adopt. We have a couple of people on the call with questions. Paul Morie, if you want to pop yourself off mute and onto the screen, and then Kirsten Newcomer.

Yeah, so I heard a lot about pods. I didn't hear whether you can write policies around custom resources.
Say I have a custom resource for my own bespoke controller that creates pods, and it contains something pretty close to a pod spec. Can I write a Kyverno policy on it?

Absolutely, yes. One of the things Kyverno has supported right from the beginning is full custom resources, and it helps if the custom resource schemas are structural. If they're structural, Kyverno can also validate the various paths and fields for that custom resource. If they're not, that validation doesn't occur, but Kyverno will still apply the policies.

Okay. And how about an aggregated API? Can I write policies against APIs that another API server might bring in?

You can make calls to an aggregated API server through the apiCall construct, for example if you want to pull metrics, things like that. And if there are resource definitions, those can still be looked up and policies applied to them.

Okay, thank you.

Sure. There's one question in the chat. Mike is asking: do you need a CNI to enforce the policies?

For network policies, yes. If you're generating a NetworkPolicy, you will still need a CNI to provide the runtime enforcement, to make sure the right segmentation exists on the network layer. But other policies, like pod security, are applied directly with the API server, and Kyverno itself will block configurations that violate the policies you set. Kirsten, take it away.

Oh, can I ask a follow-up first? Is the mutate policy cached until validate is run?

Yes. There is an ordering in the admission controls: validation occurs at the end, so all of the mutation logic runs first, and once that's done, you get a chance to validate.
There's also a way to re-invoke admission controllers. Kyverno, by default, when it installs itself, tries to be the last admission controller to receive requests. There are some customers who run both OPA and Kyverno for various reasons, and in that case you could still run a particular policy and register to receive any mutate results.

Great. Kirsten, take it away.

Thanks. So I had a question about network policy generation as well. How do you inform the generation of that network policy? Are you using config data to inform it, or is that optional? How do you decide what's going to be in the generated policy?

Right, so a few things. The trigger to generate the policy could be the creation of a namespace, the setting of a label, or the setting of an annotation. One pattern, if you want some variation and want to control exactly what gets generated, is to use labels and, based on those labels, have templates for different types of network policies. That's one quick way of changing it. You can also do things like, because Kyverno can look up the namespace and match namespace selectors, say that for certain namespaces, if there's an annotation on that namespace, you trigger a different network policy; you'd have two different generate rules, one for each. So there are a few different ways of managing those kinds of variations.

Great, thank you. That's helpful. And I know we're pretty much right at the top of the hour. I actually have two questions, but I'm going to pick one of them: do you have thoughts on policies, or ways that we as a community might move forward, to apply policies in response to events at runtime?
Yeah, great question, and something that we have been thinking about. There's been discussion in the policy working group about what other triggers could exist. Admission controls and configuration changes are of course one trigger, but it would be very interesting to also have a set of defined runtime triggers. There's also discussion about authorization checks: somebody in the community was asking about the SubjectAccessReview request that gets sent from the API server when there's an authorization check, and about having more granular policies on that. But yes, to your point, other runtime events could also potentially be used as triggers. It's not supported right now in Kyverno, but we're very open to it and interested in hearing more about the use cases and how that could be standardized.

Great, thanks so much. That's it for me.

Well, thank you guys for coming today. It's wonderful to have these upstream talks on Mondays, because I really didn't know that much about Kyverno before this; I was pretty OPA-centric in my understanding of policy management. So it's really helpful to have the context set, and I think everybody appreciates that, as well as Jim's awesome deep dive and demo today. You're in the sandbox now.

Yes, the road ahead: more production users, more people using it, more feedback on it, and hopefully incubation and graduation sometime soon.

Absolutely. And we'll definitely get you back on as it matures and we get further down the path. Congratulations on getting into the sandbox and getting it to this state. Again, if you're interested in this topic, these guys are pretty active in some of the SIGs and the CNCF, and as I mentioned earlier, the documentation is rock solid. Good on that.
As a community person, that's one of my key tests for whether something can keep going forward. And definitely, when you get a new release, new features, new functions, reach out to us, Ritesh and Jim, and we'll have you back. So, good work, good luck, and thanks again for sharing the information with us today.

Thanks for inviting us.