All right everybody, welcome to another OpenShift Commons briefing. It's a wonderful Monday in the world. We're doing AMAs with upstream projects, and today we're talking with the two folks behind Kyverno, a new CNCF sandbox project: Ritesh Patel and Jim Bugwadia. I'm going to let them introduce themselves. We'll have a nice presentation about what Kyverno is doing and why, and where it fits into the CNCF landscape, and then we'll have live Q&A at the end. So Ritesh and Jim, take it away. Hey folks, I'm Ritesh Patel, VP of Products and co-founder at Nirmata. Hi everyone, this is Jim Bugwadia, co-founder and CEO at Nirmata. All right, so let's dive into it. Today we'll talk about Kyverno, an open source, Kubernetes-native policy management solution that's now a CNCF sandbox project. Here's the agenda: I'll dive deeper into why Kyverno, what it solves, why we felt there was a need for something like it, and then jump into the technical details of how it works. We have some use cases we can go over to give you an idea of the kinds of problems you can solve using Kyverno. I'll discuss some of the other tooling around Kyverno, specifically what Nirmata offers to ease the management of Kyverno policies and reporting. Then we'll jump into a demo, followed by a quick roadmap and Q&A. So let's jump right into it. You may wonder what the name Kyverno means. It comes from the Greek for "govern," and since the word Kubernetes also has Greek origins, we thought it would be an appropriate name for this project: Kyverno is essentially a policy engine for Kubernetes, and a policy engine is typically used to govern, or ensure compliance of, various aspects of Kubernetes.
So that's why the name. Kyverno is a Kubernetes-native policy engine, which means it understands Kubernetes resources, patterns, and constructs, and it's designed to be familiar to Kubernetes users, so there is essentially no learning curve. Let's take a step back and understand why we need policies in Kubernetes, what policies are, and how they help. In Kubernetes you configure lots of different resources, whether for the cluster infrastructure or for the applications and workloads running on it, and all of that configuration is done in YAML. It gets fairly complex; some configuration files can run to 200 lines. The challenge is ensuring that the configuration can be managed and is secure. There are also external tools, for example Helm and Kustomize, that tweak existing configuration to adapt it to different environments, so the configuration that ends up in the cluster may differ from the source depending on where your resources are deployed. Kubernetes provides a way to configure admission controllers, and as a best practice that's where validation of configuration, as well as any modification or mutation of configuration, can happen. That's the best place where policies, if any, should be applied. What policies do is let you govern the configuration being applied to Kubernetes, ensuring it is secure and follows your best practices and standards.
And Kyverno is an engine that does this without the user, developer, or cluster operator having to build something from scratch or custom for their environment. That brings us to why Kyverno, which I touched on briefly in an earlier slide. With Kyverno, our goal when we started out was to enable policies for Kubernetes and make them really simple, really easy for users to understand and write. Kyverno policies are declarative and very easy to manage, and when they are applied they generate policy results: any violations are reported in these results, which are also Kubernetes-native. Policies can be of different types: a validating policy can run in either audit mode or enforce mode; a mutating policy can change parts of an existing configuration; and a generate policy can create new configuration where required configuration is missing. That gives you a lot of flexibility to ensure that your Kubernetes applications and resources are configured correctly. Being Kubernetes-native, Kyverno supports all of the Kubernetes resource types and can also support custom resources, which is a huge benefit: you can use Kyverno across the board without having to worry about which resources are supported and which are not. And, again being Kubernetes-native, Kyverno leverages and understands common Kubernetes patterns like labels and selectors, annotations, owner references, and so on.
So that makes it very easy for somebody to start using Kyverno with Kubernetes. Now, a little background on the project itself. Kyverno today is open source and a CNCF sandbox project; it was added to the sandbox in November last year. Since then we've been tracking downloads, and it's approaching 2 million downloads in about three and a half months, which is huge. It passed 1 million downloads three or four weeks ago, and now we're almost at 2 million. We're seeing a lot of interest in the community; a lot of folks are looking at it and like the simplicity and ease of use of Kyverno. There are other projects in the CNCF and in the community, like Open Policy Agent, that do policy management, but Kyverno has been gaining a lot of interest and users because of its simplicity, so definitely check it out if you're interested. A question that comes up when we talk about Kyverno is: how is Kyverno different from Open Policy Agent? Here are some points of differentiation. Open Policy Agent actually predates Kubernetes; it was built as a general-purpose policy engine and then adapted to Kubernetes, whereas Kyverno itself was designed for Kubernetes. It's Kubernetes-native: the policies are native Kubernetes resources in Kyverno, whereas in Open Policy Agent the policies are defined in a language called Rego, which is something a user has to learn in order to write policies. There is a project called Gatekeeper, which integrates with Open Policy Agent and tries to adapt Kubernetes resources for defining policies.
But ultimately, the underlying enforcement and all of the policy processing happens via policies written in Rego. There are also a few things about Kyverno that are not necessarily available or possible with OPA Gatekeeper. Kyverno is secure by default: you don't need to call out to any external systems for processing, and because this is on the admission control path, that's very important. Kyverno also enables scenarios like "if a particular configuration exists, then do certain things"; conditional policies like these are very easy to write in Kyverno. And because Kyverno policies are themselves Kubernetes-native resources, they lend themselves very well to tools like Kustomize, and you can use GitOps to apply policies across your clusters. So in a nutshell, there are several benefits you get with Kyverno in terms of ease of use as well as other capabilities. Let's dig a little deeper, at a high level, into what policies look like in Kyverno versus OPA; we'll go into more detail on how Kyverno policies are defined later. On the left-hand side you see a Kyverno policy. If you're familiar with Kubernetes and have used resources like Deployments and Pods, this will look very familiar, and it's very straightforward to create these policies. There are certain keywords like match and validate, which we'll talk about a little later, but overall this policy just validates that the root filesystem is read-only.
On the right-hand side you see the same policy written in Rego, which is what you would use in OPA. This is just to quickly show the difference between Kyverno and OPA. Next, we'll talk about how Kyverno works. As I mentioned earlier, Kyverno is a policy engine that runs inside the cluster and registers as an admission controller. Every request that comes in through the API server is processed by Kyverno, and the policies configured in Kyverno are applied to the request. Kyverno receives an admission review request and applies the policies; depending on their configuration, the policies may generate certain events or a policy violation, and then the response is sent back. If a policy is configured as audit-only, it's essentially a report-only mode and the request itself is not blocked. But if the policy is configured to validate and block, the request is rejected when the policy fails. That's how Kyverno works in Kubernetes at a high level. Here is a quick view of the policy structure. A Kyverno policy contains one or more rules, and each rule contains a match or exclude clause, which can reference Kubernetes resource kinds and names, label selectors, namespace selectors, as well as user roles, groups, and so on. Then each rule has one of the actions Kyverno can take: validate, mutate, or generate. Those are the high-level blocks, and we'll go through each of them in the next few slides. Mapping this to the structure we saw earlier, here we have a Kyverno policy showing the same structure: at the top, at line number eight on the slide, we have the match block, which in this case matches all resources of kind Pod.
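As a concrete sketch of this structure, here is what a validate policy matching Pods might look like, modeled on the published Kyverno pod-security samples (the policy and rule names here are illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  # "audit" reports violations; "enforce" blocks the request
  validationFailureAction: audit
  rules:
    - name: check-run-as-non-root
      match:
        resources:
          kinds:
            - Pod            # match all resources of kind Pod
      validate:
        message: "Running as root is not allowed. Set runAsNonRoot to true."
        pattern:             # incoming resource must match this pattern
          spec:
            securityContext:
              runAsNonRoot: true
```

The match clause selects which resources the rule applies to, and the validate pattern is compared field by field against the incoming resource.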
And then at line number 12 on the slide we have the validate block, which does the validation for every pod; in this case it checks that securityContext runAsNonRoot is set to true. That gives you a quick idea of the policy structure. A validate rule matches the fields defined in its validate block against the incoming configuration; the configuration is validated by matching all of those fields. You can use patterns, check for the existence of fields, and use operators to check, for example, whether values are greater than or not equal to something. So there's a lot of flexibility in the kinds of matching you can do in validate rules. Next, the mutate rule: when you want to mutate an incoming configuration, you specify a patch, either a JSON patch or a strategic merge patch, to update the incoming resource. For example, if you want to add a port or insert a secret name, you can do that by specifying the patch in the policy, and you can even do it conditionally: you can add a configuration only if it's not already there, or use if-then conditions, for example adding a value only for port names that start with a certain string. Again, this gives you a lot of flexibility to manipulate incoming configuration. It's very important for use cases where you want to add certain labels to every resource, or to specific resources: you can ensure that's done by creating a mutate policy.
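A minimal mutate sketch along these lines, using a strategic merge patch together with Kyverno's `+()` add-if-not-present anchor (the policy and label names are illustrative, not from the talk):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-labels
spec:
  rules:
    - name: add-managed-by-label
      match:
        resources:
          kinds:
            - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              # +( ) is Kyverno's conditional "add" anchor:
              # the label is added only if it is not already set
              +(app.kubernetes.io/managed-by): kyverno
```

With this policy in place, any Pod admitted without that label gets it added; Pods that already set the label are left unchanged.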
A lot of the time these labels are used by other tools for security, networking, monitoring, and so on, so mutate policies let you address those use cases. The final type is the generate rule, where you can actually generate resources. The example on the right-hand side shows how to generate a network policy that denies all traffic; this could be the default network policy for your namespace, created any time a new namespace is created. There is usually a trigger that drives the generation of these resources; in this case, we are generating a NetworkPolicy. You can also use generate to clone existing resources. This gives you the flexibility to ensure that required resources like network policies, quotas, and limits are always present in your cluster or for your applications. Once policies are configured and processed against incoming requests, violations can be generated. Here is an example of viewing the policy results in Kubernetes. At the top you see the policy results for a particular namespace, in this case the default namespace: nine policies passed and one failed. You can get more granular details, as in the second screenshot, to see which policies failed and exactly why, so that the violation can be fixed. And it's all inside your cluster; you don't need an external tool to retrieve that information.
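The default-deny generate example described above can be sketched roughly like this (modeled on the Kyverno add-networkpolicy sample; resource names are illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-deny
spec:
  rules:
    - name: default-deny-network-policy
      match:
        resources:
          kinds:
            - Namespace      # namespace creation is the trigger
      generate:
        kind: NetworkPolicy
        name: default-deny
        # place the generated resource in the new namespace
        namespace: "{{request.object.metadata.name}}"
        data:
          spec:
            podSelector: {}  # selects all pods in the namespace
            policyTypes:
              - Ingress      # deny all ingress traffic by default
```

The `{{request.object...}}` variable refers to the resource in the admission request, here the namespace being created.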
So that was a quick overview of some of the features. There are lots of advanced features being built, and as the community comes up with more use cases, we keep building more. We talked a little about anchors and operators, like comparators and if-then-else. You can also have variables: you can point to parts of your configuration using JMESPath, and you can look up external data from a ConfigMap or through API calls. This lets you detect violations by checking data that may not be part of the incoming request or the resource configuration. You can also set up deny rules. One interesting feature is auto-generation of pod controller rules: if you have policies for pods, they automatically get applied to pod controllers like Deployments, StatefulSets, Jobs, and so on. And finally, Kyverno can also run in offline mode: there's a Kyverno command line that can be used offline, so you can check for violations in your configuration, in Git or in CI, before it is applied to your cluster. That's definitely useful; you don't want to wait until the configuration is in your cluster. You can check up front and address issues before applying your resource changes. We've talked about a few use cases, but more keep emerging as Kyverno gets more widely used. We talked briefly about security validation and enforcement; Kyverno also enables you to define RBAC.
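To illustrate the external-data lookup mentioned above, here is a sketch modeled on Kyverno's ConfigMap-context samples: the rule loads a ConfigMap into a variable and then uses a deny condition against it (the ConfigMap name and keys are hypothetical):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-priority-class
spec:
  validationFailureAction: audit
  rules:
    - name: check-priority-class
      match:
        resources:
          kinds:
            - Pod
      context:
        # load external data from a ConfigMap into the variable "allowed"
        - name: allowed
          configMap:
            name: allowed-priorities   # hypothetical ConfigMap
            namespace: default
      validate:
        message: "Pod priority class is not in the allowed list."
        deny:
          conditions:
            - key: "{{ request.object.spec.priorityClassName }}"
              operator: NotIn
              # ConfigMap values are strings; a JSON-array string
              # (e.g. '["high","low"]') is treated as a list
              value: "{{ allowed.data.classes }}"
```

This is the pattern Jim demonstrates later in the demo: the policy logic stays generic while the allowed values are managed as ordinary ConfigMap data.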
For example, when namespaces are created, you can auto-create roles, role bindings, and so on, depending on the namespace name, labels, or other criteria. Kyverno enables multi-tenancy in Kubernetes through this kind of automation: generating RBAC rules, generating network policies, generating quotas. We're also working with various teams in the community to enable more secure multi-tenancy within Kubernetes for your applications. Auto-labeling is a use case we talked about earlier. We're also seeing patterns like sidecars being used with various tools: for service mesh, Istio sidecars are very common, and there are other scenarios, like mounting certificates or continuously fetching data from an endpoint, for example Vault certificates, where sidecars are used. Because Kyverno can mutate existing configuration, you can inject sidecars using Kyverno, so you don't have to write a new operator to do that. And finally, you can have conditional if-this-then-that rules for Kubernetes, which opens up a whole set of use cases in your Kubernetes deployment. It's very flexible, and if you have other use cases we haven't covered or seen, it would be great to hear about them and see how we can help with Kyverno. One area where we really see a lot of interest today is pod security. In the past, pod security has been defined using PSPs, but PSPs have been marked for deprecation by the community.
There are other options and alternatives being discussed, and we are part of those discussions. But today, if you're interested in pod security policies, Kyverno provides a very easy way to achieve the same goals. We already have a set of policies on the Kyverno website that address the requirements for pod security, and they are organized by levels, like the PSPs and some of the discussions happening in the community. So while the community works on the next iteration of PSP, if you're interested in alternatives, it would be great to check out Kyverno and let us know whether it addresses your security needs. A lot of folks in the community are already using Kyverno for pod security, so it's something you can try out, and let us know if there are gaps we can address. Before we jump into the demo, one thing I want to quickly touch on: as Kyverno gets more widespread, we're seeing requirements around managing policies at scale, across multiple clusters, ensuring consistent deployments, and so on. So we've created a cloud-based solution at Nirmata to help manage policies at scale. Through Nirmata, you can deploy policies across clusters using GitOps, and there's a concept of policy groups: you can define different policy groups, say for pod security or for multi-tenancy, and these groups can be applied either automatically to new clusters or to specific clusters. So you can manage everything very easily instead of having to deal with each cluster individually.
And the ongoing challenge of keeping your policies up to date and automating their deployment is also handled very easily by Nirmata's integration with Kyverno. Kyverno can also generate a lot of violations: depending on the size of your clusters and the number of pods and applications running, you could have tons of policy violations and failures. Policy reports are available in the cluster, but through our integration with Kyverno you can get a very granular report of the violations, and not just the report, but also an understanding of what the problem is, and then either get help remediating it or file tickets to have it remediated by the application teams. Through the integration, we complete the full lifecycle of managing Kyverno policies and getting visibility into Kyverno violations at scale. If you're managing multiple clusters, this becomes very, very important. Next, we'll jump into the demo, so I'll stop sharing and let Jim take over from here. There we go. Yeah, so what I'm going to do is demonstrate how you would, as a new user, get started with Kyverno and what you can do with it. We'll look at a few different examples. Like Ritesh mentioned, pod security policies get a lot of attention, so we'll start with those, see what Kyverno can do in that area, and then look at some more advanced policies. Going to the Kyverno site, if I go to documentation and click on installation, we see there are a bunch of different ways to install. There's a Helm chart, but for this, I want to install from the command line so we can take a look behind the scenes and see what's going on.
I'm just going to copy this command, a single line that pulls down all the YAMLs from the Kyverno Git repo and installs them into my local cluster. If I do kubectl get namespace, you'll see I'm just using Docker for Windows here, and on this I'm going to run the command we copied and have it install Kyverno. We see a bunch of CRDs came in, about six or seven different CRDs, including the policy reports. One interesting thing I want to mention about policy reports is that they are not unique to Kyverno. The policy reports are being developed in collaboration with the Policy Working Group in the Kubernetes community, and the intent is to make these same CRs available to any other tool that wants to report policy results in a Kubernetes cluster. This provides a standard way of reporting policy violations independent of the engine or tool producing them. There's work being done to adapt tools like kube-bench to these policy reports, and Falco is being discussed as another candidate. Once the CRDs are installed, we also see that Kyverno created a service account, plus a few other things, and what's also interesting is that some webhooks were created. So now if we look at our namespaces again, we should see a kyverno namespace, and if I do `kubectl -n kyverno get pods`, we should see a single pod running, which is what we expect at this point. Now, I don't have any policies yet: if I do `kubectl get cpol`, which is short for ClusterPolicy, it says there are no resources. But because the CRD is installed, I can now use something like kubectl explain. Kyverno has both a ClusterPolicy, which is cpol, and Policy, which is a namespaced policy.
And if I say policy spec, it will tell me, because it's integrated with Kubernetes, exactly what I can put into a policy. I see a background option, which is a boolean; a set of rules in the policy, like Ritesh was explaining; and a validationFailureAction, which can be set to either enforce or audit. And so on; it's very straightforward. Of course, if you have Visual Studio Code or other tools integrated with Kubernetes, you get all this help, schema validation, and so on in your IDE too, which makes it easy to write policies and check the syntax. Here I can see the description of rules, but so far I don't have any policies, just Kyverno installed. So let's go get some policies. I'm going to go back to the Kyverno site and click on policies, and I see pod security, best practices, and others. Let's go to pod security. Like Ritesh mentioned, these policies are organized based on the Kubernetes pod security standards. If you're not familiar with those, they're great to review, because they organize the pod security controls into three categories: privileged, which is basically unrestricted; baseline, which is typically used as a default and is minimally restrictive; and restricted, which is the highest level of security. For each level, the standards list every control within the pod securityContext structures and the allowed settings for those controls. Now, the KEP that's being developed, which is most likely going to be called pod isolation policies, is also going to be based on these pod security levels, so it's important to keep the pod security standards in mind.
And you'll be able to apply one of these levels to namespaces with that upcoming KEP, pod isolation policies. Anyway, going back to the Kyverno policies: as you see, they're organized the same way. This is baseline, which got renamed from default, and this is restricted. If I go into baseline, we'll see each policy that's part of it. If I click on one, for example require-default-proc-mount, it checks, for both init containers and containers, that if a procMount is specified, it has to be Default; it cannot be changed from that setting. There are of course several other policies. If we go to restricted and look at some of the stronger, tighter controls, you'll see things like require-run-as-non-root, which is pretty important for non-privileged pods: it makes sure both init containers and containers always run as a non-root user. Let's go back to the main pod security page. What we have over here is a one-line command: it uses Kustomize to pull down the policies from this Git repo, and then it just uses kubectl to apply them. I'm going to take that and see what happens when we apply it to our cluster. It takes a few seconds to pull down all the YAMLs, and once that's done, they get applied into our cluster. We see about 10 or so policies got applied, and if we go back and look at the cluster policies, we see they are now enforced. Now, what's interesting: if you noticed when I was browsing through the policies, most of them are written at the Pod level.
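The require-default-proc-mount check described above uses Kyverno's `=()` conditional anchor, which means "if this field is present, it must equal this value." A rough sketch, simplified from the published sample (which also covers init containers):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-default-proc-mount
spec:
  validationFailureAction: enforce
  rules:
    - name: check-proc-mount
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Changing the default proc mount is not allowed."
        pattern:
          spec:
            containers:
              # =( ) anchor: validate only if the field exists,
              # so pods that omit procMount entirely still pass
              - =(securityContext):
                  =(procMount): Default
```

Without the anchors, the pattern would require every container to explicitly set `procMount: Default`; with them, the rule only constrains containers that set the field at all.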
But as Ritesh mentioned, because Kyverno understands the relationship between pods and pod controllers, it will automatically generate rules for the different pod controllers. Let's take a look at that run-as-non-root policy; and of course, you can control these settings. So I'll do `kubectl get cpol -o yaml`, and to make it more readable I'll use kubectl-neat, which is a really handy plugin for removing things you don't want to see, like all of the managed metadata and owner references, from the YAML. Notice what happened: when I applied this policy, the rule was written to match a Pod, but in addition, Kyverno automatically generated rules for most of the standard pod controllers. And again, this can be tuned: you can specify annotations to restrict it, for example, to only Deployments and not DaemonSets. But here we have all of these generated: this one is for CronJobs, and this other one covers the more standard pod controllers, as we see. Pretty neat that all of this is now installed. So let me try running a simple workload. I think I have an nginx pod, so I'm just going to say `kubectl create -f` and check if it's in my temp directory. Yeah, okay. So immediately, because these policies were set to enforce, Kyverno is saying that running as root is not allowed. It's telling me exactly what got violated: runAsNonRoot must be set to true, both for init containers as well as containers.
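The tuning of auto-generation just mentioned is done with an annotation on the policy. A sketch of restricting the generated rules to Deployments only (check the Kyverno docs for the exact supported values):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
  annotations:
    # restrict rule auto-generation to Deployments only;
    # a value of "none" disables auto-generation entirely
    pod-policies.kyverno.io/autogen-controllers: Deployment
spec:
  rules:
    - name: check-run-as-non-root
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Containers must run as a non-root user."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
```

Without the annotation, a Pod-level rule like this one is expanded to cover the standard pod controllers automatically, which is the behavior shown in the demo.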
Since that was violated in my YAML, this particular configuration was blocked. Now let's try something a little more intrusive. I'm going to use this site, which is pretty handy if you haven't seen it: it's from Bishop Fox and it's called Bad Pods. As the name suggests, these are pods which are misbehaving: pods misconfigured to be wide open, allowing things like access to the host PID, host network, and other host namespaces. Let's go into "everything allowed," which is probably the worst you can do, and take a quick look at the YAML; we'll actually go to the deployment variant just to see how that works. It's setting privileged to true, mounting the host filesystem, which is also a bad thing to do, and setting all of the host namespaces to true. So let's grab the raw YAML and run it in our cluster: I'll do `kubectl create -f` with that whole YAML and see what happens. Now we see a lot more errors coming back from our admission controller, which is what we expect. It's saying don't allow host namespaces, don't use hostPath, don't allow privileged containers. All of this got blocked, along with the other error we saw before about running as non-root. So that's how straightforward and simple it can be to set up these policies. The policies are very flexible; you can tune them, like Ritesh was explaining, with a lot of different selectors. I have some examples in here. This one is actually a different policy; it generates a network policy. But let's look at another policy which can also mutate things.
So yes, this is an example of a mutate policy. It also matches pods, but you can write your selectors based on several constructs, including namespace labels, et cetera. So it's very flexible and very simple to apply these. One other thing I want to quickly show is a feature Ritesh also mentioned: being able to select and use external data. Very often you want to write a policy that leverages data from things like ConfigMaps, which are very natural in Kubernetes. In fact, one of the policies I was just working on with somebody in the community is to make sure workload identities are protected based on the image name. What the author did, which is pretty nice, is use a ConfigMap to ensure that a certain service account can only be used with certain images. They manage the data through a ConfigMap, but the policy itself stays generic. Another common example is making sure your ingress host names are unique within a particular cluster. Kyverno has the ability to use JMESPath and also to make API calls, and this policy is a combination of those two features. Going through the structure of the policy: it matches ingresses, and then there's a context, where you can populate different data into variables. What it's doing is making a call to the API server, using the apiCall construct, getting all the ingresses, and applying a JMESPath expression to extract all of the hosts. Then it checks whether the host from the request object is already included in that list of existing hosts, and if it is, it denies the operation.
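The ingress-uniqueness policy being described combines an apiCall context with a JMESPath expression and a deny rule. A rough sketch, based on Kyverno's documented context and deny syntax; the URL path, variable name, and expressions here are illustrative rather than the exact policy shown on screen:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: unique-ingress-host   # illustrative name
spec:
  validationFailureAction: enforce
  rules:
    - name: check-ingress-host
      match:
        resources:
          kinds:
            - Ingress
      context:
        # Query the API server for all existing ingresses, then use a
        # JMESPath expression to extract the list of host names.
        - name: hosts
          apiCall:
            urlPath: "/apis/networking.k8s.io/v1/ingresses"
            jmesPath: "items[].spec.rules[].host"
      validate:
        message: "The ingress host must be unique across the cluster."
        deny:
          conditions:
            # Deny if any host in the incoming request already exists.
            - key: "{{ request.object.spec.rules[].host }}"
              operator: In
              value: "{{ hosts }}"
```

The `{{ }}` expressions are Kyverno variable substitutions evaluated at admission time, so the check always runs against the cluster's current set of ingresses.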
So it's fairly complex logic, but pretty straightforward once you get the hang of how it's structured. And the interesting thing is that all of this can be tested very easily from the command line with kubectl. If you do kubectl get --raw with the ingress API path, you can see what that response looks like. I don't have any ingresses in this cluster, but you can kind of see the shape of it. And if you want to test the JMESPath expression itself, you can use jp, the JMESPath command-line tool. If I go back to my policy and grab the expression, and of course this would make more sense if I had some ingresses, applying it right now comes back with an empty list, as you see, because there's nothing there. If I had any ingresses with hosts, it would show me a string list of those ingress hosts. And then in my policy, I'm checking to make sure that the host coming from the request object is not already used within the cluster. So this is a simple example that again shows how you can combine these features. The other thing that's very interesting is the set of policies, which I think Ritesh also mentioned, for multi-tenancy, and I can show a few examples of those. What we're seeing is, first of all, that adding labels to namespaces is a very common use case. And there's a session I'm going to be doing along with Adrian Ludwin, who leads the Hierarchical Namespace Controller (HNC), a project that's also being developed in the community. We're doing it at the Cloud Native Security Day at KubeCon EU, where we're going to talk about how Kyverno and HNC can work together to manage namespaces. HNC allows subnamespaces within a namespace.
So you can allow those kinds of controls. Anyway, this policy is pretty straightforward: it says that when a namespace is created, except by cluster admins or by the HNC manager, go ahead and inject the user info. And then you can do more complex things. For example, you could say, maybe I want three sizes of namespaces, small, medium, and large, and based on that, configure different quotas and different settings for my tenants, for my users. All of that can be done fairly easily. You can also generate very fine-grained roles and permissions, so that only the person who requested a namespace, the owner, gets permission to delete it. Things like that can be generated on the fly, and these are Kyverno policies doing it. We'll share this repo once we have it finalized for the session, but this is a quick example of the power of leveraging labels and then automatically generating different configurations for those namespaces. And of course you can also add validation logic to make sure only the right settings are used. In this example, it required that each namespace name carry a certain suffix, something like -sm, or the medium or large equivalents, and if it doesn't, it gets rejected. So you can combine validate, mutate, and generate to get exactly the behaviors you want for your cluster. And once these policies are set, they're very much data driven, so the configurations themselves can be easily automated.
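As a concrete illustration of the small/medium/large idea, a generate rule keyed off a namespace label might look like the sketch below. The label name and quota values are invented for the example; only the overall generate structure follows Kyverno's documented schema:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-namespace-quota   # illustrative name
spec:
  rules:
    - name: quota-for-small-namespaces
      match:
        resources:
          kinds:
            - Namespace
          selector:
            matchLabels:
              size: small          # hypothetical label scheme for tenant sizes
      generate:
        # Create a ResourceQuota inside the newly created namespace.
        kind: ResourceQuota
        name: default-quota
        namespace: "{{request.object.metadata.name}}"
        synchronize: true          # keep the generated resource in sync with the policy
        data:
          spec:
            hard:
              requests.cpu: "2"
              requests.memory: 2Gi
```

Parallel rules matching `size: medium` and `size: large` would carry their own `hard` limits, which is the "templates per label" pattern being described.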
Okay, one last thing I wanted to show before we switch. In Kyverno, as Ritesh mentioned and as we were looking at, there's the ability to create policy reports. One of our community members created something, also an open source project, called Policy Reporter, which takes the policy report and provides a nice graphical UI on top of it. He's also got the ability to push this data to Grafana Loki and Elasticsearch and to create other notifications. It's a really nice example of how you can view this graphically and share it within a single cluster. So if you're looking for a single-cluster tool to visualize these policy reports, and even create notifications when they get created, definitely check out the Policy Reporter project. There's also work in progress, by the way, to take more Kyverno metrics and push them to Prometheus, which could then of course be displayed in various UIs and dashboards like Grafana as well, but Policy Reporter is more focused on the policy report itself. The other work we're starting now is to push engine metrics and engine statistics out as well. All right, let me stop there. I know we have about five minutes left in the hour, so we'd love to take any other questions, thoughts, or comments. And again, there's plenty of documentation, and if you want to reach out to us, feel free to do so on the Kyverno channel on the Kubernetes Slack. Well, one thing I was noticing is that your documentation is awesome, so congrats on that, because I think that's one of the things that drives communities and really helps people adopt. We have a couple of people on the call with questions.
Paul Morie, if you want to take yourself off mute and onto the screen, and Kirsten Newcomer. Yeah, so I heard a lot about pods. I didn't hear whether you can write policies around custom resources. Say I have a custom resource for my own bespoke controller that creates pods, and it contains something that's pretty close to a pod spec. Can I write a Kyverno policy on it? Absolutely, yes. One of the things Kyverno has supported right from the beginning is custom resources. It does help if the custom resource schemas are structural: if they are, Kyverno can also validate the various paths and fields referenced in the policy against that custom resource. If they're not, that validation doesn't occur, but Kyverno will still apply the policies. Okay. And how about if I have an aggregated API? Can I write policies against APIs that another API server might bring in? You can make calls to an aggregated API server through the apiCall construct, for example if you want to pull metrics, things like that. And if there are resource definitions, yes, those can still be looked up, and policies can be applied to them. Okay, all right, thank you. Sure. One question from the chat before Kirsten: Mikey's asking, do you need a CNI to enforce the policies? For network policies, yes. If you're generating a network policy, you will still need a CNI to provide the runtime enforcement, to make sure the right segmentation exists on the network layer. But other policies, like pod security and so on, are applied directly at the API server, and Kyverno itself will block configurations that violate the policies you set. Kirsten, take it away. Sure, thanks. Oh, Mikey had a follow-up, go ahead. Is the mutate policy in cache until validate is run? Yes.
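To make the custom-resource answer concrete, here is a minimal sketch of a policy matching a hypothetical CRD kind. `MyApp` and the `team` label are invented for the example; the match/validate structure is the same one used for built-in kinds:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-custom-resource   # illustrative name
spec:
  validationFailureAction: enforce
  rules:
    - name: require-team-label
      match:
        resources:
          kinds:
            - MyApp                # hypothetical custom resource kind
      validate:
        message: "MyApp resources must carry a team label."
        pattern:
          metadata:
            labels:
              team: "?*"           # wildcard: any non-empty value
```

If the CRD's schema is structural, Kyverno can additionally check that the paths the policy references actually exist in that schema, as mentioned in the answer.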
So there is an ordering in admission control: all of the mutation logic occurs first, and validation occurs at the end, so once the mutations are done, you get a chance to validate. There's also a way to re-invoke admission controllers. Kyverno, by default, when it installs, tries to register so that it's the last admission webhook to receive the request. There are some customers who will use both OPA and Kyverno for various reasons, and in that case you could still run a particular policy and then re-register to receive any mutated results. Great, take it away. Thanks. So I had a question about network policy generation as well. How do you inform the generation of that network policy? Are you using config data to inform that, or is that optional? How do you decide what's going to be in the policy that's generated? Right, so a few things. The trigger to generate the policy could be the creation of a namespace, or the setting of a label or an annotation. One pattern, if you want some variation and you want to control exactly what gets generated, is to use labels and, based on those labels, have templates for different types of network policies. That's one quick way of changing the output. You can also do things like, because Kyverno can look up the namespace and match namespace selectors, you can say that for certain namespaces, if there's an annotation on that namespace, then based on that you trigger different network policies. In that case you would have two different generate rules, one for each. So there are a few different ways of managing those kinds of variations. Great, thank you, that's helpful. And I know we're pretty much right at the top of the hour.
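A namespace-triggered NetworkPolicy generation rule, of the kind discussed in this answer, can be sketched as follows. This mirrors the shape of Kyverno's well-known default-deny sample; treat the names as illustrative:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-network-policy   # illustrative name
spec:
  rules:
    - name: default-deny-ingress
      match:
        resources:
          kinds:
            - Namespace              # trigger: a namespace is created
      generate:
        kind: NetworkPolicy
        name: default-deny-ingress
        namespace: "{{request.object.metadata.name}}"
        synchronize: true
        data:
          spec:
            podSelector: {}          # applies to every pod in the namespace
            policyTypes:
              - Ingress              # with no ingress rules listed, all ingress is denied
```

As noted in the Q&A, Kyverno only generates the NetworkPolicy object; a CNI that implements NetworkPolicy is still required to enforce the segmentation at runtime.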
I actually have two questions, but I'm going to pick one of them. Do you have thoughts on policies, or ways that we as a community might move forward, to apply policies in response to events at runtime? Yeah, great question, and something we have been thinking about. There's been discussion in the policy working group about what other triggers there could be: admission controls and configuration changes are of course one trigger, but it would be very interesting to also have a set of defined runtime triggers. There's also discussion about authorization checks. Somebody in the community was asking about the SubjectAccessReview request that gets sent out from the API server when there's an authorization check, so there could be some more granular policies on that. But to your point, other runtime events could potentially be used as triggers too. It's not supported right now in Kyverno, but we're very open to it and interested in hearing more about the use cases and how that could be standardized. Great, thanks so much. That's it from me. All right, well, thank you both for coming today. It's wonderful to have these upstream talks on Mondays, because I really didn't know that much about Kyverno before this; I was pretty OPA-centric in my understanding of policy management. So it's really helpful to have the context set, and I think everybody appreciates that, as well as the awesome deep dive and demo from Jim today. So, you're definitely in the sandbox now. Yes. The road ahead: more production users, more people using it, more feedback, and hopefully incubation and graduation sometime soon. Absolutely. And we'll definitely get you back on as it matures and we get further down the path. And congratulations on getting into the sandbox and getting it to this state.
Again, if you're interested in this topic, these folks are pretty active in some of the SIGs and in the CNCF, and as I mentioned earlier, the documentation is rock solid, so good on that. As a community person, that's one of my key tests of whether we can keep something going forward. And definitely when you have a new release, new features, new functions, reach out to us, Ritesh and Jim, and we'll have you back. So good work, good luck, and thank you. Thanks again for sharing the information with us today.