Okay, welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Mohammad Shariar, almost like Amitabh, and I'm a CNCF Ambassador, so I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They build things, they break things, and they answer your questions. In today's session, I'm stoked to introduce Jim and Chip, who will be presenting on Kubernetes Policy as Code with Kyverno. This is an official livestream of the CNCF and, as such, is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of the Code of Conduct. Basically, please be respectful to all of your fellow participants and presenters. With that, I will hand it over to Jim to kick off today's presentation. Let me add Jim to the screen. So, Jim, how are you? Very good. Thank you, Shariar, and thanks everyone for joining. Welcome to this session. You can start whenever you're ready. Awesome. Today we're going to talk about Policy as Code with Kyverno. We'll start with a couple of quick slides for background, and then dive into hands-on demos to showcase some of the details and best practices within Kyverno. Some of you may know that Kyverno is a CNCF incubating project. It is a policy engine designed for Kubernetes. What we want to do is focus on how policies get used in the real world, and we'll talk a little bit about why policies are even necessary. Then we'll dive into showing how you can use a toolkit like Kyverno not just within your clusters at runtime, but also as part of your delivery process in your CI/CD pipeline.
As you're doing continuous delivery, or leveraging infrastructure as code with projects like Terraform or Crossplane, we'll look at how you can test things with the Kyverno CLI. We'll also show a slightly more advanced use case: applying validating admission policies, a newer feature in current versions of Kubernetes, with the Kyverno CLI. After that, Chip is going to deep dive into some more advanced use cases of Kyverno, showing how you can combine the flexibility and power of Kyverno across different types of rules in an end-to-end working example. Then we'll wrap up with a little bit about what's coming in Kyverno 1.11 and some of the upcoming releases. So let's talk quickly about why policies, and especially why policies for Kubernetes. If you think about it, Kubernetes is perhaps one of the first platforms that brings together different roles, like developers, security, and operations, within an enterprise. When you are deploying something to Kubernetes, typically it's as a resource manifest, and these manifests contain a lot of different details. Different folks in an organization are interested in different aspects of those manifests. Developers care about things like what image is used and how many instances of their application run. Operations may care about how that pod or deployment gets placed on different nodes, what type of cluster it runs in, what other backing services are available, and to some extent how it is secured. And of course, security wants to know that everything is safe and sound: the pods are configured correctly, there's no chance of a container escape, things of that nature. So policy serves as a contract which brings all of this together.
Now, of course, you could write a policy in a document, review it, and try to enforce it manually. But once you move to policy as code, that contract becomes a digital contract, which you can apply automation and best-practice tools to, enforcing it across your environments and automating at scale. With cloud native and Kubernetes, we have a toolbox where you can start applying policy as code at various points in the lifecycle of your applications, your workloads, your clusters, and your infrastructure. That becomes extremely powerful, bringing together all of these different concerns within an enterprise. So Kyverno, as I mentioned briefly earlier, is a policy engine designed for Kubernetes. There's no programming language to learn to use Kyverno policies, and we'll take a look at some examples. There are several different capabilities in Kyverno. The most common, of course, is validating different configurations and behaviors that you might want in your clusters. There's also the ability to mutate resources. This is important because not everything can live in Git. Many of you might be familiar with GitOps, and we want our resource state to stay in Git as much as possible, but there are things that have to be dynamically changed. Much like a pod controller can launch a pod, mutating policies act as in-place updates for specific settings, whether for security, operations, or other reasons, as you'll see in some of the demos. And this works very well with GitOps, because the state that you store in Git can be different.
Through things like server-side apply, mutating admission controllers can change specific settings, and you can signal to your GitOps controller not to revert or interfere with those settings. There are also generate policies, which allow you to generate brand-new resources. This is another extremely powerful capability: based on a trigger, like a namespace creation or a service getting deployed, you can generate supporting resources for that particular application, workload, or other construct you're working with. Kyverno also has built-in support for image verification, so you can validate image signatures, which is becoming extremely important for software supply chain security, making sure that the images deployed to your cluster are actually valid images and not something that a malicious actor or some random user is trying to deploy onto your cluster. And finally, one of the newer features is cleanup policies: you can now use Kyverno, through policy settings, to automatically clean up resources, whether for cost management, tying into your overall FinOps strategy, or other reasons. So a lot of different capabilities, and we'll look at quite a few of these live as we go through the demos. The picture here shows how Kyverno operates in webhook mode. Every operation in Kubernetes goes through the API server. The API server will first authenticate where that request is coming from, and check whether the user is authorized to perform that request. After that, the request goes through mutating as well as validating admission control, and that's where Kyverno kicks in.
So it receives every request from the API server, based on your configured policy set, and can validate, mutate, or generate resources based on things in the API request itself. There are lots of different use cases that Kyverno gets used for, and once you take this mindset and look at what you can secure and automate through Kyverno, more and more use cases keep emerging. Starting with the basics: pod security. A lot of you might know that as of Kubernetes 1.25, pod security policies are no longer supported; they're removed completely from Kubernetes. You need to replace them with either the built-in Pod Security Admission, which provides some limited controls at a namespace level, or an admission controller like Kyverno that can enforce a full set of pod security policies, and we'll take a look at some of that. You can also secure other things in your workloads, set up granular RBAC, and even generate permissions on the fly as things are happening, as namespaces are getting requested, and so on. Segmentation and isolation of workloads is extremely important, and there are other use cases as well. Similarly for operations, cost management, and resource management, there are a lot of different things you can do with Kyverno. And because of the ease of use of Kyverno, what we typically see is that once teams get started, they end up with about 50 to 80 different policies within their clusters across these concerns. All right, so let's switch to policy as code, and then we'll dive into some demos and look at this live. You might have a few questions by now. Okay, yeah, go ahead, please. Shariar, would you like to read them out?
Okay, so the first question was: do you have resources on how Kyverno fits into policy as code beyond just Kubernetes? So, if I understood correctly, the question is how Kyverno fits into policy as code beyond just Kubernetes, right? That's a great question. As I mentioned initially, Kyverno was built for Kubernetes, but Kubernetes itself is being used for several different things. Kubernetes gets used for IaC; Kubernetes gets used for CI/CD with projects like Tekton; with projects like Crossplane, you can even provision cloud infrastructure from Kubernetes. Kyverno can apply policies to all of that very well. Now, if you have use cases like Terraform, there are some exciting features coming up, which we'll touch on when we cover the roadmap, where you will be able to apply Kyverno policies to other JSON configs and data that you pull from these types of external systems. That will further broaden the use cases for Kyverno. Okay, another question coming up: "We recently had to rip out Kyverno from a cluster because it was adding about 500 ms of latency per pod creation request. Any plans to work on optimizing the creation path?" Absolutely. I'm not sure what version that test was done with, but with every release we keep improving and reducing the latency, and it depends a lot on the policy set configured. If you have things like wildcard match rules, every request will go to Kyverno. If you have a precise policy set based on your match/exclude declarations, Kyverno will dynamically configure its webhook configurations to only receive those requests.
So there are a lot of things you can do to fine-tune that. With release 1.10, which is our latest release, we've also split some of the background controllers and tasks out of the admission controller, which further helps with processing time and keeps the latency low. Typically, what we measure for is that it should take only a few milliseconds, and feel free to reach out on the Kyverno Slack channels in the community. We have test results which we benchmark with every release, so we can discuss further, see if it's reproducible in the latest release, and go over some of the best practices for tuning. Okay, we have two more questions. Should we take those questions right now or after you start the presentation? Either way; if they're relevant to what we're discussing, we can take them now, or wait for later. Yeah, I guess waiting for later would be better, because they're more like personal queries. Okay. So, going back to policy as code. This term obviously gets used quite a bit, but to explain what we want to look for: anything "as code" means applying software development and coding best practices to that domain. So when we talk about policy as code, the question is how we treat policy artifacts, definitions, and declarations as much as possible as code: doing things like version control and code reviews, and using the same tools we use for coding, like our IDEs and Git, to incorporate and work with policies in that same manner. There are three things that are really important, and obviously others as well.
The three important things are: version control, so that you're able to store these artifacts and do things like diffs and code reviews; being able to test, and we'll take a look specifically at that; and automating how you apply policies in your pipeline, and we'll see how the Kyverno CLI helps with this. So let me switch over to my shell. First, to quickly show my setup: I have a minikube cluster here where I've already installed Kyverno. You can install Kyverno either through Helm charts or through plain YAML files. The command I ran to install Kyverno basically just downloads a release YAML and applies it, so it's super simple. And because I've installed Kyverno, if I look at what's running, I have four deployments: the admission controller, a background controller, a cleanup controller, and a reports controller, which manages the reporting subsystem. All of these, by the way, are tunable: you can scale them separately, size them, or turn some of them off if you don't require them. The first thing I'll quickly show, for folks who might not have ever seen Kyverno in operation: in my cluster right now, everything's open, so I can run any pod, whether it's secure or not. And of course, if I just run nginx, it's insecure by default. So now I want to apply some policies, but the policies I apply are coming from a Git repo, which could be my own private Git repo, and I want them applied as my cluster comes up.
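For reference, the install-and-verify step being described looks roughly like this; the release version in the URL is illustrative, so check the Kyverno releases page for a current one:

```shell
# Apply a pinned Kyverno release manifest (version shown is illustrative)
kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.10.0/install.yaml

# Confirm the four controller deployments are up in the kyverno namespace
kubectl get deployments -n kyverno
```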
Of course, as you're automating this, you would use Argo CD or Flux or some other way of deploying and automating it. But as you can see, it's extremely easy to spin up even a test cluster and apply policies from one or more Git repos. Now if I check what's running on my cluster with kubectl get cpol (cpol is short for ClusterPolicy), I see I have a number of pod security policies, because I pointed to a Git repo, and I'll show what this Git repo has. I'm looking at pod security policies in enforce mode, and I'll talk a little bit about how we got to this enforce mode. So now if I try to run that same pod again, which we know was insecure, Kyverno immediately blocks it. It also sends events that you'll see in your cluster: if I get the events, I can see that Kyverno has emitted policy violation events for whoever's listening to them. And going back to the command output, it also shows me, as the user, why this pod was not allowed. So here what I did was apply pod security policies, and I brought them from a particular repository, as you saw. This is the Git repo: Kyverno maintains a set of best-practice policies, including pod security policies. On our website these are rendered if you go to the Policies section, and there are almost 300 policy samples there. The ones I specifically pulled were the pod security ones, in this case in enforce mode. So let me show you in my IDE what exactly that did. I'm going to pull up the Kyverno policies here, and I'll come back to this.
When I pulled in enforce mode, what it actually did (and this is a local change I have, but it mirrors upstream) is use Kustomize, applying a patch that sets the validation failure action. Every Kyverno policy can run in Audit or Enforce mode, and I want to set that to Enforce in this case, because I want to block anything that is insecure. Now, you can further customize that. You can say: I want these policies to block by default, but I want to override that for certain namespaces. Say, for my CNI, I want to run in Audit mode, because I'm running other security checks and I don't want that workload to be blocked; obviously, you don't want the CNI to be blocked if you're deploying new nodes, things like that. Those types of customizations are now super easy to make. And if I have local customizations, if I go into my policy repo and run kustomize build, just to show you how easy it is, I now get this policy with the two patches I wanted, rather than just the one set upstream. In this case, oh, I didn't save the file, which is why the namespace didn't show up. So let's run that again, to show that the Enforce-mode policy now also has this particular override patch applied, as you'd expect. And this is just using Kustomize.
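A minimal sketch of that kind of Kustomize overlay might look like the following; the repo path and the kube-system namespace are illustrative assumptions, while validationFailureAction and validationFailureActionOverrides are the actual Kyverno spec fields being discussed:

```yaml
# kustomization.yaml - pull in upstream policies, patch them to Enforce,
# and add a per-namespace Audit override for the CNI's namespace
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - https://github.com/kyverno/policies/pod-security
patches:
  - target:
      kind: ClusterPolicy
    patch: |-
      - op: add
        path: /spec/validationFailureAction
        value: Enforce
      - op: add
        path: /spec/validationFailureActionOverrides
        value:
          - action: Audit
            namespaces:
              - kube-system   # e.g. where the CNI runs
```

Running `kustomize build .` renders every ClusterPolicy with both patches applied, which can then be piped to kubectl or to the Kyverno CLI.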
Now, I'll also show how you can run some of this with the Kyverno CLI: you could pipe the Kustomize output to the CLI and apply it, like we looked at in the first example, or you could just run it locally to test it. The other thing I want to show is testing. For every policy we provide with Kyverno, there's an extensive set of tests, and you can of course write tests for your own policies, which is another policy-as-code best practice. Take the particular one I'm running here, require-run-as-nonroot: here's the test file. In this test file, all I need to say is which policy I want to apply to which resource, and what the expected result is: pass, fail, skip, or something else. You can have as many policies, resources, and so on in your test definitions as you like, so it's very easy to write these. So now if I run this test command, I'm running the Kyverno CLI with the test subcommand against the pod security restricted policies, because maybe I'm tuning those and want to run some tests. In milliseconds it applied about 75 different tests and shows me that everything's passing. Now let's make something fail. Say, because of a bug in my code, instead of the expected fail result I declare that this test case should pass. If I run the tests again, I would expect it to say: this test case failed, because in this one scenario the result Kyverno returned was a policy failure instead of a policy pass. So it's super simple to start writing these tests and to manage them at scale, which allows you to do some very interesting things.
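A test file for a policy like require-run-as-nonroot looks roughly like this; the file and resource names are illustrative, and the exact schema varies slightly between CLI versions:

```yaml
# kyverno-test.yaml - declares which policies apply to which resources,
# and what result the Kyverno CLI should expect for each combination
name: require-run-as-nonroot
policies:
  - require-run-as-nonroot.yaml
resources:
  - resource.yaml
results:
  - policy: require-run-as-nonroot
    rule: run-as-non-root
    kind: Pod
    resources:
      - badpod
    result: fail   # flip to "pass" to reproduce the failing-test demo
```

Running `kyverno test .` in the directory evaluates every declared combination and reports any mismatch between the expected and actual results.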
The last thing I want to show in a demo, before we take more questions and I hand off to Chip, is how the Kyverno CLI is adapting to newer features coming in Kubernetes. As some of you may know, Kubernetes 1.26 first introduced, and 1.28 promoted to beta, a new construct called a ValidatingAdmissionPolicy. This allows writing simple validation checks and running them directly in the API server, using a language called CEL. By the way, notice that I'm in my IDE; I mentioned that if you want to adopt policy as code, you want to use developer tools like IDEs, and I have built-in help for Kyverno here. Because Kyverno policies are custom resources, and I have my Kubernetes plugin, this help is automatically pulled up and shown to me as I work in my IDE. But here, what this expression written in CEL is doing is checking and making sure that my pod cannot run with the latest tag. Very simple. And obviously you could do this in an admission webhook, but why not run it in the API server if possible? So Kyverno supports that. What I'm about to show is applying this ValidatingAdmissionPolicy, written with CEL expressions, to a resource; I have a Deployment resource in the same folder, and I can test this in my CI/CD pipeline, or wherever I want to. And what the Kyverno CLI did as it applied the policy is produce a policy report, which, by the way, follows the open Policy Report API format that the Policy Working Group, a CNCF working group, has created.
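A ValidatingAdmissionPolicy of the kind being demonstrated might look like this sketch; the policy name and message are illustrative, and the CEL expression checks every container image in a Deployment for the latest tag:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1   # beta as of Kubernetes 1.28
kind: ValidatingAdmissionPolicy
metadata:
  name: disallow-latest-tag
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # CEL: reject if any container image ends with the ':latest' tag
    - expression: "object.spec.template.spec.containers.all(c, !c.image.endsWith(':latest'))"
      message: "An image tag of 'latest' is not allowed."
```

The Kyverno CLI can evaluate this offline against a manifest, along the lines of `kyverno apply policy.yaml --resource deployment.yaml --policy-report`, producing a Policy Report without touching a cluster.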
That Policy Report API is being proposed for promotion to a SIG-level API. The standard report it produces immediately tells the user applying this that the latest tag is not allowed. So again, very simple, and we have hundreds of these sample policies, but hopefully this gives you a quick idea of how powerful Kyverno can be; here I focused on using it with the CLI in a CI/CD pipeline. With that, let me stop sharing, and we'll see if there are any other questions before I hand off to Chip, who will dive into other use cases. Okay, great. That was awesome, actually. There are some questions, but I guess they're not related to our session; basically, they're personal queries. So I guess what they should do is contact you through Slack, right? Absolutely. If you go to kyverno.io, under the Community link right on top, we have Slack channels on both the CNCF workspace and the Kubernetes workspace, and we're pretty active there. Feel free to reach out, or of course create a GitHub issue if required. Awesome. Okay, so with that, let's add Chip to the stream. Hey, Chip. Hello, everyone. Glad to be here. Thanks for bringing me in, and thanks, Jim, for the intro. So, what I want to talk about in this section: Jim did a nice overview of policy as code as it pertains to Kyverno, what it actually means, how you can go about doing it, even showing some newer capabilities that will be released in 1.11. We'll come back to that point, but I want to flip over and talk about some of the use cases Kyverno is able to accomplish by stitching these capabilities together.
Jim showed in a previous slide that Kyverno really has five main capabilities: validate, mutate, generate, verification of images, and now also cleanup. But you can combine those to accomplish some really interesting use cases, and we have some recent ones we created that we wanted to share with you, just to give you an idea of what's possible with policy as code through Kyverno. The first is this concept of expiring policy exceptions. Let me step back a little bit and talk about policy exceptions. I didn't plan to specifically demo them, but a policy exception is a construct in Kyverno. Let me show what one looks like: a PolicyException is a specific custom resource supported in Kyverno that allows Kyverno to exempt a resource from a policy. The general use case, particularly with validation policies, is that you have a validation rule and you want to exempt some resource from it. Imagine it's the image policy Jim showed a moment ago, and you have a deployment that needs to run with an image tag of latest. It's probably not super likely your use case is specifically that, but maybe it's something related. Rather than modifying the policy to say "I want to allow a deployment named foo to circumvent the policy," you don't want to touch the policy at all. Using this PolicyException custom resource, you can create a decoupled way to allow one or more resources to be exempt from a policy. In this case, I want to exempt a specific ConfigMap, by name, from a policy called require-labels that has a rule called check-for-labels, and I'm doing this in a separate PolicyException rather than modifying the policy.
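The PolicyException being described might be sketched like this; the resource names and namespace are illustrative, and note that the API version has moved between releases (v2alpha1 in 1.9, v2beta1 in later versions):

```yaml
apiVersion: kyverno.io/v2beta1
kind: PolicyException
metadata:
  name: important-tool-exception
  namespace: delta
spec:
  # Which policy/rule combinations this exception applies to
  exceptions:
    - policyName: require-labels
      ruleNames:
        - check-for-labels
  # Which resources are exempted from those rules
  match:
    any:
      - resources:
          kinds:
            - ConfigMap
          names:
            - important-tool
```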
I could modify the policy to do this, but a PolicyException gives you a decoupled way to let your developers and other users of policy as code bypass policies only in the ways you want. This works just fine and was released in Kyverno 1.9; there's a lot more information out there on policy exceptions, and you can go read the documentation. It's a pretty cool feature, but one thing we heard, and wanted to do something about, was the notion of having policy exceptions live only for a short amount of time. The idea with policy exceptions is that as long as the exception exists, whatever it defines will continue to be honored, and in some cases that's not ideal. For example, going back to that policy exception: let's say I want to allow a resource to be exempt, but only for a day, or a week, or whatever the case might be. So that policy exception needs to go away after that point in time. We can actually do that in Kyverno using Kyverno policies, coupled with a new angle on the cleanup capability that's coming in the next version. So you're getting a sneak preview, in addition to the things Jim showed around validating admission policies; this is something we're working on for 1.11. What we're trying to achieve is that when a user creates a policy exception, Kyverno can signal that the exception should go away after a period of time. One of the new enhancements in 1.11 is the ability for users, Kyverno, or really any other tool to simply apply a label to a resource with a time-to-live (TTL) value. This is a specific Kyverno label that it watches for, and two formats of the value are supported: either a time-to-live duration or an absolute time.
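As a sketch, the two value formats might look like this on an arbitrary resource; the ConfigMap itself is just an example, while cleanup.kyverno.io/ttl is the label being introduced in 1.11:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scratch-data
  labels:
    # Relative form: remove this resource 2 minutes after the label appears
    cleanup.kyverno.io/ttl: 2m
    # Alternatively, an absolute timestamp can be used instead, e.g.:
    # cleanup.kyverno.io/ttl: 2023-10-31T150000Z
```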
So you can apply a label to any resource, and Kyverno will track that resource and remove just that specific one after that amount of time has elapsed, or at the designated time, whichever you specify. And we can use a Kyverno mutating policy to assign this label. So we're going to use two things here: we're going to watch for a PolicyException, the Kyverno custom resource I mentioned, and we're going to use a Kyverno mutate policy, and actually a validate policy as well. We'll combine those to get this expiring-policy-exception use case. The flow is: once we have the policy installed that validates and then mutates the policy exception, we watch for a policy exception to come inbound. Kyverno sees it and assigns the label for Kyverno's use later on, and after the label has been assigned (with, just for demo purposes, something like a one-minute timeout), Kyverno automatically removes that resource. So let's flip over and show that. The first thing we need is for Kyverno to have permission to remove the resources you want it to remove. Kyverno ships following the principle of least privilege, so if you want to allow it to remove resources, you need to grant it some permissions. For those familiar with cleanup policies in Kyverno, with this new enhancement coming in 1.11 we need a new verb, watch, because Kyverno needs to watch resources carrying that label so it can remove them later. I've already created this; it grants the permissions to Kyverno via labels, which, by the way, is another nice thing about Kyverno: you don't need to modify existing cluster roles, you can just create your own cluster role to augment the permissions that Kyverno has.
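The aggregated ClusterRole being described might look roughly like this; the role name is illustrative, and the aggregation labels shown are an assumption about what the Kyverno cleanup controller selects on (the exact label set can vary by chart version):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kyverno-cleanup-policyexceptions
  labels:
    # Aggregation labels so the Kyverno cleanup controller picks up these rules
    app.kubernetes.io/component: cleanup-controller
    app.kubernetes.io/instance: kyverno
    app.kubernetes.io/part-of: kyverno
rules:
  - apiGroups: ["kyverno.io"]
    resources: ["policyexceptions"]
    verbs: ["get", "list", "watch", "delete"]
```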
Because Kyverno supports cluster role aggregation, as long as the cluster role has the correct labels, Kyverno automatically gets those permissions. This is super nice: I don't have to go and modify any existing Kyverno resources, I just add my own cluster role. So I've already created this, and it entitles the Kyverno cleanup controller to remove policy exceptions via these verbs. Now let's take a look at the policy. The first thing I'm doing is a validation check, matching on PolicyExceptions, because I want to apply guardrails as an administrator who is responsible for allowing users to submit policy exceptions. When they submit a policy exception, I want to make sure it's only for a single policy, because a policy exception could exempt a whole number of policies with a whole number of rules; it can be very, very wide. You may not want that in a lot of cases; you may want to scope it down so it's more fine-grained, so as not to allow someone to counteract an entire policy set. So I'm enforcing that a policy exception may only cover a single policy, and I'm doing that, as I'm highlighting here, with a validate rule. This is one policy with multiple rules in it: there's the validation rule, and then here is where we assign this new label that's coming in Kyverno 1.11.
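A sketch of the two-rule ClusterPolicy being described might look like the following, under the assumption that Kyverno's resource filters allow it to match PolicyExceptions; the policy name, message text, and one-minute TTL are illustrative:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: polex-guardrails
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    # Guardrail: a PolicyException may exempt only a single policy
    - name: single-policy-only
      match:
        any:
          - resources:
              kinds:
                - PolicyException
      validate:
        message: "A PolicyException may list only one policy."
        deny:
          conditions:
            any:
              - key: "{{ length(request.object.spec.exceptions) }}"
                operator: GreaterThan
                value: 1
    # Mutation: stamp the cleanup TTL label so the exception expires
    - name: add-ttl-label
      match:
        any:
          - resources:
              kinds:
                - PolicyException
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              cleanup.kyverno.io/ttl: 1m
```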
We now have a reserved label called cleanup.kyverno.io/ttl, and the value I'm assigning, again just for demo purposes, is one minute. What this is going to do is watch for PolicyExceptions, and if it sees one, it's going to mutate it and add this new label with that value of one minute. So the net effect, going back to my diagram, is: if the user submits a PolicyException, Kyverno is first going to do the validation check and make sure everything is copacetic, and then it's going to add the label before persisting it, so when the PolicyException is persisted, it will have this expiration on it. To actually see this in action, I already have a ClusterPolicy here, and it has both of the rules that match on the PolicyException. Now here's a PolicyException that I'm going to create, and as you can see, it's only for one single policy, so it will pass the validation check. What I expect to happen is that once I create this, Kyverno is going to label it with the cleanup label and value, without my having to do anything else, including, in case it wasn't clear, having to write or use an existing cleanup policy. We're not using a cleanup policy, which is a capability of earlier versions of Kyverno; we simply label the resource so that we don't need a policy to match on it. So let's try this. I'm going to apply the PolicyException, and we saw that it was created, so let's get it back. Okay, and we see here that Kyverno did assign the label; it got it automatically based on the policy we defined. So what we expect is that after one minute this PolicyException will automatically get deleted. If we get PolicyExceptions right now, it's there with an age of 30 seconds, and if we wait just a little bit longer, Kyverno will automatically, behind the scenes, because it now has permission to remove only
PolicyExceptions, for now. Once that expiration has been reached, it will remove this PolicyException. So you can imagine that right now users are able to circumvent the policy named in the PolicyException, which is only for a ConfigMap named chip, very original, I understand, but such that it is. They can submit any number of those that they want and it will bypass the policy, but once this PolicyException is gone, they can no longer do that. So if I get the PolicyException back now, hopefully I've been talking long enough that it has been removed, and we can see it has in fact been removed. I didn't need to do anything else; I didn't need any other cleanup policy. And let's just show that I only have one ClusterPolicy, and it's the one that I just created. Which, by the way, if you didn't know, there's this kubectl get kyverno command that will give you all of the Kyverno resources, very convenient if you want to see everything that Kyverno has created. Let me just pause real quick here to address a question that I saw, but I'm not sure I understand it; if you can maybe clarify. I guess you can skip that, maybe some sort of spamming. Okay, skip that. Well, if you do have a question around what we're presenting here, glad to take that. But that's the first use case: expiring policy exceptions. And of course all of this is optional. If you didn't want to use the expiration, if you wanted to use just a PolicyException, any of these pieces are completely optional and modular. If you're only interested in validation, you can of course only do validation; you don't have to do any of the other things. But the point that we want to get across is that with policy as code with Kyverno, because Kyverno has so many of these capabilities, you can weave them together to do some really
amazing use cases that, quite frankly, really no other policy engine can do, and that can help you not only with the security aspect, which many people exclusively associate with policy engines, but also with operations and automation, which can be really helpful to you as cluster administrators and operators in your day job. So that is expiring policy exceptions. The second use case I want to show, which involves even more of these capabilities coming together in a full-blown system, is a concept I created: a one-time passcode system for Kyverno. And rather than showing a very complex diagram, I'm showing another complex diagram, which is just a state diagram. What I wanted to create with this one, just to illustrate the art of the possible: rather than Kyverno blocking everything, or rather than users having to explicitly create a PolicyException, what if you could use Kyverno to give somebody a one-time passcode that they can use to allow just that one resource to circumvent policy, but never again? I wanted to see if we could do something like that with Kyverno by combining multiple of these abilities, and in fact it's something we can do. So let's walk through it. The idea is that somebody creates a bad deployment. Maybe you have a policy that says containers must not run as root, which is a fairly common one, part of the Pod Security Standards, but you want to allow users to circumvent it based on a one-time passcode. And by the way, this whole system does not need any external dependencies; there are no other applications, internal or external, required. This is all driven through Kyverno. So the flow is: the user creates a bad deployment, and Kyverno is going to check based
on a policy: yep, that's a bad deployment, so we'll deny it, but here's a one-time passcode that you can use, if you want to, to circumvent the policy just this one time. The user will be able to see in the denial message, and of course I'll demo this so it becomes real in just a moment: all right, my request was denied, oh, there's a one-time passcode, okay, let me add that one-time passcode to my deployment in the form of a label and resubmit. Kyverno sees that the deployment now has the label and checks whether the code is valid. If the code is valid, it allows the deployment, and of course the user sees: great, my deployment, which was previously getting blocked, is now allowed. Now let me see if I can circumvent it again and reuse the same passcode. So they'll delete the deployment and try to create it with the same passcode, and Kyverno will see that the passcode was already used, and block it. One other thing we can layer on top of this is a quota system. Let's say you wanted all this, but also a quota on a per-user basis: maybe I am allowed only three passcodes a month, but somebody else has no quota, or maybe they have one. We'll see that in just a second in the demo, so let me flip over to that. I'm going to switch clusters here, and for this one we're actually going to combine a bunch of different policies. I'm not going to bore everybody and walk through these policies end to end, because there are several rules and several pieces of complexity, but I do want to show a couple of things. First of all, we're using multiple validation rules, and the bulk of the policy here is just a standard Pod Security Standards policy that we've retrofitted to
use this one-time passcode system. You can see here that this is the one requiring that host namespaces are not used: any of these values, if specified, must be set to false. For this test I'm using a, quote, bad deployment, and it's bad because we're setting hostIPC to true. Normally this would be rejected by the policy, but we want to use a one-time passcode to circumvent it. To drive this system we have a ConfigMap called otp, which serves as basically the config file, if you will, for the whole thing. It has a couple of things. One, it has user names with a quota associated with them: I'm chip, and I have a quota of five one-time passcodes I could use; I've got another user, mark, who's only got one. We're going to do this as mark so that we don't have to burn through five. And then we've got the passcodes here: these are all passcodes that have been used before, and since they've been used, it records when they were used and the user who consumed them. So this really lets you have a config file for the passcode system, but also use it as an auditing system later down the line, if you want to see what all your codes were, when they were consumed, and who consumed them. So let's try this out. I've got all the Kyverno components already installed, all the policies, and also a cleanup policy that's going to reset the quota system for us. Let's see what happens. I'm going to try to submit this bad deployment, and for right now I'm not using the otp label; I'm just going to submit it as is. First let's show the context I have. I am currently using this context; let's switch over and use mark. As I showed just a minute ago, mark has a quota of one, and we're going to use it. Just to prove who I am: I am mark, and remember, mark only has a quota of one, so mark can still use the quota system. So let's apply this
bad deployment. Again, the bad deployment uses a host namespace, and our policy, which is in Enforce mode, says you're not allowed to do that, but the policy is also subject to the one-time passcode system. By the way, all of this is in a blog post, and we can provide the link later so you can read it at your leisure and try out the policies; we've got all of it documented. So let's try to apply it, and as you can see: no, the policy and the rule said sharing the host namespaces is disallowed, but it also says that to get around this you may use a one-time passcode, and there's our one-time passcode, randomly generated upon every bad resource. Let's pick that up and try it. I'm going to put this as the value of a label, so we now have our one-time passcode in here, and I can apply this. Cool, as you see, the deployment was allowed because the one-time passcode was valid, and it was consumed. Let's go back and check that ConfigMap that had everything in it and see what it did. You see here the one-time passcode, and because I'm mark and I just used it, it recorded when the code was consumed and my username, and it also debited the last credit mark had, taking his quota to zero. So two things happened. Now let me delete this, and if I try to reapply the same bad deployment, keeping in mind that I'm still mark, I shouldn't be able to. And it tells me the code is invalid or has already been used, so: denied. Fine, let's see if I can circumvent this, get another one-time passcode, and disregard my quota. So I have a new one-time passcode here, and you can see it's a different code that was automatically generated. All of this is generated and managed entirely by Kyverno; as I said, there are no external dependencies involved, this is all self-driving. So we'll put this new one-time passcode in and once again try to apply, knowing that mark's quota is now zero, and
you can see here that even though the code was valid, it's denied, because the quota has been exhausted and you'll need to contact a platform administrator to either increase the quota or, like I showed earlier, use a PolicyException to get around it. So that's the gist of the one-time passcode system. Again, some of this is just demonstrating the art of the possible. It's probably not likely that you're going to use this in production, I totally understand that, but it does illustrate the power of something like Kyverno using policy as code. I'll just quickly flip through some of the policies here. You can see I've got a number of validation rules, in addition to a mutate, and actually this is a mutate-existing policy, which is a good thing to point out. One of the capabilities I'm using here that's slightly different from the first use case is mutating an existing resource, not a new one. Kyverno is the only policy engine, at least of which I'm aware, that has the ability to change existing resources in the cluster based upon another admission event, so that's very powerful stuff you get with mutation for existing resources. And here's another one that's resetting the quota. So the gist of this is: policy as code can be extremely powerful and can unlock a lot of capabilities, far more than just security. With Kyverno and its unique capabilities you can weave many of these things together into some pretty cool use cases, even like this one-time passcode system; however impractical it may be, it demonstrates that you can do it, and you can do all of it using Kyverno without any other external dependencies. So that's it for the use cases. Let me flip back over and close out with some upcoming things in the next version of Kyverno. Let me first pause and see if there are any questions. I don't think there are any questions. Cool, okay.
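The mutate-existing piece just mentioned, patching the otp ConfigMap when a passcode is consumed, might be sketched like this. Everything here is an assumption for illustration: the ConfigMap layout, the label name, the namespace, and the data-key format are mine, not the exact demo artifacts.

```yaml
# Illustrative sketch of a "mutate existing" rule: when an allowed deployment
# carries an otp label, patch the existing otp ConfigMap to mark the code used.
# All names, keys, and variables here are assumptions for illustration.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: consume-otp
spec:
  rules:
    - name: record-code-use
      match:
        any:
          - resources:
              kinds:
                - Deployment
      preconditions:
        all:
          # Only act when the deployment actually carries an otp label.
          - key: "{{ request.object.metadata.labels.otp || '' }}"
            operator: NotEquals
            value: ""
      mutate:
        # "targets" makes this a mutate-existing rule: it patches a resource
        # already in the cluster, triggered by this admission event.
        targets:
          - apiVersion: v1
            kind: ConfigMap
            name: otp
            namespace: kyverno
        patchStrategicMerge:
          data:
            # Record when the code was consumed and by whom (illustrative format).
            "code.{{ request.object.metadata.labels.otp }}": "used={{ time_now_utc() }};by={{ request.userInfo.username }}"
```

A companion rule would debit the user's quota key in the same ConfigMap, and the cleanup policy mentioned above would periodically reset those quotas.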
So that is it for the Kyverno use cases. Feel free to drop a question if you have any about those; otherwise, like Jim mentioned, there are many places to come and talk to us. But what's coming in 1.11? Well, you saw two sneak peeks here: using the Kyverno CLI to test ValidatingAdmissionPolicies, which, by the way, isn't something that Kubernetes has an answer for, which is the reason we put it into the Kyverno CLI, and the new cleanup label I showed. Both of these are features coming in 1.11. To expand on that, Kyverno 1.11 is going to have more extensive support for ValidatingAdmissionPolicy, and specifically there are four points of intersection we're looking at. The first is what Jim demonstrated: the ability to test ValidatingAdmissionPolicies with the Kyverno CLI, so you don't need to use another CLI; in fact, Kubernetes won't have one. This lets you test both ValidatingAdmissionPolicies and Kyverno policies using the same CLI, without switching between multiple tools. The second is the ability to generate policy reports from ValidatingAdmissionPolicies. This is a really big one. We didn't go into a lot of detail on this, but Kyverno creates what are called policy reports, which are another custom resource in the cluster. Their main value is that they're decoupled from policies, so you don't have to worry about entitling users to policies or to the Kubernetes audit log just to see how their resources are doing; policy reports are a separate custom resource. In Kyverno 1.11, in addition to continuing to generate policy reports from Kyverno-native policies, we'll also let you generate policy reports from ValidatingAdmissionPolicies that may be in your cluster. This can be a great way to decouple things and use mechanisms like RBAC to entitle your developers and users to see how their resources are
doing compared to ValidatingAdmissionPolicies. The third thing is the ability to write Kyverno rules natively using CEL. CEL, as Jim showed, is the language that ValidatingAdmissionPolicies must be written in; if you'd like, you can also write Kyverno policies using CEL expressions. The fourth point, which is related to the previous one, is that Kyverno will act as a controller to actually generate ValidatingAdmissionPolicies for you, if they're written in CEL and in such a way that the API server can support them. ValidatingAdmissionPolicies are still very new, and they have many gaps compared to other external admission controllers, but in any case, if you're writing them in a way that's compatible with the API server, Kyverno will see that and will generate and manage those ValidatingAdmissionPolicies for you without your having to do anything else. So that's what we're doing in 1.11 for ValidatingAdmissionPolicy support, really cool stuff, and I hope you'll check it out. By the way, some of this is available right now; there is a way to test pre-release code, so definitely check that out. Moving on to the next thing: per-resource policy reports. I've already explained a little about policy reports. Today, Kyverno generates policy reports on a per-policy basis. We want to remove some of the bottlenecks with policy reports, make them more performant and less resource-intensive, and have them require less space in etcd, so we're going to switch over to per-resource policy reports: a policy report for an individual resource, rather than one at the policy level. There are many things that enables; more details will be coming. Next, Cosign 2.0 and Notary updates. Those using Kyverno may know that it already supports image and image signature verification from
the likes of Sigstore Cosign, and, with the latest release, 1.10, also the Notary project. We're going to extend that even further in 1.11 by bringing in Cosign 2.0 support and also some of the latest updates to Notary, so look for those enhancements in 1.11. The last point is something I also showed in the demo: cleanup using the reserved label, that cleanup.kyverno.io/ttl label you saw in action. You'll get that in 1.11, and it lets you assign the label using, like I showed, a Kyverno policy; somebody can also self-assign it, however you want to do it. That label will automatically cause Kyverno to clean things up. And with that, that's all we have for you here. For additional information, please check us out on the website, which also has links to the community: how you can get involved, how you can get in touch with us, and how you can start being a contributor. We have a contributors meeting and lots of ways to get in touch and get involved. If there are no further questions, thanks a lot for being here, and we hope to connect with you sometime in the future and in the community. Yeah, awesome, awesome. So I guess I would like to add at least one question: what's next on the Kyverno roadmap, and how can a person get involved with Kyverno? It could be a student or a professional, someone who is new to this kind of field. What do you say? Yeah, so Jim, I think, put a link out there to the community. The community page has a lot of links; we have a contributors guide with a lot of information on how to get started with Kyverno, looking at the code base, getting your development environment set up. There are a lot of resources there, and obviously Slack is a great way, so definitely check out the community link that was just provided and take a look at some of those resources. And, you know, feel free to connect with us in Slack; a lot of
the maintainers are there and very interested in helping out new contributors. So check the guide out and come talk to us if you have any problems. Awesome. I guess a lot of people, a lot of students, will be influenced by you guys and will contribute over Slack. Okay, thank you so much, Jim and Chip, for the awesome session, and let's now end it. Thank you so much again for giving your time. Thanks. Okay, bye then. Okay, so let's just end our session now. Thanks, everyone, for joining the latest episode of Cloud Native Live. We enjoyed the interaction and questions from the audience. Thanks for joining us today, and we hope to see you again soon. That was it from my end; see you next week.