Hello, everyone. Welcome to today's Cloud Native Live with CNCF online programs. I am Libby Schultz and I'll be hosting today. I want to read our code of conduct and then hand over to our speakers: Jim Bugwadia, founder and CEO of Nirmata and a Kyverno maintainer, and Chip Zoller, technical product manager at Nirmata and a Kyverno maintainer as well. A few housekeeping items: during the live stream you can chat with us and leave your questions in the chat box, so please do. Tell us hello and where you're listening from, and leave all your questions for Jim and Chip there; we'll get to as many as we can throughout. This is an official live stream of CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please note that this recording will be online on our YouTube channel later today, so you can catch it anytime, or use the registration link that you registered with to reach it as well. With that, I'm going to hand things over to Jim and Chip to take it away.

Thank you, Libby, and thanks everyone for joining. I'm going to kick things off with a quick introduction to Kyverno. A lot of you probably already know a bit about Kyverno, but this is for those of you who might not have heard of the project. Kyverno is a policy engine that was purpose-built for Kubernetes. It's designed to operate both inside of Kubernetes clusters and as a command-line tool, which you can put in your CI/CD pipelines, for Kubernetes policy and resource management, and as a building block for governance and compliance. I'm sharing the Kyverno documentation, and we'll pull up the introduction section because it has a nice picture which shows how Kyverno operates. As we look at some of the changes we have put into the newer releases of Kyverno, we'll revisit a lot of these components. Basically, Kyverno works as an admission webhook, which means it registers itself with the Kubernetes API server. It becomes part of your control plane and receives every API request, based on your configured policy set. Kyverno policies are Kubernetes resources; there is no programming language necessary for these policies. They're declarative, written as Kubernetes resources. You can use these policies to validate, and block on, the rules and checks that you want within your cluster. You can also mutate resources and generate resources, and with some of the newer features that Chip is going to cover, Kyverno also allows things like cleaning up and other hygiene and automation of resource management within your clusters. So there are a lot of powerful things that Kyverno allows, and we'll see some of these features live.
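To make the "declarative, no programming language" point concrete, here is a minimal sketch of a Kyverno validate policy; the required `team` label is an illustrative rule, not something shown in the talk:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: Enforce   # use Audit to report without blocking
  rules:
  - name: check-team-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "The label `team` is required on all Pods."
      pattern:
        metadata:
          labels:
            team: "?*"   # any non-empty value
```

The same policy file can also be exercised outside a cluster with the Kyverno CLI, for example with `kyverno apply` in a CI/CD pipeline or `kyverno test` against fixture resources.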
Like I mentioned, Kyverno can also be run outside the cluster. The picture shows the webhook, but there is also a Kyverno CLI, which is extremely handy if you want to apply your policies in your CI/CD pipeline, or if you want to test policies; the CLI has a test command for this. So certainly take a look at that as well. Those are the two form factors. In addition, Kyverno will run periodic background scans, and the nice thing here is that even policy reports are produced as Kubernetes resources. So Kyverno policies, as well as the reports with the results from your policies, are all consumable through Kubernetes APIs. There are no other additional tools necessary, and it fits in nicely with your existing way of managing resources, for the policy lifecycle as well as for getting updates for your cluster. So that's a quick overview of Kyverno. There are, of course, several sample policies in our policy library; I think we're up to 265 over here. It's awesome to see the community contributing a lot of these, as well as the Kyverno maintainers as we work on different features. This is probably your best source to get started. You can browse policies by category, by the resources or versions that you're trying to write policies for, or just do a search, and you'll see at least a sample which gets you started, and then you can dive in a little deeper. One quick thing to note: we have a lot of community users coming over from other projects like OPA (Open Policy Agent) Gatekeeper. So, a quick comparison: I already mentioned that Kyverno does not require a policy language. The other thing to think about is that Kyverno also allows generating, mutating, and cleaning up resources, so it's built for automation and security rather than just validation and enforcement. That's a major distinction between the approaches the two projects take. Of course, where OPA has value is if you're using the same policies, or the same team is writing policies, outside of Kubernetes; then using a language like Rego may make sense. But if you're focused on Kubernetes, the goal of Kyverno is to make things as simple as possible and leverage as much built-in value as Kubernetes provides. A great example of this is pod security admission, and how we have allowed Kyverno to leverage the upstream libraries for that, while still providing additional value and flexibility on top of pod security admission itself. So that's a quick overview. With that, I'm going to hand off to Chip, who's going to dive a little more into our roadmap. We'll talk about 1.9 features first and then come back to 1.10 and a few new things we're working on.

Cool. Thank you, Jim. All right, that was a good overview and introduction of Kyverno. Let's talk a little bit about things that just came with the 1.9 release, which we released at about the beginning of February. I'm looking at the official blog here, and I'm not going to be able to cover everything; if you want to see all of the things that are new in 1.9, go check out the release notes, which have a ton of things in there. But there are some key things to point out. One of the things that has been met with really great reception so far is policy exceptions, and I'm going to cover this in a demo in just a minute, along with the second new feature. A policy exception, and you can see an example right here, is a new resource that we created in Kyverno that allows you to decouple the policy from the scope of its application. The problem that we're trying to solve here is avoiding the necessity of modifying a policy every time you want something to not apply to it.
Policies are typically as broad in scope as possible, but sometimes it doesn't make sense, or it's simply not possible, to go and modify a policy when you need to apply exclusions that are legitimate and valid in your environment. Also, having a separate policy exception allows you to have teams, for example, that may not be responsible for policy authoring; they may not even be able to see the policies, and they don't need to. They just need to be able to create an exception. So, through one of multiple mechanisms, once a policy exception that they're allowed to author comes into the cluster, it will be able to bypass the policy. Here's an example of what a policy exception looks like. What you're able to do is define the name of the policy that you want to provide an exception for; policies wrap rules, so you'll define the names of the rules as well. Then there's the standard match and exclude block that you may already know from Kyverno policies, which is very flexible and has a lot of different options; you just select the resources that you want. So in this case, maybe I've got a rule called disallow-host-namespaces that applies to all pods, matching on pods across the entire cluster, but you want to provide an exception for one specifically named pod in one specific namespace. You can create this without having to modify your existing policy, and once it's in place, your important tool that's in the delta namespace, for example, will be allowed to run. That's a really nice thing, because you can now separate who writes the policies, not have to update those policies, and just create a separate resource; Kyverno basically combines them together to understand whether an incoming resource that matches should be allowed or not. So that's the policy exception feature.
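For reference, a sketch of the exception Chip describes might look like the following; the policy, rule, and pod names are illustrative, and `kyverno.io/v2alpha1` is the API version the resource used as of 1.9:

```yaml
apiVersion: kyverno.io/v2alpha1
kind: PolicyException
metadata:
  name: delta-exception
  namespace: delta
spec:
  exceptions:
  # the policy (and the rules inside it) being excepted
  - policyName: disallow-host-namespaces
    ruleNames:
    - host-namespaces
  match:
    any:
    - resources:
        kinds:
        - Pod
        namespaces:
        - delta
        names:
        - important-tool*
```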
And like I mentioned, I'm going to demo that in just a second, along with how you can combine it with the second new feature of 1.9, which is cleanup policies. So, Kyverno has long had an ability that we call generate rules, and generate rules are one of the seminal and most loved capabilities of Kyverno. They allow Kyverno to generate, that is create, all-new Kubernetes resources, yes, including custom resources, based upon a definition that you specify in a policy. You can either choose to clone from an existing resource, or you can define the entirety of the resource inline in the Kyverno policy. That was great, and it went a long way, but one of the gaps we heard about was: hey, there's still a need to be able to remove resources. You've got the creation angle covered, and that's awesome, and we can use that in a lot of different ways, but it would be super nice if we could couple that with being able to remove resources. So we came up with these cleanup policies. There are some other tools that do this as well, but the nice thing about having cleanup policies built into Kyverno is that you get all this together, and these rules can complement each other. One of the use cases for this might be periodically removing cruft from a cluster. Like in this example here, we're able to remove things like bare pods. One of the stories goes: we're troubleshooting, and we need to run some pods one time to do ping tests or curls or something. But as typically happens, people forget about them and they get left around, and bare pods aren't managed by a controller. So we could use something like a cleanup policy, and here's an example of one, to go find all those bare pods and then remove them on a scheduled basis. Just quickly walking through this, as you can see, the policy contents are very familiar. Like Jim mentioned, we try to make Kyverno as easy as possible to reason about and to write policies for, so we're recycling all the same constructs that you may already be familiar with from Kyverno policies. We're just going to match on a regular pod, and then we're going to use the familiar expressions that Kubernetes already has, and Kyverno uses as well, to take a look at the owner references. Without getting into too much nitty-gritty detail, bare pods are defined by the lack of owner references, so we're simply looking for those pods that do not have an owner reference. And we're going to clean them up on a cron-based schedule, which you can configure however you like. When that time period elapses, Kyverno will kick into action, and whatever resources match that definition will be cleaned up.
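A sketch of such a cleanup policy, closely following the bare-pods example in the Kyverno docs (the five-minute schedule is illustrative):

```yaml
apiVersion: kyverno.io/v2alpha1   # API version as of the 1.9 release
kind: ClusterCleanupPolicy
metadata:
  name: clean-bare-pods
spec:
  match:
    any:
    - resources:
        kinds:
        - Pod
  conditions:
    all:
    # "bare" pods are those with no owner references
    - key: "{{ target.metadata.ownerReferences[] || `[]` }}"
      operator: Equals
      value: []
  schedule: "*/5 * * * *"   # evaluate every five minutes
```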
And I'm going to show this in tandem with policy exceptions, and how you can combine these two together to make a really pretty cool system where you can empower your local development teams and users of your cluster to get something like this as a service, on a time-expiration basis. But before getting there, let me quickly touch on some of the other features. Distributed tracing: we instrumented Kyverno with distributed tracing in 1.9, so if you point it at your collector, it will give you the full execution that Kyverno is doing under the covers: all the policies, all the rules, how long they took, what external calls were made. That's super valuable when it comes to getting this observability information; it can be used for troubleshooting, but also for things like performance analysis, or just knowing what it's doing. So that's distributed tracing. Extended support for subresources: we brought even more abilities for working with Kubernetes subresources to Kyverno; they're now easier than ever, and basically all of them work. You can take a look at some of the samples that we have here. This one shows how, when new nodes get bootstrapped into your cluster, you might want to advertise some sort of custom resource, like a dongle or an FPGA or something like that. Kyverno can mutate those nodes for you, which happens in the subresource called status, as part of your onboarding workflow for your cluster, and advertise those resources to other pods and things in your cluster without you having to do anything. And the final one I'll mention here is ConfigMap caching. Kyverno has the ability to do things like use the contents of a ConfigMap to make policy decisions, and it's had that for a while. But for certain large clusters with a lot of policies that may be consuming a lot of ConfigMaps, we wanted to reduce the impact of making those API calls. So we brought in this ConfigMap caching feature, which simply allows you to assign a label; Kyverno will watch and cache the ConfigMap, which avoids some of those API calls. It has the benefit of reducing some of the load on the API server, and it also makes policy lookups quicker.
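As a reminder of what the ConfigMap lookup itself looks like, here is a minimal sketch; the ConfigMap name and key are hypothetical, and the label-based opt-in to the 1.9 cache is applied to the ConfigMap itself (see the 1.9 release notes for the exact label):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-environment
spec:
  validationFailureAction: Audit
  rules:
  - name: check-env-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    context:
    # exposes the ConfigMap's data as {{ envs.data.<key> }}
    - name: envs
      configMap:
        name: cluster-environment   # hypothetical ConfigMap
        namespace: default
    validate:
      message: "The `env` label must match the cluster-wide environment."
      deny:
        conditions:
          all:
          - key: "{{ request.object.metadata.labels.env || '' }}"
            operator: NotEquals
            value: "{{ envs.data.environment }}"
```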
And there are a lot of other things that you can see here; I'm not going to go through all of them. Again, I encourage you to read this blog and take a look at the release notes if some of these things piqued your interest. But I want to flip over real quick and show the first two main features in action. Let me flip over and show the architecture of what we want to do with the policy exceptions feature. This has been a great thing for letting users of your cluster request their own exceptions, rather than having to go find somebody who can modify a policy. But one of the things that ultimately comes up is that you want those exceptions to only exist for a certain period of time. Like I mentioned, in the use case where this might be a one-time troubleshooting thing, once that troubleshooting process is complete, you don't need the exception anymore. In fact, you really want to get rid of it, because it may allow things to circumvent your policy, which you may not want. So we want to be able to put something like an expiration date on it. What we can do is combine these two capabilities in Kyverno 1.9 to give you that in a fully automated fashion. Here's what's generally going to happen. Somebody, a user in your cluster, wants to create a policy exception; maybe they're trying to write an exception for one of those troubleshooting pods, or something else, it could be anything. Kyverno is going to see that request coming in, based upon a policy that matches on these policy exceptions, because Kyverno has the ability to match on any resource in Kubernetes. It's not just pods and deployments and secrets; it's truly any resource that you might have in the cluster, and a policy exception is just another resource, so we can match on that. Then it will automatically create a new cleanup policy that sets the expiration date to four hours; of course, all of this is configurable, but in this demo it's set to four hours. After that time period elapses, it will clean up that specific policy exception, in the falcon-dev namespace here, and give you peace of mind that once that period has expired, the exception is removed and your policies go back to the same enforced behavior that you want. So let me flip over and show this. This is basically what we're doing here. We have a standard Kyverno cluster policy, and we are going to match on policy exceptions. Once we see a policy exception, Kyverno's generate rule, which I talked about just a minute ago, and which has the ability to create any resource you want, is going to create a cluster cleanup policy for us, the second resource type that I mentioned in the 1.9 features. It's going to create one with a specific name, following the pattern that I'm showing here. And just one cool thing to note: Kyverno uses a system called JMESPath, which is a JSON processing language, but we've also built a whole bunch of filters that are specific to Kyverno to add even more value and power to it. One of the things we added is a random string generator. As you can see here, part of the name that's going to be built involves a random string that I specified, as a regex, to be eight characters long. This can be a really cool thing, because you can generate things like pod hashes, API tokens, and UIDs, all sorts of things, without having to go down to a low-level programming language; you simply write a filter for it. Anyhow, once we have that name, we're going to stash all this information in it, and populate this cluster cleanup policy. One of the other cool things that we can do in 1.9, and I'll show this in the next demo, is take time into consideration. We can get the current time, add whatever amount of time you want to it, and use that to build the schedule. The schedule is a cron-based thing, so we're going to take the time right now, add four hours to it, convert it to cron, and that's going to be our schedule. Once we do that, the cluster cleanup policy is going to get created, and after the four-hour time period elapses, it's going to automatically remove the exception. Now, we're obviously not going to wait four hours, we're not going to have an extended webinar here, but I just want to show the actual process. So here's a policy exception that I'm going to attempt to create, and I'm going to say that only an emergency busybox pod is allowed past my disallow-host-namespaces policy. Let me first apply the cluster policy which is going to provide this automation for us. All right, this is going to give us the cleanup capability that we want, and it's triggered on a policy exception. So if I create this policy exception, what I should see once this is done is a new cluster cleanup policy with the correct contents. I'll note that, although I've already done it, you will need to add permissions to Kyverno in order to do this. There is a blog out there that already has all of this for you, and I believe the documentation mentions this as well, but I've already done that portion. So now that we've created the cluster policy... Oh, sorry, there's a quick question: can it manage validation and mutation even for CRDs? Yep, it sure can. Kyverno works indiscriminately on any Kubernetes resource. It doesn't matter if it's built in or if it's a custom resource; even if you don't have a controller that reconciles that custom resource, which is probably not common, you could still do it if you wanted to. So yes, it can not only validate and mutate, but also generate and clean up, and other things, on those custom resources. Absolutely. Okay, so now we see here that a cluster cleanup policy was generated. Going back to the name: it gave me the name in the pattern that I mentioned, and there's that random string at the end. And there's the schedule: if we take the current time, add four hours to it, and convert it to cron, that's what we get. Again, we're not going to wait for this to actually run, but as you see here, this automation fired: it saw the policy exception and automatically created the cleanup policy. You can set the interval to whatever you want, and once that interval elapses, it will delete the policy exception. The policies that were provided for in that exception go back to behaving the same way they were.
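A rough sketch of the automation Chip just demoed, simplified from the blog he mentions; the name pattern and four-hour duration are illustrative, and it assumes the `random` and time filters behave as described in the 1.9 JMESPath docs:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: auto-expire-exceptions
spec:
  background: false   # rule relies on the admission request
  rules:
  - name: create-cleanup-policy
    match:
      any:
      - resources:
          kinds:
          - PolicyException
    generate:
      apiVersion: kyverno.io/v2alpha1
      kind: ClusterCleanupPolicy
      # e.g. polex-expiry-4a7bc09d
      name: "polex-expiry-{{ random('[0-9a-z]{8}') }}"
      synchronize: false
      data:
        spec:
          match:
            any:
            - resources:
                kinds:
                - PolicyException
                namespaces:
                - "{{ request.object.metadata.namespace }}"
                names:
                - "{{ request.object.metadata.name }}"
          # now + 4h, converted to a cron expression
          # (the full version in the blog also cleans up after itself)
          schedule: "{{ time_add(time_now_utc(), '4h') | time_to_cron(@) }}"
```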
So that's it for the first one. Hopefully that makes sense, and maybe you even think it's kind of cool. Let me do one more. In this one, like I mentioned, with the ability to work with time, one of the things we can do in Kyverno 1.9 is create a time window for your policies. This can be super helpful if you have ops teams that work in different shifts, or maybe you open up your cluster to a platform that has a component driven by non-developers, actual users. Maybe you want certain enforcement, or non-enforcement, behavior during business hours, but another behavior outside of that, or a separate behavior on weekends, whatever the case might be. Since Kyverno is now aware of time, we can time-bound whatever policy you want. So imagine a case like this: I've taken a policy from our pod security policies, which Kyverno already provides for you, fully built, conforming to the Pod Security Standards. And we want to make sure this is only enforced during business hours, eight to five, 8 AM to 5 PM Eastern Standard Time in my case, though obviously you can change this to whatever you want. The idea is: if we're within that window, this policy is going to be enforced; if we're outside of that window, there's nothing you need to do, Kyverno will understand that and not apply the policy. Imagine you had a pod like this: it's a troubleshooting pod, and as you can see, it needs to mount a host namespace; it needs access to the underlying host to do some sort of work on it. Normally this pod would be blocked by that policy, but if I were able to run this pod outside of the window that's defined here, it would be allowed. And actually, I changed this for the sake of the demo: that is the current time. So let's go over and do this, and let me answer a question here: "Can we set the policy for 30 minutes or 10 minutes? Is that allowed?" Sure, you can set it for whatever time period you want. It's really up to you; for demo purposes I had that in there to illustrate what it might look like in a real-world scenario, but it's totally up to you, you can set it how you want. So let me create this cluster policy. This is the disallow-host-namespaces policy, and I have a pod that violates it. Now, since I'm in that time window, it's currently 12:25 PM Eastern Standard Time, this pod should be blocked. And as you can see here, the pod was blocked, because we're within that time window. Now let me simulate what it would look like if I adjusted the time window so that the current time fell outside of it; I'm going to change the start to 20:00 hours, so 20 to 22 corresponds to a very narrow working window. Let's see what the difference is. That's the only change I'm going to make to the policy, and I'm going to try to submit the same pod that needs those capabilities, imagining that we're now outside the time window. As you can see here, I was able to create the pod. That was the only modification I made; I did not change the structure of the policy. I just said, hey, the time window is much narrower now, and since we're outside of it, you can allow this through. Anyhow, that's what I wanted to show from a demo perspective. Hopefully that makes sense, and hopefully that's kind of cool.
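For reference, here is one hedged sketch of how such a time window can be expressed with preconditions, deriving the current hour from Kyverno's time filters; the filter names are from the 1.9 JMESPath documentation, the hour-extraction trick is our construction rather than the exact demo policy, and the hours are illustrative (13-22 UTC roughly corresponds to 8 AM-5 PM US Eastern):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-namespaces-business-hours
spec:
  validationFailureAction: Enforce
  rules:
  - name: host-namespaces
    match:
      any:
      - resources:
          kinds:
          - Pod
    preconditions:
      all:
      # hour-of-day in UTC, pulled out of the cron form of "now"
      - key: "{{ time_to_cron(time_now_utc()) | split(@, ' ') | [1] | to_number(@) }}"
        operator: GreaterThanOrEquals
        value: 13
      - key: "{{ time_to_cron(time_now_utc()) | split(@, ' ') | [1] | to_number(@) }}"
        operator: LessThan
        value: 22
    validate:
      message: "Host namespaces are disallowed during business hours."
      pattern:
        spec:
          =(hostNetwork): "false"
          =(hostIPC): "false"
          =(hostPID): "false"
```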
And yeah, that's most of it for 1.9. Let's take a minute and first check for any more questions out there. I don't see any other questions. All right, so with that, that's 1.9. Let's take a look at what we have on the roadmap for 1.10. All right, Kyverno 1.10, let me select the right one here. We've got some really cool things coming, and Jim is going to show some of them today; I believe this is the first time we've shown any of these. Let's take a gander at the features. The first one is that we're going to try, well, not try, we are going to split Kyverno into multiple components. One of the things that we've heard is that Kyverno has a lot of really flexible and valuable capabilities, but some people may not need or want all of them; they just want a certain portion, and they don't necessarily want to provide the resources for the other things. They might just want the admission component, or just background scans, or just some other component. So we're going to take the first step and decompose Kyverno into separate controllers that allow you to get that. The first thing we're going to do is split the webhook and the background controller, so that you can choose which of those you want. By default you'll get both of them, and everything else, but if you just want one of them, you can totally do that. So we'll reduce the resources you're required to provide, and also give you just the capabilities that you want, so that you have a bit better ability to reason about what it's doing and what it's not doing. That's the first thing: we're going to split this up, and this is just the first phase of more phases to come; we're going to decompose it further and try to make it fully modular. If you only want, for example, generation in the future, then we'll be able to give you just generation. If you just want background scans, to be able to generate the nice policy reports that Kyverno can generate, then you should be able to get just that and not have to run a webhook at all. So that's the first one. The second one here has been a long-standing request, so we're super excited to deliver it: Kyverno will now be able to make in-cluster service calls. Kyverno will be able to call any service you want in the Kubernetes cluster, not just the Kubernetes API server, and even do things like POST requests to send data to it. This is going to be super helpful, because now you can integrate Kyverno with truly whatever you want. As long as it returns some JSON data, and at the end of the day that's really what admission controllers process, because Kubernetes sends JSON anyway, we think you should be able to do that with any service. As long as you get some data back that it can process, Kyverno will be able to take that into a policy decision, and Jim's going to show a demo of this. The second major one to point out here is Notary v2 support. Today in Kyverno we have the ability to verify image signatures and attestations based on the Cosign project; now we are working on Notary v2 support, which will allow Kyverno to do the same type of thing for the Notary v2 project. So you'll have a choice: if you're using one project or the other, you can verify images for either one.
And as is typical for everything we do in Kyverno, this is all done in a very simple manner: we're not exposing a programming language, we're making it very easy to consume, and very easy to switch between the two if you want to. The final thing here is that we're making some pretty significant modifications to Kyverno's generation ability to make it an even better experience. We're enhancing it in ways that have been requested for a while, giving it a nice coat of paint and polishing it up really well. We've heard from a lot of folks that rely on and really love the generation capability because it's so easy to use; we just want to make it a little more robust and bring in a few enhancements that have been out there for a little while. So with that, let me hand it over to Jim so he can show you a couple of these cool new features coming in 1.10 in a live demo.

Thank you, Chip. Yeah, so just switching over, we'll start with the first feature. Like Chip mentioned, we wanted to decompose Kyverno. Kyverno started as a single deployment, but as you can see in this picture, there were several different controllers packaged into that one deployment. Let me show you how this looks on the command line. If we do a get pods on the kyverno namespace, and I'm running the latest from 1.10, I can see there are four different pods running, and we can even do a get deploy to see the number of deployments. What we should see is that for each of the pods we saw, there is also a deployment in the kyverno namespace. Now, like Chip mentioned, Kyverno itself is the admission webhook; that runs in full HA mode, so you can have multiple instances and it will load-balance API requests, and you can size and scale that independently. Then you have the background controller, which does mutation and generation on existing resources; the reports controller, which is responsible for reports; and the cleanup controller, for the new cleanup policies that Chip demoed. So it's very easy to understand what each of these is doing, and hopefully also easier to scale Kyverno and make sure your webhook is properly sized. That's critical, because Kyverno runs as an admission controller, so you want to make sure the webhook doesn't go down, doesn't have any glitches, and is performant enough, and of course some of the background operations need to be sized correctly for larger clusters. So that's the split. From a user perspective there's no major change: you just install Kyverno with Helm or with the install manifests, and it will automatically install the controllers. But then you have the flexibility to size, tune, and operate Kyverno within your clusters.
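As a sketch of what that per-controller control looks like with the 1.10-era Helm chart (key names as we recall them from the chart; check the chart's values file for the authoritative set):

```yaml
# values.yaml sketch (Kyverno 1.10 / chart v3 era)
admissionController:
  replicas: 3            # the webhook itself; run multiple replicas for HA
backgroundController:
  enabled: true          # mutate/generate on existing resources
reportsController:
  enabled: true          # background scans and policy reports
cleanupController:
  enabled: true          # cleanup policies
```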
All right, so let's switch over to the next thing on the list. I'll start with Notary, and then we'll go back to the extension service. Like Chip mentioned, the idea here is to start supporting, in addition to Sigstore Cosign, Notary as a signing and verification format. For those of you who may not be aware of it, this is a CNCF project. It started life prior to Sigstore Cosign and some of the other Sigstore components, but then there were some changes made; there was Notary v2, which may be renamed after Notation, as this demo shows. It's a CNCF project which aims to do similar things to what Sigstore and its components are doing, but using OCI references and artifacts, as well as X.509 infrastructure, or you can use KMSes and other things. I'm not going to go into detailed comparisons; there's a good blog post out there which compares the two. For Kyverno, the plan is to support both formats for signing and verification, and depending on how things evolve, we will of course continue to track and support both projects. For those of you who might be familiar with image verification in Kyverno, this was a feature we introduced a few releases back. There is a rule type in Kyverno called verifyImages, and it does a few things: it will verify image signatures, and verify attestations that are attached to images. Depending on the project you're using, the attestations may show up slightly differently in the registries, but the idea is that you are creating metadata in your CI/CD systems and signing that metadata along with your images, like provenance data: where was this image built, does it have a scan report, does it have an SBOM. Then you attach this information to your OCI image, which Kyverno can verify. Starting at the top, this policy is a validation policy in enforce mode, with some standard boilerplate you'll see in most policies. It matches every pod; for those of you familiar with Kyverno, it will auto-generate rules for deployments and other pod controllers, so the policy is written to match only pods. The new thing here is this type field. Prior to 1.10 we did not have a type field, and Cosign was the default image verification logic integrated into Kyverno. Now Notary is also supported, and again, the name may change based on how the project evolves, but right now we're calling the types cosign and notary v2. Then it matches every image reference; here we've just done a star wildcard, so it applies to every image. Typically, you might have different signature formats for different repositories; if it's a third-party image, maybe it's using a different spec, and so on. Then it says these images have to be signed, and an attestor, an authority, can typically be a certificate; it can be a key, and Cosign also supports something like keyless, so there are a lot of flexible options in how you verify the image. Ultimately, this is the public certificate here. Now, in most production deployments, you could put a cert chain here if you want to verify based on root certs, but typically you would fetch your certificate from a KMS or some upstream system; it could be Vault or some other store. It can also be fetched from a ConfigMap, so again, a lot of flexibility. For the demo we just have it hardcoded, showing the certificate in the policy itself.
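A sketch of the policy Jim is walking through; the type value follows the 1.10 docs (the talk notes the name was still in flux), and the certificate is a placeholder for the hardcoded demo cert:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image-notary
spec:
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
  - name: verify-signature
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    - type: Notary             # new in 1.10; Cosign remains the default
      imageReferences:
      - "*"                    # demo-wide wildcard; scope this down in practice
      attestors:
      - entries:
        - certificates:
            cert: |-
              -----BEGIN CERTIFICATE-----
              (public certificate, hardcoded for the demo)
              -----END CERTIFICATE-----
```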
So, going back to the command line, what I'm going to do is sign my test image. I'm using Azure Container Registry here, but you can use any OCI 1.1-compliant registry; on the Notation website they talk about compatibility and the specs. And this is warning me, it's telling me, hey, signing by tag is not a great idea, but that's what we ended up doing; if you put this in your CI script, it's better to get the digest and sign with the digest. Then, similarly, with notation, let's verify this on the command line to make sure it works. I'm going to verify by the digest, and the digest matches what we signed, so it works as we expect. That's just to demonstrate what this looks like from the command line. Going back to my Azure registry, I have a signed and an unsigned image over here. If I dive into the signed one, the new thing here is the artifact spec, and you see there are two signatures now attached. Typically you would have one, but you could have multiple signatures: one could be a team-level signature, and, say, for a production cluster, you could have another signature to allow that image into production. Kyverno policies, going back to this example, are very flexible: you could say both need to match, or only one needs to match. So there are a lot of different ways you could vary how you're signing and verifying your images. What's also cool is that I had previously uploaded a scan result and signed that scan result over here, which you can also verify, but for this demo we're going to keep it simple and just verify the signature using the key in the policy. I have two signatures, and I don't really need two, I could delete one, but we'll just leave it as is and see how this works. Now, let's first apply this policy; if I show my policies, this policy should be there, it's in enforce mode and ready to be applied. Now, with kubectl, let's say I want to run this image; I'm going to first test with the unsigned version, to make sure it gets blocked. And, as expected, it could not find any signatures, so it got blocked. Now, if I run the signed v1 image, I'm expecting it to get created, and there it is. So again, this is as simple as it gets: if you start signing your images, you can very easily start verifying them. With the rise in supply chain attacks, we highly recommend evaluating one or more of these options, but definitely start signing and verifying images. In Kyverno you can start with an audit policy, so it will just flag unsigned images, and then you can slowly turn up which images you want to verify signatures on. Like you saw here, there's a lot of flexibility: I could have said only images coming from a particular repo, or only third-party images; there are lots of different ways, and you can match namespaces however you wish. But that's the simple demo for Notary support. It is going to be built into Kyverno, much like Cosign, so it's very simple, with no need for extensions for basic checks.
Moving to the second feature, and explaining why: like Chip mentioned, we often get the request that says, hey, Kyverno is great, but can it do something else? Because I have these other systems I want to integrate with; maybe I want to fetch data from external systems, or perhaps I even want to call some other service. It could be Prometheus; there's a great demo I recently saw from one of our colleagues who was starting to integrate 1.10 with things like OpenCost. The possibilities really become endless: you could start pulling data, or even post the information Kyverno gets from the admission request to external services, and then respond to that. So here's what I have for the demo: a very simple project, and there is a repo, I'll share the link. It's written in Go, but this could be in any language: JavaScript, Java, Python, whatever your language of choice is. All it's doing is starting up two listeners: one is a web service on port 80, and another is on 443. The 443 one, of course, has a certificate and key; in this demo I chose to use cert-manager, but you can configure that however you wish. And all this service does is, every time it gets a request, it has a standard handler for that request, and it parses out the namespace based on how the request is sent: if it's a GET, it expects the namespace as a parameter in the request; if it's a POST, it gets it from the body. Once the namespace is extracted, it says, hey, if the namespace is missing or default, I'm going to block this; if it's some other namespace, allow it. You could make this as fancy or as complex as you wish; you could decide which namespaces you want to block, things like that. Now, this is a very trivial example, and there are Kyverno policies where you can do all of this in one policy, but it's meant to demonstrate the flexibility and the power of this feature. So that's the web service, and we'll install it. But before we do that, I want to show the corresponding policy and what that looks like. Here, we have a validation policy which is calling this extension service, and it's checking ConfigMap resources. I did not want to make it check pod resources, because if my extension service is down, then Kyverno will return an error, which obviously you have to be very careful with. If you're running any extensions, they have to be HA and they have to be performant, because admission control has a finite window to respond in, or you are potentially blocking API requests, based on your policy configuration, if you put policies in enforce mode and your failure mode is Fail instead of Ignore. In this demo I'm doing a POST, but as you see in the commented-out lines, you can just as easily do a GET. This could be a GET to Prometheus, a GET to your own custom services, things like that.
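A sketch of the policy being described, using the 1.10 service-call syntax; the service URL, the /validation path, and the `allowed` field in the response are our reading of the demo service, not confirmed specifics:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-namespace-via-service
spec:
  validationFailureAction: Enforce
  failurePolicy: Ignore   # be deliberate: the service is now in the admission path
  rules:
  - name: call-extension-service
    match:
      any:
      - resources:
          kinds:
          - ConfigMap
    context:
    - name: result
      apiCall:
        method: POST
        data:
        - key: namespace
          value: "{{ request.namespace }}"
        service:
          url: https://sample-service.default.svc.cluster.local/validation
          caBundle: |-
            -----BEGIN CERTIFICATE-----
            (CA for the service, e.g. issued via cert-manager)
            -----END CERTIFICATE-----
    validate:
      message: "namespace {{ request.namespace }} is not allowed"
      deny:
        conditions:
          all:
          # whatever JSON the service returns is available under "result"
          - key: "{{ result.allowed }}"
            operator: NotEquals
            value: true
```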
One great example we heard on a community call is a user who wants to do a GET to the Kubernetes API, or a POST to the Kubernetes API for a subject access review. That use case would be extremely interesting: you're making an additional call to ask, does this user own the namespace where they're trying to create a service, or something like that, which you can now do with this additional call. Whatever data is returned from the call is stored in this field called result, which you can then easily check within your policy block itself. Here I'm saying: if result is allowed, then proceed; if it's not allowed, print an error message saying, hey, this is not allowed. So again, pretty simple, not too much complexity in the service, but it's meant to demonstrate what can be done. And of course you can fetch data from other external resources; you can even make calls outside the cluster, but we highly recommend limiting this to API calls within the cluster. One other thing to show: you can of course configure full mutual TLS, or encrypted connections, on both sides. If we have the CA bundle, through cert-manager or some other mechanism, we can then check it within the policy itself and make sure we're talking to the validated service. On the server side, you can use the Kubernetes TokenReview API to make sure that only requests from Kyverno are accepted and all other requests are blocked; this way some other third-party tool cannot call that service. It's up to you; if you need that security, you can add it in. So anyway, let's see this in action. I'm going to switch to the second demo. We'll delete our existing policies, and let's start with applying the resources; I have a manifest, and I'm going to apply all of these. Okay, it looks like they're all in here, and then we'll apply this policy for checking the namespace. Now if I do a kubectl get cpol, this is the one policy that we are going to demo. Let's say I want to create a ConfigMap; we'll start with creating a ConfigMap without a namespace, and immediately you see it says namespace default is not allowed, because we didn't provide a namespace. But if we do something like, I'm just going to use the kyverno namespace, and we'll put it in dry-run because I don't want to actually create this ConfigMap, or you can use whatever namespace you want, this should be allowed. Internally, Kyverno made the API call to the service, and it got the result based on what we coded up over here; because it got allowed equals true, the policy does the check and comparison. But this could be any data that you get back from the service, and you could formulate your allow or deny conditions according to that. So, a simple example of what can be done. One use case we're looking at, going back to how this ties into Notary: notation, as you saw, is a very powerful command line; it has its own extensibility and its own trust policies. So we're working with some partners to be able to extend Kyverno to call notation and its plugin system and get back a yes-or-no response.
And of course that will integrate with, again, things like signing systems and code-signing tools, which make an allow or deny decision. But this can be, as you can envision, extended to anything: if you want a schedule, if you want a scorecard, if you want any of this data to now be part of your policy decisions, it's very easy to do, very easy to integrate third-party things, or even cost data, like I mentioned earlier. That's a quick demo, but hopefully it gives you a sneak peek at what's coming. There are a few other things we need to do to get this feature ready; it's already available in Kyverno main, but as we near production readiness we will be adding a few other checks. So, with that, I'll hand back to Chip to talk about 1.11 and a few things in the community, but if there are any questions, I'm happy to answer those as well.

Yeah, it looks like we do have a few questions out there. One of them is: is data replication available in Kyverno, similar to OPA Gatekeeper? Jim, you want to take that one? Yeah, so if I understand correctly, that refers to the ability that Gatekeeper, or OPA, has to cache some data and make policy decisions on it. In Kyverno we have, like Chip mentioned, the ability to cache ConfigMaps, and we have discussed extending that to cache any resource within the cluster, either based on something like labels or some other identification of which resources you want cached. By default Kyverno does not require that replication, which makes it much more performant, and the memory can be tuned accordingly. But we do have pending feature requests, and we have had discussions with community members, on being able to cache and replicate certain data for faster policy decisions; it's a trade-off between memory and performance, of course. So do reach out on the Kyverno Slack channel if you have any specific use cases; happy to discuss more there. And then there's another question, I see we got an answer to one with some links: what's the best way to handle a mutation policy where we are configuring a default image registry for all pods? But before Kyverno is installed the policy doesn't exist, and so we're not able to achieve this. If I understand this correctly, it's a case where you want to be able to change the image on existing pods after you introduce Kyverno. One of the cool features Kyverno has, and this is actually a partial answer to one of the other questions, what's the difference between OPA Gatekeeper and Kyverno, is that Kyverno has very rich and robust mutation capabilities, while OPA Gatekeeper has very limited mutation capabilities, only things like metadata assignment. One of the things Kyverno can do, in addition to more robust standard mutation, is what we call mutate existing: you can introduce a Kyverno policy, and based on that policy's design, Kyverno will go and mutate, as the name says, existing resources in your cluster, without anything having to flow through a webhook at all. That can be super useful, because you can do things like introduce a Kyverno policy that changes namespaces, for example. We also have the same ability when it comes to the generation capability: you can generate into existing resources. So imagine that you had an existing brownfield cluster, and you wanted to get the benefit of Kyverno, but you already had a dozen or a hundred namespaces out there, and you wanted to do something like: give me a resource quota in all those existing namespaces. Well, you can introduce a new Kyverno policy, and it will generate into those existing namespaces, as well as new namespaces that you create after that point in time. So there's quite a big range of flexibility there, and if I still wasn't understanding your use case, come talk to us in the Kyverno channel; we'd like to understand it a bit more.
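A sketch of that generate-into-existing-namespaces pattern, modeled on the quota example in the Kyverno samples; the names and quota values are illustrative, and note that the exact "generate existing" flag name has varied across Kyverno releases:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-quota
spec:
  generateExisting: true   # flag name differs in some Kyverno versions
  rules:
  - name: generate-resourcequota
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      apiVersion: v1
      kind: ResourceQuota
      name: default-quota
      namespace: "{{ request.object.metadata.name }}"
      synchronize: true
      data:
        spec:
          hard:
            requests.cpu: "4"
            requests.memory: 16Gi
```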
But with the last few minutes that we have remaining, let's flip back over and talk a little bit about what we have tracking right now for the 1.11 release. We are participating in the Linux Foundation mentorship program, which is a great opportunity for those who are trying to get started in open source, or with the Kubernetes ecosystem, or who have already started but want to make more of an impact. We have several things that we're working on with our mentees, and one of those is ValidatingAdmissionPolicy support in Kyverno. In Kubernetes 1.27, they introduced what's known as ValidatingAdmissionPolicy, which uses CEL, the Common Expression Language, and in Kyverno we're trying to integrate with that in a couple of different ways. We're still defining what that looks like, but one of the things we'd like to do as part of this is, for example, use the Kyverno CLI to let people validate those policies without the need for an API server, which is one of the gaps with the CEL solution. So that's one of the things we're looking at in 1.11. We're also looking at the ability to use OCI artifacts and references for image validation rules. This would be super helpful, because as many of you probably know, there's a lot of activity around changes going on in the OCI spec. Kyverno policies are now actually OCI-compliant artifacts: you can upload them to a registry and even pull them down with the Kyverno CLI, so we want some more integration around that. There's also some additional integration coming to Kyverno's podSecurity, what we call the subrule type. Jim mentioned this earlier: Kyverno has pulled in the same libraries that Kubernetes itself uses for pod security admission, but we allow more flexibility, a bigger feature set, and more robustness around how you configure it. We want to extend that even further in 1.11, to be able to do things like exclude specific field paths in those controls, and also add some subresource support. Also, not shown here, Kyverno policies are now officially supported on Artifact Hub. Kyverno has the largest policy library of any policy engine for Kubernetes; I think Jim showed it was 265, and that number grows almost weekly, across the whole gamut of Kyverno's capabilities. So we want to offer that on Artifact Hub, and also make it an extensible system whereby any new contributions that you all or other community members would like to make automatically flow there. So those are the things that we're looking at for 1.11.
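For context, a minimal ValidatingAdmissionPolicy looks like the following; this is the upstream Kubernetes resource (alpha API group in this era), with a made-up replica limit, and it additionally needs a ValidatingAdmissionPolicyBinding to take effect:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-replica-limit
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  # CEL expression evaluated by the API server itself, no webhook involved
  - expression: "object.spec.replicas <= 5"
    message: "Deployments may not exceed 5 replicas."
```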
And with that, we just have a couple minutes remaining. I see one other question out there: can we configure audit notifications to go to Slack, or preferably AWS SNS topics? Kyverno doesn't do that out of the box, but there are other options out there, both open source and commercial, that will let you do exactly that: scrape a policy report and send the results to an additional collector if you like. Yeah, Policy Reporter, which is a sub-project of Kyverno, also has some notification targets; I believe Slack is one of them, but not SNS, so that could be a future enhancement, but you can certainly route to Slack and Teams and other things. Right, and I believe you can also send to a custom webhook, so you can do that as well. That's part of the Policy Reporter project, which you can find under the Kyverno organization; it's a very nifty way to get a visual dashboard, among other things, in your Kubernetes cluster to see how Kyverno is doing. And that's all the questions that I see; anyone else have something to throw at us? All right. Well, thank you all so much. I think you've left all of your contact info, how to get in touch with y'all, and links to Kyverno in the chat, so everyone be sure to click on those before we wrap up. Thanks again, Jim and Chip, for all the great info today, and thank y'all for joining us. We will catch you next week for another Cloud Native Live, and with that we'll say goodbye for today. All right. Thanks so much, y'all. Thanks, Libby. Thanks, everyone. Thank you.