Hello, welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Whitney Lee. This is my very first time hosting today. Thank you for being here. I'm a developer advocate at VMware. Every week, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. Join us every Wednesday at this time, 11 a.m. Eastern. This week, we have Chip Zoller and Jim Bugwadia here to talk with us about what's new in Kyverno. Chip and Jim, please take it away. Hi. Thank you, Whitney. Thanks for having us on. Today, we would like to share with everybody the news that Kyverno 1.10 was just released. We'd like to give everybody an overview of the features and changes that were in this release, and also do some demos so that you can really understand how this stuff works and how you can use it. So, looking at the blog that I've pulled up, I'd like to walk through the blog post and talk through some of these features. Here is our blog post on the Kyverno 1.10 release, and there is a ton of stuff to cover in it. I want to start at the top, and links will be provided for all of these resources. First of all, on the new features in Kyverno 1.10, let me just start off by saying that this was a really huge release. We've been working on it for a number of months. I think we had almost 500 PRs merged for this release. That included a bunch of new features, a bunch of enhancements to existing features, your normal plethora of fixes, and everything in between, encompassing not only the maintainers but many contributors across the community. So, if you were one of those who participated in this, and that could mean saying that you had a problem on Slack, or opening an issue, or obviously a PR, thank you. We love community.
Kyverno is very popular within the community, and it wouldn't be that way without all of you who are participating, or at the very least providing feedback. So, if you did contribute in some way to this release, thank you very much; you helped put this where it is. With that said, let's walk through some of these key features and then get to some of the other things in Kyverno 1.10. One of the main things we did in Kyverno 1.10 was address some scalability challenges, and the way we did that was by breaking Kyverno up into its constituent controllers. For a while now, Kyverno has had a bunch of different capabilities. Kyverno can do all sorts of cool things: validate resources, mutate them, generate resources, clean them up, handle policy exceptions, generate reports that are accessible in the cluster. All of this obviously takes resources, and in some cases you want to be able to fine-tune the resources you're allocating on a per-component basis. But with everything lumped in together, as Kyverno grew, we realized we needed to be a little bit more scalable. We needed to allow people to turn components on and off, and also to scale just the component they need without necessarily giving resources to everything else. So the first thing we did in Kyverno 1.10 was decompose Kyverno into these separate controllers. And this is what the physical diagram of a Kyverno 1.10 installation now looks like: we've got a cleanup controller deployment, an admission controller deployment, a reports controller deployment, and a background controller deployment. All of these are optional except the admission controller, because that's the heart and soul of Kyverno. You can flip any of the others off if you want to. That also means you can scale them out horizontally; each of these accommodates horizontal scale. And you can also scale them vertically.
So you can add additional resources to these components if you want to. If you do check out the blog page, take a look at the high availability page that we link in there, because scale is handled a little bit differently depending on the controller. This should be a really welcome change because, as I mentioned, it'll allow you to run only the components that you want and to scale them to your needs. If you need, for example, more resources in the reports controller and less in the admission controller, you can totally do that. So that was one of the first changes we made in 1.10. As a result of that, and we'll talk about this in a bit more detail, we had to totally change some things, because we're going from one deployment to four. There were some under-the-covers changes we had to make because of this. But those are the components of Kyverno, now broken out in 1.10. The next thing that we did in 1.10, which has been a longstanding request, is to allow you to make calls to other services in the cluster. And technically, these could even be calls outside of the cluster if you really wanted. Kyverno already has the ability to fetch resources from the Kubernetes API, in the form of ConfigMaps that you can use for your policy decisions, or literally any other resource inside the cluster. That was great and necessary in a lot of cases, but one of the things we heard is: hey, sometimes I need to make calls to something that's not Kubernetes itself in order to inform what my decision should be. So we're glad that in this release we've enabled the ability for you to call external services and get data back from some sort of JSON REST API. Now, this is an initial feature.
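As a rough sketch of that per-component control, here is what it might look like in Helm values. The key names here follow the conventions of the new Kyverno chart, but treat the exact keys and values as illustrative rather than authoritative:

```yaml
# values.yaml sketch for a Kyverno 1.10 install (key names illustrative)
admissionController:       # required: the heart and soul of Kyverno
  replicas: 3              # scale horizontally for high availability
backgroundController:
  enabled: true
  replicas: 2
reportsController:
  enabled: true
  resources:               # scale vertically: give reports more headroom
    limits:
      memory: 512Mi
cleanupController:
  enabled: false           # flip off any optional controller you don't use
```

The point of the sketch is simply that each controller gets its own enable flag, replica count, and resource requests, rather than one shared deployment.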
So there are some things that aren't there yet, like authentication, but in general this is going to allow you to write policies like this. Even if you aren't familiar with Kyverno, if you're watching us for the first time and you haven't seen a Kyverno policy, you can see that this is fairly straightforward. Here's our context call that we're making to some other service, which allows us to do a POST. Will you zoom in on that, please, so it's bigger on the screen? Thank you. How's that? That's much better, thank you. So here's an example of a policy. Even if you aren't familiar with Kyverno, you can probably tell what this does, and you're able to make a very effective service call very simply with this new capability. You declare what you want your method to be, and we support GET and POST. You provide some sort of data in the body; this is a key-value pair that will be bundled as JSON and sent to some endpoint. You provide your service URL. You provide a CA bundle in case you're talking to an HTTPS endpoint, because we need to establish trust. Then you just use whatever response comes back, which is stored in `result`, and you do some validation against it. In this case, this is basically a prototype that does a deny action if the `allowed` field in the result is false. What this actually allows you to do, and I'll flip over and show an example, is things like subject access reviews against the Kubernetes API server. This is a variation of the service call, but it allows you to do a SubjectAccessReview to find out whether the user or principal submitting the request has the permissions that you expect. I'm not going to dive into this; Whitney will put the links out there.
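A minimal sketch of the kind of service-call policy being described here. The service name, endpoint path, and response field are made up for illustration; the shape of the `context` and `validate` blocks follows what was just walked through:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-external-service
spec:
  validationFailureAction: Enforce
  rules:
    - name: call-authz-service
      match:
        any:
          - resources:
              kinds: [Pod]
      context:
        - name: result
          apiCall:
            method: POST                 # GET and POST are supported
            data:                        # key/value pairs bundled as JSON in the body
              - key: namespace
                value: "{{ request.namespace }}"
            service:
              url: https://my-authz.default.svc:8443/check   # illustrative endpoint
              caBundle: |-               # CA bundle to establish trust for HTTPS
                -----BEGIN CERTIFICATE-----
                ...
                -----END CERTIFICATE-----
      validate:
        message: "The external service denied this request."
        deny:
          conditions:
            any:
              - key: "{{ result.allowed }}"   # response is stored in 'result'
                operator: Equals
                value: false
```

The response JSON lands in the named context variable, and the deny condition inspects whatever field your service returns.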
This is an example policy that's in the Kyverno policies repo, and just to show that real quick, if you haven't seen the policies repo, there are 283 Kyverno policies out here. There's everything from pod security, to the subject access review I just showed, to custom resources, to you name it. These are all things you can pick up and use right now. In many cases you don't even have to modify them; you can just pick one up and plop it down. In other cases, you can gently modify it, or if that doesn't get it, you can use it as a starting point to write your own policies. And since Kyverno does this very, very simply, there's bound to be something here that you can piece together to make your use case work. So anyhow, that's the overview of the service call. The next major feature is support for software supply chain security with CNCF Notary. I'm not going to steal Jim's thunder, because he's going to talk about this in more depth and show a demo around it, but just to quickly gloss over it: Kyverno has had the ability to verify signatures and attestations with Sigstore Cosign tooling. Now, in this release, we're adding Notary support as well. The end goal is that no matter how you want to secure your software supply chain with either of these projects, Kyverno wants to meet you where you are. You could already verify signatures with Sigstore Cosign; now you can verify signatures with Notary. Jim's going to go into more detail on that, and it'll be a pretty cool demo later on. That's Notary support. The last of the major features, before talking about some of the minor ones, is that we did some major work on the generate capability. For those that may not be aware, Kyverno has this cool ability to generate resources, that is, to create entirely new Kubernetes resources based on a policy that you define.
And those resources can either be defined in the policy itself, or they can be cloned from other resources in the cluster. We did a massive amount of work on the generate rule because we heard there were some issues with it, and also there were features users wanted, like: hey, I really want my generated resources to follow the same lifecycle as my triggering resource. For example, if I label a namespace, and this is pretty common in multi-tenancy, where based on the label that you assign, you want some resources to go into that namespace, maybe a T-shirt-sized ResourceQuota: hey, I want a medium T-shirt-size ResourceQuota and a medium LimitRange. But if I later update the labels, say to change medium to large, I need those resources to adapt. Well, in this release, we've done that. So you can change those resources, and delete them if the trigger no longer matches, and a lot of other things; I'm not going to boil all of this down. In short, we did enough work on generate that we felt this was a major feature. We know that generate rules are very heavily used out in the community; they're one of Kyverno's most popular capabilities, which we continue to hear over and over. So we really wanted to make sure that we not only added new things that met people where they were, but that we delivered a better user experience, making sure things worked when they should and didn't when they shouldn't. We put a lot of work into that refactoring, and that's the generate stuff. On to some of the other, minor ones. I'll just go through these, but for some of them there will be silent cheers and rejoicing in the background if all of your microphones were up. The first one is that you can now specify operations directly in the match.
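A sketch of the T-shirt-size example just described, assuming a namespace label like `size: medium` (the label key, quota name, and hard limits are all illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-medium-quota
spec:
  rules:
    - name: generate-medium-quota
      match:
        any:
          - resources:
              kinds: [Namespace]
              selector:
                matchLabels:
                  size: medium           # the "T-shirt size" label on the namespace
      generate:
        apiVersion: v1
        kind: ResourceQuota
        name: medium-quota
        namespace: "{{ request.object.metadata.name }}"
        synchronize: true                # generated resource follows the trigger's lifecycle
        data:
          spec:
            hard:
              requests.cpu: "4"
              requests.memory: 8Gi
```

With `synchronize: true`, relabeling the namespace from medium to large can cause the generated resources to be updated or removed when the trigger no longer matches, which is the lifecycle behavior described above.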
For those that use Kyverno and know it, you previously had to use something like a precondition to match on the operation that you wanted, whether that was create, update, or delete. We made that even easier. Now, if you want to match on create, all you have to do is specify the operation in the match and you get it. Pretty simple and straightforward, but it allows writing simpler policies, making simple Kyverno policies even simpler. Policy exceptions: for those that have not seen the 1.9 release, policy exceptions are a unique feature in Kyverno that allow you to create what's called a PolicyException. It's a custom resource separate from a policy that allows you to exempt any resource from the policy or policies named in that exception. This type of decoupling is really great, because now you can allow something like a self-service portal to generate policy exceptions. Even though you may not have, or want to grant, somebody access to modify a policy or anything having to do with it, you can still allow them to create an exception, and that exception will allow those things to bypass a policy. This is really great for legitimate use cases like: hey, I know we have a policy in place that forbids this, but I have a valid reason, and I want to create a policy exception for it. You can do that with the policy exception feature. In 1.10, we enhanced it with things like background scanning support, so you can get policy reports that reflect the policy exceptions, and we made rule names more flexible with wildcards. So if you're a fan of policy exceptions, and we know a lot of people out there are already using them even though they've only been around for one version, there is even more support and enhancement around the policy exception feature in 1.10.
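Two small sketches of what was just described: matching operations directly in the match block, and a PolicyException with a wildcard rule name. Resource names are illustrative, and the PolicyException API version shown here is the one used around this release; check your installed CRDs:

```yaml
# Match only CREATE operations directly in the match block (new in 1.10)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: match-on-create
spec:
  rules:
    - name: on-create-only
      match:
        any:
          - resources:
              kinds: [Pod]
              operations: [CREATE]      # no precondition needed anymore
      validate:
        message: "Pods must carry an owner label."
        pattern:
          metadata:
            labels:
              owner: "?*"
---
# Exempt one workload from that policy; 1.10 allows wildcards in ruleNames
apiVersion: kyverno.io/v2alpha1
kind: PolicyException
metadata:
  name: legacy-app-exception
  namespace: default
spec:
  exceptions:
    - policyName: match-on-create
      ruleNames: ["*"]                  # wildcard matches any rule in the policy
  match:
    any:
      - resources:
          kinds: [Pod]
          names: [legacy-app]           # hypothetical workload name
          namespaces: [default]
```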
Policy reports: quickly, for those who do not know about them, Kyverno has this cool ability to generate a policy report in a Kubernetes cluster. A policy report is an open standard developed by the Kubernetes Policy Working Group, of which Jim is one of the members, and it's just another custom resource in the cluster. This is cool because it allows someone to see the results of a policy who may not have permission to look at the policy itself. You decouple the policy from what happens with its evaluation, so that anyone, including tools, by the way, can scrape those policy reports. We've made some enhancements to policy reports in 1.10. One of them is that if you've excluded namespaces in the resource filters, and the resource filters are just an internal mechanism Kyverno uses for its own configuration purposes, those exclusions will now be obeyed in background scans. This was one of those woohoo moments: we heard from folks that they love policy reports, but they also want those exclusions respected in background scans. We've done that for you. It is, of course, configurable, so if you want the previous behavior, no problem, you can still have it. Some other minor things: context variables are now evaluated just in time. This allows you to write policies that you perhaps couldn't before, or to make policies simpler, because context entries are only evaluated at the time they're used. That can be super handy. There's now a new message field in conditions. If you're using Kyverno and writing conditions, the policy I just pulled up is an example; I'm looking at our 1.10 policies here. In the condition is this thing right here, this key-operator-value expression, just like the expressions we're all familiar with in Kubernetes today.
We've now added a message field in there as well, so that in special cases, like the verify images rules, the value of that message field can be shown to your users when something is blocked. You can show the exact condition or expression that produced the result. Super handy, and it'll allow folks to display richer messages in a more dynamic fashion. We have some new JMESPath filters; I won't cover those because that's getting a little more into the weeds. Do check out the blog that Whitney shared, because all of this is in there with links to the docs, and you can go find all that stuff. Last thing, quickly mentioned: the docs, obviously, we did a lot of refreshing on those. I showed the policy library, 283 policies, the largest policy library of any policy engine out there, and it continues to grow even between version releases. All of those policies are now reflected on Artifact Hub. So if you know and use and love Artifact Hub, like a lot of us do, you can find all of these out there right now, which actually makes Kyverno policies the fourth largest artifact kind on Artifact Hub. That's it for the features, major and minor. I just want to bring out a couple of notes on breaking changes. We did have some breaking changes in this release. As I mentioned, one of the main ones, a result of the decomposition that provides the increased scalability, is that we had to go to a new Helm chart version. As a result, there's not a direct upgrade path, because we're going from one or two deployments to four; things aren't going to translate, and it would get really nasty. So we came up with a new Helm chart and put everything in it, along with all of our learnings from previous versions, and we've made it a lot better. There are also a couple of things when it comes to policy declarations; I won't distill all of those down. Again, check the blog and look at the release notes.
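The condition message field just described might look roughly like this fragment (the key and threshold are illustrative; the new part is the `message` alongside the familiar key-operator-value triple):

```yaml
validate:
  message: "Deployment rejected."       # rule-level fallback message
  deny:
    conditions:
      any:
        - key: "{{ request.object.spec.replicas }}"
          operator: GreaterThan
          value: 10
          message: "replica count {{ request.object.spec.replicas }} exceeds the limit of 10"  # new in 1.10
```

Because the per-condition message can reference variables, the text shown to the user reflects the exact expression that failed, which is the richer, more dynamic feedback mentioned above.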
We have a Helm migration guide out there to make this process easier, but I do just want to make you aware of those things. And yeah, that is it for the 1.10 features. Any questions out there, Whitney, before handing it over to Jim to dive more into the Notary stuff? No questions in the chat so far. If you do have any questions, everyone, please be liberal, ask your questions. We can take questions in a moment, and we also have a Q&A section at the end, so please don't be shy. And I just want to say I really love how in tune you are with community needs; I think that's really admirable and super cool. And I love the policy library, that's amazing too. Thanks, yeah. We really spend a lot of time listening to folks in the community. Kyverno is supposed to be the easy button for policy, and it's not just supposed to be easy, it's supposed to be eminently capable in a lot of different areas. That doesn't work if you don't listen to what real problems people are trying to solve. So we love hearing cases like: hey, I'm trying to do X, Y, and Z, and here's why I think it's valuable; is there something Kyverno can do? Very often the answer is: yeah, there is. And if not, that's something we'd like to take and chew on for a little bit, and then figure out how we can take something that's maybe difficult to solve, or involves multiple tools, or maybe just can't be done without a bash script, and make an elegant way to do it, but also make it simple. That's what we really try to do in the Kyverno project: make this stuff simple, but also make it powerful. That's evident and impressive. I do have one question about it. You say you love hearing from the community; what is the space where you hear from the community? We hear a lot of feedback in different channels, but the biggest one is probably Kubernetes Slack.
So we have Kyverno channels on Kubernetes Slack as well as CNCF Slack. We're on Twitter, and obviously we're on GitHub. We hear a lot of feedback on all of those channels, but probably our most active space is the Kyverno channel on Kubernetes Slack, and in the documentation there's a link that can take you there. So if you want to go check those things out, you can subscribe to the Slack channel. We have a community meeting; there are all sorts of things on our community page that you can go check out. In a similar vein, we love contributors, and if you want to get started as a contributor, we have all sorts of resources to help: not just developer docs, but issues labeled good first issue. We also have a Kyverno dev channel on both Kubernetes Slack and CNCF Slack, and we would love to talk with you and help you out with any challenges you may have in contributing to Kyverno. So there are lots of ways to get in touch with us. We're very responsive, as you can see, and we like working with our community and our contributors. Amazing. Jim, are you ready to do your section? Yes, absolutely. Thank you, Chip. Thanks, Whitney. So let me share my screen. We'll start by going back and describing what the image verification capability in Kyverno was and how we have changed it in 1.10, and I'll also talk a little bit about how this will evolve further. First of all, you see this warning in the documentation: image verification is still a beta feature. One of the reasons was that we were anticipating some of these changes coming in for Notary. The Cosign support landed a couple of releases ago, I believe, and has matured. I'll explain a little bit about what Cosign and Notary are and how we're utilizing them in Kyverno.
But just stepping back a bit, for those of you who might not be familiar with software supply chain security or image verification and why all of this is even necessary: what's interesting is that, more and more, production systems are getting better security treatment. We have better tools, especially with the cloud providers as well as the vendor community. And as production systems get secured, the other interesting thing that's happened is that with cloud native tooling and infrastructure as code, build tools and CI/CD tools have become more powerful, and in many cases have the capability to deploy into multiple production systems. Combine that with another factor: more and more applications are leveraging open source, leveraging other services or modules, and getting composed from, in some cases, hundreds if not thousands of different packages. What we're seeing is that attackers have started exploiting these CI/CD tools and build systems as a way of penetrating multiple production systems. With attacks like we've seen in the last few years, like SolarWinds, and other issues like the Log4j vulnerability, what's been happening is that by injecting malicious code at build time, compromising build systems, or running things which are not authorized, attackers are able to leverage these powerful capabilities of CI/CD and infrastructure-as-code tools to penetrate production systems. So what can we do, and how do we solve that? The good news is that in the open source community there's a lot of activity to address these issues.
Sigstore Cosign is part of the Linux Foundation, and they have developed some great tooling to help with these software supply chain attacks, and I'll explain that a little bit. What Kyverno does, since it's an admission controller and a policy engine, is a few things to make sure the container images you're deploying into your clusters are signed: that there's integrity, checks on where an image was built and who built it, and whether it's a trusted image that should be allowed into your cluster. You can even check, for example, whether an image has an associated vulnerability scan report, or an associated SBOM, a software bill of materials. So, things which really prove that the image was not only built on a trusted system, but hasn't been tampered with and has the right security posture to be allowed into production. How we do all of that is through the verify images rule in Kyverno, and that rule has two main components. It takes a set of attestors; think of attestors as authorities for proving the identity of your image. These could be keys or certificates; Sigstore allows something called keyless, which is a way of using OIDC and short-lived certificates to verify that identity. Attestors are the way you prove that the thing that built your image is something you trust. And then attestations are metadata that you produce along with your builds. They could be scan reports or provenance data, and provenance just means proving where the image came from.
It could be information about your environment and your workflows, say, if you're using GitHub Actions, the workflows which built that image, or any other metadata you wish to check, even a software bill of materials. Within attestations themselves, of course, you have the same problem as with images: how do you know who built this attestation? So again, you can use attestors to verify the identity of the builder or the build tool for the attestation. And then you can check certain conditions, and Chip briefly touched on conditions in Kyverno and how those work; you can go fairly deep into the JSON data in the attestation itself to check various things and prove that the image can be trusted and deployed. So the basic structure of the rule is: you're verifying images, and you have attestors to check the signature of the image itself. When we say an image is signed, it really means that the manifest stored in the OCI registry for that image has been signed, and you're verifying that against the signature and the public key that you want to trust. Then you have one or more attestations which you can optionally check. There are a few other things Kyverno helps you do. It will automatically mutate tags to digests. This is very important for security, because if you're using tags, and tags are mutable, an attacker can present a particular tag and then switch it in some time window, so that you're now running an untrusted image which you thought was previously trusted. Kyverno can also make sure that all images, from specific sources or from all registries, are signed and verified before being allowed into your cluster. So this is what we had before, with Cosign.
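Putting those pieces together, a verify-images rule pairs attestors (who you trust) with optional attestations (what metadata you require). A hedged sketch, with the registry, key, and predicate values all illustrative:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-structure
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-image
      match:
        any:
          - resources:
              kinds: [Pod]
      verifyImages:
        - imageReferences: ["ghcr.io/myorg/*"]   # illustrative registry/repo pattern
          mutateDigest: true                     # rewrite mutable tags to digests
          attestors:                             # who is allowed to have signed the image
            - entries:
                - keys:                          # could also be certificates or keyless
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
          attestations:                          # what signed metadata must accompany it
            - predicateType: https://slsa.dev/provenance/v0.2   # e.g. build provenance
              conditions:
                - any:
                    - key: "{{ builder.id }}"    # dig into the attestation's JSON payload
                      operator: Equals
                      value: "https://github.com/actions"       # illustrative builder identity
```

The attestation block can itself carry its own attestors, which addresses the "who signed the metadata" question raised above.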
And now what we've done is maintain pretty much that same rule structure, but add Notary here as an additional type of signature and verification that Kyverno can use. I'll show what this looks like, but just before we get to that, let me briefly mention what Notary is, and I'll also elaborate a little on where it differs from Sigstore Cosign. Notary is a CNCF project. It works with X.509 infrastructure, and underneath it uses OCI standards, like the artifacts and references which came out in the 1.1 version, I believe, of the OCI spec. I'll show what that looks like in an actual registry, because it's interesting to see how Notary is managing some of this metadata and the signatures themselves. But first, if you just want to sign something with Notary: Notary comes with its own command line tool, much like Sigstore. So if I do `notation sign`, I can sign any image. This image is already signed, so I'm not going to rerun the signature, but once I sign the image on the command line, it signs that image and pushes the signature blob up into the registry. Now if I do `notation inspect` just to show you that this image is signed, and by the way, here I'm using the digest, because otherwise it warns me that the tag is mutable and the recommendation is to use the digest, what I see is that this image, and this is my digest for the image, has a signature associated with it. Here is the signature digest itself, and then notation gives me a lot of helpful information about the signature, who created the certificate for it, and other things.
So now, with this, even on the command line, I can also run notation to verify this image, and then we'll take a look at what that looks like in Kyverno. If I just do a verify on the signed image, notice it tells me, hey, you might not want to use the tag, use a digest instead, but it gives me back the digest and says it verified the signature based on how I've configured the trust store and trust policy on my system. I have a question that might be silly, but is notation the CLI for Notary, or the CLI for Kyverno? Notation is the CLI for Notary. Yeah, it's a good question, because it is a little bit confusing. Notary is the project; Notation is the command line tool, which is a sub-project, one of the repos that's maintained and delivered as a CLI binary. Excellent, thank you. Sure. So that was just a quick look at how you would verify from the command line, and of course, if I do the same on the unsigned image, it tells me that there is no signature associated and to make sure my artifact is signed. These are just test images we're using. So let's take a quick look at how you would do this with Kyverno. Going back into Kyverno policies, what I'm going to demo first is this policy, and I'll do a walkthrough of how it's laid out. Here, what we're doing is, first of all, saying that we want to enforce validation, which means if there's a failure, we're not going to ignore it in the webhook; we're going to block the image. We're giving it a generous timeout here: typically a check takes just a few milliseconds, but we're saying 30 seconds in case the registry is delayed. And then the failure policy for the webhook here is Fail, which means that if the webhook is unreachable, we block new images from being deployed.
Notice here I'm writing this policy on a Pod, but Kyverno will automatically generate rules for the different Pod controllers from the Pod policy; that's the autogen feature in Kyverno, which helps do this. So I can even have a Deployment, and as long as it matches the pattern that Kyverno is looking for, it will enforce the same policy. Then, in my context, I'm loading a ConfigMap which contains the public key that I want to use for verification. And I'm saying: verify images with Notary, and for any image that matches this pattern, which is our test image, I want one attestor, so one key, to make sure that the image is signed with that key. Here I'm using this key from my ConfigMap, and the key name for it is just production. If I look at my ConfigMap, this is what it looks like, and I can inject this key over here into my policy. So let's take a look. I think I already have this configured, but I'll quickly check. If I get cpol, and cpol is short for ClusterPolicy, I have this policy configured. Now if I do a kubectl run, let's try the unsigned image first, and I'll just dry-run it so it's not really creating anything, it gives me a failure as expected, similar to what we saw before, saying to make sure that the image itself is signed, because it couldn't find any signature. If we go back and run the signed image, what we expect to happen is that it checks the signature with the key, and the pod is allowed to be created. So pretty much as expected, no major surprises, and Kyverno is able to run this check. And I don't need any other extension or anything else complicated; this is all built into Kyverno itself to do the signature verification.
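The Notary demo policy being walked through looks roughly like this. The image reference, ConfigMap name, and certificate key are placeholders matching the demo narration, not the exact policy on screen:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-notary-signature
spec:
  validationFailureAction: Enforce   # block on failure rather than just audit
  webhookTimeoutSeconds: 30          # generous; checks usually take milliseconds
  failurePolicy: Fail                # if the webhook is unreachable, block admission
  rules:
    - name: verify-signature
      match:
        any:
          - resources:
              kinds: [Pod]           # autogen extends this to Pod controllers
      context:
        - name: keys
          configMap:                 # load the trusted certificate from a ConfigMap
            name: keys
            namespace: default
      verifyImages:
        - type: Notary
          imageReferences: ["myregistry.azurecr.io/test/*"]   # illustrative pattern
          attestors:
            - count: 1               # one attestor must verify
              entries:
                - certificates:
                    cert: "{{ keys.data.production }}"   # ConfigMap key named 'production'
```

A dry-run of an unsigned image against this policy should be rejected at admission, while the signed image passes, which is exactly the behavior shown in the demo.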
Like I mentioned, what I also wanted to show is what this image looks like, and I'll explain some of the things we're adding to this feature and what's coming in release 1.11 in terms of how we will also do things like attestation checks, right? Going back to this image, I'm using another command-line tool, and because Notary is very much standards-based, you can use any registry client, any tool that understands OCI artifacts. Here I'm using a tool called ORAS, running a command which pulls down the image we just checked and inspects everything attached to it. A few interesting things here. First of all, I see there's a signature on this image, this is my OCI artifact type, and it gives me a digest for that signature. Then it tells me there's a scan report attached to the image; I have the digest for it, and in addition to the scan report, there's a signature associated with that, right? So the interesting thing is that on the same image, I can check the signature, but I can also check this additional metadata and make sure the metadata was signed by authorities I trust. The last thing here is a CycloneDX SBOM, which is also attached to the image and, like the scan report, is also signed. So let's look at all of this in a registry. Here, just as an example, I'm using Azure Container Registry, which has a preview of OCI artifact support. Let me pull this up. Can you make it bigger? Yeah, excellent. There we go. So one thing I wanted to show here, which is interesting, right?
Going into my artifact preview, so let's go back a bit into the image itself: if I look at the manifest for my image, I see it was built following the Docker distribution spec, and it shows me the digest, et cetera, and the layers for that image. If I go to the artifact preview, much like we saw on the command line, I see there's a signature and there's a scan, and underneath the scan its own signature. By the way, this is a different image, so it looks slightly different, but it's a similar concept to what we were just looking at on the command line. And if I go in and look at my signature itself and the manifest for it, one interesting thing, and this is new: there's this "subject" block here, which points back to the digest of my original image. This is how, when I looked at this on the command line, a command-line tool can figure out that this artifact is associated with a particular image, that it's actually attaching something to that image, which can then be verified by policies. The combination of all of this makes it really powerful, because now I can check my image artifacts, anything attached to an image, and check whether they're signed. And in the next release of Kyverno, much like we do today with Cosign, we will be able to check the details, the contents, of these signed, attached artifacts. All of this just works right now with Kyverno. And going back to Notary (we've mentioned that we also support Sigstore Cosign): Notary uses X.509 certificates underneath. So if you have standard PKI infrastructure, or if you're using a KMS-type system to build and deliver these certificates, you can continue using those.
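A hedged sketch of what such a signature manifest looks like in the registry is below. Real manifests are JSON; the digests and sizes here are placeholders, and the layer media type is one Notary commonly uses for its JWS signature envelope. The "subject" field is what ties the artifact back to the image it signs.

```yaml
# Sketch of a Notary signature manifest as stored in the registry
# (actual manifests are JSON; digests/sizes below are placeholders)
mediaType: application/vnd.oci.image.manifest.v1+json
artifactType: application/vnd.cncf.notary.signature
layers:
  - mediaType: application/jose+json   # the signature envelope blob
    digest: sha256:1111111111111111111111111111111111111111111111111111111111111111
    size: 2048
subject:
  # Points back to the manifest digest of the signed image
  mediaType: application/vnd.oci.image.manifest.v1+json
  digest: sha256:2222222222222222222222222222222222222222222222222222222222222222
  size: 1234
```

A client like ORAS walks these subject references to list everything attached to a given image digest, which is exactly what the `oras discover` style output in the demo shows.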
And there is a mechanism to have plugins for other external KMS systems, other things you would build in. One quick thing to show here: on my local system, where I have the Notation command-line tool installed, I have a trust policy configured as part of my Notary configuration. This trust policy says that if my image pattern matches this test image we were using, use the keys in my CA trust store with the name "test". But I can also have a KMS system; here I have the AWS external plugin, which says that if my image matches this other pattern, with kyverno-demo, then go ahead and use this plugin to pull the keys automatically and verify, right? Chip mentioned the extension feature we added in Kyverno 1.10, and the intent there is to use that extension feature to run these kinds of external plugins, to do verification for images where needed. So overall, Kyverno will provide the ability either to check signatures built in, locally, just like the demo, or to run other plugins which can do additional checks and additional verification. Right, so that's a quick overview of how Notary works, some of the things it does in terms of OCI artifacts, and how that's laid out. It's pretty cool to be able to see some of these details of the image, and we will be adding more capabilities to Kyverno to verify this type of data. One quick thing to show before I wrap up this segment. With Cosign, one question that comes up is: okay, we have Sigstore Cosign, we have Notary, why would I use one or the other?
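The Notation trust policy described above might look like the sketch below. The real file is JSON (typically `trustpolicy.json` in the Notation config directory); the registry scopes, trust store name, and policy names are assumed values, not the exact ones from the demo.

```yaml
# Sketch of a Notation trust policy; the actual file is JSON
{
  "version": "1.0",
  "trustPolicies": [
    {
      "name": "test-images",
      "registryScopes": ["ghcr.io/example/test-image"],
      "signatureVerification": { "level": "strict" },
      "trustStores": ["ca:test"],
      "trustedIdentities": ["*"]
    }
  ]
}
```

Each policy entry scopes a verification level and a set of trusted certificates to a registry/repository pattern, which is how one pattern can use a local CA store while another delegates to a KMS plugin.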
What are some of the pros and cons? Like I was explaining, Notary works with X.509 certificates, which integrates very well with your KMS or with other systems. But Cosign offers the ability to do a few other things, like keyless signing. Sigstore has three major components: Cosign, for signing and verification; Rekor, which is a transparency log where information is stored for additional security checks; and Fulcio, which provides the keyless support with OIDC-based certificates that are short-lived and can be generated as part of your CI/CD tools. So it works really nicely if you're using something like GitHub Actions, which has an OIDC provider, or GitLab, which also provides that. But one thing to realize is that when you're signing something with Cosign (and what I'll do here is just simulate it; I'm not actually going to sign this), I see a warning telling me that Cosign is going to send some information to the transparency log. So it's something to be aware of and keep note of: when you sign anything with Cosign, there is this other component which gets used by default, the public transparency log, which can be configured to use your own. As a demo of what that looks like, I'm going to go to one test image I have, where as part of the pipeline we have signing configured, right? Can we make that bigger, please? Sure. Thank you. No problem, yeah. So we have signing configured, and you see it's using keyless: it generates ephemeral keys, it does some checks, and then there's this one line which says "tlog", and tlog is the transparency log, saying a Rekor entry was created with an index, and there's a number here, right?
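For comparison with the Notary rule shown earlier, a Kyverno rule verifying a keyless Cosign signature might look like the sketch below. The image pattern, workflow subject, and issuer are assumptions for illustration; the Rekor URL is the public transparency log mentioned above.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image-keyless
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-keyless
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/example/app*"   # assumed image pattern
          attestors:
            - entries:
                - keyless:
                    # Identity of the CI workflow that signed (assumed values)
                    subject: "https://github.com/example/app/.github/workflows/*"
                    issuer: "https://token.actions.githubusercontent.com"
                    rekor:
                      url: https://rekor.sigstore.dev   # public transparency log
```

Instead of a key or certificate, the attestor pins the OIDC issuer and the signing identity, which is the keyless model Fulcio enables.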
So Sigstore provides a tool to search Rekor entries. Let's go look at a particular log index; I just copied the log index I saw in my build log. And what I see in Rekor is the record that was created, which matches exactly what was signed. Now, this is pretty cool, because I can fetch this record and do other checks on it. But it's also something to be aware of: if you're using Sigstore and its way of verifying trust, there is this additional record that's created, and when you're configuring a policy, you can also fetch and check this record in addition, right? Whereas if you're looking at Notary, it doesn't have keyless, and it doesn't have this additional transparency log. It's relying on X.509 infrastructure and KMS. Or, of course, like in our example, we just passed in the certificate's public key inline, which we use for the check, but this could be coming from an external system if you're using a certificate management system or some other key management system to manage your keys, right? So, a few interesting differences. There are also some differences in storage. If we go back to an image itself, here I have an image that's signed both with Sigstore and with Notary. The Sigstore image signatures are stored under this ".sig" tag extension, whereas with Notary, like I showed with the ORAS command line, they're stored as OCI artifacts, right? And I believe at some point Sigstore will also move to storing and managing signatures as artifacts. So, just a few minor differences in how the signatures, as well as attestations, are stored, retrieved, and managed on these OCI images.
So that was somewhat of a deeper dive into Notary, but we're pretty excited to have this support. With Kyverno, again with a fairly simple policy, you can already start doing signature checks in 1.10, and we will be adding more features, already well into development and which will be part of 1.11, to do additional checks for attestations and other metadata you attach to your images. So what I'm hearing is: with Notary, it uses traditional PKI infrastructure, so you have the responsibility of managing your private keys and keeping them up to date. But with Sigstore, you're keyless, so you don't have that burden anymore, but then your records are in this public log, which, yeah. So there might be, uh-huh. Right, there is an additional component, the transparency log, that Sigstore introduces. You can run your own private version of that, but it's one other infrastructure component that would be required. And if, say, I was switching from one to the other, is it possible to make policies with Kyverno that use both types of verification, like in the same policy? Yes. And we do see that potential, at least in the same cluster: perhaps some images are signed with Cosign, and Cosign is fantastic for open source projects, right? It works really well, and it integrates with GitHub, where most of the open source CNCF projects live. So there will be images which can be verified through Sigstore and Cosign, but perhaps there are other images signed with Notary using PKI and X.509 infrastructure, right? And from a Kyverno perspective, we will of course continue to support both, see how the standards and specs evolve, and add features accordingly. Super cool.
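One plausible shape for the mixed setup just discussed is a single ClusterPolicy with one rule per signing tool, as sketched below. All image patterns and key material here are hypothetical placeholders, not values from the demo.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-images-mixed
spec:
  validationFailureAction: Enforce
  rules:
    # Images signed with Cosign (Sigstore), verified with a public key
    - name: verify-cosign
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/example/oss-app*"        # assumed pattern
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      (placeholder)
                      -----END PUBLIC KEY-----
    # Images signed with Notation (Notary), verified with an X.509 certificate
    - name: verify-notary
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - type: Notary
          imageReferences:
            - "registry.example.com/internal-app*"   # assumed pattern
          attestors:
            - entries:
                - certificates:
                    cert: |-
                      -----BEGIN CERTIFICATE-----
                      (placeholder)
                      -----END CERTIFICATE-----
```

Each rule only fires for images matching its own `imageReferences` pattern, so Cosign-signed and Notary-signed images can coexist in the same cluster under one policy.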
We don't have any questions in the chat right now. Is there anything you want to embellish on? Not really; maybe just that we still have to cover a little bit of the future for Kyverno. So if there's nothing left on signatures and the new stuff in 1.11, I'll flip back over and cover that last piece, and we can see if any questions arise during the course of that. Absolutely, let's do it. All right. So I realized I was remiss earlier: I didn't talk about the scale testing we did during the course of the decomposition of Kyverno 1.10. One of the things we've done in addition to that, or rather complementary to it, is a pretty extensive set of scale tests with Kyverno, at least for the two main components, the admission controller and the reports controller. As Whitney showed earlier, there's a link out there that will point everybody so you can go look at these scale tests. What this shows is that Kyverno is able to handle really high scale, and we see this very often in the community, where folks are using it in clusters of 100,000 pods, or sometimes more. So we put together a test case where we measure things like latency, resource consumption, and storage consumption for these controllers. You can go look at the docs and what we've done in these cases. Because Kyverno has so many capabilities, we had to limit it to the most common policies, or the most common types of policies, and that's what's reflected in these tables. There are a lot of numbers there, and I'm definitely not going to try and distill all of it down. But if you're interested in how Kyverno scales, and in what resources you can expect to see in your environment as of 1.10, then this is a great way to get that baseline. Of course, it's no substitute for monitoring your own cluster with your own tools.
But this is a good starting point so that you understand a couple of things. Number one, what should I be allocating? Because you don't necessarily want to waste resources; over-allocation creates waste in clusters. And number two, you can check the behavior you're seeing against this and ask: is this normal or not? Those are both very important pieces of information to get. Like I mentioned, we published this for the admission controller and the reports controller. A lot of cool facts and figures, but you can see, in the case of the reports controller, we've scaled this up to, let's see, where is the one for pods? A hundred thousand pods. You can see what that looks like if you really want to see how this performs. So anyway, that's the last piece covering 1.10. If there are still no questions out there, I'll flip over and we'll talk about what's next for Kyverno in the future. Still no questions, so let's hear about the future. All right. So we got 1.10 out of the way, and we're obviously still not done. Sorry to interrupt. Thank you for the reminder. Thank you. So we're just taking a look at the milestone here. For 1.11 in Kyverno, let's talk about what the plan of record is. Plans change, but as the saying goes, planning is still essential. So we'll talk about the current plans for 1.11. The first thing is Kubernetes ValidatingAdmissionPolicy support. This is also known as CEL admission, and it was introduced in Kubernetes 1.26 as an alpha feature; it's still alpha. What it allows you to do is define basic validation policies inline in the Kubernetes API server. Kyverno has had its finger on the pulse of the Kubernetes nation, as it were; we're listening to what's going on, and we're going to have support for this in 1.11. And this could take more than just one form, by the way.
This could be, for example, the Kyverno CLI having the ability to test these validating admission policies in your pipeline, the sort of shift-left movement. So we're looking at that. We're looking at being able to generate the policy reports I talked about earlier as a result of these validating admission policies. We're also looking at you being able to express Kyverno rules using CEL, the Common Expression Language, if you want, and maybe even Kyverno in the future being a controller of these validating admission policies. So depending on the type of policy you create: if it's appropriate for Kyverno, we'll leave it as a Kyverno policy; otherwise, we'll translate it to a validating admission policy, so you get the best of both worlds. In any case, we don't know exactly what we're going to do fully, but we do know we're going to do something, and work is already underway in a couple of different regards on validating admission policy support. It will still be alpha, but we are going to intercept it. So that's the first thing on the roadmap for 1.11. Next, Pod Security admission enhancements. We are hoping to enhance Kyverno's ability to pick up the upstream Pod Security admission library. For a little backstory: Pod Security Admission, as you may know, is the current in-process policy enforcement mechanism that's the successor to Pod Security Policy in Kubernetes. It came out in 1.25. It's still here, we like it, and it's very good and very flexible. Kyverno, however, pulls that library in internally and makes slightly different use of it to give you more flexibility. We want to enhance this even further to allow you to do things like per-control, or sorry, per-field exemptions. We need some upstream help from the Kubernetes project to do that, but we hope to have it wrapped up and out the door in 1.11. Next, OCI artifacts and references.
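A minimal ValidatingAdmissionPolicy, as it exists in the Kubernetes 1.26 alpha API, is sketched below. The replica limit and resource selection are assumed example values; note that a separate ValidatingAdmissionPolicyBinding is also required before the policy takes effect.

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: limit-replicas
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # CEL expression evaluated in-process by the API server
    - expression: "object.spec.replicas <= 5"
      message: "replica count must not exceed 5"
```

Because the CEL expression runs inside the API server itself, no webhook round trip is needed, which is what makes translating suitable Kyverno rules into this form attractive.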
So Jim talked a little bit about this when it comes to Cosign and Notary support. We're hopefully going to have more support around OCI artifacts and references in 1.11. Same thing with Cosign and Notary updates: Cosign 2.0 is a pretty significant change from Cosign 1, so we're working on being able to support Cosign 2. Again, the idea behind this is that we don't want to be ideological about which tools you use for your own software supply chain. We want to try and support all of those tools, or at least the major ones we can. If you want to use Cosign, we want to support you using Cosign; if you want to use Notary, we want to support you using Notary. We don't want to pick a side; we want to offer you the option, and to do both of those things not only easily (that's one of Kyverno's great hallmarks, being able to do all this cool stuff easily, without having to write an esoteric language) but also while keeping on top of the latest changes. This stuff is moving pretty quickly; we can't just hang our hats on a version we did two years ago, or, what am I saying, even two releases ago. We've got to keep up and make sure we're staying current with the changes in the software supply chain ecosystem. We also hope to get to some CLI refactoring. Whitney, you asked a question earlier along the lines of, is that the Kyverno CLI? Good question; it wasn't, but Kyverno does have a CLI, as folks may or may not be aware. The Kyverno CLI allows you to do things like test policies in a pipeline, and even use them in things like GitHub Actions or whatever your own tooling is. So we want to do some refactoring to fix some issues and address some things in the CLI that will lead to a better user experience and make it more future-proof. Policy reports: we're going to be making some changes to policy reports to hopefully make them consume less space and be a little more performant.
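As an illustration of the pipeline testing the CLI enables, a Kyverno CLI test file might look like the sketch below. The file names, policy name, and resource name are assumptions matching the earlier Notary policy sketch, not artifacts from the demo.

```yaml
# Sketch of a Kyverno CLI test file (run with: kyverno test .)
name: check-image-notary-test
policies:
  - policy.yaml          # the ClusterPolicy under test (assumed filename)
resources:
  - resource.yaml        # a Pod manifest using a signed test image (assumed filename)
results:
  - policy: check-image-notary
    rule: verify-signature-notary
    resource: test-pod
    kind: Pod
    result: pass         # expect the signed image to be admitted
```

Running this in CI shifts the policy check left, so a signing or policy regression fails the pipeline before anything reaches a cluster.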
A ton of work, by the way, has already gone into that process in Kyverno 1.10. So if you were a user of a previous version and you upgrade to 1.10 (again, please read the release notes and the migration guide if you intend to do that), you may already see quite a big boost in policy report performance in 1.10. Cleanup: we didn't talk about this, because we didn't really make any changes in Kyverno 1.10, but Kyverno also has the ability to clean up resources very easily based upon a policy definition. That's been there since the previous release, 1.9. We want to take this to the next stage in 1.11 by allowing you to do things like individually label resources so they can be tracked and picked up outside of a policy definition; so, just more flexibility around that. And also policy exceptions, which I did talk about, along with the enhancements we made to them in 1.10. We want to go further with this in 1.11, and I honestly can't remember exactly what we're supposed to be doing in 1.11, but we're going to enhance it even further and make it better. I think we list it as currently alpha, or maybe beta, status, and that's just because we want to make sure it's really rock solid before we stamp it and send it out the door. So we're going to make future changes to that in the next release and hopefully get to that point sooner rather than later. So that is the Kyverno 1.11 story in a nutshell, along with what we've done in 1.10. We've got a couple of minutes left; glad to take any questions or have any other future discussions. I really like the cleanup policy. I know in this space there are some tools that help you group your resources in terms of application, because Kubernetes doesn't natively provide that. There are tools like Carvel kapp-controller or Crossplane that help you label every Kubernetes resource associated with the same application and associate them with one another. So I can see how that combines with the cleanup policy.
It could be very, very valuable. Yeah, that's absolutely true. And Kyverno takes it a couple of steps further by not just looking at things like labels and other metadata; it can actually dive into the resource and then group them by whatever you want. A couple of use cases around this that we've seen be highly valuable are things like cleaning up bare pods. You know, we launch a bare pod to do things like a kubectl exec, or we need to curl something, but, as typically happens, those things get left behind. Well, Kyverno can find not just all pods that might not be useful, but only the bare pods, and clean those up. Or finding old resources, things that maybe weren't labeled the same way, and saying, hey, I want to remove things that are older than two months, or something. It can do that as a component of the cleanup policy. So we want to take that stuff further in the next release by doing it on an individual basis. This is kind of a funny question to ask as the very last thing, and it probably should have been up at the beginning, but what are the most popular use cases, or a couple of the most popular use cases, you see for Kyverno generally? Oh, there are so many. Jim, do you want to take a stab at it first? Yes. I think it starts with simple stuff: pod security, of course, tends to be a front-runner, but even just proper hygiene in clusters, like labeling and other things, right? Those tend to be very popular. And, like Chip mentioned, generate policies, just like cleanup; it's necessary to generate a few things by default, and Kyverno does a great job of that too. And then, of course, supply chain security has become fairly prominent, a hot topic, recently, yeah. So time is up. Let's recap how people can get involved, where to go if they have further questions, and also any parting statements y'all want to make. Sure, just a way to get in touch and get involved.
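The bare-pod use case described above can be expressed as a cleanup policy along these lines. This is a hedged sketch: the cron schedule is an assumed value, and the condition treats "bare" as a Pod with no ownerReferences, meaning no controller manages it.

```yaml
apiVersion: kyverno.io/v2alpha1
kind: ClusterCleanupPolicy
metadata:
  name: cleanup-bare-pods
spec:
  schedule: "*/30 * * * *"   # cron schedule; interval assumed for illustration
  match:
    any:
      - resources:
          kinds:
            - Pod
  conditions:
    any:
      # A "bare" pod has no ownerReferences, i.e. no controller owns it
      - key: "{{ target.metadata.ownerReferences[] || `[]` | length(@) }}"
        operator: Equals
        value: 0
```

On each scheduled run, the cleanup controller deletes matching pods, which is how leftovers from ad hoc `kubectl run` or debug sessions get swept up without affecting controller-managed pods.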
So Whitney put a link out there to the community page. You can go check out the Kyverno community page on the website, and you can be a contributor, or you can join and just have a discussion. There are links from there, jumping-off points, to all sorts of other locations: the developer docs, Slack, and several others I mentioned. So that's a great place to get started. And if there are any other questions, or for even more information, join the Kyverno Slack channel on Kubernetes Slack. We've got more resources out there, and we'll be glad to point you to anything else you're interested in knowing. We do have one quick question in chat, and then we need to say goodbye: is Kyverno used much in OpenShift? Yes, it's a good question. Kyverno is pretty heavily used; we see a lot of utilization in OpenShift environments, and not just OpenShift, but a lot of the cloud providers as well. And by the way, Kyverno actually has OpenShift policies pre-built in the policy library that are ready to go. So if you want to use Kyverno on OpenShift, there are some policies you can just download and start using right now, or at least, like I mentioned, use them as a starting point to build or customize your own. Also do check out the docs, because we have a separate docs section just for OpenShift users, with some things you might want to know before you deploy Kyverno in your OpenShift environment to have a good experience. So check out the docs, check out the policy library, and come talk to us on Slack if you have any other questions. I do like, as you mentioned earlier, the extra work you put in to be tool-agnostic, to not have opinions about what tools people want to use, and to support as many as you can. I think this is evidence of that, and I think that's a really cool thing you all do. Thanks. All right, I think with that, we're going to say goodbye. Thanks, everyone, for joining the latest episode of Cloud Native Live.
It's been great to have Chip and Jim here to talk about Kyverno. We also really love the interaction and the questions and comments from you in the chat. And thank you to everyone who watches the recording. We bring you the latest in Cloud Native code every Wednesday at this time, 11 a.m. Eastern. And that's all for today. Thank you so, so much. I appreciate you coming and sharing your knowledge, and I appreciate everyone watching. Thank you, goodbye. Thanks for having us. Bye.