Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. Hi, I'm Annie, and I'm going to be your host today. I'm a CNCF ambassador as well as a senior product marketing manager at Camunda. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will create things, and they will answer all of your questions. You can join us every Wednesday to watch live, as you are doing now, or watch the recording afterwards. This week we have a few great speakers talking about protecting software supply chains using Kyverno. And as always, a reminder for everyone: KubeCon + CloudNativeCon Europe is next week, so I'm really looking forward to that one. Check it out, as you saw in the banner at the beginning. As always, this is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct; basically, please be respectful of all of your fellow participants and presenters. With that, I'll hand it over to our speakers to kick off the presentation today. Thank you, Annie. Yes, hi everyone. This is Jim Bugwadia. I'm one of the maintainers of Kyverno and a co-founder and CEO at Nirmata. I want to chat today mostly about the new features we are introducing in Kyverno 1.7 around software supply chain security. I'll share my screen and pull up a few sites as we talk. I'll start with some background on software supply chain security, why this matters, and what some of the basic concepts are. We did a previous live stream where we covered this at a very high level, but I just want to lay the foundation again and go over some of the basics.
So in the last 12 to 15 months, we've all seen the headlines about software supply chain attacks and breaches happening in the software supply chain. And it's very interesting to look behind the headlines and dig into what's going on. As we've become better at protecting production systems, and as we're leveraging more managed services, which have secure defaults and other security settings that are easier to set up, production systems seem to be better protected. As that happens, attackers are naturally looking for other ways to find vulnerabilities or get into systems. The other trend we've been seeing is that with continuous delivery becoming more and more popular, with workflows like GitOps, you have CI/CD systems which now hold credentials to deploy into production. And these CI/CD systems are in many cases deploying things dozens or even hundreds of times a day as they push to production. You combine those factors, and what's happened is attackers have found that software supply chains, like our build systems, are a weak spot. By attacking those, they have ways of getting access to production systems and getting malicious code or other things deployed into production. So how can we protect against this, and what is happening in the open source and other communities? There's a lot of activity within CNCF projects on this. In fact, Kubernetes with 1.24 just announced that they have adopted Sigstore and tools like Cosign to sign all of the Kubernetes binaries. And there are projects also leading Kubernetes toward SLSA level 3 compliance. SLSA is an emerging standard; it's an acronym that stands for Supply-chain Levels for Software Artifacts.
And it provides a way of measuring, or checking for, the different things you would want to do within your CI/CD and build systems to be able to protect them. I'm not going to go into a lot of detail on those, but very briefly, there are four SLSA levels, and each level does certain checks across the build, the code, and the artifacts created. But if you look at SLSA, it's very much focused on the CI/CD system. So, going back to the question: how do we know that we're not allowing malicious code into production? One way of thinking about these concepts, and I'm pulling up a blog post that was published on the CNCF site, is to break the build process down into different steps. Every build system is going to produce artifacts, whether binaries, container images, or other things. In addition to artifacts, you'll also have metadata which gets produced. This could be vulnerability scan reports, or software bills of materials (SBOMs); all of this can get produced in the build system. Now, with these SLSA levels, what you can do is create signed attestations, which basically verify that metadata along with the provenance data, which means the build system is a trusted system and you are verifying the source of the build. All of this can now be pushed into OCI registries using tools from Sigstore, which is fantastic. Once you have this data in OCI registries, the final step to all of this is policies. That's where Kyverno comes in, and that's what we're going to talk about mostly today.
And the audience is already really fired up and asking questions, so we can actually take an audience question here, which fits the agenda nicely. Maxim is saying: I have a GitOps problem with Kyverno; when it tries to apply policies for CRDs it doesn't know yet, it doesn't retry, and my system is unprotected by the time the CRDs are synced. Any ideas? Yes, so we have worked with the Flux community, and I believe there was a similar question from Argo CD. Kyverno policies do have a ready state, so GitOps controllers can check that state and, based on whether the policy is ready or not, delay other artifacts from being introduced. Certainly reach out on the Kyverno Slack and we can help you with that. This was an issue that I recall had been addressed, but if there are any other issues, they can certainly be handled there. Thank you for the question. Yeah, so let's briefly introduce the existing Kyverno functionality. I'm just going to look at image verification in 1.6 so we understand what that policy looks like, and then we're going to start looking at some of the newer features and what is possible. The very basics of an image verification policy are that you're checking for signatures and attestations. I'll explain the structure of what we came up with in 1.7 to simplify this, but in 1.6 we had this verifyImages rule, shown here: we're checking for a pattern which matches certain images, and then we're verifying that the image is signed using a public key. So that's a very simple policy to check that your container image is actually signed.
What will also happen internally when this rule verifies an image is that the tag will be replaced by a digest, just for additional security, because tags can be mutable. A more complex example is, in addition to checking for the signature, to also check for attestations. Attestations, again, are signed metadata which you produce with tools like Cosign; it could be any JSON blob, such as an SBOM or a vulnerability scan report. In this policy, what we're doing is checking a custom attestation to make sure that there was a code review done on the right branch for that image. Great. And then a question while we get the screen sharing working: is Kyverno similar to Open Policy Agent? So there are similarities. I'll stop my sharing; Chip, maybe you want to take that one? Yeah, I'll be glad to. So yes, Kyverno is similar to Open Policy Agent in a lot of respects, in that they're both admission controllers and they both have the ability to validate. But the great thing about Kyverno is that it doesn't ask you to learn a new language. It doesn't bring additional technical debt; you manage policy, write policy, and reason about policy in the same way that you do today, using YAML. Today in Kubernetes, we deploy pods with YAML, we create certificates with YAML, we build pipelines with YAML, we even build other clusters and other infrastructure in other clouds with YAML. We think you can do the same thing with policy, and do it very effectively, but also powerfully and simply. So that's a long-winded way of saying yes, but Kyverno also brings additional capabilities that OPA does not, and we'll probably share some of those in this session.
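For readers following along, here is a minimal sketch of the kind of 1.6-era verifyImages rule being described. The image pattern and the key block are placeholders, not values from the demo; a real policy would embed an actual Cosign public key:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image-signature
spec:
  validationFailureAction: enforce
  rules:
    - name: verify-signature
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        # 1.6-style rule: match images by pattern and require a Cosign signature
        - image: "ghcr.io/myorg/*"   # placeholder image pattern
          key: |-
            -----BEGIN PUBLIC KEY-----
            <your Cosign public key>
            -----END PUBLIC KEY-----
```

When the signature verifies, Kyverno also mutates the image reference, replacing the mutable tag with its immutable digest, as described above.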
But for any more details on that, you can hit us up on Slack, and also go to the documentation and look at features like generate, which allows you to create new resources that don't previously exist, based on a host of triggers and in a variety of ways. That, by the way, is getting enhanced in 1.7 with some great new features that we might talk about here. Okay, in case the policy wasn't visible, I just reshared my screen. This was the simpler policy I was showing, and then I had scrolled down to show a more complex policy with some attestations here, with the custom code review. Again, as you can see, and as Chip was mentioning, this is very declarative YAML, and it's fairly simple to understand. What we're doing over here is checking that the image attestations are signed using a public key, and then checking certain data using a JMESPath expression, which is just a common way of querying JSON. We're checking that the review was actually done on the main branch and that there were two reviewers from a set. Of course, you can externalize this data, which would be a best practice, but here we're just showing it inline. So that's the very basics of an image signing policy. Now we can switch to showing a few new features we're introducing in 1.7. I'll hand off to Chip for the first set of demos, and then I'll showcase some additional use cases and things we can do. Thanks, Jim. And hey folks, I didn't introduce myself earlier, but my name is Chip Zoller. I am a technical product manager here at Nirmata, and I'm also a Kyverno maintainer. If you've ever come to the Slack channel, you might have been helped by me; if you haven't, then hopefully I'll see you there in the future.
So I want to level set before getting to some of the new features in 1.7, following up on what Jim was saying about supply chain security. One of the things that we need to be able to do, in addition to looking at things like SBOMs and attestations, is to just look at the existing metadata of the image. When Kubernetes pulls these container images, there are limited things you can do from a Kubernetes perspective, but much more capability opens up with Kyverno in the picture, because Kyverno has the ability, as of the current release, to go and look at the configuration of that image inside of the registry. For example, this is a policy that's possible today. As a way to augment our supply chain security, we want to do some basic things even before that, like saying: in your images, you're not able to set a root user unless the image comes from, perhaps, an internal corporate registry. In this policy, we're doing exactly that. Kyverno has this nice ability to talk to an image registry, wherever the container image is located. What we're basically saying is: go and get the user parameter for this image, and if the image does not come from GHCR, which is GitHub's registry (you can obviously put whatever registry you want in there), deny it. So let's see how this actually works. Let me just make sure I don't have any policies added here. I've created the policy in my environment, and now let's take a look at an example of a trustworthy image. A good image, for example, is going to look something like this. We're going to use the crane tool and just go and look at an image that hopefully works. Okay, let's look at a bad image then. I clearly am having some sort of a credential problem here. Let's see if I can do it.
Okay, let me switch over to my other terminal here. This is always the fun thing about demos: you never know what's going to happen, even if you practice it right up until you click the button. Yeah, you never know. Alright, so for defining a user in an image, we'll take a look at one that is a good example. In this case, we're just going to fetch a Kyverno test image that's out there and take a look at its configuration. We can see here that the user is being set to empty. By not having a user, or setting the user to an empty string like this, you're effectively saying: hey, I want to run this as root. Well, our policy said that's fine, but it has to come from GHCR, which this one does. But if we take a look at an example of a bad one, we have a similar type of thing. This one is just a general Redis image coming from Docker Hub, but it's the same situation: it sets a root user, but it doesn't come from our specified registry. So let's see what happens if we go and create this. First, let's do it with the bad one. I'll put in the Redis image that you just saw a second ago and try to apply it. And you saw it right here: Kyverno has reached out to that registry. (We're working on getting the screen share up; give us a few. Oh, it's working immediately. Okay. So there was a little bit of a delay there.) In my example pod here, I just went and put that Redis image that you saw a second ago. This is an image that does set root, but it comes from a registry that we haven't deemed safe. So I tried to apply that, and Kyverno immediately blocked it: it reached out to the registry, was able to decode the configuration, and saw that this image specifies root, but our policy said it has to come from that trusted registry, and it didn't. So we blocked it.
By contrast, if we give it an image that also sets root but does come from an image registry that we bless (again, this is GHCR, but imagine you've pointed this at your internal registry, or maybe an existing registry but a repo that you do allow), it should be able to understand that. And indeed, it let us create. So Kyverno was able to do the same thing, decode that, and allow it to pass. That's something that Kyverno can do today. And as you can see from this policy declaration, it's fairly simple; the meat of it is really only a little over ten lines, and there's no programming required. We're simply saying: go and get everything that's in spec.containers, pull the image data from the registry for each of those, and then in our deny conditions, look at two different things, all of which have to be true. Is the user running as root (the user field is empty)? And does it not come from GHCR? If both of those things are true, it will block the request. So let's look at another example, this time using some capabilities in 1.7, and there are a couple of them, around base images. When we build a container image, there are a lot of different ways to build it, and one of the popular ways is to specify a base image from an existing image that's out there. Well, in many cases, what you find in the community in real life is people wanting to create images from something like ubuntu:latest from Docker Hub. Not only is it a huge image, but there's a bunch of stuff inside of it and you don't know exactly what it is or where it came from. It's also not secured: you didn't make it, you don't know who made it, you don't know what's inside of it.
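As a rough reconstruction of the deny rule Chip walks through, here is a sketch assuming Kyverno's imageRegistry context variable and its configData and registry fields; the GHCR value is just the demo's choice, and you would substitute your own trusted registry:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image-user
spec:
  validationFailureAction: enforce
  rules:
    - name: deny-root-outside-trusted-registry
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Root-user images must come from an approved registry."
        foreach:
          - list: "request.object.spec.containers"
            context:
              # Ask Kyverno to fetch the OCI config for each container image
              - name: imageData
                imageRegistry:
                  reference: "{{ element.image }}"
            deny:
              conditions:
                all:
                  # An empty User field means the image runs as root
                  - key: "{{ imageData.configData.config.User || '' }}"
                    operator: Equals
                    value: ""
                  # ...and the image is not from the registry we trust
                  - key: "{{ imageData.registry }}"
                    operator: NotEquals
                    value: "ghcr.io"
```

Both conditions must be true for the deny to fire, matching the "root user AND untrusted registry" logic described in the demo.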
So you really want and need the ability to say: for any of the images that get deployed in my environment, I really want to take a look inside and see what the base was, because even though my application code may be good and my pipeline process is good, I may be starting from an image that isn't. One of the things we can do in Kyverno, with some 1.7 enhancements to an existing ability, is exactly that. One thing to point out is that we're doing a similar type of thing in this policy declaration as before: get me all of the containers listed in this pod, and we're going to look up the image in the registry. You'll notice that we're calling this variable here imageData; it's just a way to refer to whatever data came back. But in 1.7, we have the ability to chain that variable into new variables. So I'm declaring another one here, and I'll show what this is in just a second, but we're parsing the contents of the previous variable into a new one. This is a new feature in 1.7. Also in 1.7, we're able to look at any of the data that's in the image, including fields, annotations, and configuration, not just the standard ones. What we're basically saying here is that there are four possible locations in which a base image can be specified. Now, it's important to point out that just because you build an image with a Dockerfile that has a FROM statement doesn't necessarily mean that will be recorded; you have to take explicit steps in order to record what your base image is, and there are various ways to do that. We've captured four of them here. One of them applies if you're using Docker BuildKit, which is a popular option; this is the docker buildx command.
If you're familiar with that for multi-stage builds, this gets recorded into a config annotation. Then there are OCI labels and annotations, which are reflected here: one for labels and one for annotations. These are kind of the replacement for the metadata that was previously possible in a Dockerfile. And there's also Buildpacks. So Kyverno is able to parse all of these locations, and what we're basically saying is: look in any of those locations, it doesn't matter which one, but there has to be some base image specified. Let's take a look at what that would look like. Here's what nothing looks like: I've just got an example of an image that I built a while ago, not built according to best practices. There's really nothing specified in here; this is all the configuration data, and you would expect to see the base image show up somewhere in it. By contrast, let's take a look at an image where the base is specified in an OCI annotation, which is pretty much the standardized way of doing this today. I'm going to look at an image here using the crane tool like I did before (you can use many others) and look at its manifest. You can see here, down in the annotations, we do have a base name, and in this case the base is the GCR static image. So this one specifies it. BuildKit is also really popular, and Kyverno has the ability to parse into that as well. I'm going to paste a more complex command here, but one thing to point out is that we're actually using the Kyverno CLI to do the parsing. You can use other tools as well, and I'm piping it to jq just to give it some nice colors so you can see it. There's an ask from the audience: can we zoom in a bit so that we can see better? Sure. Is this better? Yeah, I think it's starting to get better, and I trust that the audience will let us know if someone still can't see it. But I think it works better. Okay.
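The "some base image must be recorded" check might be sketched along these lines. The OCI annotation and label keys shown are the standard org.opencontainers.image.base.name ones, but treat the exact JMESPath as illustrative; the BuildKit and Buildpacks locations would be OR-ed into the same expression in a real policy:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-base-image
spec:
  validationFailureAction: enforce
  rules:
    - name: base-image-must-be-recorded
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "A base image must be recorded in the image metadata."
        foreach:
          - list: "request.object.spec.containers"
            context:
              - name: imageData
                imageRegistry:
                  reference: "{{ element.image }}"
            deny:
              conditions:
                all:
                  # Fail only if neither the OCI manifest annotation nor the
                  # config label records a base image (the BuildKit and
                  # Buildpacks fields would be checked the same way)
                  - key: "{{ imageData.manifest.annotations.\"org.opencontainers.image.base.name\" || imageData.configData.config.Labels.\"org.opencontainers.image.base.name\" || '' }}"
                    operator: Equals
                    value: ""
```

The `||` operator in JMESPath falls through to the next location when a field is absent, which is what lets one condition cover multiple metadata locations.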
All right. So what we're showing here is that we were able to get the same type of information; in this case, this is from the Kyverno image itself. It's based on Golang, and we can get all this information out of it. So let's go and apply this policy, then try an image that does not specify a base image, followed by one that does. I'm just going to try and run busybox, which doesn't specify it, so we'll see what happens if we use this policy with that image, knowing that it doesn't have a base image at all. Right, so Kyverno has blocked this, and the other policy that I had in there also blocked it. We're not specifying a base image, so it blocked that one. Now if we flip over and show an example from one of the images that I just checked, a demo image that uses a base image specified in an OCI annotation, we'll try that. And Kyverno lets us create the pod. It was able to go in, look at the base images, and make sure that something was specified. The last thing to show (I won't demo this one before kicking it back over to Jim): requiring a base image is great, and Kyverno is able to do that in 1.7, with some enhancements that make it even easier. But that's just a good first step. Ultimately, as an organization, you probably want to start building a catalog of allowed base images, not just any base image. You want to say: hey, I've got a list of maybe eight or so golden images that my teams or my entire organization are able to build from, and only those are allowed. Well, Kyverno can do that as well. What I'll show here real quickly: imagine that you wanted to build this index in your environment and you're using a GitOps flow. Kyverno can read this, for example, from a ConfigMap.
So I've got a platform namespace in which I want my platform team to curate a lot of these sort of cluster-global variables. In this case, it's just a ConfigMap with a key for allowed base images. As you can see, it's just an array of strings: a list of all the allowable base images that I'm going to permit to be pulled into this cluster. And so I can have a Kyverno policy that goes and fetches that, looks at the base image, and then checks: is the base image you declared in that list? If not, it's not running in this cluster; if it is, it will. Actually, a question from the audience as well. Maxim asks: does it tell you all the policies it fails? If you have multiple policies and a resource that you submit violates more than one of them, yes, it will show you all of the ones it violates. That's actually what you saw here: I had the first policy that I created with one rule, and then the second policy, and the previous resource that I tried to submit violated both of them. When I was trying to submit the busybox pod, not only was it trying to set root without coming from the registry that I blessed, but it also didn't have a base image declaration. So it violated both of those, and it showed you both. Hopefully that answers the question. And just to wrap up here, this policy is doing what I mentioned: looking at the base image and ensuring it's from a trusted list; if not, it blocks it. And again, like everything we try to do in Kyverno, it's fairly simple. It's a pretty powerful capability, but it's written very simply, in not that many lines of YAML: go and get the ConfigMap that's in that platform namespace and save it into a variable, then go and get the registry data like we saw in the two previous policies.
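The curated list and the policy that consumes it might look roughly like this. The namespace, ConfigMap name, key name, and the image values are all placeholders standing in for the demo's setup:

```yaml
# Curated allow-list, managed (e.g. via GitOps) by a platform team
apiVersion: v1
kind: ConfigMap
metadata:
  name: baseimages
  namespace: platform
data:
  allowedbaseimages: '["gcr.io/distroless/static:latest", "docker.io/library/golang:1.18"]'
---
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-base-images
spec:
  validationFailureAction: enforce
  rules:
    - name: base-image-in-allow-list
      match:
        resources:
          kinds:
            - Pod
      context:
        # Load the allow-list from the platform namespace
        - name: baseimages
          configMap:
            name: baseimages
            namespace: platform
      validate:
        message: "The base image must be in the allowed list."
        foreach:
          - list: "request.object.spec.containers"
            context:
              - name: imageData
                imageRegistry:
                  reference: "{{ element.image }}"
              # Chain a second variable off imageData (new in 1.7)
              - name: basename
                variable:
                  jmesPath: imageData.manifest.annotations."org.opencontainers.image.base.name" || ''
            deny:
              conditions:
                all:
                  - key: "{{ basename }}"
                    operator: AnyNotIn
                    value: "{{ baseimages.data.allowedbaseimages }}"
```

Updating the allow-list then means updating one ConfigMap, which fits naturally into a GitOps workflow.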
We're going to look for the base image in this one, picking up whatever was taken from our image registry, then dive into it, look for that annotation, and pull the value out of it. Then we just say: hey, is the base name not in the allowed base images from that ConfigMap? If so, block it. Pretty simple, but pretty powerful stuff that allows you to augment, or if you don't have it today, to take a great first step toward, software supply chain security and getting some security into your cluster with Kyverno. With that, I'll kick it back over to Jim, and I'm happy to take any other questions that come up. Thanks, Chip. Yeah, that was pretty interesting, because what you just showed allowed us to check exactly how the image was built, what information was shared in the image, as well as some of its properties like the base image. What I want to do now is go to the next level, which is signing images as well as verifying signatures and attestations of metadata for images. Before I do that, I want to quickly explain, and let me just go into present mode here, what a Kyverno policy looks like. You saw some examples already, but every Kyverno policy has a set of rules; it must contain at least one rule. Rules can match and exclude different resources and namespaces, and you can match or exclude by the user that created the request, so there's a lot of flexibility in how you apply rules to admission requests or to existing resources. Once the policy decides that a rule should be applied, each rule can either mutate resources, so you can change things in your existing configurations.
Or you can verify images: their metadata, the OCI config like Chip just showed, including whether the base image was recorded correctly and whether the image runs as a root or non-root user, as well as signatures and attestations, which I will show. You can validate resources for proper settings, best practices, and things like that. You can also generate new resources: when a new namespace is created, you might generate secure defaults, or you can even trigger generation when, say, a service is created, where perhaps you want to create an Istio network policy. Things like that can now be automated through Kyverno fairly easily. In fact, with 1.7, we're also introducing the ability to mutate and generate on existing resources, which opens up a whole different set of use cases that have been requested by the community. So that's the structure of a policy, but I want to dive more into the image verification part itself. Mind if I take an audience question or two first? Sure, absolutely. So there's a question from Afzal: do I need to customize the ConfigMap every time I specify a different image registry? Yeah, so like I mentioned, if this is being managed in a GitOps process, or even if not, you're probably going to create that ConfigMap in advance with the allowed registries or the allowed base images or whatever source you want to have in there. If you need to add a new one, you typically are going to add it to that ConfigMap, but that's not the only possibility. You can also declare values in the policy itself, or in other locations anywhere in the cluster, or even beyond; Kyverno can go and access that information. But if you're going to manage a ConfigMap as the data source, the right approach is probably to just update that ConfigMap.
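To illustrate the generate capability Jim mentioned (creating secure defaults when a namespace appears), here is a sketch of a generate rule; the default-deny NetworkPolicy is a common example of this pattern, not something from the demo itself:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-networkpolicy
spec:
  rules:
    - name: default-deny
      match:
        resources:
          kinds:
            - Namespace
      generate:
        # Create a default-deny NetworkPolicy in every new namespace
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny
        namespace: "{{ request.object.metadata.name }}"
        synchronize: true   # keep the generated resource in sync with the rule
        data:
          spec:
            podSelector: {}
            policyTypes:
              - Ingress
              - Egress
```

With synchronize enabled, Kyverno re-creates or repairs the generated resource if someone deletes or modifies it.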
Perfect. And then there's another one, which is a bit of a broader question: what are the things you absolutely shouldn't do with Kyverno, and what are some of Kyverno's limitations? Yeah, so Kyverno is pretty vast. There's not a whole lot it can't do; even complex use cases are comparatively simple in Kyverno. But Kyverno is built for Kubernetes. That's not an oversight; that's a specific strategy. If you're thinking of trying to use Kyverno with other things, there are other tools out there; OPA is a great tool as a more general-purpose engine. But you're not going to be able to use Kyverno for that. Kyverno is going to be a great fit for validating, mutating, generating, and even performing a lot of these image verifications that we've shown and that Jim's going to show later on. That's really where it's a great fit, and there are a lot of use cases it can accomplish within that. But it's built for Kubernetes. That's one of the reasons why not only are we able to get such power out of it, but it's incredibly easy and flexible to get started with. It doesn't require weeks and months to get spun up on a new language; you can do it right now. In fact, one of the things that we constantly hear from people is: hey, I was able to get policy in my environment solving my use cases in under ten minutes. That's where Kyverno is a fit. Perfect, sounds good. Yeah, and just to add to that: if you're running any admission controller, there are important things to remember. Kubernetes has made it very easy to add admission controllers, but that also comes with challenges. It's not simple to secure, scale, and manage admission controllers, and they can cause problems in clusters if you misconfigure them.
One of the things we've taken fairly great pains with in Kyverno is to make it, first of all, secure by default, and secondly, to make it configure itself in as intuitive and smart a way as possible based on the cluster settings. But there are a few gotchas you need to be aware of when putting any admission controller into production. One of the anti-patterns I've seen is ending up with too many admission controllers, which can also create challenges. So there are things like that you do need to be aware of; the Kyverno documentation, in particular the installation page and the security page, goes through them in quite a lot of detail. Alright, so back to the policy: I was explaining the structure of a Kyverno policy in general, but now I want to dive a little deeper into what a verifyImages rule looks like in 1.7. In 1.7, the major change we introduced was to allow the flexibility of multiple attestors for signatures; you can think of those as authorities saying, yes, this attestation or this image is good. Those attestors can be specified as public keys, certificates, or even using something known as keyless. Just like serverless doesn't mean there are no servers, keyless doesn't mean there are no keys; what it's doing is using almost disposable keys generated on demand underneath, and then taking information from that signing event and putting it in a transparency log, which is part of the Sigstore tooling. That's a more advanced use case, but in lots of cases you'll be using keys, certificates, or keyless, and you can, by the way, have a combination of these.
So you could sign an image with, let's say, one key, or add one certificate, or sign with a set of keys plus keyless, things like that. Previously, in 1.6, Kyverno had some limitations in allowing the flexibility of these multiple attestors, and I'll show you a couple of examples of why that matters. That's now a lot more flexible: you can have AND, OR, and other combinations. And then you can have attestations, which are verified in the in-toto attestation format. in-toto is another Linux Foundation project — a CNCF project — which is also focused on software supply chain security and on managing metadata for images or any other artifact. What I'll show is how you can attach anything from software bills of materials to, like we saw in the example, a code review, or even vulnerability scan reports, as signed attestations, and update these for your image. That creates a very powerful use case, because now you can periodically check and see which images might not be compliant with new vulnerabilities or any changes in your environment, right? Another thing done in 1.7 with this image verification rule: like it shows on top, you can now very simply say required: true, which enforces that every image matching that rule's pattern in your system has to be verified and trusted, right? Previously, if you used a glob or a wildcard, you could match certain images, but other images would not be checked. Now you can very easily say that every image in your cluster needs to be verified before it's allowed to be deployed.
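As a rough illustration of the 1.7 rule shape described above, a verifyImages rule with multiple attestors might look something like this. The policy name, image pattern, key material, and keyless subject below are all made-up placeholders — treat this as a sketch of the structure, not the exact demo policy:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-images            # hypothetical policy name
spec:
  validationFailureAction: enforce
  rules:
    - name: check-signatures
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "ghcr.io/example/*"   # placeholder image pattern
          required: true            # every matching image must verify, no exceptions
          attestors:
            - count: 1              # within a group, any one entry passing is enough (OR)
              entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...elided...
                      -----END PUBLIC KEY-----
                - keyless:          # Sigstore keyless, backed by a transparency log
                    subject: "https://github.com/example/repo/.github/workflows/build.yaml@refs/heads/main"
                    issuer: "https://token.actions.githubusercontent.com"
```

Listing multiple attestor groups (multiple items under `attestors:`) requires all of them to pass, while multiple `entries` inside one group with `count: 1` gives OR semantics — which is the AND/OR flexibility mentioned above.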
You can also now control, on a granular basis, how you verify digests. You can enforce a global policy which says that every tag must be converted to a digest before it is admitted, and Kyverno can do this in a couple of different ways. It will leverage the signature verification via Cosign for that, or, if that's not specified in the policy, it can fall back to an OCI registry lookup to get the digest and make sure the tag is replaced by the digest during admission. That's the second part, the mutateDigest, right? All of this leads to a lot of flexibility and a lot of interesting scenarios and use cases you can now create and apply for your governance and overall security posture, right? So let's look at an example. As Chip and I were discussing what to demo, one interesting use case we came up with was: what if we have multiple sets of keys, right? For example, a global key that is maybe per cluster, or could be across clusters — in this case we called it the production key. The policy requires that, first of all, your image has to be signed with that production key, and in addition, I want to make sure the image is also signed by a namespace-specific key. By the way, if you noticed, because Kyverno uses the OpenAPI v3 schema, all of the inline help is available in VS Code, which makes it super easy to look at policies and understand the structure. So here, going back into the policy, let's start from the top. There are a few global settings we're putting in here; we're matching every pod. And as we match a pod, we're saying: first, I'm going to look up data from a ConfigMap.
And then I'm going to verify any image that matches this pattern over here, and I'm going to do two things to that image. First, I want to check that it's signed with my production key. Then, based on the namespace in the inbound request, I'm going to look up another key from my ConfigMap and make sure the image is also signed by that one, right? So I don't have to statically list out all the keys I need; Kyverno is doing a dynamic, double-dispatch lookup here — it first looks up the namespace, and based on that it selects a key. I'll show you what the ConfigMap looks like: I have this ConfigMap with production, app1, and app2, so I have my three keys. As I add new environments and new apps, I can add more keys to this ConfigMap and manage them through CI/CD. So in a very Kubernetes-native manner, you can now dynamically control not only that everything is signed by your common, global key, but also that certain images can only go into certain namespaces, right? So let's see how that all works on the cluster. In my cluster I already have a few namespaces created — app1, app2, app3 — and there's nothing running in them. Let's just make sure I've deleted all the pods; I'll do a get pods -A. Yeah, I see some other stuff from Tekton, Kyverno, and kube-system, but nothing in those namespaces, right? And let's check the policies I have. I'll actually delete these policies and then add them back, just to make sure I have the latest. I'm going to delete those, and then let's apply this multi-attestors policy that we just looked at, right? So that's what I added in.
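The ConfigMap-driven, per-namespace key lookup just described might be sketched roughly like this. All names here are hypothetical, and the exact JMESPath quoting for the dynamic lookup may differ slightly from the real demo policy:

```yaml
# Hypothetical ConfigMap holding one public key per environment/app,
# manageable through GitOps/CI-CD as new apps are added
apiVersion: v1
kind: ConfigMap
metadata:
  name: image-verify-keys
  namespace: default
data:
  production: |
    -----BEGIN PUBLIC KEY-----
    ...elided...
    -----END PUBLIC KEY-----
  app1: |
    ...elided...
  app2: |
    ...elided...
---
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: multi-attestors           # hypothetical
spec:
  validationFailureAction: enforce
  rules:
    - name: check-keys
      match:
        any:
          - resources:
              kinds:
                - Pod
      context:
        - name: keys              # load the ConfigMap into a variable
          configMap:
            name: image-verify-keys
            namespace: default
      verifyImages:
        - imageReferences:
            - "ghcr.io/example/*" # placeholder
          attestors:
            # group 1: global production key (always required)
            - entries:
                - keys:
                    publicKeys: "{{ keys.data.production }}"
            # group 2: key selected dynamically from the request's namespace
            - entries:
                - keys:
                    publicKeys: "{{ keys.data.\"{{request.namespace}}\" }}"
```

Because both attestor groups must pass, an image signed only with the production key (or signed for the wrong namespace) is rejected, which is exactly the behavior shown in the demo.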
And if I now view the policy — by the way, it should show that it's in enforce mode and ready, and that it doesn't do background scans. That's configurable; I just set it to false here, but it could be set to true. So now, with this policy in place, let's do a kubectl run. First, let's say I don't specify a namespace, right? Immediately Kyverno says: hey, there's no key for the default namespace, so you can't run this, because I can't verify it. You told me you need two public keys, but I'm not able to look up the second key, so I'm not going to allow this, right? That's why Kyverno blocked it when I didn't specify a namespace. Now let's see what happens if I specify an incorrect namespace. Notice over here I'm running app1 v1, which I've signed with my key for that application plus the production key, and I'm running it in namespace app2, right? Ideally, what I'd want to see is that Kyverno detects that, and it does: you have your production key, but notice here it's saying entries[1]. If we go back into the policy, we can correlate that — it's zero-indexed, so entries[1] is this namespace key, right? It could not verify using that key, although the image passed entries[0] and was allowed by that one, right? So this is an example where you're enforcing that specific applications are signed with a global key as well as a group or team-based key, right? And just to finish the use case, let's try the last approach, which is running the pod in the namespace where it actually belongs. So app1 runs in namespace app1, and Kyverno should allow that, right? Both keys pass, and that's what we see here — the pod got created, right?
So it's a simple but powerful use case which shows the flexibility. Going back to this concise policy: with just a few lines of YAML, you're varying the data, and you don't have to manage a large set of static keys, right? Other use cases around this: one common thing we're seeing as we work with several organizations is that they may want a different key for each environment in their pipeline — perhaps one for dev/test, one for staging, and one for production, right? So once the production team or the SRE team has signed off, the image gets signed with the production key; otherwise it only has the staging keys and isn't allowed into production, right? You can build these kinds of governance policies and start enforcing them for your organization pretty easily, and it leads to an extremely powerful set of capabilities. All right, the last thing I want to demo — and I know we have about 10 minutes left — is attestations, and I'm going to quickly show a pipeline. This is a Java application in a public repo: it's the demo Java Tomcat repo under my name, Jim Bugwadia, on GitHub. Through GitHub Actions, the pipeline builds an image from the Java app, scans the image, generates an SBOM, and then signs all of this as attestations and uploads the data, right? All of this I can do just through GitHub Actions, and because GitHub Actions has OIDC support — a really nice feature that was introduced recently — all of this can be done in a keyless manner, which means I don't have to configure keys. We know and trust GitHub's identity, and we can then rely on that identity in a policy, right?
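A GitHub Actions workflow along the lines just described might be sketched roughly as follows. The repo, image name, action versions, and SBOM tooling here are assumptions, not the actual pipeline — in particular, the `anchore/sbom-action` step for installing Syft and the `COSIGN_EXPERIMENTAL` flag (which enabled keyless mode in cosign 1.x) are my placeholders:

```yaml
name: build-sign-attest
on: [push]
permissions:
  contents: read
  packages: write
  id-token: write          # grants the workflow an OIDC token for keyless signing
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: sigstore/cosign-installer@v2
      - uses: anchore/sbom-action/download-syft@v0   # installs syft for SBOM generation
      - name: Build and push image
        run: |
          docker build -t ghcr.io/example/demo-tomcat:v1 .
          docker push ghcr.io/example/demo-tomcat:v1
      - name: Keyless sign
        env:
          COSIGN_EXPERIMENTAL: "1"
        run: cosign sign ghcr.io/example/demo-tomcat:v1
      - name: Generate and attach SBOM attestation
        env:
          COSIGN_EXPERIMENTAL: "1"
        run: |
          syft ghcr.io/example/demo-tomcat:v1 -o cyclonedx-json > sbom.json
          cosign attest --type cyclonedx --predicate sbom.json ghcr.io/example/demo-tomcat:v1
```

The key point is the `id-token: write` permission: it lets cosign obtain a short-lived certificate tied to the workflow's identity, so no long-lived signing keys need to be stored in the repo.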
So let me show you what that policy looks like. The one I want to check is this one with attestations, right? Let's expand the screen. There's more YAML here, but there are some pretty interesting things being checked, right? So the image I'm matching is demo-tomcat, in the same registry. I'm checking the subject that was used — because I'm using keyless signing, the subject is the actual workflow that created my image, right? So I can precisely identify the workflow, and I can make sure it was signed using GitHub. If I trust GitHub, I want to make sure the issuer of the certificate embedded in the signature is GitHub. I can even check down to the SHA — the commit ID of my workflow — and the workflow name, and if the workflow comes from a different repo, like a global repo, I can check that too, right? All of this verifies that the image is trusted, and notice I'm not using any certificate or key here; I'm using the keyless option to trust this image, right? Once I do that, I'm checking a few other things. I'm checking that the image has an SBOM in CycloneDX format, and that the image was scanned — here I happen to use Trivy as the image scanner. In the scan, I want to make sure the scan was done in the last 15 days, so I'm enforcing that. And then I'm checking the score, right? All of this can be allowed, and for those of you paying close attention, you might have noticed I've allowed 10, which is actually not a good thing to do — 10 is the highest, most relaxed score. I did that because my image has vulnerabilities, and I'll show what happens if I switch this to a lower score, right? So with all of this now — there's an audience question as well.
So, is there an easy way to write and test policy without spinning up a kind or k3d cluster or something to run Kyverno — for example, in a pipeline? Yeah, absolutely. Kyverno has a command line tool which allows exactly that. It lets you test — you can even write unit test cases, with inputs, outputs, and success and failure cases. So check out the command line tool; the command is kyverno, also available as kubectl-kyverno, and then you would run test and specify your test cases there. And for writing the policies, if you haven't already looked, check out the website — kyverno.io/policies — we have a ton of sample policies. Actually, the most sample policies of any engine out there; I think it's 140 and growing. So in addition to using the test CLI, you can go and look at how these are written. By the way, a lot of the policies we're showing here are either there now or will be there soon. Those are a great way for you to either just copy and paste something and use it right now, gently modify it, or, as a last resort, just see how we're constructing these things and then take the bits and pieces that are useful to you and apply them however you like. All right, so continuing with the demo flow. What I'm going to do is run this version of the Tomcat image, which was built with all of the attestations, et cetera. And my policy is going to check again for the scan report, the SBOM, and the signature — that it was built using that GitHub Action, right? So — actually, oh, my other image policy kicked in. Let me just delete that policy, delete that again, and then we'll bring back the attestations policy over here. And now let's try that one more time, right?
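The CLI-based testing mentioned a moment ago is driven by a small test manifest. A rough sketch follows — the file, policy, rule, and resource names are made up, and the exact field names may vary slightly between CLI versions:

```yaml
# kyverno-test.yaml — hypothetical unit test definition for `kyverno test .`
name: verify-image-tests
policies:
  - policy.yaml            # the policy under test
resources:
  - signed-pod.yaml        # sample input resources, one passing and one failing
  - unsigned-pod.yaml
results:
  - policy: verify-images
    rule: check-signatures
    resource: signed-pod
    kind: Pod
    result: pass           # expect admission
  - policy: verify-images
    rule: check-signatures
    resource: unsigned-pod
    kind: Pod
    result: fail           # expect rejection
```

Running `kyverno test <dir>` (or `kubectl kyverno test <dir>` via the krew plugin) evaluates the policies against the sample resources and compares the outcomes against the expected results — no cluster required, so it slots naturally into a CI pipeline.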
So with this policy — which, just to show it again, checks only the Tomcat image, verifies the identity of the image, and then checks for certain attestation data — what I'll do is let this run, then reapply the policy with the stricter score we want, and it should then disallow the image, right? So it said "already exists" because I had that pod running, but it allowed it through, which is what we really want to see here. I'll just delete the pod; if we ran it again at this point, it would have created the pod. Now that I've changed the score, I'm just going to reapply that policy, and let's see what happens if I try the same thing again. In this case it should block the image if it doesn't comply with that score — and if I recall correctly, it did have some high-severity vulnerabilities flagged in the latest run. Sure enough, it's saying that because of the Trivy (Aqua Security) scan, which came in and reported those new vulnerabilities, it's been blocked, right? So again, a simple but powerful example of how you can integrate these types of scans and attestation data. You could also check within the SBOM — SBOMs tend to be fairly large, but they're all in JSON format. Let me show you very quickly: with every build, we're including a scan report, an SBOM, and the provenance data, right? If I go into this SBOM, it's in CycloneDX format, as JSON data, and it shows exactly — so this was built using Syft, and it's showing me the container data. You can verify all of this, including which packages you want to allow and what the dependencies are, and check it in a policy, right?
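The attestation checks described in this demo might be sketched roughly like this. The predicate types, condition keys, and thresholds below are placeholders modeled on the description (the actual Trivy predicate fields and the exact keyless subject will differ), so treat this as a shape sketch rather than the demo policy:

```yaml
      verifyImages:
        - imageReferences:
            - "ghcr.io/example/demo-tomcat*"   # placeholder image pattern
          attestors:
            - entries:
                - keyless:
                    # trust only images built by this exact GitHub Actions workflow
                    subject: "https://github.com/example/demo-tomcat/.github/workflows/build.yaml@refs/heads/main"
                    issuer: "https://token.actions.githubusercontent.com"
          attestations:
            # require a CycloneDX SBOM attestation to be present
            - predicateType: https://cyclonedx.org/bom
            # require a vulnerability scan attestation, and constrain its contents
            - predicateType: cosign.sigstore.dev/attestation/vuln/v1
              conditions:
                - all:
                    - key: "{{ scanner.name }}"       # hypothetical predicate field
                      operator: Equals
                      value: trivy
                    - key: "{{ score }}"              # hypothetical predicate field
                      operator: LessThanOrEquals
                      value: 8                        # tighter than the relaxed "10" in the demo
```

Conditions like these are what make the "scan must be recent" and "score must be acceptable" rules enforceable at admission time: if the signed predicate data no longer satisfies them, the pod is rejected even though the image signature itself is valid.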
So pretty interesting — think about a scenario like Log4j: if you want to check whether any of your images contained that package, you could now easily write a policy, and Kyverno will report violations if an image doesn't match a specific version. You can also check for certain licenses right here; for example, if you don't want to allow GPL, write a policy for it, right? And you'll immediately know if any images were built with any package that depends on that license. All right, so that's all we had prepared to demo. In conclusion, if you haven't tried Kyverno, definitely go try it out. Go to kyverno.io, like Chip was saying, or go to our GitHub page — there's a lot of information there. In the documentation, the introduction explains all the basics, and there's a getting started section that helps with installation. There's a Helm chart, or, if you're just installing on your local cluster, you can use the YAML approach, so it's a one-liner to do that. And if you're not using any admission controller and you have production Kubernetes, definitely make sure you figure out how to enforce pod security policies — Kyverno has a good built-in library of pod security policies. That would be a very good starting point to get some baseline security on your clusters, and then you can expand into other use cases, like the container image signing and verification we focused on today. Great, really great presentation. And great that we had a lot of questions that we answered throughout the presentation as well. So we have a few minutes for final questions.
So this is essentially also the last call for questions for today. Let's see if the audience has anything more to ask — there's been a lot already. But thanks for the great presentation, it was really nice. You're welcome. Thanks for having us. Yeah, of course. So let's see if any questions come up, but before we check for more from the audience, I'd like to ask you: what is the most common question you get about the Kyverno project? I think it's the comparisons and trade-offs, like we saw from the audience, right? Of course, those are fair questions and good to start out with. Yeah, and one of the others we tend to hear a lot is: well, this looks great if you have very simple needs and you're just operating on core Kubernetes constructs like pods, but can it do anything more complex, and can it work on custom resources? The answer to both is absolutely yes. As you saw from some of those policies, there are more complex use cases, but they're still accomplished in a few more lines of YAML — and it is all YAML; there's no programming language exposed. Kyverno works the same way on custom resources as it does on built-in resources; there's no difference. So if you want to write a policy that operates on a pod, and another policy that operates on, let's say, a Certificate from cert-manager, you're using the same style, the same language, the same constructs. There are no differences there. And I also see we had some additional questions come up. Yes, there are three that came in at the very last minute; we can try to make it through all of them. Let's see. So, when should a Kubernetes user start thinking about policies? Right away, right — in Kubernetes you do need policies, so as early as possible. Perfect, easy answer there. So, what's Kyverno's footprint?
Do I need a dedicated cluster? So by default, I believe it's around 40 to 50 MB of memory. Of course, as you scale the cluster, you will need more, and one of the things we'll do in the docs is add some best practice guidelines. We have tested with clusters that have hundreds of namespaces and thousands of resources, or thousands of pods, so it does scale fairly well. Great. Last question: how many admission controllers should one have so that Kyverno doesn't break, as Jim said earlier? So it's not that Kyverno breaks; it's just that you need to be aware of the behavior of admission controllers, especially if they're writing to each other's resources, right? You can have multiple admission controllers; just be aware of what they're doing. Perfect. And by the way, that's a general Kubernetes concern; it isn't restricted or somehow endemic to only Kyverno. Whenever you're running admission controllers and you have situations where your cluster may be down, you have the same problem. Kyverno is an admission controller, so it isn't exempt from any of that, although we are taking additional steps, both in the documentation and on the Helm chart side in the upcoming 1.7, to make it even easier to prevent you from accidentally doing things like that. So look for those enhancements coming soon. Great — we really sped through those other three questions, so perfect, we're good on time. But the discussion can continue in the Cloud Native Live Slack channel as always, if there is anything more. So let's start wrapping up. Thanks, everyone, for joining the latest episode of Cloud Native Live. It was great to have a session about protecting software supply chains using Kyverno. I really loved the audience interaction this time as well — happy to see this again.
Next week, we won't have a Cloud Native Live since it's KubeCon + CloudNativeCon time, but the week after that we have a session on comparing different minification techniques and their vulnerability assessment. So thank you for joining us today, and see you all in a few weeks. All right, thanks everyone. Thank you.