Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie, a CNCF ambassador and a product marketing manager at CAST AI. Every week we bring together a new presenter or set of presenters to showcase how to work with cloud native technologies. They build things, they break things, and they answer all of your burning questions. So join us every Wednesday to watch Cloud Native Live, live. This week we have Jim Bugwadia here with us to talk about supply chain security with Cosign and Kyverno. Thank you so much, everyone, for joining in. One housekeeping note: this is an official livestream of the CNCF, and as such it's subject to the CNCF Code of Conduct. So please don't add anything to the chat or questions that would violate that Code of Conduct; basically, please be respectful of all of your fellow participants, as well as the presenters. With that, let's get started, and I'll hand it over to Jim to kick us off today.

Thank you, Annie, and thanks for having me. I'm really excited to talk about supply chain security and present some of the work going on with Kyverno, which is an admission controller and policy engine for Kubernetes, and also with Cosign, which is part of the Sigstore project. As a caveat and somewhat of a disclaimer: as you've probably seen over the last six to twelve months, there have been a lot of headlines and a lot of news on supply chain security and related topics. Based on customer interest and other factors, this is something we have been diving into, and there's a lot of new work going on in this area. So I'm going to first introduce some basic concepts, things we have been researching, collating, and putting together, along with some really good emerging work coming out of different open source communities.
So let me share my screen. I'll pull up a few websites that we'll go through for the introductory concepts, and then we'll dive into the details of what you can do today for your Kubernetes clusters and how you can improve.

Just quickly, to kick it off: you mentioned that there are a lot of new headlines about supply chain security breaches. What do you think is causing the increase?

Yeah, good question. One theory that's fairly interesting, and that resonates with me, is that there has been a lot of work and improvement in overall security. Think about our production deployments, what we're starting to do with multi-factor authentication across the web, and even HTTPS and other network security becoming more prevalent. As part of that, attackers are looking for the easy way in, and one of the weak spots, something that has unfortunately been ignored, is how we build, deliver, and deploy software. If you think about it, if you're building any commercial or open source software today, we don't think twice about running commands like pulling a Helm chart or adding a dependency. We might find something that looks interesting on GitHub or some other version control system, add it to our dependencies, and pull it in. But how do we know who actually built it? How do we know what it depends on? Those things are often an afterthought because there's so much data: installing any package on one of our systems produces a ton of information, so it becomes difficult to check, audit, verify, and automate all of that. So in terms of why this is happening, it's partly because we've been getting better at securing production and runtime systems, where better tools and technologies are available.
And the build and delivery side has been exposed as one of the weaker links in our whole software development life cycle, so attackers, as they become aware of that, look for the easy way in, and this provides it.

Perfect. Well, not good, obviously, but perfect as a brief answer.

Yes. The good news is there's a ton of growing awareness, and it's awesome to see all the work going on; I'll introduce some of the communities for folks who are interested in where they can go for meetings and other things. There's attention even at the government level: something often talked about now is the executive order in the United States to improve this aspect of our software development and delivery practices. So these problems can be mitigated and will be mitigated; it's a matter of spreading the awareness and the knowledge and figuring out what we can do right away.

Just getting it done is the next step, right?

Yeah. So, a few things I'd like to introduce. This topic may seem daunting at first, but it is approachable, and there's a lot of good work emerging. The first thing we'll talk about is a framework called SLSA, or Supply-chain Levels for Software Artifacts. It's a standard that originally came out of Google and is now being adopted fairly widely as a reference model for securing different aspects of your supply chain. It defines different levels of security that can be attained, with various audits, checks, and controls laid out at each level. We'll go through some of that.
I'll then talk about another project called Sigstore, which in fact has multiple sub-projects. Cosign, which we'll look at live, is used for signing, as the name might imply. But Sigstore, a Linux Foundation project, has other sub-projects as well: Rekor, a transparency log, which becomes an important part of the supply chain security puzzle; and Fulcio, which provides an easy way to manage web-based PKI and identities. Rather than managing static keys and dealing with key rotation at scale, Fulcio gives you capabilities that make this simpler for developers and teams to adopt and manage. We'll briefly talk about those. Then I'll switch to Kyverno, a policy engine for Kubernetes, and we'll talk about where Kyverno fits in and what admission controllers can do in this whole supply chain security puzzle. For the demo, I'll start with this repo: I've simply taken the Kubernetes pause container and pushed it into a private repo. As you can see here, there's just one tag, and right now there's no signing of this container, nothing. I'll show how it can be signed with Cosign, what we can then do in terms of adding other metadata, which, as you'll see, is recommended for various SLSA levels, and how we can verify that metadata in our software supply chain.
So again, that's quite a lot of different concepts, but hopefully as we go through, these will come out as things that are fairly practical and straightforward to start implementing, with further improvements to look at in the future.

Really wonderful. I think we're going to get a good overview of the things that are happening, as well as practical ways to get started on a topic that might seem a bit too big to approach. Sounds good?

Right. So let's go to SLSA first. Great acronym, of course, and it stands for Supply-chain Levels for Software Artifacts. SLSA identifies four different levels, and they have a nice picture here with the threat model for different things that could potentially go wrong. What they're showing is that as a developer, you obviously start with source code. Then you build your code, which typically pulls in a lot of dependencies. Once you've built your software, your packages get pushed to one or more deployment servers or services, which are then consumed by your customers or end consumers. So there are several things that could go wrong, which they've identified as various threats here, and they have guidelines on what you can do to mitigate those threats. If you go down and click on "learn more", SLSA defines four levels of trust and security. The first is something basic: you have a documented build process, but there's no signing and no verification.
But at the same time, you might have some provenance, some proof of what you built, some records. The next level up, SLSA level two, is where you have provenance defined for every step of your build process, and you start to attach it in a way that can be verified; we'll actually look at those examples. Beyond that is level three, which takes a bit more effort to attain. This is where every step becomes auditable, your build environment is fully automated, and the build itself has a verifiable identity that can be proven at the time of consumption. So somebody can audit the right artifacts and verify who built this package, who reviewed it, and what kind of dependencies it had. There's another acronym here, SBOM, or software bill of materials; there's activity around standardizing those as well, and it becomes important to have an SBOM as an artifact and to be able to put it into a trusted central server that can itself be verified. Level four adds more requirements: code reviews of all changes in your software, and this could even apply if you're doing infrastructure as code and other things, requiring at least two people for a code review and making sure there's proper sign-off and process in place. And your build process has to be fully isolated, well protected, as well as fully reproducible.
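As a hypothetical illustration of the kind of provenance these levels keep referring to, here is an abbreviated attestation in the in-toto statement format carrying a SLSA provenance predicate. Every value shown (image name, digests, builder ID) is a placeholder for illustration, not taken from a real build:

```yaml
# Hypothetical, abbreviated provenance document: an in-toto Statement wrapping
# a SLSA provenance predicate. All names and digests are illustrative.
_type: https://in-toto.io/Statement/v0.1
subject:
  - name: ghcr.io/example/pause          # the artifact this provenance describes
    digest:
      sha256: "aaaa..."                  # elided
predicateType: https://slsa.dev/provenance/v0.1
predicate:
  builder:
    id: https://github.com/example/builder   # what service performed the build
  materials:                                 # what went into the build
    - uri: git+https://github.com/example/app
      digest:
        sha1: "bbbb..."                      # elided
```

A document like this, signed and published alongside the artifact, is what lets a consumer later answer "who built this, from what sources?"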
We're not going to go through each of these in detail, of course, but if you go to the SLSA website, all of this is fairly well defined. In fact, just this morning I saw a good blog post from the security team at Google where they talked about attaining the different levels, as well as practical things that can be done to start mapping to each of them. It's a very detailed, well laid out framework that covers the different levels of security in your CI/CD build process, as well as best practices and things you can do along the way.

An interesting quick question: where are the majority of projects usually on this scale? Are they at four, at one, or hopefully somewhere in between?

Right. So today, most would be at level one. Most open source projects will have some automated way of doing builds, and the build logs are available at some level, so most would be at level one or two. I would say the majority fall into level one: there are some scripted build processes, and there is some provenance available. But overall, as an industry, we're not in good shape here, and this needs to be improved. Just think about it: today, if you pull a container image or a Helm chart into your cluster, it's such a high level of abstraction. As a user, you have no idea what's in it; that Helm chart may have thousands of lines of YAML. There are now tools like Kyverno or OPA Gatekeeper and others that can scan, verify, and audit those configurations before admission. But that Helm chart also has images. How do you know those images were not tampered with?
How do you know how those images were built? Where is the provenance to show what they reference? So there's a lot of work to be done even to get to level one, but it's possible. Level two then requires more: you have to use build services where the provenance is authenticated, and the Sigstore tools we'll talk about start getting you towards that level two. There are, of course, other processes that have to be improved for levels three and four. So hopefully that gives a very quick but useful overview of SLSA. I'd certainly recommend folks go through it in more detail, and there are some great posts and write-ups emerging that help explain it and break it down into practical steps that can be implemented.

So let's switch to Sigstore, which is a really exciting project. If you're interested, there are lots of details on the community and the community meetings. The project has gotten a great reception; as you can see here, there are lots of good things being said about Sigstore and what it can do for supply chain security. It started as a collaboration across teams: Luke Hinds at Red Hat and Dan Lorenc at Google are some of the main folks behind it, and we'll go through some of this in more detail. There are three sub-projects in Sigstore that are important to know about. The first is about signing, and making signing easier, and Cosign is the primary tool. It initially started out as a signing tool for container images and OCI registries, but it has evolved to be able to sign almost anything.
Whether it's blobs or Helm charts: there's work going on to integrate Cosign with Helm and standardize some of that. Cosign also gives a lot of flexibility in where you manage these signatures. It doesn't pin the signature to a particular location, so you can even keep your signatures in a different OCI registry than where your images are. You can also manage other metadata; we'll talk about the importance of metadata, how you can build attestations out of it, and how you can manage those with Cosign as well. So that's the first tool to be aware of and play around with. Each of these projects has its own getting-started pages; if you go to the Cosign git repo, for example, it covers how to install and use it. Cosign is super simple to get started with: manage public/private key pairs, sign images, and try it out. Beyond signing, one thing I mentioned when we were discussing SLSA is the need for more metadata, and there are existing projects like in-toto, another Linux Foundation project, that act as metadata frameworks. As you can imagine, when you're creating builds and verifying these kinds of things, a lot of information gets produced, and this metadata is important when you later want to verify trust and check things at deployment time or at runtime. So beyond signing the artifact, what's important next is signing this metadata and creating what's known as an attestation out of it.
The in-toto project has a format for attestations, and they're going through the work of standardizing it. An attestation could be something as simple as saying: I've scanned my container image, or the code that goes into it; here are the results of my vulnerability scan; and, as an approver, I'm creating an attestation by signing it with my identity, stating that this is ready for deployment into a production system or into the next system in my CI/CD pipeline. Then there's another tool in the Sigstore project called Rekor, which becomes the place where you can push these transparency logs, as well as other metadata, including in-toto metadata. These systems then hold information you can later audit or look up to verify.

There's a question from the audience that we can maybe handle now: who decides an organization's SLSA level achievement? Is it managed internally by the org?

Yeah, that's a good question. There is some activity where, just as with PCI and other compliance standards, auditors and folks in the business of providing compliance certifications are adopting SLSA as a framework they can inspect against. Just as a PCI auditor would check whether you're using proper firewalling and SSL and things like that, auditors will be able to check and certify your SLSA compliance level.

Perfect. That was a question from Tatmaka, thank you so much. And there was also a comment from BiGuy86, from an open-source-applied-to-enterprise perspective.
They've seen really poor security in more than 80% of projects; the best they've ever seen is Argo CD, the worst they've ever seen is Kanifah. That was an audience comment as well. Thank you so much, everyone, for interacting with us.

Absolutely. So, as I was mentioning with metadata, Rekor becomes this transparency log that captures and saves some of this metadata for auditing later. It's also used in conjunction with another project, Fulcio, which tries to make signing and verification a little easier across organizations and teams. Fulcio is a web-based PKI that uses OIDC. This is sometimes referred to as "keyless" signing, but what's happening under the covers is that Fulcio has a root CA, or root certificate, and it uses OIDC to verify the identity of the person providing the signatures. Based on that, it generates temporary keys, signs, and records that in the Rekor transparency log, which can be audited later. It's a very smart way of making key management a little easier: as you scale the usage of these tools, you can rely on well-established standards like OIDC to verify identities, generate temporary keys, sign, and then capture all of this in a transparency log. So those are the three tools to investigate and look at. We'll look mostly at Cosign in what we're doing today, but Cosign, Rekor, and Fulcio all work together to solve some of these problems at scale. So that was the second topic I wanted to introduce.
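To make the attestation idea from a moment ago concrete, here is a sketch of how the vulnerability-scan example might look in practice. The predicate file and all of its fields are made up for illustration; an attestation predicate can be essentially any structured document:

```yaml
# Hypothetical scan-result predicate (scan-results.yaml); every field and value
# here is illustrative, chosen to mirror the vulnerability-scan example above.
scanner: example-scanner
scanTime: "2021-11-10T12:00:00Z"
critical: 0
high: 2
approver: alice@example.com
```

A document like this could then be signed and attached with something like `cosign attest --predicate scan-results.yaml --key cosign.key ghcr.io/example/pause:latest` (the image name is a placeholder), which wraps the predicate in an in-toto statement, signs it, and pushes it alongside the image.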
So we've talked about SLSA and Sigstore; finally, let's bring it back to what this means for Kubernetes and what you can do today. Let me briefly introduce Kyverno for folks who may not be aware of it or what it does. Kyverno is a CNCF project: a policy engine for Kubernetes. It's similar in function to what you could do with OPA Gatekeeper, except it doesn't require writing and managing policies in Rego; it uses Kubernetes custom resources for policies, and policy results are also available as Kubernetes resources. Looking very briefly at how Kyverno works: it acts as a webhook-based mutating and validating admission controller. That means every API request in Kubernetes can be passed to Kyverno, and based on your configured set of policies, Kyverno will either allow or deny the request, or it can mutate it. It can even generate other resources for your cluster, using certain things in the API request as triggers. There are several use cases: the basic uses of Kyverno, of course, are enforcing pod security or workload security, making sure your namespaces are configured correctly, and managing configuration security in general. Even simple things: for example, this policy requires a specific label on every pod, and Kyverno, by the way, automatically applies policies written for pods to pod controllers as well, so the requirement holds independently of how the pod is created.
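A minimal sketch of such a label-requiring policy, following the standard Kyverno ClusterPolicy shape (the label name `team` and the message are illustrative, not the exact policy shown on screen):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label
spec:
  validationFailureAction: enforce   # block non-compliant pods ("audit" would only report)
  rules:
    - name: check-team-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "The label `team` is required on all pods."
        pattern:
          metadata:
            labels:
              team: "?*"             # any non-empty value satisfies the pattern
```

Because Kyverno auto-generates matching rules for pod controllers, the same policy also covers Deployments, StatefulSets, and so on.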
So it's a very simple way to manage and enforce policies for your cluster and to simplify some of the complexity of managing Kubernetes configurations. Now let's talk about what Kyverno can do and how it works with Cosign, as well as some of the other tools we've talked about. I'm going to go back to my image, this pause2 repo with its single version tagged latest, and let me pull up my shell. If I look over here, I have a cluster running locally: it's a minikube cluster, and it has Kyverno installed. If I do kubectl get cpol, which lists cluster policies, there are no policies installed. So if I do a kubectl run with that image, and then kubectl get pods, we see the pod running, and there are no checks, nothing happening right now. Basically, we're just allowing everything into our cluster. So let's delete that pod; I think I ran it in the default namespace, so we'll delete that one pod and go back. At this point, what can I do to start signing my image? The first thing, in my repo itself, is that I've used Cosign, and the first command you use on the Cosign command line is one that creates a public/private key pair: cosign generate-key-pair. I have a couple of different key pairs, and I'll show how I'm using them in signing, because you might have multiple signatures associated with a single image, and you can then have key policies, which we'll look at shortly, to verify or check those. So I've already generated a key pair, and if I look over here, it's in cosign.key and cosign.pub: my private and public keys.
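As a rough sketch of the key-generation and signing steps just described (Cosign v1.x assumed; the image name is a placeholder, not the actual demo repo):

```shell
# The two cosign commands from the walkthrough, shown commented out since they
# prompt for a password and need registry access; the image name is illustrative:
#
#   cosign generate-key-pair                                  # writes cosign.key / cosign.pub
#   cosign sign --key cosign.key ghcr.io/example/pause:latest
#
# Cosign stores the resulting signature in the registry under a tag derived
# from the image digest (the digest below is a placeholder):
DIGEST="sha256:abc123"
SIG_TAG="${DIGEST/:/-}.sig"      # the ".sig" tag that appears next to the image
echo "$SIG_TAG"
```

Running the tail of this prints the signature tag name, which is the same shape as the `.sig` entry that shows up in the registry after signing.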
I also have some other YAMLs, which we'll come back to later. What I want to do now is sign my image and see what Cosign does when I sign it. So I'm going to use cosign sign. I have another public/private key pair under a keys folder, but I'm going to first sign with the key in this folder itself. Basically, it took my password and signed, and that's all Cosign is telling me right now. So let's go back to the registry and see what happened there. If I refresh this screen, something else has shown up: previously there was just latest, but now there's a signature tag which ends with .sig and contains this big string, which is actually the digest. It's telling me there's a signature attached to this image. Now, I could verify the signature with Cosign itself, but what I want to show is how to verify it with a Kyverno policy. I'm going to use this policy here, which right now is checking for multiple keys. Starting from the top: the policy name is check-image, and it says that on failure it wants to enforce. Kyverno supports two different modes: it can audit, meaning it just creates a policy report without enforcing, or it can enforce the configuration, in which case it blocks the container from being deployed. It also says don't run this in the background; I just want to run it on the webhook.
It configures the webhook timeout as 30 seconds, because we're going to call out and check the signature from the registry. It says that if this policy webhook fails, it fails closed, so it's not going to let images through if the webhook doesn't respond for any reason. And then it says: if you see a pod, make sure every image is signed by two keys. But we've only signed with one key so far, so I'm going to comment that second key out; we'll start with one and apply this policy. Let's go back to our shell. I'll do a kubectl apply of the policy, which is check-images.yaml. If I do get cpol, I'll see it's there, and with -o yaml it shows me the policy. By the way, Kyverno again automatically creates that policy for the standard pod controllers by default; you can manage this, or turn it off if you just want to apply it at the pod level, but that's what we're seeing here. So this policy is now configured. Actually, what I should have shown first: let's go ahead and delete that signature, just as a test, and see what happens if I try to deploy that same image with this policy configured but without the signature present. So I'll go back and try to run that same pause2 image. We have this policy in place, and as expected, it checked for the signature and said it's not found, so the request is denied. So let's go ahead and sign it again using the cosign sign command.
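The policy being applied looks roughly like this sketch, based on the Kyverno 1.4/1.5-era `verifyImages` schema (the image pattern and key contents are placeholders, and later Kyverno releases restructured this rule type):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image
spec:
  validationFailureAction: enforce   # block unsigned images rather than just audit
  background: false                  # webhook-only; skip background scans
  webhookTimeoutSeconds: 30          # allow time for the registry round-trip
  failurePolicy: Fail                # fail closed if the webhook is unreachable
  rules:
    - name: check-signature
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        - image: "ghcr.io/example/*"       # which images this rule applies to
          key: |-
            -----BEGIN PUBLIC KEY-----
            (contents of cosign.pub, elided)
            -----END PUBLIC KEY-----
```

The `key` field carries the Cosign public key; any pod image matching the pattern must have a signature in the registry that verifies against it.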
We'll use the same key and push the signature back into our repo. Once that's done, if I run pause2 again, I expect the pod to get created, which it did. One other interesting thing to point out: once the signature is verified, Kyverno will also replace the tag in the pod YAML with the digest. To show you what I'm talking about, let's grep for the image. When I ran the command, I just gave it the tag, which by default resolves to latest. But as you see here, once Kyverno verified the signature, it replaced that tag, which is mutable, with the immutable digest. That's important, because otherwise there's a window in which a sophisticated enough attacker could potentially replace the digest or, if you just pull by tag, masquerade some other container image under that tag. Once you use the digest, any time that container is pulled, you're guaranteed to get that exact same image from your OCI registry.

There have been a few questions from the audience, and this would be a good moment to take them, if you want.

Okay. From Fendi Yusuf: how did the signed artifact propagate to GitHub? Did you do a push?

Cosign automatically pushes that signature. By default, Cosign assumes that the registry where the image is, is also where you want to push your signatures. There is an option to specify a different registry if you want to manage your signatures in a central registry, or any other repo you like, but by default Cosign will push into that same registry.
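The tag-to-digest rewrite can be illustrated with a tiny shell sketch (the image name and digest are placeholders):

```shell
# Before verification the pod spec references a mutable tag; after Kyverno
# verifies the signature it rewrites the reference to the immutable digest.
TAG_REF="ghcr.io/example/pause:latest"
DIGEST="sha256:abc123"                 # placeholder; real digests are 64 hex chars
PINNED="${TAG_REF%:*}@${DIGEST}"       # strip the tag, append the digest
echo "$PINNED"
```

The pinned form (`repo@sha256:...`) is what ends up in the admitted pod spec, so every subsequent pull is content-addressed.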
There's also an option in Cosign, and this again speaks to its flexibility, to create a local signature, if you don't want to push it anywhere. But by default, it pushes back into the same OCI registry where the image is.

Great. Then from BiGuy86 again, thank you so much for the multiple comments and questions: can you please share the repo with the list of commands? Can you share it after the presentation at some point?

Oh, okay. Sorry, was the request to share the repo, or the actual commands that I'm going through here, too?

Well, if BiGuy86 has any specifics, please let us know, but I would imagine it's the commands. Both, okay, they say both.

Okay, will do. And certainly feel free to reach out: I'm Jim Bugwadia on the Kubernetes and CNCF Slack, and you can also find me on the Kyverno Slack channel. Reach out if you get stuck with any of this.

Okay. And then the last question, from a while ago already, but we were so deep in the demo: from Pat Moka, how do employees receive keys in an organization? Is that solved? That was around when we were talking about Cosign as well as SLSA.

Right. Yeah, that's a good question. What I'm going to demo today just uses public/private keys, and it does become difficult to manage keys at scale. This is what Fulcio and Rekor are solving with the keyless option for signing, which is very cool: once you authenticate with your preferred OIDC server or identity management tool, it generates temporary keys to do the signature.
And then Cosign will check the transparency log to make sure — both from a timestamp point of view and from the fact that there was a valid identity — that those keys match. Those are marked as experimental features, but soon to be production ready; we can do more demos and have more coverage of that, and Kyverno will of course also support this as soon as it's ready. That will reduce the burden of managing keys in a larger organization. One other thing you can do today — with things like GitHub secrets and other secret managers, PKI, or KMS-type infrastructure — is start with a central key. But then you have to manage, rotate, and protect that key, which becomes important. — Perfect. Thank you so much for the comments and questions; keep them coming if there are any more. But yeah, we can continue with the scheduled programming as well. — Okay. So far, to quickly recap what we have done: we applied a policy, we signed an image, we verified that without the signature the image was blocked, and with the signature we are now able to run that image. If you recall, I also commented out the requirement for multiple signatures. So I'm going to undo that, and we're going to require multiple signatures for this particular image. By the way, just to show very quickly here, the first rule is saying star — every image must be signed with the first key — but then the second says that only images in a particular repo must also be signed with the next key.
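The two-rule policy described here might look roughly like this — the keys and image patterns are placeholders, and the field names track Kyverno's `verifyImages` feature as it existed around this demo (newer releases restructure this around `imageReferences` and `attestors`):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image-signatures
spec:
  validationFailureAction: enforce
  background: false
  rules:
    # Rule 1: every image must carry the org-wide signature.
    - name: require-org-signature
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        - image: "*"
          key: |-
            -----BEGIN PUBLIC KEY-----
            ...org-wide public key...
            -----END PUBLIC KEY-----
    # Rule 2: images from one repo must also carry a team signature.
    - name: require-team-signature
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        - image: "ghcr.io/example/team-a/*"
          key: |-
            -----BEGIN PUBLIC KEY-----
            ...team public key...
            -----END PUBLIC KEY-----
```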
So if you're thinking of an organizational structure, you could manage your keys in that manner: a central key which asserts that all of these images come from my org, and then a group-specific key as well. And with Kyverno policies you can match this to any namespace, so you could split this into two policies — one namespace-based, the other cluster-wide. A lot of flexibility in how you organize and manage this for your teams and for your organization. But let's go ahead now and apply this particular policy. Let's check — I think the pod was running, so we'll delete it and try to run this again. What I'm expecting to see is that this time it says one of the signatures is there, but the next one failed: it can't match the second signature, so it denied it again. To fix that, we'll go back to Cosign and sign this image with the second key, which was in this keys folder I just happen to have here. So I run the Cosign command again. And if I check back in my registry, I shouldn't see anything new — I'll still just see the one signature artifact, because all of the signatures are captured and put in there together. But if we now try to run that image, we can, because both signatures are present. So far, so good: we've been able to sign and verify signatures, and we've seen that we can require multiple signatures.
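The second signing pass might look like this — key paths and image name are placeholders; both signatures end up appended to the same signature artifact in the registry:

```shell
# Add a second signature with the team key.
cosign sign --key keys/team-a.key ghcr.io/example/team-a/my-app:v1

# Verify against each public key; both verifications should now succeed.
cosign verify --key keys/org.pub    ghcr.io/example/team-a/my-app:v1
cosign verify --key keys/team-a.pub ghcr.io/example/team-a/my-app:v1
```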
But that's certainly a great starting point, and as you can see, it's not difficult to do — it should be something everyone looks at adopting. The next step would be: how do we get to the upper levels of the SLSA framework? That's where it comes back to producing the right metadata in your CI/CD pipeline and then verifying that metadata as part of your policy set itself. One quick thing, let me see if I can find this — if you go to in-toto attestations, this is the definition I was mentioning. It's a standard, a work in progress, and it references either a SLSA provenance type or other types of attestations you could attach. So this is the structure of an in-toto attestation. Again, an attestation is signed metadata that you're creating and uploading, and I'll show how you can do this with Cosign. By the way, there's another open source project I should mention: Tekton Chains is doing a great job at producing a lot of these attestations and making sure they're recorded and can be signed and pushed into either an OCI registry or a transparency log, which can then be audited. So here is a more complex example of an attestation. There are simpler examples, and we'll actually use this one: let's say I want to record a code review. Here they're using a custom attestation type, which says which Git repo it covers, who the authors and reviewers were, and what the review was able to check.
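The in-toto statement structure referenced here looks roughly like this — all field values are placeholders, and the in-toto attestation spec is the authoritative schema:

```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "subject": [
    {
      "name": "ghcr.io/example/my-app",
      "digest": { "sha256": "...image digest..." }
    }
  ],
  "predicateType": "https://example.com/CodeReview/v1",
  "predicate": {
    "repo": "https://github.com/example/my-app",
    "branch": "main",
    "author": "alice@example.com",
    "reviewers": ["bob@example.com"]
  }
}
```

The `subject` binds the metadata to a specific image digest, and the `predicate` is the free-form inner body that a policy engine can later inspect.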
Another simple example: if I have some automated tests and I want to create a signed attestation from a builder or my test executor saying that a test has run and passed for this particular image. So let's try that out and see how it works. Here, what I have — if I look at, yes, this review.json — is a very simple predicate. A predicate is part of a statement, which is part of an in-toto attestation; it's the inner body of what I'm going to actually check in my policy. It's just saying: this belongs to this particular repo URI, I'm checking on the main branch, there's an author, and there's a reviewer. If I want to sign this with Cosign, the command I would use is cosign attest: I give it my key, a predicate (which is just JSON), an attestation type, and the image it applies to, which is the subject in this case. So I'm going to go ahead and sign this, and let's see what happens. It's using this payload and uploading it to our OCI registry. If I go back to our registry now and look at that image, I see there's a .att file that showed up, and that contains the signed attestation — and you can have several of these. So we'll look at a simple example of verifying that the image has this particular attestation. Then what do you do with this? How do you verify it at admission control? Let's look at the policy we will apply next.
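The attest step above might look like this — the key, predicate file, type URI, and image name are all placeholders for the demo setup:

```shell
# Wrap review.json in an in-toto statement, sign it, and push the
# resulting .att artifact next to the image in the OCI registry.
cosign attest --key cosign.key \
  --predicate review.json \
  --type https://example.com/CodeReview/v1 \
  ghcr.io/example/my-app:v1
```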
So I have this Kyverno policy that checks for a code review. The first thing it does: it has a similar verifyImages block which says, if my image is coming from this particular repo and is signed with this public key, then I expect these attestations on it. And here, if you recall, the type of attestation we want is the code review. Within that attestation, it checks that it comes from a particular GitHub repo — the repo URI has to match, the branch has to match, and the reviewer has to be in the set of allowed reviewers specified at the bottom. So here Anna, Tim, and Bob are the allowed reviewers. Of course, if you're putting this in production, you might want to put that list in a ConfigMap or make it a bit more declarative so it can be extended. But the basics of the policy are: we want to check for this particular attestation when we're admitting this image. So let's apply that policy and see what happens — it should run, because we've already signed and pushed that attestation. And just to prove that it's doing something, we can then change the check and make sure it fails. So we'll first delete the pod we were running before and apply this policy — check-review is the policy name. Now the policy is created. To verify that it's created and active, I'll do get clusterpolicies; it shows they're both set to enforce and both ready. So let's create our pod again and run this container. Now not only did we verify the signature, we verified the attestation. Just to prove this actually did something, let's change something.
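The attestation-checking policy described here might be sketched like this — key, image pattern, URIs, and names are placeholders, and the `attestations`/`conditions` syntax tracks the Kyverno release current at the time of this demo, so check current docs before reuse:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-code-review
spec:
  validationFailureAction: enforce
  background: false
  rules:
    - name: require-code-review
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        - image: "ghcr.io/example/*"
          key: |-
            -----BEGIN PUBLIC KEY-----
            ...public key...
            -----END PUBLIC KEY-----
          attestations:
            # The predicateType selects which attestation to inspect;
            # the conditions run against its predicate body.
            - predicateType: https://example.com/CodeReview/v1
              conditions:
                - all:
                    - key: "{{ repo }}"
                      operator: Equals
                      value: "https://github.com/example/my-app"
                    - key: "{{ branch }}"
                      operator: Equals
                      value: "main"
                    - key: "{{ reviewers }}"
                      operator: In
                      value:
                        - "anna@example.com"
                        - "tim@example.com"
                        - "bob@example.com"
```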
So let's go ahead and delete the pod first. What I'll do is edit this policy right in my cluster. If I look at who signed off on the review, it was Bob. So let's remove Bob from this list and say maybe Bob left our organization or group — we're just removing that entry and updating the policy. At this point, if we run that particular image again — if we did this correctly — it should check and say: the image is signed, but it did not pass this particular attestation, because Bob's not on the reviewer list. Which is exactly what we see from this output: Kyverno was able to block that and reject the image from running. So this is a simple but powerful example of how flexible these tools can be, and of the power of using Cosign and Kyverno together. You can now start not just creating this metadata — again, Tekton or GitHub Actions can be used to create some of it — but signing it, pushing it into your OCI registries, and verifying it at admission control, to start achieving some of the requirements we looked at in the SLSA framework. So I know that was quite a lot, but that's what I wanted to cover in the demo, and hopefully it all made sense end to end. — Yeah, for sure. Really great stuff, thank you so much. There have been a few comments and a few questions as well, so we can get to them now. There was a comment and two questions from Secret Tech. They start: thanks, Jim, for taking the time. Question one: what do you recommend as a guideline for a private registry — i.e., is there a standard set of Kyverno policies we can apply by default based on security exposure?
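The reviewer check the policy performs can be sketched in plain Python — decode the DSSE payload from the .att artifact, parse the in-toto statement, and test the reviewers against an allow-list. The predicate field names follow the hypothetical review predicate used in this demo:

```python
import base64
import json

def reviewers_allowed(dsse_envelope: dict, allowed: set) -> bool:
    """Decode a DSSE envelope's payload into an in-toto statement and
    check that every reviewer in the predicate is on the allow-list."""
    statement = json.loads(base64.b64decode(dsse_envelope["payload"]))
    reviewers = statement["predicate"]["reviewers"]
    return all(r in allowed for r in reviewers)

# Build a toy envelope the way `cosign attest` would: the payload is the
# base64-encoded in-toto statement (signatures omitted for brevity).
statement = {
    "_type": "https://in-toto.io/Statement/v0.1",
    "predicateType": "https://example.com/CodeReview/v1",
    "predicate": {
        "repo": "https://github.com/example/my-app",
        "branch": "main",
        "reviewers": ["bob@example.com"],
    },
}
envelope = {"payload": base64.b64encode(json.dumps(statement).encode()).decode()}

# With Bob on the allow-list the check passes; remove him and it fails,
# mirroring the admission-control behavior shown in the demo.
print(reviewers_allowed(envelope, {"anna@example.com", "tim@example.com", "bob@example.com"}))  # True
print(reviewers_allowed(envelope, {"anna@example.com", "tim@example.com"}))  # False
```

A real admission controller also verifies the envelope's signature before trusting the payload; this sketch covers only the predicate condition.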
And then question two: how can we contribute to existing projects to extend this? And please let us know if there's any project we can contribute to, to build a policy standard. — Yes, a great set of questions. In terms of private registries, there are several good options out there, both commercial and open source. I'm not going to name any particular ones, but Cosign keeps a list of registries it has been tested and verified with — if you go to the Cosign GitHub page and search for registries, there's a list, public and private, that they have validated against. Any OCI-compliant registry should work in theory, but it's always good to check that it's been tested. In terms of the set of Kyverno policies: if you go to the kyverno.io site — and I can quickly share that again — there is a standard set of policies to start with. Click on Policies and there are about 80 or so best-practice and other policies. The basics to start with, if you're not running PSPs or some other way of managing pod security, is definitely the pod security set, which implements the Pod Security Standards and gives you some flexible ways of managing pod security. Beyond that, there are several best practices you should look at to see what makes sense to deploy and manage. So it's a good set of policies to get started with. And in terms of contributing to open source: Kyverno is a CNCF project, and Sigstore, like I mentioned, is a Linux Foundation project. Both have weekly community meetings.
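A sketch of installing Kyverno and that starter policy set — the repo and chart names follow the Kyverno project's published Helm charts, but check the current docs before relying on them:

```shell
# Install Kyverno itself, then the curated pod-security policy set.
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
helm install kyverno-policies kyverno/kyverno-policies -n kyverno
```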
If you go to the community sites for both, they're very active, and we welcome any form of contribution, feedback, or even just your usage patterns and use cases — anything you might want to share. Both communities are very welcoming and open and are seeking new contributors, so feel free to jump in. — Perfect, everyone jump in as fast as you can then. There was also a question about the recordings, and they are available, as pointed out in the chat already, along with the earlier Kyverno presentation — so perfect timing for this session, I think. And thank you so much for talking through these things. And then there was a comment from Duke again: mutation policies are pretty impressive, and they would love to hear more about them in the future. — Yes, we certainly see a lot of interesting use cases emerging around both mutate and generate policies, so totally agree. We have some samples, but we definitely can and will do more in terms of showing the use cases developing around mutate and generate policies. — Great. We are getting to the end of our time. I have one question left, and then if there's anything else, the audience can ask super quickly — but after that we have to start wrapping up. So, since there are so many projects and tools — and thank you so much for the really great overview of the space — it can get a bit confusing for people who are getting started. What do you recommend to get started? What are the things that everyone should be doing? — Right, yeah. As you approach this space, it does look a bit overwhelming — there are so many new names, new projects, new things to learn about.
But just break it down. If you go back to the SLSA framework and look at that picture, it shows that for any software you have to build it in your CI/CD pipeline, you're using OCI registries already, and then you want to verify some of this. For that, Cosign is certainly emerging as one of the key projects you'll want to get familiar with within Sigstore. Like I mentioned, Fulcio and Rekor are also very interesting projects — definitely take a look at what they can do. And then for admission controls, both Kyverno and OPA Gatekeeper are CNCF projects. I'm a Kyverno maintainer, so I did all of the demos with Kyverno. I see there was another question about whether OPA can support this — there is ongoing work for OPA and Gatekeeper to support it as well, so certainly some of the same validation and checks can be done there. It's a slightly different approach and the policies will look different, but the basics will be covered. — Perfect. So we've answered the last question from the audience already — that's wonderful. And as noted in the text there, you can join the Cloud Native Live channel on the CNCF Slack to continue the discussion. And I think, Jim, you mentioned that you're happy to talk on Twitter and all of the other places as well. So if there are any more questions, I think Jim is happy to keep the discussion going. But now we've got to start wrapping things up. Thank you so much to Jim, and thank you so much to the audience for a lot of questions and comments — really great to see so many here today. So thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have Jim here talking about supply chain security with Cosign and Kyverno.
Really great interactions, thank you so much. And next week we have Jalee Rosen presenting an introduction to Kubescape, an open source tool to test Kubernetes deployments — really looking forward to that one. Thank you, everyone, for joining us today, and see you next week. — Thanks, Annie. Thanks, everyone.