Hey everyone, I'm glad to be speaking at SupplyChainSecurityCon as part of Open Source Summit North America 2023. Today I will be speaking about securing your Kubernetes manifests with Sigstore and Cosign, and we will go through the different options you have out there. We will start with a very quick introduction to what Sigstore and Cosign are, then review the typical signing workflow for container images with Cosign, and then we will see three main scenarios to verify the signature of your Kubernetes manifests within your Kubernetes cluster. The first one will be with Kyverno, an admission controller and policy engine, and the last two will be verifying your Helm charts or your OCI images with Flux, a GitOps tool. But first, let's make sure we define what a software supply chain is. From a developer perspective, I have my code, my source; I will trigger a build, pull in some dependencies, and package what I want to expose to my consumer. But here we can see that this consumer is actually consuming this package, this artifact, in our case maybe a container image. We know that it's coming from the right place, but do we know it's what the owner intended to provide and expose to me as a consumer? So let's imagine you have a compromised account within this workflow, a compromised build process, a compromised package repository. Here you can see, labeled from A to H, the different places where you could have compromised stages and steps within your workflow. That's what SLSA, the website I'm sharing here, is defining: a notion of zero trust within the software supply chain. And SLSA actually helps us meet a set of requirements to be secure within our software supply chain.
And one of the criteria is provenance: the provenance of dependencies, the provenance of code, the provenance of packages. You can meet the different levels, SLSA 1, 2, 3, 4, depending on which criteria and requirements you satisfy. Provenance is part of that, so it's very important as a reference as well. And Sigstore is here to directly answer this need: I need to guarantee that the artifact is, yes, coming from a repository, a registry, but is actually coming from the owner, with a signature. So I will double-check that this artifact is signed with a well-defined signature. With the Sigstore ecosystem and tools, you can sign your code, verify signatures, and monitor activities as well. I won't go into too much detail on the different Sigstore tools; the main one we will see today is Cosign. When it comes to signing code, you can see a fast-evolving ecosystem where you can sign Go packages, Python packages, Ruby packages, Java packages, Rust packages, as well as your Git commits, your container images, your Helm charts, or, more generically, OCI images, which we will go and see in this presentation. So let's look at the typical, very basic and simple workflow for signing container images, and with that, later, we will see how we can draw a parallel for signing and verifying our Kubernetes manifests. First, I need to build my container image locally with docker build; if you are using Podman or Buildah or another tool, you can build your container image locally with those too. Then I need to push this image to an OCI registry. I have mentioned OCI multiple times, so let me define it a little bit. OCI stands for Open Container Initiative. It's under the Linux Foundation umbrella, and the goal is to define standards and specifications around what a container image is, what a container runtime is, what a container registry is, in a very generic way, not just talking about Docker.
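The build, push, sign, verify loop introduced here can be sketched end to end with the Cosign CLI. This is a minimal sketch assuming a hypothetical `registry.example.com/app` image; `<digest>` is a placeholder for the image's actual immutable digest, and exact flags may vary by Cosign version.

```shell
# Build and push the image (Podman or Buildah work the same way)
docker build -t registry.example.com/app:1.0.0 .
docker push registry.example.com/app:1.0.0

# Generate a local key pair: cosign.key (private) and cosign.pub (public)
cosign generate-key-pair

# Sign the image by its immutable digest, not the mutable tag
cosign sign --key cosign.key registry.example.com/app@sha256:<digest>

# Verify the signature locally with the public key
cosign verify --key cosign.pub registry.example.com/app@sha256:<digest>
```

Signing by digest rather than by tag matters because a tag can be re-pointed to a different image, while the digest uniquely identifies the content that was signed.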
And actually, an OCI image is able to package any file: a Kubernetes manifest, as we will see later, a README.md, an MP4 or MP3 file. You can archive, bundle, and package any file within your OCI image. And an OCI registry, in our case, will host your artifact, your OCI image. So a Docker container image is actually an OCI image. So here, I need to push my container image, and then I will use Cosign locally. I'm taking the approach of generating my key pair, the public key and the private key, locally. I will use the private key to sign this container image, via the immutable digest of this image, and I will verify with the public key locally. That's the typical workflow for a container image, right? Now, with this workflow, how do I guarantee, from within my Kubernetes cluster, that any container image has actually been signed through this workflow, and that the cluster only accepts container images carrying this signature? I will use an admission controller, and in our case, Kyverno is one of the tools you can use. So I'm using Cosign, my container image is signed in my container registry, and Kyverno, an admission controller and policy engine installed in my cluster, will check it via a policy, which is this one here. I can define this Kyverno ClusterPolicy manifest, where you can see a verifyImages specification: for any container image, please, Kyverno, could you check that this container image is actually signed with this public key? Stored, in this case, in a secret; it could also be in a KMS from your provider, et cetera. Here we are making sure any container image is signed with this public key. Very important for container images. There is another approach, if you know the policy-controller from Sigstore; that's the same idea.
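A Kyverno ClusterPolicy along the lines just described might look like the following sketch. The registry pattern and the public key are hypothetical placeholders, and the exact schema can vary between Kyverno versions.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signature
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"   # hypothetical registry pattern
          attestors:
            - entries:
                - keys:
                    # Cosign public key; could also reference a Secret or a KMS
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```

With `validationFailureAction: Enforce`, any Pod whose image matches the pattern but fails signature verification is rejected at admission time.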
Policy-controller is an admission controller doing just that: where Kyverno is a general policy engine covering a lot of different policies, governance, and security from within your cluster, policy-controller does this very niche and specific signature check on your container images. Same approach, with a secret in this case; a KMS also works. Here I'm defining my ClusterImagePolicy using Sigstore's policy-controller. If you're using Connaisseur, it's the same approach. If you are using Kubewarden, same approach. So those are the typical tools you have for container image signature verification, right? Now let's talk about how to do that with Kubernetes manifests, and why you would want to do that: so that you can make sure that any Kubernetes manifest, a Deployment, a Secret, a ConfigMap, you name it, is actually coming from, and trusted as coming from, your corporation, your enterprise, within your workflow and your software supply chain. There is a first approach: using the kubectl plugin for signing Kubernetes manifests. I put the link to the tool on the slide. The typical workflow is running kubectl sigstore sign with my YAML file, my manifest, targeting my OCI registry. This YAML file will be stored as an OCI image and will be signed with this key pair, the private key actually. And the other half of the workflow is verifying, right? I will use kubectl sigstore verify against this YAML file, targeting this OCI image within the OCI registry, with this public key. So that's the typical scenario and workflow for signing your Kubernetes manifests. As a side note, the kubectl sigstore sign command will actually generate two annotations, the message and the signature, where the signature is stored and will be leveraged by the verify command. Just so you know, this YAML file then needs to stay immutable, right?
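The sign and verify round trip with the kubectl plugin can be sketched as follows. This is a sketch assuming a hypothetical registry path and the default key-pair flow; flag names may differ between versions of the k8s-manifest-sigstore plugin.

```shell
# Sign the manifest with the private key; the signed bundle is pushed
# to the OCI registry, and the manifest gains two annotations
# (the message and the signature)
kubectl sigstore sign -f deployment.yaml \
  -k cosign.key \
  -i registry.example.com/manifests/deployment-signed

# Verify the manifest against the signed bundle, using the public key
kubectl sigstore verify -f deployment.yaml \
  -k cosign.pub \
  -i registry.example.com/manifests/deployment-signed
```

The verify command recomputes the manifest's message and checks it against the stored signature, which is why any later change to the YAML breaks verification.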
As soon as you have updates to this resource within your cluster — take the example of the replicas of a Deployment, or if you are doing some mutation, adding an annotation, adding a label, adding other metadata to your resource actually running in Kubernetes — that can be a bit of a challenge here. So you need to be aware of that. And actually, Kyverno is maybe the only tool able to verify this workflow at admission control time: for manifests signed with the plugin via kubectl sigstore sign, Kyverno can perform the verification when the resource is deployed to Kubernetes, before it's admitted into your cluster. So that's the typical workflow. And here is the ClusterPolicy. Earlier, I mentioned the ClusterPolicy for container images; this one verifies the signature of your Kubernetes manifests, previously signed by the kubectl sigstore sign plugin we just illustrated. There is a validate section where you can target a specific secret, a KMS, or another source for your public key. And you can see this very important subsection, ignoreFields, which is there to handle mutable versus immutable fields. Here, I'm illustrating with replicas: maybe you have autoscaling in place, so these values are mutable, they can change, and you don't want that to break the signature check, because otherwise you don't want the resource to be changed at all. Now, let's talk about Helm charts. A Helm chart is one of the ways to package your Kubernetes manifests, right? That's the second approach I would like to illustrate, so let's talk about the typical workflow to sign your Helm chart. You run helm package, like we did with docker build. Then, like we did with docker push, there is helm push to push this actual OCI image, the chart, to your OCI registry. And I will generate my key pair locally, the public and private key.
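Returning for a moment to the Kyverno manifest-verification policy just described, a minimal sketch could look like this. The key is a placeholder, the `ignoreFields` entry mirrors the replicas example from the slide, and the exact schema may vary by Kyverno version.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-signed-manifests
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: verify-manifest-signature
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        manifests:
          attestors:
            - entries:
                - keys:
                    # Cosign public key; a Secret or KMS reference also works
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
          # Exclude mutable fields (e.g. replicas managed by an autoscaler)
          # from the signature check
          ignoreFields:
            - objects:
                - kind: Deployment
              fields:
                - spec.replicas
```

Without the `ignoreFields` block, an autoscaler changing `spec.replicas` would invalidate the signature on a resource that is otherwise untouched.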
And I will still use Cosign to sign this OCI image, the chart, actually. And I have the verify command, right, with the public key in this case. Pretty simple, similar to the workflow we have with Cosign and Docker for your container images; same for your Helm chart. I just want to highlight that when you do helm package, there is this --sign parameter, which does not yet support Cosign. I shared at the bottom of this slide the link to the issue mentioning that it's coming in the future, so you will be able to skip the Cosign commands at the bottom of this slide and use helm package --sign directly. Pretty convenient. So just for your information, in the future this workflow will be simplified. Now, so far it was all local: cosign sign and cosign verify. Again, at admission control time within my Kubernetes cluster, how can I guarantee that any Helm chart is actually signed, right? Out there, there is actually one tool proposing that: Flux, a GitOps tool. Flux pulls, versus pushing, your manifests: it pulls your manifests from a Git repository or from an OCI registry. And in this OCI registry, you can have your manifests bundled as a Helm chart; I will illustrate the other method right after. Flux is able to pull your Helm chart from your OCI registry as an OCI image, right? And that's the typical workflow: you have Flux, your GitOps tool, installed, and it will pull your manifests, not from a Git repository, but from an OCI registry, and deploy the Kubernetes manifests bundled as an OCI image. And the typical manifest to tell Flux to do that defines the HelmRepository, with the HelmChart pointing to your OCI registry, as well as a verify section for the Helm chart, where you define, as a secret for example, your Cosign public key. So pretty convenient, pretty simple, and yet powerful.
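A sketch of the Flux resources just described could look like the following. The chart name, namespace, and registry URL are hypothetical, and the API versions and `verify` field placement may differ between Flux releases; the `cosign-pub` Secret is assumed to hold the Cosign public key.

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: my-charts
  namespace: flux-system
spec:
  type: oci                             # pull charts from an OCI registry
  interval: 5m
  url: oci://registry.example.com/charts
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmChart
metadata:
  name: my-app
  namespace: flux-system
spec:
  chart: my-app
  interval: 5m
  sourceRef:
    kind: HelmRepository
    name: my-charts
  verify:
    provider: cosign                    # verify the chart's Cosign signature
    secretRef:
      name: cosign-pub                  # Secret holding the public key
```

If verification fails, Flux refuses to reconcile the chart, so unsigned or tampered charts never reach the cluster.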
More generically than a Helm chart, you can bundle any Kubernetes manifest, and any file like I mentioned earlier, as OCI images; again, OCI stands for Open Container Initiative. And I will use ORAS. ORAS is part of the CNCF, Cloud Native Computing Foundation, ecosystem of tools and projects out there. And like we did docker push or helm push, we can do oras push: I can bundle any files locally from the current folder and push those files to my OCI registry, making an archive, an OCI image, within this OCI registry. Again, I will use Cosign to generate my private and public key, and I will sign this container image, or OCI image to be more generic and precise. And then I can still verify. So pretty much the same workflow as with a Docker container image or a Helm chart, here with the more generic approach of an OCI image. Pretty simple, and again, a yet powerful workflow. Now, what is the tool to validate and verify the signature of this OCI image within my cluster, before it's admitted and deployed in the cluster? Flux is the tool allowing you to do that. Again, Flux is able to pull manifests from a Git repository, a Helm registry, or, more generically, OCI images. And again, my Kubernetes manifests will be bundled as an OCI image within my OCI registry, and GitOps will have this pull mechanism. Here is a manifest where you define your OCIRepository, like we did earlier with the Helm chart. Just one resource in this case, OCIRepository: the URL of the registry, the name of the image, and the verify section, with provider cosign and, again, the public key stored as a secret, right? And that's a wrap. Thank you for listening. I hope you enjoyed this talk, where we demonstrated how you can verify Cosign signatures not just for your container images, but also for your Kubernetes manifests, bundled as OCI images more generically, or more specifically as a Helm chart.
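A sketch of that single OCIRepository resource might look like this; the URL, tag, and Secret name are hypothetical placeholders, and the API version may differ between Flux releases.

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: app-manifests
  namespace: flux-system
spec:
  interval: 5m
  url: oci://registry.example.com/manifests/app   # manifests pushed with oras
  ref:
    tag: 1.0.0
  verify:
    provider: cosign                # check the Cosign signature on the artifact
    secretRef:
      name: cosign-pub              # Secret holding the Cosign public key
```

A Flux Kustomization can then point at this OCIRepository as its source, and the bundled manifests are only applied once the signature check passes.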
And the three options illustrated were: Kyverno and the kubectl plugin for Kubernetes manifests; the second one was Flux for Helm charts; and the third one was still Flux for OCI images. So you have your options out there, and you can add more security within your software supply chain. And I hope you will be able to leverage some of the resources I'm sharing as well. The first three links are some of my own experience that I have been sharing: about Kyverno, about using Cloud KMS with Cosign and policy-controller, and another talk about OCI and GitOps. And the last one, from the Kyverno ecosystem, is a more specific talk about the kubectl plugin I mentioned during this talk. Again, I hope you enjoyed this talk, and have a good rest of your conference. Thank you, everyone.