Hello everyone, welcome to our talk. My name is Raul and here with me is my colleague Victor. We both work at SUSE Rancher, and today we're going to talk about how to enforce a secure supply chain on Kubernetes. So let's start by defining what a supply chain is. A supply chain is everything that's needed to deliver your product. In terms of software, that means your source code, how you build that code into artifacts, and where those artifacts are stored. In the case of Kubernetes, you're probably building container images on a CI/CD pipeline, so that's what we need to secure. If you look at this picture, we took it from the... oh, actually you cannot see the full picture, but that's all right. We took it from the SLSA framework. If you start from the left, you have source integrity. The first thing we need to make sure is that our source code is protected and no one can access it and push malicious code. Then on the right-hand side we have build integrity, which is the part we're going to focus on today. We want to make sure that what we build from our source code is what we really see in our production cluster. And for that, what we're going to do is sign our images in our build pipelines and then verify those container images inside our Kubernetes cluster. So let's start talking about signing. And for signing, let me introduce Sigstore, which is a combination of technologies to handle signing and verification. These technologies include Cosign, which is the tool we use for signing. Then we have Fulcio, which is a certificate authority we use for issuing certificates. And then we have Rekor. Rekor is a transparency log. Think of it as a ledger where you can append records, but you cannot modify or delete any record, and anyone can query it for verification. In addition to container images, you can sign almost anything: a binary blob, for example. You can also sign any OCI artifact, such as Wasm modules or Helm charts.
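As a quick illustration of the traditional key-pair approach mentioned here, this is roughly what signing an arbitrary blob looks like with the Cosign CLI (the file name and key paths are just placeholders):

```shell
# Generate a key pair; writes cosign.key (private, password-protected)
# and cosign.pub (public, to be distributed for verification)
cosign generate-key-pair

# Sign an arbitrary file; the base64-encoded signature goes to stdout
cosign sign-blob --key cosign.key release-notes.txt > release-notes.txt.sig

# Verify the blob against the public key and the saved signature
cosign verify-blob --key cosign.pub \
  --signature release-notes.txt.sig release-notes.txt
```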
We're seeing a lot of adoption. A lot of people are starting to use Sigstore. Even Kubernetes itself started signing all of its release artifacts with Sigstore, starting with version 1.24. That means you can verify not just your own images; you can also verify third-party images if they're using Sigstore. And we can see a lot of people, at least in the open source community, using Sigstore. Okay, so let's talk a bit about signing. There are two ways of signing. One is the key pair, the traditional approach. You can create a key pair with Cosign, or you can even bring your own key pair. But you need to keep your private key secure, and then you need to distribute the public key for verification. And then there is a new way, which is called keyless. It's not actually keyless; it uses ephemeral keys. It's still experimental, but we find it very interesting. So let's talk more about this keyless workflow. Keyless uses OpenID Connect for authentication. OpenID Connect is an identity layer built on top of OAuth 2.0, and it allows clients to verify the identity of an end user against an authorization provider. The way Sigstore uses that is: when you try to create a new certificate, you request the certificate from Fulcio, passing it an OIDC token. Fulcio says, okay, you are who you say you are. Then it generates an ephemeral, short-lived certificate. You use this certificate for signing: you sign your image, then you store the signature in an OCI registry, and also in the Rekor transparency log. So you can later verify the signature against the Rekor transparency log without the need for a public key; you don't need to distribute a public key at all. It also has great support for automated environments, so we can integrate it in our pipelines. So let's see an example of signing. This is how you would sign keyless if you do it in your terminal: you type cosign sign with your container image, and then you are redirected to your OIDC provider, where you have to log in.
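A keyless signing session from a terminal looks roughly like this sketch (the image name and identity are placeholders, and the flag names follow Cosign 2.x):

```shell
# Keyless sign: opens a browser to authenticate against an OIDC provider.
# Fulcio issues a short-lived certificate; the signature is stored in the
# registry next to the image and recorded in the Rekor transparency log.
cosign sign --yes registry.example.com/myorg/myapp:1.0

# Keyless verify: no public key needed, just the expected identity
# (the OIDC subject) and the issuer that authenticated it.
cosign verify \
  --certificate-identity user@example.com \
  --certificate-oidc-issuer https://github.com/login/oauth \
  registry.example.com/myorg/myapp:1.0
```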
Once you log in, Fulcio will create a certificate, and Cosign will use that certificate for signing. Everything happens transparently, so you don't see any certificate at all. And then for verification, you just need to type cosign verify. As you can see there, it's the issuer and the subject that you use for verification. In this case, the subject is the email we used to authenticate with the OIDC provider, and the issuer was GitHub; that's what we used as the OIDC provider. So yeah, this is how you would do it on your local computer, but it's not great because it requires some human interaction. Fortunately, it has great support for pipelines. It has automatic support for GitHub Actions and Google Cloud, but it can integrate with any other system if you pass an identity token as a flag to Cosign. As you can see here, this is a GitHub Actions job, and we're not even fetching any OIDC token. That's because Cosign is smart enough to detect: okay, I'm inside a GitHub Action, so I will fetch the token and pass it to Fulcio to generate the certificate. You obviously have to give the job permission to get the ID token, which is what allows it to fetch the OIDC token from GitHub. So, okay, now we know how to sign our container images, and we know how to verify them, but let's see how we can do that in Kubernetes using admission control. And for that, I will hand over to Victor. So we have seen the primer on Sigstore and the secure supply chain from Raul. Now, how do we implement that in a cluster? Well, we have here a cluster. You can see the first blue box on top, that's the cluster. You can see the happy users on the left, and then etcd on the right, where everything gets stored, and then the reconcilers of the cluster just make it happen. And there's a concept in Kubernetes called a dynamic admission controller, which is what we see here. For those that don't know, a dynamic admission controller in Kubernetes allows us to register webhooks.
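The GitHub Actions job described here might look something like this sketch (the action version and image name are placeholders; the key part is the `id-token: write` permission):

```yaml
jobs:
  sign-image:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # lets the job request an OIDC token from GitHub
      packages: write   # lets the job push the signature to the registry
    steps:
      - uses: sigstore/cosign-installer@v3
      - name: Sign the container image (keyless)
        # Cosign detects it is running inside GitHub Actions and fetches
        # the OIDC token itself, so no token flag is needed here.
        run: cosign sign --yes ghcr.io/myorg/myapp:1.0
```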
Those webhooks hook into the mutating admission and validating admission phases when you make a request to the cluster, and they allow you to check or modify that request. So, for example, let's look at the request here below. We have a JSON request, maybe it's a pod; we just want to instantiate a pod with one container inside. But we want that pod to get an annotation called prod, because we want to label all our pods so we can select them and so on. We get the JSON request from kubectl, from the user. It goes through authentication, so it's validated in that sense. Then it goes to the mutating admission phase, where we have our own webhook, and there we mutate the JSON and add the prod annotation. Then it goes to schema validation, which checks the pod just to see if the JSON is correct. And then it goes to the validating admission phase, where we check that the annotation was actually prod, which is what we want, because maybe our cluster only accepts prod things. And then it passes through, goes to etcd, and that's it: it's in our cluster. So how does this tie in with everything we were talking about? Well, we need a policy engine. A policy engine, for example, could be Kubewarden, which is an open source project we are working on. With that policy engine, we can enforce signatures and the verification of those signatures. Kubewarden, as usual, is just installed as Helm charts; it's open source, with a community, and we have submitted it to the CNCF. What's so interesting about Kubewarden for this use case? Well, apart from being a normal policy engine like others in the space, the policies in Kubewarden are Wasm modules. What's Wasm? Maybe you don't know Wasm. Well, Wasm is a binary instruction format, so you just compile to it as you would for another architecture, like ARM.
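To make the admission flow above concrete, registering a mutating webhook like the prod-annotation example is done with a manifest roughly like this (the service name, namespace, and path are placeholders; the service behind it would add the annotation):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: add-prod-annotation
webhooks:
  - name: add-prod-annotation.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
        operations: ["CREATE"]
    clientConfig:
      service:
        namespace: default
        name: annotator-webhook   # hypothetical service that patches in the annotation
        path: /mutate
```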
Once you compile, the result is super small, and it's secure because it comes from the web browser world, so it runs in a VM with a very minimal amount of things open to the outside; you don't get a full POSIX environment by default. It's also portable, because a lot of languages compile to Wasm, for example Rust, Go, Swift, TypeScript. You also have domain-specific languages such as Rego. If you know this space, maybe you know other policy engines where you write the policies in Rego; maybe you like that, maybe you don't like writing policies in Rego so much. And that's the thing: by using Wasm, you can use your own language. Maybe you already know your language, maybe you want to use your own libraries, maybe you already have your CI set up for your language. Then that's it. A policy is going to be super simple, because it's just taking a JSON document of maybe 300 lines, checking for some things in that JSON, and returning a boolean. So if you know your language, it's reasonably easy to do. Also, as in other policy engines, policies can check the state of the cluster, or mutate requests, and so on. So how do we do this? How do we sign and verify with Kubewarden and OCI? Well, Raul has already explained it: you can do a cosign sign of your container image, and you can do the same with the policies from Kubewarden. Because the policies are Wasm modules, and OCI registries support containers, Helm charts, and Wasm modules as first-class artifacts, you just push your policy to your container registry and operate with it as a normal container image. So you would do a cosign sign and then a cosign verify; it's the same. In our case, for example, if you inspect the signature's subject, it's going to come from our organization if it's a policy that comes from us, and then the tooling checks for those things. Of course, you need to follow best practices; there are little, minute details that need to be followed to correctly verify the images.
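Because a policy is just an OCI artifact, the same Cosign commands from earlier apply to it unchanged. A sketch, with a placeholder registry path and a CI-style identity (the identity regexp and issuer here assume the policy was signed from a GitHub Actions workflow):

```shell
# Sign the policy's OCI artifact exactly like a container image
cosign sign --yes ghcr.io/myorg/policies/safe-annotations:v0.1.0

# Verify it before deploying it to the cluster: for a CI-signed artifact,
# the identity is the workflow, not an email
cosign verify \
  --certificate-identity-regexp 'https://github.com/myorg/.*' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  ghcr.io/myorg/policies/safe-annotations:v0.1.0
```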
That's why, for example, Kubewarden provides an abstraction layer where you just say: okay, I want to verify things that come from GitHub Actions, and then the organization is going to be kubewarden, and maybe the repository is going to be something else, and so on. With that, you can just verify the images: you configure Kubewarden with that verification configuration, and that's it. Okay, now you have the policies trusted and you have Kubewarden in the cluster. You can secure all the containers in the cluster: those that come from you, those that come from third parties, those that come from Kubernetes itself, and so on. How do you do that? Well, you just write the policy and run the policy. In this case, for example, we can use the policy that we have written, but you could write your own with the SDKs for the different languages that we provide, and that's it. So a policy for this case gets a pod, right? It's going to look at the containers that are inside; in this case it's going to be one container, but it could be any number of containers. You get the request, and what do you do? You check the signatures of those containers inside the pod, and if they don't match, you reject; and if the signatures verify correctly, you approve. But ah, best practices again. What happens if you get a container whose tag is 1.0? Tags are mutable. Someone could push whatever app-example:1.0 and then push it again with different contents. So you need to be careful, because you would lose information there. What you need to do instead is verify, and only instantiate in the cluster the digest of the container that you have verified. So you would take your app-example:1.0 and then append the digest, just to be sure that you are actually instantiating what you should. And how do we do that in the cluster?
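The Kubewarden policy deployment described next looks roughly like this sketch. The module URL and especially the settings schema here are illustrative only; the exact fields depend on the signature-verification policy you use, so check its documentation:

```yaml
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: verify-image-signatures
spec:
  # The policy itself is a Wasm module pulled from an OCI registry
  module: registry://ghcr.io/kubewarden/policies/verify-image-signatures:v0.1.0
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE", "UPDATE"]
  # Mutating, so the policy can rewrite image tags to pinned digests
  mutating: true
  settings:            # illustrative settings: require keyless signatures
    signatures:        # from a given organization's GitHub Actions
      - image: "ghcr.io/myorg/*"
        keylessPrefix:
          - issuer: https://token.actions.githubusercontent.com
            urlPrefix: https://github.com/myorg/
```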
Well, for Kubewarden, it's just doing a kubectl apply of a policy, this policy you can see here: spec.module points at the policy as an image, as a container image reference you would use anywhere else. That's how a policy looks. Then you set it to act on pods, on create and update, and then you pass the settings that you want. As we have discussed, what happens if you instantiate a pod and it verifies? Well, you do kubectl apply, and of course it mutates the pod with the digest. You can see it below: you have the app-example:0.1.0 and then you have the sha256 digest appended, so only what should go into the cluster gets there. And that's it for this little explanation. What's next? Well, we have talked about our policies and our containers, but what about our dependencies? Everybody depends on something. Well, how do you handle dependencies? You handle them with a software bill of materials, where your container comes with a list of what it depends on: other containers, libraries, and so on. With that, we take care of that problem. Also, we need to sign and verify everything. This is a community effort and a community problem. Everybody depends on something, so it really pays off to use something such as Sigstore, which comes from the Linux Foundation, and sign everything so we can depend on each other. And for Kubewarden specifically, we want to expand on our video material and on CI integration. We have shown GitHub, but it can also work with GitLab or whatever tool, whatever integration. And that's it. How do you get involved? Please just stop us around, find us maybe in the sea of masks. And if you want to talk to us about Kubewarden and the secure supply chain or Sigstore, we maintain some things upstream in Sigstore, so we're happy to share. Thank you very much for your attention. If you have any questions, please come by.