Thanks, folks. So about six months ago, Yuji presented at a Kyverno community meeting, and he talked about YAML signing. And the first question, when we were discussing this, was: well, why would you want to do that? Why sign YAMLs, what's the benefit, and what can we derive from this? And of course, once we got past that discussion, the next thing that came up was: how does this actually work? What does it mean to sign a YAML manifest? How do you get this into your cluster and verify it at the right levels? So we're going to cover those things today. We'll also discuss some of the challenges with signing and verifying YAML files, or resource manifests, in Kubernetes. And then we'll end with a full demo of how this flows end to end, using the sigstore CLI as well as Kyverno, which is a validating admission controller that runs inside your clusters. So let's start with the why. Why would you want to sign YAMLs? Just to recap, signing anything gives you three things. The first is that it makes sure the data you're receiving, the message or payload, is authentic: it comes from who you think it comes from. The second is that it makes sure it hasn't been tampered with, so there's integrity protection. And finally, it makes sure that once something is signed, the signer cannot go back and repudiate it. So those are the three things you get by signing pretty much anything: authenticity, integrity, and non-repudiation. Now, putting this in the context of Kubernetes: like it or not, most Kubernetes resources are declared as YAML manifest files. In most cases, though perhaps not all, these live in a Git repo somewhere, and you use a GitOps controller to push those YAML manifests into your cluster. And then controllers within your cluster look at that declared state and the current state of the system and reconcile the two.
So in Kubernetes, driving declarative configuration through YAML is pretty much standard practice. Putting that together with signatures, the benefits you get are: you make sure the data in those YAMLs, the contents of your resource manifests, is authentic; you make sure it hasn't been tampered with once it's deployed into the cluster; and you can prevent inadvertent or unauthorized changes which may occur in the cluster itself. Signing can also be used to build some pretty interesting workflows, like approvals: if you want multiple signatures before a resource is deployed, you can do that. Or you can prevent a resource from going into production until your QA or production team signs off on it. All right, so hopefully that gives you an idea of why we would want to sign a resource manifest. Diving into the how, and you'll see some of this live in the demo itself, there are two components we're going to use. The first is the sigstore CLI. This is in a repo under the sigstore organization, called k8s-manifest-sigstore. It contains a command line tool as well as a library, everything you need to sign and verify YAML resources. It builds on top of cosign, and it has logic to inspect the YAML and convert it into a couple of hashes, which are embedded in the YAML file itself. So you end up with annotations within the YAML file, which are then used for verification. The other component you'll see is Kyverno, which runs as an admission controller inside your cluster. Kyverno can act as a validating as well as a mutating webhook. Its role is to inspect any admission request and apply your configured policies; in this case, it would be a policy for verifying your signed YAML files. Kyverno can also generate resources, but that's obviously a different use case from what we're looking at today.
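To make the annotation idea concrete, here is a sketch of what a Deployment might look like after signing. The `cosign.sigstore.dev/*` annotation keys follow the convention k8s-manifest-sigstore uses, but the values are truncated placeholders and the exact keys may differ between versions, so treat this purely as an illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
  annotations:
    # base64-encoded (compressed) copy of the manifest body at signing time
    cosign.sigstore.dev/message: H4sIAAAAAAAA/5yRz2r...
    # base64-encoded signature over that body
    cosign.sigstore.dev/signature: MEUCIQDQ8s7V...
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: app
          image: nginx:1.25
```

Because the message and signature travel with the resource itself, no registry lookup is needed at verification time.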
So putting that all together in the end-to-end flow: typically in your CI/CD pipelines, or based on your approval workflow, you would use the CLI or some tool like a GitHub Action to do the signing, much like you would for image signing. And then once that resource is deployed, it's pretty much self-contained. There's no additional fetch from an OCI registry, which is slightly different from an image verification workflow; but given that all of the data is in the annotations inserted into the YAML, what we're able to do is verify the contents and either allow or deny the request. Now, this request may also originate inside the cluster. So in case somebody edits that YAML, you would be able to deny the change if it wasn't signed and didn't flow through the right phases of your deployment pipeline. So with that, Yuji is going to dive into the details of some of the challenges encountered while building this solution, and then we'll look at a live demo. Thank you, Jim. Okay, so let me introduce in a little more detail how this signing mechanism works. First, signing a YAML manifest. This is done with the CLI, actually a kubectl plugin. Some annotations are injected into the original YAML manifest: basically, it takes the YAML manifest body, encodes it, and signs it with the cosign sign-blob command. Then the encoded message and the encoded signature are embedded as annotations. To verify the signed YAML manifest, there are three steps. The first step is to pull the encoded message from the annotation. The second is to check the signature against this encoded YAML manifest. Then, once the encoded YAML manifest is confirmed to be correctly signed, the third step is to compare the YAML manifest and the encoded YAML manifest, checking whether the two contents match. This verification mechanism is embedded in Kyverno, and to configure it, we can use a Kyverno policy.
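The sign-and-verify steps just described can be sketched in a few lines of Python. This is a simplified stand-in, not the real implementation: an HMAC plays the role of the cosign sign-blob signature, JSON stands in for YAML serialization, and the `sig/...` annotation keys are made up for the example.

```python
import base64
import copy
import hashlib
import hmac
import json

KEY = b"demo-key"  # stand-in for a real cosign key pair
SIG_ANNOTATIONS = ("sig/message", "sig/signature")  # illustrative keys

def canonical(manifest: dict) -> bytes:
    """Serialize the manifest minus the signature annotations
    (stand-in for the YAML serialization the real tool signs)."""
    m = copy.deepcopy(manifest)
    meta = m.get("metadata", {})
    ann = meta.get("annotations", {})
    for k in SIG_ANNOTATIONS:
        ann.pop(k, None)
    if not ann:
        meta.pop("annotations", None)
    return json.dumps(m, sort_keys=True).encode()

def sign_manifest(manifest: dict) -> dict:
    """Embed the encoded manifest body and its signature as annotations."""
    body = canonical(manifest)
    sig = hmac.new(KEY, body, hashlib.sha256).digest()  # real tool: cosign sign-blob
    signed = copy.deepcopy(manifest)
    ann = signed.setdefault("metadata", {}).setdefault("annotations", {})
    ann["sig/message"] = base64.b64encode(body).decode()
    ann["sig/signature"] = base64.b64encode(sig).decode()
    return signed

def verify_manifest(manifest: dict) -> bool:
    ann = manifest.get("metadata", {}).get("annotations", {})
    # Step 1: pull the encoded message (and signature) from the annotations.
    if not all(k in ann for k in SIG_ANNOTATIONS):
        return False  # unsigned
    body = base64.b64decode(ann["sig/message"])
    sig = base64.b64decode(ann["sig/signature"])
    # Step 2: check the signature against the encoded manifest body.
    if not hmac.compare_digest(sig, hmac.new(KEY, body, hashlib.sha256).digest()):
        return False
    # Step 3: compare the current manifest content with the signed content.
    return canonical(manifest) == body

deployment = {"apiVersion": "apps/v1", "kind": "Deployment",
              "metadata": {"name": "sample-app"}}
signed = sign_manifest(deployment)
assert verify_manifest(signed)        # untouched signed manifest passes
signed["metadata"]["name"] = "tampered"
assert not verify_manifest(signed)    # any edit breaks step 3
```

The key point is step 3: the signature proves the embedded copy is authentic, and the content comparison proves the live object still matches it.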
So a new declaration, validate manifests, is defined for this. In the policy, the resources to be signed are specified, and so is the public key. This policy also supports extended use cases, like a list of keys and multiple-signature verification, and more complex cases such as a logical AND operation between keys. You can define those kinds of complex cases in the Kyverno policy. Now, the challenge in verifying a YAML manifest signature at the admission controller is mutation. When you create or apply, sending the request from the kubectl command, the original YAML manifest is not what arrives at the admission control point. Some mutation happens before it's checked: native admission controllers or third-party admission controllers may introduce mutations to the original YAML manifest. So to verify the manifest, we need to decide which parts were generated by trusted admission controllers and which changes are not allowed. That consideration is required. The first solution is that you can specify which parts of the manifest should be ignored. If you know a mutation is introduced by a native Kubernetes admission controller, or by your own custom admission controller, you can specify that in the ignore configuration. That is one approach. The other approach is dry run and computing a diff: you compute the diff using the dry-run output. I will explain this later, but this approach is very good because you don't need to specify which parts should be ignored; the expected mutations are automatically computed by the dry run in the admission control phase. That part is good. But there is an additional requirement: you need to grant extra permissions to the Kyverno admission controller, because it needs to perform the dry run itself.
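Putting these pieces into a concrete policy, a Kyverno ClusterPolicy using the validate manifests declaration might look like the sketch below. The field layout follows Kyverno's manifest-verification rule as of roughly v1.8, and the public key, the ignored field path, and the namespace are placeholders, so check the current Kyverno documentation before relying on the exact schema:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: validate-signed-deployments
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: require-manifest-signature
      match:
        any:
          - resources:
              kinds:
                - Deployment
      validate:
        message: "Deployment manifests must carry a valid signature."
        manifests:
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE...
                      -----END PUBLIC KEY-----
          # Approach 1: ignore fields that trusted controllers are known to mutate
          ignoreFields:
            - objects:
                - kind: Deployment
              fields:
                - spec.template.metadata.labels."injected-by-controller"
          # Approach 2: compute expected mutations with a dry run
          dryRun:
            enable: true
            namespace: kyverno
```

In practice you would pick one of the two approaches; they are shown together here only to illustrate where each lives in the rule.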
Specifically, the create permission is required. But by using this dry run, you remove the cost of specifying each individual ignore configuration. The approach works like this: before the admission request reaches Kyverno, some attributes, labels, or annotations, some parts of the manifest, may be mutated or added. But those same changes are also produced by the dry run inside the admission controller. So you can compare the two: the dry-run output and the incoming admission request. That lets you automatically filter out the expected mutations. So let me show the demo. This is a policy already deployed in this cluster. The policy specifies which resources should be protected, and the public key is here. And this is a sample deployment. I want to deploy this deployment configuration, but I have already set the policy, and the policy says a signature must be provided for this deployment. So if I create this YAML manifest deployment in the target cluster, Kyverno automatically denies the request because the signature is not provided. Next, I sign the original YAML file, and this is the generated YAML manifest; some annotations are now included here. Previously the apply was denied, but if I now use the signed YAML manifest, the deployment is successfully deployed, and the deployment is running. That is expected: I successfully deployed it. But maybe later someone comes in and changes the configuration. For example, I will edit this part, allow privilege escalation. This value was originally set to false, but if I change it to true, the edit is prevented, because, as you can see, the value now differs from the signed state. That gap from the signed state is detected, so the change is prevented. So that's the demo.
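The dry-run comparison can be pictured as a dictionary diff. In this sketch (my own simplification, not Kyverno's actual code), a change in the incoming request counts as an expected mutation when the dry-run output produced the same value; anything else is flagged as a gap from the signed state.

```python
def flatten(obj, prefix=""):
    """Flatten a nested manifest into {dotted.path: value} pairs."""
    out = {}
    if isinstance(obj, dict) and obj:
        for k, v in obj.items():
            out.update(flatten(v, f"{prefix}{k}."))
    elif isinstance(obj, list) and obj:
        for i, v in enumerate(obj):
            out.update(flatten(v, f"{prefix}{i}."))
    else:
        out[prefix.rstrip(".")] = obj
    return out

def unexpected_changes(signed, request, dry_run):
    """Paths where the incoming request differs from the signed manifest,
    minus the mutations the dry run also produced (the expected ones)."""
    s, r, d = flatten(signed), flatten(request), flatten(dry_run)
    changed = {p for p in set(s) | set(r) if s.get(p) != r.get(p)}
    return {p for p in changed if r.get(p) != d.get(p)}

signed = {"metadata": {"name": "demo"},
          "spec": {"allowPrivilegeEscalation": False}}
# Dry run of the signed manifest: the cluster injects a default label.
dry_run = {"metadata": {"name": "demo", "labels": {"managed": "true"}},
           "spec": {"allowPrivilegeEscalation": False}}
# Incoming request: carries the same injected label, plus a tampered field.
request = {"metadata": {"name": "demo", "labels": {"managed": "true"}},
           "spec": {"allowPrivilegeEscalation": True}}

# The injected label is filtered out as expected; the tampered field remains.
assert unexpected_changes(signed, request, dry_run) == {"spec.allowPrivilegeEscalation"}
# A request identical to the dry-run output has no unexpected changes.
assert unexpected_changes(signed, dry_run, dry_run) == set()
```

This is why the dry-run approach needs no per-field ignore list: the expected mutations are computed on the fly, at the cost of giving the admission controller permission to perform the dry-run create.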
Yep, so just to quickly recap the last section: what we saw was that by signing and verifying your YAML manifests, you can protect critical resources. It's probably not the best idea to use this for every resource in your cluster, but certainly things like cluster roles, policies, et cetera, you might want to protect and make sure they're signed and can't be tampered with or changed later in your pipeline. The signing and verification are pretty simple. Sigstore, of course: cosign can sign anything, so you might as well use it. It uses the blob signature format underneath, so you can utilize that to sign, and then you can use admission controllers like Kyverno to do the verification in the cluster itself. So certainly feel free to reach out on the Kyverno channel or on the cosign channel if you have any questions or would like to see other examples of using this. Also want to give a quick shout-out to Rico. She did most of the implementation for this feature. Unfortunately she couldn't attend, but she did a fantastic job getting this implemented. Feel free to reach out to either Yuji or me if you have any other questions, and if you have a few minutes, we're happy to answer some right now. Right. So the example used a deployment, but it could be any resource. Typically you might want to choose a few resources in your cluster that you're signing and protecting in this manner. Certainly if it's a deployment of something critical, like a security tool, it would make perfect sense, but it could also be done at a pod level, matching a particular namespace. So there's really a lot of flexibility in that. Sorry, what was your question? Yeah, so it's interesting with GitOps. Obviously here we're mutating the YAML, so you would probably want to do this in a pre-commit step, at the right point in your lifecycle. Yes, you can, and that is an interesting case.
So you could have a policy which verifies every other policy, including itself. The only thing you really get from that, because that policy is already deployed, is that you're preventing somebody from changing the policy. Any other questions? If not, then... oh, go ahead. Yeah. We need to have the original content, to match the mutated one against the original one. That's right. One of the other schemes we had explored is whether there's some other way to get a canonical form of the resource, but it becomes a pretty hard problem, because you need to know what the actual initial form was and strip out all of the other things that could have changed in your pipeline, or through other mutating controllers, or in-cluster controllers. Like when a pod gets deployed, it's amazing how much stuff gets added by just the standard controllers, and all of that needs to be ignored. Basically, you're checking field by field, considering the ignore configuration. So you could configure the policy to ignore some set of fields if you know that some other cluster controller will change those, or you could do the dry run if you're confident you're going to get back the same results. Typically the easier way to get started is with the dry run, but either scheme works, or a combination. All right, well, thank you. Thank you.