Hi, everyone. My name is John Osborn, here with Chainguard. I'm going to give you a quick crash course on the Sigstore policy controller. Let me share my screen here and I'll tell you a little bit about why I'm here and why this talk. My job at Chainguard is essentially to take customers through the journey of "I want to start my software supply chain journey — but now what?" And I noticed there was a gap in educational content: customers would start using Sigstore to sign their artifacts, sign their code, sign SBOMs, sign various attestations or security scans, and so on, but there weren't a whole lot of examples out there around what to do next with all that information. There are plenty of examples in the Sigstore docs, and in a lot of other places too, around how to verify that a signature exists, or that an attestation exists, or that an SBOM exists. But if you actually want to take that next step of creating policies around your custom tooling, around the content of some of these attachments and attestations, there's not a ton out there. So the goal of this talk is to give you a little bit of insight into the Sigstore policy controller and how it works, and then we're going to do a custom attestation by example, where we sign a code review and then validate it one step at a time and build that out. I've put all the code examples in a GitHub repo, which I'll share toward the end of the deck as well. All right. This is more of a 201-level talk, but if you're not too familiar with Sigstore, I'm going to give a quick primer before I get into the policy controller aspect, which is really the focus of this presentation. So, Sigstore itself: if we look at a lot of the software supply chain threats that are out there, where Sigstore comes in is that it's a very easy-to-use signing service.
So you can sign and verify all the handoffs, and verify a lot of the dependencies that you're pulling into your organization. The idea is that I can close off a threat vector by signing something and then verifying on the receiving end that it hasn't been tampered with: I check the signature, so the artifact hasn't been tampered with; I can check who signed it; and I can check some of the attached evidence, which would be an attestation — a security scan, for instance. Now, that was very quick, but if you look at a lot of the supply chain frameworks that are out there — SLSA, NIST, CIS, and a bunch of others that have just started to add software supply chain guidance — a lot of it really is around signing and verifying the different artifacts and handoffs in the environment. So at a high level, that's where Sigstore can come in and help, because it's very easy to use and it can be automated very easily, for both people and machines. The whole purpose of Sigstore is that it's easy, so developers don't really have to manage keys anymore. In this case, this is a screenshot where I'm signing a random YAML file, in keyless mode. There are plenty of ways to sign with Sigstore, not just keyless mode: you can automate with a KMS backend, you can automate with your existing keys — a number of ways to do it. But with keyless mode, what happens is, similar to logging into any third-party app, a little pop-up comes up and you log in with an identity provider as your user. In this case, you can log in with Google or GitHub. What happens on the back end is that Sigstore takes the OIDC token you generated and issues you an X.509 certificate with the identity that was vetted from the OIDC token. And that certificate is very short-lived — in a lot of cases, 10 minutes.
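As a concrete sketch of that flow — the image name here is a placeholder, and the exact flags assume a recent Cosign (v2+) release — keyless signing is a single command; Cosign opens the browser pop-up, exchanges the OIDC token for a short-lived certificate, and pushes the signature to the registry:

```shell
# Hypothetical image; requires cosign and a browser for the OIDC login.
# --yes skips the interactive confirmation prompts.
cosign sign --yes ghcr.io/example/app:v1.0.0

# On the receiving end, verify both the signature and the signer's identity:
cosign verify \
  --certificate-identity user@example.com \
  --certificate-oidc-issuer https://accounts.google.com \
  ghcr.io/example/app:v1.0.0
```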
And then you sign your artifact, and the signature goes into an OCI registry if it's a container image, into the git commit if you're signing a git commit, et cetera. So I mentioned something called Gitsign. That's really an extension of Sigstore for signing git commits; it's under the sigstore/gitsign repo. It's very easy: you essentially just enable it per repo, or for all your repos, and that's a one-time thing. Then every time you do a git commit, it automatically pulls up that pop-up box, you log in, and your commits are signed that way. And as I mentioned, the signature itself is actually stored in the git commit. Here on the right, I took a screenshot showing the git log, which will actually print out some of the information related to the signature. Now, we want Sigstore to be very easy to use, so we can sign and verify all these things, and I just put together a quick primer on what the commands look like. Ultimately, what you're doing here is really generating supply chain metadata. If you sign something, you'll have a signature. You might sign certain evidence, like an SBOM, for example — because of course an SBOM is no good if it can be tampered with, right? You might sign an attestation around provenance, which is really how your code was built: a lot of tools will emit a lot of information, like the git commit that was used, or some of the parameters and flags that were passed as part of the build. You want to have that body of evidence as you're creating all these handoffs from development to production. And it could be something as simple as: I ran a Trivy scan or a Snyk scan against my container image, and I want to sign the output, right?
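The per-repo Gitsign setup mentioned above is just a handful of git config keys. A minimal sketch, using a throwaway repo (the repo path is hypothetical; add `--global` instead of `--local` to cover all your repos at once):

```shell
# One-time Gitsign setup in a fresh (throwaway) repository.
repo=$(mktemp -d) && git init -q "$repo" && cd "$repo"

git config --local commit.gpgsign true       # sign every commit
git config --local tag.gpgsign true          # sign every tag
git config --local gpg.x509.program gitsign  # route signing through Gitsign
git config --local gpg.format x509           # Gitsign produces X.509 signatures
```

After this, every `git commit` triggers the keyless login pop-up and embeds the signature in the commit itself.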
Because later I want to verify that there were no critical CVEs at the time of scanning, or just to know that I produced the output and can sign off on it. Some big news: within the last couple of weeks, Sigstore went GA. This was a little confusing for some people, because Cosign was already GA, but Rekor and Fulcio — the backend services that specifically support keyless signing — are GA now. That's huge. Keyless signing is great because you can work it into a lot of existing workflows, especially for people adopting SLSA and similar frameworks, where you want your build service to sign the artifacts so you have cryptographic evidence that something came from the build service and hasn't been tampered with. Now, the services themselves going GA is huge, but one of the things people might miss in the announcement, which is probably just as big, is the SLOs and on-call rotation that are in place now. That really started with the GitHub announcement and partnership back in August, and there are a lot of big companies helping support Sigstore now — Chainguard is just one of many. It's a great project, and you can confidently sign with and use Sigstore, especially if you look at the rapid adoption and high availability of it.
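Tying the primer together, the attest-and-verify workflow described above looks roughly like this — image, file names, and identities are placeholders, and the flags assume a recent Cosign release:

```shell
# Produce a scan result, then attach it to the image as a signed attestation.
trivy image --format json --output scan.json ghcr.io/example/app:v1.0.0
cosign attest --yes --type custom --predicate scan.json ghcr.io/example/app:v1.0.0

# Later, verify that the attestation exists and came from the expected identity
# (here, a hypothetical CI identity issued via GitHub Actions OIDC).
cosign verify-attestation --type custom \
  --certificate-identity builder@example.com \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  ghcr.io/example/app:v1.0.0
```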
Another thing, too, now that Sigstore has generated that momentum behind it: I wrote a little script that pulls a lot of the artifacts off Artifact Hub — they actually have a Sigstore flag in their API now — and the last time I checked, a week or two ago, over 50% of the artifacts, or at least the container images, had already been signed with Sigstore. And there are lots of programming languages that are now adding signing too. If I go to use Python, for instance, I can validate the digital signature of Python using Sigstore, and that's huge, because now I know that Python hasn't been tampered with. If you think about the way enterprises bring in these releases, it's a checksum at best, and even that's kind of a one-time thing; the way things get handed off internally, or moved between enclaves or environments, a lot of times we're not actually re-vetting our artifacts to make sure they haven't been tampered with. The fact that Sigstore is getting used at such a high rate means there are a lot of artifacts we can validate haven't been tampered with. So, on to the policy controller. The Sigstore policy controller is really a Kubernetes admission webhook — a validating webhook — and it gives you that go-or-no-go aspect. Now, as I mentioned at the beginning of the talk, there are a lot of examples around signing things with Cosign, running `cosign verify` to verify the signature; you can create an attestation with `cosign attest` and then verify it with the `cosign verify-attestation` command. But ultimately, if you want to get more complex than just saying that those things exist — if you start wanting to write your own policies that match whatever is going on with your security posture, compliance, regulatory frameworks, et cetera — you might have to create a custom policy, and that's really what the heart of this talk is about.
So before I get into customization, I just want to throw in a few examples of what this might look like. These were actually public examples where people put in the work, so I like to use real examples. This one is part of SLSA, and it's really where I think a lot of people are trying to get to. This would be for SLSA level 3: an authenticated and non-falsifiable build service. What that means is — CI/CD is not necessarily a novel idea at this point; a lot of people have built out CI/CD pipelines and things like that — but especially in a large organization, there are a lot of ways in the front door, and a lot of times there's not necessarily a check or block that things actually went through the pipeline. So what this would do is: the build service will actually sign the artifact itself — not a person — and then on the receiving end you could have a Sigstore policy, using the policy controller, that says, I've authenticated that there's cryptographic evidence this came from the build service and hasn't been tampered with. That would be one example. Another example might be a code review. This is part of SLSA, but also part of a lot of other frameworks — PCI included code reviews as part of the supply chain guidance they released this year. This isn't a standardized format yet, but this is just one example of it: it might be an attestation, which could just be a YAML or JSON document, and then you validate it using signatures. In this case, Dan and Kim both signed, and you can verify that they both signed this attestation, which would have some metadata in it. And you could use different tools — if you use GitLab or GitHub, especially the enterprise versions, you can do code reviews — and you could have these attestations generated as part of that and attached to artifacts.
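To make the code-review idea concrete, a predicate along the lines of the one built up later in this talk might look like this. This is an illustrative format, not a standard — the field names match the sample format used in the demo:

```json
{
  "repository": {
    "type": "git",
    "uri": "github.com/example/app",
    "branch": "main"
  },
  "author": "dan@example.com",
  "reviewer": "kim@example.com"
}
```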
So later you have cryptographic evidence that a code review was done for certain things, and you can create policies around this too. Last example: SBOMs, because they really are a hot topic, especially with the CVEs that are out there now. I can sign my SBOM, which is great, because SBOMs aren't helpful if they can be tampered with — signing the SBOM gives you that integrity aspect. But beyond a policy that says the SBOM exists, you might want to actually start validating the content of the SBOM. In this case, this is an example for Log4Shell: we're just parsing the SBOM to see whether particular affected versions of log4j-api or log4j-core are present, and I can create policies based on that — policies that just warn and flag you, or policies that actually block. So that's all hypothetical; I'm not expecting you to learn this yet. I'm going to walk through it at a slower pace once we actually get to building up the policies. Everything in Sigstore policy follows the CRD called ClusterImagePolicy, and there are really three parts to it. The first two are mandatory; the third one is not. The first is very simple: which images am I going to apply this policy to? That's a URI pointing to a registry, and I can put wildcards in there if I want to. The second piece is the authorities — what or who signed them. In this case, I'm going to have a policy that says it had to be signed using Sigstore. It doesn't have to use the keyless signatures here; this could be some pointer and automation to longer-lived keys, or a KMS using GCP or AWS, or whatever it may be. There's a whole bunch of options in here.
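A minimal ClusterImagePolicy sketch showing those first two parts — images and authorities. The registry path is a placeholder, and the API version shown here (`v1beta1`) may differ depending on your policy-controller release:

```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: signed-by-our-org
spec:
  images:
    # Part 1: which images this policy applies to; globs are allowed.
    - glob: "registry.example.com/**"
  authorities:
    # Part 2: what or who signed them. This authority accepts keyless
    # (Fulcio-issued) signatures; a `key:` block pointing at a KMS URI
    # or an inline public key would work here instead.
    - keyless:
        url: https://fulcio.sigstore.dev
```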
The other piece is who signed it, and that's really important. I'll just pause there for a second, because I think that's a pretty big fundamental difference between the typical signing you may be used to and moving into more of a supply chain framework. What I mean is: historically, the way things have normally been signed, you really care more about the key, because you're managing the key. You care about the private key, and then validating with the public key. When you go to adopt a supply chain framework, you're not necessarily managing the keys all the time, and because things can be automated, you have a more granular identity associated with the key. If we're being honest, in a lot of enterprise signing scenarios up until now — probably including now — keys get issued and they're not very granular. The net might be cast wide: the entire build service, the entire pipeline, uses the same key. And a lot of times, especially in more regulated organizations, they probably don't even trust their developers with a key. So you're not really validating that a developer signed it; you might be validating that they pushed somewhere and did something else. With Sigstore, you can get a lot more granular than that. This is a policy that we use internally at Chainguard: if you want to push a git commit to any of our products, you have to authenticate with Google — our identity provider — using a chainguard.dev email address, and sign in with it, in order to generate a valid signature.
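The identity-pinning piece just described can be sketched as an `identities` block on a keyless authority. This is a fragment of a ClusterImagePolicy `spec`, and the exact field names assume the v1beta1 schema:

```yaml
# The signature is only valid if it was produced by someone who
# authenticated to Google with a chainguard.dev email address.
authorities:
  - keyless:
      url: https://fulcio.sigstore.dev
      identities:
        - issuer: https://accounts.google.com
          subjectRegExp: ".*@chainguard\\.dev$"
```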
Now, the third part is the optional component: attestations. I like to think of those as attachments, or evidence. Again, it's not a mandatory field, but you could, for instance, sign an SPDX document and then create policies around it — say, I want to search for a specific CVE. That would be an attestation that you sign and create a policy around. The way these work with the Sigstore policy controller, if you want to deploy an image to Kubernetes, is essentially this: a single policy will pass if any authority has signed the required things. And the reason I say it that way is because you can put in a whole array of keys that would be valid, and attestations that would be valid; if any of the valid keys has signed any of the valid attestations that you include, then it will pass. If you have multiple different policies, then they all need to pass. So if you had, for instance, a policy that says everything has to be signed by Chainguard, but then another policy that says don't admit anything with the Log4Shell vulnerability, then it would block. I put a couple of references down here if you want to go see what all the fields can be, but at a high level it's these three components: what are the images, what are the signatures, and what are the attachments. Now, the way you turn this on — and this is somewhat new also — in Kubernetes, it enforces by namespace labels, and that's the way it's been done since early on in Sigstore. I can just label my namespace with `policy.sigstore.dev/include` set to `true`, and it will go into enforcing mode. If I want a break-glass scenario — say I'm coming in on the weekend and I don't want anything to block anymore, whatever it may be — I can just remove that label, assuming I have the permissions set up, as a cluster administrator, say. Now, one thing that got pushed within the last few weeks — so this might be new even if you're not new to Sigstore, unless you're following the GitHub repo all the time — is a more granular label selector. I can set things on a per-namespace basis, but I can also have policies that apply to StatefulSets or Deployments, and get a lot more granular that way too. Now, there are a lot of examples of how to create policies for images that are upstream; some of them are spread across a couple of repos in the Sigstore organization. We've also been creating a lot of these internally for customers, and our goal is to push all of them upstream — so look for that in the near future. If we create new policies for specific CVEs or other things that come out, our goal is to push them to the Sigstore repo. We don't really want there to be secret sauce around this; the policies should be available for anybody to use. Also somewhat new: in the ClusterImagePolicy you can set different modes on a per-policy basis, and this will be new for a lot of people — you can actually control what happens when something matches. If you label the namespace, I think what happens by default is that if the policy fails, you'll be denied. But you can set that to warn mode, which is really helpful too, if you want certain workloads to be admitted but to flag a system somewhere — you don't necessarily want to block or slow things down, depending on your security posture. That's something you can set on a per-cluster or per-policy basis. Now, on building the ClusterImagePolicy: you'll see there are some examples out there, but they're pretty
rudimentary, I'd say. You can look at these first two examples — there are a million examples — and they'll show you just how to set up the images and authorities sections. You can also create a catch-all if you want to, for a static pass/fail on certain things; all of that's built in. But really, the attestations are where I want to focus most of this talk. There are built-in schemas for things like SPDX and CycloneDX, and built-in support for in-toto, which is an attestation format. But the attestations could be anything, right? They could be random JSON from in-house tools, and you want to build policies around those outputs, whatever they may be. And you might notice, if you look down at the bottom, that the policy types can be in Rego or CUE. CUE is an incredibly powerful language, and I'm going to give you a quick primer on validating these policies using CUE for the rest of this talk. Now, at a high level, I think one of the gaps that exists in CUE, if we're being fair, is that it's incredibly powerful, but if you go look at the docs, all the things you can do with it are kind of lumped together. If you only care about creating Sigstore policies, you really only care about using CUE for data validation. It's really powerful — if you look at a lot of the examples out there, CUE can be used to generate artifacts like Terraform modules or Ansible modules, which we've actually been looking at internally to generate some of our artifacts as well — but for the purposes of Sigstore policies, you really only care about doing data validation. So, a few things you need to know for data validation. First, CUE is a superset of JSON, and that's great, because if you have existing JSON — or a JSON schema, even better — it becomes incredibly easy. If you have a tool with a JSON schema, there's actually a `cue import` command that converts it entirely to CUE if you want, and then you can validate against that. If you have random JSON that you want to match against a policy, the CUE policy for data validation can literally just be raw JSON; that's perfectly valid too. So it's very easy to adopt, because virtually any tool will output JSON, or something that can be converted to JSON. Second, CUE treats types and values the same, and you'll see that as I do the validation: I can set something to be a string, or I can set it to something more specific — for instance, the string might be my email address — and then I can create policies that get more granular as we go. Another piece is that the order is irrelevant, and that's actually really helpful from a validation perspective: for one, it just makes things less brittle, and two, if you get something really complicated with a lot of inferred values, you can use a command — which I'll show you a screenshot of — called `cue trim`; there's a command-line tool for CUE, and it will remove all those things and condense everything very nicely, so it's much more easily readable. And then CUE is just very flexible: you can leave things open — which, if you're using JSON, you probably want, because there might be more fields later — or you can be very specific and have things closed. That's all up to you. So I wanted to walk through a specific example and how it builds out, but first I'm going to give you a little more material so you can see what some of this is. CUE being a JSON superset is great — it opens up a big ecosystem. Now, I don't program in Go every day, right? There are a lot of Go extensions where it can take a lot of
OpenAPI schemas and different things that have first-class support in Go, and generate CUE data schemas from them. That's great, but I'm not a Go programmer, and I don't want to write a bunch of code just to validate a policy. So CUE has this great command-line tool, which you can download from their page, and it's really helpful for validating policies. If I have something with an existing JSON schema, I can just run `cue import` and it will output the CUE. If I have CUE code, I can export it and it will output JSON for me. And if I want to do some validation without using my images at all — shift left and keep that feedback loop really lightweight — I can just use `cue eval` and evaluate JSON locally on my machine. That's really helpful from that perspective. I mentioned that it ignores the order of the rules. So here's a policy — you don't have to know or care too much about the syntax of it — but what this policy on the right is doing is checking against SARIF. SARIF is an OASIS standard for SAST tools that standardizes some of their output, and some security scanning tools — in this case I used Trivy — can output SARIF. That's great, because if you have a bespoke tool, you can write your own policy for it, and that's easy to do; but if you have something that outputs in a standardized format like SARIF, then you can use the same policy to validate multiple different outputs. So in this case, I had a Trivy scan, the output is standardized, and the policy just looks for CVEs with a CVSS score higher than 9.0, which would be critical. And since the order is ignored, it can reduce a lot of the boilerplate — you don't really want anything to be any uglier than it already is, right? Especially when we're looking at JSON outputs, we want things to be concise and human-readable. And because of that, you can also run `cue trim`. In this case I ran `cue trim` on that last policy, and it made things a lot easier to read. If I'd had a lot of inferred values in there, it could have cropped those out too, condensing it even further. I pretty much always run `cue trim` just to have the most concise policy I need — I don't want any more text in there than necessary. Great, so let's build this out in action. I'm going to go slowly for this part — I wouldn't expect anybody to learn CUE just from this presentation — but my goal is to help you learn what you can do with CUE, what it looks like, and give you some examples to really just get you started. If you have specific questions, feel free to tag me in the Sigstore Slack; I'll reply in there, and I'm happy to help anyone who gets stuck. We're also working with the CUE team to get more examples out there. A good goal, what I'd like to get to, is the point where if you want to do anything that's not completely off the grid, there should be an example for you to at least start with, and you can generate artifacts based on it — you should be able to fork something, essentially, and make minimal changes, unless you want to get really deep into doing something custom. So in this case, I've got a code review format — this is just a sample format that I've created — and we're going to validate three things incrementally; I'll talk about how to do that and what the policy looks like. The first thing I'm going to do is validate this repository format here. Then I'm going to validate that the author's email came from example.com, because I don't want any code coming into my
production environment unless it was written by my company, which is example.com in this case. And then I want it to be peer reviewed — in this case, all I'm going to check is that the reviewer was not the same person as the author, so we make sure the review is independent. Now, before I get into actually checking the repo, I want to point this out, because it's very important if you plan to create any custom policies — make sure you pay attention to this slide. In my example, if I click on this log here, it takes me to the public Rekor instance, where you can see where I signed this code review. This link will work in the deck; it's also in the README on the GitHub repo with all these examples. You'll notice that when I signed this — I'll have to go back a second — I ran `cosign attest`, so I attested to it with the Cosign command-line tool (the specific command I gave will be in the README), and I passed it this JSON file as the predicate. What it did was take that JSON and put it inside an in-toto attestation, and when I want to validate this, there are really just two fields: this Data field and this Timestamp field. Now, this might look a little new to you if you've run `cosign attest` before, because if you run Cosign on the command line and you're attesting known formats, it's not going to be all packed in here. But for the purposes of custom attestations and the Sigstore policy controller, everything just kind of gets put into this one big string. Don't worry — it's not a big deal at all; there are built-in tools to help handle that. So really, you have two options. You can just parse that string out using CUE: you can create a policy that parses the string, on the left — in this case I'm just using a regex, and I wouldn't expect anyone to necessarily know regex; I know I always have to google regex syntax pretty much every time. What I'm doing on the left is setting a simple policy to match against the string. Now, the policy on the left will fail if the phrase "bad bad bad" appears — two bads are okay, but three bads is just over the line — so the policy fails if it contains "bad bad bad". This is usually good enough for a lot of things, especially for people just starting off, because they might just want to validate that something does or doesn't exist in the attestation. A really common use: you might be checking for a specific version of software, or something in your SBOM. If you just care about the pattern for that specific file or thing you're looking for within the SBOM, all you have to do, like on the left, is write a regex that checks against it. So that's very simple. But if you want to check that things are nested inside other elements, you really want to do something more powerful with CUE. CUE can do a lot more than JSON: there's logic to it, and you can even create conditionals and things like for loops. And there are built-in tools for this. Remember, it was just one big, long, ugly string: I can create another element in here, use the built-in `encoding/json` package that comes as part of CUE, and create what they call a struct. Once I create a struct, I can use that built-in CUE package to decode the JSON and then start validating it just like I would any other JSON with CUE — and again, CUE is a JSON superset, so I could even put raw JSON in here if I wanted to. So the first thing I'm going to do is validate the repository. In this case, I want to validate three things. I want to validate that the branch is set to either main or origin/main — in CUE, that's pretty simple; it's just the or bar (`|`)
we should all be familiar with that the second thing I'm going to do is I want to know that the uri is came from my company so I want our my organization in github so in this case it's going to be um the uri has to be github.com slash example because that's my organizational uh github my github organization and then the third thing is that type which you can see an example is set to git I'm going to let that just be any old string that's perfectly fine um anything can be put in there um because I don't necessarily know what that type might I don't know all the options that type might be so I just put that little question mark in there and then I'll offer that and then since this is a I'm creating essentially a schema here by adding the um what they call a definition in q with this hashtag and if I don't add these dot dot dots it'll be at what's called a closed struct which means that um everything has to match exactly but I won't be able to add more fields later so typically what I like to do is add the dots just because which makes the definition open so that way if you're if you're using JSON there's a good chance that you might be adding more things to a schema later so I like to uh uh have that flexibility so in this case I'm going to check that predicate data so I'll just just like in the last example I've got JSON data here I'll marshal it and then I'll just make sure that there's a repo field embedded in that JSON that matches this schema so I will uh run that for you against this example no I'm only sharing my I'll share the rest of my screen to do that sorry you have to look at me for a second all right so I'm just gonna write cosine verify I'm going to add the repo check which was the um which was the q statement that I just showed you in the presentation and then I'll give it the slash type for custom and we'll be from this it should verify and you'll see this will be validating against the q signatures repo dash check dash q and then there's a bunch of 
Below the verification output there's a bunch of gobbledygook. If I want to print it out to see what it looks like, I can use jq. Let's see... unfortunately it was encoded twice, so let me check — I think I kept the command in my README. There we go.

So that was the JSON you saw in the code review, and this part of the validation, the repo check, is just making sure the repo follows that format: the JSON needs a repo struct, it needs a type field that can be set to anything, the uri has to be github.com/example, and the branch has to be main or origin/main.

All right, next example. The second one is a lot simpler. All we're saying now is that the author has to be a string, and they have to have an email address from example.com, because I want them to be someone who works for my company, of course. So I create the definition for that schema, and then down here I just make sure this JSON has an author field, and that the author field matches the author structure from my definition. Not too difficult: the author is a string, and it has to end with @example.com — you can see that's a regex here; more regex.

Coming back to my example: all that changes is the policy file, which is the CUE. This is just for checking locally; I'll show you how to move it into the Sigstore policy controller towards the end. And you can see — yep, it validated against that CUE policy, author-email.cue.

Finally, the third thing I'm going to do is check that this is an independent review. I'll take the author field, which we already know is a string from the previous example, and check it against the reviewer field.
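As mentioned above, the attestation payload comes back base64-encoded twice: the envelope is encoded, and its inner payload is encoded again. A sketch of the jq pipeline to unwrap it — simulated here with a tiny stand-in payload and a hypothetical author value, since the real bundle would come from cosign:

```shell
# Real usage would look roughly like (hypothetical image name):
#   cosign verify-attestation --type custom ghcr.io/example/app:latest \
#     | jq -r '.payload' | base64 -d | jq -r '.payload' | base64 -d | jq .
# Simulated below so the pipeline itself is runnable:
statement='{"predicate":{"author":"john@example.com"}}'   # stand-in attestation body
envelope=$(printf '{"payloadType":"application/vnd.in-toto+json","payload":"%s"}' \
  "$(printf '%s' "$statement" | base64 -w0)")
bundle=$(printf '{"payload":"%s"}' "$(printf '%s' "$envelope" | base64 -w0)")
# Unwrap both layers of base64 and pull a field out of the predicate:
author=$(printf '%s' "$bundle" \
  | jq -r '.payload' | base64 -d | jq -r '.payload' | base64 -d \
  | jq -r '.predicate.author')
echo "$author"   # john@example.com
```

The same double-decode works on the real cosign output; only the field names inside the predicate will differ depending on what you signed.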
The reviewer field is also a string, and the two must not equal each other. This is a differentiator between JSON and CUE: with raw JSON you wouldn't really be able to compare fields — you'd only be able to check certain traits about them — but with CUE I can actually compare these values, which makes it really powerful. So I have a reviewer, and it must not equal the author. And when I embed this down inside my JSON data as a top-level item, it makes sure all the JSON data that's there follows that format.

You can see what I'm working with — that's the example from the slide — and I can run cosign verify-attestation again. You don't have to write down all these commands; they're all in the README with the full examples. So, independent-review.cue... and it didn't like that. I got an error: field not allowed: repo. Okay — this breaking is actually not a bad thing to be showing here. Why did it break? If I come back to the policy... remember I said that if I didn't add these dots down here, the struct is closed. That means the repo information is now invalid — essentially the policy expects an attestation that has only an author and only a reviewer. So I can make the struct open to fix this: I'll come back to my independent review, add the `...`, which allows my repo data, and run it again, and that should work. Perfect — that worked.

And before I forget, I'll come back to my example at the end and make sure the other checks are also still validating against these things. I'll update that slide screenshot before I put this online. Okay — so that's all just using cosign; I haven't even touched the policy controller yet.
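Put together, the three checks walked through in this section might look something like the following CUE — field names taken from the demo attestation, so treat this as a sketch rather than the exact contents of the repo:

```cue
// Schema for the repo information in the predicate.
#repo: {
	type:   string                  // any old string is fine here
	uri:    "github.com/example"    // must be our GitHub organization
	branch: "main" | "origin/main"
	...                             // open struct: extra fields are allowed
}

// Author email must come from our company (regex constraint).
#author: =~".*@example\\.com$"

predicate: {
	repo:     #repo
	author:   #author
	reviewer: string & !=author     // independent review: reviewer != author
	...                             // keep this open too, in case more
	                                // fields get added to the JSON later
}
```

The `!=author` bound is what raw JSON validation can't express: CUE lets one field's constraint reference another field's concrete value.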
What I'm trying to show you is shifting left — validating all of this locally with CUE. Once you've done that, it's incredibly easy to use the Sigstore policy controller, because now you're really just copying and pasting. I can take all those definitions and put them together. You can run `cue trim` if you want — not mandatory, but it helps tidy things up. Then I can run cosign verify-attestation if I want to double-check it's all working, and once it is, all I have to do is copy and paste it in. I cropped it for the purposes of running this example, but if you want to see the full version, it's in the repo — it's nothing you haven't seen yet, just the other definitions added in as things being validated.

So let me run that. What I'll do is turn on enforcement, put all these things together, and try to deploy. I'm going to use the Chainguard Enforce product to install this, but that's not really a requirement — you could have installed just the policy controller separately if you wanted. (I imagine I've probably been logged out by this point, so give me a second. If you're not running Chainguard Enforce you don't need to care about this part; I'm just using it as an example of what this would look like.)

I installed the policy using chainctl's policies commands. It's a policy called code-review, and all it is, again, is the policies we already created, copied and pasted into my ClusterImagePolicy CRD. It's going to validate the predicate: the repo, the author email, the independent review, all that stuff. You don't even have to use these definition fields, but they're helpful for a lot of reasons. For one, you could create them as packages that get imported into different policies, so you're not repeating a bunch of boilerplate.
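For reference, the general shape of a ClusterImagePolicy that embeds CUE like this is roughly the following — the image glob and names here are hypothetical, and the CUE body is abbreviated, so check the policy controller README for the exact fields:

```yaml
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: code-review
spec:
  images:
    - glob: "ghcr.io/example/**"          # hypothetical image pattern
  authorities:
    - keyless:
        url: https://fulcio.sigstore.dev  # loose identity: any keyless signer
      attestations:
        - name: code-review
          predicateType: custom
          policy:
            type: cue
            data: |
              predicate: {
                author: =~".*@example\\.com$"
                ...
              }
```

The `data` block is literally the CUE you validated locally, pasted in — which is why the shift-left workflow with cosign makes this step so painless.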
And if you had multiple authors in this attestation, for instance, you'd only have to cross-reference them once. So I like to use definitions by default, but if you're doing something simple they're not really necessary, and if you just want to validate with raw JSON they're not necessary at all either.

Anyway, I copied and pasted that in, and — that's my README; let me come back here — I'll try to run this. I'm guessing it will pass, because the image does have that attestation. We hope it passes... and great, it passes. But just to keep us honest, let's go change the policy — we'll mess it up on purpose, just to make sure it's working. Let me delete that pod; somewhere in my history I've got the command... there we go, you can see I've run this a lot of times, because I can't remember commands. So: kubectl delete the pod.

All right, now, to keep us honest, I'll mess up the policy on purpose: we'll say the author has to come from example.org instead. Then we'll delete the policy and add it back, updated — and just to be doubly sure it's gone, I'll check there are no policies at all before installing it again. And yeah, now it gives us a warning, because all we said for the identity was that it had to be signed keylessly — we didn't necessarily care who signed it for the purposes of this demo. If you wanted to, you could require that it come from your GitHub SSO and be signed by a chainguard.dev email address, or whatever your corporate SSO is — that's all information you can set in there. But now this is created, and again, this should break this time, because of what we changed: the author email has to come from example.org now.
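The deliberate break amounts to changing one regex in the policy's CUE — sketched here with the demo's field name:

```cue
// Before (matches the attestation we actually signed):
//   author: =~".*@example\\.com$"
// After the break — no author in our attestation can match this:
author: =~".*@example\\.org$"
```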
Right — and that should fail, because what we attached as part of our attestation says the author's email address is at example.com. So if I run that same image again, the one that passed — let's hope this fails... the demo gods are with us, and yes, it failed. It tells you the author had an invalid email, because the value was example.com, and that was outside the bounds we expected: the policy said example.org.

So that is building it by example, and that's about the end of my presentation. I put all the code examples I used in the Sigstore custom policies GitHub repo. If you have specific questions about syntax, check the CUE spec. If you're doing basic stuff, go to the sigstore.dev site or check the Sigstore policy controller README. But if you're trying to create something custom and you're hitting trouble, feel free to just tag me in the Slack channel and I can point you on your way.

So thanks, everyone, for joining — I appreciate your time and hope this was helpful. I know it's probably a little more advanced than we normally cover on a webinar, but I think there have been enough Sigstore talks out there that you can grab one of those if you just want the high-level uses. Hopefully, if you're creating policies, you found this helpful — and if there's anything else you want to see, feel free to reach out. Thanks, everyone, talk to you later.