I'm Justin Cormack, I'm the CTO at Docker, I'm on the CNCF Technical Oversight Committee, and a maintainer of Notary. And I'm David Tesar. He wasn't on the bill, but we always like to do these presentations jointly; usually I've done them with Steve Lasker, but it's good to have someone else for a change. Yeah, it's great to be here. I'm a principal product manager at Microsoft, and I work a lot with Notary, ORAS, and Ratify, managing those projects. Right, so let's really get started with how Notary got to where it is today. Notary has always been a project based around firm foundations and standards. Notary originally joined the CNCF back in 2017 alongside the TUF standard, which it was based on — Justin Cappos is over there. It was originally a project at Docker, which is where my original connection to it comes from, but it's been a CNCF project for a very long time now. By late 2019 there were a bunch of issues: we weren't seeing a lot of adoption of Notary back then, and there were architectural issues in particular. It was a sidecar that ran alongside a registry. First of all, many registries didn't have support for Notary at all, because it had to be run on the side — it wasn't a native part of the registry protocol — and there were other related problems with that. There's a talk I did back in 2019 in San Diego where I went through a lot of the issues we had, and that really kicked off a whole set of streams of work to try to resolve them and find new foundations and standards to build on. One thing I think is useful to talk about is identity. When we're talking about signing, what's the identity of the thing that we're actually interested in? These are my terms for these things — I don't think there's actually standard terminology for a lot of them.
We built Notary originally on TUF, and TUF basically has a root key that roughly corresponds to a repository or project — you can have larger-scope root keys than that — which is a really great model, but we found it hard for people to adopt, because they're not used to that kind of model and didn't have many tools for it. That was one of the issues we had. One of the things we've been working on, and that we'll talk about more today, is organizational identity: a very coarse-grained but easy-to-use identity, because organizations already have identities through PKI, through HTTPS and that infrastructure. There's a whole thread around service identity, with projects like SPIFFE that give out identities to services, and work on OIDC — for example, with GitHub Actions — for deriving identity from services, which is interesting and promising. And then the thing people sometimes assume is the right path is identity of individuals. There are lots of problems with that: most individuals don't have keys, key management is difficult, and it's hard to work out the association back from individuals to the actual artifacts you're signing, which are really rooted in, for example, projects — which is where the TUF model comes from. So I think it's useful to think about what kinds of identities we're going to use when we're signing. So, yeah, we've got this focus particularly on PKI. If you look at X.509, it was originally designed as a framework for a hierarchy of keys for everything and everyone, including individuals, but that never worked, and we're left with the organizational identity bits — the SSL/HTTPS pieces — as the dominant working key-management infrastructure that's around.
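Since organizational identity here just means the Subject of an X.509 certificate, a quick illustration may help. This is a minimal sketch using a throwaway self-signed certificate — the organization name is made up, and in practice the certificate would chain to a real CA:

```shell
# Create a throwaway code-signing-style certificate for a hypothetical org
openssl req -x509 -nodes -newkey rsa:2048 \
  -subj "/CN=net-monitor/O=Example Org/C=US" \
  -addext "extendedKeyUsage=codeSigning" \
  -keyout key.pem -out cert.pem -days 1 2>/dev/null

# The Subject is the organizational identity a verifier would pin against
openssl x509 -in cert.pem -noout -subject -ext extendedKeyUsage
```

The `O=` (organization) field is exactly the kind of coarse-grained identity being described here.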
We had a bunch of principles around key management in general. One thing is that all the people we talked to — all the potential customers — were very insistent that they wanted hardware-managed keys. Notary v1 largely managed keys in software; we did a lot of work on root keys and YubiKeys and things like that, but in general usage most of the keys were in software. There's now good infrastructure, with the cloud providers and elsewhere, for keeping keys in hardware at all times and doing signing without ever exposing the private key, so we built a plugin model around that. We'll demo it with Azure Key Vault, but you can use any kind of hardware key via the plugin model we built — we'll show you that in the demo. For signing formats, after many iterations we eventually settled on COSE. In Notary v1 we used canonical JSON for signing — largely because the registry formats were based on JSON and TUF was based on JSON, it seemed a natural thing at the time — but we ran into all sorts of issues: there are multiple canonicalization standards for JSON, and the library we used turned out to be buggy and didn't implement either of them correctly. COSE is an IETF standard that effectively improves on JSON signing — it's not JSON, it supports binary serialization directly and can wrap binary data without any encoding. We've been using the go-cose library, which has had two independent security reviews, from Trail of Bits and NCC Group, and we're very happy with it as a foundational standard and infrastructure for signature formats. We also spent a lot of time on putting things natively in the registry — the things we want to put in the registry are signatures, but also things like SBOMs and other pieces — so the ORAS project has been
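around for a while — more on that in a second.

First, though, the hardware-key plugin flow just described boils down to a couple of commands. This is an illustrative sketch (the vault, key ID, and key name are made up), assuming the Azure Key Vault plugin for notation is installed:

```shell
# Register a Key Vault-backed key with notation via the azure-kv plugin;
# the private key never leaves the vault/HSM — notation only requests signatures
notation key add --plugin azure-kv \
  --id "https://myvault.vault.azure.net/keys/wabbit-signing-key/<version>" \
  --default wabbit-signing-key

# Confirm the remote key is what notation will sign with by default
notation key ls
```

Any other hardware-backed signer can be wired in the same way through the plugin interface. So, as I was saying about putting things in registries: the ORAS project has been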
around since 2018, about putting more stuff in registries than just container images. What's really happened now, as we've got real use cases coming from signing and so on, is that we've managed to get this converged into an officially accepted OCI 1.1 standard that's coming out soon. So this is now official, not a side project, which is really great, and it's seeing wider implementation. For a long time people complained that Docker Hub didn't have support — I'm happy to make a preview announcement: we're launching this on Monday, so there will be full support for artifacts on Docker Hub. I think this is becoming a mature standard, officially upstream in OCI, which is great. Reference types were the other part of this work we ended up doing. We had a lot of requests from people who wanted to add signatures to things without actually modifying the object — to add signatures, add SBOMs, basically add references without modifying the underlying objects, because those were the workflows people had. So we spent a lot of time working on this, again outside the Notary project because it's a general OCI thing, and it's also finally being standardized in OCI 1.1. At the moment there's support in Azure ACR, and we'll be working on support in Docker Hub as soon as we've shipped artifacts. Again, as it's now officially part of the OCI spec, we're expecting to see much more use of it. I think Amazon has support as well — yeah, Phil's sitting in the front row — so Amazon has things they're interested in using it for too. It's a general mechanism for doing more, which is really interesting. So, on to some demos. Just to reiterate what Justin has been talking about: the Notary project has really taken the approach of building carefully on a number of security standards, and Notation is
the v2 tool that I'm going to show you, but that's just one tool — you'll see different tools here that help with overall supply chain security. To start off, I'm going to show you how easy it is to use remote signing. First, we're going to create a key in Azure Key Vault; in your organization you might already have a key assigned to you with access granted. So I'm going to create this key — it takes a second here; it's actually calling out to Azure Key Vault to create the private key — and now we have this key ID assigned. The key we created uses a code-signing certificate, which is what this EKU indicates, but you could imagine that in an organization you'd probably have a sub-CA or root CA, so that if something were to happen you could revoke with only the minimal amount of damage. So with that, we've created the key. Now let's add that key so it can be used by Notation. That set it as the default key; to show it, I run notation key ls, and you'll see that our key is stored in Key Vault and we're going to use it to do some signing. First, though, it's important to call out that right now the local Docker instance doesn't have OCI 1.1 support for artifacts and referrer types — that will eventually change — so for now we're going to run a registry container locally so that we can push to it. If we do a docker ps, we can see it's being served locally on port 5000. We'll then build the container image and push it to that local registry. Now, ORAS is a tool that lets you discover artifacts in an OCI-enabled registry, and when I take a look right now, there are no artifacts attached to the image I just created. What I can do, though, is use my key up in Key Vault to sign the local image using Notation. When I sign it, it comes back with a SHA. The key is stored remotely, so I have no access to
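it at all — the private key stays in Key Vault.

The local demo flow described above amounts to roughly the following. Image and registry names are illustrative, and at the time of the talk the local registry needed a preview build with OCI 1.1 referrers support — shown here simply as registry:2 for brevity:

```shell
# A local registry to push to, since Docker's own store lacked OCI 1.1 referrers
docker run -d -p 5000:5000 --name registry registry:2
docker ps

docker build -t localhost:5000/net-monitor:v1 .
docker push localhost:5000/net-monitor:v1

oras discover localhost:5000/net-monitor:v1   # nothing attached yet
notation sign localhost:5000/net-monitor:v1   # signs with the remote Key Vault key
oras discover localhost:5000/net-monitor:v1   # now shows the notation signature
```

Again, the signature comes back with a digest, and I have no access to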
it there, and I've now signed that local image. When I do that same discover step again, I can see that the local image has a signature attached, with its own unique SHA. That enables me to do verification. When I try to verify, you'll see that it doesn't work. Why? Because by default we don't trust any certificates — we have to explicitly add them to our configuration. If I do a notation cert ls, which shows which certs in my store I'm trusting, right now there are none. So it makes sense that I don't trust it: I signed it, but I don't trust it yet. I could pull down the public key matching the one I just signed with, but what I think is a little more interesting is public content. I can verify my local image, but what about an image that's, I don't know, on Docker Hub, or somebody else's registry — how do I know that thing is what it was supposed to be when it was created? For that we have this external image, the wabbit-networks net-monitor image on Azure ACR, which I don't necessarily trust yet. When I list it, though, there are a whole bunch of artifacts attached to it, as you can see here: it has a vulnerability scan, it has an attached SBOM, and the image itself — and all three of those things are signed. So how do I know whether I can trust this? With Notation, what I need to do is get the public certificate. Now, we're looking into ways to make this a better experience than fetching a cert from GitHub, but for now this does work — it is indeed public. If I add this public cert to my Notation trust store and do a notation cert ls, you can see that I now trust that cert. So now I can verify that image that's up online before I decide I want to use it locally or anywhere else, or pull it down. And it did verify, so now I know I have some level of assurance that
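the image is what its publisher signed.

The trust-store steps just shown are, roughly (store and file names illustrative):

```shell
# Add the publisher's public cert to a named trust store, then verify
notation cert add --type ca --store wabbit-networks wabbit-networks.pem
notation cert ls    # the cert now appears in the trust store

notation verify wabbitnetworks.azurecr.io/net-monitor:v1
```

So, once verification succeeds, I have some level of assurance that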
whoever signed that content — if I trust the identity of this public key — I can have some assurance that the image has not been modified. The other nice thing you can do: we noticed there were some artifacts attached to it, so one thing I might do first is verify, say, the SBOM, because maybe somebody hacked in and modified the software bill of materials to say there weren't vulnerabilities in it. I verify that, and that seems to be correct, and I can actually pull that software bill of materials down from the public image. When I look at it, I now have the SBOM, pulled from the public registry, with its media type and spec, and I could run different tools on it to check for vulnerabilities or other things before I decide whether I want to trust it. Okay, with that I'm going to switch back to the slide deck and we'll go from there. So here's a real scenario that does happen: what if an identity is stolen — somebody hacked your credentials, or somebody creates what seems to be a real account but it's fake, and they get a certificate. It could be a short-lived certificate, it could be ephemeral, it could be keyless, whatever you want to call it, but in some way they've obtained an identity that allows them to validly, quote-unquote, get or use a certificate. At that point, what they could do is start to sign content and push it up. There have been attacks in this space — I know there are a number of them out there. So what do you do in this case? This is where the trust store and trust policy come into play with Notation. With Notation we have the concept of a trust store — a fancy way to think about it is that it's a location on the file system for all the public certificates you trust, just like the one I added. But the real power comes with trust policy. Here we give you just a little snippet of an idea of
what you could do here. What you could do is revoke, say, that compromised identity: if you know which CA or sub-CA it's coming from, you can block anything signed with that identity from being trusted inside your organization. You can also go fine-grained, down to individual artifacts — you can leave your old, validly signed content in place and just say there are certain things out there you don't want to trust. So there's a lot of power and customization here. In the specification right now we do support OCSP and CRLs, so if you want to go to that extent you can query for revocation. Another point that's really important: there's a criticism out there that revocation doesn't scale well. I'd argue that definitely applies to HTTPS and the World Wide Web, where there's an insane number of sites that you have no idea about and it's overwhelming; but in an organization or enterprise, you should have a much smaller list of entities that you decide to trust. Even for content you pull in from outside sources, you're likely going to re-sign it with your own certificate, so you have a much smaller radius and it becomes much more manageable. The next thing, though: okay, great, I trust it or I don't — how do I actually enforce this? I got the thumbs up or down on the thing, but how do I make sure this actually happens? There are two key places where you need to think about enforcing this. The first is likely your build pipeline: before you go and deploy a container image, you want to verify that nothing has changed since it was created and that it's reliable — verifying the image at that point means it doesn't cost any extra cycles on the cluster. Then, once it
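reaches the cluster.

The trust policy teased above is a JSON file. This is an illustrative sketch of the notation trust policy format — names, scopes, and the identity string are made up — showing how trust can be pinned to a store, an identity, and specific repositories:

```json
{
  "version": "1.0",
  "trustPolicies": [
    {
      "name": "wabbit-networks-images",
      "registryScopes": [ "wabbitnetworks.azurecr.io/net-monitor" ],
      "signatureVerification": { "level": "strict" },
      "trustStores": [ "ca:wabbit-networks" ],
      "trustedIdentities": [ "x509.subject: C=US, O=Wabbit Networks" ]
    }
  ]
}
```

Dropping a compromised identity from trustedIdentities, or tightening registryScopes, is how the revocation scenarios above get expressed. And then, once it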
goes to the cluster, you can validate yet again and have the cluster enforce the policies you've decided on, so that you only run signed content. The tool we use in this particular case, which I'm going to show you in a demo, is an open-source tool called Ratify. Ratify enables you to do that enforcement as a binary — which you could put into a pipeline, or technically run offline if you wanted — and as an admission controller: it works with Gatekeeper as an external data provider, and it runs there with Gatekeeper to enforce only the policies you desire. The first thing I'll show you is the pipeline. I have a pipeline here that does quite a few different steps, and I'm going to edit it to kick off a build, so you have an idea of what everything is doing, and we'll walk through it. The first thing I do is just add "hello KubeCon" to the build-image text I have for the net-monitor image up here, and commit that directly to the branch. Okay, that's kicking off a build. While it runs, I'll explain a little about what it's doing. First, it's worth pointing out that GitHub has the ability to host a registry as a service within the build itself, and once Docker natively supports this you'll no longer have to run a registry at the start — that's important to call out. We install ORAS, we build the Docker image, and the other convenient thing is that there's a Notation setup task that lets you use a secret — one your organization uses, for instance — to sign things, so that you can secure your build. Ideally, as a user, you'd never personally need access to the certificate; your build system would hold the certificate, at
least a different certificate than a personally used one for testing, so that it's more secure and trustworthy. After building the image, we sign the image, we generate an SBOM, we use oras attach to attach the SBOM, and then we sign the SBOM — all of the artifacts that are there. oras attach is the new command in ORAS for creating reference types. We can see here that the build is almost done. The last thing that's important to call out is that you want to do all this against the local host until the very end, because during the build you may want to check for vulnerabilities and stop the build process — you don't want to pollute your registry with tons of extra stuff that doesn't qualify for production. At the end we can use oras copy to go from the local build host up to the actual location in the registry. So now I can take a look at what we just built. I'll clear this out and do an oras discover on the image we just built, and again you can see everything we just did — it took about a minute to create the SBOM, create the vulnerability scan, sign the image, sign the SBOM; all those things are done very quickly. Now, how do we enforce it? We have Ratify as a binary, and one of the things Ratify has is a local configuration that lets you specify which certificates you're using. In this case Ratify has this config; there are a number of different verifiers — you can verify SBOMs, run license checkers, and it's open source so you can add other plugins for policy — but right here we're just using the Notation signature verification and
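a trusted certificate.

Ratify's local config is a JSON file too. This is a rough illustrative sketch of what such a config might look like — paths and names are made up — with an ORAS store for fetching referrers and the notation verifier for signatures:

```json
{
  "store": {
    "version": "1.0.0",
    "plugins": [ { "name": "oras" } ]
  },
  "verifier": {
    "version": "1.0.0",
    "plugins": [
      {
        "name": "notation",
        "artifactTypes": "application/vnd.cncf.notary.signature",
        "verificationCerts": [ "/certs/wabbit-networks.pem" ]
      }
    ]
  }
}
```

So in the demo config, it's the notation verifier and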
we're using just this particular trusted certificate. With that, we can do a ratify verify locally and actually see these different images verify from a binary — you could imagine putting this in an offline scenario or in a pipeline to verify before you deploy. And then within Kubernetes: we have Ratify running on this Kubernetes cluster, and I'm going to show you how that same experience looks in a cluster. I'll run kubectl logs in one window, and in the other window I'll show you what it looks like to run a non-trusted image. This should fail — and there you see the Gatekeeper admission controller said nope, you failed: it's not a signed container we trust. Then the signed one runs, and just so you can see that it actually did something, we can go back to the logs and see that this signed net-monitor image actually did verify via Ratify on the cluster itself. With that, we'll head back. So this is the stage we're at today — these are the versions of pretty much everything we demoed, in various alpha and beta states — and this is the roadmap going forward: there's going to be a first beta Notation release, obviously OCI 1.1 is coming out soon, and all the other pieces are on their way to final releases over the next few months. I also want to talk about SCITT, a draft IETF standard for supply chain security that a lot of the people who work on this are really interested in. In particular, it supports a set of distributed transparency logs that still support revocation, and the logs also support DID identities, so there's a whole set of verification structures. It's at a very early stage, but there's a lot of interest in it. There's a blog post that Steve Lasker wrote, and we'll be doing a presentation to the
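Notary working group about it soon.

Meanwhile, the cluster-side enforcement from the demo boils down to a few commands; the namespace and image names here are illustrative:

```shell
# Watch Ratify's verification decisions as admission requests come in
kubectl logs -f deployment/ratify -n gatekeeper-system &

# Unsigned image: denied by the Gatekeeper admission webhook
kubectl run demo-unsigned --image=wabbitnetworks.azurecr.io/net-monitor:unsigned

# Signed image: admitted after Ratify verifies the notation signature
kubectl run demo-signed --image=wabbitnetworks.azurecr.io/net-monitor:v1
```

As for SCITT, there's that blog post, and we'll be presenting to the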
Notary working group about why this is interesting and what kinds of things people are thinking about integrating back into Notary as this develops in the IETF. There's also a whole bunch more ongoing work we haven't had time to cover: there's a lot to do on how we roll out signing infrastructure from a world where things are not signed to one where they are, and how we actually get that done. We're doing some planning work around Docker Official Images and signing support for those, and how we're going to get validation of signatures throughout the ecosystem — there's a bunch more work to talk about in that area as well. Thanks. There's a what's-next — I don't think we covered the what's-next, right? Did we cover what's next? I'll do it briefly — sorry, we're almost out of time, we've got like one minute — no, we've got five minutes, we've got time for questions. Sorry, there's a question in the front. If you want a mic, for the recording and the online people it's probably a good idea — I know you can project your voice. Alright, great, thank you. It's a very interesting talk and I have a whole bunch of questions, so I'm struggling with what to ask. I guess one of the biggest: there's been a lot of work in the Sigstore community that builds on standards and has had a lot more security review and so on, and that sort of subsumes everything this proposal is meant to do — so why not collaborate with that effort? I think there is a whole bunch of collaboration. A lot of the building blocks — things like reference types and so on — were all done together, so there's definitely convergence in a lot of those. For instance, SCITT is one; it's at the IETF, an international standard, and we'd love to see participation there as well. And then ORAS is something earlier that Josh from
Chainguard and Sajay worked on together. In general — I did miss that bit; I did miss the bit about saying there's a session on this, there is a session — but yeah, offline I'd be happy to talk to you more, because I'd love to know which standards we're actually not supporting or doing right now. I'd actually think about it almost the opposite way: you support SLSA signing and attestations, but how do you verify that the data you have in your SBOMs is correct? There's a whole effort across groups to make sure the supply chain data they generate is actually accurate, not just signed — signing is only part of the whole picture. And that's where — we touched on it briefly — SCITT is the industry standard we're trying to build for all the different supply chain artifacts, end to end. Because otherwise, if everyone in this whole space just does whatever they want, we all have different tooling experiences, when there's already a fairly standardized thing underneath a lot of Sigstore and SLSA and such: in-toto. I'm aware of SLSA, and we're absolutely using in-toto for the work we're doing in that area at Docker, and we're working with the TestifySec folks on supporting that for the SBOMs we're generating for Docker Official Images — we're absolutely using those things. And you're generating in-toto layouts and verifying them as part of that as well, or not?
I mean, it's work in progress — we haven't shipped it yet, but the aim is absolutely for us to use in-toto for validation of the stuff we're shipping as Docker. That's not part of the Notary project; we're just working with everyone else on that. We haven't covered it here, but in terms of trying to help with things: at Microsoft we announced an open source security framework that has a maturity model and other things, and that's now part of a working group in the CNCF. So there are a lot of things involved in standards as a whole, and that's part of what we've been talking about: we're trying to build on standards throughout, which is why things sometimes take a little longer to come together across different tooling. The repo you're using — the workflows and the tooling there — is that all publicly available so we can try it out? Yeah, all that stuff is shipped right now; everything I demoed is available, ready to use exactly how I demoed it today. Anyone else? Thank you. One more? I'm actually curious about the admission-control functionality — it called out to an external tool. Would you have to know how that worked, how Gatekeeper was able to call out to an external tool to verify the signature, or is that an inbuilt feature? So which part — the one on Kubernetes? Yeah, the Ratify admission controller functionality. The thing that attaches to Kubernetes runs in a pod, and that pod has the public certificates you trust mounted; Ratify will locally try to verify the signature on the cluster, which is what I showed you in the logs. It actually uses the notation-go library — the Ratify binary uses that Go library to check the Notation signatures. So it's a separate admission controller from Gatekeeper, right?
So it's actually an external data provider for Gatekeeper — they work together. Gotcha — so if you have Gatekeeper, you'd need that workload running to verify the signatures, but it would be a data feed for Gatekeeper? That's right. And if you want, you can do it with Rego: you can write Rego policy that works with the external data provided by Ratify to Gatekeeper. Awesome, thank you, appreciate it. Sorry, I think that's it — thank you.
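To close the loop on that last question: the Gatekeeper side of the Ratify integration can be expressed in Rego via Gatekeeper's external-data builtin. This is an illustrative sketch — the provider name and response fields may differ in a real deployment:

```rego
package signedimages

# Collect every container image in the admission request, ask the Ratify
# external data provider about them, and flag any verification errors.
violation[{"msg": msg}] {
  images := [img | img := input.review.object.spec.containers[_].image]
  response := external_data({"provider": "ratify-provider", "keys": images})
  err := response.errors[_]
  msg := sprintf("image failed signature verification: %v", [err])
}
```

A ConstraintTemplate wrapping Rego like this, plus a Provider resource pointing at the Ratify service, is what wires the two together.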