Okay folks, are you ready? Then we can get started. This is the Notary Project talk in the maintainer track. My name is Toddy Mladenov, and what we're going to talk about is enabling the software supply chain ecosystem with Notary Project. If you attended previous talks about Notary Project, they were more about the project itself and what it does. Today I have two of our partners with me, and they will talk about their experience working with Notary Project and implementing functionality on their side. As I mentioned, I'm Toddy Mladenov. I'm a maintainer of Notary Project, I work for Microsoft, and I've dealt with container security for more than two and a half years now. Shuting, do you want to introduce yourself? Sure. Hey everyone, this is Shuting Zhao. I'm a Kyverno maintainer and a staff engineer at Nirmata, and today I'm here to demo the integration between Kyverno and Notary Project. And Ivan is from Venafi. Yep, thank you. I'm Ivan Wallis, architect at Venafi, focusing on software supply chain security, with a lot of background in key management, crypto, and PKI. I'm here to share some learning experiences and also to demo the signing part of this integration. Our agenda for today: first, one slide on what happened with Notary Project since KubeCon North America in October last year; after that we'll go through the demos that Shuting and Ivan will show you; and at the end we'll talk about what's coming next for Notary Project. So, very briefly, what happened since October last year? The first thing is that we really cleaned up the branding: officially the name is Notary Project now, and the tooling that is currently available is called Notation. If you're interested, we have a full FAQ where you can read about all the terminology we use in the specification and so on.
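For context, a minimal Notation workflow looks roughly like this (a sketch using Notation's built-in test certificate; the registry and image names are placeholders):

```shell
# Generate a self-signed test key and certificate and set it as the default
# (demo use only; production signing should use a CA-issued certificate or a KMS plugin)
notation cert generate-test --default "wabbit-networks.io"

# Sign an image already pushed to a registry (placeholder reference)
notation sign registry.example.com/net-monitor:v1

# Inspect the signature, then verify it against the configured trust policy
notation inspect registry.example.com/net-monitor:v1
notation verify registry.example.com/net-monitor:v1
```

Verification requires a trust policy and trust store to be configured first; the demos later in the talk show what those look like.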
We also updated the Notary Project notation libraries as well as the CLI. We released the first version last year in August, and we recently released an update to that version with more functionality and a better user experience. We also implemented GitHub Actions and Azure DevOps (ADO) tasks that you can use, and we are working with other CI/CD tools to add more automation around Notary Project. And as you can see, our adopters are growing: not only the big maintainers that are on the project, but also a lot of projects looking at Notary Project implementations, as well as customers using it. I'll hand it over to Ivan to go over the plugin framework, which was one of the big improvements we made over the last couple of months.

Thank you, Toddy. If you've already played with a lot of enterprise tooling, one measure of success is the ability of these tools to offer a very extensible plugin framework, so I was really excited to see, when the Notary Project started working on Notation, a framework that allows a lot of our ecosystem partners to deliver that experience for their customers. This slide shows the overall plugin framework. We have the ability to perform signing, and of course it's pointless to sign unless you can properly verify those software artifacts. Essentially, the first step is to get the tooling installed, then to install a plugin, such as the one from AWS, from Azure Key Vault, or from Venafi. That allows you to offload what I think is the most important and critical part for the enterprise: the ability to sign with a trusted identity. You see that here with the enterprise code signing platform, and there are a lot of different solutions in that space, including ones from Venafi. If you're familiar with code signing in general, typically we take the digest of the artifact, which provides authenticity and integrity for that particular artifact, sign it, and then push the signature to the registry. Then, when it comes to what Shuting is going to be showing, there's the verification side: we have a signed container image, we want to run it in production, and we want to know it's coming from a trusted source. So think of it in reverse. The system wants to pull down an image, so we pull down the signature, and the nice thing with Notation is that it not only provides the ability to sign with plugins, it also has the ability to extend what normally happens during verification, where we validate the chain, check revocation, and check the timestamp. The plugin, the way Venafi implemented it, goes back to the code signing solution, which is the source of record, so we know where the signing identity came from, and we can also do things like timestamp checking and checking the revocation status of a particular signing identity. So it's really end to end, and we'll show you that shortly: plugins can extend verification based on the trusted identity and revocation status. That's a good overview, but when I first started to look at what the functionality would be, it reminded me that if you've ever built something without a recipe, like trying to bake a cake, you wonder what it is actually supposed to look like. You have to be able to have a reference implementation.
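The end-to-end plugin flow described above can be sketched with the notation CLI roughly as follows (the plugin name, URL, key id, and image reference are placeholders for whatever your signing provider uses):

```shell
# Install a signing plugin from its release URL
# (Notation 1.1+ verifies the download against the given checksum)
notation plugin install --url https://example.com/notation-venafi-csp.tar.gz \
  --sha256sum <checksum>

# Register a key backed by the plugin; the private key stays in the signing service
notation key add --plugin venafi-csp --id my-signing-identity wabbit-key

# Sign: only the digest is sent to the service for signing
notation sign --key wabbit-key registry.example.com/net-monitor:v1

# Verify against the configured trust policy and trust store
notation verify registry.example.com/net-monitor:v1
```

With an enterprise plugin like this, the private signing key never touches the build machine; the plugin hands the digest to the backend service and receives the signature back.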
So at the time, I think it was just the Azure Key Vault implementation: Golang, you know, limited. I thought, okay, there's some basic scaffolding in there, somewhat of a template, so that was an interesting experience. One thing we've since added as an additional repo is a Notation plugin framework, so things are a bit more consolidated. And if you look at the deeper parts of what the library supports, going back to the analogy of building something, you need a spec. One of the things that sets this apart from other signing solutions out there is a very detailed spec, both for signature generation, so what options you have for serializing the data and protecting the payload, and also, as I mentioned on the previous slide, for how we verify that signature. That allows us to extend into enterprise code signing solutions such as the ones from Venafi and others. As for lessons learned: if you're familiar with revocation and similar checks, especially in production environments, there's obviously a performance trade-off to consider. Caching can take place, but we need to weigh that trade-off against real-time verification of the signing identities. Some initial rough areas helped steer the direction of the project, and there are still ongoing things we can improve: how do we debug and troubleshoot plugins? There were initial challenges with how to create the necessary payload and how to serialize that data, especially when dealing with a particular third-party solution. And as Toddy mentioned, a big improvement in terms of user experience was answering: how do I get Notation installed in the first place, and then how do I plug it into my environment so I can start leveraging my infrastructure, especially for signing? At that point every plugin basically had to create its own documentation, and with 1.1 we introduced the plugin management part.

With that said, we're going to go to the demo. I pre-recorded this one. We're going to run through a signing example, and Shuting is going to walk through the other part, which is the verification experience in Kyverno. So let's get that going. Running the script confirms we have the latest version of Notation, and at this point we start the installation. Notice here, with 1.1, we can securely download the latest version of the plugin from your plugin repo, which I think is a big improvement in that experience, as I mentioned. We can see it listed now on the developer or build system. At this point, if you've played with Notation before, you need to configure which signing identity or certificate you're going to use. In this case this is relative to a Venafi type of deployment; obviously there's backend setup we're not going to detail here, such as how you point to the code signing service and what some of those details are. Then we run the notation sign command. As you can see, we're referencing that specific identity, and this is just a sample image I put together. That goes off to the Venafi service, which signs the digest, and it's successfully signed. And if you're familiar with Notation, you can then run the inspect command, which shows you
all the details of that signature, from the signed attributes all the way to the trust chain, which is going to be very important on the enforcement side with policy, as you'll see. Notation has its own policy management, and here we have a very simple example with strict signature verification, pinning a specific trusted identity. Then we import that, and finally, very simply, you can imagine end to end, before this gets released, we can verify the signature against what's in the registry. That's pretty much it for the demo, so I'll pass it over to Shuting.

All right, thanks, Ivan. Let me switch to the next slide. Next I'll be demonstrating how Kyverno integrates with the Notation Venafi plugin to verify images in your Kubernetes cluster. We have built this on the Nirmata extension service, which runs inside your Kubernetes cluster and verifies images using the Notation Venafi plugin that Ivan just demonstrated; it internally embeds the plugin and runs inside the cluster. Let's say you send a request to create some workloads. That request first reaches the API server, and with Kyverno running in place as an admission controller and a Kyverno policy deployed, the admission request is forwarded to Kyverno. Kyverno then parses the image data from the admission request and forwards it to the extension service. Once the extension service receives that image data, it uses the Notation Venafi plugin to verify the images and returns the results to Kyverno, and that result is then returned to the API server along with the admission response. You are then able to block your workload if it is running an unsigned or insecure image. So that's the basic workflow of how it works in the Kubernetes cluster. To give you a bit more context on the extension service: it can be used not only to verify image signatures but also to verify attestations, and you can also mutate the image digest for your resources on the fly. We have introduced a caching mechanism into the extension service that stores verification outcomes, the results for your image signatures or attestations. This is an in-memory, TTL-based cache, and a cache entry is cleared once it expires or if there is any change to your trust policies or trust stores, which are the resources used when you do verification with the notation CLI. Moreover, if you're running on a shared cluster, it is possible to configure multiple trust policies and trust stores to isolate verification flows. Depending on your use case, especially at large scale, you can scale out the extension service by increasing the replicas, so verification requests are distributed across all running instances, and by allocating more resources to the extension service you can increase the combined throughput. With that, let me dive into the demo; we'll see how verification works in your Kubernetes cluster. Okay, this is the Kyverno one. Just to introduce the setup: I have a single-node kind cluster running, and to verify images you have to install Kyverno, plus the extension service. I'm deploying Kyverno using this helm command; this is a pre-release, so I pass the --devel flag. Let's wait until Kyverno is installed; it will take some time, but once Kyverno is in place, the admission controller will be installed into your cluster along with the
rest of the controllers. Next, I'm going to install the extension service. It is open source; you can fetch it from the GitHub repo, but I'm installing from my local manifest. Let's verify the extension service is running. Then, as I said, you have to deploy Kyverno policies in order to receive those admission requests, so let's quickly inspect the Kyverno policies. I have a cluster policy here matching on the resource kind Pod and a specific testing namespace called test-venafi. If you inspect the context entries, there are two. The first fetches the certificate so that Kyverno can send requests to the extension service endpoint. In the second, I build the image variable (this happens inside Kyverno) and then do a POST call to the extension service, sending the image data over along with the name of the trust policy, so the extension service knows which trust policy to look at when doing verification. On top of that, I'm doing a mutation: as I mentioned earlier, you can replace the image tag with the image digest returned from the extension service. Next, let's deploy that policy into the cluster with kubectl create and make sure the policy is ready. Yes, it's now in Ready status, which means it can be applied to admission requests. And this is the trust policy that Ivan showed earlier, just wrapped in a custom resource so you can create it in your Kubernetes cluster; it has the trust policy name and the rest of the data, a simple example of a trust policy. Let's deploy that into the cluster as well and verify it was created successfully. Now, similarly, I'll install the trust store into the cluster, and again let's inspect it. It is also wrapped in a custom resource, with the name of the trust store and the CA bundle embedded in the spec of the resource. We create it in the cluster and make sure it's created successfully. With all those resources in place, you are now able to verify images. Let's use the image that Ivan signed earlier. I'm trying to create a pod in this newly created test-venafi namespace, with the image from the GHCR container registry at tag v1, and I'm doing a server-side dry run. What happens internally is that the admission request is sent, the API server forwards it to Kyverno, Kyverno parses the image data and sends it over to the extension service, and the extension service internally leverages the Notation Venafi plugin, fetches the image data, verifies the signatures, and returns the result back to Kyverno, and then further back to the API server. Now you can see the pod is created successfully, because this is the signed image. Next I'll run this command again just to show you how the caching works: if I run the command, the pod is created almost immediately, because the verification result has been cached in the extension service, so you don't need to fetch everything again and incur that long delay during verification. Similarly, if you run an unsigned image, it will of course be blocked by Kyverno, because the Venafi plugin fails to verify the image and the request is blocked; you'll see that in a minute. Okay, there's the message, so make sure your image has been signed successfully, otherwise your creation request will be blocked. With that, let me switch back to the slides and hand it over to Toddy.

Thank you, Shuting. As you can see, we enabled ways to create extensions not only on the signing side but also on the verification side, and we
are very happy to work with partners like Venafi and Kyverno to extend the capabilities. We have the core part in Notation, and if you have any specific needs, either we can work with you to extend it, or you can go through the documentation, the same way Ivan did, and implement it yourself. What's coming next for Notary Project? A couple of things we are working on. We are working on signing and verifying arbitrary blobs, which will allow us to sign not only container images but also the actual binaries you download, for example Notation itself or any other software. We are also working on this because most of the time you want to sign things before you push them to the registry: we want to sign SBOMs and other information related to your software before you push it to the registry or wherever you will use it. We are also working on timestamp protocol support, which will be built into notation core, which means timestamp support won't need to be implemented in the plugins. And one of the big things we are working on is integration with attestations: we are engaging with the in-toto community, and we are looking to add attestation signing to Notation and to work on some standard attestations that we believe will be important for the software supply chain. We have one minute, and I think I can sneak in one more demo. This demo is with Docker Hub: so far we've been doing our demos only with GHCR, but recently, thanks to the efforts of the Docker folks, we can now push Notation signatures to Docker Hub. Very quickly, I'll go through that demo. There will be some repetition here, just to set up the environment. I have a test repository in Docker Hub, and I'm a Python developer, so I mostly use Python for demos. What we're going to do is create a signing key, similar to what Ivan did for the Venafi one. The test key is created, I'm signed in to Docker Hub, and the only thing I need to do is run notation sign with the signature format. As you can see, the image is successfully signed. Of course validation also works the standard way: we create the policy, pull the image, and validate it. As you can see, the policy is created specifically for the test repository in Docker Hub, so when I verify the image, it tells me the docker.io image is verified. From now on we'll be very happy to be able to do our demos with Docker Hub, so thank you, Docker. I believe that's everything, so we are open for any questions, and please provide any feedback on the session. Any questions from the audience?

Hi, thanks for the talk. My question is about Kyverno with the Nirmata extension: do you support an mTLS connection between Kyverno and the extension service, and is that easily configurable? How do you achieve that?

Yes. Let me switch back, if I can, to the Kyverno policy. I have two context entries defined in the policy itself, and the first one, I don't know if you noticed, I think this is the one: the first context entry, named TLS certs, actually fetches the certificate from the secret created by the extension service and then uses that to be able to post requests to the extension service.

Okay, cool, sorry I missed that, thank you. Yep, any other questions?

What about performance? Performance of the mechanism itself, overall, when signing or doing verification?

Right, very good question. Let me switch to this slide. At the last point you can see we support high availability of the extension service, and, as I mentioned, you can either scale out by increasing the replica count of the
running instances, so the verification requests are distributed across all the running instances, or allocate more resources to the configured instances, so you can increase the throughput of the whole extension service. That is supported. Hope that answers your question.

And just one note: especially when verifying against a third-party identity service for the signing certificate, a good thing with this integration is the ability to cache the verification outcomes, so you're not going back every time to check revocation status and to check that the signing identity is in fact what it claims to be. That's another good part of it.

Hi, a question here about best practices. In the demo you showed us how to sign locally, but in real environments we want to sign images inside pipelines before uploading them to the registry, and we do not want those certificates available in all the projects. Do you have best practices on how to handle the signature part inside pipelines?

Yeah, it's a great question: best practices around signing and that whole process, so that you're limiting the exposure of the signing certificates and keys. That's exactly what we're getting into on the solutioning side. At Venafi, our job is to basically reduce or eliminate the need for a local certificate, and especially the private signing key, to be on the build system. What happens with our integration is that the plugin sends the digest off to the Venafi code signing service, and the private key is either in the Venafi database or on a hardware security module. This is the more enterprise side of it, and that way, once again, you're limiting the key compromise potential.

And just to add to that: we have the GitHub Action, so you can get the notation CLI together with, let's say, the Venafi plugin in the GitHub Action with very simple configuration, and you can run it inside GitHub Actions or ADO tasks. Any other questions?

Hi, a question on the Notary SDK: which programming languages do you support?

I can answer that. Notation right now is implemented in Go. We are looking at other languages, but right now Go is the only one. Anything else? More questions?

Notation strongly relies on the referrers feature of the OCI specification, and there are still some registries not supporting it. Do you have a workaround, or is it completely out of scope to support them?

It's true that certain registries do not support it; for example, as we mentioned, Docker Hub until recently did not support it, but it does now. Now that the OCI specification is released and stable, we expect every registry to support it, and the majority of registries support one or the other way of doing referrers: there is backwards compatibility with OCI 1.0 that uses a manifest index to support the referrers, and the new OCI 1.1 specification uses the referrers API. Notation supports both of those and can work with either one. Honestly, at the moment it's escaping my mind which registries do not support either one or the other, so theoretically, with all major registries, you should be able to use it. Any more questions? I think we are almost at time; actually we have one more minute. No? Okay, thank you very much, appreciate your time.
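The GitHub Action mentioned in the Q&A can be wired up roughly like this (a sketch based on the notaryproject/notation-action repository; the plugin URL, checksum, key id, and image reference are placeholders, and the exact input names may differ by action version):

```yaml
# Sketch of a workflow step sequence: set up notation, then sign with a plugin
- name: Set up notation
  uses: notaryproject/notation-action/setup@v1

- name: Sign the image with an enterprise signing plugin
  uses: notaryproject/notation-action/sign@v1
  with:
    plugin_name: venafi-csp
    plugin_url: https://example.com/notation-venafi-csp.tar.gz
    plugin_checksum: <sha256>
    key_id: my-signing-identity
    target_artifact_reference: registry.example.com/net-monitor:v1
    signature_format: cose
```

This mirrors the local demo flow: the plugin is installed securely by checksum, and signing is offloaded to the backend service so no private key lives in the pipeline.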