Alright, let's get started. Thank you everybody for joining us today. This talk is about Sigstore: the past, the present, and the future. My name is Dan Lorenc, and up here is Bob Callaway. Introduce yourself. I'm Bob Callaway. I'm a software engineer at Red Hat working on various supply chain projects, and also one of the members of the Technical Steering Committee for Sigstore. Awesome. So thanks for joining us right before lunch today and delaying your lunch a little bit. We're going to be talking about Sigstore. Sigstore is a new Linux Foundation project to help improve software supply chain security through transparency. We're going to talk a little bit about supply chain security, but this is one of like a hundred talks on that topic at this KubeCon, so I don't think we need to spend too much time on it. Was anybody here at Supply Chain Security Day a few days ago? Yeah, a lot of familiar faces here in the back. We're going to go over the history of Sigstore, where it started. It was early 2021, which feels like a couple of months ago, but it's been a long time now. We'll cover some of the work we've done along the way, and we're going to do some demos showing how all the different components of Sigstore fit together and can be used, with a couple of real-world examples relevant to KubeCon. We'll cover some of the architecture; there are some complicated, hand-wavy spaghetti drawings, but we'll break them down piece by piece and show how they all fit together. And then we'll talk about some of the work we have planned in Sigstore going forward. So we have one obligatory slide here on supply chain security before we can move on: a bunch of scary figures, graphs going up and to the right. These are both from the recent Sonatype report: a 650% increase in software supply chain attacks on open source in 2021. These things aren't slowing down. I think the EU predicted another 400% increase next year. Lots of attacks are happening.
This is very scary, but thankfully a whole bunch of projects, a whole bunch of people here at KubeCon, a whole bunch of people in the cloud native ecosystem are working on helping protect open source and protect software supply chains. So like I said, Sigstore's mission is to help improve software supply chain transparency. We're doing this through software signing and provenance. When we originally started the Sigstore project, we took a lot of inspiration from Let's Encrypt. How many people here use Let's Encrypt every day? Pretty much everybody here. Does anybody remember what the internet was like before Let's Encrypt? Okay. Yeah, nothing was encrypted. Everything was insecure. It was because you had to go buy complicated X.509 certificates from certificate authorities. You had to pay with credit cards. It was expensive. You had to email things back and forth. You had to figure out what folder to drop them in on your server. It was a giant pain. And so Let's Encrypt came along and thought: we can solve this with automation, we can solve this with transparency, and we can solve this with open source. So Let's Encrypt pushed new standards, built a whole bunch of tooling and infrastructure, and made it free, easy, and automatic to get secure certificates for your website. And so now you can drop, you know, one config line into your Kubernetes manifests and pretty much get encrypted traffic over the internet. And as a result, pretty much everything is encrypted now. I don't remember the last time I saw one of those red X's in Chrome when I hit a website. It's been driven to ubiquity by making things easy and free and transparent. And so we thought we'd try to do the same thing with Sigstore, but for code signing. When you're talking about supply chains, it's a lot of the same problems. There are code signing certificate authorities, but they're expensive, they're not automated, and it's hard to get certificates from them.
It's hard to renew them. It's hard to fix things when they expire. So we copied the Let's Encrypt playbook of trying to automate things, make them free, and make them easy, and tried to get people signing the code that they release on PyPI and on container registries and Maven Central and all the different open source repositories. And so far, the reception has been amazing. What we're finding is that the Let's Encrypt playbook does work. So I'm going to go over some of the history of the technology behind Sigstore here too, quickly, because we rely on a lot of things that have been developed over the past couple of decades in computer science. One of the first things here was certificate transparency logs. These were introduced widely across the web back in 2014, so not even a whole decade ago. What transparency logs allow us to do on the web, with browsers, is to put things in a central ledger where everybody can watch them. If you're a company that uses CAs to get certificates for your browser, you can monitor these transparency logs to make sure that all of the certificates issued for your company's domain name are in there and are correctly issued, so that nobody can hold an incorrectly issued one and be masquerading as your company or snooping on your traffic. Mistakes happen. The point of transparency logs is to capture those mistakes on record so you can find them later, recover from them, and mitigate the bad things that do happen. We can't prevent mistakes from happening; it's about getting these things on the record so you can detect them and recover from them quickly. A couple of years later, a team at Google open sourced Trillian, which is a database that uses transparency logs as a core primitive. There are a bunch of applications of Trillian, but many of them are powering the exact transparency logs that are used when you request certificates in your browser today.
Shortly after that, people started thinking about how to use transparency logs for things other than internet traffic and X.509 certificates. Mozilla published a paper called Binary Transparency for Firefox. Firefox is a very popular web browser, it was at the time, and Mozilla was taking this seriously five or six years ago, way before anybody else was. They were working on techniques to use transparency logs, the same way they were used for certificates, to protect the Firefox browser update process. So there's a white paper here, and it had some pretty cool ideas on how to do this by piggybacking on the existing certificate transparency logs, so they wouldn't need to run any extra operational infrastructure. Brandon Philips, another cloud-native friend, published a project called Arget right after that, in 2019, kind of generalizing that idea of putting things in a transparency log, and again reusing the existing certificate transparency logs. Arget would verify that things downloaded from a URL hadn't changed, by putting the mapping of the URL to the digest of the contents at that URL into one of the existing certificate transparency logs: issuing a certificate through Let's Encrypt and then stuffing that metadata inside of the certificate. It was a really clever hack. Unfortunately, certificate transparency logs weren't meant for that, so the people running the existing CT log infrastructure politely asked him to not do that and to stand up his own infrastructure instead of abusing, or piggybacking on, the CT logs that power web traffic. So Brandon started on a couple of other ideas here, and we worked with him to start a more general transparency log called Rekor. That was really the start of the Sigstore project, around mid-2020. Instead of piggybacking on the existing infrastructure, we ran our own instances of Trillian as a public benefit, so anybody can put entries in, and anybody can verify entries in, this instance of Trillian.
We put a bunch of other pieces and tools together around March of 2021: some tooling to interact with that transparency log, and some container-specific tooling as well that we'll demo later. A bunch of the components have started to mature rapidly. 2021 is when supply chain security kind of took off and hit the mainstream, and we've got a whole bunch of different public services running that are stabilizing as we go. The cosign tool itself, which is for signing and verifying containers, hit a 1.0 or GA release back in July, and the transparency logs that it interacts with are going to move to beta any day now. So here are some stats on the community. The activity that we've seen has really been amazing, all of the support from everybody in advancing the Sigstore project. Across all the different repositories and projects, basically just since March of this year, there have been 820 commits and over 100 contributors as of yesterday, with 20 different organizations or companies participating. It's really an awesome, vendor-neutral, multi-company, all-the-goodness-of-open-source effort that's allowing Sigstore to get to where it is today. The transparency log, I just checked yesterday, has almost a million, well, three quarters of a million entries in it so far, which is awesome to see too. So here's the very complicated, hand-wavy spaghetti drawing that I talked about before. We're going to break this down into three different pieces, so you don't need to understand everything right now, and we'll have some demos too. I'm going to do some demos showing how to sign a container and then verify that container all the way back to the source code it came from, even through a base image. (I think we're getting some feedback again. Cool. There we go.) We're going to show how all these things fit together, and then Bob is going to do a demo that breaks it down even further, showing how to do all of this in bash without even using the tooling that we built.
All right, so here's the first demo. Let me get out of here. I've got a couple of tabs open first. There's a simple Go application in the GitHub repo here. This is a basic hello world Go application; it just prints hello world. It is containerized using a tool called ko. Does anybody use ko to build Go applications? Cool, ko fan club in the back. So this runs in a GitHub Action. It builds a container with ko and publishes it to ghcr.io. This whole process is signed and verified using ambient credentials that are present in the GitHub Action itself, using an awesome GitHub–Sigstore integration that we worked on together. So you don't need any secrets in here. There are no credentials. This is all short-term stuff that's handled automatically by GitHub. One of the cool benefits of this is that the credentials are bound to the exact run of the action itself. So this GitHub Action can run every time something is committed, or it can run manually, and we can trace things back, not just to the repo they came from, but to the exact invocation of the action. We can figure out when it ran, exactly which commit triggered it, who pushed that commit, all of that goodness. This uses just the normal ko build, which bases your Go application on top of Distroless. Distroless is a set of lightweight base images designed for containers. Does anybody here use Distroless? A bunch of people. Even if you don't know that you use it, you're probably using it, because our friends here in the Kubernetes release group moved the Kubernetes images over to Distroless by default. So if you're running a Kubernetes cluster, you probably have Distroless in there, whether you know about it or not. And luckily enough, these Distroless images are also built and signed and have all of the provenance associated with those builds inside of the Rekor transparency log.
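As a rough sketch of what that setup looks like, here's a minimal, hypothetical GitHub Actions workflow using keyless signing with ambient credentials. The repo, image name, and digest are placeholders, and the action versions and the `COSIGN_EXPERIMENTAL` flag reflect the cosign 1.x era, so details may differ from the actual demo workflow:

```yaml
# Hypothetical sketch, not the exact workflow from the demo.
name: build-and-sign
on: push
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write   # lets the job request a short-lived OIDC token from GitHub
    steps:
      - uses: actions/checkout@v2
      - uses: sigstore/cosign-installer@main
      - name: Sign the image keylessly (no stored secrets)
        env:
          COSIGN_EXPERIMENTAL: "1"   # enables keyless mode in cosign 1.x
        run: cosign sign ghcr.io/example-org/hello-world@sha256:PLACEHOLDER
```

The `id-token: write` permission is what makes the "ambient credentials" possible: GitHub mints an OIDC token scoped to this exact workflow run, and cosign exchanges it for a short-lived signing certificate.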
So in this demo, we're going to start with the application image itself, follow that back to the commit that signed it, then jump to the base image, and follow that back to the job that built and signed that base image. So let's open up the terminal here and start typing some commands. Bob has them all saved for me. We're going to do a lot. All right, we're going to start by setting the experimental environment variable, because this uses the experimental features of the transparency log. And we're going to verify the image itself. It's on ghcr.io under the name of my GitHub repo, and we're going to do some jq magic here. So this fetches the signatures, verifies the signatures, and dumps the provenance information so we can figure out exactly where it came from. The jq command formats it nicely. We can scroll up and see a couple of different entries here. We can see that it was signed by this subject. This is the fancy GitHub–Sigstore direct integration that I was talking about before. So we can see that this is a token that was signed by GitHub, down to this exact GitHub repository, the exact action inside of it, and it ran on main. And this is the exact commit that it ran at. So we can take a look at that commit. We can come back here. This is it, over in the repo. Yeah, that is the last commit here, the e041 one. Back to the terminal. Now we're going to take a look at that image itself using the crane tool, which is designed for working with registries. We're going to look at the manifest, the actual OCI manifest of that image itself, and pipe that to jq as well. And this is how we're going to find the base image, and this is the first one. The OCI specification recently added standard annotations where build tools can record the base image that an image is built from. So we see these two things here: this is built from the distroless static image at this digest.
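Those two annotations are `org.opencontainers.image.base.name` and `org.opencontainers.image.base.digest`. Here's a small offline sketch of that jq lookup, with a stub manifest standing in for the output of `crane manifest`; the base image name and digest values are made up for illustration:

```shell
# Stub OCI manifest standing in for `crane manifest <image>` output;
# the base image name and digest here are placeholder values.
cat > manifest.json <<'EOF'
{
  "schemaVersion": 2,
  "annotations": {
    "org.opencontainers.image.base.name": "gcr.io/distroless/static:nonroot",
    "org.opencontainers.image.base.digest": "sha256:0123456789abcdef"
  }
}
EOF

# Pull out the base image reference, the same shape of query as in the demo.
base_name=$(jq -r '.annotations["org.opencontainers.image.base.name"]' manifest.json)
base_digest=$(jq -r '.annotations["org.opencontainers.image.base.digest"]' manifest.json)
echo "${base_name}@${base_digest}"
```

That `name@digest` pair is all you need to pivot to the base image: no knowledge of how either image was built is required.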
So that's how we're going to jump back over to the base image and start looking up signatures there. We're going to take that digest. We don't need to know anything about how this was built; we just need the digest of the container itself. And do some shell magic. We've got a bunch of entries here. The reason there are a bunch of entries is that the distroless images are actually reproducible. There are about 50 variants of images for different architectures that are built from the same repo, and they all rebuild every time a commit is pushed. Since the build is reproducible, we get the same exact digest unless something actually changed for that specific image. So here we actually have a bunch of runs reproducing that image, which is pretty cool. If we take one of these, we can fetch it from the log and start breaking apart the response more. This is hitting the transparency log. A couple more jq incantations, and we can see a lot more JSON here. But this is the exact provenance. The provenance contains information about how the image was built, and it's captured by the build system. This is an automatic build system using Tekton. It captures things like the inputs to the build and the exact containers that ran, in order. So from those inputs, which were the source code, to the exact containers that ran with their digests, the commands that ran inside of those containers, all the way up to the final images that got built. We can see the commit that it was started at in the distroless repository. We can see the container that ran. We can see the entrypoints. We can trace this all the way back to that commit itself. But why stop there? We can actually start to look up signatures on the commits. Do people sign their commits? How many people sign commits? Exactly, not too many, because it's really hard to do, and it's hard to look up the signatures and actually do anything with them.
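The underlying mechanics are simpler than the tooling makes them look: a commit is identified by its hash, so signing a commit can be as little as producing a detached signature over that hex string. A minimal sketch with openssl, using a locally generated throwaway key and a placeholder commit hash:

```shell
# Generate a throwaway ECDSA (P-256) key pair.
openssl ecparam -genkey -name prime256v1 -noout -out commit-priv.pem
openssl ec -in commit-priv.pem -pubout -out commit-pub.pem

# The "commit" is just its hex digest as a string (placeholder value here).
printf '0123456789abcdef0123456789abcdef01234567' > commit.txt

# Detached signature over the commit string, then verification.
openssl dgst -sha256 -sign commit-priv.pem -out commit.sig commit.txt
openssl dgst -sha256 -verify commit-pub.pem -signature commit.sig commit.txt
```

In the flow described next, that signature, together with a certificate binding the key to an identity, is what gets uploaded to the transparency log rather than stored inside git.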
So there are some cool demos and hacks that we've done with Sigstore where you can actually sign commits, and instead of putting the signatures in the git repository, you can put them in the transparency log as well, because a commit is just a long string. So I signed these commits in the distroless repository and in the hello world repository, and you can look up signatures for a commit in our transparency log too. So we've got that here. Let's get the information for that entry. We should see my email address in here as part of it. We've got a certificate; we're going to pipe that through openssl, and we should see the subject information. Awesome. That is my Gmail address here, and this is the rest of the certificate. This was issued by the certificate authority that we operate for free as part of the Sigstore project. So my personal account signed the commit in the distroless repository, and we verified that backwards all the way from the code in the base image to exactly the systems it was built on, all the way up to the application image and the exact system that was built on. These are systems in different GitHub organizations. One was using GitHub Actions; one was using Tekton. So we've traced these across build systems, across organizations, from source code all the way to running container. Cool. I'm going to hand it back over to Bob. Awesome, thanks, Dan. So as Dan mentioned, there are many moving pieces involved here. I'm going to do a deep dive into three of those components. The first one's called Fulcio. This is the certificate authority, and it serves a couple of different roles. But number one, it is the CA that you're going to go to to get a code signing certificate, and it's going to have that identity information that Dan just showed off with his Gmail address embedded in that certificate.
So rather than just having the bare public key to verify the signature, you're actually going to have a certificate that contains extra information about the identity: who was the signer, and who actually vouched for that signer. In the case of his email address, it was actually Google. But we also do integrations with SPIFFE and SPIRE, so we can have a workload identity recorded in the certificate as well. Obviously with OIDC, which is built on top of OAuth, there's a multi-legged flow that happens behind the scenes. But in essence, Fulcio is issuing certificates, and it actually has a transparency log that runs as part of its deployment as well. You can think of that as providing key transparency: to say, hey, at this point in time, when this certificate was granted, we had a valid identity token, which was vouched for by a particular provider, and all of that is now stored in an immutable ledger. So we can walk that back at any point in time to verify the validity of the signature and the validity of all the timestamps. All of that can be put together, and Fulcio acts as the authority that ties it all together. Now, we've got two different modes. If anybody here has ever tried to manage their own PGP keys, it's not the easiest thing in the world, and I think, frankly, that's one of the biggest inhibitors to people actually signing software: key management is just too darn hard. So one of the things the team has been working on is something we like to call keyless mode. Now, obviously, there are still keys involved, because we're doing crypto, but think of these more as ephemeral keys. If I have the ledger that stores all this information, and I can walk that back and prove that these things existed at a point in time, I actually don't need to store the private key forever.
I can just throw it away and treat it as an ephemeral object. So Fulcio operates in both modes. If you've got your own keys and you want to use them, great, knock yourself out. If you want to go the keyless route, we can support that as well. As Dan showed, the email address and the issuer are actually stored as X.509 extensions in the certificate. So if you've ever gone through the ASN.1 structure and looked through all that stuff, they're actually in there. And again, this root of trust ultimately comes back to understanding when something was signed, who's attesting to the identity, and tying that all together. We can actually walk through that, and part of what I'll show you in my demo is how we walk through and look at these individual fields. From an architectural perspective, we can rely on public, open identity providers. If you have the desire to run your own instance of Fulcio and you want to use a corporate OIDC provider, that's totally possible as well. For the public instance, we're not going to just federate in any OIDC provider that might be out there in the world; we're going to keep that list fairly narrow. But overall, this architecture allows us to bind identity without having to have a key-signing ceremony or, you know, a little key-signing party where you build your web of trust that way. We're relying on these primitives that already exist on the internet to make that happen. The second component is the transparency log, which we call Project Rekor. As Dan mentioned, this is based on the Trillian project from Google, and we've developed what the Trillian developers would call a personality. It's a REST API that sits on top of that transparency log, with a very similar API structure to what the certificate transparency RFC uses. But in essence, this is an immutable ledger.
So I can only append new entries to the log, and because it's based on a Merkle hash tree, I can actually walk the hash chain all the way back to the root, so if an entry exists in the log, I have cryptographic proof that nobody has altered the log, given the state of the tree at any point in time. One thing to note: while we are signing artifacts, signing containers or blobs, the transparency log does not actually store the blob or the container itself. It only stores the digest of what was signed, the signature itself, and the public key that can be used to verify that signature. And Rekor actually does signature verification before ever making an entry in the log, so things that are in the log are proven to have had valid signatures at the time they were written. I said we don't store content; there are two exceptions to that rule. Number one, Rekor also acts as a timestamp authority, because again, part of unwinding this whole root of trust is, just saying "I have a signed artifact" is great, but if I don't know when it was signed and who signed it, that's not enough information to make a good decision. So we needed that capability. There are certainly other timestamp authorities out there that you can use, but Rekor acts as one as well, and we store the entire timestamp object in the log. The other exception is provenance metadata. So when Dan showed the long JSON document of output from Tekton Chains, that actually is persisted in the log as well. We're not trying to serve as a new content store for the internet, where anything that's signed is downloaded from the log; that's not the intent here at all. Think of it only as a data store for information about the signing and provenance.
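To make the "walk the hash chain back to the root" idea concrete, here's a toy four-entry Merkle tree in shell. Real Trillian/Rekor hashing differs in important ways (RFC 6962 domain-separation prefixes, inclusion and consistency proofs), so this only illustrates why tampering with any entry changes the root:

```shell
# Hash helper: sha256 of a string, hex output.
h() { printf '%s' "$1" | sha256sum | awk '{print $1}'; }

# Leaves: hashes of four log entries.
l0=$(h "entry-0"); l1=$(h "entry-1"); l2=$(h "entry-2"); l3=$(h "entry-3")

# Interior nodes and root: hash of concatenated children.
n01=$(h "${l0}${l1}"); n23=$(h "${l2}${l3}")
root=$(h "${n01}${n23}")
echo "root: ${root}"

# Alter one entry and recompute: the root no longer matches,
# which is exactly the kind of change a log monitor would catch.
l1=$(h "entry-1-tampered")
tampered=$(h "$(h "${l0}${l1}")${n23}")
echo "tampered root: ${tampered}"
```

A verifier only needs the root plus a logarithmic-size path of sibling hashes to prove a given entry is in the tree, which is what keeps inclusion proofs cheap even for logs with millions of entries.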
Another thing that happens with certificate transparency all the time is that there are entities that monitor these logs, that will sit there and do the cryptographic proofs and walk the Merkle tree to make sure nothing's actually been changed. We expose that full information through our API set, and we actually encourage people to develop monitors against our public logs. Again, the aim is to not just have a central source of truth that, you know, only a handful of developers will ultimately maintain control of. This needs to be open and transparent in order for people to trust it, so we need monitors to play that role. We have Purdue University, a shout-out to Professor Santiago there; he's had his team working on building a monitor, and we're encouraging others to build them as well. And then finally, Dan showed off the verify capability of cosign, but think of cosign as a Swiss Army knife that allows you to do many things, primarily with containers, though it also has some support for blobs as well. Behind the scenes, this interacts with key management providers: if you've used KMS from AWS or from Google, cosign can talk to their APIs and use those signing systems, in addition to having native RSA or elliptic curve keys on your desktop; that works as well. And it's also the entity that's contacting and communicating with the OCI registry to actually pull down the container, verify the digest, compute the signature, and go back and store that information in an artifact in the registry as well. So all of these systems are talking to one another, but one point I want you to take away is that this is fundamentally modular. Again, if you're using AWS, you can use the key management service from there; if you're using Google, same thing. These pieces are very interchangeable, very modular.
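One concrete piece of that registry interaction: cosign (as of its 1.x releases) stores a signature in the same registry as the image, under a tag derived from the image digest, so verification can find it with no side channel. The digest below is a placeholder:

```shell
# cosign 1.x tag convention: "sha256:<hex>" becomes tag "sha256-<hex>.sig"
digest="sha256:0123456789abcdef"                      # placeholder digest
sig_tag="$(printf '%s' "${digest}" | tr ':' '-').sig"
echo "${sig_tag}"    # sha256-0123456789abcdef.sig
```

`cosign verify` resolves the image to its digest, fetches that `.sig` tag from the registry, and checks the signature against the supplied key or certificate.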
So again, we're not trying to solve this so that you have to use only our tool and nothing else; we realize that's probably not the most effective path forward. The last thing cosign does, once it signs the container in this case, is call out to Rekor through the REST API, make the entry in the log, and return that back as part of the artifact as well. For verifying artifacts in a transparency log, there are really two ways to do it. One is to query that log in real time, but that's not always practical. So, very similar to what's done with SSL certificates, we can create what we call a signed entry timestamp. If you're familiar with certificate stapling, it's a similar process: this is essentially what can be tacked on to have a verification, a cryptographic proof, that hey, this CA issued this certificate at this particular point in time, and I can walk that all the way back without having to make several round trips to a certificate authority as you open your browser. We do the same exact thing in cosign. We have that artifact that you can put alongside an artifact in a repository, and you can walk that all offline and still have confidence that the entire chain of trust has not been broken. So, another quick demo here. I know, again, another spaghetti diagram with lots of moving parts, but I do want to walk through the steps of what this really looks like if you're not using our nice CLI. In many ways we're standing on the shoulders of giants, of people that have worked on SSL and many other technologies in the past, and trying to tie this together to make this process as simple and as easy as we can. So what I'll do is start by just creating an ephemeral key, as I mentioned, this kind of keyless mode, and then we'll walk through it: create a random blob, sign that blob, push that out to the transparency log.
We'll also generate a timestamp using Rekor, pull all of that together, and have that collection of artifacts that you could then store in an OCI registry, if you have a container, or just put out for folks to download, so they'd be able to verify that signature going forward. We'll switch over here. All right, so the first thing I'm going to do is generate an ephemeral key pair; I'll just use the ECDSA algorithm. Good practice is to not show your private key to the public internet, so I won't actually echo that out here, but if you wanted to, you could see it's there. I will show you the public key, because obviously without that you're not going to be able to verify the certificate, or verify the signature rather. So now we have this key pair. That's great, but it has nothing to do with my identity; it's just random bits, off of prime numbers, that have been written to the disk. Now I've got to have that linkage to who I am as an individual, to be able to tie identity into this overall flow. So what I'm going to do is initiate an OpenID flow to get an identity token from a provider, in this case Google, and basically pull that down and store it on my local disk. Our tooling, cosign and many of our other tools, actually automates this all behind the scenes; I'm giving you a what-actually-happens-under-the-covers look here. So again, it's not this complicated if you use cosign, but I wanted to show you how all this ties together. Shout-out to the folks at Smallstep: they've written a really great CLI for walking through these sorts of flows, called step. This command is literally just going to reach out to the OIDC provider that we're running, a federated provider. Also a shout-out to the Dex community: we're running an instance of Dex to federate in Google, GitHub, and Microsoft on our public instance. So what we'll do here is start that OpenID flow, and when I hit enter
here, my browser is going to pop up and ask me to log in. If you've ever seen one of those "log into this website using Google, Facebook, whatever" screens, the same exact thing is happening under the covers here. In this case I'll choose to log in with Google. I'm logged in with two different accounts here, one from Red Hat, where I work, and my own personal Gmail; in this case I'll sign in with my Red Hat account. I click on that, it says cool, successful, come back to the command line. Let's actually look at what is inside of an OpenID token. This is a JSON Web Token, or JWT ("jot") as people informally refer to it. The contents of a JWT is a JSON object that's signed. You've got some information in the header about the algorithm used to sign the content, and you've got a payload. Inside this payload you've got the actual URL of the issuer that generated this token; in this case, since we're using a federated identity provider, the Sigstore OAuth server is the issuer, but again, this came from Google, since it was a federated scenario. And it's got my email address inside of it here as well. So now I've got an entity, Google via Sigstore, that has attested that Bob Callaway has presented valid authentication tokens to it, and it was willing to generate this token, which lives for a very short amount of time, that says, hey, this should be used for any application that comes back to the client ID of Sigstore. Now obviously, if you're not running Sigstore, then it's worthless to you, but since we are running Sigstore, we're able to use this token to call out to Fulcio and get a code signing certificate. So the next thing I'm going to do is extract my email address from this programmatically, even though I could probably just type it in more easily: extract it using jq and a couple of other bash tricks, and put that into a file. And now we'll start the flow of calling out to the
Fulcio component. One thing I need to do is prove to Fulcio that I have possession of this public and private key pair that I generated earlier. The way I'm going to prove that to Fulcio is by signing this email address, that actual string, using that same private key. To do this, I'm just going to use openssl with a SHA-256 digest and my private key, and I'll generate a signature and put it into email.sig. Since it's elliptic curve, that takes no time at all, and I'm ready to send an API request to Fulcio. All of our stuff is specified with OpenAPI, so our specs are totally open, and we have Swagger interfaces for all of this as well. If you're curious to see what the interface actually looks like at an HTTP level, all of that's documented on our GitHub site. But since we're doing this through curl and not nice UIs, we're now going to type a very long curl command with inline JSON, setting content-type and accept headers. I would highly recommend you don't do this if you don't have to. We'll keep going, and now I'm finally done, and I can type perfectly, if you guys haven't noticed, no mistakes as we go forward here. So now I've called out to Fulcio, everything's good to go, let's look at the certificate that came back. You know what, I talked too long; the token wasn't valid long enough. The joys of internet demos. We can recreate this very quickly; it's amazing how good I've gotten at this. All right, now let's look at the certificate coming back. So now I see the actual code signing certificate chain coming back from Fulcio, very similar to what Dan showed coming out of the log, because again, this is the public key material that you're going to pass along to people: not just the actual EC bits, but the X.509 certificate that has the identity information in it as well. So we have an issuer here of Sigstore, and we have my Red Hat email address, attested to by Google, listed
in the certificate so the next thing I'm going to do really quick is I'm just going to make sure that I can actually verify that signed string that's my email address using this code signing certificate because if I can't then something's wrong because the public key should be able to be used that comes back and says it's verified okay so we're all good there and just to be triple sure I'll extract the public key bits out of the code signing certificate and just do a quick diff and the echo does not return which means those binary bits are totally equivalent so now I've got a code signing certificate from full CO I've still got my key pair sitting on my laptop what do I need to do next well I need something to sign so let's generate something to sign we'll just throw 128 random bits into a file right for this case we'll just keep it simple and short and sweet so next thing we'll do is we'll actually call it to open SSL ask it to actually sign it using our private key and we'll store that signature in a detached file so we're not using pkcs bundles we'll just have this totally separate into a separate file that's super quick and just to make sure that everything went well we'll ask it to verify using our public key that that was all good and it says yes we're all good next thing we'll do is we'll generate a time stamp and so open SSL has a built-in utility to do this so what we'll do is we'll use the actual detached signature and we'll test to hey at this particular point in time this signature was created so we'll pass that data in and we will create a binary artifact that needs to get sent to the record time stamp service we'll actually then send that curl over to that REST API just putting that content in the payload it enter that takes a second to come back and the next thing we'll do is we'll need to pull down the full certificate chain to be able to verify the signature on the time stamp itself so we'll again call it over the REST API that works pretty quick 
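The local OpenSSL steps described so far can be sketched roughly as follows. This is a hand-written reconstruction, not the demo's actual gist: the file names, the email address, and the exact flags are illustrative, and the curl calls to Fulcio and Rekor are only mentioned in comments since they need a live OIDC token and service endpoint.

```shell
set -e
EMAIL="someone@example.com"   # stand-in for the OIDC-attested address

# 1. Generate an ECDSA P-256 key pair.
openssl ecparam -genkey -name prime256v1 -noout -out ec_private.pem
openssl ec -in ec_private.pem -pubout -out ec_public.pem

# 2. Proof of possession: sign the raw email string with the private key.
printf '%s' "$EMAIL" > email.txt
openssl dgst -sha256 -sign ec_private.pem -out email.sig email.txt
# (email.sig plus the OIDC token would then go in the curl request to Fulcio.)

# 3. Something to sign: 128 random bits.
head -c 16 /dev/urandom > artifact.bin

# 4. Detached signature over the artifact, then a local sanity-check verify.
openssl dgst -sha256 -sign ec_private.pem -out artifact.sig artifact.bin
openssl dgst -sha256 -verify ec_public.pem -signature artifact.sig artifact.bin

# 5. Build an RFC 3161 timestamp query over the detached signature; the
#    demo POSTs this binary blob to the Rekor timestamp endpoint.
openssl ts -query -data artifact.sig -sha256 -no_nonce -cert -out request.tsq
```

The verify step in (4) prints `Verified OK` on success, which is the "it says yes, we're all good" moment in the demo.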
And the next thing we can do is use the OpenSSL tool to verify the timestamp signature. So we're most of the way through my demo, and you're probably all thinking, man, that's a lot of commands, right? This is part of why nobody's signing anything: it's very complicated, the sequence is easy to mess up, and if you don't have my typing skills it's easy to make mistakes. So part of what I'm echoing as I go through the rest is that what we're really trying to do here is simplify this and make it approachable to developers of any skill set. Ideally, if we're doing this the right way, all of this is hidden behind the scenes, I don't have to think about supply chain provenance anymore, and hopefully we don't have these attacks anymore because we've built this capability into our infrastructure, and we can go back to writing code, which is what we all like to do anyway.

So we'll verify this timestamp really fast, and OpenSSL says it's good. We'll also quickly check that the timestamp we got back from Rekor falls inside the validity period of the code-signing certificate. If I scroll back up here, I have a 20-minute window, at least that's how we have it coded up today, between 19:28 and 19:48, and I signed this at 19:30. So for people who later want to check whether this person had possession of this key during the validity period of the certificate, I now have an artifact that can prove that cryptographically as well.

Now I've got all these pieces, and I need to put them in the transparency log so I can refer back to them in the future. As I mentioned, Rekor has a REST API that's well documented, and again, writing JSON on the command line is really painful, so we'll just do this real quick. In the JSON content here, and I'm moving really fast for the sake of time, we're putting in the signature and the public key. Rekor needs to verify the signature itself, so you do have to send the artifact you're signing to Rekor so it can check the signature against it, but we do not store the artifact in the log at all.

So I call out to Rekor, and then let's actually look at the output that comes back. With a little bit of jq trickery I can look at the actual object: it's got a unique identifier, which is the typical tree leaf hash for those of you who are Merkle fans, and then we have the actual content that's stored here. We've got a version and a kind, so if you've ever seen the typical spec for a CRD in Kubernetes, this looks and feels much like everything else being done these days in cloud native. We've got the hash, the signature, the key, and then finally, as I mentioned before, this artifact that we can kind of staple on for offline verification, the entry timestamp; that's this base64-encoded stuff.

Cool, now it's in the transparency log and we're all good. The last thing I'm going to show is the inclusion proof. I'm not going to sit here and compute concatenations of SHA hashes for you over and over and over; I'm going to cheat and use our CLI, which renders it in a slightly more readable fashion. We've got a root hash starting with 4c6, and I can walk from the tree leaf hash all the way back up to the root of the tree, and if you believe what's being output here on the command line, we get back to the root of 4c6. So now I've got something, and I've verified that it's actually been in the log. And last but not least, I can delete my private key, because I don't need it anymore. Key management can't get much better than that. I know that was a ton, and we'll post the gists for all of this in a blog post we'll publish after the event.
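The shape of the log entry and the validity-window check can be sketched like this. The JSON below is a hand-written mock of a rekord-style entry, not a real API response, and the self-signed certificate stands in for the short-lived Fulcio certificate so the whole thing runs offline; all names and values are placeholders.

```shell
set -e
# A mock with the fields called out above: kind, version, hash,
# signature, public key. Values are placeholders, not real data.
cat > entry.json <<'EOF'
{
  "apiVersion": "0.0.1",
  "kind": "rekord",
  "spec": {
    "data": { "hash": { "algorithm": "sha256", "value": "abc123" } },
    "signature": {
      "content": "BASE64_SIGNATURE",
      "publicKey": { "content": "BASE64_PUBLIC_KEY" }
    }
  }
}
EOF

# The "jq trickery": pull out the interesting fields.
jq -r '.kind, .spec.data.hash.value' entry.json

# Validity-window check: generate a throwaway self-signed EC cert as a
# stand-in for the Fulcio cert, then compare its window against "now".
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
  -keyout /dev/null -out cert.pem -days 1 -nodes -subj "/CN=demo"

start=$(date -d "$(openssl x509 -in cert.pem -noout -startdate | cut -d= -f2)" +%s)
end=$(date -d "$(openssl x509 -in cert.pem -noout -enddate | cut -d= -f2)" +%s)
now=$(date +%s)
if [ "$now" -ge "$start" ] && [ "$now" -le "$end" ]; then
  echo "signing time is inside the certificate validity window"
fi
```

The real check compares the RFC 3161 timestamp (rather than `date +%s`) against the notBefore/notAfter window, but the arithmetic is the same: two `date` conversions and two integer comparisons. Note that `date -d` is GNU date syntax.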
If you actually want to manually run through those commands, you're more than welcome to. Then one quick wrap-up chart, because I know we're about out of time, on what's next in the Sigstore community. As Dan showed you with the stars and the commit track, we have an amazing community generating PRs left and right, but if I were to break down our roadmap, it falls into three areas.

First, signing stuff: we've got to sign more things in order for the supply chain to be secure. We're going to continue to work in the container ecosystem, and with the places you put containers, right? You put them in Kubernetes, in Docker, in Podman, and we've got to make sure that verification logic is built into those systems. We've also got to realize, and this may be a little bit of a bad thing to say at KubeCon, that containers aren't everything. There's actually stuff inside the container, like node packages and JARs and other things, that we need to sign as well, so we're working with those communities and distribution systems on signing and verification.

Second, policy bundles: if I've got a ton of signed stuff, that's great, but there's still a big question. Who do you trust? Who do you really trust to put workloads into production? That's where the role of policy ultimately comes in, so we're working with a variety of upstream communities, shout-out to the OPA folks and the Kyverno folks and many others I'm forgetting, to prove out that we have that tie back to policy.

And finally, we are running this as a public-good service. As Dan alluded to, we want to be the Let's Encrypt for code signing, and part of that is standing up this infrastructure so that people can use it without cost. We're trying to get these services robust, hardened, and audited, so we can be confident that what we're putting into production is something the community can ultimately trust, and we're standing up additional monitors so that people can verify that and keep us honest as we go forward.

With that, I know we're right at time, but I really appreciate everybody's time and attention today. To learn more about us, it's sigstore.dev, and the same name on github.com; you can find everything that's going on there. We have Slack channels, mailing lists, the whole nine yards. Thanks, everybody!
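As a footnote for anyone replaying the inclusion-proof step from the promised gists: the hash concatenation glossed over in the demo is RFC 6962-style Merkle hashing, where an interior node is the SHA-256 of a 0x01 prefix byte followed by the two child hashes. A toy sketch, with entirely made-up hash values (it assumes `xxd` and `openssl` are available):

```shell
set -e
# RFC 6962 interior node: parent = SHA-256(0x01 || left || right).
# The leaf hashes below are arbitrary hex, purely for illustration.
hash_pair() {
  { printf '\x01'; printf '%s%s' "$1" "$2" | xxd -r -p; } \
    | openssl dgst -sha256 | awk '{print $NF}'
}

left="1111111111111111111111111111111111111111111111111111111111111111"
right="2222222222222222222222222222222222222222222222222222222222222222"

parent=$(hash_pair "$left" "$right")
echo "parent: $parent"

# Walking an inclusion proof is just repeating this, choosing left/right
# order at each tree level, until you reach the published root hash
# (the "4c6..." value in the demo).
root=$(hash_pair "$parent" "$right")
echo "root:   $root"
```

This is what the Rekor CLI is doing under the hood when it "renders the proof in a more readable fashion": hashing leaf-to-root and checking the result against the signed root.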