Next up, we have Bob Callaway, and he will talk about using Sigstore to secure your software supply chain. Bob, the floor is yours. Awesome. Thank you. Let me share my screen. All right. Good morning, good afternoon, good evening. Appreciate everyone listening in today. I wanted to talk to you a little bit about the Sigstore project, which is a new and really quickly growing project under the Open Source Security Foundation (OpenSSF). I'm on the Sigstore project's technical advisory committee, a TAC member, as well as a technical manager at Google focusing on supply chain issues. Also a former Red Hatter, so good to see a lot of my former colleagues out there today. So, if you have done any reading around the topic of software supply chain over the last year, you've seen numerous attacks. We've seen governments start to get engaged to say, what can we do to ultimately provide greater levels of assurance of what software is being downloaded and consumed, and ultimately, how do we start to fix this problem? We've seen this huge explosion in the number of open source packages that are available for use, as well as in the growth of communities. But commensurate with that, we've seen a massive increase in the number of supply chain attacks that are out there. So, Sonatype did a report on the overall state of the software supply chain here. These are the graphs that go up and to the right that you actually don't want to see go up and to the right. And this just looks at a very small set of attacks out there, either dependency confusion or typosquatting, where maybe you want to download a JSON parser and instead of an S, someone will actually substitute a different Unicode character that looks like an S but isn't, and trick you into downloading an impostor package that may have some malicious code in it. So, those sorts of attacks are becoming very, very prevalent.
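As a hedged illustration of the homoglyph flavor of typosquatting described above (the package names here are made up purely for the example), two names can render almost identically while being entirely different strings:

```python
import unicodedata

# Hypothetical package names, for illustration only.
legit = "json-parser"            # ASCII throughout
lookalike = "js\u043en-parser"   # U+043E CYRILLIC SMALL LETTER O in place of 'o'

# They render almost identically in most fonts...
print(legit, lookalike)

# ...but are different strings, so a registry treats them as distinct packages.
assert legit != lookalike

# NFKC normalization does not fold Cyrillic into Latin, so naive
# normalization alone is not a defense against this class of attack.
assert unicodedata.normalize("NFKC", lookalike) != legit
```

This is why registries and resolvers increasingly apply confusable-character checks rather than relying on visual inspection.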
And they're just one example of where we as a broader community need to spend more time and focus to provide a higher standard of security for folks. And if you've been through a corporate yearly training where they talk about information security, you've probably seen that same screen we all have of the USB drive laying on the ground and saying, well, do you pick it up and do you plug this into your system to figure out what's there? And everyone universally, hopefully, should go, no, of course you don't pick up that USB stick and plug it in. But if we really take a step back and think about it, when you go to pull an image off of Docker Hub or off of Quay, or you do an npm install to pull down a binary, or you've copied and pasted something off of Stack Overflow, you're essentially doing that exact same process. You're going out to an untrusted source, you're pulling in random content, putting it on your system, and normally you're starting to run that right away. So while, again, building up these different package managers and registries has been awesome to drive a greater pace of innovation, it's maybe not the greatest thing from an overall security perspective. So myself, Luke Hinds from the CTO office at Red Hat, as well as Dan Lorenc, started the Sigstore project a little over a year and a half ago, really with the premise of, hey, what can we do to start to address some of these issues? And the thing that jumped out to us right away was, well, it's not like we don't actually have the tools to start to validate some of this, apply policy, sign and verify content. It's just that nobody's using them. And as we started to really dig into that, we realized that there were some significant challenges in terms of user experience, key management, and identity management that really warranted some real focus and energy for us to go and look at.
So what we've done with the Sigstore project is we've built out a set of sub-projects to address different facets of this overall problem. We then engaged the Linux Foundation, knowing that, hey, it's great if one company wants to go and try to solve this space, but really for broader adoption to take place, we need to see this happen in an open community, in an open, neutral consortium. So we engaged the Linux Foundation and ultimately the OpenSSF to be kind of the home for this work. In addition to having a set of open source projects, we're actually running free, totally publicly available transparency log and certificate authority services. They're community operated, so if you just sat in on the keynote around Operate First, it's that same sort of mentality, as we as a community are actually standing these up and running these. And the goal in all these projects is, again, not to try to be the single answer for solving every single scenario here. What we wanted to do was really enable a more modular and interoperable approach to bring together a lot of the existing tooling and open standards to really start to address the problem of how do we actually sign content? How do we actually verify it in a meaningful way, based on the learnings that we've had over the past, let's say, 10 years within the WebPKI space? So the simplest analogy I can give you, in terms of, okay, that all sounds great, Bob, but what is Sigstore? If you're not familiar with Let's Encrypt, they were launched several years ago to really try to get 100% of Web traffic to be encrypted using TLS and SSL. And so they wanted to remove as many barriers as possible to get people actually generating certificates, putting them onto websites, and getting that traffic encrypted.
We at Sigstore want to be for software distribution, when it comes to signing and provenance, what Let's Encrypt was to the SSL and TLS space: to try to make these practices as ubiquitous as possible, to remove as much friction as possible, and ultimately to facilitate secure download and verification of these assets, and then to, again, bubble up some of these decisions around policy and trust in a more meaningful and visible way. So when I say, okay, cool, let's go sign all the things, well, all right, fine. What are we really talking about here? So I broke it down into four different areas. The first is starting at kind of the bottom of a layer cake architecture diagram. You've always typically got infrastructure at the bottom. And so there's been a fair bit of work in the CNCF and a handful of other projects to really start to generate attestations about what is the identity of this computer and what is the identity of the workload that's ultimately running. And so we need to generate those attestations and then cryptographically sign them and verify them. The next area is really around the build system, right? It's always fun to write code, but we need to have attestations around who actually was the author of that code. Who did the plus-ones and the LGTMs in the code review that said, yep, this is sufficient, and ultimately pushed that through to generate a release? And as well, on that point, what system actually generated that release? Is that a desktop that's sitting on the floor of your room? Is that a cloud-hosted CI service that's out there? And again, how do we record that information and put it out there, such that maybe I'll only choose to run software that was generated off of a build system that has an explicit trust stance and posture according to my corporate policies. The next one is really around deployment. How do I actually sign not just the actual binaries that are getting run, but the configs themselves?
Obviously, with a huge push around infrastructure as code, we want to be able to sign those artifacts and make sure that everything that we're putting into production is verified and has a sufficient level of integrity underneath it. And then finally, we've got to wrap all of these signed documents up and put them into a policy construct so that we can actually start making explicit decisions about what is in my environment and what do I trust. Recognizing that these changes, this utopia that I'm trying to describe around everything being signed and trusted and having the full provenance of everything, it's going to take some time for us to get there. So as we go through that journey, we want to make sure that we're able to understand what is the risk in our environment, and then as we improve that over time, tighten our controls if we so desire to make sure that we're improving the overall posture of our environment. So again, this project was really started standing on the shoulders of giants. There was a lot of work done going back seven, eight years ago around the notion of a transparency log. And I'm going to go into a lot more detail in terms of what that ultimately means, but that was published in an ACM paper back in 2014. Google open sourced an implementation of this in 2016. And ultimately, we started to see some of these raw materials bubble up around, how do I actually get a public log that can be independently verified and would be append-only, meaning I can't go in and make changes to any record that has already been committed? That started to be out there for TLS certificates over the web. In 2017, the Firefox team started looking at, well, could we actually pivot this for other binaries, and started to reuse some of that certificate transparency log work that was built for TLS certs on the web, which was an interesting kind of proof of concept, as I would call it.
A lot of great work and a lot of great thought, but it was kind of adapted to the limitations of CT logs. And so in 2019, 2020, we started seeing, well, okay, what if we took a step back and actually re-envisioned this more broadly? Brandon Philips did some work on a project called Argett. All of that really served as the inspiration for what the Sigstore community has now built. We'll go into the three major projects that are under the Sigstore banner. Rekor, which is a signature and attestation transparency log, launched in the middle of 2020. We then launched a certificate authority called Fulcio shortly thereafter. And then in parallel, we also started designing a new tool to sign, well, initially it started with signing container images, but it has since broadened out not only into generic blobs and artifacts, but also starting to look at the broader ecosystem. So Cosign is that signing and verification tool that we've ultimately built. Now, we went 1.0 on the Cosign release in the middle of last year, but we've been working to harden both the transparency log as well as the CA service up to a quote-unquote GA standard. And we're hoping to do that by the end of Q1 this year. A real quick blurb on the community. It has been amazing to see this go from three individuals to now seeing Slack channels with over a thousand people in them. We've had 236 individuals commit over 3,000 different patches. You can see the rest of the stats here on the chart, but ultimately we've seen kind of that up-and-to-the-right growth that I talked about before. This is the type of up-and-to-the-right growth that you want to see on charts. And so it's really been amazing and quite humbling to be part of this overall community that's been jumping in and really putting in the energy to solve some of these fundamental problems we have. So just to give you a sense, this is a very active, very welcoming community.
If anyone here is interested to learn more, I hope you'll join us. Now, we've built some of this technology. We've prototyped it. The next question might be, okay, cool, interesting, but who's using it? So on the left side of the chart here, there's a couple of big names that you might recognize, like Arch Linux or Kubernetes, GitHub. We're starting to see some of these big names really take a step back and look at how they could leverage the projects under the Sigstore umbrella, but also how they might start to leverage some of these public services that we're running as well. So Kubernetes has recently passed a KEP to look at using the Cosign tooling to sign their release images, as well as start to include SBOMs. GitHub now has some built-in support in their kind of starter workflows to show you how you can actually sign artifacts using Cosign. The RubyGems community is looking at publishing an RFC literally today to look at actually revamping how RubyGems are signed and published into their package management systems. And we're having very similar conversations with other folks, like the Python community, the Node community, and various folks in the Java ecosystem as well. So we're seeing a ton of excitement, and again, we literally want to get to the point where, for all of those things in the infrastructure, build, deployment, and policy space, we're doing a really robust job of covering all of our bases. So, getting a little technical, I wanted to kind of quickly call out at least a high-level diagram of what the architecture of these three moving parts looks like. As I mentioned, there's Cosign, which is the tooling, which is at the top of this diagram. Think of this, and I'll show it off in a demo here in a little bit, as the tooling where I say, hey, I want to go sign a container, I want to sign a zip file or a random blob on my desktop.
How do I ultimately go through that signing process? If you've ever used certain systems named PGP, they're very powerful. They offer quite a few options, but for mere mortals that maybe don't have a deep security background or don't understand the nuances of how to actually create and manage cryptographic keys safely, they're pretty daunting, and people, frankly, just look at the tooling and go, yeah, I know I should do that, but I'm not doing that. So Cosign really tries to address making that user experience much more simple and straightforward. And in some cases, we can actually totally remove the key management responsibility from the end developer, which is a pretty impressive thing that I'll show off here in a bit. Once you've used our Cosign tooling to sign an artifact, that tooling will automatically publish the results into Rekor, which is our transparency log. And so Rekor is a public REST endpoint. You can go in and query not only what entries are in the log, but you can actually go back and verify the overall state of the log, because it's built on a Merkle tree, where the hash values of individual leaves are hashed together and aggregated up to the root, so that if any bit anywhere in the tree were to change, you'd be able to see that right away and know that something has actually structurally changed within the overall log. So it allows for independent monitors to ultimately call those APIs and look at the state of the log, and we publish everything in public cloud storage buckets so that folks can go and actually monitor and keep us honest. Again, we're community operated. There's no profit motive behind any of this, so we want to make sure that we're not just standing something up that's useful, but something that ultimately can be audited and can be trusted. And then finally, moving clockwise over to the left.
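The Merkle-tree property described here can be sketched in a few lines. This is a simplified toy, not Rekor's actual implementation; real transparency logs use RFC 6962-style domain-separated hashing, but the tamper-evidence idea is the same:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash each leaf, then repeatedly hash pairs together up to a single root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

entries = [b"entry-1", b"entry-2", b"entry-3", b"entry-4"]
root = merkle_root(entries)

# Flipping any bit in any committed entry changes the root, so monitors that
# track the published root can detect tampering with existing records.
assert merkle_root([b"entry-1", b"entry-2", b"entry-3", b"entry-4"]) == root
assert merkle_root([b"entry-1", b"entry-X", b"entry-3", b"entry-4"]) != root
```

Because the root is tiny (32 bytes) and publicly republished, independent monitors only need to agree on that one value to agree on the entire history of the log.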
Fulcio, as I mentioned before, is our code signing certificate authority. One of the dimensions around PGP in the past is this notion of creating a web of trust. So if you've ever seen the notion of a key signing party, this is where folks will come together, show their identity documents to one another, and say, hey, I've met you, I've seen your ID, I trust you are who you say you are, I'm going to import your key into my keyring. And that way, as I go and look at artifacts that are out there, I can be assured that if you show up with some sort of signed artifact, you were actually the person that you say you were. Now, especially in COVID times, where it's really hard for us to get together and meet in person, that's not quite pragmatic in all scenarios. But in other scenarios that we'll talk about, what happens if the identity of the signer isn't a person? It's actually the build system, or it's actually the desktop system, that wants to generate an attestation. What we've done with Fulcio is, by leveraging the OpenID Connect standard, we're actually able to import an identity token generated from a person, or from an identity provider, or from a workload itself, and actually store that information inside of a code signing certificate. And so building in that more programmatic linkage between an identity statement from an identity provider and the code signing process starts to really expand the use cases that we can ultimately address with these sorts of scenarios. So those are the three pieces; that's how it ultimately starts to come together. But rather than sit here and drone on too much more on charts, I wanted to quickly show you a demo of this. So what I'm going to do is two different things before I jump to the terminal. The first is I'm going to build a very simple kind of Hello World app and ultimately sign it.
And I'm going to walk through kind of the browser-based login workflow of a developer that maybe has a Gmail account from somewhere and wants to use that to generate that identity token that I just mentioned. I'm going to commit that code. I'm going to push it up to GitHub. And while I'm showing you the demo, in the background, GitHub Actions is actually going to go and build a containerized version of that application. And it's going to publish it into the GitHub container registry. And in doing so, it's actually going to sign the image using its own workload identity, so there won't be any interaction from me, the developer, beyond just pushing my code up to the repo. And we're going to actually be able to go and look at the container registry and see the digital signature being published into GHCR. So with that, I'm going to switch over here. And just before folks look and go, man, Bob's an amazing typist: to make sure that I don't have any typos and I don't have to go back and forth copying and pasting, I'm using a little script called doitlive, which allows me to make sure that I type my commands totally cleanly. So this is live, it's running, so we'll hope the demo gods keep me in their good favor. But if you notice my amazing typing abilities with no errors, I am using a script to help with that. All right, so let's start this first demo. The first thing I'm going to do is just start off and clean my environment. I'm going to run a quick make target just to clean up everything so this goes well. And then I'm going to actually go into a directory, and let's open my simple hello-world Go file. So, a pretty typical hello world. I've got a simple print statement here. And what I'm going to do, just to prove that this is real, is change this string to "Welcome to DevConf." I'm going to save that file. And then what I want to do is ultimately build that file.
And just for reference, I'm going to take the checksum of the build artifact that is spit out by the Go compiler. So we see here that the SHA starts with 00aa, so it should be pretty easy to identify later. So I've made a change to my file. Now what I want to do is ultimately commit that, and you'll notice here that I've got a quick shell-out to get the current date and time. So when we go and look at GitHub, we'll be able to know that this came in at 9:09:20 Eastern time today. So I've committed that, I've now pushed it to the repo, and I've got a binary sitting here on my desktop. And in the background, like I said, that commit has already been pushed to GitHub. What I'm going to do now is use the Cosign binary. And you'll notice here that I have this experimental flag that we have turned on. And that's because, as I mentioned a couple slides ago, the Rekor services and the Fulcio services are still in a beta mode, and we want to make sure that that's very apparent to users, so we have a flag that's still set on our tooling. So what we're going to do here is use the Cosign tool to sign the actual executable, the actual binary itself. And in that signing process, we're actually going to take the code signing cert that comes back from the Fulcio service and store it in the local directory as well. So when I hit enter on this, what's going to happen is you're going to see my browser pop up. And if you've ever been to a website where you've tried to log in and it gives you the option of log in with Facebook, log in with Google, or log in with Twitter, it's the same general concept that we're using here. These are all the OAuth2 and OpenID standards. We're simply using those as published; we're not modifying any of those standards at all. And so we present a screen here to say log in to the Sigstore identity framework. Click log in with Google.
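The checksum step at the top of this demo is just a SHA-256 digest over the compiled binary, the same value a `sha256sum` call would print. A minimal sketch (the file contents here are a stand-in for the Go build artifact):

```python
import hashlib
import os
import tempfile

def sha256_file(path: str) -> str:
    """Stream the file through SHA-256 so large binaries don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for the compiled hello-world binary from the demo.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"pretend this is the compiled hello-world binary")

print(sha256_file(path))  # plays the same role as `sha256sum <binary>`
os.remove(path)
```

This digest is the value that later shows up in the transparency log entry, which is why it makes a handy fingerprint for spotting the artifact again.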
We're not capturing your personal information. We're not getting access to your Google Drive. We're not getting access to anything other than your email address. That is the only thing we actually capture in this flow. So in this case, I'll click on my Google login, and it looks like I took too long to sign that. So you can see that this is actually a real demo, because I got an error. So what I'm going to do is quickly rerun this and do another commit. And then we're going to run this process one more time here. And I'm not going to talk and take so long. I'm going to click through, and then I'll go back to my terminal here. So you'll see this is the actual code signing certificate that was generated. I didn't have to go and understand any crypto calls whatsoever. This just flowed right through. And I've now not only made an entry in the transparency log, but I also have the code signing cert. Let's crack that open. Again, this isn't something you necessarily have to do, but for those of you who are familiar with looking at SSL certificates, what I'm going to do here is just print what this ultimately looks like, quickly running through it. So sigstore is ultimately the certificate authority; it's the issuer of this cert. This is a public key that was generated in memory by our Cosign tooling. And then down here at the bottom, you can actually see we're storing both the identity issuer, which in this case was Google (that's who I logged in with when I clicked on the login with Sigstore), and also my email address. No other information whatsoever. And then this is ultimately signed by the certificate authority, and the signature is there at the bottom. So now I have this code signing certificate. I've got this signed artifact. Part of how all of this works is to put this into this immutable log, the signature transparency log that we have. So what I'm going to do now is query the log.
And I'm going to say, show me all the entries you have for the binary that I just compiled on my system. And so it's a simple Rekor CLI. Again, we're trying to keep the user experience here very simple. So: search for any entries you have in the log for this particular artifact. And in this case, we have one entry that we just put in by running the code signing tool. So what I'm going to do here is actually ask for that log entry. And what ultimately comes back here is some reasonably pretty-printed JSON to say, hey, there's a record in the log. It's at index 1,191,756, and it has this particular unique identifier. In the transparency log, all we are storing is the SHA sum of the artifact, not the binary itself. We're not trying to redistribute binary content. We only want what is minimally necessary to verify the signature, which in most cases is simply just the digest over the artifact. The public key is used to verify it, and then we have the signature itself. So what we're storing in the log is a record that says: at this point in time, Rekor was given the hash, or the digest; it was given the public key; and it was given a signature. We did a cryptographic verification to make sure that that checked out. And assuming that it did, we made an append-only entry into the log to say, here's the instance of this artifact at this particular point in time. Now you might say, okay, cool, you wrote down that something happened. What does that really give me?
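A hedged sketch of what such a log record conceptually holds, and of the first verification step a consumer performs. The field names here are loosely modeled on Rekor's hashed-record entry type, not the exact schema, and the key and signature values are placeholders (a real entry carries an actual public key and an ECDSA signature that Rekor verifies before appending):

```python
import hashlib

artifact = b"pretend this is the compiled binary"

# What the log conceptually stores: a digest, a public key, and a signature,
# never the artifact itself. The string values below are placeholders.
record = {
    "hash": {"algorithm": "sha256", "value": hashlib.sha256(artifact).hexdigest()},
    "publicKey": "<PEM-encoded key or code signing cert>",
    "signature": "<base64 signature over the artifact>",
}

def digest_matches(candidate: bytes, rec: dict) -> bool:
    """First verification step: does this artifact match the recorded digest?"""
    assert rec["hash"]["algorithm"] == "sha256"
    return hashlib.sha256(candidate).hexdigest() == rec["hash"]["value"]

assert digest_matches(artifact, record)
assert not digest_matches(b"some other binary", record)
```

The signature check against the recorded public key happens on top of this; the point of the sketch is that a digest plus signature plus key is all the log needs to be useful, with no binary content involved.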
So if you've ever heard of the term a split-brain attack or a split-view attack: we don't really have a single source of truth as to who signed that artifact and at what time it was ultimately signed, and what happens if somebody comes along and uploads a different binary with a different signature, and everything looks like it checks out, but you may not be seeing the same thing as what somebody else halfway around the world is ultimately seeing? If you're particularly targeted with a man-in-the-middle attack or other things, we want to make sure that we have a single source of truth to say what happened when. And so what the transparency log ultimately provides is a way for you to independently query something and to know, for this binary, it's only been signed once, and here's the email address of the identity that signed it; or you may see 700 signatures that are all tied together there. So again, this doesn't necessarily solve this problem, but what it does is it level-sets the information, so people can actually now start to query this and better understand how all of this is pulled together. So you may have noticed, as I said in the framing of the demo here, I committed the code and pushed it up. GitHub Actions has been running in the background, and it's been building a container. So what I'm going to do now is switch back over to my browser, and we're going to look at the state of GitHub Actions. And here you'll see that the last two workflow runs that have gone through by GitHub Actions were from this morning during my demo: the first one where my login timer expired, and this one, which should represent the current state of the repo. Let's go ahead and dig into this CI run here. It's a pretty simple CI run. It's just checking out the code and downloading the correct version of Go.
It's using a tool, or I should say a project, called ko, which is a cool open source project that makes the process of compiling Go code, putting it into a container based on distroless images, and then pushing it out to a container registry super simple and straightforward. So all of this I could have done manually, but this ko tool just makes it a little bit easier. I log into the container registry once I compile everything, and then I'm using the Cosign tool here to actually sign the image, push it to the container registry, and upload its signature. So this last call here is where you can see the actual Cosign command running against that container image. You can see here that an entry was made in the log for the actual container image itself, at a number ending in 768. So what I'm going to do here is come back over and query to verify the container image. And via the command line tooling, we can see we have the exact same entry ending in 768. This was actually signed not by Bob, but by GitHub itself. So you can see the issuer of the identity token actually was GitHub. That flowed back and forth between the Sigstore services and made an entry into the log showing a container with this particular digest value was put into the registry, and we have the information here from the log that's actually stored as well. An interesting note here: this signed entry timestamp can actually be verified offline. If you have the public key of the log and you choose to put that into a trust store, you can actually verify that timestamp and have a cryptographic assurance that it's in the log without having to query the log in real time. So you can use this in both online and offline cases.
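The signed entry timestamp is an actual signature from the log's key, which this stdlib-only sketch can't verify, but the closely related Merkle inclusion proof can be illustrated: given an entry, a handful of sibling hashes, and the published root, a client can check membership without downloading the whole log. This is a toy using plain concatenated SHA-256, not RFC 6962's exact domain-separated scheme:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, index: int, siblings: list[bytes], root: bytes) -> bool:
    """Recompute the path from a leaf up to the root using sibling hashes."""
    node = h(leaf)
    for sib in siblings:
        # At odd positions the sibling is on the left, at even positions on the right.
        node = h(sib + node) if index % 2 else h(node + sib)
        index //= 2
    return node == root

# Build a tiny 4-leaf tree by hand.
l0, l1, l2, l3 = (h(x) for x in [b"a", b"b", b"c", b"d"])
root = h(h(l0 + l1) + h(l2 + l3))

# Proof for leaf index 2 (b"c"): its sibling l3, then the opposite subtree hash.
assert verify_inclusion(b"c", 2, [l3, h(l0 + l1)], root)
assert not verify_inclusion(b"x", 2, [l3, h(l0 + l1)], root)
```

The proof is logarithmic in the log size, which is what makes spot-checking a multi-million-entry log practical for a lightweight client.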
And then finally, let's pull the actual certificate out of the entry in the log and just quickly show one more time that this did come from GitHub. And we have a little bit more information here: this is the actual name of the repository, it happened on a push event, it was on the main branch, and this actually was the commit hash from the repo that caused the build workflow to kick off. So the commit starts with 9696; let's go back to the repo real quick and check out my commit history. And sure enough, 9696 is the commit that I pushed up. So all of that information is now tied together in an immutable log for reference by anybody who's consuming that project. So that is the end of the first demo. Let's quickly switch back here to the slides. What I'm going to do in the next few slides is dig into a little bit more detail around what is actually happening underneath the magic of some of these tools. We'll start first with the Cosign tool. It's a Go-based binary that runs cross-platform and has essentially the responsibility for looking at the artifact, generating the signature for that artifact, and then publishing that signature to a transparency log. Again, it does this for both blobs as well as containers, which you saw in the demo. And another thing to note here is, again, the goal for Sigstore was to be a modular and interoperable set of tools and services. So while I used an ephemeral key, and we'll talk a little bit more about what that means in a moment, I could just as easily have grabbed my YubiKey that's here on my desk, plugged it into my system, and used the key that's on that YubiKey to do that signing. I don't necessarily have to generate a key in memory.
If I'm comfortable with managing my own keys, I can simply do that and just generate a call out to our services to generate an attestation that, hey, at this particular point in time, I actually possessed the key, and I'm proving it cryptographically by signing something and publishing that up to the log. So the Cosign tool is really there just to make that super simple and super easy. For containers, we leverage the OCI specs to generate a new artifact on the image manifest object that actually stores the signature itself in the registry alongside the container image. So you're not having to go and fetch these from different places; they're all stored under the same object. And if you're familiar with using tools like Crane or docker inspect to go look at a registry, the signature shows up in those tools alongside the image itself. So this is, again, a pretty flexible tool that helps to walk through that workflow with a pretty simple and straightforward set of APIs on the command line. The next component is Fulcio. This is, again, the code signing certificate authority. And so what you saw in that first demo was the use of what we like to call keyless signing, which is kind of a nod to the notion of a serverless-type workflow: obviously there is a server somewhere that's running that workflow, but we still like to call it serverless. In that same vein, keyless mode is where we actually generate a private and public key pair in memory. We do the signing, we walk through the whole workflow. But at the end, because we've published this proof of possession and the actual signature itself into an immutable log for all to see, I don't have to keep that private key around at all. I can actually just delete it and move on, which is a really powerful concept, because now I don't have to go find that YubiKey and keep it safe.
I don't have to worry about what happens if I've got that little stubby thing plugged into the side of my laptop and my laptop gets stolen. I don't have to worry about that at all. All I need to do is make sure that I maintain the integrity of my OpenID provider's credentials. And assuming that that's true, key management is totally taken out of the picture and is no longer a concern. As I showed, the Fulcio certificate authority either pulls the email address out of that login process or it pulls in the URL for the identity token that was passed into it, and it records that in the certificate itself. And it does all of that with its own transparency log that records those code signing certificates and stores them, again, in an immutable log, so that we have a record of who authenticated, when, and how all of that ties together. So again, all of this works with keys, whether you want to manage your own YubiKey or whether you want to use this kind of keyless mode that we talk about; any of those work with Fulcio. A couple more things to call out about the Rekor transparency log. There is a requirement that we will only insert things into the log that we can independently verify. I mentioned this before, but it's worth reiterating: artifacts themselves are not stored in the transparency log. We are not trying to be the content store for the entire Internet. For digital signatures, we only store the digest, the signature itself, and the public key or the code signing certificate. The only exception to that rule is for full provenance attestations or timestamps. We do record that data in the log, but again, there is no content itself. There are no binaries. There is nothing that can actually be used. These are just attestations, or metadata, about a particular artifact that gets put into the log. We have a full OpenAPI spec that documents what the REST endpoints all look like. We act as a compliant timestamp authority.
And as I mentioned before, with the public good instance, you can publicly monitor and verify the integrity of it yourself. But you can also run these services yourself behind the firewall if you so choose; there's nothing preventing you from downloading them yourself. I'm going to check the time and see if we can walk through some of this. I've hopefully shown you the nice way to use Cosign and our tooling to make this super simple and easy. We've got time, so I'm going to dig in and actually show you the hard way of doing this. What I'll do here is walk you through how I would replicate that whole demo I just showed you, using OpenSSL, curl commands, and things that you could ultimately recreate without our tooling whatsoever. And in doing so, the goal is not just to show you, hey, look at how cool and simple we've made this process, but also to prove that all we're doing is reusing existing concepts and reapplying them in a different way, to try to solve this problem in an easier and more straightforward manner. We're not inventing new crypto algorithms. We're not trying to come up with the next generation of quantum-safe stuff; we're not pushing down that road. What we're trying to do is really drive this experience to be simpler and more straightforward. So with that, let's kick off the demo number two script. The first thing I'm going to do is generate that key pair I talked about before. This is just a simple call to OpenSSL to say, hey, we've got a private key based on an elliptic curve algorithm; let's write that into ec_private.pem, and we'll generate the public key that corresponds to it and put it into ec_public.pem. It's never good practice to show your private key, so I won't show you that, but I will show you the public key just for reference, to say, hey, we've got a PEM-encoded public key that could be used.
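The key generation step described above can be sketched in two OpenSSL calls. The talk doesn't name the specific curve, so P-256 (prime256v1) is an assumption here; the filenames match the ones mentioned in the demo.

```shell
# Generate an ECDSA private key (P-256 curve assumed) into ec_private.pem.
openssl ecparam -name prime256v1 -genkey -noout -out ec_private.pem

# Derive the corresponding public key into ec_public.pem.
openssl ec -in ec_private.pem -pubout -out ec_public.pem

# Show only the public half; the private key should never be printed.
cat ec_public.pem
```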
The next thing I'm going to do is call out to that same Sigstore identity provider. And again, I can point this at any OpenID Connect-compliant provider. If I chose to run my own private deployment of the Sigstore services, I could certainly configure them to point to my internal SSO if it generates ID tokens; that's totally a possibility. But for our public service in this beta period, we just wanted to point to something to get the concept out and available for folks. So what I'm going to do here is leverage a nifty little binary from a company called Smallstep. This is all open source, so you can go take a look and reproduce this yourself. All it ultimately does is walk through that same OAuth dance: call out to a provider, get an identity token, and then write that into the file system. So you'll see my browser pop up again. I'm going to click the login with Google button one more time, click on my address. I get the thumbs up; this was successful. And let's pop back over to the shell prompt. The next thing I want to do is extract the email address from the identity token that came back. Here you'll see I'm using some jq tricks and trimming newlines and things to try to make this process as simple as possible. But ultimately all I'm doing is generating something that I can sign to prove possession of the private and public key pair that I created above. And I'm going to actually sign the email address itself as a string. So now that I've extracted the email address, what I'm going to do is use OpenSSL and just say: create a digital signature using this particular hash algorithm, use the EC private key to generate that signature, and do it over the file named email. Then we'll store that in email.sig, just to keep it easy and straightforward. So the next thing I'll do is: okay, I've signed the email address.
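The extract-and-sign step can be sketched as below. A hand-assembled stand-in token is used instead of a real OIDC identity token (a real one is base64url-encoded and signed by the provider), and sed stands in for the jq tricks mentioned in the talk to keep the sketch dependency-free; "user@example.com" is a placeholder.

```shell
# Stand-in identity token: header.payload.signature, payload base64-encoded.
payload='{"email":"user@example.com","email_verified":true}'
token="header.$(printf '%s' "$payload" | base64 | tr -d '\n').signature"

# Extract the payload segment, decode it, and pull out the email claim.
printf '%s' "$token" | cut -d. -f2 | base64 -d \
  | sed -n 's/.*"email":"\([^"]*\)".*/\1/p' | tr -d '\n' > email

# Key pair as generated earlier (repeated so the sketch is self-contained).
openssl ecparam -name prime256v1 -genkey -noout -out ec_private.pem
openssl ec -in ec_private.pem -pubout -out ec_public.pem

# Sign the email address to prove possession of the private key.
openssl dgst -sha256 -sign ec_private.pem -out email.sig email
```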
Let's locally verify that that signature checks out. So instead of using the -sign option, I use the -verify flag. I pass it the public key in this case, not the private key, and we'll pass it the signature itself as well as the actual input message. Given those inputs, I'm able to verify the digital signature and make sure that everything looks good. Now, hopefully I didn't talk too long; I may have to restart this one more time depending on how long this took. But all I'm doing here, typing very quickly, is generating a REST call to the Fulcio endpoint. I'm passing along the ID token as a bearer token on that API call, and I'm passing some basic JSON in the body of that request stating, hey, I'm using ECDSA keys, here's the actual value of the public key, and here's the value of what I've ultimately signed. So let's see if that actually worked. It did not. So what I'm going to do is rerun this quickly. The joy of doing things live is that you talk and you forget how long things take. We'll run through the same steps: click log in again, check the email, sign it, verify it again. And we'll make another API call to Fulcio, passing that ID token and some simple JSON. It looks a little bit crazy because I'm doing it here on the command line with curl, but hopefully it's simple enough. And then finally, we'll write the output certificate into the SigningCertChain.pem file. Just to show you the identity token that we wrote, and what information is in it, I'm going to print it out, though I have notably removed my user ID in the sub field, just for my own personal sanity since this is being recorded. This is the JSON payload that's inside of an identity token. It is itself signed by the identity provider, and that's the way that we cryptographically link this up into a root of trust.
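The request body for that Fulcio call can be assembled roughly as below. The field names follow Fulcio's v1 signing-certificate API as best I recall; treat them as illustrative and check the project's OpenAPI spec for the authoritative schema. The network call itself is shown only as a comment, and "user@example.com" is a placeholder.

```shell
# Key pair and signed email, repeated so the sketch is self-contained.
openssl ecparam -name prime256v1 -genkey -noout -out ec_private.pem
openssl ec -in ec_private.pem -pubout -out ec_public.pem
printf '%s' 'user@example.com' > email
openssl dgst -sha256 -sign ec_private.pem -out email.sig email

# Base64 body of the PEM public key, and base64 of the raw signature.
PUB=$(grep -v 'PUBLIC KEY' ec_public.pem | tr -d '\n')
SIG=$(base64 < email.sig | tr -d '\n')

# Assemble the JSON request body (field names are illustrative).
cat > fulcio-request.json <<EOF
{
  "publicKey": { "algorithm": "ecdsa", "content": "${PUB}" },
  "signedEmailAddress": "${SIG}"
}
EOF

# The real call would then be roughly:
#   curl -H "Authorization: Bearer $ID_TOKEN" \
#        -H "Content-Type: application/json" \
#        -d @fulcio-request.json \
#        https://fulcio.sigstore.dev/api/v1/signingCert
```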
And here you can see that I don't have any information about how to get to my documents within Google, or any other personal information that I don't want to disclose. It literally is just my email address and nothing else. So now that I've got that certificate downloaded, let's actually print it out and look at it. And this time, we get the certificate, again issued by Sigstore. We'll come back and talk about this validity period in a second. But again, all that's stored in this certificate is the same thing I showed you before: the identity of the identity provider that signed the token, and my email address. Nothing else is stored. Now, a code signing certificate is really a signed document that says this person presented a public key with this value at this point in time. And so embedded inside of that code signing certificate is the public key itself that I generated at the beginning of this second demo. So what I'm going to do is extract that public key value out of the cert, and then make sure that I can still verify that same signed email address from the beginning using what I pulled out of the code signing certificate that came back from Fulcio. And OpenSSL tells me it's still good. I'll quickly prove to you, by calling diff, that both of these keys are still identical, because I don't see a message here that says, wait a minute, this doesn't match. So now we'll move on to the actual signing path, now that I've got the code signing certificate. I'm just going to generate 128 bits of randomness and put it into a file called artifact. Let's again use OpenSSL to digitally sign that artifact using our private key, and we'll put that into artifact.sig. We'll quickly verify that that digital signature checks out, which it does.
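The extract-and-compare step can be sketched as below. A self-signed certificate stands in for the Fulcio-issued code signing certificate, since getting a real one requires the API round trip; the placeholder subject is an assumption.

```shell
# Key pair plus a self-signed cert standing in for the Fulcio-issued one.
openssl ecparam -name prime256v1 -genkey -noout -out ec_private.pem
openssl ec -in ec_private.pem -pubout -out ec_public.pem
openssl req -new -x509 -key ec_private.pem -subj "/CN=user@example.com" \
    -days 1 -out cert.pem

# Pull the embedded public key back out of the certificate...
openssl x509 -in cert.pem -pubkey -noout > extracted_public.pem

# ...and confirm, with diff, that it is identical to the original.
diff ec_public.pem extracted_public.pem

# Sign 128 bits of randomness and verify using the key from the cert.
openssl rand 16 > artifact
openssl dgst -sha256 -sign ec_private.pem -out artifact.sig artifact
openssl dgst -sha256 -verify extracted_public.pem \
    -signature artifact.sig artifact
```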
And now what I'm going to do is generate a digitally signed timestamp. There's an RFC, 3161 if I remember the number correctly, that dictates the standard for getting a digitally signed timestamp from a variety of different authorities that are publicly available. We at Sigstore also operate our own, which is the one I'll use here, but there's nothing to prevent you from using one from DigiCert or another provider. All of that is interoperable based on adherence to that RFC. So what we're going to do is use OpenSSL to generate a timestamp request, and in that request we're actually going to provide the SHA sum over the signature itself. What we're doing here is proving possession of the signature at this particular point in time. So we've created that request; let's now call out to the Sigstore API, again using curl, with a simple push of that binary content, and we'll also fetch the certificate chain that we'll use for verifying the signed timestamp. Now we use OpenSSL to verify that signed timestamp, and what we get back says, all right, the timestamp checks out. And again, you may ask, Bob, why are you generating timestamps? This doesn't really make any sense. What we want to do is make sure that the signature existed during the validity period of the code signing certificate. You'll see here that the code signing certificate is valid for 20 minutes, starting at 14:40 and ending at 15:00. So what I'm going to do now is break open the reply that came back and compare the value in that timestamp to the validity period. And what we see in the first part of that output is that the timestamp I got was at 14:43, which is after 14:40 but before 15:00. So now I have cryptographic proof that this signature existed during the validity period of that code signing certificate.
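The timestamp-request step can be sketched with OpenSSL's ts subcommand. Submitting the request to a timestamp authority is a network call, so this sketch stops at creating and inspecting the request; the hash inside it is the SHA-256 of the signature file, and the signature bytes here are a placeholder.

```shell
# Placeholder for the real artifact.sig produced earlier in the demo.
printf 'stand-in signature bytes' > artifact.sig

# Build an RFC 3161 timestamp query over the signature file.
openssl ts -query -data artifact.sig -sha256 -no_nonce -out request.tsq

# Dump the request in human-readable form; it shows the hash algorithm
# and the message digest that the TSA would countersign with a time.
openssl ts -query -in request.tsq -text
```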
And at that point, I can go put that in the transparency log and throw away the keys, because I won't actually need to keep them anymore; all of this process can be walked back and independently verified against the root of trust with the information from the logs. So again, I'm going to do a really quick job of typing a bunch of JSON flawlessly and posting that up to our REST endpoint. I'm sorry, Bob, we've reached the time for Q&A. If you don't mind, finish the idea and then we can move to the Q&A. Yep, I'm right at the last step, so we'll move right to that here in 30 seconds. So what we'll do here is submit that signature and break apart the output. Again, this is similar to what we saw before: we're storing just the artifact digest, the signature, and the public key. Nothing else in the log. And we are done. So, last slide, and we'll move to Q&A. The project is growing super quickly. In terms of the areas that we're going to continue to focus on, we're doing a lot of work with the upstream container ecosystem, as well as, as I mentioned, with the various package managers that are out there. We're trying to get this integrated into a variety of different admission controllers within the Kubernetes community, as well as the broader set of Linux distros and others. And we're pushing towards public GA of many of our services. So, long story short, I really appreciate everybody's time today. If you're interested in more information, feel free to look at GitHub, visit our website, or check out the broader efforts under the OpenSSF. So with that, thanks, and we'll switch over to Q&A. Let's see here. Thanks, all. So, what is the name of the scripting tool? Yeah, that's the most popular question I get. It's called doitlive. It's a Python script; you can just do a pip install. All you do is give it a pointer to a shell script and tell it to run it.
And for every key I mash on my keyboard, I can type whatever I want, but it actually puts into the shell what is in the script itself. So it lets you not worry about typos when you're doing this stuff, which is a pretty nifty feature, because you don't have to worry about the back and forth of all of this. Any other questions here? The other questions are in the Q&A; I can read them. Oh, I didn't even see there was a tab. Sorry, I got it wrong. Yes, sorry. Are there any container registries such as Docker Hub or Quay.io interested in this? Are they being onboarded as well, or showing interest? Yeah, the answer to that is yes. For the Red Hatters out there, Quay 3.6 actually has all of the OCI media type support required to implement this. That on-prem product is GA; I'm not sure of the rollout status for the public Quay.io service to get to that 3.6 level, so I'd redirect that question to the PM. But Docker Hub already supports this. GHCR, as I showed, supports this today. GCR on Google Cloud and Amazon's container registry work with this. So we're getting to the point where this is going to be pretty much ubiquitously supported across all popular container registries out there. And I see Peter asks: is it possible to push signatures to any container registry? Same sort of thing. We're using the OCI standard for recording this information, and we're not doing anything custom, so as long as a registry is OCI compliant, it will just work. There's a different question that just came in: if I understood it correctly, the compiled binary was signed, not the commit. That is correct. We could have actually signed the commit as well, but in the interest of time, I didn't want to go through and sign 70 different things. We can sign that and record the signature in the log just as well, because frankly, the commit itself is text that has digest values in it.
It's structurally very similar to other things, so there's nothing preventing us from doing that; it was just a matter of scoping the demo. The other thing that's absolutely correct is that we ultimately want to generate provenance statements to say: who was the person that generated the commit, and where did it originate from, and be able to walk that back. Git, obviously, with its structure, provides some guarantees there that we, and more broadly the community, rely on for integrity. But for signing the commit itself, the Git tooling has some stuff built in, and we're looking at some different variants of that as well. It's totally supported, and a great point. So, we've hit the total time for this talk. Anyone who hasn't had their question answered: Bob will be available on our virtual venue on Work Adventure, and if you want to meet him or have any more questions about the talk, you can catch up with him there. Thank you for the talk, and thank you everyone for joining us.