Hey everyone, I'm Veda Shankar and I work as a product manager in the trusted software supply chain group at Red Hat. I'm going to show you how you can make sure your software supply chain is rock solid in terms of integrity, transparency, and assurance, thanks to a project called Sigstore. In the software world, we often see a lack of digital signatures in the supply chain. And even when they are there, they tend to use traditional digital signing methods that can be a real pain to automate, scale, and audit. In the next few slides, I'm going to explain Sigstore and its components, and how it's setting the standard for signing and verifying software. I'm joined by Greg Pereira from the Emerging Technologies group, who will give you a detailed demo of Sigstore in action using Tekton pipelines.

Most software security is done using digital signatures, which are based on public key cryptography. Public key cryptography uses a public and private key pair and is a magical gift from the world of mathematics to computer science. The public key can be distributed and shared with everyone, but the corresponding private key should be kept confidential by the owner. You can sign an artifact with the private key, and anyone who knows the corresponding public key can verify the signature, proving which private key produced it and thus confirming the identity of whoever produced that artifact.

Now, the advantages of using digital signatures are that they verify the integrity of content, authenticate with an identity, and provide non-repudiation. But there are some disadvantages. Private keys need to be stored, so invariably you need key management software. Current signing tools don't scale and are rarely used by developers. Revocation of keys is also a problem, and is done inconsistently by software developers. Also, traditional keys don't integrate easily with DevOps automation. But there is good news.
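The traditional sign-and-verify flow described above can be sketched with plain OpenSSL. This is a minimal illustration of key-based signing, not Sigstore itself, and the file names are arbitrary:

```shell
# Generate a private key and derive the corresponding public key
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem

# Sign an artifact with the private key
echo "my release artifact" > artifact.txt
openssl dgst -sha256 -sign private.pem -out artifact.sig artifact.txt

# Anyone holding the public key can verify the signature
openssl dgst -sha256 -verify public.pem -signature artifact.sig artifact.txt
```

The last command prints `Verified OK` when the artifact is unmodified. Notice that you are now responsible for storing `private.pem` and distributing `public.pem` yourself, which is exactly the key-management burden described above.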
These disadvantages we see with traditional digital signing techniques have been solved by Sigstore. Sigstore is an open source standard for signing, verifying, and protecting software. Sigstore was initially released in March 2021 through a collaboration by Red Hat and Google, as a project with the Linux Foundation, and it aimed to solve issues around where software comes from and how it was built. Sigstore specifically seeks to improve software supply chain integrity and verification activities. It enables developers and consumers to securely sign and verify software artifacts. The software artifacts can be release files, container images, binaries, or software bill of materials manifests. For example, in the case of a container image, you would sign the image with a cryptographic key, and the verification process would confirm that the image has not been modified since it was signed and that it comes from a trusted source. In addition, the transparency log provided by Sigstore is useful for auditing signing events.

Let us look at some of the key components of Sigstore. Fulcio is a free-to-use certificate authority for issuing code signing certificates for an OpenID Connect identity, also popularly known as an OIDC identity, such as an email address. So how does it work? Fulcio, before creating the ephemeral certificate, will prompt you to log in with a popular identity provider like GitHub, Google, or Microsoft, and takes a token from the provider, which proves your identity on that platform. Fulcio then takes this token, along with a new public key the client generates, to create a certificate. This certificate is an attestation that says this public key was definitely owned by this identity. This way, we don't need a discovery mechanism for the public key, and we don't need a way to update that public key, because, again, we only care about the identity here. Also note that Fulcio only issues short-lived certificates that are valid for 10 minutes, and the private key is deleted after signing.
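In practice, this keyless flow is what a cosign invocation looks like. The image reference and identity below are placeholders, and exact flags can vary between cosign releases, so treat this as a sketch rather than the exact commands from the talk:

```shell
# Sign a container image "keylessly": cosign opens a browser window
# for the OIDC login (GitHub, Google, or Microsoft), sends the token
# plus a freshly generated public key to Fulcio, and records the
# signing event in the Rekor transparency log.
cosign sign registry.example.com/myorg/myimage@sha256:<digest>

# Later, anyone can verify the signature against the signer's
# identity, without ever handling a long-lived public key.
cosign verify \
  --certificate-identity someone@example.com \
  --certificate-oidc-issuer https://github.com/login/oauth \
  registry.example.com/myorg/myimage@sha256:<digest>
```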
In addition to public identity providers like GitHub, Google, and Microsoft, you can also bring your own identity provider that supports the OIDC interface required by Sigstore. So Keycloak works great with Sigstore.

Rekor is a component of Sigstore that leverages Trillian to provide a transparency log for storing and verifying metadata about software artifacts. Trillian is a tamper-evident, verifiable log for storing data in a transparent and auditable manner. So when container images are signed, Rekor appends the metadata to the Trillian log. This creates a verifiable record of the metadata, which can be audited and verified by anyone interested in checking the integrity and authenticity of the container image.

Cosign is a client-side command line tool that can be used for signing and verifying artifacts. If required, it will push the signature of the artifact to the OCI registry and also push it into the Rekor transparency log. Cosign also has built-in support for in-toto attestations.

Gitsign is also a client-side tool that implements keyless Sigstore signing for Git commits with a valid OpenID Connect identity. In practice, that means you won't need GPG keys and a complicated setup in order to sign your Git commits. Later in the demo, Greg will show how to configure gitsign within your project and sign your commits. You can use your GitHub login as the valid identity for authentication, and after signing the commit, the details will be stored in the Rekor transparency log for subsequent verification.

This diagram shows how these Sigstore components come together. In this case, a developer, through a client tool, will request a certificate from Fulcio for signing an artifact. Fulcio will prompt the developer to sign into an OIDC identity provider and retrieves an identity token. The client generates a key pair, and Fulcio then issues a certificate attesting to their identity.
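Configuring gitsign for a repository comes down to a few git configs. The settings below follow the gitsign documentation; the self-hosted Sigstore URLs are placeholders, and the public-good instance is used when they are unset:

```shell
# Tell git to sign every commit, and to use gitsign as the
# x509 signing program instead of gpg.
git init gitsign-demo && cd gitsign-demo
git config commit.gpgsign true
git config gpg.format x509
git config gpg.x509.program gitsign

# Only needed when pointing at a self-hosted Sigstore stack
# (placeholder URLs shown here).
git config gitsign.fulcio https://fulcio.example.com
git config gitsign.rekor https://rekor.example.com
git config gitsign.issuer https://keycloak.example.com/realms/sigstore
git config gitsign.clientID sigstore
```

After this, a plain `git commit` triggers the OIDC login flow instead of asking for a GPG key.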
The artifact is then signed using the private key, and the signed artifact is pushed to a repository, say a container registry. For security, the private key is destroyed and the certificate expires. We don't need the private key anymore, because all we need is the public key to verify the signature, which proves the identity.

After the artifact is signed, a transparency log entry is created. The entry contains the hash of the artifact, the public key, and the signature. The Rekor transparency log witnesses the signing event by entering a timestamped entry into the records, which attests that the secure signing process has occurred. Clients upload signing events to the transparency log so that events are publicly auditable. As artifact owners, you should monitor the Rekor log for your identity and make sure there are no unexpected signing events.

During verification, when a software consumer wants to verify a software signature, the tuple of signature, key, and artifact is compared against the timestamped Rekor entry. If they match, it confirms that the signature is valid, because the user knows that the expected software creator, whose identity was certified at the moment of signing, published the software artifact in their possession.

The Sigstore community offers a free public-good service, which you can use for signing and verification. You can also deploy the Sigstore services in your own Kubernetes cluster using the scaffold Helm charts on the Sigstore GitHub. Let me hand it over to Greg, who will show you how he's using a local Sigstore service in his Tekton pipelines.

My name is Gregory Pereira. I'm a software engineer in the emerging technologies department at Red Hat, and today I'm going to be demoing two Tekton pipelines for you, showing how you can enhance your security with regard to the software supply chain.
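Monitoring the transparency log for your own identity, as suggested above, can be done with the rekor-cli client. The email is a placeholder, and this assumes the public Rekor instance unless `--rekor_server` points at your own deployment:

```shell
# List every Rekor entry whose certificate matches an identity;
# this returns the UUIDs of the matching log entries.
rekor-cli search --email someone@example.com

# Fetch one entry to inspect the artifact hash, public key,
# signature, and the timestamp at which it was integrated.
rekor-cli get --uuid <entry-uuid> --format json
```

Running the search on a schedule and alerting on unexpected UUIDs is one simple way to catch signing events you didn't initiate.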
We have downstreamed a version of this Sigstore scaffolding repository, but I recommend you start here. This is what everything is based off of for us and how we deployed this, so I would take a look at this repository. And we can take a quick look at these components. We're using an OpenShift cluster here, a ROSA cluster, but it could be any Kubernetes cluster; none of this is specific to OpenShift. Keycloak is our OIDC provider in this case, and for that we're using the keycloak-system namespace. We have Fulcio deployed in the fulcio-system namespace, the TUF mirror deployed in the tuf-system namespace, and Rekor deployed in the rekor-system namespace. So those are our Sigstore components and why we chose to deploy them ourselves.

And now we can jump directly into the pipeline. The first use case I want to talk about is this: imagine that within your team you want to verify all your own builds and all your own commits, and you want them verifiable within the transparency log that you deploy. That's the case we're going to be exploring through this pipeline. I'll start by triggering the pipeline, then I'll pop open the pipeline and show you the code, so you can follow along and see what's going on. So I'm just going to prep this by going into our demo namespace. Perfect.

Okay, so this pipeline is very simple; it's triggered just by a commit. I've been doing some testing here already, and you can see this testing file here. If I do git status, I should not see that file, because it was part of the last commit. There are some other changes here, but you won't notice this testing file, because that's been committed. So what I'm going to do is remove this testing file. Git status shows it is now deleted, and I'm going to add that change.
That's the change I'm going to make; imagine this is a feature change. And now I'm going to commit and sign it. One thing I'll mention here: as I was saying, if you deploy your own version of the Sigstore components, you're going to have to do some additional legwork to set these configs. I've chosen to use gitsign to sign these commits, so I need to pass in the in-cluster URLs for the different components that we have deployed, and I need to say that I'm going to be using gitsign. So if I now run git commit — I'm already signed in, so it won't pull up the login flow — you can see from this URL that it will use our OIDC provider, Keycloak, in the keycloak-system namespace, to authenticate. That way I'm able to sign this commit and submit it to the Rekor transparency log, and it spits back this tlog index, saying: yes, you have committed this to the Rekor transparency log. So now I can close this page.

This will trigger a webhook, which I have set up in that repo, and it will start this pipeline. Let me go open that up so we can follow along with the pipeline runs. Oh, sorry, I've just committed it; I haven't pushed it, so that's why it wouldn't have fired yet. I have this webhook set up on the upstream, so I'm going to push upstream. I don't need to force push. Okay.

So if we look at my webhook here for the pipelines demo repository, and go to recent deliveries, you see one here at 20:16, and that matches up pretty closely; it's 8:17 right now. So that was this commit. What that will do is kick off this pipeline, and you can see this verify-source-code pipeline run has just started. I'm going to pop open the logs here. It's going to start by just installing some stuff, which we'll see in a second, but first I'm going to step into the code and show you a little bit about what's going on here.
So the way I've set up this repo is that all of the manifests that apply to multiple pipelines live in this 00-base-pipelines-manifests directory. I've got some trigger bindings here for Tekton — and this is just a small aside on Tekton: if you don't know Tekton, that's perfectly okay. You should be able to follow these concepts; they're not too hard to understand with or without knowledge of Tekton. But I digress. So these two trigger bindings are part of this generic, non-pipeline-specific directory, and then also the namespace. In case I want to apply or delete everything here, I can run kustomize build and pipe that to kubectl apply or kubectl delete, so I can just do that through here.

I'll touch on these trigger bindings. Basically, what this allows us to do in Tekton is: this first one is for GitHub pushes, and it will enable us to take the data from that push event and propagate it through our pipeline. The other one is basically all static values; it allows us to pass those values into our pipeline as well, but they are statically defined for our deployment of Sigstore, because we chose to deploy these components ourselves.

This 01-verify-source directory is the actual verify-source-code pipeline; this is the actual code for the pipeline. Ignoring this testing file for now, I'll start by showing you the pipeline, and we'll get into the task at the end, because that's where the actual work happens. It's very simple: this pipeline has two tasks. The first just pulls the source code using a git-clone task, and the second runs this verify-commit-signature task, which we'll see at the end. So this is the pipeline. The trigger template will take the data passed from the two trigger bindings we saw earlier and propagate it into this pipeline run, which kicks off the pipeline that we just saw.
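A Tekton TriggerBinding like the ones described here is only a few lines of YAML. This is a sketch, not copied from the demo repo — the parameter names are illustrative — showing how fields from a GitHub push payload get mapped into pipeline parameters:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    # Values are extracted from the GitHub push event body
    - name: git-repo-url
      value: $(body.repository.clone_url)
    - name: git-revision
      value: $(body.after)
```

The second, static binding works the same way, except its `value` fields are hard-coded URLs for the self-hosted Sigstore components instead of expressions over the event body.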
We also have an event listener in here, which is what actually takes these trigger bindings and passes them to the template. It will also only run on push events that it filters from this repo, the secure-sign-pipelines-demo repo, where I have this set up, and it has this webhook secret attached to it so it's able to communicate with that webhook. We have a route to expose this event listener, which also helps it receive the data from that webhook. This Sealed Secrets piece is just our implementation of secret management; we don't have an installation of Vault, and this is easier for us.

And now let's look at that actual task. It starts by installing some binaries: two from Sigstore, rekor-cli and gitsign, and then a Go implementation of jq, which is just for parsing JSON data. It sets some aliases so we can call these binaries. The rest is all traversing the Rekor log. It starts by searching our Rekor instance for everything that is signed with the email of the commit author. First, it checks whether there are any entries in Rekor for that email at all; if there are none, you obviously haven't committed anything to Rekor. Otherwise, it takes the date of the commit and iterates through all of the entries in Rekor, piping each one to jq so it can compute the difference between the commit date and the time the entry was integrated into Rekor. What this allows us to do is finally check whether that difference is less than 300 seconds, or five minutes, which is roughly the amount of time it takes the pipeline to run. This lets us find the Rekor entry for the commit that just kicked off this pipeline. After that, it pulls down some certificates and PEM files it needs to be able to do the verification.
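The matching logic described here — compare the commit timestamp with each entry's integration time and keep entries under five minutes apart — can be sketched in plain shell. The two timestamps below are made up for illustration:

```shell
# Hypothetical values: the commit time from git, and the
# integratedTime that Rekor reports for one log entry.
commit_time="2024-03-01T20:16:00Z"
rekor_time="2024-03-01T20:17:30Z"

# Convert both to epoch seconds and take the difference
# (GNU date is assumed here, as in the pipeline's container).
diff=$(( $(date -ud "$rekor_time" +%s) - $(date -ud "$commit_time" +%s) ))

# Keep the entry only if it was integrated within 300 seconds
# (five minutes) of the commit.
if [ "$diff" -ge 0 ] && [ "$diff" -lt 300 ]; then
  echo "candidate entry: integrated ${diff}s after the commit"
fi
```

With these sample timestamps the difference is 90 seconds, so the entry would be kept as a candidate.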
It will set all these values as environment variables and git configs, and then it will finally run gitsign verify for the HEAD commit that it checked out from the git-clone. So if we look at this pipeline, we can see a run now. I'll briefly go through it. I know these logs look intimidating, but it's just installing those three binaries, as I mentioned earlier. These are all the UUIDs for the Rekor entries that match my email, and it's adding them all to the array — yes, you see 31 entries from Rekor. It's getting the value for each of those and adding each one to this array of Rekor entries. And this is the actual data passed back from Rekor: we've got a SHA value here, we've got the signature content here for how it was actually signed, and so on. These are all 31 of my signed entries. And here's where it starts to do the time comparison, looking for matching values. We can skip past this, because if it doesn't find any, it will eventually fail out and say there's nothing within 300 seconds. But because this value is not empty, it has found something less than 300 seconds apart. This is where it curls down those certificates and PEM files and exports the values. And then, finally, it is able to actually run gitsign verify. It spits back a tlog index and says that everything is verified: verified gitsign signature, the certificate claim is true. And we can actually compare this tlog index to what we saw when I was making the commit — it spits back the same tlog index. That proves this run verified and found the right entry in Rekor and the right commit.

So now, the second pipeline is a little simpler and a little more common: it is for building and signing container images. I'm going to go ahead and kick off the pipeline, and then we can talk about it. So I have this testing file here already.
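The final verification step can also be reproduced locally with gitsign. The identity and issuer below are placeholders, and the flag names follow cosign's conventions, so they may differ slightly between gitsign versions:

```shell
# Verify the signature on HEAD against the expected signer identity
# and the OIDC issuer that authenticated them (placeholder values).
gitsign verify \
  --certificate-identity=someone@example.com \
  --certificate-oidc-issuer=https://keycloak.example.com/realms/sigstore \
  HEAD
```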
So if we run git status, we don't see that file. I'm going to remove this testing file and add it back, and I'm going to commit — again, there are local git configs set up here for signing. The commit message isn't great, because it removed the file and then added it back, but whatever. So: tlog index — this has been signed and submitted. Now I'm going to push this to the origin testing branch; I don't need to force push. Okay.

So now we can look at our webhook, recent deliveries. We can verify: 20:24, and it's 8:24 my time, so that lines up; this was correct. Let's look at the pipelines and see if that got kicked off, which it looks like it did — this build-and-sign pipeline. I'm going to pull this up and look at the logs, and again we'll step through it while it's running.

In this build-and-sign-image directory, we've got another event listener for a separate trigger binding, as well as the GitHub push trigger binding that we had before. It still has the webhook key, a different repo, but essentially the same filter, and it's going to run this separate trigger template. We've got a route to expose it, same deal as before. And this is the pipeline. The pipeline here is going to pull the source code, as we saw in that first task, and the second task just runs this buildah cluster task.

Now you might be wondering: how does it end up actually doing the signing, if it's just a build task and I don't see any parameters passed here for signing? This actually leverages Tekton Chains. Tekton Chains is a controller that watches over Tekton pipelines, and it enables you to sign task runs, pipeline runs, and results from Tekton — so that could be files, container images, all of that. This is my configuration for Tekton Chains; it's really as simple as just applying this.
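A Tekton Chains configuration like the one referenced here boils down to a few keys on the chains-config ConfigMap. The keys below come from the Tekton Chains documentation, though the exact values the demo uses aren't shown on screen:

```shell
# Configure Tekton Chains to emit in-toto attestations, store
# signatures alongside the OCI image in the registry, and record
# entries in the transparency log.
kubectl patch configmap chains-config -n tekton-chains \
  --type merge \
  -p '{"data": {
        "artifacts.taskrun.format": "in-toto",
        "artifacts.oci.storage": "oci",
        "transparency.enabled": "true"
      }}'
```

Once this is applied, the Chains controller signs the results of every TaskRun automatically, which is why the build task itself needs no signing parameters.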
I've got my pull secret here, which will enable it to actually push to Quay. We looked at the pipeline; this is the trigger binding, which just has my Quay repo — yes, we're going to use TLS verify, and GitHub as the git host. And then we have a trigger template here, which will take the cumulative parameters passed from those two trigger bindings — the GitHub push one and this new one we just looked at — and pass them into this pipeline run, which will kick off this build-and-sign pipeline. So those are all the components we have here.

And now we should be able to take a look. It has been building this image; you can see it stepping through the 13 steps of this Dockerfile. And here we go: pacman, testing branch, 01e5 is what we're going to be looking for. So if we hop over to Quay and reload this page, we can see this testing-branch-01e5 image has actually been pushed. We can see here that this tag has been signed via cosign properly, and that it pushed an attestation to go along with it. And if we look at the pipeline run, the first part of it has that SHA, 2021ca87, and that should be the SHA for the attestation that it pushed: 2021ca87. So this is the attestation that it pushed, and we can see that both are pushed and validated.

It's been a real pleasure playing with these components and building out these pipelines to get a better sense of all the tools out there for software supply chain security. All this code will be available in the repo that we saw earlier, the secure-sign-pipelines-demo code. I have a verify branch here for the verify-source-code pipeline, and that's where all of this is available. You'll be able to pull it down, play with it, and extend it to your heart's content. So thank you so much for watching, and I hope you get a chance to play with this technology. Thank you so much.
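Checking a signed tag like this outside the Quay UI can be done with cosign. The image reference, identity, and issuer below are placeholders, and the flags assume cosign v2-style keyless verification:

```shell
# Verify the in-toto attestation that Tekton Chains pushed
# alongside the image, pinning the expected signer identity
# and the OIDC issuer that authenticated the signing.
cosign verify-attestation --type slsaprovenance \
  --certificate-identity-regexp '.*@example\.com' \
  --certificate-oidc-issuer https://keycloak.example.com/realms/sigstore \
  quay.io/example/pacman:testing-branch-01e5
```

`cosign verify` with the same identity flags checks the image signature itself the same way.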