Hello everyone, welcome to another OpenShift Commons briefing. Today, we're going to talk more about security. You've been seeing a lot of security briefings lately, and that's because we all really care about security. You've heard about the recent acquisition, and hopefully you watched the StackRox briefing. We've had briefings on DevSecOps, and this one's great because there's a new project called sigstore that our guests Bob Callaway and Ivan Font have been working on. They'll tell you more about it. They're both here from the CTO office at Red Hat in the Emerging Technologies division, and this project was just recently donated to the Linux Foundation. So, with that intro, Bob, would you like to introduce yourself as well and kick it off? I think you're still muted. There you go, you're unmuted, right? All right. Hey everybody, Bob Callaway, good morning, good afternoon, good evening, wherever you are. As Karina said, I'm part of the CTO office here at Red Hat and I'm one of the maintainers on the collection of projects that we're calling sigstore. Today, what we're hoping to do is give you an overview of the problem space that we're working in, some of the challenges that exist today, and what we've actually done to try to rectify some of those challenges. We'll do a live demo of a couple of different things: not only the basic system, generating keys and signing software, but also Tekton pipelines, as well as Open Policy Agent. Then we're happy to take questions at the end. So, with that, we'll jump right in. As you've heard from the last few briefings, there are a lot of challenges in securing the overall software supply chain. There was a report that came out that said in the last two years there's been a roughly 400 percent increase in supply chain attacks. Part of the reason why is that supply chains are fundamentally complicated, right? 
You've got developers' workstations, various CI systems, public code repos, artifact repositories behind the firewall, container registries; there are a lot of moving parts that it ultimately takes to develop, deploy, and iterate on cloud native software. So there's no shortage of issues that have come up of late. Certainly the SolarWinds attack is one that I think is on a lot of people's minds right now. But at the end of the day, maintaining keys and making sure that you've got account security set up are complicated tasks, and the more we shift left and put more of the responsibility on developers to solve these problems, the more complex our jobs get. Now, oftentimes you think about artifacts, and even if you go to the SolarWinds post-mortem and look at what happened, code signing is often looked to as one way that you can hopefully verify the integrity of what is being distributed. But when we look at the open source community today, across many of the popular projects, only some of them are being signed. You can see in the table here some of the most popular package managers that exist today. There are different practices in terms of how key material is published, what trust models are applied, whether signatures are verified by default, or whether you need to trust on first use. Long story short, we have a very disparate set of practices out in the industry today, as well as in open source, around how software is signed and, fundamentally, how consumers of software verify those signatures at the end of the day. 
Part of the reason why signing is so important is that not only does it provide integrity, it also provides some other properties: non-repudiation, to make sure that somebody can't claim they didn't create something, since the presence of a signature implies possession of the private key; and if we have a linkage between identity and possession of the keys, we could actually use this for authentication as well. But I don't know if anybody on the line has ever tried to use some of the more widely known public key cryptography tools, namely PGP. The user experience leaves a lot to be desired. Not to mention, once you figure out how to use it, you've then got to figure out what to do with this key pair, right? I've clearly got to keep it secure to make sure that nobody gets hold of the private key and can masquerade as me. But do I buy an HSM? Do I use a YubiKey? What if I want to do this in the cloud? How do I manage all this? There are different cloud APIs, there are key management systems. Quickly, if you're not a security expert, this becomes very, very difficult to implement correctly. There's no shortage of folks that have tried to do this and failed, or tried to do this and are very cautious about exactly how far they want to go down the rabbit hole, just due to the complexity here. And we'll take it one step further: even if you actually do go to the extent of signing your software, if you're an open source project, you now have to think about, hey, I've got multiple maintainers. Who's actually able to speak on behalf of the project itself? Who's able to publish an artifact? And how do consumers of that software ultimately go through the steps to verify? Not only is it signed, but is it signed by somebody that we should trust? Maintainers come and maintainers go; how do I maintain that list for the project? 
Maintainer turnover is another challenge as well. Here you see an example on the right side from the Node.js project. They publish this on GitHub, and I don't mean to throw any shade at the Node team, because as I mentioned before, this is a fundamentally hard thing to manage. But if a maintainer leaves, how do you actually ensure that their key pair is not used after the fact to sign a malicious release of that project? Assuming no ill will, maybe that's not an issue, but at the end of the day, we need to be able to prove the provenance of what we use in IT environments today. So even if you go to all this extent to try to sign software, it's a pretty complicated thing to get right. So that's the setup for what we hope to solve with sigstore. Imagine a world where signing is just done by default and is ubiquitous across everything. Part of the way that we have to deliver that vision is to make it so much simpler. The other thing that we're missing is a notion of transparency, and we'll get into a little later how we actually add this in. So we created this collection of projects that we're calling sigstore, and as Karina mentioned, a little earlier in this month of March we donated the project set to the Linux Foundation as well. We've also launched what we're calling a public good service, basically a hosted instance of the set of projects that developers can use. But if you want to take it and run it yourself behind your firewall, you can certainly do that as well. So let's jump in and learn a little bit more about what's in sigstore itself. As I just mentioned, sigstore is a collection of modular projects that can be used independently or together as a whole to address some of these problems. We've implemented this by default on top of Kubernetes, with everything done in containers. 
I guess the best mental model I can give you is the Let's Encrypt project, if you're familiar with it: that's a public good service that's trying to make it easier for folks to secure their websites, through the advent of the ACME protocol as well as hosted infrastructure that's paid for by an open consortium of entities that are all incentivized to see a more secure internet, which is certainly good for all of us. We see ourselves in that same light. We want to make sure that software is signed, and we want to make sure that it's transparent, so the more we can do to offer that to folks, the better. Now, you may hear sigstore used in a couple of different senses throughout this presentation, or if you read more; that's kind of our umbrella brand, if you will. So here's a quick little decoder ring for some of the key projects that we have right now. There are really three that we'll go through in some detail today. Fulcio is the first one. This is a certificate authority that issues code signing certificates, and there's a linkage between your OpenID Connect identity and what's embedded inside the code signing certificate. The second project is Rekor, and this is what we're calling a signature transparency log; we'll go into what that ultimately means, but think of it as an append-only, immutable ledger that serves as a transparent source of truth as to what artifacts have been signed, with what key pairs, and by whom. And then finally, since we're on the OpenShift briefing today and containers are certainly a hot topic, we've created a tool called Cosign, which uses the Red Hat simple signing approach to sign containers and publish that information into OCI-compliant registries. 
If you go to our GitHub organization, you'll see a ton of other projects coming in; many more are under development, and we're starting to reach out and get new ideas coming into the community, other ways that we can integrate capabilities. So I'm going to walk you through, from a developer's point of view, how you would take advantage of sigstore, and then we'll jump into a demo, because obviously sitting and staring at charts isn't as fun as seeing it work in the field. So imagine you're a developer and you've created some awesome piece of software that you want to share with the world. The first thing I've got to do is generate a key pair; you can't sign without public key cryptography, so I need a key pair. We jokingly call sigstore keyless signing in the same vein that serverless doesn't use a server: obviously it does behind the scenes, it's just obfuscated. We call this keyless signing in the same sense. Yes, we use a key, but the lifespan of the key can really be a matter of milliseconds in this system. So we're not worried about protecting this key pair in perpetuity; we're relying on the immutable nature of the transparency logs to ensure that we've got a root of trust that can be independently verified. It says: hey, at this point in time, the developer owned this set of keys. We'll note as well, back to the modular point, that if you have your own key pair and you've gone through the process to store it securely in an HSM, in a vault with armed guards, that's totally fine too. Nothing here precludes you from bringing your own keys. So anyway, I'm developing software and I've got a key pair. The first thing I'll do is authenticate using the OpenID Connect protocol. 
For those of you who aren't familiar with this: if you've gone to any website of late and seen the "Log in with Google" or "Log in with Apple" or "Log in with Facebook" buttons, they're actually using a protocol called OpenID Connect behind the scenes. It's built on top of OAuth and essentially uses an external identity provider to authenticate and make an attestation about who you fundamentally are at the end of the day. So in our system, you reach out to the Fulcio service and authenticate using OpenID Connect, and we get an ID token; that's what's passed under the covers. We verify that it came from a provider, and that's cryptographically verified through the token being signed itself. Then we generate a code signing certificate; think of this as a record that has the public key from your key pair, and we put that into an immutable transparency log. You can publish that code signing certificate; it's basically your new public key, so folks can use the certificate to verify a signature just as they could if they only had a raw public key. Once that certificate is published into the log and returned back to the developer, I've got this key pair and I've got a code signing certificate. What should I do now? Well, the next thing is to actually sign the software itself. You can use OpenSSL, or you can use the client tooling that we provide, but at the end of the day, we want that artifact to be signed. Now, once we have a signed artifact and we've got the signature, we need to put that into a transparency log, and that's where the Rekor project comes in. We'll show an example of what goes in the log, but at the end of the day, it's the public key, it's the signature itself, and it's the hash value of the artifact that was signed. 
So in the immutable log, we have a record of what artifact was signed and with what key pair, and we store all that information in the log so that if people want to go back and verify it themselves, they can do that. And then finally, and we'll get into this when we talk about the integration with OPA that we'll demo, you have to have some way of rounding out this trust loop to say, hey, these are the actual software releases that exist and are trusted, and these are the maintainers of the software project that I trust. There are a couple of different ways that we've seen folks address this problem. Some people are comfortable putting a maintainers list on GitHub; some people put a maintainers list on a website. Obviously those have some pros and cons, but we also see transparency logs potentially playing a role here as well. At the end of the day, you want to be able to publish what the known good releases are and who the signers are that I fundamentally trust, and put that into a system as well. So now I've signed the artifact and published it to all the transparency logs. I can throw away the keys; I don't need them anymore, because the relevant records are stored in this immutable ledger. And as a developer, I can now monitor these transparency logs and look for, hey, has anybody else published anything with that private key? Maybe I thought I threw it away; maybe I used a system whose memory was compromised. We want to make sure that you monitor your identity as well: who else might have had my password, logged in, and generated an identity token? So the concept of monitoring is really critical in this model: if you see a certificate or a signature being generated with your identity, then obviously that's a signal of, hey, wait a minute, maybe my keys weren't as secure as I thought. 
And then once I've monitored it, I can give that signed piece of software to end users, who can download it, verify the signature, and actually query the log themselves to make sure that you as a developer were the person that signed this and that you actually published it into the log. You can tie all of this together into a root of trust that is bootstrapped off of Fulcio providing the certificate transparency, Rekor providing the signature transparency, and the OpenID provider attesting to your identity. So with all that, we'll jump to a demo. We'll do this live here; hopefully you can see my screen. We'll walk through basically the exact same flow that I just showed you in slides, through the command line. The first thing we'll do is create a software artifact that we want to distribute. In this case, I'm just going to generate a small random file and put that into a file named artifact. Next, again, we call it keyless, but we need a key pair, so we'll generate a set of keys. In this case we're using an elliptic curve algorithm, but we could use RSA as well. We'll generate two files, the private key and the public key, and write those into the directory. Next, we'll sign that small artifact file using the private key. Here, we'll just use OpenSSL to accomplish that, and we'll store the signature in a detached file called artifact.sig. Now, just to prove that the signing worked fine, we'll go back and verify it using the public key. All of these commands we'll publish on our website in a blog post, sort of a library, if you actually want to try this out. We verify that signature, and everything comes back and says it verified OK. So the next thing we'll do is reach out to an OpenID Connect identity provider and get that ID token I mentioned. We're using a CLI command here that will launch a browser to drive this authentication workflow. 
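The OpenSSL portion of the demo above can be sketched roughly like this; the file names such as `artifact` and `ec_private.pem` are illustrative, not necessarily the exact ones used on screen:

```shell
# Create a small random artifact to distribute
head -c 100 /dev/urandom > artifact

# Generate an ECDSA (P-256) key pair, as in the demo
openssl ecparam -genkey -name prime256v1 -noout -out ec_private.pem
openssl ec -in ec_private.pem -pubout -out ec_public.pem

# Sign the artifact with the private key, storing a detached signature
openssl dgst -sha256 -sign ec_private.pem -out artifact.sig artifact

# Prove the signing worked by verifying with the public key
openssl dgst -sha256 -verify ec_public.pem -signature artifact.sig artifact
```

On success, the final command prints "Verified OK".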
So hopefully you can see the screen switch away from my terminal into the browser, and I'm presented with, again, one of those login screens that you may have seen, where it says log in with GitHub, log in with Google, log in with some other identity. In this case, sigstore is not managing a list of usernames and passwords; we never get your username and password. We're relying on a delegated token generated from one of these two providers today; we're probably going to add Microsoft and many more to come. But for the sake of this demo, we'll log in with Google. I choose my email address here, and then I come back and it says, hey, great, a token was granted. Inside of that token there are a couple of different things, but mainly the one we're looking for is just an email address. So what I'm going to do is quickly execute a little bash trickery to pull out the email address, and we'll print it there; you can see my username at redhat.com is what's printed out. I'm going to sign that email address using the same private key, and the reason I do this is that it proves I actually have possession of the private key at the same time that I submit this to Fulcio. So I quickly sign that email address, and then I send a little bit of information over to Fulcio. Now, this is a rather long bash command, but I'm doing this deliberately just so you can see that at the end of the day we're sending JSON over an HTTP POST: we're including the public key, and we're including that signature. Assuming that worked, we're now going to throw away the keys, because we don't need them anymore, and look at the signed certificate that came back from Fulcio. A couple of things to call out: if you're not familiar with X.509 certificates, this may seem like Greek to you, but at the end of the day, we want a linkage between identity and possession of the key pair. 
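The "bash trickery" to pull the email claim out of an ID token can be sketched like this. The token below is fabricated purely for illustration; a real ID token is a signed JWT from the provider, whose payload segment is base64url-encoded without padding, so decoding it may require restoring the padding first:

```shell
# Fabricated ID token: header.payload.signature (for illustration only)
payload=$(printf '{"iss":"https://accounts.google.com","email":"jdoe@example.com"}' | base64 -w0)
token="fake-header.${payload}.fake-signature"

# Split out the middle (payload) segment, decode it, and extract the email claim
email=$(printf '%s' "$token" | cut -d '.' -f 2 | base64 -d | sed -n 's/.*"email" *: *"\([^"]*\)".*/\1/p')
echo "$email"
```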
You can see inside the certificate that the identifiers we specified, as well as the public key that we generated, are all stored in this code signing certificate. And we've got the appropriate flags set to say, hey, not only is this a cert with a public key, but it's meant to be used to verify signatures. You've got a root CA in the list as well, just in case you're interested and want to import that into your trust store. So now I've got a signed artifact, I no longer have any key pairs, and I've got the code signing cert that I can use to prove this. The next step is to actually submit this to the Rekor transparency log. So I'm going to use one of our CLI tools, called rekor-cli. We upload the artifact itself, the detached signature, and the public key, and we just tell the command line utility here that we're using X.509 for the PKI collateral. Again, you could do this with curl, but that submission is pretty nasty, so we'll just use the CLI instead. So we've submitted all that information up to Rekor. You'll note that we did actually send the entire file up to the system, but that file is not stored in the transparency log itself; only the SHA-256 digest value is actually stored in the log. So again, if you were to sign a piece of software, you don't necessarily want this to be a distribution vehicle for it. We only require that the file be sent so that the hash can be computed and verified. We now see, hey, there's a new artifact that has been added to the transparency log. We could fetch it by putting that URL into the browser, and we would get back some JSON content. But more meaningfully, we'll use another sub-command called verify, where we pass the same exact credentials, and we'll be able to see that not only is that entry in the log, but we'll also see a ton of SHA hash values. 
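Since only the SHA-256 digest of the artifact ends up in the log, you can compute locally the same value that should appear in the Rekor entry. This snippet recreates a stand-in artifact so it is self-contained; the `rekor-cli` flags in the comment are indicative of the upload step, but check your version's help output:

```shell
# The log entry records the artifact's SHA-256 digest, not the artifact itself
printf 'pretend this is the artifact' > artifact
digest=$(sha256sum artifact | cut -d ' ' -f 1)
echo "$digest"   # 64 hex characters; compare against the hash value in the log entry

# The upload itself used rekor-cli, roughly along these lines (not run here):
#   rekor-cli upload --artifact artifact --signature artifact.sig \
#       --public-key ec_public.pem --pki-format x509
```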
I'll go into how this is all built together a little later in the talk, but long story short, this is the mathematical backing behind the concept of a transparency log that allows it to be append-only and immutable, and this actually displays all of the math that you would need to do to verify that that particular artifact and that particular signature are actually in the log. So now that I've wowed you with a ton of CLI content, I'm going to hand it over to Ivan, who's going to show you the integration with Tekton. Ivan, take it away. Thanks, Bob. Yeah, so my name is Ivan Font. I'm also in the Emerging Technologies group within the CTO office, and as Bob mentioned, I'll be demoing the integration of sigstore within OpenShift and show you an example of how you can integrate this into your CI/CD workflow. So here, there's an app that we have running on an OpenShift cluster; let's quickly take a look at that. We'll just open up a tab here... We can't see your screen, so you may want to share it. Oh, thanks for that. Yeah, it would help if I shared, wouldn't it? All right, so you should be seeing the terminal here, and we'll go ahead and open up a web browser. As I mentioned, we'll open up a tab here, and you can see that we have a demo app running. It's just a simple Hello World app. Let me show you quickly what the contents of this repository are. We have this repo hosted on GitHub, just a simple Go web app that displays a Hello World message. We have a list of maintainers; as Bob was mentioning, one of the practices you can follow is to list the maintainers for a particular repository, which you can then use to apply policy and governance decisions against your signed artifacts. We can also see that we have a Dockerfile here, and this is used to actually build the image that's hosted in this repo. 
Here we just have a builder image that we use to build the Go app, and then we use a prebuilt UBI minimal container image with a stable tag to run the application. So let's say you come in on a Friday and you have some work to do; you have to make a change to your app. First, we have to build the base image for the application that we're using here, the Hello World application. So we do a podman build; we have a Dockerfile for the base image, and we go ahead and build that. We push the built container image to the container registry. Now we'll make some changes to the app here. Let's just change something, say, "Hello, OCB World." We'll save that and commit the change. So you've come in and made a change on a Friday, and now we have a pipeline that gets kicked off, which you should be able to see. We have a pipeline in our OpenShift cluster, running in OpenShift Pipelines, built with Tekton. You can see that we have several tasks that run as part of this pipeline. We fetch the repo; this is the Git repo with the commit that I just pushed. Then we build the image; there are a few steps here. We build the image and push it using Buildah to the integrated container registry running inside of OpenShift. Once that image is built, we inspect the image and extract the base image that was used to build this web app. Once we extract the base image, we pass that to a task that will verify the base image. There are two steps to verify it, which I'll go into once we get to those steps, but one of them runs a tool called Cosign, which Bob mentioned earlier; it's the tool that allows us to sign container images, and we'll go into a little bit more of that. 
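A rough sketch of how the verification stage of a pipeline like this could be declared in Tekton follows; the task name, parameter, and container image reference are illustrative assumptions, not the exact definitions from the demo:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: verify-base-image            # hypothetical task name
spec:
  params:
    - name: base-image               # image reference extracted by the previous task
      type: string
  steps:
    - name: cosign-verify
      image: gcr.io/projectsigstore/cosign:latest   # placeholder image reference
      script: |
        # Fails the task (and therefore the pipeline) if no valid signature is found
        cosign verify $(params.base-image)
```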
Then we extract the email from the actual certificate, which refers to the identity that was used to sign that image, and we pass that to an OPA policy that verifies that the image was signed by an entity that's in the maintainers list you saw earlier. And then we apply the manifests and update the deployment so that the changes get deployed into our cluster and go live. So here we can see we're building this; we'll push that, and then we'll get the base image. As I was mentioning, now that the image is built and pushed to the integrated container registry, we extract the base image from that built image. This base image is the UBI minimal that we built and pushed to the container registry right before we launched this pipeline. You can see here, as we inspect the image, this is the image that was built; it's in the integrated registry as part of OpenShift, and it was built using the base image that you can see here, that UBI 8 minimal. Then we go to verify the base image, but we didn't actually sign that image; the base image wasn't signed. So we fail here, because we're trying to retrieve the signature for that image and it doesn't exist. That's what the error is saying; it's basically saying that verification failed. So we actually fail the pipeline, because the image isn't even signed. Let's transition back to the terminal and actually sign that image. Here, Cosign is generating an ephemeral key pair and then calling out to Fulcio, which gets the OIDC provider to authenticate us and is then able to provide that certificate. We'll go ahead and log in with Google, and here we'll just use my ifont@redhat.com. We authenticate successfully there, and then what Cosign will do is actually use the generated ephemeral key pair. 
It creates the signature, and then it uses the certificate that it got back from Fulcio to upload a new object to the container registry that contains the signature and the certificate and chain, along with a reference to the image, the SHA of the image that was signed. So now we can rerun the pipeline that just failed. This step will take a little bit, so as we wait for it to propagate through, let's go back to the slides and I'll talk a little more about these particular pieces. So Cosign is a project within the sigstore umbrella in the GitHub org, and that's the cosign command that you saw there signing the container image. Its architecture, as I spoke a little to, is basically laid out here. You can see that Cosign will generate that ephemeral key pair; you can also use a key management system or any existing key pair if you have one, so we try to accommodate all the different use cases. It then requests a code signing certificate from Fulcio, which, as you saw, calls out to an identity provider like GitHub or Google, in this case, and basically authenticates that you're the owner of the email account associated with that private key. Once we get that back, Cosign downloads the container manifest from the registry; this contains the reference to the image that you want to sign. It generates the signature using the ephemeral key that it generated, attaches the certificate and chain, and uploads the signature, the public key, and the certificate and chain to the container registry as a new OCI object. And after it does all that, it creates an entry in Rekor using similar content: an entry in the transparency log for the signed container. Let's go back and take a look to see how our pipeline is doing. It looks like the login expired here; let's go ahead and log in. That's taking a little while... there we go. 
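The payload Cosign wraps the signature around follows the Red Hat simple signing format mentioned above; a sketch of what gets dumped might look like this, where the image reference and digest values are placeholders, not the actual ones from the demo:

```json
{
  "critical": {
    "identity": {
      "docker-reference": "image-registry.openshift-image-registry.svc:5000/demo/base-image"
    },
    "image": {
      "docker-manifest-digest": "sha256:<digest of the signed image manifest>"
    },
    "type": "atomic container signature"
  },
  "optional": {}
}
```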
All right, how are we doing here? Okay, so we're getting the base image, as we can see. Then we verify the base image, and here you'll see that Cosign runs through those same steps I just showed. We verify the image, and Cosign outputs here that the claims were validated. You can actually add annotations as well, if you want, to the signature that gets uploaded; you can add any sort of additional annotations. It also verifies that the entry existed in the transparency log along with those claims, so the signature was integrated into Rekor. Any certificates were also verified against the Fulcio root that we obtained, and then you can see that the certificate has the common name here, ifont@redhat.com, and you can see the Red Hat simple signing spec payload that gets dumped as well; that's the payload for the object that gets uploaded to the container registry. Once we get that common name, we apply our OPA maintainers policy, which says ifont@redhat.com is an authorized maintainer for this image. So we pass the OPA maintainers policy, and then we apply the manifests and update the deployment. Let's go take a look at the deployment, and if we refresh, we see the updated "Hello, OCB World" change that we made a little bit earlier. Okay, great. So we've shown that we can sign the container image, and we won't go through the CI/CD flow successfully until that image is signed. But what happens if credentials are stolen? Let's say you leave for the weekend, and over the weekend somebody steals the credentials to the container registry that you used to upload the base image, as well as the credentials you used to sign that base image. Let's go through that example. We have another Dockerfile here; this is the exploited image that the malicious black hat user, which I'm now representing, wants to build. It's just a simple "echo something malicious" for the sake of this demo. 
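The OPA maintainers policy itself isn't shown on screen, but a minimal sketch in Rego, assuming the maintainers.json contents are loaded as `data.maintainers` and the common name extracted from the certificate arrives as `input.email` (both assumptions), could look like:

```rego
package maintainers

# Deny by default; allow only if the signer's email appears in maintainers.json
default allowed = false

allowed {
    some i
    input.email == data.maintainers[i].email
}
```

The pipeline task would then fail the run whenever `allowed` evaluates to false.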
So let's go ahead and build that malicious image and push it to the registry. Again, we have the stolen credentials, so we are now authorized and able to push a new updated stable tag for this base image. Let's say this is a very sophisticated black hat hacker, and this user also knows that, hey, we're using Rekor and the sigstore architecture to create a transparent, trusted source for anything that's signed. So this user knows they have to create an entry in there; otherwise, there might be some red flags raised. So this user goes ahead and signs the actual base image. This is a malicious image, and they're signing it. This goes to show you that the transparency log can record both good and malicious intent, and the idea is that it needs to be monitored. So here, the user can log in with whatever identity provider they want to use. In this case, they'll use Google, and they don't have access to my ifont@redhat.com. Obviously, here I have both accounts; I'm representing both, putting on both hats. But this person has their own email that they can authenticate with, and it's just an evil persona, you know, hackerlols@gmail.com. So that's authenticated there, and we can see that Cosign generated an ephemeral key pair for that signature, grabbed the certificate from Fulcio, which authenticated with Google again, and then uploaded the signature, the payload, and the certificate chain for that container image, and created a transparency log entry through Rekor. So we've now pushed a malicious image out there, and it's open to be downloaded and used by others. So let's make another change. You come in on Monday, and you do not know what happened over the weekend. So let's come in here and make another change: "Hello again, OCB World." 
So you come in Monday morning and make this change. Let's commit another change here and push that. And let me just draw your attention again to this maintainers.json file here. This is the list of maintainers for this repo, and the only maintainers are Bob Callaway and myself, Ivan Font, with our email addresses. So if we transition to the pipeline, we have another pipeline running here, and as this goes through, we'll see what happens when we try to apply that policy. Let me transition back to the slides, and while we wait for that, we can quickly cover a little bit of the Rekor architecture. So Rekor is another project under the Sigstore umbrella, and this is the transparency log that maintains the signature log for the different artifacts that get signed. The architecture here is that Rekor will only insert records into the transparency log with signatures that it verifies. So the certificate or public key that's used to sign must match the signature that's used to create that record, and Rekor will verify that. The developer uses whatever key pair they like, whether it's keyless or a key management system, signs and publishes those artifacts, and publishes the signatures to Rekor. And let me cover a few more points here: the artifacts are not actually stored in the transparency log, as was alluded to a little bit earlier. We do have some ideas around that, but right now only the digest, the signature, and the artifact signing certificate or public key are stored in there. And there's a REST API that you can use to keep appending to the log, and you can submit requests such as an inclusion proof to verify that a particular artifact or signature exists in the log.
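The inclusion proof mentioned above is the standard Merkle audit path: given one leaf and a few sibling hashes, a client can recompute the log's root and compare it against the signed root it already trusts. Here is a sketch of that check using the RFC 6962-style domain-separated hashing; for brevity it assumes a tree whose size is a power of two, whereas Rekor's real proofs handle arbitrary tree sizes, and the entries below are made up.

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # 0x00 prefix domain-separates leaves from interior nodes (RFC 6962).
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    # 0x01 prefix for interior nodes.
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(index: int, leaf: bytes, proof: list, root: bytes) -> bool:
    """Walk the audit path from the leaf up to the root; the recomputed
    root must equal the trusted one for the entry to be included."""
    h = leaf
    for sibling in proof:
        h = node_hash(h, sibling) if index % 2 == 0 else node_hash(sibling, h)
        index //= 2
    return h == root

# Build a toy 4-entry log and prove that entry 2 is included.
entries = [b"sig-entry-%d" % i for i in range(4)]
leaves = [leaf_hash(e) for e in entries]
l01 = node_hash(leaves[0], leaves[1])
l23 = node_hash(leaves[2], leaves[3])
root = node_hash(l01, l23)
print(verify_inclusion(2, leaves[2], [leaves[3], l01], root))
```

The client only needs log-depth many hashes per proof, which is why monitors and verifiers can audit a large log cheaply.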
And we also have a public good instance that, again, needs to be publicly monitored, because as you just saw, the signature transparency log can be used with good or malicious intent. So it needs to be monitored to keep track of those types of things. We have a Rekor manifest schema; this is kind of the basic version here. The spec basically just contains the signature information. We support different signature formats, like GPG, X.509, Minisign, and you have the URL here to the signature itself, along with the public key that was used. And then the data itself: again, it's just a link to the data, not the actual data, along with a hash of that data to verify the integrity of the content. Rekor has been built with extensibility in mind. There's a pluggable PKI interface that supports multiple different formats, X.509, GPG, and so on, and we have plans to add further signing systems. It also has a pluggable supply chain format, so we can support things like OCI or Notary v2 signatures in the future, whenever that becomes ready. And it's designed to work with systems like The Update Framework and in-toto, as well as things like RPM signature transparency, DBoM, and SBOMs. All right, let's go back and see how we're doing on the pipeline. We went ahead and verified the base image. Remember, this malicious black hat user actually signed the image, so everything was validated, everything existed in the transparency log, everything checks out okay here. And you can see the certificate actually contained the common name of the user's email address here, the hackerloals account.
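To make the "link plus hash, never the data itself" point concrete, here is a rough Python sketch of an entry in that shape together with the client-side integrity check. The field names only loosely mirror Rekor's record schema, and the URLs and artifact bytes are placeholders, not real endpoints.

```python
import hashlib
import json

artifact = b"example release tarball bytes"

# Illustrative record: the log holds the signature and key references plus
# a digest of the artifact, while the artifact itself lives elsewhere.
entry = {
    "kind": "rekord",
    "spec": {
        "signature": {
            "format": "x509",
            "url": "https://example.com/artifact.sig",
            "publicKey": {"url": "https://example.com/signer.pub"},
        },
        "data": {
            "url": "https://example.com/artifact.tar.gz",
            "hash": {
                "algorithm": "sha256",
                "value": hashlib.sha256(artifact).hexdigest(),
            },
        },
    },
}

def content_matches(entry: dict, blob: bytes) -> bool:
    """Verify a downloaded blob against the digest recorded in the log entry."""
    h = entry["spec"]["data"]["hash"]
    return h["algorithm"] == "sha256" and \
        hashlib.sha256(blob).hexdigest() == h["value"]

print(json.dumps(entry["spec"]["data"]["hash"]))
print(content_matches(entry, artifact))
```

Because only the digest is logged, anyone who fetches the artifact from its published URL can still detect tampering without the log ever having to store or serve the content.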
Unbeknownst to anyone trying to use this image, it's malicious, unless we have an actual OPA policy that enforces the list of maintainers. And we can see that it fails the OPA policy, because the certificate with this email does not match the list of maintainers from the repo, which, as we showed, was Bob Callaway and myself only. All right, I think that's it, Bob, back to you. Yeah, awesome. You wanna jump back to slide 24, please, Ivan? Sure. Just in the interest of time, because I wanna leave a few minutes for Q&A, we'll jump to the roadmap for some of these projects. You can view all of this at our community repo as well, so if you have thoughts or ideas, file those as issues; whether it's how we can make the usability better or other ideas for integrations, we'd love to hear what folks think. I think the example that Ivan showed today is pretty powerful, that we can actually include signing, and a policy around signing, in our build pipelines. There are other places within OpenShift or Kubernetes that we could potentially integrate with. There's a concept of admission controllers that can decide whether or not we want to create an object on Kube, and we could use that to enforce only running signed content. We could look at integrating with schedulers to ensure that we only run trusted content on particular nodes. Or we could go down to the lower levels, integrating with containerd, Podman and whatnot, to catch this at the runtime layer, so we would only launch the image if we were able to verify the signature. We've started looking into all of these areas, and we think we'll be able to offer a variety of different approaches going forward. I think the other thing, too, is that while we showed a pretty basic example with the maintainers list, OPA and other policy engines certainly have a larger feature set that we can take advantage of.
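As a sketch of the admission controller idea mentioned above, a validating webhook answers each AdmissionReview with allow or deny, and the signature check would slot into that decision. This is not Sigstore's implementation; the verification step is stubbed out as a set lookup, and the image digests are made up.

```python
import json

def admission_response(uid: str, image: str, signed_images: set) -> dict:
    """Build a Kubernetes AdmissionReview response that denies pods whose
    image fails the (stubbed) signature check."""
    # Stand-in for a real Cosign/Rekor verification of the image signature.
    allowed = image in signed_images
    resp = {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {"uid": uid, "allowed": allowed},
    }
    if not allowed:
        resp["response"]["status"] = {
            "message": f"image {image} has no valid signature from a trusted signer"
        }
    return resp

signed = {"quay.io/example/app@sha256:abc123"}
print(json.dumps(admission_response("42", "quay.io/example/app@sha256:abc123", signed)))
print(json.dumps(admission_response("43", "quay.io/example/app@sha256:def456", signed)))
```

The same allow/deny shape is what a scheduler plugin or runtime hook would ultimately need to produce, just at a different enforcement point.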
So we'll look at improving and formalizing those extension points. And finally, there's solving this problem more broadly. If we don't get that list of community projects that I opened the presentation with to start signing their content, and get people used to verifying that content, then all of this is just a fun technical exercise. So we are also working with many of the most popular open source community projects out there, whether it be the package managers for Ruby or PyPI, or popular distributions like Fedora, to make sure that the content they distribute is signed, and that we make it very easy and simple to use and integrate with their build systems. There's a lot of work to do to close the loop on both the policy side as well as getting people to adopt this, but we're seeing a ton of interest in the community. We've got hundreds of people in our Slack instance. So we're pretty excited about what the last couple of weeks have ended up being for us, and we think there's a ton of awesome work to be done. Next slide, please. So as we alluded to, we launched a couple of weeks ago, so we're still in a soft launch phase, if you will. We've made a public statement that while we are running these services live, they are not to be considered production grade. We have no SLA around them, but they are available for folks to go play around with. Everything that we showed today was a live demo; none of that was hacked up behind the scenes. You can go back and recreate everything that we showed today. We use REST APIs, and we have OpenAPI specifications published for all of them, so you can go and play around with those as well. If you choose not to use our CLI, that's fine.
You can use the APIs directly, as I showed. But we're in the soft launch phase to work out some of the bugs and make sure that we've got some of the use cases really hardened, and we look to go into more of a production capacity this summer. Next slide, please. And as I mentioned, if you use the analogy of Let's Encrypt, part of the reason why people can fundamentally trust Let's Encrypt is because it's not owned by one single corporate entity. This is not just Google, or IBM, or Red Hat, or Microsoft saying, hey, we're a valid certificate authority. We knew that for this to be successful in the public good model, we needed an open consortium and a neutral third party that could run this just for the good of not only open source, but the entire software industry. And that's why we've partnered with the Linux Foundation, who are actually behind the scenes for Let's Encrypt as well. So we're relying on some of their infrastructure, outreach, and guidance to help get this launched and off the ground. But while we are running these soft launch services ourselves today, we hope that when we transition to more of a production capacity, it will be officially run by the Linux Foundation. Next slide, please. So in summary, click through the animations, please. This is 100% open source. You would expect no less from Red Hat and Google and others here, but all of the tooling, everything that we're running, is publicly available. The system is live, and we are actively in development. Like I said, we've got a lot of great activity going on in Slack and on mailing lists. It's totally free to use; there's no cost required. We're going to support a number of identity providers, so that you're not locked into only using GitHub or only using Google. As we showed today, we're going to add many of the more popular ones as well.
And then finally, we've got a blossoming community, and we would love to see folks who are watching this come join, or come play around with some of the technology that we've made. At the end of the day, open source is successful when folks jump in, share their ideas, and help contribute. So this is kind of a call to action: if you're passionate about something that we've talked about today and you'd love to help or give us feedback, please jump in the Slack channel or shoot us an email. We'd love to hear from you. So I think with that, we'll jump to questions. Thanks, Bob. Thanks, Ivan. Looks like we have a question. I may have missed it, but can this be used to verify third-party resources, such as code, images, et cetera? Yeah, great question. Short answer is yes. The Cosign tool is container-specific, but as I showed with generating random content, there's nothing here that says we can't use this system to sign XML files, JSON files, pretty much anything that you could sign with OpenSSL we could put into these transparency logs. So we're trying to go after one of the most popular vehicles for distributing software today, which is why we're working with folks like the Java community, the Node.js community, et cetera, as well as some of the Linux distributions. But the use cases here are pretty much content agnostic, so we think this model is pretty powerful. I mean, as I was watching both of you talk about it and demo it, I'm sitting there thinking, okay, this project could really use this, or this project; it's one of those take-over-the-world type projects. So where do you see this going? You mentioned production, being able to use this in production environments; when do you see that happening? Like I said, give us a few more weeks here.
We are putting the system through its paces with many of the community members and partnering with many of the open source projects. But if you look at the transparency log component, we're actually reusing an open source project called Trillian, which came from Google. That runs the transparency log service behind Certificate Transparency. This is an IETF RFC standard that's run by Google, Let's Encrypt, Cloudflare, and many of the other popular providers today, using the same immutable concept of an append-only transparency log to attest to which SSL certificates have been generated. So that has been battle tested up to thousands of transactions per second under full load from many of the popular web browsers that call the system on a regular basis. We are pretty confident that we've built on top of scalable infrastructure to start, but we don't want to just go in with assumptions; we want to test that out. So we think by the end of the summer we will have a system that we can turn into production and get folks to start using. And the other thing about production I'll note as well: for all our Go programmers out there, whenever you pull down a Go module to include in your program, certificate transparency is actually used behind the scenes there. So, again, we're building off of a system that's already been battle tested for quite a while. That's pretty impressive. Saying "in the summer," we're going to hold you to it. I'm on record now, so we've got to do it. But like I said, with the great community involvement and partnering with folks from the LF, we've made a ton of progress over the last couple of months. No reason to see that momentum stopping. And Ivan, where do you see it going? Where would you like to see it go?
Yeah, I mean, I think I'd like to see it go everywhere open source security is needed, which is pretty much everywhere. I think we still have some more work to do, definitely, to get it to a point where it can start to be adopted more widely. As Bob mentioned, probably a few more weeks to get the instance a little bit more battle tested. But yeah, I would like to see this expanded to a bunch of different open source projects. Go modules is a good example. I think Kubernetes as well is a project that's not using any sort of signing on the artifacts it produces, so for the cloud native space that's probably one of the first things we need to tackle. And one other point I think we've made, but it's worth making again: this is a modular system. If you already have your own certificate authority behind the firewall, we can integrate with that. If you're already managing your key pairs today, we can certainly integrate and use that. You can run just the signature transparency log if that's all that matters for your particular use case, you can replicate the entire infrastructure that we're running for the public service behind your own firewall, or you can leverage the public service. So the intent here is not to just stand up a single entity; it's really to meet customers and developers where they're at and just get them signing things. Let's make sure that the things being generated are signed, and that we get people to actually start verifying those signatures. As soon as we do that, we're going to get to a spot where hopefully we've made some progress in addressing some of the attack vectors in the supply chain. And I know Luke is on, I think, still. Nope, he may have dropped; I was gonna see if he wanted to jump in. But any closing thoughts?
Honestly, right after this call I'm gonna go check it out and try to install it. It was a fantastic presentation and demo. Thank you. No, thanks. Like I said, this is a chicken-and-egg type thing. If nobody signs, then what's the point? So I'd really encourage folks not only to play around with it and get familiar with it themselves if you're a developer, but also to lobby your favorite projects, right? Send them a link to our website. Send them a link to our blog. Make them aware that, hey, there's innovation happening in this space that they can take advantage of. And so, as part of your role in the community, even if you're not a coder, you can advocate for these better practices, whether it be software signing or just general best practices with security. So there are other ways folks can get involved beyond just showing up and writing code. Absolutely, yeah. And just to add to that as well, it's kind of a network effect sort of problem, right? Because as more projects start using it, eventually that grows and grows, and it may become the default. And then if you don't do it, it's seen as a security risk, right? So get the word out; as more projects start using it, we have the infrastructure there to meet people where they're at, as Bob was saying. So let's get more projects in here. Yes. Well, thank you. And we will be posting those slides. If you are watching this on YouTube or on Twitch, you can find them there, as well as the recording, but we'll send that out. And thank you, everybody, for joining us, and go to sigstore.dev. Am I remembering that correctly? Awesome, and slack.sigstore.dev. All right, we'll meet you all there. Thanks again, Ivan and Bob. Really appreciate it. Thanks, everybody. Yeah, thanks, everyone. Thanks for having us.