Good to see everybody. I'm Luke Hinds and I work in Red Hat's Office of the CTO, where I'm a security engineering lead. There's a group of seven of us engineers, all working on security in different domains. And I'm going to present a new project to you today. It's only around four to five months old, but it's gained quite a bit of traction within a small amount of time. So to kick things off, something I have to be upfront about: we've had to change name. Originally we were called Rekor, which we really liked, we really liked the timbre of the word. Rekor is Greek for record, all seemed great. And then we found out there was in fact an AI company with that name, and they were trading on the NASDAQ. So because of possible trademark complications, we had to pivot and change our name. We're now Sigstore. This is very recent, so we're just migrating our namespace across to this new naming convention, and there's some residue of Rekor in the slides. Ignore that and just think of Sigstore. So, software supply chain. I'm guessing a lot of you folks have an interest in security in some way, or you work in security, so I don't really need to spend too much time outlining the problems in the software supply chain. There's a myriad of attacks that have occurred. A very recent one was the SolarWinds attack, which got a lot of attention, rightfully so. But we see the same thing with open source projects as well. There are targeted attacks, keys get compromised, accounts get compromised. Many people use single sign-on to access other systems, and if a nation state wants somebody's Gmail account, they will get it. It does happen. Malicious hashes are a thing. I don't know if anybody remembers Linux Mint. The poor folks there had a WordPress site. They released an ISO, and they used to put an MD5 checksum for that ISO on the same website that hosted the ISO.
Somebody hacked the WordPress website, swapped out the ISO, generated a new MD5, and put it there. 10,000 people downloaded and ran a malicious image of Linux Mint. A lot of the time with open source projects, build systems are open. People can go and look at YAML configs, they can look at bash scripts, and they get an idea of how things are built and what sort of components are involved. So these are easy to recce, as we say. Typosquatting happens quite a lot in packaging systems: somebody will come in with a name that's very close to an existing library, people make a typo, and they pull in the wrong library. And quite often maintainers' accounts are taken over. They might no longer have an interest in the project, and then somebody, through some sort of social engineering, gets to take over the management of the package and backdoors it. Okay, and one thing I want to be really clear about here: I'm not talking about Red Hat and Fedora's build systems. This is generally about the wild west of open source and its ecosystem. So I don't want to see a post on Hacker News saying a Red Hat engineer talks about how weak Red Hat systems are. We're not doing that. We're looking at the overall wider ecosystem, but that still affects somebody such as Red Hat, because like any large software organization, a lot of the time we consume and ingest these various projects. Okay. So what really got this project underway was that we were inspired by, I don't know if inspired is the right word, but we took the example of an existing technology called certificate transparency. And I will go into what that is. Certificate transparency uses a very simple cryptographic structure, a sort of binary tree if you like, called a Merkle tree, and Merkle trees are something that I've loved for a long time, because they're very easy to construct and relatively easy to query.
And they're utilized a lot. Git has a form of Merkle tree; blockchain transactions use a Merkle tree. BitTorrent was one of the first to prolifically use Merkle trees, so that file parts could be hashed and a tree computed over the whole thing, say for example StarWars.mp4 or whatever. You could take a file, split it into many parts, hash those file parts into a tree, and then have a root hash. So Merkle trees are really great, and I will go into them as we go through the slides. So, before certificate transparency, this was the very simple flow of how things would occur. My admin would say: I want a TLS certificate for redhat.com, here is my certificate signing request. So they would generate a key pair, compute a CSR, and send it to the CA, the certificate authority. The certificate authority would do some sort of checks. A lot of the time they'll check for a TXT record. So they do things behind closed doors, and then they go: yeah, okay, you're good, here's your signed certificate for redhat.com. Then my browser would say: hey CA, I'm going to redhat.com, is this right, have you signed this particular certificate? And the CA will say: yeah, sure, you're good. And then you get a little green padlock and everything looks great in the world. But the thing is, we're putting all of our trust into this CA. And the CAs, I don't want to give CAs a bad name here, but they do operate behind closed doors. We don't have any insight into how they sign things. They might let their processes be known roughly, but we can't actually see this happening. There may be humans that make mistakes or are manipulated; the systems that automate the signing could be compromised. We've got no visibility there; we just have to completely trust them.
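The file-splitting idea can be sketched in a few lines. This is a minimal Merkle root computation, assuming one common convention for odd leaf counts (duplicating the last node); real systems such as Certificate Transparency use a slightly different scheme.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Hash each chunk, then pair and re-hash level by level
    until a single root hash remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:   # odd count: carry the last node up
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Split a "file" into parts, as BitTorrent does, and hash them into a tree.
chunks = [b"part-0", b"part-1", b"part-2", b"part-3"]
root = merkle_root(chunks)

# Tampering with any single part changes the eventual root hash.
tampered = [b"part-0", b"EVIL!!", b"part-2", b"part-3"]
assert merkle_root(tampered) != root
```

This is the property the talk keeps coming back to: one root hash commits to every part below it.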
That was pre-certificate transparency; now let's look at post-certificate transparency, where a new actor comes in called a certificate transparency log. A transparency log is a form of Merkle tree technology. So now what happens is you get the same flow that we had before: I'm redhat.com, can you sign this? Yeah, let me check, there you go, there's your certificate. The browser says: hey CA, are we good? Okay, but the extra step now is that the CA submits the signed certificate into a transparency log. And the thing with a certificate transparency log, being a Merkle tree, is that it has a nature we call immutable, or append-only, read-only. So when an entry goes in, you can't change that entry without changing the whole calculation of the tree. Effectively you would break the structure of the tree if you tried to manipulate any single entry in it. So you've now got an immutable audit of certificates that are signed by CAs. And then the browser can say: hey, certificate transparency log, have you got this certificate inside? There's an actual header for this called Expect-CT. Now, if the log doesn't have it, that is a signal: can I really trust this certificate? Because it's not transparent, it's not there for the public to scrutinize. redhat.com, for example, can't search the transparency log looking for people making signing requests against redhat.com. And this actually happened with Google and Facebook: a CA was tricked into signing certificates for somebody that was not Google or not Facebook. So somebody managed to get a certificate as google.com. Now imagine what you can do if you've got a node that you can bring up and claim is google.com; there's all sorts of damage you can cause. So there's a big push for certificate transparency logs to be used.
So for example, somebody like Red Hat can monitor the log, and if they see something come up that's not something they signed, that's not good; something's happened here, somebody's trying to do something nasty under our name. Okay, so certificate transparency logs: these are run by Let's Encrypt (they run one), Cloudflare, and a few other entities as well. They're public, so anybody can query them and anybody can attest to the state of them. They can calculate the leaves up to an eventual root hash and make sure the integrity of the tree is sound, so you can have public auditors. Okay. So it'd be really cool if we could do the same thing with software signing, because at the moment people sign software and there's no real transparency around that. You can't really be sure that what you're seeing is the same as what everybody else is seeing, and was it the actual person that's claiming to sign a piece of software, was it really them? So we thought about this: software transparency. If we had something like this, you could answer questions such as the following. Is somebody using my private key to sign artifacts? You don't know, but if there's a transparency log that you can monitor, then you can start to discover those sorts of behaviors. Somebody's been hacked, their keys are compromised or lost: what is the blast radius, have those keys signed other things? They've released an artifact or a binary, they've hashed it, they've signed it: who was it, and is everybody else seeing the same as me, or is this a targeted attack? So what can happen is you might have the latest release of ACME software, which is 1.9.7, while 1.9.6 contains CVE-XYZ, a really nasty, horrible vulnerability.
When you go to some site to get what you believe is the latest release, you could be targeted, which means the site renders 1.9.6 as the latest version for you, so you think great and you grab it. Meanwhile, everybody else is getting 1.9.7 because they've not been targeted, but you have no source to be able to look at what everybody else is seeing. So this is what we call a targeted attack. And then, how can you have an audit source of what's happened and be sure that it's tamper resistant? Because you have logging frameworks, like standard logging to disk, but these can be changed: people can get in, cover up activity, make edits, and change things to conceal any malicious actions they might have carried out. So we thought it'd be really great if we could use this for software transparency, so I started working on a prototype about four months ago, and since then we've had some other people come on board, and this is really starting to get some traction now. So, previous to Sigstore, keeping that pre-certificate-transparency versus post-certificate-transparency framing, we had a look around to see how many projects are signing stuff in the first place, let alone putting it into a transparency log. Are projects signing their releases? No, not many at all, and that includes some big projects, the likes of Kubernetes, for example. Now, some are, which is great, but you can see it's a bit of a challenge for them, and this is a bit of a problematic area. So to the left, this is Tails. Some of you have probably heard of Tails; it's a Linux distribution for journalists, essentially. Your kind of Edward Snowden would download Tails, run it in a VM, and then it's destroyed afterwards. Now, when you download Tails, they do sign it with GPG, but the key is on a website.
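The targeted-attack scenario above can be made concrete. This is a hypothetical monitor-side check, assuming a log whose entries carry a version and a timestamp (illustrative field names, not the project's schema): if what a download site serves you as "latest" disagrees with the newest release everyone else can see in the public log, that mismatch is the signal.

```python
def looks_targeted(served_latest: str, log_entries: list) -> bool:
    """Return True if the version a site serves as 'latest' differs from
    the newest release recorded in a public transparency log.
    `log_entries` is a hypothetical list of {'version', 'timestamp'} dicts."""
    newest = max(log_entries, key=lambda e: e["timestamp"])
    return served_latest != newest["version"]

log = [
    {"version": "1.9.6", "timestamp": 100},  # contains the nasty CVE
    {"version": "1.9.7", "timestamp": 200},  # the real latest release
]
assert looks_targeted("1.9.6", log) is True    # you were served the old one
assert looks_targeted("1.9.7", log) is False   # matches what everyone sees
```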
And then you don't know who owns the key, you don't know who else has got access to that key. Do you see what I mean? So you're kind of putting your trust into the website, really. I mean, it's better than no signing, so kudos there, but there are still some holes here. And then to the right, and I'm really not putting these folks down, because they are really doing the best with what they have here, this is the Node.js releases. For you to verify the signing of a Node.js release, you have to import all of these public keys. And then there are these people that may have signed releases in the past but are no longer on the project, and we're not even really sure where they are now. So again, I'm really not putting down these projects; a lot of projects are not even signed in the first place, so kudos there. So now we've got Sigstore. What I'm going to do is give a bit of a cartoony overview, and then for those of you wanting more of a technical deep dive, we'll start getting into the internals after this; this just helps give a good overall sense of the user experience for the whole audience. So what would happen is a developer goes: right, I'm going to sign my stuff. I've made a release, generated a key pair, I'm going to sign the release. And then afterwards, the release goes into the log; it's actually the signing materials that go into the log, and they will be validated. And then that developer can monitor that log for other people signing stuff using their public key. So there's a bit of mutual benefit going both ways here: the transparency log benefits from the developer using it, and in turn the developer can monitor the log to make sure they've not been compromised, that there's not somebody that has their keys or is trying to sign stuff with the same project name, et cetera.
Likewise, the user can say: hey, have you got this in the log, or is it just me, is there some sort of targeted attack? And then a third-party entity, I'm using Red Hat as an example here but it could be anybody, can also monitor the log to look at, for example, is somebody using our signing key, or is a developer that we know left a project still signing project X? You can start to build up all these behavioral analysis patterns, because the information is public and it has a good root of trust in the cryptography of the Merkle tree, which is open to be audited as well. Okay, so this is where we are right now. This isn't just a Google slide deck: we have a server that you can stand up, and a CLI that will submit a signing manifest, which will go into the transparency log, and people can do inclusion proofs. We have all of this right now, and quite shortly I'm going to go into where we ultimately want to get to, and we're starting to make plans to move that way. So, right, enough of the bad animations, let's have a look inside. Sigstore is a software supply chain transparency log. It's open source, Apache 2.0, server and client tooling, and instances are completely auditable by anybody; we're running a public instance at the moment for prototyping. So what happens is, with Rekor, we have manifests. Somebody will sign some sort of release. They could even sign a signature, or some digests. And these all get put into a simple JSON schema. Now, these schemas are customizable, so anybody can come up with their own schema type with their own data entries. The one I'm showing you here is the very simple, rudimentary one that we have, but you could put in things such as timestamps, compiler version, build server hostname, et cetera. It's really up to you to come up with your own design, if you need to.
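A rudimentary manifest along these lines might look something like the following. This is a sketch; the field names and the `extra` section are illustrative assumptions, not the project's exact schema.

```python
import hashlib
import json

# A hypothetical Rekor-style manifest; field names are illustrative only.
artifact = b"contents of acme-1.9.7.tar.gz"
manifest = {
    "artifactURL": "https://example.com/releases/acme-1.9.7.tar.gz",
    "artifactSHA256": hashlib.sha256(artifact).hexdigest(),
    "signatureURL": "https://example.com/releases/acme-1.9.7.tar.gz.sig",
    "publicKeyURL": "https://example.com/keys/acme.pub",
    # Extensible: projects can add their own fields, for example:
    "extra": {"compilerVersion": "gcc 10.2", "buildHost": "builder01"},
}
print(json.dumps(manifest, indent=2))
```

The URLs could equally be inline values (the raw signature and key), as mentioned in the talk.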
And we're also PKI pluggable, so we can work with PGP, X.509, minisign, and also, not mentioned here, RPMs as well. Essentially what happens is, as you can see, there's the artifact URL, the hash of the artifact, the signature, and the public key. These are all URLs here, but they could be inline as well, the raw keys, so to speak. These are sent to Rekor, and Rekor will then validate the entry: it will take the hash, the signature, and the public key, and it will verify that the signature was generated against that hash, that artifact. So we do a form of signing validation; that way we don't let any junk into the log. Anything can come into the log, as long as it's correctly signed. So as you can see, it's a pluggable framework, and the main things in the context of a simple manifest are the signature, the public key, and the content. Now, when these come into the tree, what happens is they are hashed. We don't store the binaries in the tree; we just store the manifest, and that's hashed into the tree. So what does that mean? Essentially, everything has a unique digest; we're talking SHA-256 here, for you crypto folks. These hashes are concatenated together, and then you go up a level, from two leaves to a single node, and a hash of those previous leaves is computed, all the way up the tree until you get to an eventual root hash. And when you have this sort of structure, you can do something called an inclusion proof. You can do that against the actual tree, or somebody could pull a local copy of the tree to monitor, if you wanted really fast, speedy operations, and then if you found something questionable you can do a full inclusion proof against the tree. Next is a very simple architecture slide, a layout of the various actors that are involved.
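An inclusion proof can be sketched like this: given a leaf, its position, and the sibling hashes along its path (the audit path), you recompute the root and compare. This is a simplified illustration of the idea, not the exact hashing scheme a production log uses.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, index: int, proof: list, root: bytes) -> bool:
    """Recompute the root from a leaf and its audit path of sibling hashes.
    The low bit of the index at each level says whether the sibling
    sits on the right (even index) or the left (odd index)."""
    h = sha256(leaf)
    for sibling in proof:
        h = sha256(h + sibling) if index % 2 == 0 else sha256(sibling + h)
        index //= 2
    return h == root

# Build a tiny 4-entry tree by hand.
leaves = [b"entry-0", b"entry-1", b"entry-2", b"entry-3"]
hashes = [sha256(l) for l in leaves]
left = sha256(hashes[0] + hashes[1])
right = sha256(hashes[2] + hashes[3])
root = sha256(left + right)

# Audit path for entry-2: its sibling's hash, then its aunt node.
proof = [hashes[3], left]
assert verify_inclusion(b"entry-2", 2, proof, root)
assert not verify_inclusion(b"entry-X", 2, proof, root)
```

The point of the structure is that a proof is only logarithmic in the tree size, which is why a client can check membership cheaply without downloading the whole log.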
So you can see you've got your creators: your developers, your build systems, your tools. They populate an entry using the manifest that we showed earlier. Just to mention, we're format agnostic, so it could be XML, JSON, YAML. Once the entries are in there, the auditors can start to measure the tree. These could be people like shodan.io, antivirus vendors, security researchers, et cetera, people that have a key interest in the supply chain. You have your clients as well: the people that ingest software dependencies, or pull down release artifacts and deploy them into a container or whatever. Essentially your end users. So just to quickly iterate again: we're not fixed to any signing type. You can use different signing types; at the moment we support X.509, GPG, and minisign, and others can be added. And then we also have pluggable types, as we call them, so you can set your own schema. So at the moment we are kind of live, in a kind of soft launch. There is a public log that can be used; we can't guarantee the data that's in there, because we've not gone into full launch. We have a simple REST API, so people can build clients to integrate against Rekor. You don't have to understand the complexities of Merkle trees and signed tree heads and computing inclusion proofs; all of that is taken care of behind a simple REST API that you can use, which is, coincidentally, what our CLI points at. This is OpenAPI, so you can go into Swagger and see all the various APIs that we have. So at the moment, project contributors are Red Hat. Purdue University got involved: there's a guy I've worked with before, assistant professor Santiago from Purdue University. He worked on a project called in-toto, and the professor that taught him worked on something called TUF, The Update Framework, so he started to work on this with us.
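A client integrating against a REST API like this could be as small as an HTTP POST of the manifest. This is a sketch only: the instance URL and endpoint path below are assumptions for illustration, so consult the project's OpenAPI/Swagger documentation for the real routes.

```python
import json
import urllib.request

REKOR_URL = "https://rekor.example.com"  # hypothetical instance URL

def build_entry_request(manifest: dict) -> urllib.request.Request:
    """Build a POST submitting a manifest to a Rekor-style REST API.
    The endpoint path here is illustrative, not the documented route."""
    body = json.dumps(manifest).encode()
    return urllib.request.Request(
        f"{REKOR_URL}/api/v1/log/entries",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_entry_request({"artifactSHA256": "ab" * 32})
# urllib.request.urlopen(req) would actually submit it; omitted to stay offline.
```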
And he's considered an expert in this space. And there's a guy I know from Google security engineering who's taken a good interest in this as well. We really want to see this be a public good service for the benefit of all, which is what this really brings to the table. So that is where we are now. We can stand up a signature transparency log, people can start putting signatures in, but we're still not there; we still have a problem. So I'm going to go into our evolution now. We've solved transparency, but still nobody's really signing code. And if we ask them why, it's because they don't know if they should, they don't know how to, or they don't care. Or it could be a mix of all three. And nobody's verifying signatures. Well, I don't want to say nobody, but a lot of people, a majority, are not, because they don't know what keys to trust, or the digest is stored in a README on a GitHub repo, or they don't know how to perform a validation. I'm talking about the majority here; there are folks that are more fluent in this ecosystem who will always do it and would dread not doing it, but unfortunately that's not everybody. So, I realize I'm getting close to time here, so I'm going to start speeding up. We did some due diligence on package managers. It's very mixed, as you can see: Kubernetes not signing anything; for Python and OpenSSL, the keys are stored on a website. Signing should be simple, but people are like: well, where do I put my private key, do I need to buy a YubiKey or an HSM? And what about my public key, do I put it on the repo, or a website, or tell people to search a GPG keyserver? What happens if I delete it, or lose my laptop?
What happens if my key gets compromised, what do I do? This is the sort of thing that's stopping people from implementing signing. So we thought: what if we could make this easy? What if there were no private keys to store? What if keys were ephemeral? What if we could use existing identity providers to tie it to an identity, public and transparent? So that is what we're working on now. We're looking at using X.509 because of the PKI infrastructure that we get; this is a technology that's supported everywhere, and it can be issued by a CA. We've been talking to Let's Encrypt, learning from their experiences there. So what we're going to do is start issuing certificates. Somebody will generate a key pair locally; it'll be ephemeral, short lived, it won't even touch the disk. They'll contact a WebPKI, and there'll be an OAuth2/OIDC identity verification. This gets us the email; we don't want anything else apart from that. That will then be injected into a certificate signing request, which the CA can sign. We put this into a transparency log so everybody can make sure the CA is behaving, and then the key can be used to sign the artifact. And the key thing, I'm really not going to have time to go into my big sprawling picture here, but the key thing this brings is a trust root. We can be sure that the person that was in control of this particular email account, this particular identity service provider account, is the one that signed this artifact. Now, somebody's account could be compromised, but if it is, we've got the transparency log, so we can start to monitor around certain email accounts from certain identity providers that are signing artifacts, and if they shouldn't be there, or you see somebody signing as you, you know something's happened.
So this isn't perfect, but we shouldn't let perfect be the enemy of good. This gets us into a much better position than we're in at the moment, because we finally have transparency and we can tie things to an identity. How am I doing for time? I'm probably just running out, aren't I? Should I keep going? [Host: Actually, you still have 10 minutes of the regular time, but after this we have about a 20 minute gap in this session room, so you can use the time if you want to.] Okay, I'll keep going. I'm getting towards the end now, and I imagine there might be a bit of Q&A as well. [Host: Yeah, there are a lot of interesting questions in the chat, so we can do it afterwards if you want.] That's fine, great. Okay, so let's look at an example journey. Somebody would log into the Sigstore WebPKI, and there would be an OIDC grant. You would see that familiar screen where you allow GitHub or Google to provide authorization to another application to access details from that identity service provider. Some applications will want your contacts, or access to your Google Drive; all we want is the email address, that's all. So there's no privacy concern here, and we use that email address to ensure it is in the certificate signing request. We also do a bit more around timestamps as well, but this way we know that the certificate we're signing, which will be used to sign an artifact, was requested by somebody that has access to that identity service provider. So they would receive the cert, they would verify that it's going into the transparency log, they would sign the package, and the keys can then be thrown away. In fact, these don't even touch disk; they're ephemeral keys. Then they publish the signature to the log, and then they can check the inclusion is in the log.
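The journey above can be sketched as code. This is only a shape-of-the-flow illustration: `toy_sign` is a keyed-hash stand-in for a real asymmetric signature (a real client would use something like ECDSA via a crypto library, and keyed hashes should never be used as signatures), and the cert/CSR dicts are hypothetical.

```python
import hashlib
import secrets

def toy_sign(private_key: bytes, data: bytes) -> bytes:
    # Stand-in only: NOT a real signature scheme. A real implementation
    # would use asymmetric signing (e.g. ECDSA) from a crypto library.
    return hashlib.sha256(private_key + data).digest()

def ephemeral_signing_journey(artifact: bytes, email: str) -> dict:
    """Sketch of the flow: the keypair lives only in memory, an OIDC grant
    proves control of `email`, the cert and signature go to the logs, and
    the private key is discarded at the end."""
    private_key = secrets.token_bytes(32)          # 1. generated in memory only
    # 2. OIDC grant happens here; all that is taken from it is the email.
    csr = {"email": email}                         # 3. identity goes in the CSR
    cert = {"csr": csr, "issuer": "example-ca"}    # 4. CA signs, logs the cert
    signature = toy_sign(private_key, artifact)    # 5. sign the artifact
    del private_key                                # 6. key is thrown away
    # 7. cert and signature are what get published to the transparency logs.
    return {"cert": cert, "signature": signature.hex()}

entry = ephemeral_signing_journey(b"acme-1.9.7.tar.gz", "dev@example.com")
```

Because the key never persists, "what do I do when my key leaks?" largely disappears as a question; the log plus the short cert lifetime take over that role.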
And then we have that trust root to the certificate transparency log as well. So here you can see the sign-and-publish part, where we have the developer. They authenticate with OIDC; authenticate is not really the right word, they get a grant scope. And then the CA will publish the cert into a log, and the developer can check that it's actually in the log. That's the first part. So an X.509 cert: there's a very simple example here, and you can see my email address is in there, because I was the person that had an OIDC scope with that email address. You then have key transparency: the log contains all the certificates that are issued. Developers can monitor the log for certs issued on their behalf, and you can audit the log for bad behavior. Users can discover keys in the log. You can do things such as: show me all the certs issued to lhinds@redhat.com, or show me all the certs within the Red Hat namespace. And this allows you to compute your own policies, effectively, because this information is available. X.509 certificates are short lived, signatures are published to a transparency log, and you can monitor the log for key compromises. So let's look at the second part, where we have the certificate transparency log and the artifact signing. Here it's quite simple: it's the signature, the public key, and the artifact hash that go in. And then, using the public key, we have the trust root to the certificate that was granted by the WebPKI. So signature transparency helps detect compromise, of course, and it can also act as a timestamp authority, which helps against targeted attacks and replay attacks. And then you've got discovery as well: anybody can say, show me everything that Linus Torvalds signed, or anything that Luke Hinds signed.
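Those discovery queries are just filters over public log entries. A minimal sketch, assuming hypothetical entries that carry the identity email:

```python
def certs_for_identity(log: list, email: str) -> list:
    """All certs the log shows as issued to a given email address."""
    return [e for e in log if e["email"] == email]

def certs_in_namespace(log: list, domain: str) -> list:
    """All certs issued to any identity under a domain, e.g. 'redhat.com'."""
    return [e for e in log if e["email"].endswith("@" + domain)]

log = [
    {"email": "lhinds@redhat.com", "serial": 1},
    {"email": "someone@example.org", "serial": 2},
    {"email": "other@redhat.com", "serial": 3},
]
assert len(certs_for_identity(log, "lhinds@redhat.com")) == 1
assert len(certs_in_namespace(log, "redhat.com")) == 2
```

A developer running the first query on a schedule, and alerting on any cert they didn't request themselves, is exactly the "monitor for certs issued on your behalf" pattern described above.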
Okay: which Node.js releases has Myles, a maintainer on Node.js, signed with his key? Show me everyone that signed a particular container image. You can start to pull these data sets out. And then we have trust. We now have a method for developers to get keys, a method for developers to publish and audit signatures, a method for users to find keys, artifacts, and signatures, and a method to verify that a signature happened while a certain cert was valid. So there's no need for revocation anymore; you don't have this problem of ownership around cryptographic keys. And then the final piece, really, is that we don't seek to be an arbiter of who you should trust. That's one thing to really bear in mind with a transparency log: there is no such thing as a bad entry. You want bad entries in the transparency log, because then we can start to make decisions on what is good and what is bad from that data set being available. If we tried to be the arbiter and only let good stuff in, then we wouldn't know about the bad stuff. Now, somebody could prevent the bad stuff from getting in, but that in itself is a signal that something is wrong. If I have something, and I check whether it's in the log, and it's not there, then that is effectively a statement you should use to question whether you can actually trust it, because for some reason somebody's kept that information concealed. And the root CA is also under scrutiny, because we have the same principle as we have in certificate transparency. So somebody, not us, somebody can start to build a trust policy. They can say: artifacts must be signed by lorenc@google.com, or they must be signed by one of these three email addresses, or two email addresses.
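A trust policy like that reduces to a simple threshold check over identities found in the log. A minimal sketch, with illustrative email addresses:

```python
def meets_policy(signer_emails: set, trusted: set, threshold: int) -> bool:
    """An artifact passes if at least `threshold` of the trusted identities
    have valid signatures over it recorded in the log."""
    return len(signer_emails & trusted) >= threshold

trusted = {"a@example.com", "b@example.com", "c@example.com"}

# Two of three trusted maintainers signed: passes a 2-of-3 policy.
assert meets_policy({"a@example.com", "c@example.com"}, trusted, 2)

# One trusted signer plus an untrusted one: fails the same policy.
assert not meets_policy({"a@example.com", "mallory@evil.com"}, trusted, 2)
```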
Then you have the thing of, you know, the two people that both have keys and both turn them at the same time so they can arm the nuclear missiles. It's that kind of thing: you can build up a consensus algorithm around what constitutes you being comfortable to trust something. So, for example: artifacts must be signed by at least so many of these. Okay. So the trust part, getting towards the final part here, is where we have the trust root. To verify a package, you would: find the package, retrieve the package, somehow find the signatures for the package, determine who the package should be signed by, find the certificates for those trusted entities in the log, check the signatures against the certificates, and verify that the certificates were valid when the signatures happened. And this then effectively gives you that trust root, which I've explained quite a few times now. There's a final part here: as if two transparency logs weren't enough, we've gone and created a third one. This is what we call the deprecation log. Now, this is exactly the same process as when somebody signs an artifact, but what they'll do is sign an artifact that's end of life: an old release, or a release that's unpatched and contains a nasty vulnerability. So they go through the same process again: they request a certificate, an OIDC session occurs, and the digest signature of the unpatched artifact is logged into the deprecation log. And then when a user downloads that old release, they can check whether it's in the deprecation log. If it is, they know that the maintainer, or at least the maintainer that had control of the account with the identity service provider, has signed the actual digest of that artifact as something that shouldn't be used. It's deprecated. Or did I say depreciated? Deprecated.
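The downloader-side check is just a digest lookup against the deprecation log. A minimal sketch, assuming the log is queryable as a set of hex digests:

```python
import hashlib

def is_deprecated(artifact: bytes, deprecation_log: set) -> bool:
    """Check whether the digest of what was downloaded has been signed
    into the deprecation log by a maintainer."""
    return hashlib.sha256(artifact).hexdigest() in deprecation_log

old_release = b"acme-1.9.6.tar.gz contents"
# Maintainer deprecated 1.9.6, so its digest is in the log.
dep_log = {hashlib.sha256(old_release).hexdigest()}

assert is_deprecated(old_release, dep_log)                 # do not use this
assert not is_deprecated(b"acme-1.9.7 contents", dep_log)  # current release
```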
Okay, so generally, if it's in the deprecation log, it's probably not a good idea to touch it. So this is the final part, and it's kind of a no-brainer, really, to take what we do with signing good releases: somebody can sign bad releases into the deprecation transparency log, and then, using our schemas, we can put extra details in, such as the CVE number, when it happened, et cetera, so meaningful information can come out of that. So, towards the end here: if anybody wants to find out more, this is our website. From here you can find, like I said, our OpenAPI details, and our GitHub organization where our code is. There are some details around how to bring up a server and how to run a client. And we're at the moment talking to various people to try and get this run as a public good service, there for the benefit of all, really. And that's the great thing about this: the more people that use it, the better it is for everybody. Anyone that uses the log gains protections, and somebody that monitors the log gains protections as well. So I'll stop there; I can see it's just ticking on to ten past. Let me just work out how to get back if I stop presenting. [Host: Yes, that's okay. So let's go over some of those questions.] Okay, sure. So we have a question from Ben Fisher: what specifically is stored in Sigstore, and what are all the files for a particular website or software release that are stored? So, in the signature log, like I said, it's customizable, it's extensible, but we store the digest. So if you have a tarball: the digest of the tarball, the signature, and the public key. And with all three of those, you can validate that a particular artifact has been signed correctly; that's all we need. And then we put the entry into the log. And somebody can point locally at the artifact, because we do actually hash the artifact at the server side as well.
Okay, so we perform a full validation, and if that computes, it goes into the log. Now, somebody could sign something bad. Okay, and that's fine: they can put it into the log, because we want to know about the bad stuff, you see what I mean? But that's our sort of generic, rudimentary manifest. No blobs go in there or anything like that, because that would obviously be dangerous; somebody could put illegal material into the log. So all we do is put in manifests. Another question: "Am I missing something? Doesn't it pull in the contents of the URLs as well? Any external reference needs a valid-until-at-least time on it." So, as I said, we do pull in the artifact, but it doesn't go into the log. We pull it down, we capture the digest, we perform the signing validation, and then it's dropped; it doesn't even touch disk. Another question: "Once hacked, how can I revoke my key and say that a specific package or image is malicious?" You might have answered that already, I guess. Yeah, so you don't need to worry about revoking a key, because they're all short-lived. Like I say, we don't even persist them to disk; they exist only in memory, and then they're purged, essentially. So you don't have to worry about key compromise or key revocation. And the second part was... I'm sorry, there's a second part to it? I think that was... yeah, you talked about revocation with time, right? But we asked: what about when you get hacked? Oh, I see. Okay. Yeah. So, generally, that's not in our remit. I mean, I could give advice on what to do, but we're more about having a transparent source for you to realize you've been hacked, if you see what I mean. What sort of action you should take: there might be a lot of specifics around, you know, changing your email password, or taking a machine offline and doing some forensics as to what's happened.
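A toy sketch of that ephemeral-key lifecycle is below. HMAC-SHA256 stands in for a real asymmetric signature scheme (Python's standard library has no asymmetric signing), because the point being illustrated is only the key's lifetime, not the cryptography: the key is created in memory, used once, and simply dropped, so there is nothing to revoke later.

```python
import hashlib
import hmac
import os

def sign_once(artifact: bytes) -> tuple[str, bytes]:
    """Create a key in memory, sign the artifact digest, discard the key.

    HMAC is a stand-in for real asymmetric signing. The key lives only
    inside this function and is never written to disk; in the real flow,
    the certificate binding the key to an OIDC identity is what gets
    logged, while the private key itself is purged.
    """
    key = os.urandom(32)                       # ephemeral key, memory only
    digest = hashlib.sha256(artifact).digest()
    signature = hmac.new(key, digest, hashlib.sha256).digest()
    return hashlib.sha256(artifact).hexdigest(), signature
    # `key` goes out of scope here and is never persisted.

digest_hex, sig = sign_once(b"release-1.0.tar.gz contents")
print(len(sig))  # 32-byte HMAC-SHA256 tag
```

With real short-lived certificates, revocation collapses into a time check: a signature only counts if the certificate was valid at signing time, which the transparency log lets anyone verify.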
Yeah. Next question: "What data goes into this? Git repository data, CI/CD data, test data, or just release data?" Just release data at present. Okay? But we are able to extend and start to put other data into the log as well. So, like I said, we've actually got code in now where we can strip out the signature sections from an RPM and put those into the signature transparency log. It's quite a rudimentary model, but like I said, we're what we call manifest-agnostic, so you can include your own details. All right. The next one is sort of a question and comment: "So no one is signing their code; how has the industry addressed this? It used to be baked in: the RH Linux packaging process included and checked upstream signatures." Internally, it's very good. Okay, so what I should remember is that I'm seeing a lot of people from Red Hat here. You know, when you're an internal organization, it's very good: you have a simplified level of control over your signing. You can have a hardware security module, and for RPM we sign all the RPM packages, and the Red Hat public key is in the image. Same for Fedora as well; there's lots of really good stuff there. So in that sort of scenario, it's a lot easier to control signing, see what I mean? So there's definitely a benefit to a transparency log for auditing, and if an internal network was hacked, somebody could not tamper with the log. But it's really more about the open source ecosystem. Now, there are bits and pieces where people can sign stuff, and we can pull code down and do various things; we can work directly with the Git repository. But on the whole, it's very patchwork in terms of who is signing code. There's another comment here: "Sounds like a topic for a mini research project to understand why a problem that was solved years ago has reappeared." I don't know if it was solved years ago. Yeah, um, yeah.
I'll go to another one: "Who would be the operators of Sigstore instances?" That's a really good question. So what we're hoping, keep the word "hoping" here, and what we're driving towards, is that some sort of public-good foundation will run one, as a start. Okay? And we're kind of basing ourselves on the Let's Encrypt model. But having said that, anybody can run a transparency log, and the more transparency logs you have, the better, because you've got various different entities capturing information that you can compare against each other. So a lot of people think "wouldn't blockchain be better?", but not necessarily, because you only have a single blockchain. You've got multiple people signing transactions within that blockchain, but you can't really have a separate entity to compare against another entity, if you see what I mean. Okay. Another comment here: "The deprecation log needs context each time, but restrict that log to facts, not opinions." Well, again, there is no good or bad; there are no facts or non-facts with a transparency log. This is quite a key idiom to understand: there is no good cop or bad cop in a transparency log, there is just what's occurred. Do you see what I mean? And then, by having that information publicly accessible and available to audit, you can see what's actually happening and make decisions. So again, with Sigstore we don't seek to be an arbiter and prevent entries coming in, as long as they're signed. That's the key thing, because otherwise somebody could just make up whatever they want. The particular key set that signed an entry could be bad, and that's fine: come on in. We have another comment from Robert, who is our next speaker, I believe: "It seems that the deprecation log is only meaningful for objects already signed. Otherwise the attacker could modify one byte and make a deprecated object invisible."
"Upshot: no point deprecating an unsigned object." I don't quite understand that; I could read it offline, and there is always the chat channel for discussion there. I mean, if somebody changes something, they're going to change the digest, so it's going to become a new, unique entity. So, yeah, let's chat about that in the chat. Great. And there's one more question, from Eric from Red Hat: "How well does Sigstore scale? Build systems can produce tens of thousands of signed packages per day; will Sigstore be able to store that?" Okay, good question. So, for our Merkle tree implementation we're using a project called Trillian, and Trillian is used by Certificate Transparency: Let's Encrypt, Cloudflare, and a few others are running these logs. I don't have the benchmark results to hand, but they are seeing a lot of heavy production use, thousands of requests a second. So we're pretty confident there that we're not going to hit problems. I'm sure we're going to have to go through fine-tuning, but luckily the folks at Let's Encrypt are quite keen to share their scaling pains with us and help us with optimization. And the other thing I should mention: we're still around the prototyping stage, especially with this evolution that we're moving forward with. So there are some things where we still need to work out the best way of doing them; timestamping is something that we're looking at, for example. So, you know, this is a work in progress.
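The two points above can be made concrete with a small sketch (my own naive code, not Trillian's; Trillian's real tree uses RFC 6962-style domain-separated hashing): flipping a single byte yields an unrelated digest, so a modified artifact is a brand-new entity, and a Merkle tree lets a log commit to any number of such digests with one small root hash.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# One changed byte produces a completely different digest.
a = sha256(b"release-1.0.tar.gz")
b = sha256(b"release-1.0.tar.gZ")
assert a != b

def merkle_root(leaves: list[bytes]) -> bytes:
    """Naive Merkle root: hash the leaves, then hash pairwise up to a root.

    This only shows the shape of the structure; a production log also
    domain-separates leaf and interior hashes to prevent second-preimage
    tricks.
    """
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node if odd
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"entry-%d" % i for i in range(1000)])
print(root.hex()[:16])  # one compact commitment to all 1000 entries
```

Appending an entry and proving its inclusion both cost O(log n) hashes, which is why these logs can absorb thousands of entries a second while still letting anyone audit them.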