Thank you. So, as mentioned, my name is Mikhail Swift, and I'm going to talk a little bit about supply chain security, which is a change of pace from the past few talks. This is me: I'm co-founder and CTO of TestifySec, I've contributed a little bit to the CNCF software supply chain best practices paper and to in-toto, and I really love open source software, so a lot of what I'm seeing today is really cool and speaks to me. Software supply chain security is a complex, many-faceted issue. We have a lot of problems to solve at once and very little time to do it. At a high level, some of the problems we see are: how do I know what happened with this build? Where was it built? Who triggered the build? Who committed to it? And how do I prove all of that when I want to use this piece of software? The next big problem is: how do I know if I can trust whoever is giving me this information? The signature on it, where did it come from, who is it, and how do I know that signature was actually from them? And the problem I'm going to really focus on in this talk is: how do I discover this information? How do I actually get it when I want to evaluate policies? As far as generating this data and these attestations goes, we have a bunch of really awesome work happening in open source. We have Tekton Chains; the middle one, with the new logo, is Witness, the project I primarily work on; and we have SLSA provenance. SLSA gives us an awesome framework for making decisions and assessing the risk of an artifact given the information you have. But the one thing that all three of these, and I'm sure many more projects I did not include here, have in common is that they speak the in-toto language. They use in-toto statements, they create in-toto subjects, and they wrap them up in in-toto attestations.
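For reference, the in-toto statement shape those projects share looks roughly like this; the subject name, digest value, and predicate type here are placeholders, not taken from the demo:

```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "subject": [
    {
      "name": "testapp",
      "digest": { "sha256": "<sha256-of-the-artifact>" }
    }
  ],
  "predicateType": "https://slsa.dev/provenance/v0.2",
  "predicate": {}
}
```

The subject digests are the important part for this talk: they are what lets a service match an artifact you hold in your hand to the attestations that describe it.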
For the trust problem, we've seen Sigstore come out with keyless signing, and the Rekor transparency log gives us a kind of non-repudiation for the things being signed. We have SPIFFE and SPIRE for workload identities, establishing trust there. But where I feel we have a gap right now is the actual discovery and usage of this information. We have policy engines; we have all the pieces to actually make the decisions. But we're somewhat lacking a way to find this information and to query and use it. So ponder a situation where we have a large CI pipeline. Some things are happening, like a manual process with people approving things in ServiceNow, then linting, testing, and finally a build and a deploy to some environment. A lot of these things might happen prior to an artifact's creation, and trying to link the artifact back to the things that happened before isn't always easy. That's one of the problems I keep encountering while working on Witness and policy enforcement: how do I find the code review attestation that proves, say, Alice approved putting this in production? If we can find the attestation from when the product was created, maybe we can use context clues from that attestation to find the other, more relevant ones. And when we start looking at this, it starts resembling a graph. I have the attestation from when the product was built; that gives me the commit it was built from. I can use that to look up attestations relevant to that commit: linting, testing, scanning. And I can start bringing those into my policy decisions. Sorry for the rough graphic, but this is what it might look like. We have a program; we use the digest of that program to look up a compile attestation that shows us how it was built and where it was built.
And it shows who signed that attestation saying, I built this on this infrastructure. That attestation might contain things like the commit ID, to get us back to the code itself; a GitLab project ID, if it were built on GitLab, to get us to deployment info that didn't show up for us at first; or who made the commit, so we can start getting back to the provenance of the developer themselves. As we kept seeing this kind of graph form up around us, we decided to go ahead and create a graph database and graph service to find, discover, and query these in-toto attestations. What Archivist does is take in-toto statements and index them onto a graph, using a Go framework for graph databases. It exposes a GraphQL API so users can query it, find things, and refine their queries over time to find more and more relevant attestations. It pulls out specific pieces, like which attestations were in that in-toto envelope and the signatures on it, so we can look at the signatures before pulling the attestation, and what other subjects existed on that envelope, so we can use those to expand our graph search. In other words, it uses in-toto subjects as graph edges. If you're familiar with in-toto attestations, they all have a statement, with some subjects that describe what the statement is about, and then the predicate itself, which at this point is an arbitrary blob of data: it could be a SLSA provenance, or anything else that implements these in-toto attestation formats. So I have a few demos I'll show real quick to demonstrate how we can use this. I have a simple main program; all it does is print "hello security con" to the console, and I have some commits on it, shown here. What I'm going to do is create a witness policy describing what I expect should have happened when I get this binary.
I expect a step named build to happen. It should have some materials, which describe all the files that were used during the build process; it should have run a command, and there's an embedded Rego policy here that lets us enforce things about that command, which I'll show in just a second; and it should also have attestations about a product, which will be the binary created during that build. The functionaries describe which identities we trust to sign the attestations when we're evaluating the policy. For this demo I'm using public keys, nothing fancy. The next step we expect is a packaging step, where we're going to package the binary into a tarball and then maybe upload it to some repo to use later. Similarly, it has an expected command that I'm going to enforce when evaluating the policy, and again its functionaries, and then the global functionaries for the policy itself. So first I'm going to sign the policy we just looked at with this command, which just tells Witness to use a YAML config I have and sign the policy document. Then I'm going to wrap my go build with Witness as well. It's going to run the build and store the attestation in Archivist under a gitoid that we can use to find it later; the gitoid is basically a hash of the file, a unique identifier for it. Next I'm going to run my packaging step, which creates that tarball and, again, creates an attestation that we store in Archivist. So now I want to verify against the policy... my policy file, sorry; this is what happens when you change your demo at the last second and mess up your scripts, my apologies. Okay, so now we use the policy that we created in that first step to evaluate the binary that we have.
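As a rough sketch of the gitoid idea mentioned above: as I understand the scheme (OmniBOR-style), the file's bytes are framed as a git blob object before hashing, so the identifier is stable and content-addressed. This is an illustration of the concept, not Witness's exact code:

```python
import hashlib

def gitoid_sha256(data: bytes) -> str:
    # A gitoid frames the content like a git blob object,
    # "blob <size>\0<content>", then hashes the result. The digest
    # serves as a content-addressed identifier for the file.
    header = b"blob %d\x00" % len(data)
    return hashlib.sha256(header + data).hexdigest()

# The same bytes always produce the same identifier, so an attestation
# stored under its gitoid can be retrieved later by recomputing it.
print(gitoid_sha256(b"hello security con\n"))
```

Because the identifier depends only on the bytes, anyone holding the attestation file can recompute its gitoid and look it up, with no central naming authority involved.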
And what Witness did in this case was go to Archivist with the hash of that binary and interrogate it to figure out what evidence we have about this build process. It found the two attestation envelopes we had generated earlier and said, this is what I used as evidence to say this artifact is good to run. To show what this looks like, we have a playground right now where we can query and see what the data in Archivist looks like. We have this hash, which is the hash of a different product I created earlier, and we're going to try to find attestations about it. What we find is one attestation, with this gitoid, that says it was built; it has a few different things in it, and its subjects include a git commit. So we found one piece of evidence about this artifact, but we can use that git commit to dig a little deeper. If we look it up, we find six more envelopes with perhaps relevant data that we can use to continue making decisions about this artifact. One last thing I can show: we have this spire-server binary that we built previously. I'm going to record its SHA-256 hash and use the Archivist CLI to output a rough visualization of the graph of evidence it can find about this artifact. All I'm doing here is querying Archivist; it outputs a DOT file for Graphviz, which I can turn into a PNG. It shows the relationships between these attestations and what we're looking at. We have information about three different builds that happened on this commit; we have the commit attestation itself, so whoever made that commit recorded some attestations about themselves; and we have a dependencies attestation, which records, well, obviously the dependencies of the project, and lets us make policy decisions about those later.
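The discovery walk the demos just showed can be sketched with a toy in-memory index; all the names and digests below are made up for illustration, and the real service does this against a graph database behind a GraphQL API:

```python
from collections import deque

# Toy stand-in for Archivist's index: maps a subject digest to the
# attestation envelopes that list that digest as a subject.
INDEX = {
    "sha256:spire-server": [
        {"id": "build-attestation",
         "subjects": ["sha256:spire-server", "gitcommit:abc123"]},
    ],
    "gitcommit:abc123": [
        {"id": "lint-attestation", "subjects": ["gitcommit:abc123"]},
        {"id": "test-attestation", "subjects": ["gitcommit:abc123"]},
        {"id": "dependency-attestation", "subjects": ["gitcommit:abc123"]},
    ],
}

def discover(start: str) -> list[str]:
    """Breadth-first walk: every subject on a found envelope is a new edge."""
    seen_subjects = {start}
    found: list[dict] = []
    queue = deque([start])
    while queue:
        subject = queue.popleft()
        for env in INDEX.get(subject, []):
            if env["id"] in (e["id"] for e in found):
                continue  # already collected this envelope
            found.append(env)
            for subj in env["subjects"]:
                if subj not in seen_subjects:
                    seen_subjects.add(subj)
                    queue.append(subj)
    return [e["id"] for e in found]

print(discover("sha256:spire-server"))
# → ['build-attestation', 'lint-attestation', 'test-attestation', 'dependency-attestation']
```

Starting from nothing but the binary's digest, the walk reaches the commit through the build attestation's subjects, and from the commit it reaches the lint, test, and dependency evidence, which is exactly the expanding-graph search described above.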
So where we want Archivist to go, and what we really envision for it, is to archive more data. Right now we're really focused on attestations, but something we see frequently is: if I have an SBOM, or if I know of a CVE, how do I find the things that are affected by that CVE? If Archivist can index these SBOMs, it should make finding those things later trivial; it becomes a GraphQL query we can run on the database to find everything we know about that's affected. And obviously we want to improve the user experience. It's early in development right now, where we just have playgrounds and demos, but getting this right is going to be important for us going forward. I ran a little fast, but if there are any questions... Otherwise, Archivist is available publicly on our GitHub, and anyone's free to pull it down, play with it, break it, and tell us about it. [Audience question] Yes. Yeah, so in-toto links: right now we're not recording them. Archivist can ingest them, but it's not going to get the richer data around them yet. in-toto is moving toward its generic attestation format with ITE-5 or 6, I think, so that's where we chose to focus for right now. But the link stuff will definitely still be relevant, and we'll definitely look at bringing it into Archivist. I think that's it, then. All right, well, thank you.