Hello, everyone. I hope you're having a great con. I'm here today to talk to you about Witness. It's a new framework for supply chain security from TestifySec. My name is Cole Kennedy, and I'm the CEO of TestifySec. We were founded about seven months ago, really to solve this problem around supply chain security with respect to software, vendors, and other compliance concerns. Our team is growing, and it'll probably be bigger next week, so we're really excited to have everybody aboard as we solve this problem. It's not just me here; we have a great team of experts, and we've got a lot of plans coming up. But I'm here to talk to you about Witness. We started working on Witness after KubeCon last year. Michael and I sat down and really looked at the complexity of solving this supply chain problem: how do we verify that artifacts are what we expect them to be? So we created this attestation framework and implemented the in-toto spec, including ITE-5, 6, and 7. Michael and I worked together with the community around some of these and did some implementation in the upstream repositories. We also use Open Policy Agent. We changed the layout specification from the standard in-toto layout to something a little more flexible that allows us to use tools that have come about since that layout was created. We also have extensible support for different backends and different types of attesters. Right now we support Rekor as one of our pluggable backends, and we have several attesters that we'll go over in a little bit.
And really, what we wanted to do was provide a framework robust enough to meet the SLSA Level 4 provenance requirements, and eventually be able to automate the guarantees around SLSA 4 and create policy against those kinds of constructs. So, what are the SLSA Level 4 provenance requirements? We need all of the provenance to be available; we meet that by publishing the provenance to Rekor. We need to make sure the provenance is authenticated; we sign all the provenance within Witness, using either SPIFFE/SPIRE for machine identity or Fulcio for user identity. And that coincides with service-generated provenance: the whole point of Witness and in-toto is to automatically generate the inputs and outputs of a compilation process. We meet the non-falsifiable requirement through our ability to use short-lived certificates or short-lived keys, along with a timestamp from Rekor, to ensure that these attestations are signed and, not only that, that the private key material used to sign them is protected. And finally, we want to make sure the dependencies are complete: we have a full bill of materials of what went into that build, and we do that through our tracing ability within Witness. So, why is Witness trustworthy? You see the turtle there because zero trust is really all about finding that bottom turtle and getting rid of it. We do that by implementing SPIFFE/SPIRE to authenticate machine identities rather than using a token; we use remote attestation to verify the identity of the machines that are doing the build process. Second, we incrementally establish trust with cryptographic documents. If you're running a build on GitLab on AWS infrastructure, you have two cryptographic documents available to you. You have the AWS metadata service.
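To make the "complete dependencies" and "available provenance" requirements concrete, here is a minimal sketch of an in-toto-style statement carrying build provenance. This is an illustration only: the predicate URI and field values are invented, and real Witness attestations carry much more detail.

```python
import json

# Hypothetical in-toto Statement shape; the predicateType URI and all
# digests below are fabricated examples, not real Witness output.
statement = {
    "_type": "https://in-toto.io/Statement/v0.1",
    "subject": [
        # The artifact the provenance is about, identified by digest.
        {"name": "spire-agent", "digest": {"sha256": "f" * 64}},
    ],
    "predicateType": "https://example.dev/attestations/collection",
    "predicate": {
        # SLSA "dependencies complete": every material that went into the build.
        "materials": [
            {"uri": "git+https://gitlab.example/org/app", "digest": {"sha1": "a" * 40}},
        ],
    },
}

# Canonical serialization is what ultimately gets signed and published.
serialized = json.dumps(statement, sort_keys=True)
print(len(statement["subject"]))  # one subject artifact
```

Publishing the signed form of this document to a transparency log such as Rekor is what satisfies the "provenance available" requirement described above.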
And then you also have the JWT that GitLab provides. So we use those documents, as well as other data available within the system and the process, to create these attestations. And then, like I said above, we use these ephemeral, short-lived signing keys to sign the attestations. Signing automated workloads with hardware keys is very, very difficult. By introducing SPIFFE/SPIRE, we solved some of these problems and were able to automate the process of signing these attestations while retaining trust and protecting that private key material. So we're talking about signers. We actually support multiple signers right now, and hopefully in the future we'll support more. The way the signing works is that we take all these documents, these attestations, and bundle them together into one JSON file, and we sign that using a DSSE envelope. And we need some keys to actually do that signing. So while we say keyless, we actually do receive keys from Fulcio: we receive a signing certificate from Fulcio when we sign with user identities. In the CI process, that doesn't always work so well, so we implement SPIFFE/SPIRE to identify the workload identity of the builder container or builder agent, to make sure that, yes, this is exactly what I want building my binary, and I trust it. And then, finally, because we're using those short-lived keys, the certificates are only valid for very short periods of time, anywhere from under a minute to a few hours. So the verification workload is probably going to get scheduled after that certificate has expired. We need a different way to make sure that those attestations were signed during the certificate's validity period, and we do that with another timestamp on top of the signature. Right now this capability is fulfilled by Rekor.
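The DSSE envelope mentioned above has a small, well-defined shape: the payload is base64-encoded, tagged with a payload type, and the bytes that actually get signed are produced by DSSE's pre-authentication encoding (PAE). This sketch shows that encoding; the signature is stood in by a digest here, since the real one comes from a Fulcio or SPIRE-issued key.

```python
import base64
import hashlib

def pae(payload_type: str, payload: bytes) -> bytes:
    # DSSE pre-authentication encoding: "DSSEv1 <len> <type> <len> <payload>".
    # These are the exact bytes a DSSE signer signs.
    t = payload_type.encode()
    return b"DSSEv1 %d %s %d %b" % (len(t), t, len(payload), payload)

payload = b'{"attestations": ["gitlab", "gcp-iit", "command-run"]}'
payload_type = "application/vnd.in-toto+json"

envelope = {
    "payload": base64.b64encode(payload).decode(),
    "payloadType": payload_type,
    # Stand-in signature: a real envelope carries a signature over the PAE
    # bytes made with short-lived key material, plus the signer's certificate.
    "signatures": [{"sig": hashlib.sha256(pae(payload_type, payload)).hexdigest()}],
}
print(envelope["payloadType"])
```

Because the payload type is folded into the signed bytes, a verifier cannot be tricked into interpreting the payload as a different document type, which is the main reason DSSE is used instead of signing the raw JSON.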
When we upload the attestation to Rekor, the attestation receives a timestamp and is stored in a log for non-repudiation. So we talked a little bit about cryptographic document support. During the Witness attestation process, we look for these different types of documents. Like I said, if you're running on GitLab, we find that JWT that has all the information about the CI runner and what generated it. By inspecting that JWT, we can tell who made the last commit, figure out what project it came from, and identify all sorts of permissions and metadata about that running process that we can then apply policy to. The same goes for the AWS metadata service: we take that metadata and put it into a JSON document that we sign. This gives us trusted selectors that we can then establish policy against; we'll go through this a little bit more. Currently we support Google Cloud, AWS, and generic JWT tokens, as well as GitLab. We encourage contributors to add additional attesters to Witness, and just reach out if you need help with that. Then finally, the last part of Witness, and this is really the most exciting part: we need something to do with all these attestations. How can we make them actionable to improve our security posture and efficiency? So we have a policy engine embedded within Witness: Witness Verify. Policies define what attestations must be satisfied. Within that policy document, you may say you want a GitLab attestation for this step, a GCP attestation for that step, and then a command-run attestation with a trace on it. You define that, and within those attestations you can also attach Rego policies that must pass in order for the overall policy to be satisfied.
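To illustrate where those "trusted selectors" come from, here is a sketch of pulling claims out of a CI JWT. The claim names mirror GitLab's CI job token, but the token here is fabricated, and in the real flow the JWT's signature is verified against the issuer before any claim is trusted.

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    # Decode the payload segment of a JWT. NOTE: this skips signature
    # verification, which a real verifier must do first against the
    # issuer's published keys.
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated GitLab-style CI JWT for illustration.
claims = {"project_path": "testifysec/demo", "user_email": "dev@example.com", "ref": "main"}
fake_jwt = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode(),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode(),
    "",  # empty signature segment in this fabricated token
])

# Each claim becomes a selector that policy can match against.
selectors = jwt_claims(fake_jwt)
print(selectors["project_path"])  # → testifysec/demo
```

Once these claims are embedded in a signed attestation, a policy can require, for example, that `project_path` matches a specific project or that `ref` is a protected branch.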
So now we have this trusted, signed data that we can evaluate with our policy engine to understand whether a workload meets our policy or not, and then we can decide what we want to do with that. And the last part of this is that we also enforce which cryptographic identities are allowed to sign. What this means is that we inspect the public certificate that was used to sign that attestation, and we can check the certificate constraints on it to see who signed it and whether they were allowed to sign it. We also embed the certificate authorities that we trust within that policy. So here's a blown-up picture, a lot of words on the screen. To recap: we take these identity documents, whether that's cloud instance data or some sort of JWT, along with the source files and materials, and we create an attestation for all of those. We execute the command that we specify, and while that command is executing, we trace all the materials that go in and out of it. Then we bundle all of these together into what we call an attestation collection. The attestation collection is signed by a key provider and then uploaded to a backend store. Blowing that out a little more into a CI approach, here's what it looks like: Rekor stores our evidence, so we're normalizing all of our evidence, putting it in Rekor, and then we're able to evaluate policy against that normalized evidence. Witness Verify is a library, and we're currently working on an admission controller that will enforce these policy documents in a Kubernetes environment. But really, the library can be implemented in just about any piece of software where you need to verify the provenance of whatever you're running. So I'm going to go through a few use cases, and hopefully this will give you a little better understanding of exactly how Witness fits into your environment.
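The identity-enforcement step described above can be sketched as a simple constraint check: the certificate that signed an attestation must carry an identity the policy lists for that step, and must chain to a trusted authority. All field names here are invented for illustration; the real policy format is richer.

```python
# Hypothetical policy: per step, which signer identities and which
# certificate roots are acceptable. Names are fabricated examples.
policy_constraints = {
    "commit": {
        "emails": {"release-bot@testifysec.example"},
        "roots": {"fulcio-root"},
    },
}

def signer_allowed(step: str, cert: dict) -> bool:
    c = policy_constraints.get(step)
    if c is None:
        return False  # no constraints defined means no one is trusted
    # Both the identity inside the certificate and the CA that issued it
    # must match what the policy embeds.
    return cert["email"] in c["emails"] and cert["root"] in c["roots"]

print(signer_allowed("commit", {"email": "release-bot@testifysec.example", "root": "fulcio-root"}))
print(signer_allowed("commit", {"email": "attacker@evil.example", "root": "fulcio-root"}))
```

The key design point is that the trusted roots live inside the policy document itself, so distributing the policy also distributes the trust anchors.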
So one of the most important things we want to do is make sure that all of our software is built on physical machines that we trust: machines that are part of our system security plan, machines that have been attested to our chief information security officer. We should make sure that our builds are actually coming off of that infrastructure, and that we don't have a rogue developer bypassing the CI process in order to get his feature into production. So what we do here is create a Rego module specific to our GCP project. You can see here that TestifySec's GCP project is number 3243222. When we apply this policy in our admission controller, any build that doesn't have an attestation proving it was built at least once on our infrastructure will not get admitted into the cluster and will not be executed. Next, we want to verify that an artifact actually did pass static analysis testing. In a lot of organizations, you may have dozens of CI systems, and understanding the compliance of each artifact that comes out of each of those CI systems is a really difficult task. So instead, we create a policy that says every single artifact that goes into our production system must have a Snyk priority score of less than 510. Now we create this policy and implement it within our admission controllers in a Kubernetes cluster. This means that any workload that gets scheduled must pass our policy for static analysis. We're not allowed to bypass it because a developer is in a hurry, or because of a misconfiguration or some other situation. Finally, here's something that is usually very, very difficult to mitigate against.
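The two admission checks just described boil down to very small pieces of logic. Here they are sketched in Python rather than Rego so they can run standalone; the attestation field names are assumed, and the project number and score threshold come from the talk's examples.

```python
# Values taken from the talk's examples; field names are assumptions.
TRUSTED_PROJECT = "3243222"
MAX_SNYK_PRIORITY = 510

def built_on_trusted_infra(gcp_attestation: dict) -> bool:
    # Deny unless the GCP instance identity attestation carries our project.
    return gcp_attestation.get("project_id") == TRUSTED_PROJECT

def passes_static_analysis(sast_attestation: dict) -> bool:
    # Every artifact must score below the Snyk priority threshold;
    # a missing score fails closed.
    return sast_attestation.get("priority_score", MAX_SNYK_PRIORITY) < MAX_SNYK_PRIORITY

print(built_on_trusted_infra({"project_id": "3243222"}))  # built on our project
print(built_on_trusted_infra({"project_id": "1111111"}))  # rogue build: denied
print(passes_static_analysis({"priority_score": 320}))    # under threshold
```

Because these checks only ever see selectors from signed attestations, "fail closed" is the natural default: an artifact with no attestation simply has nothing for the policy to match.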
If your compiler was compromised, or has a critical vulnerability that transfers into your software, it's going to be really difficult to sift through everything in production to figure out exactly what was compiled by that malicious or vulnerable compiler. But if you have tracing enabled within Witness, we collect that information while your CI process is running. So we have the digest of every single process that ran during that step of the CI process, as well as all the files that went into it, all the intermediate files, and all the outputs. And we can take those digests and compare them against a vulnerability database or different threat databases to understand the risk level of that workload at a more granular level. So I'm going to go into a demo now. What we've done is we've taken the CNCF SPIRE project, which we use internally at TestifySec. It's probably the most critical security component of our system, so in order for us to trust it, we want to make sure that the SPIRE server running on our infrastructure was built by us, was approved by myself or another engineer with approval status, and passes static analysis testing. And we verify all of that before we push it into production. Now it's time for a demo. The first demo shows an attestation of a developer's commit, and we use that attestation to verify in our CI pipeline that a developer we trust actually created the commit, so we can kick off the rest of the pipeline. The first thing we do is create a file and then commit that file. What you'll see here is that we have a post-commit hook that runs Witness. Witness is configured in the commit hook to do an OIDC credential verification with Fulcio, where it gets those certificates and signs the attestation. And we can see that the attestation worked.
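The digest comparison described above is essentially a set intersection between everything the trace recorded and a list of known-bad digests. This sketch uses fabricated digests and a fabricated trace structure purely to show the shape of the check.

```python
import hashlib

# Known-bad digests, e.g. from a threat feed; fabricated here.
known_bad = {"sha256:deadbeef"}

# A trace attestation records the digest of every process executed and
# every file opened during the wrapped build step (structure assumed).
trace = {
    "processes": [
        "sha256:" + hashlib.sha256(b"gcc").hexdigest(),
        "sha256:deadbeef",  # stand-in for a compromised compiler binary
    ],
    "files": [
        "sha256:" + hashlib.sha256(b"main.c").hexdigest(),
    ],
}

# Any overlap between the trace and the bad list flags the workload.
flagged = sorted(set(trace["processes"] + trace["files"]) & known_bad)
print(flagged)  # → ['sha256:deadbeef']
```

Because the trace is itself a signed attestation, this lookup can be run long after the build, against threat intelligence that didn't exist at build time.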
Now we should have a CI pipeline kicking off here. Well, we have to push it first, right? So let's go ahead and push it, and that should kick off pretty soon. We'll go ahead and look at this clone step. You can see the clone step succeeded, because we used the correct credentials when we made the commit. Now let's try it again with some incorrect credentials and see what happens. We'll use my personal credentials here; those are not specified in the policy. And there you see the step failed, because we used the wrong credentials to make that commit attestation. For our next demonstration, we want to make sure that all of our SPIRE agents actually get compiled on infrastructure that we trust. We want to make sure they're not being compiled on some developer's computer sitting underneath their desk, or by a malicious actor. We want everything that goes into production to be built on our production CI system, where we have our checks and our compliance in place. So let's go ahead and see what that looks like. Let's go here into the build agent, and we can see that we have some attestation data that was generated. So let's take a look at that with a little attestation viewer that we created. Looking at the GCP instance identity attestation, we can see a JSON file with a bunch of trusted selectors. One of the selectors tells us that this build was created in this specific project ID, and if you go look at our repository, you'll see that matches what's in our policy. So let's go see what it looks like to do offline verification of our policy; we'll also change a couple of things around and see if the policy still passes. As part of the last step of the pipeline, we actually published these artifacts, so feel free to go ahead and download them yourself.
We published the public key that corresponds to the policy, we published the policy, and alongside that we published the artifacts that we can verify. We've actually downloaded some of these artifacts ahead of time. So the first thing we're going to do is verify the SPIRE agent: that it was built in that GCP project ID, and that it passes all the other constraints we have in the policy. What Witness is doing right now is looking in Rekor for all those attestations. Some of those attestations carry more references, what we call back-references, to things like the pipeline URL or the commit hash. With those, we can go find more attestations that correspond to that CI pipeline, essentially building out the entire provenance graph required to satisfy the policy. And you can see verification succeeded. If you look at any of these indexes right here, you'll see that each is a signed attestation for that artifact and that step. So what happens if we change something? Here we've changed the GCP project number in the policy, and we'll see if this one verifies. And there we go: we don't have the evidence we need to satisfy that policy. Finally, I want to show what we do with tracing. Let's pull up an attestation. This is a command-run attestation. We can see the command that we're running, building the SPIRE agent, and exactly the parameters that were passed into it. But more than that, we're running a trace on this process, and we have full permissions over it because we're actually wrapping it. This allows us to grab all the subprocesses, all the executables, as well as all the files opened in and out of this process. So this gives us a lot of information about possible vulnerabilities that may be introduced into our system.
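The back-reference walk just described is a small graph traversal: start from the artifact's attestation, follow references like the pipeline URL or commit hash to further attestations, and stop once the policy's required steps are all covered. This sketch fakes the attestation store with a dictionary; the structure and key names are invented.

```python
# Fabricated stand-in for a transparency-log lookup; in reality each
# back-reference is an index queried against Rekor.
store = {
    "artifact-att": {"backrefs": ["pipeline-att"], "step": "build"},
    "pipeline-att": {"backrefs": ["commit-att"], "step": "pipeline"},
    "commit-att":   {"backrefs": [], "step": "commit"},
}

def collect_steps(start: str) -> set:
    # Walk back-references breadth-first, recording which policy steps
    # the discovered attestations cover.
    seen, stack = set(), [start]
    while stack:
        name = stack.pop()
        if name in seen:
            continue
        seen.add(name)
        stack.extend(store[name]["backrefs"])
    return {store[n]["step"] for n in seen}

required_steps = {"build", "pipeline", "commit"}
print(collect_steps("artifact-att") == required_steps)  # → True
```

If any required step is missing from the collected set, verification fails exactly as shown in the demo when the project number in the policy was changed.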
And we can create policy against these items just like anything else. So if a malicious compiler is identified, we can now backtrack and find everything in our system that was compiled using that malicious compiler. Another situation where this works out really well is something like Heartbleed: we want to figure out all the different places where the vulnerable version of OpenSSL was compiled into our software. That's really, really difficult to do unless we have really good accounting of what went into our software. And that's what we're doing here: we're accounting for everything that goes in and keeping track of it, so we can look it up later. With that, that's the end of our demonstration. Let us know if you have any questions; we'll be on the live stream, and I think we have someone there in person too. So if you see Frederick around, make sure you say hi.
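The Heartbleed-style backtrack described above is an inverted lookup over recorded build materials: given the digest of a bad input, return every build whose trace contains it. Data here is fabricated; the point is only the shape of the query that good material accounting makes possible.

```python
# Fabricated build records; in practice these come from stored trace
# attestations, one per build step.
builds = {
    "svc-a:v1": {"materials": {"sha256:openssl-vuln", "sha256:zlib"}},
    "svc-b:v3": {"materials": {"sha256:openssl-fixed"}},
}

def affected_by(bad_digest: str) -> list:
    # Every build whose recorded materials include the bad digest.
    return sorted(b for b, info in builds.items() if bad_digest in info["materials"])

print(affected_by("sha256:openssl-vuln"))  # → ['svc-a:v1']
```

Without this accounting, answering "where did the vulnerable OpenSSL end up?" means scanning production; with it, the answer is a single index query over signed evidence.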