Hi everybody, and welcome to this session on attestations, the glue behind secure runtime environments. Today we're going to talk about what attestations are and why they matter to runtime security and secure software supply chains. We'll discuss how to use attestations as part of your security strategy. We'll also discuss VEX, or Vulnerability Exploitability Exchange, and how that fits into attestation workflows. And then we'll summarize all of these technologies and save some time for Q&A.

First, a quick introduction to myself. I'm Jim Bugwadia, co-founder and CEO at Nirmata. I'm also the creator and a maintainer of Kyverno. And in the Kubernetes community, I serve as a co-chair in the policy and multi-tenancy working groups.

So let's start with the language definition of attestations. Quite simply, attestations are evidence or proof of something; a declaration. For example, in the real world, an auditor may attest to some event or some fact, just like a notary public would help notarize a document and attest to the fact that they have viewed the document and verified the identity of the person holding it. But where does this fit in the software supply chain, and what do we want to do to improve runtime security?

The four components you want to think about are artifacts, metadata, attestations, and policies. First off, any software build will produce artifacts. These artifacts could be binaries that you're going to execute in your cluster or in your runtime environments. They might be packages that are also deployed: JAR or WAR files, libraries, or container images. So anything that your build produces which helps your application execute.

Metadata are additional artifacts also produced during build time. Just like elsewhere in the software world, metadata is data that describes other data. Here, metadata are artifacts that describe or provide more information on existing artifacts, or perhaps on the build environment. Some examples of metadata could be provenance, which is data that proves the origin of the artifact you're deploying. It could be scan results, which provide more information like scorecards or vulnerability reports on your artifacts. It could be a software bill of materials, which describes exactly what the composition of your artifacts, your runtime images, or whatever you're deploying looks like.

The next piece is attestations. As we saw in the language definition, attestations provide proof about the artifacts themselves. In this context, attestations are signed statements, created by a trusted entity, about either the build environment or artifacts produced during the build. For example, an attestation could tell us in a secure manner that the build was executed in a trusted and known environment. Attestations can also indicate whether artifacts comply with various organizational or regulatory standards. For example, if your baseline policy is that you don't want to deploy any container image with a high-severity vulnerability, attestations can help prove that the artifacts you're deploying, like a container image, comply with that requirement.

And finally, policies verify attestations and metadata at various phases of execution: for example, right before deployment using admission controls in your cluster, or as periodic scans which occur routinely. It's important that these checks are done periodically, because policies can change, and attestations may also change since the information they carry might be dynamic.
Some examples of policies could be to verify that your builds or your artifacts have the right attestations attached to them, and also to verify that the metadata the attestation covers complies with your policies. Again, if you're checking vulnerability reports, the policy will verify that the attestation includes the right metadata, produced by the right tools that you expected, and has the right results to allow that container image or other artifact to run in your environment.

So, summarizing this: artifacts are created by builds, which could be part of your CI/CD tooling. Metadata, also produced by your CI/CD tools, describes the artifacts and the build environment. Attestations provide proof or evidence of these builds and artifacts, and are created by a trusted entity, typically a trusted build process or something with a machine identity that can be verified. And then finally, policies are used to verify these attestations for a defense-in-depth or zero-trust strategy across your supply chain.

Now that we understand the concept of attestations and where attestations fit in a software supply chain, let's dive in deeper and talk about the implementation details. To do that, I'd like to introduce SLSA. SLSA, or Supply chain Levels for Software Artifacts, is a set of standards being created by the OpenSSF supply chain integrity working group to help secure software supply chains. SLSA is a security framework which provides a checklist of requirements and controls to prevent tampering and to check and verify the integrity of packages and infrastructure across the software supply chain. Currently SLSA is in alpha status, but there's very active work being done within the community to define the SLSA requirements, as well as a set of provenance generators and verifiers, to produce and verify this data across the software supply chain.

SLSA defines four different levels of compliance, and these levels check different requirements across source code, build environments, provenance data produced during builds, and other categories including code reviews and other aspects of your overall software delivery process. Level one is fairly easy to attain: it requires proper documentation of the build process and unsigned provenance, so it's fairly straightforward to reach. Level two gets a little bit more difficult. It requires tamper resistance in the build environment. Typically, if you're running on a hosted build service, level two is attainable given the security guarantees of the managed service you're using; if you run your own internal build services, then level two requires proof that your builds are tamper resistant. You also need to produce signed provenance as part of level two SLSA compliance, and this signed provenance is where attestations start coming into play. Level three requires further checks, where the provenance now needs to be non-falsifiable, which means it cannot be tampered with: somebody who has access to the build environment cannot falsify or change that provenance data. It also requires extra security controls in the host systems themselves. And level four goes even further, requiring two-person code reviews and completely sealed, or hermetic, build environments which cannot be influenced by external factors or external systems.
Within SLSA, the definition of a software attestation is simply an authenticated statement, or metadata, about a software artifact or a collection of artifacts. SLSA uses a format from in-toto. in-toto is another open source project within the Linux Foundation related to software supply chain security, and in-toto has a specification for creating signed statements which SLSA leverages for attestations.

So the attestation model looks something like this. Starting from the right, you have bundles, and bundles can contain multiple attestations. Each attestation consists of an envelope, a message, and a signature. The message component of the attestation is where the in-toto statement format comes into play. A statement contains a subject and a predicate. The predicate can be any custom data, and there are also standards being developed for predicates for things like scan reports and other common pieces of information. The subject within your statement links back to the artifact which is being verified, and your predicate can provide further links to other things which can be looked up by policy engines during policy checks. Putting all of this together, you can have multiple attestations produced, you can bundle them together, and then you can use the in-toto format to secure and sign these attestations and attach them to your OCI registries along with your images.

Here's an example of what an attestation might look like. Here you have an artifact which is identified by its hash, its SHA digest. What this is saying is that this particular artifact was built by a GitHub Action — there's a link to that GitHub Action — and it's signed by GitHub itself, which is acting in this case as a managed build service with an identity provided through an OIDC token. All of this goes into an envelope, which can also be signed, to complete the signed attestation and deliver it securely.

In the SLSA attestation model, the tool stack looks like this. The envelope is signed using DSSE, the Dead Simple Signing Envelope. The statements are in-toto attestations, in a format being developed within the in-toto project. The predicates are JSON, and these could be custom JSON documents, things like scan reports, or other standards being developed. You can have bundles of attestations, and multiple attestations are combined using a specification called JSON Lines. Storage and lookup is not defined yet in the SLSA spec, but typically these attestations are attached to your image within an OCI registry, either as a layer in an image or using other standards being developed within the OCI registry specification to securely attach metadata to images.

So here's what this looks like in practice. If you have an in-toto attestation, on the left, it has the payload type, the payload, and signatures. The payload type itself is well defined, but the payload is where the statement begins. As shown in the next box on the right, each statement starts with the declaration of the in-toto statement type and the version of the specification it uses. The statement then has a subject, a predicate type, and a predicate.
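To make the layering concrete, here's a minimal sketch of a DSSE envelope and the in-toto statement it carries. The image name, digests, builder identity, and predicate contents are placeholders rather than the values from the example on the slides, and the spec version URIs shown may differ from current releases.

```json
{
  "payloadType": "application/vnd.in-toto+json",
  "payload": "<base64-encoded in-toto statement, shown decoded below>",
  "signatures": [
    { "keyid": "", "sig": "<base64 signature over the payload>" }
  ]
}
```

The decoded payload is the in-toto statement, with the subject pointing back at the artifact and the predicate carrying the actual metadata (here, provenance-style data as a sketch):

```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "subject": [
    {
      "name": "ghcr.io/example-org/demo-java-tomcat",
      "digest": { "sha256": "<image digest>" }
    }
  ],
  "predicateType": "https://slsa.dev/provenance/v0.2",
  "predicate": {
    "builder": { "id": "https://github.com/example-org/example-repo/.github/workflows/build.yaml@refs/heads/main" },
    "buildType": "<build type URI>"
  }
}
```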
So the predicate is the actual data that will be checked using policies. But all of this, put together, gives us a secure way of taking metadata produced in build environments at build time, signing it using a trusted entity like a GitHub Action or some other builder with an identity that can be trusted, and creating this in-toto statement, which can then be attached to an OCI image.

In this next section, let's look at some live demos of how all of this comes together, using Sigstore to create and sign the attestations, and then Kyverno policies to verify the attestations. First off, let's understand a little bit more about what these tools do and how they work.

Starting with Sigstore. Sigstore contains three major tools: Cosign, Fulcio, and Rekor. Cosign is used for signing pretty much anything; in our example, we're going to use it to sign attestations — again, in-toto statements — which we'll sign with Cosign and then push into an OCI registry. Fulcio, the second component in Sigstore, acts as a certificate authority that uses web-based OIDC identities to issue short-lived certificates, which are then used for signing. Those certificates are thrown away afterwards, but only after the signing event is recorded in Rekor, the third component, which is a transparency log. Sigstore also has a policy controller and another component for Git signing, but we're not going to use those as part of this demo.

Kyverno is the policy engine we're going to use to verify the signed attestations produced by the Sigstore components. Kyverno is a CNCF project, currently in the incubation phase, and it acts as a Kubernetes-native policy engine. What that means is that policies in Kyverno are Kubernetes resources. They do not use any external policy language; they are declarative YAML, much like Kubernetes resources themselves. The policy reports and results are also Kubernetes resources available inside the cluster, which can be managed using standard Kubernetes tools. Kyverno integrates directly with the control plane: it acts as an admission controller, and it can also query the API server for additional information to enrich policies. It also has the ability to cache things like ConfigMaps. Kyverno really understands how Kubernetes works — even patterns like owner references or how pods and pod controllers work — and it can take advantage of that to make it easier to write and manage policies for Kubernetes.

With that, let's dive into the demo, and I'll switch into my console. What we're going to do first is look at how we can create attestations and attach them to an OCI image using Sigstore's Cosign. And once these attestations are published, we'll look at how to verify them using Kyverno.

All right, I'm going to switch into my console and clear the screen. I have a cluster with Kyverno installed. If I do get namespace, I see Kyverno is already in here; this is my local minikube. Let's do get deploy in the Kyverno namespace — and actually let's just check the logs for Kyverno; I don't need the -w here. So I see Kyverno is up and running, and from the logs everything looks okay. If I do get cluster policy, at this point I don't have any cluster policies; cpol is the short name for ClusterPolicy. So I don't have any installed, and it returns no resources found.
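For reference, here's a rough sketch of the checks from this part of the demo. Resource and deployment names depend on how Kyverno was installed and on its version, so treat these as illustrative:

```sh
kubectl get namespaces                  # confirm the kyverno namespace exists
kubectl -n kyverno get deploy           # confirm the Kyverno controller deployment is running
kubectl -n kyverno logs deploy/kyverno  # check controller logs (deployment name may differ by release)
kubectl get cpol                        # 'cpol' is the short name for Kyverno ClusterPolicy
```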
All right, so before we do anything else with Kyverno, the first thing we want to do is actually sign an image. If I do docker images, I have a Tomcat demo image that I created — it's a Java Tomcat image — and as you can see, I've just recently built it. I've already published this image to my registry, but what I want to do now is also attach an attestation to it. If I go into my repo and look at versions, I see that v1 has been published, and we'll create an attestation for that v1 image.

In this repo, I also have a scan.json, which I produced using Trivy. The command I ran simply produced a Trivy scan, but in JSON format. If you look at the file here in my editor, this is what it looks like: through Trivy I was able to produce the scan report, and as you can see it has lots of information on different vulnerabilities and so on, which we now want to turn into an attestation.

To do that, I'm going to use Cosign. Cosign can sign container images themselves, but it can also sign attestations. So instead of the cosign sign command, which I would use to sign a container image, I'm going to use cosign attest. To attest, I pass in the image that I want to create the attestation for. I'm going to use a key to sign that attestation; I had previously generated a public/private key pair using Cosign itself. The predicate I'm going to provide is the scan.json file we looked at, and the predicate type I want to use — this could be any string — is trivy.aquasec.com/scan/v2. And if there's an existing predicate with the same type in an existing attestation, we will replace it. With that, when I sign, it's going to ask for my private key password, use that payload, and Cosign should then attach the attestation to my OCI registry. So if we go back here, look at v1, and refresh, what I'm expecting to see is that an attestation was just created and pushed into my OCI registry, associated with the hash — the digest — of that image.

So at this point, we have an attestation for the image, and what I want to do now is verify that attestation using a policy. I'm going to use this Kyverno policy that I have, and it uses a public key to check whenever there's an image which matches this pattern. So if the image matches the name demo-java-tomcat and comes from this GitHub container registry, the policy is going to check that it has an attestation signed using this public key. It's also going to check that the predicate type within that attestation matches what we expect, which is trivy.aquasec.com/scan/v2. The attestation payload is then parsed as JSON to make sure that the scanner URI matches: if we go back to the scan.json, we see over here there's a scanner with a URI, which tells us which version of Trivy produced the report. And it's going to check further, using a JMESPath expression in my Kyverno policy, that it can parse the results and verify there are no vulnerabilities with a score greater than eight.
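For reference, the attest step looks roughly like this. The image reference and key file names are placeholders for whatever you used when generating your key pair and publishing the image, and depending on the Cosign version the predicate type may need to be given as a full URI as shown here:

```sh
# Create a signed attestation whose predicate is the Trivy scan report,
# and push it to the OCI registry alongside the image.
cosign attest \
  --key cosign.key \
  --type "https://trivy.aquasec.com/scan/v2" \
  --predicate scan.json \
  --replace \
  ghcr.io/<your-org>/demo-java-tomcat:v1
```

And on the verification side, a rough sketch of a keyed Kyverno verifyImages rule along the lines of what's described here. The exact schema fields vary across Kyverno versions, and the JMESPath conditions depend on the shape of your scanner's JSON output, so treat the field names as placeholders to adapt:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-scan-attestation
spec:
  validationFailureAction: Enforce
  webhookTimeoutSeconds: 30
  rules:
    - name: check-trivy-scan
      match:
        any:
          - resources:
              kinds: [Pod]
      verifyImages:
        - imageReferences:
            - "ghcr.io/*/demo-java-tomcat*"   # hypothetical image pattern
          attestations:
            - predicateType: "https://trivy.aquasec.com/scan/v2"
              attestors:
                - entries:
                    - keys:
                        publicKeys: |-
                          -----BEGIN PUBLIC KEY-----
                          <contents of cosign.pub>
                          -----END PUBLIC KEY-----
              conditions:
                - all:
                    # Field names below follow the demo's scan.json and are placeholders.
                    - key: "{{ scanner.uri }}"
                      operator: Equals
                      value: "pkg:github/aquasecurity/trivy"
                    - key: "{{ length(results[].vulnerabilities[?cvss.nvd.v3score > `8.0`][]) }}"
                      operator: Equals
                      value: 0
```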
And the nice thing about this format is that it's very easy to test on the command line as well. If I run the exact same expression that I had in my Kyverno policy and check it over here, with the threshold set to eight, what I'm expecting is that it shows me there are no vulnerabilities scored higher than eight. If we switch to seven, or let's say six, I see that there are two vulnerabilities with medium scores which are reported as part of my image.

So let's apply that policy. I'll do kubectl apply — let me check, I think it's attestations.yaml here locally. Okay, so now if I do kubectl get cpol again, for cluster policy, I see I have a cluster policy. And now if we try to run our Tomcat image — v1 is the tag we want to match — I'll just run it and we'll see whether it's accepted or not. At this point, the image got accepted, because it has the matching attestation. And just to prove the point, if the attestation doesn't exist, we can go and delete that attestation from the registry. So I'm just going to go ahead and delete it and see what happens. I still have my v1 image, which I can pull and check, but if the attestation doesn't exist, I would expect the policy in this case to fail and say: hey, there's a problem here, you can't run this image. And as it says, there are no matching attestations. So that works.

It's fairly easy, as you saw, to sign with Cosign just using a public/private key pair and to verify it. However, if we were to try to achieve a higher level of SLSA compliance with this — like we were discussing with SLSA level three — the requirement there is that your attestation cannot be tampered with or falsified. In this case, the problem with just using a key pair and not checking any provenance data is that anybody who gets hold of the signing key could potentially falsify the data. And keys, of course, are difficult to manage: they need to be shared and secured themselves, which creates another set of problems. So this certainly is far from ideal. It's a good starting point, but what we want to do is create and sign these attestations in a manner which is non-falsifiable and which cannot be easily tampered with, either inadvertently or — it may not even have to be a malicious actor — by a trusted user who's just trying to figure out a way to work around some policies or compliance checks.

So how can we do that? Instead of signing with a fixed key, what we can do now is use the keyless approach, or what Cosign and Sigstore call the keyless approach to signing. In this case, we are going to use Fulcio to generate a temporary certificate which will then be used to sign, and then we're going to use the identity and other data embedded in that certificate to verify. So let's take a look at a policy first, and then we'll look at how to build this using GitHub Actions as our trusted builder and build environment. In this case, instead of a public key attestor, my Kyverno policy has a keyless attestor.
And here, for my keyless attestor, I'm going to make sure that the subject of the certificate is a workflow that I know, and that the issuer is GitHub, because the certificate is obtained using GitHub's OIDC. In addition to that, there are several other things we want to check in the certificate. We want to make sure that the trigger for the build was actually a GitHub push. We can also check the workflow SHA: in this case, I'm going to check not just the name of the workflow, which is in the subject, but the commit SHA as well. Now, one thing with GitHub is that you can have reusable workflows, so your workflow may live in a different repository from where it's being executed. It's very important to verify the SHA of the reusable workflow you're actually executing, and not just the repo in which the workflow was executed. Here you can also check the name of the reusable workflow and the repo it came from. All of this can now be part of the verification step, to make sure that the right trusted builder, in the right environment, created the attestation we want.

To do all of this, we can create a GitHub workflow, and this sample GitHub workflow I have has four steps: it builds the image, it scans the image, it generates an SBOM, and then it creates the attestations using Cosign and attaches them to the OCI registry. Building the image is a fairly standard and straightforward step; here you would use your Docker builder or any other image builder you wish. For scanning the image, in this case I'm using Trivy: as you can see here, we're using a Trivy GitHub Action, and it produces the image scan, which we save as a file called scan.json. The next step after that is to create the SBOM, and for that I'm using Syft from Anchore. Grype is the vulnerability scanner from Anchore, but since we're already using a different scanner, here we're just using Syft for the SBOM itself, saving it in CycloneDX format, which can also then be used to create an attestation. And then in the final step, we take all of these inputs — the provenance data that was created as part of the container build, the SBOM, and the scan report — sign them using Cosign, and publish all of this into our OCI registry where we have our image.

So if we look at all of this in the GitHub workflow itself, and if I go to the build that I ran — the build takes a few minutes to run, so I'm not going to rerun it — and check these steps, what happened is: first we build the image, which also creates the provenance file, and to create that provenance file we're using a SLSA framework GitHub Action; this is a sample action. Once the provenance file is created, we can attach it to the image, as you can see over here. Then from the scan, as well as the creation of the SBOM, we similarly attach those files. And finally, in the last step, we create attestations using Cosign, like we saw. So that's how this pipeline would execute.
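As a rough sketch, the keyless attestor portion of the Kyverno rule and the final signing step of the workflow might look something like the following. The repository, workflow paths, and extension values are hypothetical placeholders, and the field names vary across Kyverno and Cosign versions:

```yaml
# Keyless attestor inside a Kyverno verifyImages rule (sketch)
attestors:
  - entries:
      - keyless:
          subject: "https://github.com/<org>/<repo>/.github/workflows/build.yaml@refs/heads/main"
          issuer: "https://token.actions.githubusercontent.com"
          additionalExtensions:
            githubWorkflowTrigger: push
            githubWorkflowSha: "<expected commit SHA of the reusable workflow>"
            githubWorkflowRepository: "<org>/<repo>"
```

And the last workflow step, which signs the collected metadata keylessly; the job needs `permissions: id-token: write` so the GitHub OIDC token is available to Cosign:

```yaml
- name: Attest scan results
  env:
    COSIGN_EXPERIMENTAL: "true"   # enables keyless signing on older Cosign releases
  run: |
    cosign attest \
      --type "https://trivy.aquasec.com/scan/v2" \
      --predicate scan.json \
      ghcr.io/<org>/demo-java-tomcat@<digest-from-the-build-step>
```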
And the nice thing is that when the pipeline executes, one of the important steps is getting an identity for the GitHub builder itself. The way this happens is that, because we're using OIDC tokens, the builder gets a trusted identity from GitHub, and that token is what the Sigstore tools use to generate a temporary certificate. That certificate embeds the information we want about the builder, as well as these other pieces of metadata, which we can then verify when we're trying to prove that the attestations were actually created in a trusted environment and were not tampered with or falsified, as per the SLSA requirements.

Given that, let's apply a different policy and see what happens. I'm going to delete this policy and install the other policy, which can verify the image that was produced as part of that pipeline. So I'll apply it — in this case we want the attestations policy with the keyless verifier, as we saw. And now, just as a test, if I do kubectl run with v1, this should not pass, because v1 was signed using the public key. As expected, it says no matching attestations, because for this image it couldn't find any attestations signed using the GitHub keyless identity we wanted. Now, if we run this for the actual version produced by the pipeline, what we expect to see is that the policy will allow that image to run, because it has all of the right information in it.

In this last section, I'm going to talk about VEX, or Vulnerability Exploitability Exchange. We all know that vulnerabilities are hard to manage at scale, or even for small projects. Scanners typically produce too much noise, and a static analysis tool can't really determine whether a vulnerability actually applies to a component or an image, or whether the affected features being reported are actually used. And of course, upgrading to the latest version is costly, time consuming, and may not always even be possible. So VEX attempts to solve these problems. VEX is a draft spec that is part of the OASIS CSAF, the Common Security Advisory Framework 2.0, and it basically allows a software supplier, or some other party knowledgeable about that software, to assert the status of vulnerabilities in a particular product. VEX documents are meant to be machine readable; they're dynamic and expected to change as information becomes available about vulnerabilities and as more information gets attached to a CVE or a vulnerability report. VEX documents work really well with SBOMs, or software bills of materials, as well as with the CVEs that are reported in scan reports.

A typical VEX document contains product information; it will contain a CVE or other vulnerability ID; and it will contain a status, which the party providing the report can supply — this could be not affected, affected, fixed, or under investigation. Based on the status, there are details on the threat or the mitigation: if the vulnerability status is not affected, for example, there can be details explaining why that vulnerability does not apply in that particular situation or to that component, and if there are mitigation steps, like certain configuration, those can be provided as well. And then finally, there's a notes section, which is free-form text for additional details.
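To make that concrete, here's a greatly simplified sketch of the kind of information a CSAF VEX statement carries. This is illustrative only, not a schema-complete CSAF document; the CVE, product identifier, and justification text are placeholders:

```json
{
  "document": {
    "category": "csaf_vex",
    "title": "Example VEX statement for demo-java-tomcat"
  },
  "vulnerabilities": [
    {
      "cve": "CVE-2022-XXXXX",
      "product_status": {
        "known_not_affected": ["demo-java-tomcat:v1"]
      },
      "threats": [
        {
          "category": "impact",
          "details": "The vulnerable code path is not reachable in this image; the affected feature is never enabled."
        }
      ],
      "notes": [
        {
          "category": "description",
          "text": "Free-form analysis notes and links to supporting evidence."
        }
      ]
    }
  ]
}
```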
So what this allows us to do, putting VEX together with attestations, is imagine a workflow where, whenever a new vulnerability is reported, the product team can either analyze the vulnerability, decide it's relevant, and provide a fix, which could contain updates or other information; or they can publish a VEX document as an attestation. That attestation can be verified by the receiver to make sure it was produced by a trusted entity, and that particular vulnerability can then be allowed. So all of this, with attestations and what we were looking at before in the demo, can now be automated and checked as part of a secure software supply chain. This really can be a potential game changer for managing vulnerabilities within enterprises, and also across enterprises, which is why a lot of folks are very excited about VEX and what it means for securing software supply chains.

All right, so to summarize what we've gone through: signing images and verifying them as you deploy them into your clusters is certainly a fantastic start for security, but attestations provide a lot of additional data which is required for higher levels of compliance, like SLSA level three. What we looked at is using Cosign from Sigstore to sign and attach attestations to OCI images, and then Kyverno to verify these attestations through policies which can run as admission control checks or even as background scans. And finally, we briefly talked about VEX, which solves several major issues with the vulnerability management workflows we're used to today, and which fits very well with attestations, allowing VEX documents to be delivered as attestations alongside OCI images. Thank you.