Good afternoon, everyone — good evening, really; we're very close to evening time. What we're going to be speaking about today is FRSCA, an open source project. Real quick before we start, a quick introduction about ourselves. I'm one of the co-founders of Kusari, a supply chain security company. Some of our focus is on building secure artifacts, as well as answering the hard question of: what exactly am I ingesting? Before that, I was a solutions architect in regulated industries as well as commercial settings, and I'm a maintainer on various open source projects, like in-toto attestations, in-toto golang, FRSCA, and GUAC. I'm going to hand it over to my colleague to introduce himself and continue the presentation.

I'm Mike Lieberman. I'm also a co-founder of Kusari. For the past several years I've mostly been an architect with a focus on supply chain security. A few other details about myself: I'm a SLSA steering committee member, for folks who are familiar with SLSA, the supply chain security framework. I'm also a TAG Security lead for the CNCF, and I helped co-lead the Secure Software Factory reference architecture from the CNCF.

All right, so we're going to talk a little bit about supply chain security threats and how FRSCA helps out with them. We'll talk about the Secure Software Factory reference architecture, we'll talk about FRSCA itself, we'll map the components, and we'll also give a little demo of FRSCA. One thing to note: FRSCA is an OpenSSF project underneath the Linux Foundation.

So this is the threat diagram that came from SLSA. There's a lot going on in here: you can have threats coming from the developer, threats in your source code repository, threats coming in from dependencies, from your package manager, and all sorts of different places.
FRSCA, if we look one level in here, is focused on build security. The specific threats it addresses are: the inputs into the build — making sure we're running what we expect to be running; the build itself — is somebody actively getting into the build and compromising things as it's building?; and then verifying that the thing I'm looking at is the thing I actually built.

So you might ask: how can we do this? That's where you need a secure software factory. The secure software factory has a lot of different definitions. NIST — and the DoD, I should say — have definitions that cover not just the build systems but also the people and process. But the secure software factory, as far as the CNCF was concerned, was thought of more from the perspective of the build and how to do it in a cloud-native way. Just real quickly: you have information coming from source repositories, you go into a build environment, you run some tests, you build some artifacts, you distribute those artifacts to artifact storage, you verify that all of that is good, and you run it in production.

So now let's look at how the secure software factory helps with that problem: how do we protect the build and verify we're doing all the right things? This diagram is a little complicated, so let's take it step by step and put the pieces together. FRSCA — it's a backronym for Factory for Repeatable Secure Creation of Artifacts. Because there's SLSA, and there's salsa fresca, we're playing off of that a little bit.
And, as a reminder, it's an OpenSSF project. It's a holistic approach to meeting SLSA requirements. We also do a lot to make sure that if somebody does tamper with the build, it's very clear and very easily detected that they tampered with the build. And we're including runtime authorization, so that you're only running what you built using a secure system like FRSCA.

So let's take that diagram from a few slides back piece by piece. The first step: you have all this different stuff — you have a build, you want to build securely, but you might have a Jenkins, a bunch of extra build VMs, network systems, testing systems. Let's take a step back. You first want to ask: is the workflow I'm running as part of my build itself secure? Am I running all the right steps? Am I generating SBOMs? Am I generating SLSA attestations? Am I going through the security scanning steps, the security linting steps, all that good stuff? That's where we use tools like CUE, which let us constrain what actually runs in the pipeline, and we also enforce that through admission control. You want to make sure no developer can just say, "Nope, I'm not going to run a security scan." Sorry — that will not run in a system like FRSCA, because it's encoded into the policy of the build system.

Now let's look at the next piece. You know you're doing the right things to protect the pipeline itself, but how do you know the build system is actually spinning up the containers it thinks it's supposed to? How do you know that nobody has come in, like an admin on the Kubernetes cluster,
and told the pods to run something different in the middle of your build? Well, that's where workload and node attesters come in. Those tools — in this case, SPIFFE/SPIRE — allow us to verify that the workloads are only the workloads the pipeline told to run. If something gets changed, SPIRE detects it, and you no longer have a valid certificate authenticating that workload to run.

And with all of this, we're recording everything that's actually happening inside the build with a pipeline observer — in this case, Tekton Chains. You don't want a build to self-report what it's doing, because an end-user build that self-reports can say, "Yeah, I ran everything great," and that's not always the case. So you need something that exists outside of the build environment to introspect into what's happening in there. That's where we use Tekton Chains, and all of this metadata gets published into metadata storage.

One thing I just realized I forgot to mention from the previous slide: we tie it all together with runtime authorization, and right now we're testing that out with tools like Tetragon, which allow us to actually look at what's happening inside the build environment.
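As a rough sketch of what that kind of runtime observation can look like — this is an illustrative, hypothetical policy, not necessarily what FRSCA ships — a Tetragon TracingPolicy that watches outbound TCP connections from build workloads might look something like:

```yaml
# Hypothetical Tetragon TracingPolicy: hook the kernel's tcp_connect
# via eBPF so that outbound connections made during a build can be
# recorded. A build that is supposed to be hermetic can then be
# flagged if any such events appear. Names here are illustrative.
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: observe-build-network
spec:
  kprobes:
  - call: "tcp_connect"   # kernel function to hook
    syscall: false
    args:
    - index: 0
      type: "sock"        # capture the socket (peer address/port)
```

Events from a policy like this are what could feed the runtime attestation described later in the talk — for example, noting that a build made TCP connections to a public IP and therefore was not hermetic.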
So the workload and node attesters can detect if you're tampering with Kubernetes or tampering with the pods — but what happens if the pod itself, the container image, is trying to do something malicious, or somebody is actually exec'ing into the container and trying to do something malicious? Using eBPF and similar tools, we can look in there and say: if this build should not have network access and is trying to get network access, we want to know that. And now I'm going to hand it back over to Parth.

So, how does FRSCA actually set up all these different components? Like Mike mentioned, there are multiple components: Tekton, Tekton Chains, Tetragon, SPIRE. How does all this fit into the framework of FRSCA?

Starting off: Tekton. I'm not sure if you're all aware, but basically it's a cloud-native CI/CD solution. It runs natively on Kubernetes, and everything it does — all the tasks — runs in containers, which allows us to be ephemeral and isolated. So it's very useful for the environment we wanted to create.

Next, the pipeline observer, like Mike was mentioning, is Tekton Chains. That's what observes the actual pipeline to see exactly what's happening within the tasks. It gives us the SLSA attestation, and it generates those signed attestations for us so we can use them later to validate: yes, this is exactly the artifact I created through my pipeline.

The third piece is SPIRE — very important, because this is what gives us our short-lived certificates. It also does the workload and node attestation. What that does is validate that, yes, you are running on a node that you trust, and at the same time it validates the workloads you're generating.
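To make the Tekton piece concrete, here is a minimal, hypothetical Task and TaskRun of the kind the demo later uses — each step runs in its own ephemeral container on the cluster (the names and image are just for illustration):

```yaml
# Hypothetical minimal Tekton Task: one step, running in an
# ephemeral Ubuntu container, as in the demo later in the talk.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-artifact
spec:
  steps:
  - name: build
    image: ubuntu            # each step runs in its own container
    script: |
      echo "building the artifact..."
---
# The TaskRun that executes the Task above.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: build-artifact-run
spec:
  taskRef:
    name: build-artifact
```

Because every step is a container, the build is ephemeral and isolated — which is what lets SPIRE attest the workload and lets Tekton Chains observe and sign the results.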
So, the short-lived certificates — keep those in mind; we'll show in the demo exactly how that all works together. What you can see from this diagram is that SPIRE connects directly with Pipelines and with Chains. It uses those short-lived certificates to sign the results, as well as the TaskRun object itself, to make sure it's non-falsifiable.

Then, for runtime visibility: Tetragon. That's one of the tools we're looking into. Because it's Kubernetes-aware, it makes things very easy — remember, in Tekton everything runs within containers. Because Tetragon is Kubernetes-aware, we can view the pods themselves and the processes running within them, isolate them, and use that to make our runtime attestations. We can validate that, yes, this is a hermetic build, or see whether it's reproducible, and so forth.

The next piece is Kyverno, which is our admission controller. We use that to enforce policies — to enforce that we're running signed images with valid attestations in our production environment. And you can further enhance that later on with projects such as GUAC, which I'll speak about a little later.

Then there's Vault. Vault houses our long-term keys — that's what we use to sign our images and so forth. So SPIRE has our short-lived certificates, Vault has our long-term keys, and that's what we use to validate things down the line.

All this information gets stored in a metadata store — in this case OCI: the image itself, the signature, the attestation, the SBOM, all of that gets stored in the OCI registry.

And then there's CUE. CUE is a configuration language.
What CUE allows us to do is abstract away a lot of the complexity that comes with using SPIRE, Tekton, Chains — all this stuff together is complicated, including setting up the Kyverno policies. So can we make it simple for developers and users — have everything set up for them from the start and take out all the complexity? A simple configuration file that sets everything up. That's what CUE lets us do: abstract away the complexity that comes with some of these tools.

And then, Tekton Triggers. It's simple: we want the developer to not have to think about the security or the pipeline or anything like that. All they do is trigger the build however they want — a PR, a push, whatever. They trigger the build and it runs through all the different steps: it generates the pipeline, generates the short-lived certificates, creates the attestations, the SBOMs, and so forth. It does all that in the background, and developers don't have to think about it. We want to make it very simple.

And then finally, running in the production environment: once you're in the production environment, we again have an admission controller validating — Kyverno validating that, yes, the attestations were created and signed, the image was signed, and so forth — all the data we need to validate that, yes, I can run this in production because this artifact is something I trust.

So how does FRSCA meet SLSA? At least SLSA v0.1 — because in the next couple of weeks, SLSA version 1.0 is going to come out, so some of these requirements might change, and SLSA level four itself is very much in flux.
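As a sketch of the admission-control piece just described — hedged, since the exact policies FRSCA uses live in the project repo — a Kyverno ClusterPolicy requiring a valid signature plus a SLSA provenance attestation might look roughly like this (the registry reference and key are placeholders):

```yaml
# Hypothetical Kyverno policy: block any Pod whose image is not
# cosign-signed with the expected key, or which lacks a SLSA
# provenance attestation. Registry and key values are placeholders.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-provenance
spec:
  validationFailureAction: Enforce   # reject, don't just audit
  rules:
  - name: verify-signature-and-attestation
    match:
      any:
      - resources:
          kinds: ["Pod"]
    verifyImages:
    - imageReferences: ["registry.example.com/*"]  # placeholder registry
      attestors:
      - entries:
        - keys:
            publicKeys: |-
              -----BEGIN PUBLIC KEY-----
              ...your cosign public key...
              -----END PUBLIC KEY-----
      attestations:
      - predicateType: https://slsa.dev/provenance/v0.2
```

An unsigned image, or one without a matching attestation, is rejected at admission time — which is the behavior the demo shows with the hand-built Python image.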
So you can see from this slide: SLSA one, SLSA two, and SLSA three are met, especially the non-falsifiable provenance. SLSA four is very project-specific — if you ensure the environment it's deployed in meets those criteria, then FRSCA can meet SLSA level four as well.

So let's move on to the demo. The first thing I want to show off is the piece I mentioned about non-falsifiability in terms of SPIRE. If someone comes in — someone having a bad day in the office, or some intruder who got into our system — and they want to modify our pipeline as it's executing, how do we catch that? How does FRSCA catch that?

What I have open here is a really simple TaskRun, and you can see it's using the Ubuntu image, just as an example. So I'm going to trigger this build. As this demo is running — real quick before we move on — I just want to show that the pieces we talked about before are all running in my environment: Kyverno, SPIRE, Tekton Chains, Pipelines, Triggers, Tetragon, and so forth. Everything is set up; nothing extra has been introduced.

So we're going to run this demo, and what it does, basically, is that as the task is running, it tries to inject and change the image being used to build the artifact. What's going to happen is that the SPIRE integration with Tekton Pipelines notices there's a change in the TaskRun object, invalidates the whole pipeline, and it doesn't proceed any further. It makes it so that if an image got produced, it doesn't get signed by Tekton Chains, and no attestation is associated with it, because it was invalidated. So right here you can see that it failed — the TaskRun condition was not verified.
The TaskRun result itself was not verified. And if you scroll up — look, it changed; it's not Ubuntu anymore, there's another image in there now. So something modified this as it was executing. We can scroll up here and see some more output basically saying: yes, this was not verified, SPIRE did not validate this. So it negates that whole man-in-the-middle attack right there.

The next piece: let's say I don't want to use FRSCA — I just want to build my own images and push them into the production environment. I don't want to go through this process; what can I do instead? How does FRSCA protect against that? I've already built a Python image in the background, not using FRSCA, so it didn't go through the whole process: the image didn't get signed, the attestation didn't get generated. So what happens? If you remember from the earlier diagram, in the production environment the admission controller is still running — Kyverno is still validating that the image needs to be signed and needs a valid attestation associated with it. Because there's nothing associated with this image, it blocks it right away. You can see it failed to verify the signature, and there's no matching attestation associated with it, so it's not allowed to run in the production environment.

So we saw the bad things — oops, sorry, wrong one — we saw the bad paths. Now let's look at the good pipeline: what should actually happen when everything runs properly? I just kicked off a pipeline here, and it's going to run through the usual steps: cloning down, building, doing a vulnerability scan, generating SBOMs. All of it happens in the background. All I did was trigger the build — I pushed a PR or some kind of change to my Git repo.
That triggered the build, and it just runs in the background. We can see that it finished here really quickly. That's the thing I want to show off: with FRSCA, you don't have to think about all that security machinery, because it's all happening in the background. All those SPIRE checks I showed you before — someone coming in and changing something — are already happening in this run, but you don't see any of it in your everyday workflow. It's built-in security.

So let's take a look: what does this actually look like now that it's finished? We can go verify that image. It got pushed to the OCI registry, and you can see — this is the actual image up here — there's a .att tag, that's the attestation; there's the SBOM associated with it; and there's a signature associated with it. Then we can use cosign to verify: yes, this is something that was generated by me, it has a valid signature — those are the signature checks — and it also checks that the attestation is valid and signed by someone I trust.

Now let's look at the actual TaskRun object real quick, just showing what Tekton Chains does out of the box: it generates the actual attestation, the SLSA attestation — here's the signature. And if you scroll down a little, you can see those checks I mentioned for SPIRE are still occurring. If you remember, the earlier run said "this is not verified"; here you can see it successfully verified all the results, so it's all valid. And it generated — real quick, I'll just show it off — a valid SLSA attestation. That's here, in the in-toto format.
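For a sense of what that attestation contains, here is a trimmed sketch of an in-toto statement with a SLSA v0.2 provenance predicate, rendered as YAML for readability — the actual artifact Tekton Chains emits is JSON wrapped in a signed DSSE envelope, and all values below are placeholders:

```yaml
# Illustrative in-toto statement with a SLSA v0.2 provenance
# predicate. Image name, digests, and URIs are placeholders.
_type: https://in-toto.io/Statement/v0.1
subject:
- name: registry.example.com/myapp       # the image that was built
  digest:
    sha256: "<image-digest>"
predicateType: https://slsa.dev/provenance/v0.2
predicate:
  builder:
    id: "https://tekton.dev/chains/v2"   # identifies the build system
  buildType: "tekton.dev/v1beta1/TaskRun"
  invocation:
    configSource: {}                     # how the build was invoked
  materials:                             # inputs that went into the build
  - uri: "git+https://github.com/example/myapp"
    digest:
      sha1: "<commit-sha>"
```

The subject digest is what ties the attestation back to the exact image in the OCI registry, and the signature over the envelope is what cosign checks during verification.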
And then, in the background — like I said, it uses Tetragon — it generates this runtime attestation. So this runtime attestation up here got generated. This is very much a work in progress; we're still defining what the predicate looks like and how exactly it should be formatted, but you can see some of the information. For example, it made some TCP connects out — you can see that right here. So now we know for a fact that this is not a hermetic build, because it was making calls to a public IP address. And then lastly, just showing off the SBOM: an SBOM got generated for this specific image we built, with all the packages that came with it.

So, going back to the slides. One of the things we want to do in the future is utilize GUAC. GUAC is Graph for Understanding Artifact Composition. You can see here the artifact is in the middle — the image we created — with two attestations associated with it: the runtime attestation as well as that SLSA attestation I mentioned, and then the SBOM and so forth. If you want to learn more, there's a talk coming up tomorrow that goes into detail on exactly how this works and how we want to build policies around it. And if you remember, I mentioned Kyverno before — utilizing GUAC, you can take this information and use it to make decisions in a production environment.

As for our future plans: like I said, SLSA version 1.0 is coming out, so we want to remap to it and make sure we meet all those requirements. The SPIRE integration work with Tekton is still in progress — that's getting merged into Tekton itself. And we're finalizing the runtime attestation, which, like I mentioned, is still in the works.
So, getting that merged — the Tetragon and Tekton Chains work — is also in progress right now; we want to get all of that integrated. And then of course the final piece is CUE: we want to make this as simple as possible. Can we abstract out those complexities even more, and make it very easy for people to use FRSCA, to deploy it and get started without even thinking about the security aspect of it?

So, a quick recap. We talked about the reference architecture — FRSCA is an implementation of the Secure Software Factory reference architecture. We talked about the different components, how it's a holistic approach to supply chain security, how there's a tamper-evident seal with the SPIRE and Tekton Chains integration, and finally the runtime authorization, to make sure we're running exactly what we intend. And this is an OpenSSF project, so we have an OpenSSF Slack channel for FRSCA. We have a community meeting every other Wednesday at 10 a.m. So definitely come in if you want to ask questions or have any comments, and you can contact us directly — my contact and Mike's are right there on the slide.

Yeah, and we're also obviously looking for more contributors, if folks want to come in and help plug in additional tools. Remember, the idea here is to make sure this thing is a good representation of what a secure build system could look like. So we're definitely always interested in adding additional use cases, and we're looking for additional contributors, and so on. Thank you. Any questions?

Hello? Okay, cool. So that was great, and I have a quick question that I was curious about.
From what I understand, Tekton Chains produces these attestations, and — maybe currently, or maybe with the SPIFFE/SPIRE integration — they'll be signed with the identities issued by SPIRE. But are the signatures also made with that same identity? And if so, what's the purpose of the signature?

They're not signed by the SPIRE identities right now. That's where Vault comes into play: the long-lived signing key is stored securely in Vault, and that's what's used for signing. SPIRE is just used for those short-lived certificates, which are just used to validate that nothing changed in your task — that your results are valid and your TaskRun object itself is valid. That's all it's used for.

Gotcha. And the long-lived key — there's a transit plugin in Vault, so the key is never even taken out of Vault; the signing happens inside it. Gotcha, okay.

Yeah — sorry if that was a little confusing; there's a lot to cover. At Open Source Summit I gave a whole talk just on the SPIRE integration with Tekton, and I gave a previous talk yesterday on the Tetragon and Chains integration. So there's a lot involved.

Hey, great presentation. I saw you're doing a signature with Vault at the end. Have you looked into using any sort of timestamping service or anything like that as an alternative to using Vault? Yeah, it's definitely on the table. It's one of the things that, at least right now, the existing maintainers are not hugely familiar with how to integrate. It's definitely something we'd look into if folks who understand it better could help us out a little. Roger, thank you. Yeah, and we're hoping to integrate whatever sort of signing and authorization mechanisms make sense — we're looking to integrate with as much of that as possible.
A follow-up question on that: how does the authorization against Vault happen for the signature to be made? The question was: how does Chains authorize against Vault to sign the artifact? So actually we're using OIDC with the short-lived certificates from SPIRE. Chains authenticates to Vault via OIDC with the short-lived certificate it obtains from SPIRE; that's how it logs in, and then it uses the transit plugin to sign the artifact. Good, thank you.

Yeah, and it's something we're also interested in — going toward things like keyless signatures. We're looking at — and this is, once again, maybe a pipe dream a little bit — could we use, say, the OIDC identity of the person who kicked off the build to generate a root certificate, and then use that to generate all the other pieces: child certificates based on both the workload and what we expect the steps to be? And we're also looking at potentially integrating things like in-toto layouts as part of it as well.

Hi, thanks. You mentioned possible integrations with other tools, and I'm wondering how this would work if you're using a traditional CI/CD system like Azure DevOps. How do you ensure you have this separate third-party observer — is it possible to verify an external process outside of a cluster?

Yeah, so this is also one of the key goals of FRSCA: to show what is potentially possible. We're looking at this as maybe a way to then have conversations with some of these other tools, whether they're SaaS or actual deployed tools — because folks have also asked, hey, this is a lot of great stuff, could you replace Tekton with Jenkins?
And the problem is, there's a lot happening right now that cloud-native tools just make very, very easy to integrate. But we're hoping folks can look at how it's getting done in the cloud-native space and say, okay, cool, maybe we can pull this out and use it with CodeBuild or Azure DevOps or any of these other things. And we're definitely open to those conversations. For us it's a balance to strike, because some elements of it can be pulled out quite easily. The CUE piece lets us easily generate all of the custom resources. One of the nice things about CUE is you can automatically generate things like your Tekton Tasks and Tekton Pipelines, along with the policy used alongside them, because you have one file that says: okay, this maps to this ConfigMap; great, this ConfigMap gets automatically put into all the custom resources that need it. For this use case it's really nice, as opposed to, say, using Helm, where every single time you'd have to do a new deployment. Here it's: hey, I have a new pipeline — okay, great, this one little bit of CUE can generate all that stuff. And the end user doesn't have to worry about whether they need to include an SBOM step, or a scanning step, or a SLSA step, or any verification steps — the CUE configuration can define a default pipeline. We believe a lot of that work can be reused in these other settings — say, hey, can we use this to generate secure Azure DevOps pipelines? But some of the things, like workload identities and the observers, are areas we know some of the cloud providers are starting to look at, though we're not sure exactly where that stands yet.
But we know, for example, GitHub Actions has a pretty good model around workload identities. Thank you.

Hi there, I had a question about doing security scans or other kinds of compliance checks in the pipeline — CIS benchmarks, tfsec, Checkov, things like that. How easy is it to integrate and sign those as attestations, and then double-check that, for instance, you don't have any CVEs, or that you have a certain level of compliance?

Yeah, so there are two things happening. Tekton itself is building out — and I think Billy knows the exact term for it; I think it's called "types"? Okay, yeah — there's some work on the Tekton side to have the ability to declare an output type, and based on that you can say: great, this is an output type that should be signed and should get a SLSA attestation associated with it, and so on. In the meantime, one of the things we've done with FRSCA is use those short-lived SPIFFE identities in certain contexts — you're not getting, say, SLSA level four for that benchmark run, but you might still get a high SLSA level, like SLSA two or three, by having that container use its own workload identity to have Vault sign its output, whether it's an SBOM or something else. In fact, we didn't show it in this example, but if you go to the GitHub repo, we do have examples where we're generating SBOMs and also signing the SBOMs and all that sort of stuff. Okay, thank you so much. Thank you. We also have.