Hi, I'm Brandon. I work on the Google open source security team, and naturally part of that is securing open source, and part of that is supply chain security. So let's talk about what everyone's been talking about. Over here we see the supply chain security bingo card. Software supply chain security is something everyone's been talking about. We've seen the charts: the arrow is always going up. And this is important. Indeed, software supply chain compromises and attacks have been on the rise of late. However, the industry has also come together with a response to match. We've seen a lot more efforts, a lot more working groups, a lot more foundations and organizations prioritizing supply chain security. In fact, even at this conference we've seen a very large number of submissions, the most ever, around the topic of supply chain security.

So as a community we have something to show for it. Let's look at all the cool new and existing projects that have been trying to solve the supply chain security problem. This spans many different areas, including build systems, as you've heard already in today's keynote, signing and trust, software metadata, and scanners. There's a whole lot going on here. I'm going to spend a little time talking about what the layers are, the progress we're making, and then talk about the next steps and where we should go.

To start, in this list we have projects that help set a strong foundation of trust. We've heard it a ton of times: Sigstore, and underpinning that, TUF, The Update Framework. They help keep signing simple and open. In addition, we also have zero-trust projects like SPIFFE, SPIRE, and Keylime that are integrating with the entire ecosystem as well. We've also seen a lot of activity and progress in terms of software metadata.
Standards such as SLSA are working on their 1.0 release, which will have a draft in a couple of weeks and hopefully go GA in the next one or two months. And if you're in the US and have heard about the executive order, which I think most of you have and are affected by, with SBOMs we see both the SPDX and CycloneDX standards becoming more popular, and on top of that, tooling coming in, which is very important. We also see things like VEX, the Vulnerability Exploitability eXchange, coming out of the CISA working groups to tackle the question: now that I know what my vulnerabilities are from all my scanners, how do I triage them? What's important to me and what is not? Maybe there are some vulnerabilities I'm not affected by. And finally, of course, we have build systems like Tekton and the OpenSSF's FRSCA that help create these trusted artifacts.

Seeing all this great tooling being built, TAG Security had an effort last year around creating a secure software factory. We created a reference architecture to show how you can take these different components and put them into a cohesive structure to produce trusted software and attestations.

However, one thing we noticed about the projects we just looked at is that a lot of them focus on the producing side. How do I produce trusted software? How do I produce software metadata that is useful? How do I get SBOMs? That's the number one question everyone's asking. But as with any supply chain, where there are producers there are consumers as well. We've done a great job producing trust; however, there are so many open questions about what to do with it. We have all these documents: how do you evaluate them? We have questions like: OK, I have an SBOM, what do I do with it? How many levels deep do I have to track things? How many levels deep of SLSA do I have to track, transitive SLSA?
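As a rough illustration of the VEX triage idea described above (filtering scanner findings using producer statements), here is a minimal, hedged sketch. The data shapes are simplified stand-ins, not the real SPDX/CycloneDX or OpenVEX schemas, and the component names and the second vulnerability ID are made up for the example:

```python
# Sketch: triaging scanner findings with VEX-style statements.
# Shapes are simplified stand-ins, not a real SBOM/VEX schema.

# Vulnerabilities a scanner reported for our components (purl-style names).
scanner_findings = [
    {"component": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
     "vuln": "CVE-2021-44228"},
    # Placeholder vulnerability ID, purely illustrative:
    {"component": "pkg:pypi/requests@2.28.1", "vuln": "EXAMPLE-0001"},
]

# VEX-style statements from the producer: some vulns don't affect this product.
vex_statements = {
    ("pkg:pypi/requests@2.28.1", "EXAMPLE-0001"): "not_affected",
}

def triage(findings, vex):
    """Keep only findings that VEX does not mark as not_affected."""
    actionable = []
    for f in findings:
        status = vex.get((f["component"], f["vuln"]), "under_investigation")
        if status != "not_affected":
            actionable.append(f)
    return actionable

for f in triage(scanner_findings, vex_statements):
    print(f["vuln"], "still affects", f["component"])
```

The point of the sketch is simply that VEX lets a consumer shrink the pile of scanner output to the findings that actually matter for their product.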
Or even within each software metadata document, which of the fields are important and which can I safely ignore? Today we are faced with an overwhelming amount of software supply chain metadata, and we somehow need to find meaning in it. I feel like this picture expresses how I look when I see a 300-megabyte SBOM: there's not much meaning you can make of it besides grepping through for a few things.

So how do we address this consumption story? How do we make sense of all this software metadata? To recap: today we've established a strong trust foundation, a decentralized, flexible anchor-of-trust fabric. On top of that we have a layer of attestations and metadata, consisting of schemas and sources for rich security metadata. Now we need to build on top of that. We need to talk about the consumption story, and here's a framework to think about it: we have to add the layers in green here, aggregation and synthesis, as well as policy and insight, to convert all of this into actionable items.

So what exactly are these layers? Let's talk about aggregation and synthesis. In a nutshell, this is about bringing all the metadata together and performing intelligent linking between it. The best way to illustrate this may be through an example. Let's say you have a home-grown application, an Acme application. To reason about its security, the first thing you need to know is who built this application internally and how it was built. So you have to pull data from internal teams and internal systems, build systems, source repositories; you need to get all that information in. And as we all know, you're probably using open source libraries in it, so the next question is: how do I get information about the open source libraries and software that I'm using?
For that we have to pull information from the package repositories of the various ecosystems, like PyPI, RubyGems, or Maven Central if you're using Java. On top of that, if you're using vendored libraries and software, you'll have to pull that in from your vendors as well. And last but not least is threat intelligence: given all this metadata, how do I know what's important? What do I have to check for? What affects my security posture? This includes things like CVEs, the one we're most commonly familiar with, but now we also have VEX. In addition, we want to take it a little further and think about the developers and actors, who's producing what in the software supply chain.

But collecting all these SBOMs and files and putting them in a single directory doesn't really do much, right? We just end up with files in the same directory, and maybe if you're really good with grep you can do some great things with it. The point is that we need to link them intelligently and be able to perform queries over them. For example, if I give you an SPDX file, a CycloneDX file, and a SLSA file, how do I make a query across them? How do I reason about how this particular component in my SBOM relates to this SLSA document that tells me how it was securely built?

As examples of projects doing aggregation and synthesis today, we have GUAC, the Graph for Understanding Artifact Composition, which we're working on together with a couple of organizations: Kusari, Purdue, and Citi. The idea is to take these data sources and link them intelligently so you can query them as a graph. And of course we have public data source aggregators like deps.dev and Repology that give you information about open source libraries, their security as well as licensing.
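To make the "intelligent linking" idea concrete, here is a minimal sketch of joining an SBOM component list, SLSA-style provenance, and a vulnerability feed on a shared package URL (purl) key, so one query can span all three documents. Everything here is illustrative: the component names, builder URL, and vulnerability ID are invented, and GUAC itself models this as a real graph store rather than Python dicts:

```python
# Sketch of "aggregation and synthesis": join three separate metadata
# documents on purl so they can be queried together. Data is made up.

sbom_components = [
    {"purl": "pkg:npm/left-pad@1.3.0", "license": "WTFPL"},
    {"purl": "pkg:golang/github.com/acme/widget@v1.2.0",
     "license": "Apache-2.0"},
]
slsa_provenance = {
    "pkg:golang/github.com/acme/widget@v1.2.0": {
        "builder": "https://tekton.example/builder", "slsa_level": 3},
}
vuln_feed = {"pkg:npm/left-pad@1.3.0": ["EXAMPLE-0002"]}  # placeholder ID

def synthesize(components, provenance, vulns):
    """Join the three documents on purl into one queryable view."""
    view = {}
    for c in components:
        purl = c["purl"]
        view[purl] = {
            "license": c["license"],
            "provenance": provenance.get(purl),  # None = no attestation
            "vulns": vulns.get(purl, []),
        }
    return view

view = synthesize(sbom_components, slsa_provenance, vuln_feed)
# A cross-document query: which components have vulnerabilities AND
# no build provenance telling us how they were made?
risky = [p for p, d in view.items() if d["provenance"] is None and d["vulns"]]
print(risky)
```

Once the documents are linked on a common identifier, questions like "how does this SBOM component relate to this SLSA attestation" become a lookup instead of manual cross-referencing.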
And of course we have package managers that have been quietly doing this job for many years to some degree, such as PyPI and RubyGems, and correspondingly OCI registries. I just want to point out that we actually have a talk this afternoon by Brandon Mitchell about exactly that: how we're attaching SBOMs and SLSA attestations to OCI registries.

Going on to the next layer: once we have the metadata aggregated and synthesized, we need to be able to apply policies to it. On one side of the coin is how we actually enforce these policies, and I think we're pretty much set on that: we have CNCF projects like Kyverno and Open Policy Agent that can do it, and if you're in the enterprise, you have your favorite governance, risk, and compliance and CMDB systems. On the other side, however, lies the question: what does it mean to have a secure software supply chain? Most people ask for a policy that says "containers running in my cluster must have a secure software supply chain." But what does that actually translate to? Can we break it down into tangible questions we can tackle? Are we talking about vulnerabilities, build provenance, tooling, developers? How many layers of transitive dependencies do we care about? How do we reason about trust, risk, and policy? These are largely unanswered questions, and TAG Security is starting an effort to rally the industry around defining what good looks like for software supply chain policy. This is ongoing as part of the supply chain working group, which meets every Thursday.

Some of the questions we'll be exploring in the group concern the various kinds of policies, which I think break down into three main categories: reactive, preventative, and proactive. Reactive is, say, Log4j, or the OpenSSL vulnerability we talked about this morning: there's a new hot vulnerability out.
Question one is: am I affected? Then: how am I affected, and which software is affected? And then: how do I go about remediating across my entire organization? Next we have preventative policies, where I want to check whether software meets a compliance requirement before deploying it into my cluster. This consists not only of measures like vulnerability scanning or fuzzing; we also want to include organizational claims and certifications on software, for example certifying that only certain departments can run software on certain clusters.

And finally, we have what we call proactive policies. This is somewhat more exploratory, but it's about trying to identify, you know, the next Log4j before it happens. For those familiar with the XKCD comic, we're basically trying to find the underpinning libraries that are critical to our open source infrastructure. There's some prior art here; for example, the OpenSSF has criticality scores. However, as we've seen with the Log4j/Log4Shell case, criticality scores are only one part of the picture, and there are definitely more metrics, analyses, and policies we can build to be more proactive in finding these issues before they happen.

So, in conclusion: we've made a lot of good progress in producing good software supply chain security metadata. Now we need to start making it easier to consume what we've built. TAG Security has many efforts, and I encourage everyone to drop by, have a chat with us, and get involved. We also have a couple of talks happening today. Besides the one I mentioned, there's "Not All That's Signed is Secure: Verify the Right Way with TUF and Sigstore," and a talk on spicing up container image security with signing, both going on today.
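The reactive questions above ("am I affected, and through which software?") can be sketched as a walk over an aggregated dependency graph. This is a toy example under assumed data: the application and library names are invented, and a real system would query something like GUAC rather than an in-memory dict:

```python
# Sketch of a reactive policy query: given an aggregated dependency graph,
# which paths from my application reach a newly announced vulnerable package?
from collections import deque

# Toy graph: app -> direct deps -> transitive deps (names are made up).
deps = {
    "acme-app": ["web-framework", "logging-lib"],
    "web-framework": ["http-core"],
    "logging-lib": ["log4j-core"],
    "http-core": [],
    "log4j-core": [],
}

def affected_paths(root, bad_pkg, graph):
    """Breadth-first search for every dependency path from root to bad_pkg."""
    paths, queue = [], deque([[root]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == bad_pkg:
            paths.append(path)
            continue
        for dep in graph.get(node, []):
            queue.append(path + [dep])
    return paths

print(affected_paths("acme-app", "log4j-core", deps))
# One path: acme-app -> logging-lib -> log4j-core
```

Answering "am I affected" is then a non-empty result, and each returned path tells you which intermediate software to remediate; "how many levels deep do we care about" becomes a cutoff on path length.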
With that, I hope to be able to come back to the next CloudNativeSecurityCon and see the policy and the aggregation-and-synthesis parts of the picture filled out with many more projects and community efforts. So with that, thank you very much.