My name's Isaac Hepworth. I'm a product manager at Google. I work a lot on supply chain stuff: I work on Google's own internal supply chain, and I do a lot of work in open source as well. I chair the Supply Chain Integrity Working Group in the OpenSSF, and I work closely with the SLSA team, Sigstore, and so on. And I'm here with my colleague Brandon, who I'll let introduce himself. Hi, I'm Brandon. I'm a software engineer at Google. I work on software supply chain as well as open source security, concentrating on metadata and what to do with it: a lot of SBOMs, which we're going to talk a lot about today, VEX, SLSA, and other jazz. I'm also involved with CNCF TAG Security. All right, great. Well, thanks for turning out to talk about SBOMs. Quick show of hands before we get started: who's heard of SBOMs? Awesome. OK. Who has heard of the US executive order which specifies an SBOM requirement? OK, good showing. All right, this is good context. Let's begin. So we've got three things to talk about today. We're going to talk generally about Google's journey through the SBOM landscape over the last couple of years, driven primarily in response to the executive order in the United States. I'll talk a little bit about how we iteratively understood the assignment: what are we supposed to be doing? What does the executive order even say? What does it mean? How should we approach it? Brandon is going to talk about our journey into the technical parts of SBOM land: the tooling and infrastructure we built, how that played out, the properties we were looking for in SBOMs, how we guaranteed quality, and so on. And at the end, Brandon and I are both going to share a few of our hot takes, some insights or surprising things we've learned on the journey, and give you a sense of what's next. So let's get started: understanding the assignment. 
I joined Google a few years ago, and within a week or two of joining, I received this email. The subject line is, "Producing an SBOM for various ecosystems to meet the executive order." And it had this ominous line at the bottom: "Isaac, do you have bandwidth to drive this?" I was a couple of weeks into Google, and apart from training and everything, it turns out I did have bandwidth to drive it. So let's dive in. I figured I should start by understanding what the executive order actually is. It turns out there's an executive order that came out from the Biden administration in May 2021: Executive Order 14028 on Improving the Nation's Cybersecurity. It's a long document. Each slide has a URL in the footnote, and also a QR code which will take you to that URL if you want to look up the primary source. The executive order, amongst other things, has an implication for software companies that supply products to the US federal government: these software producers have to be ready to provide SBOMs for their products. And there's an obvious question: what is an SBOM, and how should we understand it? Well, it turns out that CISA, a part of the US government, has got your back here. cisa.gov/sbom has a great set of resources on understanding software bills of materials. And I love this quote: an SBOM is "a nested inventory, a list of ingredients." It turns out that if you look at the software supply chain alongside the supply chain for, say, packaged foods, there's a set of really rich parallels. And hey, think about SLSA: it could be the tamper-proof seals on your food, or the food handling processes you use in your food manufacturing. Amongst these rich parallels, I think that thinking of an SBOM as a list of ingredients for software is a really useful framing. 
And CISA will guide you on what they're useful for. Again, like food: if I'm a producer of food, I should be keeping track of what's in it; that's my basic responsibility. If I'm in the grocery store picking things off the shelves, I can use the ingredients labels on foods to select foods which I'm going to enjoy, which are going to be good for me. And when I'm at home with those foods, as I'm consuming them, I can look at the ingredients again and make sure I'm not allergic to any of them, and that they don't have any known contaminants in them. You can think of those as vulnerabilities, maybe. I can assess that ingredients list at the point of consumption and decide: do I want to consume this food? The same is true for software. And so CISA lays out use cases for SBOMs in, sorry, NTIA lays out use cases for SBOMs in these three categories here. Mentioning NTIA, I just confused CISA and NTIA; one of the aspects of my journey through this world was making a whole load of new friends with a bunch of government acronyms, many of which I didn't know existed when I started, and some of which I'm really not quite sure how they fit into the picture today. I would really struggle to draw a joined-up diagram of how these various pieces fit together: how they interact, which ones produce normative requirements and which ones don't, which are guidelines, which are specifications, where regulation comes from, where legislation comes from. There's a whole story here, which is probably an entire other talk. But as a product person, I started off thinking: let's think about the requirements. OK, I got this email, we're going to do something with SBOMs in response to the executive order. How should I think about what the actual requirements are? And I started simple enough. 
For each Google product, we're going to produce an SBOM, and we're going to identify the dependencies of that product. Simple enough. Then you look a little closer at this and highlight, well, "an SBOM": maybe some more specificity is needed there. Again, writing requirements, I need to give engineering teams at Google a sense of exactly what they're going to have to do. It turns out that NTIA, again, has your back. There's a document called "The Minimum Elements for a Software Bill of Materials." This is a great document: super approachable, super short, very understandable. It lays out, from the US government's perspective, what makes for a baseline adequate SBOM. It talks about the minimum set of data fields an SBOM should include about each dependency. It has some mention of automation support, what serialization and interchange formats you should be thinking about; it highlights SPDX, and Brandon's going to talk a little more about that. And I really love this thing down at the bottom right about the accommodation of mistakes. It was a welcome discovery to see the US government thinking this way: the idea that as SBOMs become operationalized at scale, there will be hiccups from time to time, and SBOM consumers should be fairly lenient with respect to genuine errors made in good faith. Again: highly pragmatic, really approachable as a baseline specification document. So I love this document; definitely recommend you check it out if you haven't before. So we have some idea about what an SBOM is, and we can go back to our requirements and produce a second draft. Now, for each Google product, we're going to produce an SBOM in line with NTIA's minimum elements, identifying dependencies. And you may think, OK, we're done. Great. But actually, you look a little closer at this and you go: well, "each Google product", that's a lot of Google products. 
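To make the minimum elements concrete, here's a rough Python sketch of the kind of per-dependency record the NTIA document calls for. The field names are illustrative paraphrases of the NTIA data fields, not official SPDX property names, and the values are made up:

```python
# A sketch of the NTIA "minimum elements" data fields for one SBOM
# component entry. Field names are illustrative paraphrases of the
# NTIA document, not official SPDX keys; values are invented.
component_entry = {
    "supplier_name": "Apache Software Foundation",  # who supplies the component
    "component_name": "log4j-core",                 # the component itself
    "version": "2.17.1",                            # version string
    "unique_identifier": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1",
    "dependency_relationships": ["<parent artifact depends on this>"],
    "sbom_author": "example-builder",               # who generated the SBOM data
    "timestamp": "2023-01-01T00:00:00Z",            # when it was generated
}

# The NTIA baseline asks for each of these fields to be present.
REQUIRED_FIELDS = {
    "supplier_name", "component_name", "version", "unique_identifier",
    "dependency_relationships", "sbom_author", "timestamp",
}
missing = REQUIRED_FIELDS - component_entry.keys()
print(missing)  # set() -- nothing missing
```

A check like the last two lines is the kind of thing an SBOM consumer could run when being "lenient" about good-faith errors: flag missing fields rather than reject the document outright.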
Wing, Wave, Waze, Waymo, Workspace, the Pixel Watch: and these are just the ones starting with W. So there's a whole ton of Google products to think about. Again, as a product person on a timeline, I'm thinking about how we can scope this. From the universe of Google products, where should we begin? Well, in this case, NIST shows up with this document here, again linked from the slide, with actual guidance on what NIST defines as EO-critical software, and a list of characteristics software must have to be considered critical in the context of the EO. That gives some great guidance on where we should focus our efforts to begin with: which products we should prioritize, and how we should phase our approach to building SBOMs across Google's product portfolio. So we go back and revisit our requirements again, and make this slight tweak: for each Google product in scope, we're going to produce an SBOM, in line with NTIA's minimum elements, identifying dependencies. And again, you may think, well, we're done, right? Well, not quite, because there's nuance lurking in this word "dependencies." SBOMs were originally conceived for a world of packaged software, where you get software on, say, a floppy disk, like that Chrome floppy disk we saw with the SBOM illustration. In that sense, you look at what the bits and bytes on that distribution are, the bits and bytes you give to a customer. You can imagine notionally starting at the middle of the floppy disk and working your way out to the edge, and when you get to the edge, you're done: you've inventoried the whole thing, and everything you've captured is your dependency set. Well, what you've captured there is the bundled runtime dependencies. But there are other dependency types, too. 
Your software may have shared dependencies with other pieces of software on the same deployed platform. There may be build dependencies: what tools were used to build the software? What tools were used to test it, link it, integrate it, sign it, package it? And this is important; again, think of the food analogy. Yes, I want to know the ingredients in my peanut butter. But I also want to know what machinery in the factory made that peanut butter. Was it leaking engine oil into the peanut butter while it did so? The build tools you use do have an implication for product quality. Service dependencies: again, the Chrome example. Sure, you may imagine Chrome on a floppy disk, but Chrome itself syncs to the cloud: bookmarks, cookies, passwords. So there's a hidden server side of Chrome which you may not consider, but it's a dependency of the product, so service dependencies perhaps come into scope. There are platform dependencies, too: what about the operating system? What about the infrastructure or middleware this stuff runs on? Is that a dependency also? And then configuration as well. Configuration can change the behavior of software; it can change what it's vulnerable to. So there are implications there for how you think about dependencies. Again, as a pragmatic product person, I decided to scope in: let's focus on the most critical part of the SBOM, which we identified as the bundled runtime dependencies, including transitive ones. So we come back to this idea: we're shipping a piece of software, inventory all the bits and bytes of the software, and you're done. But we're not quite done, because there's some nuance in this word "identifying" as well. How should we actually have an SBOM which identifies the components? What identification scheme do we use for software components? 
A given piece of software, log4j in this case, may be referred to by a bunch of different names, with different capitalizations, maybe including version numbers: a combinatorial explosion of ways to even refer to log4j. And when you try to boil that down to an actual, unambiguous, externally referenceable identifier, there's more to it than meets the eye. I've linked from the bottom here: CISA actually put out a request for comments in October last year asking the industry for input on how this problem should be approached. For open source, it turns out that purl, the Package URL, is a great identification scheme. For closed-source components, it's a relatively unsolved problem; it's not solved terribly well right now, and there are active efforts to do better here. We ended up using purls. And so our requirements now say not just identifying the components, but using unambiguous, external identifiers. And you know what, we're going to add one more thing as well: why are we doing this? The focus of this initial effort was to make vulnerability management easier for customers. Why is that important? Because it turns out that one of the primary things you need to think about when generating SBOMs is: how do I join up my SBOM data set to a vulnerability data set? If your vulnerabilities are indexed using one set of software IDs, and your SBOMs are indexed using a different set of software IDs, you're going to have trouble. Landing on purls as a way to identify software allowed us to do that joining, and allowed us to say: OK, now consumers of this SBOM can join it to vulnerability databases and guide themselves down their vulnerability management journey. And at this point, we kind of are done. A version of this, actually a 12 or 13 page version of this statement, became the requirements doc for SBOMs at Google in service of the EO. 
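As a toy illustration of that join, here's a hedged Python sketch. The purl strings follow the published Package URL spec, but the vulnerability index is invented for the example; a real one might be backed by a database such as OSV:

```python
# Using purls (Package URLs) as the join key between an SBOM's component
# list and a vulnerability dataset. The data here is made up.
sbom_components = [
    "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.0",
    "pkg:pypi/requests@2.28.1",
]

# A toy vulnerability index keyed by purl -- a real one would be a
# database such as OSV, also indexed by purl.
vuln_index = {
    "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.0": ["CVE-2021-44228"],
}

# The "join": because both sides use the same identifier scheme, this
# is a plain dictionary lookup rather than fuzzy name matching.
findings = {p: vuln_index.get(p, []) for p in sbom_components}
print(findings["pkg:maven/org.apache.logging.log4j/log4j-core@2.14.0"])
```

If the SBOM had said "Log4j" or "log4j 2.14" as free text instead, this lookup would silently miss; that's the whole argument for unambiguous identifiers.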
But even there, there are a few other considerations. I'm not going to go into great detail here; I'll talk a little about some of these mythical creatures later on in the talk. But you need to think about a number of other factors here, non-functional requirements for how these SBOMs are going to meet the world: how are you going to support them? What are they going to look like once they're out there? But I'm going to hand over to Brandon now to talk a little more about the technical details of our approach. Awesome, thanks, Isaac. Cool. So let's get into how we actually did this: what services and systems were involved. But before we go into the details, we looked at the problem and wanted to figure out the big questions we were going to ask that would eventually influence the designs we were making. The first thing we started with, along the lines of Isaac's thinking, was: what is the SBOM? What do we want from the SBOM, from a technical generation perspective? The thing we came up with, both for the EO and for vulnerability management, was accuracy and completeness: does my SBOM contain the right dependency information so I can make vulnerability decisions? The second property was something called trustworthiness, and this really boils down to the question: can I, in good faith, use this SBOM for important decisions, or in this case, give it to the government for compliance? Around these properties, we came up with a list of best practices and trade-offs for all the things we could do, and we'll highlight a few throughout this section. The second broad question we were asking, along the lines of what many people were asking when the EO came out, is: SPDX or CycloneDX, which standard should I use? But this led to a slightly broader question: how opinionated do we want to be? 
In this case, given that the scope, as Isaac has pointed out, is huge, and Google is huge and has many products, less is more. There are a lot of ecosystems, products, organizations, and tech stacks. Whenever we can settle on one common denominator, we get better shared tooling and fewer integration points. So, coming back to the question of SPDX or CycloneDX: in this case we said, you know what, based on the expertise and familiarity we have, and our analysis of the ecosystems of these two standards, SPDX is the standard we're going to go with, with no exceptions at all. There are many other such decisions we had to make, and we'll see more of them throughout the process. So, technically, what's involved in fulfilling the EO from an engineering perspective is: one, generation of SBOMs; two, storing the SBOMs; and three, whenever a request comes in from a compliance officer or a federal agency, being able to retrieve the SBOMs they want. So let's break this down into those three, and start with generation. In generation, I think we've seen a lot of different definitions of SBOMs: we have source SBOMs, build SBOMs, analysis SBOMs, and a document was produced sometime last year talking about the different types and how they're all important. But if we scope this down to dependency information, we can do a comparison across all of them. So we have source, build, and analysis SBOMs, where analysis SBOMs are about extracting SBOM information from artifacts. We took a look at this, and, starting on the far left: if we take source and try to create the SBOM from it, what we found is we got a lot of dependencies. 
And a lot of dependencies is good, in that it means it's complete, but not necessarily accurate: what we noticed was that things from tests and plugins, for example Java build plugins, were all included in the SBOMs. So this doesn't accurately reflect the information in the artifact. Then, if we go all the way to the other corner and take analysis SBOMs, where we're scanning the artifact, we find that the SBOM is incomplete. We showed this two years ago in a talk at QCon: if you built a Rust binary and then copied it into a container, you would see no dependency information for the Rust binary at all. The build process is inherently lossy, and therefore we run into the problem of incompleteness. So we're in a bit of a Goldilocks situation here: on one hand it's too inaccurate, and on the other hand it's too incomplete, right? And the only place where we really know how the software, or the sausage, is truly made is where the sausage is made, which in this case is the build process, and more specifically, the build tool. So, in the spirit of being as close to the build as possible, our first mandate was: all EO SBOMs can only be produced by a builder. Saying that only build processes can produce SBOMs was the first step. Having SBOM generation be the default in these builders went a really long way, and in the spirit of less is more: we had many, many products, but only maybe a handful of builders, and therefore it made sense. And number two: what did we end up with for generation tooling? Following the same philosophy, we wanted to focus on build tools as much as possible. 
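The accuracy/completeness trade-off above can be sketched with made-up dependency sets, treating the build-tool view as ground truth; every name here is hypothetical:

```python
# Illustrative comparison of source vs build vs analysis SBOMs.
# "build_deps" is treated as ground truth for what actually shipped.
source_deps   = {"libfoo", "libbar", "junit", "gradle-plugin"}  # over-complete: test/plugin deps leak in
build_deps    = {"libfoo", "libbar", "rust-lib"}                # what the build tool knows went in
analysis_deps = {"libfoo", "libbar"}                            # under-complete: statically linked rust-lib is invisible to the scanner

# Source SBOMs are complete but inaccurate: extra entries.
false_positives = source_deps - build_deps

# Analysis SBOMs are accurate but incomplete: missing entries.
missed = build_deps - analysis_deps
```

With these toy sets, `false_positives` holds the test and plugin dependencies that never shipped, and `missed` holds the opaque Rust library the artifact scanner can't see; the build-tool SBOM is the only view with neither problem.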
In certain cases, for example Android, where we had very specific constraints, we were able to say: Android has its own build setup, built mostly around the Gradle package manager, and so we created an SPDX Gradle plugin, which we then donated to the community, and this build tool generates very accurate and very complete SBOMs. Another case where this is possible: most of you may have heard of the google3 monorepo; that's where we keep a lot of code. In this repo there's a build system called Blaze, similar to Bazel, its open source version, where you declare your dependencies, you declare annotations, and you have a lot of metadata about your software. From this, we were able to just parse the metadata and generate accurate and complete SBOMs. But Google is still a tech company, and there's always a long tail of software with different use cases and specific needs: a long tail of ecosystems we may not fully understand, or may not have the right tools to deal with. For that long tail of software, we use generic composition analysis tooling; Syft is one such tool, and we also have an internal tool that does this as well. So with generation out of the way, next is storage. And storage is easy, right? The SBOMs, just put them in a database, a blob store. But then this wouldn't be an interesting talk. So let's talk a little about: if we had the SBOM database, what would we want out of it? The question we want to ask is: how do we create an SBOM database that we trust? We, and federal agencies, will end up using these SBOMs to make security remediation decisions. How do I know I can trust them? The same way we decide what food we want to eat: we depend on the labels that are on it, right? 
And in this case, in this picture here, provided by Isaac, there's this product on Amazon where God knows what's in it, because the ingredients label is a default template. To achieve this, for storage, we have a project called Silo, or Supply Chain Integrity Log. Contrary to the name, and it's a bit of irony, the purpose of the project is to break down metadata silos. This component is in charge of gathering together all the software metadata from all the events that happen in Google's supply chain, and making that metadata usable. For those familiar with the OpenSSF GUAC project: this project has some similarities, and it's also worked on by the same team. So let's look a little at Silo. Pre-EO, what did Silo do? Traditionally, when builders build an artifact, they generate something called SLSA provenance; SLSA is now an OpenSSF project. The TL;DR, if you're not familiar with SLSA, is that provenance is basically a document that says: hey, I'm a builder, I built this artifact securely, here's some additional information about the build, and here you go, signed off by the builder. What happens then is that Silo validates it and says: OK, great, this artifact with hash ABCD is good, and it was built by a trusted builder whose signature I verified. So that was Silo's purpose, day to day. When it came to SBOMs, we said, OK, let's try it out. First iteration: builders send the SBOM to Silo. In this case, it doesn't actually work very well, for multiple reasons. The biggest one, the first problem we ran into, is that SBOMs are not naturally very good at self-describing: an SBOM may not be able to accurately describe which package or piece of software it was generated for. 
This is especially so if you're using analysis SBOM tooling, because it's file-system based, so it doesn't have a logical view of what the software it's scanning actually is. It says: here are a bunch of files, here's what's in these files, here are all the dependencies, but I don't actually know what these files are for. Most of the time, the artifact analysis engine doesn't have the context to put that in the SBOM. So what we ended up with is an in-toto attestation. We created a custom predicate called a reference attestation, which allows us to say: here's the SBOM, here's where it's located, here's its hash, and this is for the software with hash ABCD. This is great, because now we can say: if I have an artifact, here's the SBOM for it. But we're still missing something: how do I know whether this SBOM was generated by the builder, or whether somebody just downloaded a bunch of files, ran Syft, and uploaded the result to Silo? This is where we do the same as with SLSA: we sign the attestation, and the signature is validated by Silo. This gives us two properties. One: the SBOMs hold their integrity; we know they haven't been tampered with since they left the builder. Two: we know all the SBOMs come from an approved builder, because they were signed by a builder key, and we know that for all the builders on our allowlist, we've vetted the SBOM generation process. That's more of an engineering exercise, where we talked to the different builder teams and made sure their tooling strategy was OK and they were doing the right things. Awesome. So with all the builders now sending their SBOMs and attestations, we get this really nice mapping: here's the URI of the artifact, here's the hash, and here's the path to the SBOM. 
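Here's a rough sketch of what such a reference attestation might look like, rendered as an in-toto-style statement in Python. The statement envelope fields (`_type`, `subject`, `predicateType`, `predicate`) follow the real in-toto attestation framework, but the predicate type URI, predicate field names, and all values are hypothetical; the actual internal predicate may differ:

```python
# Sketch of an in-toto attestation carrying a custom "reference"
# predicate that points at an externally stored SBOM. URIs, paths, and
# hashes are invented placeholders.
reference_attestation = {
    "_type": "https://in-toto.io/Statement/v1",
    "subject": [
        # The artifact the SBOM describes -- "this is for hash ABCD".
        {"name": "registry.example/app", "digest": {"sha256": "abcd..."}},
    ],
    "predicateType": "https://example.com/reference-attestation/v1",
    "predicate": {
        "artifactType": "SPDX_SBOM",
        "location": "gs://example-bucket/sboms/app.spdx.json",  # where the SBOM lives
        "digest": {"sha256": "1234..."},                        # hash of the SBOM file itself
    },
}
```

The statement itself is then wrapped in a signing envelope and signed with the builder's key, which is what lets Silo verify both properties described above: the SBOM's integrity (via the digest) and its origin (via the signature).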
Now, with this, retrieval should be simple as well, right? Famous last words; we also thought so. We thought it would be straightforward. So let's talk about it. We want to be able to say: I have a path to a container, or a hash, say some GKE container image ABCD. We want to say: look up this container ID, give me back an SPDX document. We ran this, and we got nothing. So, right: where are the SBOMs? They're sneaking off somewhere. Why is this the case? There are many cases, which we'll talk about in a bit, but they all stem from this idea: it is a supply *chain*, and what we were doing was just looking at one node of the graph. We need to look at it in terms of the graph. In this case, we look at the supply chain and notice that, oh look, the last step was actually image promotion: I took a staging image and copied it to production. It wasn't a build; well, it depends how you define "build", but it wasn't building a piece of software in the sense of compilation, so it didn't produce an SBOM. So we didn't have an SBOM for it. Let's follow the graph back, then. We go back one step and look for SBOMs: still no SBOMs. What's happening here? Let's go back further. And at this point we in fact find two SBOMs: here we see two staging SBOMs being built, one for each architecture. Then there's a subsequent step where they become a multi-architecture image, and the multi-architecture image then gets promoted. So this is what happens, and we can just keep on doing this, going back and back, and we may find, oh, there's a Rust binary or some other opaque binary built early in the chain, and we return all the SBOMs that we find. 
And this is exactly what we did. We implemented this, and thankfully the SBOMs appeared. All was well; we didn't do all that for nothing. And this is what we're using today: this is how we return SBOMs. We don't do anything fancy with composing them; we just zip them up and say, here are all the SBOMs that you need. But what were some of these issues we ran into? A lot of it involved repackaging, re-tagging, and signing. Our best-practice takeaway here: compose SBOMs to obtain more complete SBOMs. We find that each ecosystem generally knows what it's doing: Python knows what it's doing; npm, when it's building something, knows what it's doing. Where a lot of the incompleteness comes from is packaging, where you're going across ecosystems. The same way we don't expect the mailman to know what's in our Amazon package, we can't expect a Docker build to know what's in a Rust binary. Awesome. So all this work was great, but I skimmed over a pretty big detail, which is: what's this graph we're traversing? How did we get it? Where's this magic coming from? How do I get one of these? For that, we can go back to the build. As a quick recap, we saw this slide earlier, which is what Silo was doing pre-EO: builders generate SLSA provenance. And part of what's attested in the SLSA provenance is what was used to build the artifact: the materials that went into it. So we have both the inputs and outputs of the build, and what we can do is take the SLSA provenances and chain them up, the output of one being the input of the next, and construct the graph of the supply chain. We do all this by hashes, by source repositories, and so on. 
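The whole backwards traversal described above can be sketched in a few lines of Python. Each provenance record is reduced to an output-to-inputs mapping, and the artifact names, structure, and SBOM paths are all invented for illustration:

```python
# Sketch: walking a provenance-derived supply-chain graph backwards
# from a production artifact, collecting every SBOM found on the way.
# In reality nodes would be content hashes, not friendly names.
provenance = {  # output artifact -> input artifacts (the build "materials")
    "prod-img": ["multiarch-img"],                  # promotion step: no SBOM here
    "multiarch-img": ["stage-amd64", "stage-arm64"],  # assembly step: no SBOM here
    "stage-amd64": [],
    "stage-arm64": [],
}
sboms = {  # artifact -> path of the builder-produced SBOM, if any
    "stage-amd64": "sboms/amd64.spdx.json",
    "stage-arm64": "sboms/arm64.spdx.json",
}

def collect_sboms(artifact):
    """Depth-first walk from an artifact back through its inputs,
    gathering every SBOM encountered along the chain."""
    found, stack, seen = [], [artifact], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if node in sboms:
            found.append(sboms[node])
        stack.extend(provenance.get(node, []))
    return sorted(found)

print(collect_sboms("prod-img"))  # ['sboms/amd64.spdx.json', 'sboms/arm64.spdx.json']
```

Looking up `prod-img` directly in `sboms` returns nothing, which is exactly the "we ran this and got nothing" moment; walking the graph recovers both per-architecture SBOMs, which are then zipped up and returned together.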
So that's how we do the graph composition analysis. If you want to read about it in more detail, there's also a blog post Isaac and I wrote a couple of years back about how to do this, covering the high-level concept. Great. And last but not least, just to wrap up this retrieval section: everything I've talked about so far assumes you have a URI or a hash. But as we were doing dry runs of the EO with compliance, we noticed that the requests may look a little different. They may ask something more like: give me the SBOMs for Pixel OS. They're not going to say: give me the SBOMs for some SHA-256 hash, because their understanding comes from their use of the software. What we found is that this translation is really hard, and this is where we lean a lot on the product owners themselves to have a sense of how to translate these requests. But one thing we found to be very effective, which successful teams did, is keeping an inventory of the products they had and the product-to-artifact mapping. So, to sum up: in generation, always use builders and build tools when possible; otherwise, use composition analysis; and generate SBOMs at all steps. In storage, ensure SBOMs are attested and signed; something like Silo is a good thing to have to store them; and use good software identifiers, like URIs and artifact hashes. And lastly, in retrieval, compose SBOMs wherever you can, using something like Silo, and have product teams maintain some type of inventory mapping for their products. Awesome. And now the fun section. All right, thanks, Brandon. Brandon and I, having spent a few years looking at this and working together on it, could talk pretty much forever about SBOMs, and if you're looking for more, come and find us afterwards; I'm happy to take more questions and follow up. 
We've got a few things we wanted to highlight. I'm going to start with a couple of things that stood out to me, Brandon has a couple of observations, and then we'll close. The first one, which I've thought hard about and come to some realization on, is that I don't think SBOMs themselves are the goal of the executive order's SBOM requirement. People ask: hey, one day, when the executive order deadline kicks in and everyone's producing SBOMs and giving them to the federal government, they're going to wake up to three-quarters of a million SBOMs in their inbox, wonder what to do, and probably just take them and put them in a filing cabinet somewhere, at which point you may ask: well, what good was the executive order SBOM requirement? I think there are a couple of things that stand out to me. Number one: getting something like SBOMs off the ground in an industry is a really difficult chicken-and-egg problem. There's a cold-start problem here: no one wants to produce SBOMs if no one's going to bother consuming them, and no one is going to build great consumption tools if no one's producing them. A great way to solve problems like this is with regulation, where the government comes in and says: OK, we're going to use the purchasing power of the federal government, and those words are used literally verbatim in the executive order, the purchasing power of the federal government, to catalyze the industry to start producing SBOMs, to break this deadlock. The government says: we now require SBOMs. That means suppliers to the government are going to need to build them, and that means suppliers to those suppliers are going to need to build them, and so on, transitively, up the chain. 
And so it's a catalysis motion to get SBOMs operationalized in the industry, and that's independent of whether the federal government actually gets individual utility from the SBOMs themselves. That's number one on this point.

Number two is a discovery we've made along the way: it is rather difficult to produce an SBOM unless you have a certain baseline of operational hygiene and operational discipline in your software supply chain. So again, this is a way of forcing that into the industry, of saying: this is now the new acceptable baseline. You need to at least know what the ingredients are in this software. That's a place to start. I really see this as the starting point of an entire decade-long journey. If I think back to the US food supply chain, the equivalent of this regulation came in 1938 in the United States; that was the legislation that brought in the mandate for ingredient labels on packaged foods. And it took about 20 years following that regulation to get 50% of the US food supply chain into compliance. This is a long process. So I think the executive order SBOM requirement is a catalysis motion, and a requirement for people to start paying attention to this domain.

The other observation I'd share, and this is a little bit spicy, is that I think SBOMs are a poor fit for SaaS products. Why do I think that? Well, if you're in the world of packaged software, you've got that floppy disk; you look at the bits and bytes, you inventory them, and you say, here are the ingredients of this thing I just gave you. And when you run and operate that software, you know the totality of the components you're exposed to. How do you solve that problem for SaaS? Let's imagine that I go to docs.google.com, Google Docs on the web.
What is the totality of computing infrastructure I'm now exposed to in using that software? Well, gosh: obviously there's the Google front end which serves the web, there's all the JavaScript, there's the Google identity system which has seamlessly logged me in, there's Google Drive which is used for storage below Google Docs, and there's Google infrastructure like Spanner which is used for storing the data behind this thing. There's probably all of Borg, Google's compute fabric. And because you're interested in transitive dependencies and your total dependency footprint, you pull on a thread, and pull on a thread, and before long you've got the entire internet in your lap and you're really stuck.

So I think there's a real question about whether SBOMs are a good fit for SaaS. I don't think they're there yet. I think we need more specificity about how to reason about this transitive dependency footprint in a risk-centric way. How do we think about the risks involved? And there's a question about agency: who's operating the software, who's responsible for vulnerability management, and so on. So anyway, spicy take; there's more work to come. CISA itself is going to be putting out a white paper on this topic in the coming weeks and months, and I think there's much more work to do here.

Yeah, just one last quick one: like Isaac said, SBOMs are just the beginning. We found a lot of use cases internally that people are interested in with SBOMs, and it provides a unique opportunity, because we've gathered all this information about different products in one place.
An example of that: we use GUAC to do some analysis. We took all the container SBOMs that we had and said, okay, give me the OpenSSF Scorecard score of all the third-party upstream dependencies. Now we have a mapping of which are the most frequently used third-party packages which, in the top left-hand quadrant, also have a really bad OpenSSF Scorecard score, and then we can take action and say, okay, let's make more efficient security investments to fix the things which are used most widely. Awesome. And that's it. Thank you so much.
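As a closing aside, the prioritization Brandon describes could be sketched like this. This is not the actual GUAC query; the package names, usage counts, and scores below are made-up stand-ins for data that would, in practice, come from aggregated container SBOMs and the OpenSSF Scorecard.

```python
# Hypothetical data: how often each third-party package appears across
# our container SBOMs, and its OpenSSF Scorecard score (0-10, higher
# is better). In practice both would come from a tool like GUAC.
usage_counts = {"pkg-a": 120, "pkg-b": 45, "pkg-c": 200}
scorecard = {"pkg-a": 3.1, "pkg-b": 8.7, "pkg-c": 2.4}


def investment_priorities(threshold: float = 5.0) -> list[str]:
    """Widely used packages with low Scorecard scores come first:
    these are the 'top left-hand quadrant' candidates for security
    investment."""
    risky = [p for p in usage_counts if scorecard.get(p, 0.0) < threshold]
    return sorted(risky, key=lambda p: usage_counts[p], reverse=True)
```

With the sample data, `pkg-c` ranks first (heavily used, very low score) and `pkg-b` drops out entirely (high score), which is exactly the triage the quadrant view is meant to produce.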