Well, it's 11 o'clock and I like to start on time. This is the legacy of all of us being on Zoom for our entire lives, I think. But thank you all for coming. And I guess this is the beginning of the supply chain security track, at least for the day, here at the Open Source Summit. How many of you were at OpenSSF Day yesterday? Okay, about half of you. Good. And how many of you write code for a living? I assume all of you do, right? Okay, yeah, good. So this is a talk about trying to figure out how we make the developer tools we use today more secure by default. It's a talk we plan to evolve quite a bit, and that we really want to fill with more detailed examples over time. It's meant to be a talk that any of you could give if you'd like to, because I think this is really going to be a big focus for the OpenSSF community going forward. I'm Brian Behlendorf. I'm the CTO for the OpenSSF. I need to practice saying that, because that was only announced yesterday; I've handed the general manager baton off to Omkar Arasaratnam, whom many of you have perhaps met. So I'm going to be diving into more and more of these details over time. But I also know I'm probably speaking to a room where not only could all of you give a talk like this, you could probably supply some of the specific examples. So look at this the way you might look at an early-stage Wikipedia page; maybe I should put it up on GitHub or something. I don't need to tell you all that the world is indeed on fire when it comes to security. In fact, there have been some famous conflagrations: Log4j, yada yada yada, all of that. I don't need to scare you with the current state of things.
I think many of you also know that it's not just the big-bang vulnerabilities that are out there; a tremendous number of common, simple bugs continue to be pervasive across the open source ecosystem. Jonathan Leitschuh will probably kill me for giving the Trellix example rather than one of his own campaigns, but they just have such good numbers. Trellix is a company that scanned for an old bug across all the open source repositories they could reach on GitHub. This bug has existed since 2007. It's a bug in the Python module for expanding tar files: if you extract an archive as root, a malicious member path can overwrite /etc/passwd and other files like that. This is behavior the Python developers decided not to fix, because it's POSIX-compliant to do that very thing. They said: you have to be a secure user of this code, you have to know how you're invoking it, and it's up to the person invoking this library to keep it from doing bad things. That's kind of lame. So Trellix did a scan to see how many software packages out there actually don't. Hey, Jonathan, I'm giving the Trellix scan example. Nice duck on your head. Their scan found roughly 61,000 open source projects vulnerable to this insecure usage of the Python module. And they didn't just file issues saying "this is a bug in your code, you should go fix it." They submitted pull requests to fix it, because they could automate the process, like Jonathan has done with his own research: not just detecting the vulnerability, but generating the pull request that closes it. And it's a really simple fix. They submitted tens of thousands of pull requests against all these different repos, and only a couple of thousand of them got a response. The rest did not, and are still vulnerable.
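To make the tarfile issue just described concrete, here is a minimal sketch of the mitigation in Python. The `safe_extract` helper is my own illustrative name, not Trellix's actual patch; newer Python versions also offer `extractall(filter="data")`, but this shows the underlying path check.

```python
import os
import tarfile

def safe_extract(tar: tarfile.TarFile, dest: str) -> None:
    """Refuse to extract members that would escape the destination directory.

    A minimal sketch of the mitigation for the tarfile path-traversal bug
    (CVE-2007-4559): reject any member whose resolved path lands outside dest.
    """
    dest_root = os.path.realpath(dest)
    for member in tar.getmembers():
        target = os.path.realpath(os.path.join(dest, member.name))
        # commonpath rejects "../" members and absolute paths alike
        if os.path.commonpath([dest_root, target]) != dest_root:
            raise ValueError(f"blocked path traversal: {member.name!r}")
    tar.extractall(dest)
```

A plain `tar.extractall(dest)` on an archive containing a member named `../evil.txt` would happily write outside `dest`; the check above raises instead.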
Now, a lot of those will be one-off weekend projects, student projects, things that aren't maintained anyway, and it's hard to calibrate for that. But the fact remains that across the open source landscape there are a lot of these low-level bugs, in addition to the deeper issues we all know about. And again, I don't need to scare you with all the news around software supply chain attacks. We had a great presentation yesterday at OpenSSF Day by Elizabeth Wyss about some of the typosquatting and clone-package attacks emerging in the npm ecosystem, which they're trying to address, and I'll talk about one way they're addressing it in just a bit. These things are becoming more and more pervasive. And that is because open source software, and its tooling, was largely devised at a time of extremely high trust: high trust in the link between the developers of code and the packages they would put up for distribution. Generally speaking, we just didn't develop the rigor and the kind of paranoia that frankly we should have. It reminds me a lot of the early days of the Internet, when there was no TLS, when some people used PGP, but those were the weird ones, the paranoid ones. The rest of us sent unencrypted email over bare SMTP like real people. And now we have to worry about that kind of thing; now we have to encrypt our connections in the software development world too. It used to be the case, and I'm an old FreeBSD user, anyone else here used FreeBSD? Okay, great. FreeBSD had this great system where you could always recompile the OS, "make world," rebuilding all of your packages and all of your dependencies from source locally on the machine. To install a new package, you could go get the binary package.
But, you know, what real people did is pull down the source: go to /usr/ports, cd into the right directory, type make and then make install, and suddenly it was there. And you knew that all the dependencies were at least GPG-verified and it was probably going to be secure enough, right? But these days everybody builds from pre-existing binaries. Everyone bootstraps from a system image, a container image, a collection of pre-compiled modules pulled off of Maven Central, because nobody wants to sit around for 12 hours while everything recompiles. I understand that. I get that. But doing that introduced the prospect of a whole lot of vulnerabilities that exploit the default assumption we make that the world is safe underneath our feet. Another big change, of course, is that the world of packages is incredibly heterogeneous. The software within any given registry varies tremendously, in ways that are challenging to assess: between packages that are well maintained and frequently released, with releases signed off by other developers, and packages maintained by a single person who wrote something and threw it out there. Without any real marketing or effort or awareness, those packages ended up getting used thousands of times, pulled down hundreds of thousands of times a day. That's how you end up with packages like colors.js or left-pad (thank you) and others that became drama-filled stories: a developer felt they were being taken advantage of, and maybe they were. They were certainly overlooked when people were thinking about where to invest, and they got pissed and made a change that then broke a bunch of websites. Like, that's a supply chain attack.
But we all might have sympathy for the developer. That's fine. But some of you probably had your builds broken in a way you didn't expect, right? So the world of packages here is pretty frustrating. We can try to be better developers. We can try to be smarter. We can try to go look at the social dynamics behind every package we include. But that's not scalable. It was okay at the level of the Apache web server, which included a few libraries and the standard lib and everything like that, but it doesn't scale to applications consuming thousands of dependencies, like some do out there today. So by embedding security awareness into developer tooling, we have a chance at fighting that fight. We need to make it easier to write secure software by default. There are a lot of principles for how to do that, and in fact we have some training at the OpenSSF called Secure Software Development Fundamentals. It's about 15 to 20 hours of training, and it covers the basics: here are common patterns that tend to lead to CVEs, things like don't trust user-contributed input, and if you do, don't parse it for format strings, which is one of the things that led to Log4Shell. But doing that is hard. And while I certainly recommend all of you take that course and certify against it, it's not something you can really require every open source developer to do, right? So: are there things we can bake into the tools to make them more effective in this way? Are there ways we can reduce the burden on maintainers? You've all heard the claim, going back to the beginning of open source, that open source works because everyone is scratching their own itch. Well, we know that's not exactly true, right? Many developers are involved in open source because they're solving a problem. They see a bug.
They fix that bug. They contribute the fix upstream, and it's fixed for everybody. That's great. They implement a feature; everyone can benefit. They work with other people on a new architecture or some refactoring; that's all great. But there's a lot of work involved in reducing risk in software that is really hard to justify to the higher-ups, really hard to get priority time for. You could call it paying off technical debt, but it's proactive security reviews of code, it's triaging bugs; it's the kind of work that has tended to fall on maintainers, and maintainers have limited time to spend on it. Also, by getting into the tooling, we can collectively invest in the security of open source communities and the ecosystem as a whole. If, through the tooling, we can better automate the adoption of certain practices, then I think we're all better off. Many of you will know about some of these initiatives, so I offer them humbly as pointers to more. And if you haven't heard about them, getting involved, understanding them, and looking for places to weave them into your tools could be interesting. The first is Sigstore, which went generally available back in September. Sigstore is about signing artifacts through the software supply chain: attaching a signature, with those signatures based on keys that are short-lived and ephemeral. This is very different from the GPG model of signing artifacts. It's not about maintaining a long-term public/private key pair as a developer, attached to your email address, that you have to roll over if it ever gets inadvertently leaked. Just like Let's Encrypt works because you get a 90-day certificate in an automated way, Sigstore issues short-lived certificates automatically, as a kind of lower trust threshold.
Sigstore has a component that issues short-lived keys, and when you sign something, it stores the signature in a transparency log, a distributed, widely available record that people can consult to verify the integrity of those signatures and know that the build is traceable back to the developers who published that code. This solves a problem for registries where there's not necessarily a provable, verifiable link between the package sitting in the registry and the upstream GitHub repo where that code was developed. It has been widely deployed and adopted now in the cloud native ecosystem, and we're seeing a lot of other ecosystems start to pick it up. A complementary technology, which is a little bit more of a spec (to be fair, Sigstore is a specification, a collection of software, and a service, both the key issuer and the signature record), is SLSA. SLSA is more specifically a spec, although there is some tooling now that supports it. SLSA stands for Supply-chain Levels for Software Artifacts, and it's a framework for understanding the attestations about the build environment for each of the components as they were assembled through the supply chain. It's also a way to meet what are emerging as regulations, not just in regulated industries like finance and healthcare but, potentially partly as a result of the reaction to Log4Shell, things we might start to see across the whole software industry: requirements you have to hit to demonstrate due diligence in building your code in a secure way. Now, why does the build matter? How many of you remember the Ken Thompson paper, "Reflections on Trusting Trust," from 1984? It demonstrated that you could write code all day long that had zero bugs and zero security defects.
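To make the SLSA idea concrete, here is a minimal sketch, in Python, of the kind of in-toto attestation statement that SLSA provenance is expressed in: a subject (the artifact and its digest) plus a predicate describing who built it and from what source. The field values and helper name are made up for illustration; real provenance is generated and signed by the build platform itself, and carries far more detail.

```python
import hashlib

def provenance_statement(artifact_path: str, builder_id: str, source_uri: str) -> dict:
    """Build a simplified in-toto Statement carrying SLSA-style provenance.

    The layout follows the in-toto/SLSA v1 shape, but this is an
    illustrative sketch, not a complete or signed attestation.
    """
    with open(artifact_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{"name": artifact_path, "digest": {"sha256": digest}}],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {
            "buildDefinition": {"externalParameters": {"source": source_uri}},
            "runDetails": {"builder": {"id": builder_id}},
        },
    }
```

A verifier would check a signature over a statement like this (for example via Sigstore) and compare the subject digest against the artifact it actually downloaded.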
But if somebody could compromise your compiler, and he demonstrated this by building a C compiler implementation that could add a backdoor even to code that didn't contain that backdoor, then clean source isn't enough. You not only have to trust the code, you have to trust the build tools and the build environment and basically every input that goes into building that release, to get not only reproducibility but a guarantee of integrity. So we have to think a lot more carefully than just firing off builds on random build servers with GitHub Actions, right? If we really want higher integrity and guarantees around the binaries we're consuming as we build this code. SLSA is intended to address that. The SLSA specification went 1.0 two weeks ago, and that is the result of a ton of work by people in the OpenSSF community. The SLSA tooling, I believe, tracks the standard pretty closely; someone correct me if I'm wrong. And the goal is to use the tooling to get the standard adopted as widely as possible. One proof point: last week, I believe, or the week before, GitHub announced the addition of provenance to npm. What this means, and it does require those modules to opt in right now, which is an important point I'll come back to, is that you can now pass npm the --provenance flag to get a record showing the traceability of each of the modules used in that build back to the source code and the developers who wrote them. This is a really cool thing to see: it pulls together Sigstore and SLSA. It doesn't touch on SBOMs yet, I don't think, but GitHub has also separately enabled generation of SBOMs, again as an opt-in thing. And this is great, because the npm community has suffered from exactly the kinds of attacks this tooling is designed to address.
Now, this tooling is only as good as the number of projects that enable it for their modules, so that the packages that include them downstream can benefit from that traceability, and only as good as the pull that comes from consumers at the end of the chain. It's also something we don't want to rush toward too quickly. SLSA hitting 1.0 and Sigstore going GA are a nice demarcation point: this technology is mature enough for organizations to start pulling in. But, you know, we're starting to see some regulators out there demanding SBOMs and demanding traceability a bit before some of the technology is ready. So it's great to see this in beta form; we have to make sure the communities working with this code really want it, want to pull it in, and have high confidence in it. What we'd like to see in the long term is more examples of this: the Python community, the Rust community, even the Java community through Maven Central and others, starting to tie these different technologies together in a way that can provide provable provenance for code. And along with it, bring information about known vulnerabilities, updated versions, and other things that would help not only developers understand when they're using code that might be risky, but also the folks managing the build environment and managing risk, to get a picture of what we're really using. The overarching picture is that across the different stages of the life cycle, source, build, dependencies, packaging, and consumption, there are opportunities to insert tooling at each stage and have a positive impact on the trustworthiness of that code.
And again, this is not delivered as gospel; this is delivered as a starting point, and I would love to see more ideas. I'm trying not to name specific technologies and instead describe this somewhat generically, but I think the key will be getting to specifics and getting this adopted out there in the toolchains of the world. I won't read through each of these, because they do get into some depth, but there are opportunities for intervention at each one of these steps. Now, we have a working group inside the OpenSSF called the Security Tooling Working Group. So far they've been quite focused on the SBOM question: how do we drive greater adoption of software bills of materials for some useful purpose? Inventory tracking is a big one, because people didn't know where they were vulnerable to Log4Shell when it hit, and they didn't know when they were done, because they didn't have a metadata format that let them easily understand where Log4j was deployed across their systems. In many cases organizations bought SCA tools to do forensics, to try to understand what's installed where, but forensics is the right term for it: it's like looking for the killer after the bodies have decomposed. Having that metadata would be a much more formal way to track it. But there are a lot of other security-related use cases for SBOMs in the supply chain, among them tracking vulnerable dependencies and outstanding CVEs that have not been addressed. That's a pretty dynamic data set, so it's less metadata embedded in the package and more something attached to it as it moves through the system.
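As a sketch of that "where are we running Log4j?" use case: given an SBOM whose layout mirrors the CycloneDX top-level `components` list (simplified here, and with an illustrative advisory table rather than a real vulnerability feed), a few lines of Python can flag known-vulnerable components in an inventory.

```python
# Illustrative advisory table; a real system would query a feed such as OSV.
KNOWN_BAD = {
    ("log4j-core", "2.14.1"): "CVE-2021-44228 (Log4Shell)",
}

def flag_vulnerable(sbom: dict) -> list[str]:
    """Return one human-readable finding per SBOM component that matches
    a known advisory. The SBOM shape is a simplified CycloneDX sketch."""
    findings = []
    for comp in sbom.get("components", []):
        key = (comp["name"], comp["version"])
        if key in KNOWN_BAD:
            findings.append(f"{comp['name']} {comp['version']}: {KNOWN_BAD[key]}")
    return findings

sbom = {"components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "commons-text", "version": "1.10.0"},
]}
print(flag_vulnerable(sbom))  # → ["log4j-core 2.14.1: CVE-2021-44228 (Log4Shell)"]
```

The point is not the lookup itself but that, with SBOMs attached to every deployed artifact, this loop runs over your whole estate in seconds instead of weeks of forensics.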
There are others looking at use cases around capabilities: here's a package, and this package should never open a network connection to anything other than a known-good address, as a way to audit the infrastructure and attach that to network monitoring tools and the like. That could be pretty powerful. So, lots of good ideas, but until we have a ubiquitous metadata structure for carrying this kind of information through the chain, it's really hard to get there. The Security Tooling Working Group has been talking quite a bit about that, which we've categorized as "SBOM Everywhere"; there's a SIG focused on it, embedded inside the working group. They also developed a guide to security tools, which is really where many of these ideas came from, and which is worth diving into when thinking about ways to weave these into your development processes. They've developed the OpenSSF CVE Benchmark, which is code and metadata for over 200 real-life CVEs, along with tooling to analyze them using a variety of static analysis tools, so you can use those as templates for finding similar bugs throughout the rest of your code. It's also where we've focused some of the fuzzing work that's gone on in the OpenSSF, including the Fuzz Introspector tool that Google open-sourced through us, to help drive fuzzing as a more standard part of the software development process. In fact, fuzz testing is also one of the things the Security Scorecard looks for: if you include those tools and show you're making reasonable use of them, your score on the Scorecard improves. And even if you've implemented everything in a memory-safe language, there's still the possibility, Jonathan, I'm going to try to confirm this with you, that fuzz tooling still makes sense for something like Rust and Go.
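As a toy illustration of why fuzzing pays off even without memory corruption: a random-input loop like the one below (a deliberately naive sketch; real fuzzers such as libFuzzer, AFL, or Python's Atheris are coverage-guided and far smarter) can still surface plain logic bugs, which exist in Rust and Go code just as much as anywhere else.

```python
import random

def parse_version(s: str) -> tuple[int, int]:
    """A deliberately buggy parser: crashes on inputs without exactly one dot."""
    major, minor = s.split(".")  # ValueError on "", "1", or "1.2.3"
    return int(major), int(minor)

def fuzz(target, tries: int = 10_000, seed: int = 0):
    """Naive fuzzer sketch: throw random short strings at the target and
    return the first input that raises an exception, else None."""
    rng = random.Random(seed)
    alphabet = "0123456789."
    for _ in range(tries):
        candidate = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 6)))
        try:
            target(candidate)
        except Exception:
            return candidate  # a crashing input is a free bug report
    return None
```

Running `fuzz(parse_version)` finds a crashing input almost immediately; in a memory-safe language the crash is a clean exception or panic rather than corruption, but it is still a denial-of-service or correctness bug worth fixing.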
Jonathan says he's not a fuzzer person, so he's not the right person to ask. Okay, well, somebody correct me if I'm wrong on that. But I think even with memory-safe languages, there's the possibility that you've used memory incorrectly in a way that fuzz testing might surface. I'd also be remiss if I didn't mention that in the Alpha-Omega project, where, by the way, Jonathan works, as does Yesenia sitting over here, one of the things we've developed is the analysis toolchain: a toolkit for identifying vulnerabilities in critical open source projects. It's automated; it's something you could plug into a development workflow. And it's the beginning of what we hope is a framework for a more rigorous interrogation of code. It's not quite fair to call it a static analysis tool; "analysis toolchain" really is the better term to describe it. The aim is to get that kind of analysis to be something open source projects do more often and in a more systematic way. So do check that out as well. As for where things head in the future, this is a little more blue-sky stuff, but I do want to talk about a frame of reference I have in my mind for how these kinds of tools get adopted out there, which is to compare this to the way TLS was adopted across the web ecosystem over the last 20 years. It has to start with carrots. When the web was unencrypted, when email was unencrypted, all that kind of stuff, there was really no incentive for people to go the extra mile, unless you were a bank trying to convince people they could open an account and you'd keep their details secure. To consumers, the web was just the web, neither secure nor insecure.
It was when browser makers realized they should add a green highlight to the URL bar for sites using TLS, so they could reassure consumers and confer some extra benefit, that there was a carrot for using TLS in the early days. That carrot was the projection of increased trust to the consumer. But at a certain point, if your goal is to get the web to become entirely encrypted, you need to move beyond offering incentives that treat encryption as the exceptional case, and make it the default, embedded inside the standards. There I would point to Let's Encrypt: once Let's Encrypt automated the process, so that when you installed a web server, any web server, it automatically went and got a 90-day certificate and renewed it on a regular basis, essentially every new website from that point forward was secure, and it moved from being the exception to being the rule. But there's still that final stage: the websites set up prior to Let's Encrypt, or the folks who were just slow to adopt it. And if the unencrypted web still represents a threat out there, then you really want to deploy a set of sticks. My example there is how, in many web browsers today, if you try to access an unencrypted website you'll get a bit of a warning. The URL bar might turn red; different browsers take different approaches. They'll still let you access it, it's not that it's been banned, but there's a fair bit of friction and a fair bit of warning. And I think when we think about how to get security tooling adopted across the developer landscape, nobody proactively wants to go out and take on additional steps and additional burdens.
Not if it slows them down from fixing the bugs and adding the features they need to finish in order to go home for the day. Right? Security "should" be somebody else's problem; it should be something we pay other people to do. Well, getting it not just into the tooling but adopted as a cultural norm depends on those three phases, and, by the way, on not skipping ahead to sticks, which is where we worry some of the regulations are heading today. Some regulations, especially in Europe, are starting to call for publishing SBOMs, publishing your provenance, even getting third-party audits for that kind of thing, which is interesting and cool, but by being pushed too quickly to sticks they could drive a lot of bad technology and a lot of bad adoption out there. So, thinking about that kind of approach, one thing we think we're ready for at the OpenSSF is to dive a little more into this concept of a "sterling toolchain," which is kind of a misnomer ("meta-toolchain" might just be too meta for some people), because it's really not about one true way to build software, or one true system for building software, but instead about saying: here's a collection of components, standards, and processes that have tended to lead to more secure software by default. We have a lot of the pieces out there, and some of the glue has really emerged for that; I think SBOMs are a form of glue for these processes. You could point to Sigstore and SLSA as examples, but there are a bunch of others that need to come through. Even something like the Security Scorecard could be worked in early in the dev process: if a developer is trying to include a dependency that scores pretty poorly on the Scorecard, you might want to flash yellow or red at them, right?
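As a sketch of that "flash yellow or red" idea: the OpenSSF Scorecard publishes a 0-10 aggregate score per project, and a dependency-adding step could classify a candidate before it lands. The function name and thresholds below are made up for illustration; a real policy would be set per organization, and the score would come from the Scorecard API or from `scorecard --repo=...` output.

```python
def classify_dependency(score: float) -> str:
    """Map an OpenSSF Scorecard aggregate score (0-10) to a traffic-light
    warning level. Thresholds are illustrative, not an official policy."""
    if score < 3.0:
        return "red"      # block, or require an explicit override
    if score < 6.0:
        return "yellow"   # warn the developer, log for later review
    return "green"        # allow silently

print(classify_dependency(2.1))  # → red
```

Wired into a package manager or CI step, this is exactly the kind of default-on friction the TLS analogy points at: the secure path stays frictionless, the risky one gets a visible warning.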
So can we pull these different pieces together into something that looks like a model for how we would want the other language ecosystems to adopt these tools? Right down to: don't just use SBOMs, use this particular profile of SBOMs, with this particular data set, tied to Sigstore. Get as specific as possible so you can get as simple as possible. Try to avoid the temptation to support five different ways to do the same thing, which is something we've cherished in open source ever since Perl, with its motto that there's more than one way to do it; we love having yet another way to do something in open source. But this is a little more like: can we come together on something cohesive, minimally viable, and as simple as possible, to drive better security not just through the upstream language ecosystems and open source, but also downstream into the companies that need that kind of traceability, even into their own internal apps or into the proprietary software they go off and ship. Something like 70 to 90% of the code in a typical software product, whether it's inside a car, a phone, or an enterprise server, is pre-existing open source code. And having that traceability upstream, and carrying it through even as you go downstream to deploy into your production servers, is the kind of thing we need. If every language ecosystem adopts a different way of doing it, all of these heterogeneous end-user systems we build are going to be incredibly confused. So working together across these language ecosystems to adopt common standards and approaches is really what this is about: ending up with more trustworthy software through the chain, and especially at the end of the chain, benefiting all of us.
There are some other areas that I think are complementary to this and worth highlighting, as both risks and opportunities. And I would be remiss, giving a presentation in the second quarter of 2023, if I didn't mention AI or ChatGPT somewhere, though I did purge it from this deck. I think there's an opportunity to use AI tooling in the supply chain to get smarter about potential risks, and potentially in the scanning of code as well. I'm kind of skeptical about that, because AI tends to produce good-enough answers, whereas security holes in particular tend to be about specific, off-by-one kinds of semantic errors, right? So we'll see; there are some projects out there that have reported good results from applying machine learning to software analysis. I'm also really worried about something else. Look, so many of our processes in the open source world are social processes. Who we grant access to become a maintainer on our open source project is often based on building trust over time with somebody you think is human, somebody on the other side of a GitHub ID, where you've been trained not to care where they come from, and I think that's good. You've been trained not to care about their credentials; you might care whether they're on other open source projects, that's a good thing, but not whether they're a PhD or a college student or a 16-year-old. That's fine. But we've set up these high-trust processes, and there's some vulnerability that could come from using large language models to attack those systems. And I'd love to believe there's a counter-use of AI to shield us against, I don't know what you'd call it, there's got to be a new term; it's like catfishing, but for software.
You know where an attacker would try to create fake profiles to go and gain the trust to key projects get themselves in as maintainers and then slip in vulnerabilities that way. And I'd like to figure out how we might use those kinds of tools to counter those new kinds of risks. And then secondly so much of the what we're dealing with here is in the form of text mode kinds of tools for verifying signatures for getting getting here's your JSON file with your long provenance in it right. But that's going to be really hard to build into and you know smart systems and there are a bunch of vendors building interesting dashboards to try to visualize the software development process. There's a talk by a Nova. She was the visual visualizer yesterday on the panel at open SSF starting at 1155 today where she's going to talk about some work she's done trying to visualize the software development lifecycle and security issues in that. So that's just after this I'm thinking this room it might be in the next room over but we've kind of done generally a poor job of visualizing where that risk is in the supply chain and installed system. I want to make it so that with all of the both objective and subjective data that we can collect about the state of the chain. We can find the next log for Jay right because really those developers they were professional developers you know they were using log for Jay for production systems in their own environments. They were under resource but I've never met an open source project that I said no no we have too many resources too many developers right they had but they were also responsible for a tremendous amount of code. Including a whole stretch that had been donated to them years earlier and not really maintained to do things like substitute variables in the real real real time processing of log files that you could argue probably didn't shouldn't have even been. 
something you'd have in Log4j at all; that's a post-processing kind of thing. Regardless, what we all failed at, what we all did, was take it for granted. We all took Log4j for granted, along with the idea that security was somebody else's problem. If we'd had a dashboard that showed us there has never been a third-party code review of Log4j, or that there are substantial stretches of code without test coverage, or other indicators of community health or process health or other things. Maybe we won't, probably won't, be able to find the next one that's going to be the problem, but if we could find the next hundred collectively, we could demonstrate to our pointy-haired bosses: here's a risk area, and if we all collectively chipped in to fund a third-party audit, we'd bring it from a yellow to a green, and suddenly your enterprise risk scores would go down and we'd all be heroes, right? That's what we could get through better dashboarding: better collective sense-making, and a better opportunity to identify these forgotten, under-resourced projects that, through no fault of their own, could inadvertently become the next source of a major breach. That's where I'd love to see us head when it comes to tooling. But this is just the slide where I say there are still not enough eyeballs in open source. That general statement, that with enough eyeballs all bugs are shallow: if that was ever true, maybe for a few projects out there it's true that they have enough eyeballs per line of code, which is the qualifier there, per line of code. But we simply don't, and having tooling that can help us augment what has been a very social process to date would be incredibly helpful. So if any of this resonates with you all, like I said, I want to be humble: this doc is a starting point.
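To make the dashboard idea concrete, here is a minimal sketch of how a few of the health indicators mentioned above (a third-party audit, test coverage, maintainer count) might be folded into a coarse red/yellow/green rating. Every field name, threshold, and weight here is invented for illustration; a real system would draw on far richer signals.

```python
# Hypothetical sketch: combine a few project-health indicators into a
# coarse risk rating, the kind a supply chain dashboard might surface.
# All field names, weights, and thresholds are invented for illustration.

def risk_rating(indicators: dict) -> str:
    """Map simple health signals to a red/yellow/green rating."""
    score = 0
    if not indicators.get("third_party_audit", False):
        score += 2  # no independent code review ever performed
    if indicators.get("test_coverage", 0.0) < 0.5:
        score += 2  # substantial stretches of untested code
    if indicators.get("active_maintainers", 0) < 2:
        score += 1  # bus factor of one
    if score >= 4:
        return "red"
    return "yellow" if score >= 2 else "green"

# A Log4j-like profile: widely deployed, never audited, thin coverage.
print(risk_rating({"third_party_audit": False,
                   "test_coverage": 0.4,
                   "active_maintainers": 3}))  # → red
```

The point of such a scheme is exactly the "yellow to green" move described above: funding one audit flips a single indicator, and the aggregate rating visibly improves.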
Think of this as that embarrassing Wikipedia page that you just feel compelled to come in and edit, and stay up all night to do so. We would love some feedback on this. It represents some emerging work in the OpenSSF, so we'd really love you to come in and help with that, but also to help charter and craft this vision of how these tools work together, and how we build them in such a way that the other toolchain and language ecosystems want to adopt them. I would love your help on that. There are obviously ways to get involved in the specific working groups and projects, and I'm happy to point you to them. The Security Tooling working group is the one I'd point to as the broader place where I'm hoping these conversations take place. On Alpha-Omega, the security toolchain is of course all open source and has its own community emerging around it. But I'd love to see all of you involved in this conversation, and if you are connected to these toolchain communities, I'd love your guidance on how we start the conversations to get them adopted. And with that, there are a bunch of ways to participate. The links will be in the deck attached to Sched as soon as I sit down and upload it. But I'd love to see you all on Slack or on Zoom sometime soon. And with that, I think I've got some time for questions or comments or thoughts. Yes. Yeah. Well, let's give due credit to the licensing and conformance community, which sounds like a really weird thing to say. But let's give due credit to the licensing and conformance community for having first raised the importance of traceability and metadata through the software development process.
Because of the fear of the GPL contaminating your commercial project, we now have a system to reassure your legal counsel and others that, no, trust us, we've only got good open source code, with no inadvertent proprietary code in this bundle, and we can collectively say it is appropriately licensed. That drove a lot of the work at SPDX, due credit there, and the development of some tooling for managing that, although a lot of that tooling is proprietary, which is one issue, though there's a lot of open source code too. But that's OK. Great. Good. I'm doing collective sense-making here, and that is a very strong voice that's different from other voices out there, and we have to help educate them about the tooling that's available. So that is great to hear, and I think it'll also make sense when these things are woven together, as the SBOM becomes the vehicle for carrying a lot of this data between tools. So let's follow up on that in particular. OpenChain is another project at the LF that has had success in deploying SBOMs in this way for the licensing and conformance use case. And I do think working together on SPDX in particular is a way to close that gap. That's where, I mean, SPDX 3.0, its release candidate was just announced last week, and it has security profiles, which I think a lot of folks have been waiting on before they start building code that talks to the emerging standard. So now that that's trending toward closure, I think you'll see more tool development for the security use cases, so that's good. And we just need you all to spend more time on our project, and us to spend more time on your project, and close that gap and try to avoid duplication of effort. I know open source is famous for wanting to see three different ways to solve the same problem, but security isn't assisted by that.
So functionality has a cost, and I think a lot of the time we work on different things when really we only disagree on some fine points that could be made runtime variables. So Shane and I talk a lot, and we'll figure out how to bring our projects closer together. And I think we have time for one more. So there's a lot of excitement about SBOMs, and some folks have called for developing repositories of SBOM data. Obviously, at the tail end of a production process you have all the SBOMs for your upstreams right there locally. Your question was: how do we think companies are going to slice and dice that data to try to understand it? No, no, so the question, to repeat it for everyone, was: will the open source tooling replace the need for collecting that SBOM data and trying to understand and digest it? Certainly I don't see the answer being yes to that. I think these systems will depend upon metadata being attached to objects as they flow through the supply chain, and the different SBOM formats, SPDX in particular, could be a container for that metadata. That container can be firmly attached to the object by the publisher, right, but additional SBOM documents can be added by somebody who notices, hey, there's an outstanding CVE. That's one of the security profile features: you add that as an additional SBOM document to the package as it keeps going. Or there's this new thing called OpenVEX, which is a way for software publishers to say: we know we haven't fixed this vulnerability in this dependency, but we're not affected by it, because we don't use the thing that triggers it. You should be able to attach that as well, so SBOM data is going to be pretty dynamic.
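The "we're not affected" statement described above can be surprisingly small on the wire. Here is a sketch of a minimal OpenVEX-style document built as plain JSON; the field names and values follow my reading of the OpenVEX specification and should be checked against the current schema, and the product identifier is a made-up example.

```python
import json

# A minimal OpenVEX-style document: "we ship a vulnerable dependency,
# but the vulnerable code is never on our execution path."
# Field names follow the OpenVEX spec as I understand it; verify
# against the published schema before relying on this shape.
vex = {
    "@context": "https://openvex.dev/ns/v0.2.0",
    "author": "Example Project Security Team",
    "statements": [
        {
            "vulnerability": {"name": "CVE-2021-44228"},  # Log4Shell
            "products": [{"@id": "pkg:maven/com.example/app@1.0.0"}],
            "status": "not_affected",
            "justification": "vulnerable_code_not_in_execute_path",
        }
    ],
}
print(json.dumps(vex, indent=2))
```

Because a document like this is just another attestation, it can ride alongside the publisher's SBOM and be appended later by anyone downstream who has done the analysis, which is exactly what makes the data dynamic.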
Some of that will be static and attached to code, but it will also evolve as it moves through the system, and every company will have the repository that says: here are all the places where, according to the SBOM data, you're using Log4j, that you need to go close out before you can call the problem solved, right? I also think we'll see global repositories of SBOM data for open source projects, potentially even for non-open-source projects, where they serve as a search engine, as an analytics tool, as a way to understand globally who out there depends on a version of Log4j earlier than one of the fixed versions, to address these issues top-down rather than purely bottom-up. And no one's yet proposed that we do that, but I don't think we'd be opposed to the idea. Who did? And it's called Ocelot. Okay, great. We need you to start. This is how we close the gap: getting more of the folks from the licensing and conformance community in to help us understand what's already there that can be leveraged. That'd be great. Exactly. Exactly. Okay, well, one more question, begging everyone's forgiveness. Yep. Yeah. Well, the output of these tools will be signatures you can verify. There will be JSON files that you can present as either testimony or, in some cases, proof that you've achieved levels of conformance. That's what SLSA is about, so you can say everything self-attested through this chain has hit SLSA level three or SLSA level four. And there is a certification process and regime that folks are talking about setting up, with auditors trained in this who will be able to say: yes, as a third party, we can attest, we witnessed that that level of conformance was hit. Those kinds of groups will consume this data and then verify that the real world matches what the attestation says.
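The company-internal repository described above, "all the places where, according to the SBOM data, you're using Log4j," boils down to a query over SBOM documents. Here is a small sketch that scans a directory of CycloneDX-style SBOM JSON files for `log4j-core` below a fixed version; the directory layout is an assumption, and treating 2.17.1 as the fixed baseline is for illustration only.

```python
import json
from pathlib import Path

# Hypothetical sketch: scan a directory of CycloneDX-style SBOM JSON
# files and report which ones pull in log4j-core below a fixed version.
# The directory layout and version baseline are assumptions.

FIXED = (2, 17, 1)  # illustrative "fixed" baseline, not security advice

def parse_version(v: str) -> tuple:
    """Turn '2.14.1' into (2, 14, 1) for simple comparison."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def vulnerable_sboms(sbom_dir: str):
    """Return (filename, version) pairs for SBOMs with old log4j-core."""
    hits = []
    for path in Path(sbom_dir).glob("*.json"):
        sbom = json.loads(path.read_text())
        for comp in sbom.get("components", []):
            if comp.get("name") == "log4j-core":
                if parse_version(comp.get("version", "0")) < FIXED:
                    hits.append((path.name, comp["version"]))
    return hits
```

A global repository would run essentially the same query, just over every published SBOM instead of one company's store, which is what makes the top-down approach possible.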
But I predict that the base-level command-line tools, the ones that basically say "everything is cool," are what you'll weave into your CI/CD pipeline, so bad stuff won't even make it to test, let alone prod, right? So I think that's how they'll eventually get used. Today, if you want to use them, it's at the command-line level, but I think we'll see that shift over the next year to something more visually appealing and integrated into the tools themselves. Thank you all so much.