Hi everyone. Good afternoon. Glad to be here. My name is Art Manion. I work at the CERT Coordination Center, and I'm here to talk about supply chains, particularly vulnerabilities and how they flow down through and affect supply chains. At the CERT Coordination Center, I do a lot of coordinated vulnerability disclosure, and we specifically focus on multi-party, or multi-vendor, issues. Underneath, these issues are entirely supply chain problems, or more specifically, problems caused by a lack of knowledge about supply chains. Who and what are affected by a vulnerability? In 2002: significant manual effort and a best guess. In 2021: also significant manual effort and a best guess. We have not improved the state of the practice here in 20 years, perhaps longer. The vendor counts for these vulnerabilities, 285 vendors and 183 vendors, again came from manual effort, best guesses, historical and anecdotal collections of who uses what software, and whoever we could reach by email, by pinging people, or through open-source research. It's an awful situation: a best guess is the state of the art today. BadAlloc is a more recent example, just a small one. We've got the memory allocation functions in BlackBerry QNX vulnerable, so a bunch of BlackBerry QNX product lines are vulnerable. That in turn hits who knows how many embedded operating system, IoT, or in this case ICS/OT suppliers, and it reached the EPA and the Water ISAC without a specific warning about the QNX issues. QNX is just one of the 18 vendors lit up by the BadAlloc vulnerabilities. There have to be thousands of affected vendors and systems for BadAlloc, possibly tens of thousands, and the answer is we don't really know who they all are. So the hope here is that a software bill of materials can actually help us. A software bill of materials is exactly what it sounds like: a bill of materials for software, that is, one or more identified software components.
You identify them with names and hashes and version numbers, the things you already identify software with; their relationships, which is key, because otherwise we have no chain in the supply chain; and other associated information you would need to do things like improve vulnerability management or, in my case, improve coordinated disclosure. If the SBOM phrase gives anyone any kind of heartburn or trouble, "upstream dependency tracking" is a fine way to think of it. "Third-party inventory" is a fine way to think of it. That's all it really is; it's a very simple concept. But it's more than just your third-party dependencies: you are someone else's third party, so please label your first-party software and enter it into the SBOM as well. In theory, if we all do this, the graph and the network work, and we start to gain transparency. This work, which I've been involved in for almost three years now, comes out of the multi-stakeholder community NTIA process. Just to be clear, that is distinct from the executive order requirement for NTIA to produce a report, which they also did. My work and my discussion here today come from the NTIA community side of that effort, and that's the short URL for the collection of documents coming out of it. There are two sub-use cases for SBOM. One is before public disclosure: whom do I notify about a vulnerability? Who might be affected? The second is typically post-public disclosure: if I'm a deployer or a system administrator, do I have to patch? Do I have to do something? That's the vulnerability management use case. In the end, we want the same information: what software contains these components, and what software is affected by these vulnerabilities. As we're going to see, those are potentially different things. There's a slight nuance between what happens before public disclosure and what happens after. There's an idea, and we'll explore this a bit, that an upstream vulnerability is inheritable down through the supply chain.
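As a rough sketch of the idea, not any real SBOM schema like SPDX or CycloneDX, an SBOM entry pairs identified components (name, version, hash) with their relationships. All field names, versions, and hashes below are hypothetical placeholders:

```python
# Minimal sketch of SBOM-style records (hypothetical field names, not a
# real SPDX/CycloneDX schema). Each component is identified by name,
# version, and a content hash; relationships link components together.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    version: str
    sha256: str  # hash identifying the exact artifact (placeholder values below)

@dataclass
class SBOM:
    subject: Component                            # the first-party software itself
    includes: list = field(default_factory=list)  # (component, relationship) pairs

bingo = Component("Bingo Buffer", "2.2", "ab12")      # placeholder version/hash
acme_app = Component("Acme Application", "1.0", "cd34")
sbom = SBOM(subject=acme_app, includes=[(bingo, "included_in")])

print(sbom.subject.name, "includes", sbom.includes[0][0].name)
```

The point is simply that the first-party subject is itself a labeled component, so someone downstream can include it in their own SBOM the same way.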
And also an idea that increased SBOM and inventory data will reveal what's actually already there: lots and lots of upstream components we didn't realize were present, and their associated vulnerabilities. So have we now created a bigger vulnerability management problem than we already had? We haven't really created a bigger one; we just have more awareness of what's already there. But how do we handle that at scale? Individual human-written, human-read advisories are probably not going to cut it. I typically view an SBOM as a dependency graph, which is pretty straightforward if you deal with software dependencies at all. This is a toy example that, again, comes from the NTIA work. At this high level of abstraction the dependency relationship is simply "included in"; very likely more nuance is necessary there. In the upper left, Bingo Buffer is meant to be source code that is modified, perhaps, and compiled to produce Acme Buffer, the Acme version of Bingo Buffer. That's going to be key: "built from" or "derived from" is different from simply "included in". And at the far right, not that it's all that important, Frank's Final Good points back to itself; the "primary" relationship simply indicates that the main subject of Frank's Final Good's SBOM is Frank's Final Good itself. Frank's Final Good could be considered a product here, if that distinction helps, although "product" is somewhat relative: anywhere in this chain, one person's product is someone else's component. Now, what happens when a vulnerability is identified in Bingo Buffer? The Bingo Buffer developer, supplier, vendor, maintainer confirms the problem and produces a fix. There's no argument; it's a true vulnerability, and the problem is solved. Great. There's no question that the CVE affects Bingo Buffer. But what does this mean through the supply chain, from Bingo Buffer down to its use, ultimately, in Frank's Final Good?
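The toy chain from the slide can be sketched as a small graph. The node names come from the talk; the edge labels and the traversal are my assumptions about how you'd ask "who is downstream of Bingo Buffer and must investigate?":

```python
# Sketch of the toy NTIA-style supply chain as a graph. Edges point
# downstream: an entry "X: [(Y, rel)]" means Y incorporates X via the
# labeled relationship (labels are assumptions, not a standard vocabulary).
downstream = {
    "Bingo Buffer (source)": [("Acme Buffer", "built_from")],   # modified + compiled copy
    "Acme Buffer":           [("Acme Application", "included_in")],
    "Acme Application":      [("Frank's Final Good", "included_in")],
    "Frank's Final Good":    [],
}

def affected_candidates(component):
    """Every node downstream of `component` that must investigate a
    vulnerability reported in it. Note: these are candidates only;
    inheritance of the vulnerability is NOT assumed."""
    seen, stack = [], [component]
    while stack:
        node = stack.pop()
        for child, _rel in downstream.get(node, []):
            if child not in seen:
                seen.append(child)
                stack.append(child)
    return seen

print(affected_candidates("Bingo Buffer (source)"))
# → ['Acme Buffer', 'Acme Application', "Frank's Final Good"]
```

The traversal answers the pre-disclosure notification question (whom do I tell?), but, as the next section argues, each node still has to determine its own status.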
I was hoping early on that some degree of inheritance could be assumed, and that there was a safe way to do that, safe in a conceptual sense. It turns out that, instead, every node probably needs to investigate its own vulnerability status. Early in the NTIA work, there was a presentation from Veracode; their study of at least Ruby, Java, and Python found a very low inheritance rate. Fewer than 5% of upstream component vulnerabilities made it down to the Frank's Final Good level, the product level. This is real data, and I have no reason to doubt it; it's an important piece of evidence. I'm not entirely sure what other ecosystems look like, whether this result applies anywhere or whether inheritance is less common in C, or something like that. Also, even if there's only a 5% chance of inheriting the vulnerability, do you want to make that assumption? Maybe you do, maybe you don't; a 5% chance of a truly expensive, horrible impact might simply be too high. So, from the NTIA SBOM community work comes VEX. VEX is essentially a way to record and convey vulnerability status. If you imagine the nodes from earlier, each one of them could have a VEX statement about that CVE. One of the tricks to VEX is the first sentence here, copied from the VEX one-pager: reduce effort spent investigating non-exploitable vulnerabilities. It's certainly one thing to convey that something is vulnerable; it's also very valuable to convey, when it's true, that something is not vulnerable, and save us all the time of digging into a non-vulnerable problem. "Affected" and "not affected" are straightforward: affected means action is required to remediate. "Fixed" and "under investigation" are perhaps one or two different dimensions, but they are all treated as statuses in VEX. You can read more about VEX; there's a one-pager linked at the bottom of the slide. There's also a CSAF profile for VEX; CSAF is a structured security advisory format.
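A VEX statement, stripped to its essentials, ties a product and a vulnerability to one of those four statuses. This is an illustrative sketch only, with hypothetical field names and a placeholder CVE ID; the real machine-readable form is the CSAF VEX profile:

```python
# Sketch of a VEX-style status record (illustrative, not the CSAF schema).
# The four statuses are the ones named in the talk and the VEX one-pager.
VEX_STATUSES = {"affected", "not_affected", "fixed", "under_investigation"}

def vex_statement(product, vuln_id, status, justification=None):
    """Build a minimal VEX-like record; reject unknown statuses."""
    if status not in VEX_STATUSES:
        raise ValueError(f"unknown VEX status: {status}")
    stmt = {"product": product, "vulnerability": vuln_id, "status": status}
    if status == "not_affected" and justification:
        # e.g. "vulnerable_code_not_present" -- the kind of reason the talk
        # later argues may be safely transitive downstream
        stmt["justification"] = justification
    return stmt

# CVE-2021-0000 is a placeholder identifier, not a real CVE.
print(vex_statement("Acme Buffer", "CVE-2021-0000",
                    "not_affected", "vulnerable_code_not_present"))
```

The "not_affected" case carrying a reason is exactly the part the next section explores: whether that reason lets consumers downstream skip their own investigation.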
VEX was developed alongside the SBOM work; however, it's designed to be usable without an SBOM. You don't need an SBOM tied to the VEX; you can simply issue a VEX statement about any software whatsoever, as long as you can identify it in a CSAF-compliant way. So despite some evidence here that inheritance is not something to assume, I'm not done trying to figure out whether it can be assumed in some cases. My suspicion is that certain kinds of non-vulnerability might be inheritable. VEX does not currently support this feature, but during the development of VEX there was discussion of the idea that a status of "not affected" could carry a reason for being not affected. For this example: Acme grabs the Bingo Buffer source code, compiles out, or #defines out, certain functions or certain parts of the code, and builds their own version of it, their own component, Acme Buffer, which they are now responsible for, which finds its way into their application, which finds its way into Frank's Final Good. If Acme investigates their use of Bingo Buffer and can very clearly find and state that the Acme Buffer variant of Bingo Buffer is not vulnerable because the vulnerable code is not present, it was compiled out, then I would argue that that reason for being not vulnerable is transitive, and the Acme Application and Frank's Final Good can cross that vulnerability off their lists. The code is not present, at least not via the Acme Buffer component, and you can move on and deal with the next CVE in your list. So that's the end of the slides. I'm happy to answer any questions, and I'll turn it over to our moderator. Thank you.