So I'm going to start off with three caveats. The first is not going to surprise this community: we're still cooking this up. In the DC cybersecurity world, we use the horrible cliché of building the plane while flying it. That's not new for the open source world. The second piece is that there are plenty of examples today. There are people providing this service; there are people who are creating VEXes for money for other people. So this is something that is being asked for and produced today. And the last piece I want to explicitly acknowledge, because I think it should inform how we talk about it in this room and as a community, is that the focus has been on products. Now, products does not necessarily mean proprietary, but it does mean discrete units that are going to be used by other people. And that's something we're always going to have to acknowledge: hey, there are a lot of people that make software and sell it for money, and there are a lot of people who use things on their network and need to think about that software from a product perspective. So we've heard over the last couple of days at this summit a lot about security vulnerabilities. There's a lot of really hard work that goes into security, and known vulnerabilities should be the easy part. Of course, after the last talk and after the work that we're doing, we know that that's not quite the case. But let's look at this from the downstream user's perspective. What is it that they need to know? Well, they have a network. They have some products that they're using. They have some services that they're using. And they need to know: which of those products and services are at risk, and what do I need to do about it? There are a couple of ways that you can find that out. You can look for security advisories. You can go directly to your supplier or your vendor. You can do your own homework; a lot of us spend money on very smart security people.
And of course, you can go to the SBOM. Ask your doctor if it's right for you. Now, there are some challenges here. A lot of us have many suppliers, whether open source or not. As we go through this: security advisories — we heard about some of the challenges of security advisories, and I'll get into some of the details about the actual enumeration of products. You can just go and call up your supplier. They love it, especially when there's a massive security crisis. They always love getting the direct inquiry to the helpline, with the customer support cost that implies — right, you've got a support ticket. You can do your own investigation, but again, your security people are busy and expensive. And of course, there's SBOM — ask your doctor if it's right for you — which tells you about potential vulnerabilities. And that's one of the cores here, which is to say, at a basic level, an SBOM gives you the maximum amount of risk that might be in your product. One of the challenges is we're not sure how we narrow that down. So one other thing that I want to highlight, again thinking about this from the downstream user's perspective, from the organization's perspective: what do they want? Two months ago, we did a two-hour-long tabletop exercise at the S4 conference, which is sort of the Black Hat of industrial control systems. We asked participants from some of the largest manufacturers and from the largest utilities and industrial control asset owners — power and things like that — to role play both as manufacturers of ICS equipment, the things that go boom if we do them wrong, and as the asset owners, the people that are keeping the lights on and the water running. And from the supplier's perspective, SBOM is absolutely critical. You need to know what you have, and you need to understand the supply chain perspective.
From the asset owner's perspective — the people who are just trying to keep the lights on, where, as the cliché goes, security is seen as a cost center — they eventually want the SBOM. But at a time of crisis, what do they care about? They care about whether or not they're affected. Their ideal outcome is to be told: you're not affected, you can move on. So what are the problems we're trying to solve? One, we know that the number of security vulnerabilities is rising. We know that SBOM is going to make this problem worse, because we're going to have more visibility, and more visibility means more things that I'm worried about. That's not a bad thing, but it's something we need to plan for. And perhaps the most important piece here is that not all vulnerabilities are exploitable. We'll dive into the many ways why that's true in a little bit. But what we need is some way of communicating that a given product doesn't pose real risk. So how do we do this today? Well, we're going to have lots of screenshots of security advisories. Here's one involving the great Zephyr project. Forescout this week just announced a massive security advisory, around 53 vulnerabilities in the ICS supply chain; this screenshot is from the last one, Amnesia:33, from last year. And the Zephyr project, which is a great open source project — it's an open source RTOS — said, hey, we're not affected. So it's useful. It's important for everyone downstream to know. And of course, we know that that doesn't scale. So how do we communicate that a product is not exploitable? The answer is VEX, the Vulnerability Exploitability eXchange, possibly the worst-named project in all of infosec. That's my fault, by the way. This was a temporary placeholder name; we said, this will just be a placeholder. There's a bit of a running joke in Washington, DC, that there's nothing quite so permanent as a temporary government program. And of course, we have that for naming.
So first, let's take a look at this question of "exploitable." Because those who have spent a long time in the security research world know that there be dragons. So what are we trying to do? Well, we're trying to minimize the list of products with an unknown status. We don't want uncertainty. We want to be able to document what we've done to protect ourselves, because the government, or my shareholders, or the broader community that depends on me wants some knowledge of this. And we want to focus our time. One of the most important things that we're all kind of aware of, but need to remember, is that security resources are very scarce in most organizations. So let's not fight over the term "exploitable," because many of us have had those fights for a while. Let's use the term "affected." Now, I get told, Alan, you're very affected, pretty often — but not that style of affected. It's basically: do I need to do anything? Because that's, at its core, what we care about for a security risk. So "affected" just means actions are recommended to remediate or address this vulnerability. Now, that often means patching, but it doesn't always. Say you've got a nice blinking box that you buy for your hospital, and its job is to keep grandma alive. When they sold it to you, they gave you explicit instructions: please verify that you haven't done the dumb thing that we told you not to do, and you're good — no further action is required. And that's really what we're trying to do. So how do we talk about VEX? What is it? It's a binding of three things. First, a status: affected or not affected. There's also "fixed," which just says, hey, this product is now fixed. And then we also have "under investigation." That allows a supplier to say, yes, I'm aware that there is a risk out there, and we're working to figure out whether or not we're affected. Second, of course, we need the vulnerability identifier.
So this all builds on the idea that we have that identifier already. This is where it can complement some of the other efforts that are going on. The last piece is, of course, the product. And this is one of those areas that is actually quite sticky; I'll talk a little about this. I loved the comment in the last talk of saying, well, everyone should just use semantic versioning. And it's sort of, bless your heart. It is where we would love to get a lot of projects to move. But have you ever tried to parse, hey, is version 1.2 before or after 1.2alpha? It turns out different vendors rely on different approaches. So that's a hard problem. We're working on it. We haven't solved it yet. But it's something that's been baked into this. So what do we actually need? Again, we need some way of describing the software in question, and then some way of describing the vulnerability and the status. Making this a little more detailed: we need some metadata to track it so we can have updated versions. We need some product status. And then, of course, we need some information on the action or impact statement. So this is starting to sound a lot like a security advisory. And we all love security advisories — we get one or two or three or many. The other contribution here, going back to what we're trying to solve, is that this is about a negative security advisory. Rather than just "affected," we really want to enable the "not affected." So we need automation. The whole goal of a lot of what we're trying to do is: if you're doing anything manually — if you're designing any new process in 2022 that assumes manual work — you're kind of wasting all of our time. There are lots of different types of security advisories. They come in all kinds of formats. And the goal is to make them machine readable. But of course, today it's kind of hard to even process them when they're merely human readable.
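To make the 1.2-versus-1.2alpha problem concrete, here's a minimal Python sketch. The `parse` helper is hypothetical — it is not any vendor's actual scheme — but it shows how a naive string comparison and a pre-release-aware comparison give opposite answers to the same question:

```python
import re

def parse(version: str):
    """Hypothetical pre-release-aware parser.

    "1.2"      -> ((1, 2), 1)   # a final release
    "1.2alpha" -> ((1, 2), 0)   # a pre-release, sorts *before* the final
    """
    m = re.match(r"(\d+(?:\.\d+)*)(.*)", version)
    release = tuple(int(p) for p in m.group(1).split("."))
    is_final = 0 if m.group(2) else 1  # any trailing suffix = pre-release
    return (release, is_final)

# Naive string ordering says "1.2alpha" comes *after* "1.2"...
assert "1.2" < "1.2alpha"

# ...but a semver / PEP 440-style reading says the alpha comes *before*.
assert parse("1.2alpha") < parse("1.2")
```

Two reasonable tools, two opposite orderings — which is exactly why matching version ranges across vendors in a machine-readable way is a hard, still-open problem.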
There are different suppliers, including those of us in the government, that will issue advisories in HTML, in text, in PDF. If you've ever tried to parse some of that stuff, it's quite difficult. So how are we going to actually make this machine readable? We're going to use the Common Security Advisory Framework, CSAF. For those of you who like acronyms, this grew out of ICASI, a now largely defunct industry consortium of a lot of the traditional software-shipping organizations. It is an international standard through OASIS Open, which is a formal SDO — there are clear rules that define what an SDO is. International standards are important for folks across the US government; our friends at NIST do an excellent job of helping to shepherd that sort of thing. So the vision here is: it's got to be machine readable, it should be standardized, we should think about the distribution side of things and build out a set of tools. And, like a good standards project, it should be open to participation and have a process. I do want to acknowledge that VEX is also being implemented in CycloneDX, one of the two popular SBOM data formats. We at CISA are really emphasizing CSAF because we think this ties to a broader question of advisories that is related to SBOM but separate. The vision of CSAF is to automate security advisories: automate them for searching for vulnerability information, and automate them for evaluation and prioritization. So how does this fit in? VEX is a profile in the broader CSAF data format. It uses the same infrastructure and systems. And as I mentioned, this is something that is going to be strongly related to SBOM, and should indeed be explicitly linked to SBOM. But it is not SBOM, right? We're trying to follow the design goal of building this infrastructure and ecosystem as a set of loosely coupled modules rather than as a single monolith.
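As a sketch of what that profile looks like in practice: the field names below follow the CSAF 2.0 VEX profile (`csaf_vex`, `product_tree`, `product_status`), but the product name, CVE, and tracking ID are invented for illustration, and a real document carries more required metadata than shown here.

```python
import json

# A minimal illustrative VEX statement expressed as a CSAF-style document.
vex_doc = {
    "document": {
        "category": "csaf_vex",              # the VEX profile within CSAF
        "title": "Example VEX: not affected",
        "tracking": {                         # metadata so updates can supersede
            "id": "EXAMPLE-VEX-2022-0001",
            "version": "1.0.0",
        },
    },
    "product_tree": {
        "full_product_names": [
            {"product_id": "PROD-1", "name": "Example Widget 4.2"}
        ]
    },
    "vulnerabilities": [
        {
            "cve": "CVE-2021-44228",
            "product_status": {
                # the core VEX binding: product <-> vulnerability <-> status
                "known_not_affected": ["PROD-1"]
            },
        }
    ],
}

print(json.dumps(vex_doc, indent=2))
```

The three pieces from earlier — product, vulnerability identifier, status — are all there, and everything is machine readable, which is the whole point.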
So there was an old ad for Tums, the antacid — which, if you've been eating as many tacos as I have, you may need some of this week. Someone looks at the Tums and says: it's got calcium, and that's something my body needs anyway. Similarly, your security team should already be doing this work of saying, hey, when a new vulnerability comes out, does it affect the software that we're building and the software that we're distributing or selling? So that's hopefully happening already. If it's not, then we should work on that part too. The vision here is we can actually reward folks for that work — let's allow them to share it. And from a corporate perspective, there are very real dollars and cents that can be saved by reducing customer support costs, right? I know a lot of you spent a very busy December remediating the thing that I swore I wasn't going to talk about during this talk. What we do know, however, is that in addition to taking people's time, that imposed non-trivial customer support costs. So why not have this data come straight from the source, straight from the product security team? Although it doesn't have to, right? This is built to be flexible enough that any third party can make an attestation, an assertion, about whether or not a given product is affected. And that could happen with or without the supplier's permission, because the goal is to make sure that the statement is possible; how to trust it is a separate layer. I mentioned this is being implemented today. We see some folks in the ICS space that are already providing this for customers, for money. And in the broader medical device space, this is something that is being built and distributed and sent to hospitals. Maybe some of you saw the keynote on Wednesday by Jennings Aske.
He's the CISO of New York Presbyterian Hospital, one of the biggest hospitals in the United States and certainly one of the best as well. They are consuming SBOMs today, and they're integrating VEX into that tooling. So we're going to see this. And of course, your friends in the government are doing this as well. The executive order from last May required that everything the US government buys have SBOMs. Part of doing that was to define: hey, when you say something's an SBOM, what does an SBOM mean? This is, for the moment, the official US government definition of an SBOM. And it explicitly calls out VEX as a key future piece of this. It's an optional extension — it's not part of the required model, but it's something that we're going to have. So, as I mentioned, this work is continuing to be developed and deployed today. One of the things we're trying to figure out is: what are the use cases? What's the modality of the VEX approach? There's the simple way, right? I have one product, one vulnerability. There's also the model where I can say, hey, for a range of my products, let me give you some information about this vulnerability. That's going to happen the next time we have another high-profile vulnerability, or something that is of great concern to a specific corner of the ecosystem — we're going to say, hey, this can apply to a lot of them. Or we can imagine a world where a third-party security researcher just does a teardown of a product and announces, hey, here's what I found about different vulnerabilities, with a different status for each. We've got a document today on the CISA website that walks through what this looks like and how to express it in CSAF. So again, there's flexibility. I have a hunch that over time we're not going to see them structured in all three ways.
We're going to find out that one of them is more popular, and that's how people prefer to consume them. I think a lot of that's going to come from downstream user demand, right? Banks are going to say, hey, we would like it implemented this way or that way. We also have the notion of status justifications. It's not enough to just say I'm affected or not affected; we want to be able to talk a little more about why something would not be affected. So let's walk through some of these — and again, these are machine-readable codes. First is just: the component is not present. Again, last December a lot of companies had to go out and say, of course we're not affected by this, our product's written in C, so please stop asking about this Java vulnerability that everyone's concerned about. So just being able to say, hey, this isn't in our product at all — that's going to be a very useful piece. We think that by itself it could save a lot of time and effort, just being able to communicate: we're not affected. As an aside, one of the things that CISA and my team did in December was try to build out a list of how different products were affected or not affected. We started off saying, well, let's do a list of all the products that are affected. And someone said, no, it would be useful to have a list of all the products that are not affected as well. I don't know if you know about set theory, but if you try to list all of the things that are affected and all the things that are not affected, you're basically building a list of all software on the planet. It was a very busy time for my team. The other approach — and we think this is going to be one of the more common pieces — is to say that the vulnerable code is not present. The component is present in your supply chain, but the actual vulnerable code is not in there. Now, you may have noticed that I've been talking a lot about the embedded world.
That's because, again, this is where the US government has a very strong interest in the public safety side of things: industrial control systems, medical devices, cars, planes — things that really affect human lives and welfare. And one of the things about compilers for embedded systems is that they're particularly violent: they tend to rip out a lot of things that don't need to be in the final product. So we're going to find a lot of instances where this is the case. The clichéd example is, of course, Heartbleed. Depending on how you measure it, OpenSSL 1.0.1 had between 600 and 1,000 different function calls you could make. Two of them called the heartbeat function, which allowed the attacker to read random slices of your memory. So if you were only using the pseudorandom number generator — and I don't know why you'd use that pseudorandom number generator, but if you were only using that one — there's a decent chance that the heartbeat function wouldn't be in the product. That chunk of code is simply not there. The third piece I want to cover is: the vulnerable code is not in the execute path. So what are some examples where, hey, the code's in my product, including that vulnerable function, but it will just never be in the execute path? For example, there are a lot of times where old libraries ship with distributions for a variety of reasons. Sometimes it's just laziness on the packaging side; sometimes it's because a software update must ship with a rollback capability. So I'm shipping the new version without the vulnerability, but, by the way, I'm going to include the old vulnerable version so that if the installation goes wrong, it can roll back to a known stable state. There's also the case where the code is only executed on a particular hardware module, but that hardware module doesn't exist in the product.
So again, the code is there, but it can never be called because of how the configuration works. Another justification is that the vulnerable code cannot be controlled by the adversary — you just cannot get to that point. For example, a hard-coded variable means user-generated input never reaches that code. Or there's a logging facility — okay, here I go talking about logging bugs again — but that logging module is only called if there's a hardware malfunction, or it's only called during installation, so the attacker cannot reach that particular path. Or, to use a slightly outdated example that really did affect the healthcare sector: the EternalBlue vulnerability in Microsoft Windows was widely exploited, but if the device turns off the two affected ports, 139 and 445, then the attacker cannot gain access to the device, because the ports are unavailable. And the last piece is just: hey, there are other mitigations. Say I'm using a library that's vulnerable to a buffer overflow, but I sanitize the input someplace else, so at no point can an attacker reach that code. So those are the explicit justifications. If you're trying to map this to policy, we think some of these are going to be trusted more than others. A very high-assurance organization may say: you know what, I'll trust you if you tell me the vulnerable code isn't in your product, because that's something a tool probably won't get wrong. But I may not trust you if you're trying to assert that the adversary can't get to it, because adversaries are pretty darn clever — we've got some good hackers in the room — so they're not necessarily going to assume you got that one completely right, though they will trust a claim you can back with a simple source composition or binary analysis scan. Okay, let's talk about what we're trying to do. I've been mentioning a bunch of use cases.
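Before we go further: the five justifications just walked through are meant to travel as machine-readable labels, not prose. Here's a small sketch — the label strings match the published VEX status justification list, while the helper function and the product and vulnerability IDs are made up for illustration:

```python
# The five machine-readable "not affected" justifications.
JUSTIFICATIONS = {
    "component_not_present",                # the dependency isn't in the product at all
    "vulnerable_code_not_present",          # component is there, vulnerable code was stripped
    "vulnerable_code_not_in_execute_path",  # code is there but can never run
    "vulnerable_code_cannot_be_controlled_by_adversary",  # attacker can't reach or influence it
    "inline_mitigations_already_exist",     # e.g. input is sanitized somewhere upstream
}

def not_affected(product_id: str, vuln_id: str, justification: str) -> dict:
    """Build a single hypothetical 'not affected' assertion with its justification."""
    if justification not in JUSTIFICATIONS:
        raise ValueError(f"unknown justification: {justification}")
    return {
        "product": product_id,
        "vulnerability": vuln_id,
        "status": "not_affected",
        "justification": justification,
    }

# Heartbleed-style case: the heartbeat function was compiled out.
stmt = not_affected("PROD-1", "CVE-2014-0160", "vulnerable_code_not_present")
print(stmt["justification"])  # vulnerable_code_not_present
```

Because the labels are a fixed vocabulary, a downstream policy engine can do exactly the triage described above — e.g., auto-accept `vulnerable_code_not_present` but flag `vulnerable_code_cannot_be_controlled_by_adversary` for human review.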
There are media-hype vulnerabilities that we're worried about. We're very interested in minimizing the time spent on false positives from security scanners. And of course, there are lots of folks downstream who don't actually know what their vendors are doing. If we put this in the context of the broader ecosystem, there's the vendor role, or the supplier role — and again, I'm explicitly acknowledging this is a very product-oriented approach — and there's the customer role. Right now we've got the vendor doing some work, a security advisory comes out, and then there's the customer's vulnerability management. The CSAF model sits in the middle there, which is to say we need to think about the data format and the distribution and retrieval. We want to work towards thinking about how we manage the creation and distribution of these documents. And then downstream, ideally, since there are lots of CSAF documents floating around, an organization can say: hey, I don't even need to touch the CSAF documents, the VEX documents, that don't apply to anything on my network. Again, this is the challenge of very large sets of vulnerability databases: how do we narrow them down to figure out what are the things that affect me? That's one of the ultimate goals of CSAF — to allow your power utility to easily understand which security advisories they actually need to pay attention to. And so we want some sort of matching approach. Now, all of us today are talking about automation, and there are some important considerations here. One of them is just understanding what that automation, what the distribution mechanism, is going to look like. This is a pretty hard problem across large sets of software, right? This isn't unique to VEX. It actually overlaps a lot with SBOM, and it overlaps with a lot of other areas, which is to say: there's an increasing amount of metadata about our software that we care about.
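That matching step can be sketched in a few lines. All the package URLs, CVE IDs, and the flat statement shape below are invented for illustration — a real implementation would walk actual CSAF documents — but the logic is the triage we want every downstream organization to be able to automate:

```python
# Components on my network, e.g. pulled from my SBOMs (IDs are made up).
sbom_components = {"pkg:generic/widget@4.2", "pkg:generic/loglib@2.14"}

# A pile of incoming VEX statements, flattened to a hypothetical shape.
vex_statements = [
    {"product": "pkg:generic/widget@4.2", "vuln": "CVE-2021-0001",
     "status": "not_affected"},                    # supplier says: move on
    {"product": "pkg:generic/loglib@2.14", "vuln": "CVE-2021-0002",
     "status": "affected"},                        # action recommended
    {"product": "pkg:generic/otherlib@1.0", "vuln": "CVE-2021-0003",
     "status": "affected"},                        # not on my network -> ignore
]

# Keep only statements about components I actually run, then only those
# where the supplier says action is recommended.
needs_action = [
    s["vuln"] for s in vex_statements
    if s["product"] in sbom_components and s["status"] == "affected"
]
print(needs_action)  # ['CVE-2021-0002']
```

Three statements in, one actionable vulnerability out — that narrowing is the value proposition for the power utility in the example.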
How do we make sure it ends up in the right hands? That includes everything from attestations that your development environment had MFA, or that you meet a given SLSA framework level, to SBOMs and VEXes. We want to integrate this into the broader existing tool set. It's our belief that organizations already spend a lot of time and money on security processes, and they use security tools. So the ideal case is to integrate this into an organization's existing vulnerability management system, their CMDBs, their data lakes, their SOC tooling, and things like that. And of course, we want to be able to tie all this data together. Rather than build the omni-standard, the goal is to make sure that each of these data points can be linked and correlated, and then people can go make money selling you the plumbing. So we have the ability to link VEX files, or VEX documents, with SBOMs. We have the ability to link them to specific SBOM components, so that you can actually say: here's my SBOM, and here's the component that is affected or is not affected. However, there's one small wrinkle, which is that software identifiers are a pretty sticky problem. Inside small domains, it's a better understood world. And I would argue that inside large swaths of modern open source, it's not nearly as tricky a challenge — right, if you have a modern package management system, it won't guarantee global uniqueness, but a unique local namespace gets you most of the way there. Doing it at scale, across ecosystems, and especially for freestanding pieces of software, gets very hard very quickly. As I mentioned earlier, versioning is tricky as well. It turns out that when some of the largest ICS manufacturers sell a giant industrial control system, it ships with many potential features, and you only pay for the ones that you need. So it actually gets quite tricky for the supplier and the customer to have a shared picture — one that can be calculated in a machine-readable fashion — of exactly what is on the customer's premises.
This is a hard problem that folks are working on, but it's not something that's fully standardized today. So as we think about software identifiers, it's always important to acknowledge what we don't have. There are some great solutions out there for certain parts of it — we talked about purl in the last talk — but this is tricky. And of course we want to align this with other open efforts. So I'm looking forward to sitting down with the Cloud Security Alliance's Global Security Database, GSD, which we heard about yesterday, and with Google and GitHub on OSV, the open source vulnerability database, and saying, hey, how do we make sure that we're aligned? Because I think they each bring something very important to the table; they're each working on a slightly different part of the problem. Earlier this week someone mentioned the metaphor of the elephant and talked a little about its history — the blind men and the elephant. Everyone encounters an elephant, but each of them touches a different part: the tail is a rope, the side is a wall, the ear is something else. I like that metaphor, but it's an easy metaphor — it's very easy to solve. In game theory terms, it's a simple coordination problem: all you have to do is talk, and everyone can realize the truth. I think there's a much more powerful elephant metaphor, in which someone looks at an elephant and says, that is a farming implement; someone else says, no, that is a tourist attraction; and a third person says, no, that is a symbol of my national heritage. None of them are wrong, but what we need to do is understand how we can align all those worldviews to actually be able to play with more elephants. So anyway, that's my rant on metaphors. So what are we looking for? Come build things with us, right? That's the plea at every open source discussion.
Standardization is boring but necessary, and the OASIS Technical Committee is open. To publish new standards and be a voting member, you need to be a member of OASIS, but there's a GitHub — anyone can submit pull requests or create issues. We need tools, we need efficient, modern implementations, and we need a range of product teams to make sure this fits their models. So this is one of those cases where I come to you and say: this wasn't built with the open source community expressly in mind, and that's where, right now in 2022, we really need that engagement. And of course there's plenty of room for those of you who are interested in actually taking things, building things, and making a little money. I would put it to you that there's going to be huge demand for VEX in particular, to help people manage their SBOMs, so if you still believe there's a little bit of venture capital money out there, why not start your own company? Summing up: vulnerabilities are going up, and visibility into the supply chain is going up, which means there are going to be a lot more advisories. So we need a way to better manage them for risk-based decisions, and that's why we have VEX. We've got some documents published, and the status justification codes are going to be published by the end of the month. This is the webpage; that's the CSAF GitHub page. This is how to get in touch with me. We would love your help and your contributions. And I think we have plenty of time for people to throw fruit — hopefully soft fruit — and we can talk more about this.
Also, a very quick plug: if you're interested in the broader SBOM community — and what I love about this week is that we've been able to assume everyone knows about SBOM — there are new community efforts launching in July, primarily around these four topics. We're also going to be setting up some work on software identity at CISA, so please shoot me a note and join the broader SBOM community. But let's focus on VEX for now. Yes — so the question, if I understood properly, is that it's one thing to talk about a vulnerability in an underlying dependency, but there are sometimes risks that come from the tooling something was built with, right? If I use a particular tool, do we have that model as well? That's a great question. The long-term vision of SBOM is to be able to capture things like: not just the dependency graph, but that this component was implemented from this standard. Then, if there's a risk in the standard, we can say that this component implements a vulnerable standard. Tooling can be another approach, which is to say: hey, this component is a descendant of, or inherits risk from, this tool. The challenge is going to be — and again, SBOM isn't tracking this today — how do we create visibility into that? There is some fun work thinking about that for the future of software assurance, especially things like in-toto and TUF. How would VEX capture that? That's a really fun problem. I will be honest, I have not thought about it. I think you could, as long as there was a way to say: this is the risk I'm concerned about. As long as there was a way to communicate that there was a potential risk — whatever that namespace is going to look like — then VEX could be used, because it's a binding of that risk. And like a lot of the other efforts, CSAF and VEX are vulnerability-namespace agnostic, right? You say: this is the vulnerability universe we're looking at, and inside that universe, here's the identifier.
So as long as we had a way to capture that, I think we could do it. Yes — so the question was, does this work for transitivity? In fact, that's a slide I keep meaning to make; I just keep trying to figure out what the clever graphic is to capture the notion of transitivity. The short version is: we explicitly punted on transitivity. The SBOM captures the dependency graph. The challenge is that so much is going to be context dependent as you move across multiple hops, so we said a VEX really should come from the last hop, because that's where the testing and the assurances can actually be made. Whereas if we try to calculate the transitive case explicitly — well, this wasn't exploitable upstream, but somehow downstream someone may have implemented something that does allow it — that's tricky. So a VEX is offered as sort of an as-is statement: hey, here's our assessment of this vulnerability, given all other known vulnerabilities. It does include chaining, but it doesn't include the idea of being resistant to future chaining. For example, in the case where the attacker can't control a particular chunk of code, a future vulnerability may allow them to control that code, and then they can exercise it. The more common case is going to be that, yes, transitivity will hold — if this version upstream isn't vulnerable, then downstream it probably won't be vulnerable — but we don't want to bake that assumption in. So the comment was: hey, this is going to depend on trust, and there are going to be plenty of times when, if you make a mistake, your customers are going to be very uncertain. That's a really good point. Any lawyers in the room at the moment?
So one fun question is going to be, are companies even going to be allowed to say this? We're encouraging people to make a bold statement, which is to say, it's one thing to say "I am vulnerable," that's not really putting yourself at risk if you're wrong. But we think the demand downstream is going to be high enough, and given people's reluctance, especially at the beginning, to make full public SBOMs available to their customers, customers are going to start to demand this model. One of the things that we explicitly talk about in the executive order implementation is to say that early adoption must come with some acknowledgement that tools are not going to be perfect. There are truly terrible crisis responses to announcing a breach or a security flaw: "We care about your privacy and security, which is why we've done nothing except write this PR statement," right? But we know that some companies do it well, and we know that companies have the ability to do this in a way that engenders trust. I've been told by folks in a number of critical infrastructure sectors that they're simply never going to trust their vendors: "They've lied to us in the past; they can give us all the VEXes they want, it doesn't matter, screw you, give me a patch." But we also know that there are organizations that people do trust, and especially in domains where patching a giant piece of enterprise software means that someone's not going to see their kids that night, or it means you've got to turn off something that really impacts human life, folks, we think, are going to be very interested in accommodating this. So yeah, at the end of the day, this is going to rely on trust. One of the things we're trying to do is enable policies to help enforce that trust in a machine-readable fashion. All right, well, hopefully this, oh, we've got another question here. So the question from a major supplier to the US government was: hey, there are a lot of different moving pieces.
What's the timeline for actually implementing a lot of this work, and ultimately maybe requiring it? And you're right, there are a lot of different moving pieces. On the executive order, the White House was very clear about things like what ultimately was put into the Secure Software Development Framework, which is to say compliance, right? The White House wanted a compliance rule in May of this year. Whoops, we're a little late. We have to figure out how to turn a framework into a compliance regime. But that part is happening from the executive order side of things. We want to enable VEX as a way of establishing this, and VEX can be implemented today. What we don't have is a clear vision of the broader infrastructure for how we move this stuff around at scale, and that is a piece that we need some good collective skull sweat on. This big question of how we share and exchange metadata about software is one of those big missing pieces that we don't have in the ecosystem today, and I don't know what we'll base it on. There are a couple of smaller solutions, but none of them are really ready for prime time. So, SBOM you can do today. There needs to be some further work on maximizing interoperability, but we get a lot of the value without that, or with only partial interoperability. And VEX you can do today as well. Hopefully that answered your question; if not, I'm always happy to chat. Yes, so the question was about the CSAF schema. And you drilled straight into one of the pieces that we really don't have, which is the way it describes products. That product identifier is probably the most confusing part, and I'll be honest about that. The reason is that it's trying to capture almost any way anyone can think of that products are organized. And this is the thing that I keep coming back to: software identity is a really hard problem, and there are no general solutions today.
And SBOM in particular has really thrown that into strong relief, right? CPEs haven't scaled. NIST had the wonderful idea of using SWID tags, which are more decentralized, but we're still working through, oh, it's really hard to do string math in XML. So we need to figure that one out. The product tree model tries to capture the full range. So, for example, it allows you to describe a product family where 1.2-alpha comes either before or after 1.2, because there are companies that do it both ways. So I will acknowledge that that is the hardest part of manually looking at the CSAF spec. It's one of the reasons why we're trying to get as many tools into the model as possible, because, again, the nice thing about JSON is you write down a list of all your products, you write that script once in a way that fits your community and your product, and then you can just generate it whenever you need to. The comment was that it's built on what a security advisory needs, and I agree completely. CSAF, like a lot of technical standards, was designed to cover a wide range of examples, and that's, I think, why it has a bit of, I'm not going to use a harsh word for it, but it does have a learning curve. Thank you, and I think that might be time for us. And at the risk of doing a truly terrible plug, CISA is actively trying to hire. If you know any young people who want to do public service, send them to the government, and then in two years hire them away from us. Hey, thank you. Yeah, I'll do the fist bump, yeah. Yes. Well, if you want to spend a year with us, we'd love to have you. But no, we're hiring at all levels. My director just got yelled at by Congress for having 300 open spots, so yeah.
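As a footnote, the "write that script once" workflow mentioned above might look something like the following sketch in Python: maintain one list of your products, and regenerate a VEX-style document for each new vulnerability. The field layout is simplified for illustration and is not the exact CSAF schema; the product names and CVE are made up.

```python
import json

def make_vex(vuln_id, products, status):
    """Build one VEX-style document for a list of products.

    A sketch of the "write the script once" idea: the layout loosely
    mirrors a CSAF document (document / product_tree / vulnerabilities)
    but is simplified and not the real schema.
    """
    return {
        "document": {"category": "vex", "title": f"VEX for {vuln_id}"},
        "product_tree": {
            "full_product_names": [
                {"product_id": p, "name": p} for p in products
            ]
        },
        "vulnerabilities": [
            {"id": vuln_id, "product_status": {status: products}}
        ],
    }

# One product list, maintained in a way that fits your organization;
# regenerate the document whenever a new vulnerability lands.
products = ["widget-server-3.1", "widget-agent-2.0"]
doc = make_vex("CVE-2021-44228", products, "known_not_affected")
print(json.dumps(doc, indent=2))
```

The point of the design is that the hard, product-specific naming decisions get made once, in the script, rather than by hand in every advisory.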