Welcome, everybody. Thank you for making the time to attend the session. Some housekeeping before we start. First of all, I'm delighted that my friend and colleague Andrés has joined ControlPlane today to head up our North American operations and build supply chain trust for the organizations with which we consult. Secondly, we're both recent authors of various, hopefully not nonsensical, tomes — you may be the judge. These are free and available, and we'll also scrawl our signatures across them if you so desire; come and grab one afterwards. And this will be a Socratic session whereby Andrés gives me a run for my money as I attempt to make sense of these slides. And with that, let's jump in: VEXing open source security vulnerability data, for everything. I might actually require some speaker notes, so let's be a bit more careful about how that starts. And there we go. Okay. As CVEs proliferate and vulnerability scanners light up like Christmas trees, security teams are under increasing pressure. Can we do better than the CVE — Common Vulnerabilities and Exposures — situation that we currently deal with today: vulnerability scanning, and correlation between the packages we use and the exploitability of those packages in the context in which they are run? You will also hear a little bit of SBOM threaded throughout the talk. Okay, so hello. I'm Andy, from ControlPlane. We are a cloud-native security engineering firm based out of London. We work in North America and the Antipodes — Australia and New Zealand — and we are 50 bright souls with a particular interest in open source and cloud-native security. My background specifically is Dev-like, Sec-ish, and Ops-y; I've written more than I care to admit or reflect upon. And my colleague, Andrés — so that's DevSecOps. I am a product manager by trade. I'm an operator. I have done multiple, multiple things throughout my career.
I've worked at organizations large and small in different capacities, always facilitating the outcome of producing software — software that is meaningful for the industry, for the use cases that different organizations drive. I am very involved, with Andy, in the CNCF: we're both part of the Technical Advisory Group for cloud-native security, where Andy is the co-chair and I'm one of the global technical leaders for the group. I'm also very involved in the SPIFFE project around cryptographically verifiable identities. I've put some words down on paper, I've done security audits for projects throughout the ecosystem, and, well, now I'm helping organizations make sense of and adopt these technologies. Andy? To some extent, I feel self-conscious that I have glorified my book with the standard slide and avoided yours. It is possible, should you wish to avoid re-keying errors, to download the first half of Hacking Kubernetes — with hot, spicy one-liners that you can copy and paste from the PDF — behind an email paywall on the website. I have to give a shout-out to my illustrious co-author, Michael Hausenblas, without whom the publication would never have succeeded. And with that, what is this talk about? Not all vulnerabilities are exploitable. CVE scanning is a broken state of affairs. Toil less: security teams do a lot of work, necks are on the line, heads are above the parapets. Finally, everything should be machine-readable and automatable — I hope there's no disagreement there. Right, so vexatiousness runs as an undercurrent throughout the talk. Of course, what is the problem? What is the issue here? We want fresh software. Freshly minted software contains freshly minted vulnerabilities. This is a fact of life; we accept it and embrace it. Vulnerability scanning — reverse engineering the contents of an image, or a container image, or a tarball, or an open-source package — is variable.
There is no deterministic consistency between approaches, between discovery software, and bills of materials have trust issues. Do you trust build time? Do you trust packaging time? Do you then reverse engineer afterwards to verify that those things all line up? Are these things signed? Do we have a chain of trust? Ultimately, these things lead us to a state where we get the Log4Shell issue: we don't really know what's running in production, because we don't have manifests and a decomposition of where we're running specific packages. So VEX — to jump to the end of the talk — is the Vulnerability Exploitability eXchange format. Try saying that five times. VEX is a machine-readable mechanism by which we can correlate a specific CVE to a specific project under some set of conditions; we'll look at that a little bit more. It's a couple of years old and it is very nascent. People are not shipping SBOMs, and they're not shipping VEX documents. We have 0.1% of the industry aware of SBOMs, and a vanishingly small quantity above that who are aware of VEXes. This is a future-leaning talk looking at what the future could be like; it is not the current state of affairs. Like SBOMs, this is a distributed effort — it shouldn't be the last person in the chain who is responsible for cobbling these things together. And augmentation of vulnerability information is key to understanding how things can be exploited, and therefore to making the decision: should we ship this line of code to production? Machine-readable: good old JSON. This is an extract from the VEX specification — not from the wild, if you like. We can see the VEX self-identifies and then provides vulnerability information, essentially saying: is this package vulnerable? Is there a patch that you should have applied already? Or are you in the hinterlands, and should you consider whether or not you actually want to run this software in production? This then becomes a business balance.
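To make that concrete: a minimal sketch of such a statement, loosely modeled on the OpenVEX shape — the field names and values here are illustrative, not the authoritative schema — might be expressed and queried like this:

```python
# An illustrative VEX-style document as plain JSON-compatible data.
# Field names loosely follow OpenVEX; treat this as a sketch, not
# the normative spec.
import json

vex_doc = {
    "@context": "https://openvex.dev/ns",
    "author": "Example Vendor Security Team",
    "timestamp": "2023-01-01T00:00:00Z",
    "statements": [
        {
            "vulnerability": "CVE-2021-44228",
            "products": ["pkg:maven/org.example/app@1.2.3"],
            "status": "not_affected",
            "justification": "vulnerable_code_not_in_execute_path",
        }
    ],
}

def statuses_for(doc, cve):
    """Return the status of every statement matching a given CVE."""
    return [s["status"] for s in doc["statements"] if s["vulnerability"] == cve]

# Round-trip through JSON to show it is plain machine-readable data.
doc = json.loads(json.dumps(vex_doc))
print(statuses_for(doc, "CVE-2021-44228"))  # ['not_affected']
```

The `status` and `justification` pair is the heart of it: the scanner sees the CVE; the VEX says whether it matters here, and why.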
Is it higher risk to lose potentially millions of pounds a minute, hour, or day by removing a financial services trading application from production, or would we prefer to accept the risk? Everything is a balance — that's the clear answer here — even with the understanding that we might have a remotely exploitable piece of software running with a web-facing socket. Okay. Is this actually an SBOM? No. The two exist in the same space, but the SBOM is the composition, the recipe by which we determine an application or a container image; VEX is the allergy information — you probably don't want to eat this if you're vulnerable to this specific biological condition. If new allergies appear daily, sort of zero-day allergy style, why would we need a VEX? All software is vulnerable. We have to accept this as a fact of life, despite some of the protestations coming out of the US government. When a CVE is identified, it's on the vendor to actually disclose it, to potentially create an embargo — a very long-winded process with screaming backflips through flaming hoops. And it's a misaligned incentive. CVEs, from some perspectives — and obviously not mine, because I think full disclosure is key to remediating these things — can be seen as a badge of dishonour: this project has this many CVEs. Now, that could mean that there's a well-oiled release and security-patching disclosure process, and the organisation or the maintainer is very good and very security-conscious. On the flip side, it could mean that those things were reported and the individual thought, wow, probably best to sweep this under the metaphorical carpet and not announce it to our users — maybe the changelog says this is a bug fix rather than a security fix. So the incentives for producers of code to accept and communicate a CVE don't quite line up. Then, once we have a CVE sat on the registry, how do we correlate it to a package?
How do we correlate it to something that is installed within a container image? There are so many ways to drop binaries and source code into a container or any application. One of them is essentially hot-loading: run.sh is your entry point for a container — what does it do? It pulls latest from GitHub and then executes the thing. You can't scan for that; it's not practically possible. Obviously, that's poor practice. But there is also just downloading and installing something as part of the Dockerfile, or obfuscating a binary after it's installed. Most scanning discovery is done by interrogating the package manifest, and that package manifest could be removed after it's used — package.json files, pom.xml files, even what's installed by apt or apk in the distribution packages. So it's not difficult to obfuscate or entirely hide what is installed from a scanner, because the only way to really understand is to do hash analysis of directories and binaries — and even then, you can do binary obfuscation that disconnects the hash. Finally, accidentally shipping vulnerabilities to production is something that we try our hardest not to do. Andy, a question if I may. Yes, please. Projects that are mature are particularly good at including their security advisories in their repositories: I can pull them as code, I can aggregate the data. Now, if an organization consumes software from multiple suppliers, oftentimes it's a full-time job to go to a security advisory website, to a knowledge base that may be a PDF, may be HTML, may be a TXT that comes with the artifact. How does that change with all of this? Reduction of toil. Instead of one person or a group of people having to manually reconcile advisories, like you say, we get to a position where we can poll and we can notify.
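The manifest-versus-hash point above can be sketched in a few lines: if the package manifest is deleted post-install, only hashing the files themselves (and matching the digests against a known-file database) can still identify what is present. This is a toy illustration — real scanners do far more:

```python
# Hash-based discovery in miniature: digest every file under a root
# directory so the results can be matched against a catalogue of
# known package contents, even when manifests have been removed.
import hashlib
from pathlib import Path

def hash_tree(root: str) -> dict:
    """SHA-256 every regular file under root, keyed by relative path."""
    digests = {}
    for p in sorted(Path(root).rglob("*")):
        if p.is_file():
            digests[str(p.relative_to(root))] = hashlib.sha256(
                p.read_bytes()
            ).hexdigest()
    return digests
```

Even this fails against deliberate binary obfuscation, which is exactly the speaker's point: a repacked binary no longer matches any known hash.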
In the same way that Dependabot has revolutionized pull requests against a dependency list, let's say, we want to get to a state where we receive a remote notification: you want to upgrade this, because you have a specific security vulnerability — and semantic versioning plays a useful part here in the same way. Fascinating. Just you wait. This isn't solved by CVEs alone: CVEs are very broad-stroke and correlate a package version with a vulnerability. Modern vulnerability assessment is essentially: CVE scan, panic, guess at exploitability, raise an exception, and deploy anyway. What do I mean by this? If I have a dependency — it's easier to think about this in terms of open source software, or, let me rephrase that, software that executes from source; we're talking about interpreted languages, Python, Node, PHP, all those kinds of things (with compiled artifacts you get into a tree-shaking space, and I'm jumping ahead here) — the point being, if I ship a package to production and there are five function signatures, four of which are vulnerable but only one of which I use, should the CVE apply to my deployment? From a risk reduction perspective, you could actually delete all of that other code. It's not being executed. It's not on the hot path. It's not executed at all. So it doesn't make any sense to block a production deployment based upon the potential exploitability of code that we're not exercising. We'll get further into this as we progress. So the CVE is present, but the component is not affected, is what you're saying. Exactly. That portion of the library — there's no execution path to it. That's it. What software is not vulnerable? I mean, open source, of course, is vulnerable — or anything that accepts third-party contributions, or accepts first-party contributions, or exists as software.
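The "five signatures, one used" argument is a reachability question, and a toy version of it fits in a few lines. Real tools build the call graph from source or binaries and are far more involved; this sketch just shows the shape of the decision:

```python
# Toy reachability check: given a static call graph, is any function
# flagged as vulnerable actually reachable from our entry point?
# The graph, names, and vulnerable set are all invented for
# illustration.
from collections import deque

call_graph = {
    "main": ["lib.parse"],
    "lib.parse": ["lib.helper"],
    "lib.helper": [],
    "lib.render": ["lib.exec_template"],   # vulnerable, but never called
    "lib.exec_template": [],
}

vulnerable_functions = {"lib.exec_template"}

def reachable(graph, entry):
    """Breadth-first walk of the call graph from an entry point."""
    seen, queue = set(), deque([entry])
    while queue:
        fn = queue.popleft()
        if fn in seen:
            continue
        seen.add(fn)
        queue.extend(graph.get(fn, []))
    return seen

exploitable = vulnerable_functions & reachable(call_graph, "main")
print(exploitable)  # set() — the CVE is present, the component is not affected
```

An empty intersection is precisely the `not_affected` / `vulnerable_code_not_in_execute_path` case a VEX statement is meant to communicate.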
That is to say: open source, compiled binary artifacts, things that exist in the cloud — either software as a service or even the cloud providers themselves — your operating system, the bash group running production, anything and everything. We extend trust to machines to run software for us, and humans are fallible; therefore it is an extension of our mortal fallibility. As I mentioned before, we really want to ingest and run new software — big up to the dude maintaining awk for his whole life. That's my favorite awk one-liner on the slide: slightly mind-melting, but it's a de-duper. The rapid pace of software innovation requires maintainers to security patch, add features, and quash bugs, or a rival project may overtake their adoption curve. New software has new vulnerabilities. This is a fact of life. We embrace it, we extend it; it is how we want and need to live. There is no such thing as secure software. This is the most prescient GIF I think I've found in a few weeks — that's the red and the blue team chasing each other around. Software that stands still dies. We want to take advantage of these new features; new features bring the risk of vulnerability; companies that don't ship features are beaten by a less risk-averse competitor. So where is the incentive to ship secure software? That is not going to be answered in this talk. So, of course, widely used but occasionally insecure software: is the internet on fire? Is your definitive reference accessible via multiple protocols? Vulnerability is a fact of software. We have to live with these things, and our mitigations cannot be as broad-stroke as they currently are. So, what is it that we're talking about? Sorry? What is it that we're talking about? Well, exactly. Who knows? What is the problem? Software exploitability is all over the place. We can't avoid it, and any exploitable software leads to multiple negative outcomes. Vulnerability management is super difficult. We see false positives all the time.
Again, there is no incentive for a vulnerability assessor to not raise an issue. When their job is to wave a flag of terror or surrender, the incentive is aligned to falsely report a positive rather than incorrectly report a negative. Security teams are burnt out. Of course, SOC and SIEM analysts are on the front line — they have to do the work when open source vulnerabilities proliferate. And DevSecOps technically means that it's all our job. So, the problem with scanning. We might have — there's a typo there; the first line is supposed to say type 1 — type 1, the false positive: the gentleman is pregnant. These are unexploitable CVEs in scans. This is the fundamental problem that VEX is trying to fix. Type 2, the false negative: well, clearly pregnancy is upon her, but it goes undetected — undiscovered packages. We've been able to drop things in, either by removing our package installer's manifest post hoc, or just by pulling things in with direct downloads; or unexploitable but undiscovered CVEs. We'd still like to know exactly what our security posture is when we have other things installed. Very — go on. Yeah, just to say, that is a fairly reductive way to put it: you run the scan, or you're reviewing the scan, everything comes up red, and then you need to go line item by line item, explaining why or why not. And this is something that is not done in one meeting; this is something that takes weeks, oftentimes months, gatekeeping the ability to move to production. You're not getting the developer productivity you want. No one's happy — the security team's not happy, the development team's not happy. There's so much noise in the scans. You go back to your vendor; the vendor says, oh, it is not vulnerable, or it's something that just came in on the base image — take our word for it. But then how do you rationalize that to the people who have the final say over what goes into production? Exactly so. The front line of defence here is these vulnerability scanning tools.
And not to hate on anybody who is providing us with open source software — much appreciation. The difference here is how vulnerabilities are assessed and discovered in an image. Trivy, run against a deliberately vulnerable OCI image's latest tag, reports 702 instances of vulnerability, whereas Grype reports 748. What are the missing 46 vulnerabilities? Do we care about them? Should we let them roam free amongst our infrastructure? Whatever your favorite tool, disparity in results leads to questions: are any of those vulnerabilities critical, remotely exploitable, or on the hot path for an attacker about to pop a shell in the infrastructure? Here is our standard lifecycle: a project understands it has a vulnerability, performs analysis, patches, and runs internal tests — the timeframe of concern. Are any of the systems that we're running vulnerable, by virtue of running the software and by virtue of having an attack path for an external attacker? And then, finally, we go to patch the thing. What do we do with this current situation? Well, we could just do nothing — we kind of do. If we want to ship firmware for radio devices, the American accrediting organization is the FCC (happily, I put that there). The version of the firmware is accredited, stamped, and out it goes. Those vulnerabilities that accrue over time are so difficult to update, because the accreditation process takes so long, that for firmware shipped in radio chips in the US it is actually easier to just leave the vulnerability there and build something new altogether than to go through an update cycle. We could do that with software. It would lead to catastrophic results, no doubt. Some other alternatives here: tree shaking the source. Tree shaking a list of dependencies and transitive dependencies is such that if a package exists twice on different branches of the tree, the duplicate can be removed and everything reduced down to one level of depth. So instead of having multiple levels of transitive dependencies, we put everything at the same depth.
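That flattening step can be sketched directly — collect every transitive dependency, drop duplicates that appear on different branches, and you are left with one flat, deduplicated level. Package names here are made up for illustration:

```python
# "Tree shaking" a dependency tree down to one level, in miniature:
# walk the tree, deduplicate, and return a flat set.
def flatten(dep_tree: dict, root: str) -> set:
    """Collect every transitive dependency of root, deduplicated."""
    flat, stack = set(), list(dep_tree.get(root, []))
    while stack:
        dep = stack.pop()
        if dep not in flat:
            flat.add(dep)
            stack.extend(dep_tree.get(dep, []))
    return flat

deps = {
    "app": ["libA", "libB"],
    "libA": ["libC"],
    "libB": ["libC"],   # libC appears on two branches of the tree
    "libC": [],
}

print(sorted(flatten(deps, "app")))  # ['libA', 'libB', 'libC']
```

Real package managers also have to reconcile conflicting versions during this flattening; that complication is omitted here.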
We can to some extent also do this with those five function signatures that we talked about before. How we do so is language-dependent. Some languages support reflection and aspect-oriented programming that would allow us to analyze and do this; others just need instrumenting, like we would do for a code coverage report. We could take it one step further and actually run symbolic execution: throw loads of runs at the software and try to understand exactly which line of code is being executed where. There are limits around memory addressing here, though, and it is unlikely that symbolic execution would really say, well, this particular compiled binary is not using this portion of binary code, so let's just excise that from the binary. Or we get into a detailed code review. Going back to source: this is where Red Hat made their money, ultimately, by providing assurance around the packages that we install. Google have launched Assured Open Source Software, where everything from PyPI and npm and various other places is rebuilt in Cloud Build — from a SLSA perspective, just to deviate very slightly: SLSA looks at the provenance and veracity of the things that come into the pipeline, but when the pipeline itself is not assured — let's say running on untrusted infrastructure, or in Jenkins, perhaps — then SLSA level 1 becomes a useful concept. There's little point in rubber-stamping the veracity and provenance of software if that veracity and provenance itself is junk, or if it's intercepted and mutated as part of the build infrastructure. So Google's Assured Open Source Software really looks to be driving that in the right direction, but again, it's a huge multinational-style project. And you can say the same thing for Alpha-Omega, the OpenSSF's sub-project, which is looking to secure the Alpha — the most critical projects, I think we have about 50 of those — and the Omega, the breadth of open source.
These are huge initiatives and come with a huge amount of complexity, and for those initiatives to thrive, we should be driving towards a common standard. Next slide, please. So what's the perfect solution? I'm glad you asked. Zero vulnerabilities — which is what the US government would like us to achieve — is absolutely impossible. Automated vulnerability analysis: so we're talking about these VEX documents. We want a way of identifying whether a CVE is exploitable in the software that we're pushing to production. How do we do that? Well, DARPA ran a project in 2016 with these cyber-reasoning systems. They used 32-bit CPUs to perform basically binary attacks and defenses. If you could pop some form of exploit — a buffer overrun, any sort of privilege escalation — and when I say you, I mean if your "AI", in inverted commas, or your automated tool could do this, then the adversarial tool on the other side attempted to patch the thing. And you play them off against each other — generative adversarial networks is the way this is framed. Now, it wasn't quite at that level. That would be incredible — it would be fascinating, and the future would be drenched with terror, if we could do that. But if you've seen GPT-3 injection attacks, you'll know the actual state of artificial intelligence is still artificially unintelligent. There's very little in terms of generalized AI, in terms of actually learning, and all of these systems have injection and misdirection issues that mean we probably don't want to rely on them for such fundamental security. What else do we have? Well, again, if machines don't do the work, then people need to do the work. And looking at these, again, multi-billion-dollar organizations — Red Hat, the Linux Foundation and OpenSSF, Google — the quantum of the problem is insanely large, I suppose. Engineers are doing the work, but their work is often lost, because it's not well communicated. And then finally, what about vendors?
There was a question from some of the UK government work that I do as part of the OpenUK advisory charity, which was: should open source maintainers be responsible for patching CVEs? The speed at which I shot down the question! The problem, again, is that we take so much from individuals who generate so much value for large organizations, but then the question of how we deal with these vulnerabilities is pushed back onto them, and they may have neither the time, nor the inclination, nor the understanding — they've generated the value; they owe us nothing. So understanding this from an application perspective, when a vendor is shipping us something; from an internal perspective, when we build something ourselves in our own infrastructure; or from an open source maintainer perspective — each puts a different lens on the same problem, which is why the multinational organizations make a difference here. So what about vendor-provided exploitability indicators? What about them? Do we want them? Do we trust them? Let's have a look in the context of VEX's use cases. If we are provided these documents by maintainers or vendors, we should have less toil, less cognitive overhead, and fewer sleepless nights, because we have fewer false positives. When we go to install SQLite 3 in a container image and we get a critical because the way it's used in Google Chrome has an exploitability path, the context is so far removed from our daily concerns — and the repetitious nature and international duplication of that effort, every time someone installs that package, really is just expending effort and toil for no reason. So reducing false positives will make everything better, and it's very rare that such a sweeping generalization can be made, I think. Next: verifiable exploitability indicators for workloads.
A zero-day vulnerability is announced; we should be able to interrogate the SBOM manifests of everything that we have running in production and cross-reference that with a VEX. And there's an important distinction here, which is that an SBOM is static — it should be deterministic. There are SBOM standards with timestamps in them; anybody who's worked with Nix or reproducible builds knows that removing any non-determinism or temporal data is essential for reproducibility. Why is reproducibility so good? Because you take a hash, you build the thing in multiple places, and you know that the underlying infrastructure isn't compromised. That fundamental point about reproducibility requires a lot of effort shifted left, so hopefully in future we can ship temporal information about an SBOM as metadata and not as part of the document itself — but the point is that the document should be static. Once a container image or an application is built, its composition is known. Ideally we ship that information at build time, and then at run time — or rather at CI time — we reverse engineer that information and compare, just to make sure that what we've discovered in the image matches what the vendor said. Nevertheless, that is static data. VEX vulnerability information is temporal. It is important that when a zero-day is announced we don't just say: a VEX and an SBOM document told us we were fine as of two days ago, so we'll run with that. There are movements to merge the two; we feel very strongly that they should be separate documents, and we'll look more at that later. So being able to interrogate production for the SBOMs that it's running, and then cross-reference those against the vulnerabilities in a VEX, gives us actual insight into the risk of the decisions we make in production — and abates sleepless-night syndrome.
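The determinism point can be demonstrated in a few lines of hashing: if the SBOM is truly static and free of timestamps, two independent generations of it produce the same digest, and any embedded temporal data breaks that. The SBOM content below is invented for illustration:

```python
# Reproducibility in one line of hashing: canonicalize, then digest.
import hashlib
import json

def canonical_digest(sbom: dict) -> str:
    """Serialize with sorted keys so the digest is deterministic."""
    blob = json.dumps(sbom, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode()).hexdigest()

# Two generations of the "same" SBOM, keys in different orders.
sbom_a = {"name": "myapp", "components": ["libfoo@1.0", "libbar@2.1"]}
sbom_b = {"components": ["libfoo@1.0", "libbar@2.1"], "name": "myapp"}
assert canonical_digest(sbom_a) == canonical_digest(sbom_b)

# Embed a timestamp and the digests diverge — which is why temporal
# data arguably belongs in metadata, not in the document itself.
sbom_c = dict(sbom_a, generated_at="2023-01-01T00:00:00Z")
assert canonical_digest(sbom_c) != canonical_digest(sbom_a)
```

The same canonicalize-then-hash trick is what makes a content-addressable SBOM verifiable across independent builders.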
Fundamentally, security teams should be automating themselves out of a job — it's DevOps, it's DevSecOps, all that good stuff. Toil less is the goal. So how does that look? Looking at the model, what you want contained in a VEX is the description of the data that goes alongside the product, and the status of the known vulnerabilities up front. Do that in a fully automated, API-driven fashion — but, to be very clear, we're not saying you no longer need to think about vulnerabilities; we're only talking about the ones that are known. Your threat team should still be doing analysis, but at least they'll be focused on discovering what hasn't been found by others, knowing that they can safely deploy the software — or take the risk of deploying it — knowing what the compensating controls, remediations, and mitigations should be. Next slide, please. There is a set of required fields in a VEX; you can see them here on screen. You have the metadata, which includes the author, the ID, and the timestamp, and alongside it you have the product identifier; the vulnerability identifier, mapping back to NVD or another database (it could be an open source database); the vulnerability details; recommendations, including mitigation or remediation; and the product status. There are a few different angles on this right now. As you can see, we've got one format, the Common Security Advisory Framework (CSAF), and then CycloneDX is supporting this as well. As we've said, VEXes are not being shipped. We don't see this yet, because it requires relatively significant effort from a vendor to hold their hand up and say: this is definitely a correlated CVE that will affect your production systems. There's a lot of vulnerability assessment and manual work in there. The only tool publicly doing something useful here is OWASP's Dependency-Track. We haven't actually called this out specifically in a slide, but it's worth looking at how they have integrated it, if this is of interest.
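The minimum elements just listed can be sketched as a simple record type. The field names below are illustrative rather than drawn from any one spec — CSAF VEX and CycloneDX VEX each define their own shapes:

```python
# A sketch of the minimum elements of a VEX statement, as a dataclass.
# Field names are illustrative, not normative.
from dataclasses import dataclass

@dataclass
class VexStatement:
    author: str                 # metadata: who issued this
    doc_id: str                 # metadata: document identifier
    timestamp: str              # metadata: when it was issued
    product_id: str             # product identifier, e.g. a purl or image digest
    vulnerability_id: str       # e.g. a CVE ID mapping back to NVD
    status: str                 # product status
    details: str = ""           # vulnerability details
    recommendation: str = ""    # mitigation / remediation guidance

stmt = VexStatement(
    author="Example Vendor",
    doc_id="VEX-2023-001",
    timestamp="2023-06-01T12:00:00Z",
    product_id="pkg:oci/myapp@sha256:abc123",
    vulnerability_id="CVE-2023-1234",
    status="not_affected",
)
print(stmt.status)  # not_affected
```

Tooling that consumes such statements can then filter a scanner's findings down to only the entries whose status actually warrants action.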
So what does a VEX document do? It gives us a map of an application, its versions, and the particular vulnerabilities that exist in those versions. Again, we've got these remediations — maybe it's just "patch", maybe "under investigation". Obviously the worst state to be in, and it's less called out in documentation, is: we have no mitigation for this. Any CVE, any zero-day, exists for a period of time with no mitigation, and at that point we have to make our judgment — the risk of pulling software from production and losing money, versus reputational or financial harm from an exploited CVE. So VEX and SBOM coexist. You think they should? They're complementary, but they're not the same thing. Okay. This question of an SBOM being trustable: again, if it comes from the vendor, if it's generated at build time, that's what we want. Secondarily, it can be generated at package time, when that artifact is produced — and when I say artifact, again, I mean the whole spectrum, from container images to open source libraries to individual packages with no dependencies to applications to the usage of those in cloud systems; really this is a hugely generalized concept that sometimes becomes a little murky. But the SBOM can either be generated at build time, at package time, or reverse engineered, as in container image scanning. SBOM generation and container image scanning are essentially identical processes: they go through and try to identify what software is installed — in one case to correlate, in the other just to provide a manifest. They should be definitive. If things are not honestly generated, container image scanning just doesn't work in the first place. So it is not difficult to bypass these tools: any sufficiently competent developer trying to avoid a security control will use these kinds of manipulations to ship stuff to production — because, ultimately, they have tickets to complete, et cetera. A VEX exists fleetingly at the moment of creation. This really is the point: VEXes are temporally bound.
They're only relevant for the period of time until a new CVE affects the particular package. SBOMs should be statically defined and exist in perpetuity against the content-addressable hash of whatever the package is. Again, there are some obfuscation issues if somebody chooses to obfuscate or repackage a binary or a container image, but nevertheless we're working on some degree of good faith. A useful analogy is to think of the VEX as the security advisory: dynamic, going out more frequently, automatically notifying the install base and the consumers. And again, shipping a VEX bundled with an SBOM doesn't make very much sense and generates a false sense of security. Finally, of course, signatures for everything. A signature just means that, at the point in time that the thing was signed, the person in control of the key trusted it or liked it enough to make that signature. It doesn't really mean too much more — but the chain of trust is what the internet is based upon, and the human key-exchange ceremonies are still my favorite. So, as we draw to a close, what practical guidance can be offered? First of all, there are still a number of issues — rehashing these slightly. What is the context for an SBOM? Is it that single application? Is it the library? Is it the set of transitive dependencies that are not actually captured when we go some depth down the dependency graph, if you like? There are multiple different ways to do it; an SBOM is a broad catch-all term for everything. There is this question of misaligned incentives from CVE producers — from producers of software, let's say. False positives cause problems for all those security teams duplicating their efforts around the world. False negatives: bang, immediate annihilation of trust in the process. So ensuring that we have automated testing, and some degree of automated exploitability assessment for the VEX documents in the first place, is the first step to generating them. That is a very high bar.
I'm conscious that that is not a suggestion for most organizations. And then finally, this question of when we derive an SBOM from the artifact that we're attempting to interrogate. Again, it's all questions of trust: do we really trust people to ship or disclose things honestly? So, almost there. The brave new world: API-based VEX solutions — a content-addressable API that says, is this thing vulnerable? Why? What do I do? — and throws back SBOMs and VEX documents. We have some internal prototypes of these things, and there are things bubbling away in other places as well. That's what the future should look like. The problem is not the API; the problem is generating the content to sit in those VEX documents. It is a significant vulnerability assessment issue. Who should do that? Well, vendors. The framing from the US government, pushing back on suppliers to not have any vulnerabilities, is one question; but vendors should start to adhere to good software delivery practices — including for the build infrastructure, OpenSSF Scorecard, et cetera — and do the work here. We also need a degree of independent analysis. Of course, we see plenty of software that is independently vulnerability assessed; we need some mechanism, which is not necessarily just a bug bounty program, by which those trusted assessors can generate VEXes and push them back — we've seen, again, that the disclosure of CVEs has misaligned incentives and is sensitive. And then finally, who actually trusts these documents? So what are we looking at? Metadata, distribution, and storage. Where is the SBOM? How do you know it came from the person that you think it came from? And where is the VEX? These are easy questions to answer: centralized and signed, ultimately. The security of these documents — again, do we really trust them? And then finally, what we're looking for here is tighter integration between CVEs, SBOMs, and VEX — the triangle of insecurity, perhaps.
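The content-addressable API idea can be sketched as follows. The endpoint URL and response shape here are entirely hypothetical — no such public service exists today — but the pattern is: submit an artifact digest, get back any VEX statements attached to that hash:

```python
# Sketch of a hypothetical content-addressable VEX lookup. The base
# URL, route, and response format are invented for illustration.
import json
import urllib.request

def vex_url(digest: str, base_url: str = "https://vex.example.com/api/v1") -> str:
    """Build the lookup URL for an artifact's SHA-256 digest."""
    return f"{base_url}/artifacts/sha256:{digest}/vex"

def fetch_vex(digest: str) -> dict:
    """Query the hypothetical VEX service by content-addressable hash."""
    with urllib.request.urlopen(vex_url(digest)) as resp:
        return json.load(resp)

# In a real pipeline this would run on a schedule — daily, hourly —
# diffing each response against the last known state, or it would be
# replaced by a streaming/notification API entirely.
```

The key property is that the lookup key is the image digest itself, so the same query works for anyone holding the same artifact.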
And there's some really interesting work going on in TAG Security with using graph analysis, asking questions of graphs about contribution and beneficial ownership for open source projects in the CNCF landscape. Of course, that same kind of graph querying would map perfectly, by extension, if we shipped VEXes with those things as well. And with that, thank you very much for your attention. That's what happens when a supply chain goes wrong. We'll open it up for questions. What questions do you have, if any? Nisha, yes. I'll pass you the mic.

Okay, with the mask. Hi. You had a slide that said VEX should be API-retrievable. Can you explain what that means?

Yes. One of the caveats of container reproducibility, with the tar algorithm and cross-platform differences, is that containers are not fully reproducible, let's say. But the idea is that you build a container image, it has a content-addressable hash, and at that point we can start to attach metadata to that hash. It's a SHA-256. By extension, if we can then submit a hash for a piece of software to an API that says "we've collected all the VEX information", then we're in a position where we can start polling for that information on some frequency, daily, hourly, and then pushing that, as a kind of Dependabot-style update, into our package manifests. At some point that would become a streaming API and it would actually notify consumers itself. Does that make sense?

Hi. So I have a question about adoption, about real life. How do you see this unfold in reality? Like, who does it today? How do they do it? Who consumes it? How? Like, what are the bits and bytes of reality here?

Yeah. Nobody. They don't. This is a standard generated to address a problem, but the onus is on the producers of the software to determine the exploitability and ship it. To some extent, this is a call for participation. I mean, we'd probably be better at doing that at DEF CON than at this summit.
But ultimately, if vendors can be convinced to ship this information, it will save countless person-hours. As I say, there are no major projects shipping VEXes, and the only standard that really integrates them properly is CycloneDX, with VEX itself shipped separately, plus the OWASP dependency checker. So this is projection and manifestation. What we've started to see happen is consumers asking their suppliers for SBOMs. They're not getting them yet, but they're demanding them, and that creates pressure in those organizations. At the same time, there are a lot of conflated expectations about what an SBOM will deliver to you. People think an SBOM will give them what a VEX gives them: an understanding of which components are affected. An SBOM will not give you that. So we're educating and elucidating the difference between the two, and saying: okay, great, you're working on an SBOM; can you also start working on VEXes? We'd like to see those. Other questions?

I'll ask something more at the micro level. To the extent that I know the VEX format, it's a bit binary: your code does or does not connect with the vulnerability that triggers it, et cetera. But often risk is a matter of assessment. For example, if you take Red Hat's assessments, they often diverge from the CVSS score because they know their environment. So instead of a nine, which from my point of view makes it uninteresting, can VEX consider that, or is it only binary, exploitable or not?

Great question. So you're describing the process of recategorization, re-scoring the software. The format is extensible. Andy, if we go back to the anatomy, the diagram of the minimum elements: the vulnerability section is fairly extensible, and you can receive a VEX, customize that VEX, and expand it with a field for the recategorization. Does that make sense? Right, right, right. Correct.

What other questions? We're out of time. Thank you. Well, if you'd like to chat more, we're going to be around.
Thank you for being here. There are plenty of free books, should you so desire. Thank you.
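The digest-keyed, API-retrievable lookup described in the Q&A could be sketched as follows. The endpoint, URL layout, and response shape are hypothetical, standing in for the kind of service the speakers describe prototyping, not an existing API; the network call is replaced by a canned response so the sketch is self-contained.

```python
import hashlib

# Hypothetical VEX service base URL (an assumption, not a real endpoint).
VEX_API = "https://vex.example.com/v1/artifacts"

def artifact_digest(data: bytes) -> str:
    """Content-addressable identity: SHA-256 over the artifact bytes."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

def vex_lookup_url(digest: str) -> str:
    """URL a poller would hit on its daily/hourly schedule."""
    return f"{VEX_API}/{digest}/vex"

def exploitable_cves(vex_response: dict) -> list[str]:
    """Keep only the CVEs the VEX document marks as 'affected'."""
    return [
        s["vulnerability"]["name"]
        for s in vex_response.get("statements", [])
        if s.get("status") == "affected"
    ]

digest = artifact_digest(b"example image layer bytes")
print(vex_lookup_url(digest))

# Canned stand-in for the real API response: two findings, one triaged away.
response = {
    "statements": [
        {"vulnerability": {"name": "CVE-2024-11111"}, "status": "affected"},
        {"vulnerability": {"name": "CVE-2024-22222"}, "status": "not_affected"},
    ]
}
print(exploitable_cves(response))  # -> ['CVE-2024-11111']
```

The point of the sketch is the filtering step: a scanner that consumed such a feed would surface only the `affected` findings, which is exactly the toil reduction the talk argues for.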