Hi everyone. I'll at least do a quick sound check. And my clock just struck 4:55, so we're going to roll on. OK, great. Welcome, everyone. My name is Art Manion. This talk is about the future of CVE. I'm going to do a little bit of audience questioning and polling to start off with, just to help sort of tune things. And feel free, please, to ask questions during. If you want to wait till the end, that's fine. I could easily overrun my time. I'll try very carefully not to. But do please interrupt me if you'd like. If you're waving a hand around, I'll try to stop. Try to keep it on topic with whatever I was just talking about, but I'll be happy to have a bit more of a discussion. Are folks in the room familiar with CVE at all? Raise hands. OK, good. We can skip some stuff. I was asked to talk about the future of CVE. That's where we're really going to focus. I didn't put the quote in, but: those who forget the past are condemned to repeat it. That's very true here. Even in doing my minimum research for this talk, I came across a great source of past knowledge from 1999 that completely predicted all the problems we're having with CVE today. So let me jump into it. I will ask you to quickly read this text; at the bottom left is the mission statement of CVE. Identify, define, catalog publicly disclosed vulnerabilities, right? CVE originally stood for Common Vulnerability Enumeration. I believe it was also then Common Vulnerabilities and Exposures. It is now pretty much just CVE. That trademark symbol is meaningful. I'm going to wear a number of hats. Get it? During this conversation, I'm employed at Carnegie Mellon University. I work at an FFRDC, a Department of Defense funded FFRDC, with the university. My sort of day job is coordinated disclosure. Yes, sir? Sure. Federally funded research and development center. This is a thing in the United States. There's federal law behind it; it's literally a triangular intersection of academia, industry, and government.
If you can contract it at scale and you know what you want, in the government you go do a bid. If you have a problem you're not sure what to do with, you can talk to an FFRDC; that's a very rough reason for the thing. So, coordinated disclosure, right? Somebody finds something, a security bug or vulnerability. They try to tell the vendor, supplier, or maintainer. Sometimes they come through us as a third party to help them coordinate that conversation, particularly when there are problems like disagreement between the vendor and the reporter or the researcher, or many, many vendors involved. That's when we get involved these days, more often than not. So that's where my sort of background comes from. CVE is very important as a piece of that disclosure. When the vulnerability leaves its embargo and becomes public, advisories go out, patches and updates go out, commits are committed in public source trees, researchers post their work. That boundary from hopefully-not-public to public, the CVE marks that boundary. I am a board member of the CVE board. I earned this hoodie by being around during the 20th or 25th anniversary of CVE. I guess it was the 20th. My organization is also a CVE Numbering Authority, which is to say we're allowed to grab a number and write it up and assign it to a vulnerability. And then we own the description and the content that's supposed to get published with that CVE. I also work very closely with CISA, both as a sort of professional partnership and because I'm funded by them. So in the United States, especially in the federal civilian space, CISA, part of DHS, has a public safety, critical infrastructure protection mandate and cybersecurity for those things. So we work really closely with them. So, a number of hats here; I'm going to mostly try to wear a CVE-flavored hat. I am not a pure CVE apologist, and I really do want to hear if people have complaints, concerns, questions; I have a number of my own.
It is my responsibility to hear those and bring them back to CVE, and not to just tell you, that's not a problem, don't worry about it. So please be as blunt as you'd like. I've been in this for 20 years; I probably won't be offended. We would rather hear about problems than not hear about problems. So this font of knowledge from 1999 was the first, the seminal sort of paper about CVE: Towards a Common Enumeration of Vulnerabilities. And in just a horrible juxtaposition of pictures, we have, that is the spirit of CVE past, it's from Dickens, and that's Scrooge extinguishing the spirit of Christmas past, who came to haunt him. And we have Steve. This paper really does have it all. It worries about the definition of the word vulnerability, the abstraction of where the vulnerability is and sort of a logical hierarchy, how precise the definition of a vulnerability can be. I mentioned this boundary between not public and public. CVE at the time was looked at as a bridge. CVE has never tried to intentionally be the only one single ID space to name them all. It definitely has mind share and market share from being around for a long time. But it was originally planned to be a link, so you could have many, many IDs. If they all happen to reference the correct and same CVE ID, you can connect those things together. That was sort of the initial plan, yes. Yes, we want to connect these independent islands of information in an accurate way. And this is important for later. This early paper had a very minimal idea for CVE: just enough information, and it was a modular piece of the problem, to define the thing with the ID on it and link to many other sources for sort of more information. Very briefly, this is the spirit of CVE present. So there is an ID reservation system, built by the CVE program, largely MITRE staff but some other folks as well. So right now, as a CNA, I want to go get an ID to assign to a vulnerability.
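For those who haven't seen that flow, a reservation call looks roughly like the sketch below. This is hedged: the endpoint path, query parameters, and CVE-API-* header names follow the publicly documented CVE Services interface, but treat the exact URL, the credential values, and the response shape as assumptions to check against the current service docs.

```python
# Sketch of reserving CVE IDs from the CVE Services API (CNA credentials
# required). Endpoint and header names are taken from the public docs;
# verify them against the live service before relying on this.
import json
import urllib.parse
import urllib.request

CVE_SERVICES = "https://cveawg.mitre.org/api"

def build_reservation(org, user, api_key, year, amount=1):
    """Assemble the POST request used to reserve CVE IDs (pure, testable)."""
    query = urllib.parse.urlencode(
        {"amount": amount, "cve_year": year, "short_name": org})
    return urllib.request.Request(
        f"{CVE_SERVICES}/cve-id?{query}",
        method="POST",
        headers={"CVE-API-ORG": org, "CVE-API-USER": user,
                 "CVE-API-KEY": api_key},
    )

def reserve_ids(org, user, api_key, year, amount=1):
    """Perform the reservation and return the new IDs, e.g. CVE-2022-NNNNN."""
    req = build_reservation(org, user, api_key, year, amount)
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    # The documented response carries the reserved IDs under "cve_ids".
    return [entry["cve_id"] for entry in body["cve_ids"]]
```

The in-house tools mentioned next are essentially wrappers around this one call, plus account handling and the rate limits the program enforces.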
I have an in-house tool that someone built for me that goes and talks to their API. We have accounts on their system and I can pull IDs. I think there's some rate limiting, or some maximum at which they think I might be abusing the system and they'll shut me off. But that is in place today. That is part of a module of the CVE Services suite of things. And we submit CVEs via GitHub, which is actually, it's called a pilot. It's been like a four-year running pilot test. It's worked very well, actually. I'm going to withhold judgment. I may have had the opinion that leaving the submission process in GitHub was a fine thing to do. A lot of transparency. Everyone knows how it works. You don't have to build your own software. I'll call that a personal opinion and certainly not consensus of the CVE program. There's a JSON format for CVE records. It's the version four format. That's going to be important in a minute. Managing all these authorities who can assign and issue CVE IDs is quasi-manual at the moment. But again, this is all a setup; there's good future stuff coming very soon. There are clients to pull IDs and, soon, to submit IDs. The three main clients I'm aware of are listed here: Vulnogram, cvelib, and cve-client. All of these developed not by MITRE proper or the CVE program proper, but by concerned and highly involved participants in the community. Just to remind folks, I mean, counting vulnerabilities is a horrible, hard problem in the first place. I'm not trying to be precise here. The red line, I think I updated this for '21, yep. The red line is the count of CVEs. The National Vulnerability Database in the U.S. is keyed directly off of CVEs, so those are the same counts. There are a couple of other databases here that have a higher number claimed. And I'm not going to get into it; if they don't count the same way, you already have a problem comparing. Just to say, CVE has a mission: all publicly disclosed vulnerabilities. CVE, we are not meeting that.
We are maybe several thousand short by this measure, at least. Again, questions about how you define and how you count, but I'll be the first to say we've not met that goal just by trying to. That blue line, very briefly, is my organization. In 2008, we stopped trying to count all the publicly disclosed vulnerabilities because others were doing a much better job than us. And then in 2014, a colleague of mine generated some Android mobile app SSL behavior testing. He automated the whole thing, downloaded a million apps and tested them, and 23,000 horribly ignored certificate warnings, basically, and would connect to anything. MITRE called us, after our fifth request for 1,000 CVEs, and we had a discussion, and the answer was: no more CVEs for you. But just to prove that a little bit of automation can blow anyone's manual counts out of the water. I'm sure there are fuzz testing infrastructures that can generate a lot of output as well. Is it a crashable bug? Is it a vulnerability? Hard to tell at scale, but just as an example, we're still kind of human-scale dealing with this stuff, and that's not a great way to do it. Progress: there are, so CNAs, right, a CNA is authorized by the CVE program. You have been judged responsible enough. You've met some criteria, in an interview, that you will assign CVEs in a meaningful, good, accurate way, follow the rules, be responsible for your assignments, and not let them sit out there lingering. There's a reserved-but-public thing, where I reserve 20 CVEs and I assign them to vulnerabilities and someone goes and publishes those, but they're sort of empty in the CVE corpus. There's no information there, there's no description. Bad behavior; that'll get you kicked out of your CNA status. The point here is, I already sort of said, maybe CVE isn't quite reaching all publicly disclosed vulnerabilities.
There's been a concerted effort, and the chart doesn't lie, right, since around 2016 to really distribute and federate the work, so that others who care perhaps even more than the central authority about the CVE content and the CVEs are out there doing that. This is a lot of, I think we call them vendor CNAs or maintainer CNAs, right. If I'm being responsible for my software, I'm owning my bugs and my security bugs, and I have decided to own that: I will issue CVEs for my security bugs. That is the desired path. So, there is growth. I don't know if it should or shouldn't be higher. You can go search GitHub for things and find numbers orders of magnitude higher than this for numbers of projects and vendors and suppliers of software. These are the numbers currently, though, and that's what they are. I think it's 200, so this was 2021. The number is up to like 220-ish CNAs as of this date. Yeah, it's a US thing, and a lot of Western Europe, and sort of Western attention on these things. Nonetheless, it's not meant to be a US-specific thing. There is a strong desire and attempt to make this global for anyone who wants to participate. This is how it's organized, and I should say a bit more about MITRE here. I don't work for MITRE and I won't sort of speak for them, but the URL for many years was cve.mitre.org, right. MITRE is an FFRDC, similar to mine. Sorry, they are a larger one, and they are contracted with to operate sort of the CVE program. The whole program, MITRE included, has been working hard in the past years to design an organizational structure where it doesn't depend on MITRE or any single entity to run the thing. There is the need for a secretariat, which is currently MITRE, and at least one top-level root, of which MITRE is the predominant one here, and that's simply historically the way things have been, but this is designed such that a lot of the work happens at this bottom tier with all the CNAs. There's a hierarchy to the CNAs.
So, a country, Japan for instance: JPCERT/CC is a root CNA, and they have a defined scope, which I'm guessing is probably things ending in .jp. INCIBE in Spain. Google, I looked this up. There's a Google Android CNA, and I think the Google CNA here, I think it has a broader scope than just Google stuff, and I don't recall without looking up why they are a root CNA. Oh, sorry, I do remember. It's any of their Alphabet sort of owned organizations. Google has a CNA that will cover those. So this organization is designed such that one could replace the secretariat with a different organization, and you could have more than one top-level root, and it would survive a post-MITRE world. So, yes, MITRE is very much tied to the CVE program, but the official term is the CVE program. The fact that MITRE has a prominent role is true, but we're trying to sort of build around that dependency. So, to the future. First, some bad news, right? For Scrooge, his future is not looking good, because he's been a jerk for a long time, and I think the ghost is pointing to his grave here, giving him a hint. So, Linux kernel devs, at least one of them, don't really care about CVE, and Greg talked about this a lot a couple, two, three years ago. And I actually spoke to him about it before he sort of talked about it more publicly, and I don't really have an argument with him. First of all, I'm not a person with commit access to the Linux kernel tree, so I have no place to argue. Furthermore, though, if you think about the abstraction, and he has first-hand accounts of this, right? Hey, there's something happening in C, and C works a certain way, and there's a bug in it and it does something, right?
At the level that they're all working in the kernel, it may or may not be clear at that layer of abstraction that it is or isn't a security impact, and he has stories; I think his story is he created a bug, someone told him about it and he fixed it three years later, and then after that realized it was a horrible vulnerability the entire time. There was no CVE; no one noticed it had a security impact at all. So there's a piece, at least the way I look at this, of: sure. With that perspective, bugs are bugs. That's how computers and C behave. Fine, no argument. The rest of us kind of want to know if we have local privilege escalation or networking stack vulnerabilities in the Linux kernel that we all really use a lot of, or depend on even if we don't use it ourselves. There's also a great write-up from Kurt Seifried and Josh Bressers (Josh, the new Josh, sorry, yes, the other Josh, other Josh B) about the Global Security Database. And again, despite being around longer and having some de facto status, CVE does not claim to say we should stop using GSD identifiers, stop using something else, only use ours; that's not the approach. And that would be a foolish approach, and it wouldn't work anyway. But there's a nice blog post, from I think mostly Kurt, but maybe it was jointly written, about why, telling some of the history of going to the GSD. And it's a good read, and I have a tab here I'll flip to if things go smoothly later. Kurt and Josh really put some pressure on CVE, in a good way, with the Distributed Weakness Filing, DWF; ha ha, right, it's one letter off of CVE. They could have gone with BUD, I realized later, but I don't have an acronym for it. Kurt was, I believe, and very rightfully so, not happy with the speed of CVE assignment and getting things published, and he did a much quicker version of it; that was DWF. There was a period of time where DWF was sort of made part of CVE, and that didn't kind of pan out.
DWF tried a second time, working outside of CVE. This led to the Global Security Database work. Not gonna get into the history too much, but these are all very fair criticisms of CVE. And in fact, despite some probably strong feelings and discussions between Kurt, and maybe Josh, and the CVE folks, their work was very, very beneficial and probably kicked off CVE's sort of change of attention to try to modernize and be faster and lower overhead. So, a better future, a brighter future, and this is a big question we get: when are the new services coming out? I am told there will be an announcement on June 30th. The announcement is not the actual delivery of the new services, but the next update is coming out then. This URL at the bottom is where these announcements of progress are made; if you're really waiting to hear what's happening, check that page every so often. I think I'm safe in saying June 30th will be the next one. Someone told me that too, I believe. So, what's coming up is replacing the GitHub submission, and the ad-hoc-ish, slightly manual user management, with straight-up proper API services. So this ID reservation, where I can reserve a CVE ID that I need, is already in place. That's how things work today, but coming soon, and these are in sort of beta and test right now, so this is, I really hope, pretty soon this time: being able to submit and upload my content, populate my CVEs through the API, and, once I'm approved, I can sort of self-manage my users and my organization. A JSON version 5 format is coming out. There's a big up-conversion process going on, but people smarter than me and with better Python skills are getting that worked out. In this CVE JSON 5 format, probably one of the biggest new features is containers. So, sort of by default there's just one container. A container is the source, right? Who provided the information? And a future CVE record, a JSON 5 record, can have more than one container.
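To make the container idea concrete, here's a trimmed sketch of a multi-container record. The field names (cveMetadata, containers.cna, containers.adp, providerMetadata) follow the published CVE JSON 5.0 schema, but this is an illustrative fragment, not a complete valid record, and the provider names and descriptions are made up.

```python
# A trimmed CVE JSON 5.0 record: the assigning CNA owns one container, and
# additional data publishers (ADP) each get their own, so a consumer can
# always tell who said what. Sketch only; field names per the 5.0 schema.
record = {
    "dataType": "CVE_RECORD",
    "dataVersion": "5.0",
    "cveMetadata": {"cveId": "CVE-2022-0000", "state": "PUBLISHED"},
    "containers": {
        # Authoritative content from the assigning CNA.
        "cna": {
            "providerMetadata": {"shortName": "certcc"},
            "descriptions": [
                {"lang": "en",
                 "value": "Buffer overflow in example component."}],
        },
        # Zero or more additional containers from other data providers.
        "adp": [
            {"providerMetadata": {"shortName": "some-other-provider"},
             "descriptions": [
                 {"lang": "en",
                  "value": "Additional analysis of the same bug."}]},
        ],
    },
}

def providers(rec):
    """List every party speaking in a record, CNA first."""
    names = [rec["containers"]["cna"]["providerMetadata"]["shortName"]]
    names += [c["providerMetadata"]["shortName"]
              for c in rec["containers"].get("adp", [])]
    return names
```

The point of the structure is exactly the attribution question that comes next: each container carries its own providerMetadata, so statements never get blended anonymously.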
So, the CERT/CC can publish a CVE ID and we're responsible for the first container, and someone else could in theory come along and provide other content, and it would be clear to the consumer that CERT/CC said these things and a different data provider, in a different container, said these other things. And that's built into the JSON format, so you can tell who said what and who to go ask questions of, or yell at. There's also an attempt, and actually the OSV folks filed a nice issue, a couple of issues, to help with this: a little bit more of a machine-based way to deal with versions. Version strings, version systems are all over the place, and it's a giant mess and there's just almost no hope. Nonetheless, the OSV folks had a really nice contribution. They're probably using it in the osv.dev format, where basically you can parameterize and say, I'm using this type of version syntax, and then your version operators take on meaning, because you've declared what greater-than means, for instance, or whatever your range definitions are. So, a little bit more machine readable. I'm not sure what "comfortable" means; "compatible," probably. Sort of package and product version information is going into the JSON 5. There's a bunch more; not gonna get into it too much for now. Very brief example of some, so I'm gonna go quickly through the remaining slides. This is now a set of issues I'm aware of and I'm kind of tracking, not in any particular order. I'm gonna hit them pretty quickly and then ask if you wanna come back to them so we can talk about them. And very importantly, if you don't see something here that's on your mind about CVE, I really wanna hear that; I will take things back to the board. This is a vendor called Moxa. They make ICS/OT gear of some type, and just as an example of a vulnerability ID mess, this first screenshot has an ICS-CERT ID, a bunch of CVEs, and a bunch of CNV, sorry, CNVD IDs. The second one only has some CNVD IDs.
So my question is, why on earth is Moxa not putting out CVEs for that second one? In 2019? I don't know, but that's odd and weird, and there's a gap; there are two things that CVE has missed in some way. CNVD is a Chinese vulnerability database. Not to be confused with CNNVD, also a Chinese vulnerability database. I believe CNNVD is officially government run, and it's not clear what CNVD is run by. It might be a slightly more commercial-ish or academic thing; I haven't sort of figured it out yet. So coverage is still an issue; we've covered that, right? Mission focus: this one's actually important to me. I have a less-is-more mindset, and I'm very clear I am not in the majority. This is not a consensus opinion of CVE. Part of JSON 5 has those containers; it also has a whole lot of other stuff you might see in a full-featured vulnerability advisory: impact, description, credit, CVSS scores, other scores, lots of other stuff you might see in a full-featured vulnerability record. I don't know if that's a good idea, and I'm of the opinion that it is not, and my argument is basically: if we could really focus on the narrow mission of identifying the stuff, and only that, we haven't fixed that yet; just do that. There are plenty of other people out there who are going to add on content and link via CVE, and the vendors are gonna have more detailed advisories, third parties are gonna have more detailed advisories, researchers will have more detailed information. When you reproduce the information in another record, you now have debt. There was a great example of this with the Log4j disclosures. There was some initial heat-of-the-moment mistake, I think from Apache, about a certain Java version protecting you, and Apache realized this and corrected the CVE entry, and then that got fixed, and there was an accidental overwrite and that got taken back out again, and then it got fixed again.
So there was some thrashing with the CVE content, and people look at CVE and they see: if you're running this Java, you're safe. That was a pretty misleading bit of information for three or four days. Had that not been there in the first place, we didn't need it there, right? That belongs in the Apache advisory, and I can say, CVE, oh, it's an Apache thing, I'm gonna go look at that vendor or that supplier or that maintainer's advisory and read that authoritative information. If you're maintaining multiple copies of your authoritative information, you have some debt. And in fact this quote's from the paper, 1999. And I'm gonna be using this with the CVE board to see if I can get any traction on this. Less is more: minimum information to catalog. Transparency. CVE is by no means trying to not be transparent. I think there is some accidental lack of transparency these days. This is the second part of my text here. The board email list is fully public and archived publicly. As I was searching for research for this talk, it dawned on me that a lot of the discussion has moved off into the CVE working groups, and, again, they're not intentionally non-transparent, but there's a lot of variety in how they sort of log their stuff or keep track of it. And most things you can join and get access to the archives and all the information. Some of them are just public, but there's a lot of variety, and so, I think by accident and by growth, the implementation of the working groups has lessened transparency a little bit, unintentionally. This GitHub pilot, the great thing: the Log4j issue, right? A colleague and I were working on Log4j, you know, whatever the day that was or weekend that was that it came out. We noticed this in git blame. It was straightforward to figure out what had happened. There had been an inadvertent overwrite from the CVE program that put the wrong information back in.
We could email someone with, you know, this commit introduced the fix, this commit accidentally overwrote it, please fix this. Straightforward. I am actually not sure, in the future Services 2.1 world, how that will be possible, how you'd notice that sort of thing. So that may be a material transparency issue. The other stuff is accidental, but we are trying to be transparent. This is a massive piece of the puzzle. You know, if I have a CVE that describes a buffer overflow, I really have a CWE, right? I have this class of problem, but I don't have an instance of it, right? So the CVE, the buffer overflow, has to affect something. So I have to have a CVE entry that says what component, what piece of software, is vulnerable. I have to have something there. But can I have a comprehensive, complete list that everyone can go out there and identify? And that's a no; possibly very, very difficult. All you have to do is identify all the software components on the planet and all the vulnerabilities, and then intersect them. Simple, simple three steps and we're done. So all we have to do is that. Yeah, pardon me. Yeah, I think we were gonna fix this a couple of years back, actually. Identify the components: is SBOM gonna help us with that? Sincerely, it is, please, yeah. I believe so. I'm passingly familiar only, but, sorry, for the sake of those online, GitBOM was mentioned. Identify all the components: I'm a huge SBOM fan. I could be wearing, I should be wearing, my SBOM t-shirt under my CVE hoodie. But all of your, whatever you wanna call it, your dependency tracking, your inventory, your asset management, your upstream dependency awareness: do that. Should be doing it anyway. A lot of people are doing it anyway. If you wanna call it SBOM, great. If you don't, also great. Know what you're running, know your upstream dependencies, and we have to do better than verticals knowing those things. It has to be a horizontal, cross-ecosystem sort of solution.
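As an aside, those "three simple steps" can be sketched in code, which shows where the real work hides: populating the two input inventories. All data here is hypothetical example data (the two CVE IDs are the real Log4Shell and Heartbleed ones), and the naive string comparison used for versions is deliberately oversimplified; it is exactly the version-semantics mess mentioned earlier.

```python
# The talk's "identify all components, identify all vulnerabilities,
# intersect them" as code. The intersection is trivial; the inputs are not.
# Components you run (name, version). Hypothetical inventory.
inventory = {("log4j-core", "2.14.1"), ("openssl", "1.0.1f"),
             ("zlib", "1.2.13")}

# Vulnerability records, each claiming a component and a version predicate.
# NOTE: plain string comparison on versions is wrong in general
# ("1.0.10" sorts before "1.0.9"); it only works for these toy values.
vuln_records = [
    {"id": "CVE-2021-44228", "component": "log4j-core",
     "affected": lambda v: v < "2.15.0"},
    {"id": "CVE-2014-0160", "component": "openssl",
     "affected": lambda v: "1.0.1" <= v < "1.0.1g"},
]

def exposures(inventory, vuln_records):
    """Pair each installed component with vulnerability records claiming it."""
    hits = []
    for name, version in sorted(inventory):
        for rec in vuln_records:
            if rec["component"] == name and rec["affected"](version):
                hits.append((name, version, rec["id"]))
    return hits
```

The sketch quietly assumes both sides use the same component names and comparable versions, which is precisely the horizontal, cross-ecosystem problem the talk is describing.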
That's the bigger sort of SBOM story. GitBOM does pieces of this, is about as much as I can say, and I've heard good things, so I'll say thumbs up. Getting a little bit back to the minimalist idea: I don't know, and I'll speak for CVE at least, that it makes sense at scale to have a huge list of affected things jammed into the CVE record. Personally, what I sort of envision is vulnerability records and software components and a third thing that joins them together in combination. So I sort of have a graph, RDF thing in my head. That level of engineering is not gonna fix this for us; we need to do more than that. We mentioned sort of, more, I guess it was computable, not compatible, version semantics; that can help. Try to use SemVer or something else, great, but not everyone's going to. VEX. VEX is coming from the NTIA SBOM work into the CISA SBOM work. Vulnerability Exploitability Expression; I may have the acronym wrong, it may have shifted slightly, but it is VEX. And this is meant to be a structured way to convey status, right? Component is not vulnerable to vulnerability. And VEX sort of focuses heavily on reasons you are not affected. The theory here being that, with greater SBOM and inventory and upstream dependency knowledge, there'll be a lot more stuff that looks like, hey, you've got old known vulnerabilities in all your upstream, and a maintainer could quickly say, no, I don't, because, at scale. That's sort of the rough idea behind VEX. And Allan Friedman is not here to fight with me, so. Yeah, I'll skip on here. Hey, when you have a dispute or an update to a CVE entry, the current model is that the CNA who issued the ID is basically where you have to go. And that does give that CNA some implied or inadvertent power to maybe respond quickly or maybe not respond quickly, or maybe discuss it with you, or maybe they're not gonna answer the email for a couple of days.
You follow the CNA hierarchy; you may end up, often at MITRE today, but at the top-level CNA or the next CNA up in the hierarchy, for that discussion. That's the path. Getting things changed or updated that you don't own is tricky. And I think there's too much friction, perhaps, these days; personal take on it. It's a known issue, but what to do about it has not really been solved yet. Man, there's this thing called the cloud, and your computer is sort of somewhere else and you do something with the service; I don't have a computer or a product that I got with a CD-ROM and a box that I run. And thankfully, CVE has partly accommodated the fact that cloud services exist. This is a snippet from the rules. Literally, it's the book of the CVE rules. So if I am the cloud service provider, I am allowed to issue a CVE for my stuff. If I am the CERT/CC and not the particular cloud service provider, I am not allowed to issue a CVE for that cloud service provider's stuff. To me, that's a problem with the rules. Hasn't yet been sort of adjudicated. There was some concern that there'd be a proliferation of cloud service CVE IDs and no one needed to do anything about them, because the cloud service would just fix them, which is often the case and a great selling point for cloud services. But I don't know if that concern was ever sort of proven out or not. So here's where we are with that one. A couple of issues about CVE for malicious code. And this was talked about at least yesterday; I was at a talk. In these cases, you've got sort of some upstream, so this is Ruby and probably Node things, where either someone's package was compromised, or someone is doing some name squatting, or somebody got mad and changed what their software does to put the hearts in. That's one of these. That's this one, I think. Does that get a CVE? If I put my vulnerability nerd hat on, no. If I download malicious software and run it, there's no vulnerability involved.
I downloaded malicious software and ran it. That's a problem, but it's not a vulnerability. There's no bug. I ran malicious software. Maybe I got tricked, right? There's a social engineering or an understanding concern. There's a pretty good argument, though, to say we need to know about these things, and CVE is pretty close and it's out there, so let's use CVEs for them. And then there are, I mean, just in my slides here, there are three CVEs for this already. One of the rules is, if a CNA, for their own stuff, says they have a vulnerability and calls it that and they issue a CVE ID, then guess what? It's a CVE ID. So it's quite possible. I don't know who, you know, Node or one of the vendors here, did this, and therefore it's in the corpus and we're done. There was a debate about CVEs for vulnerabilities in malware. I am pro. I'm in favor of a CVE for your vulnerability in malware. It's a vulnerability in software. Sure, put a CVE on it. There are a whole lot of cases of this. This site, malvuln.com, lists a bunch of them. If you read closely, they're kind of borderline vulnerabilities; it's a bunch of DLL injection on Windows, if you're running local stuff. You can trick the malware into running your own DLL. So, a bit esoteric, not a major use case. There was a debate. We don't wanna help malware authors out, was sort of the answer, but it's a vulnerability; give it a CVE. I covered this a bit up front. There was some really good existential motivation from Kurt and Josh and DWF and GSD. DWF at the time really was putting pressure on the CVE program to modernize. And again, despite some of the feelings and discussions, huge win. So, other Josh is not in the room, is he? Yeah, so, yeah, Josh Bressers, Kurt Seifried, get mentioned, because it was a great, great thing. There are plenty of other databases and IDs.
VulnDB is OSVDB; they did a lot of work, and OSVDB was open source, and they tried really hard and cared about the community. A bunch of people stole their stuff and made it commercial, and they went commercial. And they were successful, so good for them. But you can't get their good stuff anymore unless you pay. VulDB.com is another new one. They have a pretty high count. I don't know their business model; there must be a subscription involved somewhere. I mentioned the two Chinese ones that I keep getting confused. OSV.dev is Google, but it might be under the, OSS, OpenSSF? Okay, so OSV.dev is another database and a format, both. There are lots and lots and lots. Josh and Josh had a talk earlier with a screenshot of a whole bunch of databases. Almost any vendor who does their own advisories consistently has their own naming system as well. But DWF really lit a fire, in a good way. Yeah, I said this sort of up front: we really, really do want the feedback. We don't care if it's complaints and criticism; we want those things. The board is the board at CVE. There are different discussions and opinions, but we would rather know than not know about problems and concerns. There's still some friction from just historically how CVE was engineered and designed. It's gotten a lot better, but things could be quicker, easier, faster, lower overhead. We talk a lot with the CNA community, of course, because that's who's out there assigning things. We talk with the research community, really with anyone; we'll take any input on this. And yeah, this issue was filed by, I think, the OSV.dev folks with the CVE JSON 5 repo, to get the version information in better shape. And there's a nice discussion, and you can go look the whole thing up. And it worked out great. It was open source collaboration. It was in GitHub. It's all public. Wins all around. Yeah, going back to 1999 again: this is a battle I keep fighting. I feel like Don Quixote running at the windmill sometimes.
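Since that version-semantics issue has come up a couple of times now, here is a rough sketch of the declared-range idea the OSV folks contributed: you state which versioning scheme a range uses (OSV calls this the range "type"), and then "introduced"/"fixed" events have a defined meaning instead of free-text "affects 2.x before 2.15." The type/events shape mirrors the OSV schema; the toy parser below only handles simple dotted-numeric versions, and the concrete range values are hypothetical.

```python
# OSV-style declared version range: the "type" announces the versioning
# scheme, so the events below it are machine-interpretable.
range_decl = {
    "type": "SEMVER",
    "events": [{"introduced": "2.0.0"}, {"fixed": "2.15.0"}],
}

def parse(v):
    """Toy parser: dotted numeric versions only (no pre-release tags)."""
    return tuple(int(part) for part in v.split("."))

def is_affected(version, decl):
    """Walk events in order: 'introduced' opens the affected range,
    'fixed' closes it."""
    v = parse(version)
    affected = False
    for event in decl["events"]:
        if "introduced" in event and v >= parse(event["introduced"]):
            affected = True
        if "fixed" in event and v >= parse(event["fixed"]):
            affected = False
    return affected
```

The win is that two tools reading the same record agree on what the range means, because the comparison semantics are declared rather than implied by prose.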
And this just popped up again: the CVE CNA Coordination Working Group had a big discussion about, well, there's a CVE for this library, but I use it and you use it. Do we use that CVE, or do we issue new ones for our implementations of that problem? Both things are sort of true philosophically, and different things happen in reality every time. Sometimes everyone points to the Heartbleed CVE, and that's fine. Sometimes people cut their own for their implementation, and if you're not careful you lose track that the subsidiary CVE is actually related to the source one, and information is lost. In 1999, the people who wrote the whole thing up predicted this. They were correct; it is unsolved today. It doesn't need a hard solution. A colleague of mine and I, but mostly Allen Householder, worked this out a while back. A very simple reference system, as long as you apply it systematically, could take care of this, right? Equivalent-to, alias-for, not-alias-for, related-to; you have child and parent, maybe. You don't need a whole lot of relationship magic to get this worked out. That really needs to be in... oh yeah, here, sorry. Here's the work on the cross-references: possibly-related, related, not-equal, equal, superset, subset, and overlap were the few we came up with.

So that's the slide deck. I got a five-minute warning a moment ago, so I'm happy to take questions. Please, yes? Yes, they're not in Sched yet, because I am late binding. They will be there. Plus, I'm not sure how fast I can flip back to it. The tiny URL at the beginning, cover your eyes, everyone, that tiny URL is the Google Slides version of it, but they will be in Sched, probably later today. Yeah, sure thing. Oh yeah, that's slide availability for those online. Any... no online questions? Nothing, okay. Anyone else? "I have a huge complaint about CVE": now's a great time to throw that out.
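To make that cross-reference vocabulary concrete: it really is just a small fixed set of relationship labels plus a lookup, as this Python sketch shows. The taxonomy names come from the talk; everything else, including the non-CVE identifiers, is made up for illustration.

```python
from enum import Enum

class XRef(Enum):
    """Cross-reference relationships between vulnerability IDs, after the
    taxonomy mentioned in the talk (a sketch, not a specification)."""
    EQUAL = "equal"                        # two IDs name the same vulnerability
    NOT_EQUAL = "not_equal"
    SUPERSET = "superset"                  # this ID covers the other and more
    SUBSET = "subset"
    OVERLAP = "overlap"                    # partially shared scope
    RELATED = "related"                    # e.g. a downstream CVE for a library bug
    POSSIBLY_RELATED = "possibly_related"

# A minimal cross-reference table: (id_a, id_b) -> relationship.
# CVE-2014-0160 is Heartbleed; the other IDs are hypothetical.
xrefs = {
    ("CVE-2014-0160", "VENDOR-2014-001"): XRef.RELATED,  # downstream advisory
    ("CVE-2014-0160", "OSV-EXAMPLE-1"): XRef.EQUAL,      # same bug, other database
}

def related_ids(vuln_id, graph):
    """Every ID linked to vuln_id by any relationship, in either direction,
    so a subsidiary CVE never loses its link back to the source one."""
    out = {}
    for (a, b), rel in graph.items():
        if a == vuln_id:
            out[b] = rel
        elif b == vuln_id:
            out[a] = rel
    return out
```

The key design point is that lookups work in both directions: starting from the vendor's own advisory ID, you can still walk back to the source CVE, which is exactly the information that gets lost today.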
You can also tell me privately if you want, or you can just put it on Twitter, but I'm happy to take stuff to the board. No huge complaints about CVE? Yeah, absolutely. Okay, fine, I'm happy to hear them. Private reporting is fine as well. Did you want to... yeah, Josh, yep.

I think so. So the question is, I'm up here claiming CVE is willing to work with other databases, and I firmly believe that; how does that happen technically? The JSON 5 format and the format of whatever other database is where things would have to line up, or not. I attended the GSD talk, so the namespace trick, I think, is going to work pretty well for GSD. I don't really dream we're ever going to get Rosetta Stone-level field compatibility across a whole lot of formats. I don't know what the right answer is, but with JSON 5, CVE has committed to a machine-readable format, even though some of the fields are still "insert your prose text here." We'll work on that as well. Was that it? Was there a second thing? No, okay. Yes, please.

Yeah, right off the bat here, let me see if the numbers show up. Risk Based Security, for 2021, is at, I don't know, 27,000, and CVE is around 20,000. So they've got 7,000 more than CVE has. Now, they published an annual report with that number in it, but their data is proprietary, so I can't inspect it carefully or anything. But I mean, I just looked at that slide about Moxa: it has a couple of CNVD vulnerabilities in it without CVEs. There are two missing ones right there. So you don't have to look around too hard to find a vulnerability that's public without a CVE on it. Every so often I'll just issue one and try to clean them up one at a time. Not scaling well, but yeah. I treat the CVE count as a low watermark: the minimum number of publicly disclosed vulnerabilities per unit of time. Yes.
Yeah, I mean, I know, because I know how they started out, that the Risk Based Security database, I believe, does a really good job trying to aggregate things, but again, their data is proprietary. They're making a business out of that collection and that analysis. CVE does not itself try to go aggregate more; it just tries to issue. It's the first data provider. That VulDB site seems to be aggregating, but again, I think they have a commercial business model in there somewhere. So there's not one sort of open data pool for all the vulnerabilities that I'm aware of. It's going to have to be a collection, a messy collection. Please, yep, GSD. Yep. No, go ahead quickly, yep.

Yeah, so that's it. For the online folks: GSD, Global Security Database, without a "V" in it, although I kept trying to write a "V" in myself, is designed so that people can submit or integrate all their database information. So it's designed to do that, yep. Okay, got it, yeah. So the comment there is that the Global Security Database is not limited to vulnerabilities; malicious code and tampered things belong in it as well.

Yes, last bit, we're about at time. Sure. Well, for GSD, I'd just talk to Josh. As for OVAL, last time I looked, we're not apples to apples anymore, so I don't know how you integrate that. Yeah, I think I'm at time, so thanks very much, everyone. You will find the slides in Sched, if you didn't get that. I'll put the tiny URL back up, so. Thank you. Here, right?