So quick disclaimer that we can breeze right through there. So all right, first off, just want to talk about the main contributors to the project. First, it's going to be myself. My name is Chris Turner. I'm the NVD Analysis Team Lead. I've been working in the infosec space for over a decade. And for the majority of the previous decade, or the current decade rather, I've been working to help with CVE analysis efforts for the National Vulnerability Database. The gentleman with the picture to my side there is Dave Waldemeyer. He's a standards and outreach lead for NIST. He's been working in the infosec space for about 25 years, where he's led the NVD for a period, developed many successful community-oriented security automation projects, and he currently serves as a member of the CVE Board. So first off, I just wanted to do a light overview of the flow of the presentation. We're going to start by discussing the problem space, move on into the goals of the Vontology, discuss a high-level view of what we do currently and what the Vontology offers, go through the main components of the data model, and show a few practical examples of the Vontology in action, using one real-world example and then a theoretical one, so we can see how it would look when all the stars align. And then after we've gone through that, we'll just do a quick wrap-up of what we're working on currently, which, no spoilers, but it'll be a nice way to end the presentation. So the first problem we want to go through here is clarity and consistency in vulnerability descriptions. And this is going to be the only interactive part of the presentation, I promise. But I do want to go ahead and ask the audience here: if I asked you to describe this color, what would you say? Teal, seafoam, green, blue. See, lots of possibilities there. And generally, because of the way I phrased the question, you're all probably right.
However, a lot of those, or rather all of those answers, were focused mostly on colloquial names. There's a whole bunch of other ways we could have represented and described that color, whether it be a hex code or an RGB model, or we could have gone into far more detailed things like its transparency, its saturation, its temperature. Was it a hot or a cold color? Some of that may not have even crossed anybody's mind here. And that's the crux of the problem we're dealing with. Depending on the background of the person answering the question, their skill sets, their perspectives, or just their opinions of the day, the way we describe something when we're given such an open-ended prompt of "describe the thing" can vary wildly from person to person. So the same thing happens in the vulnerability description space. A lack of structure for the textual data leads to large differences in accepted terminology. And it also leads to gaps in descriptive information that could have been provided and was simply omitted, whether by accident or intentionally. But mostly, it's just accidental omissions of the data. Vulnerability information, in general, is provided in a flat structure. So it covers a lot of high-level concerns, but it consolidates a lot of outcomes when we're typically describing vulnerabilities today. A quick example of that is a buffer over-read vulnerability. The way those are usually described is that you can have a read, a disclosure of information somehow, and a crash. But the reality is that that's a bit of an oversimplification. Usually, it results in a read or a crash. And those kinds of minute details are lost in the way we describe vulnerabilities today. Lastly, information providers currently lean more towards providing what I would call a derived abstraction of vulnerability data instead of simply providing the vulnerability data as it stands in a more straightforward manner.
This is not to say that systems like CVSS and CWE are bad in any way. It's simply to point out that they are not purposefully designed to describe vulnerabilities. They serve fundamentally different functions but are being used to do something they were not, at least primarily, intended to do. And then lastly here, I know there's a big wall of text there, but basically the goal is just to show that these vulnerability descriptions were taken from the CVE list, completely randomly and arbitrarily. The CVE list does have general guidelines for how to populate CVE descriptions. And amongst all of the organizations providing information to the CVE list, you can see that even though they were given generally the same guidelines, they have wildly different ways of representing each of their vulnerability descriptions. So really, pretty much no consistency whatsoever in the approach taken currently. Additional problems we have are around additional training for certain individuals. Developers and engineers aren't always in the mindset for derived data points like CVSS, nor should they really be required to be experts on how to apply those specifications. Their problem space is the code bases they operate in, how those products interact with other products they may work with, and things of that nature. And they shouldn't necessarily be expected to understand all of the nuanced rule sets amongst these derived data sets. Instead, what we should be doing is asking these individuals to simply describe things in a straightforward manner and keep their level of effort focused where it should be. Secondarily, we have the issue of data loss. Now, this happens most often when we have the telephone game in intra-organizational communications. I'm sure some of you are pretty familiar with what I mean by telephone game, but to give you a real-world example, one that's been anonymized a little bit to avoid pointing out where it happened.
We have a scenario where the engineering team identifies a vulnerability, or an issue, as a SQL injection vulnerability. They pass that information along to their development team so that they can look at it a little more closely and work to resolve it. But the development team, when they're looking at it, decides that it's more of a privilege escalation problem, and that it's super difficult to exploit because of the weird conditions in place for that vulnerability. So they've taken away the SQL injection side and said, oh, it's what we think it is now. They pass that information on to the policy team. The policy team then layers all the organizational policies on top of that and says, well, that's great, but really, I think that's probably just code execution, and we'll just leave it at that. And then they pass that information on to the coordination teams, who then have to apply all of these derived data types, and I'm going to keep picking on CVSS because it's easy, but derived data types, and they basically end up with something that says it's a hard-to-exploit code execution vulnerability. So what we wind up with, instead of something that's nice and detailed, is a scarce description, possibly with a CVSS vector string that doesn't really represent the vulnerability. Even though they had all the people available to describe it properly, we've just lost that data along the way. The last problem we want to help mitigate against is language barriers. As we start to globalize our vulnerability information sharing, this is just going to be more and more of a problem going forward, where everybody wants to provide the information and we want to share it with each other, but silly things like language barriers tend to get in the way. And they cause oversimplification, misrepresentation, and then just generic mistakes during translation, simply because the desire to get the information out there is greater than having it be 100% perfect.
So after we've gone through the problem space, looking at the goals here: the Vontology project aims to standardize the structure and the descriptive characteristics of vulnerabilities. We also want to improve the baseline level of detail that typically gets provided, with the express purpose of better informing defense while minimizing increased risk from adversaries. That's a relatively important point, strictly because adversaries usually have all this information already, or frankly they don't care. They know a vulnerability is there; they're going to go exploit it. They don't necessarily need to know all the intricate details, but defense typically does. Defenders want to know a little bit more, because they have to worry about how to prioritize one issue over another, and that does require a lot more detail. Otherwise, we end up in the space we're in now, where a common complaint is that everything is super important and you need to patch all of it, and it's really hard to cut through all of that data. Next, we want to be able to enable better automation. Currently, obviously, some automation can occur. There have been a lot of very clever and smart people working with big data and AI to try to parse through the information as it's provided today, and that has had relative success in some areas. However, it is not as good as it could be, and frankly, that's due to this fundamental, foundational information simply being insufficient, whether in the way it's provided, the inconsistency of having to do case-by-case parsing, et cetera. So the Vontology would enable better automation than we have today, and that could be leveraged for all sorts of things, whether it be generating vulnerability descriptions, or perhaps doing automated scoring using CVSS or some alternative scoring system. Really, it doesn't matter. The goal is that by having strong foundational information, we can have better automation built on top of it.
And then lastly, to make it easier to share information across language barriers. So this next slide here shows roughly what the current data model used today provides: a relatively flat structure with concepts like vector, impact, products, attacker, root cause, bug type, all kind of mismatched together into this wild, floaty thing where people can or can't put information together. It is still better than not having anything; however, it doesn't lend itself towards all of these goals we're trying to work towards with automation and whatnot. Whereas when we look at the Vontology model, you can see through the lines there that there is a general structure to the information we think should be provided. And then looking at these color codes, without going into too much detail, is generally just to show that the same kind of information is present in both approaches. It's just that in the Vontology model, we think there's a little bit more nuance that should be provided with things like impact or attacker and root cause, based off of where that information should be located. In addition to the green there, which is just displaying where we think there's additional valuable information that can or should be provided. Now, it's important to point out that on the left, it does look like a relatively simple system, which means it's easier to use. It keeps people happy: I don't have to think too much about it, compared to the Vontology representation, which definitely does, at first glance, look much more complicated. From our perspective, we really believe that the complexity exists in both of the systems displayed here. The problem is that the current system puts all of that complexity on the information consumer. And quite frankly, they are not in a position to really be able to leverage and fill the gaps that are in place with all of that complexity.
So as far as guidelines go, I'm not trying to make a generic claim that this is how every single organization does it, but strictly looking through the guidelines within the CVE list as an example, this is the general structure that's been provided. So it's more just an issue of there not having been any direct or explicit requirement to update the way the information's being provided. Either way, as we go through the presentation, and this actually is a pretty good segue to that question, it's going to be pretty obvious that a great deal of the stuff I'm going to be talking about and proposing isn't necessarily new or fancy or this wonderful new solution that's just going to solve all of our problems. A lot of it does overlap with things that people generally do today. The difference here is that we're proposing a structured framework and a relative system of expressed values that should be universal, to help drive the adoption of something more structured than what we have with the case-by-case parsing that really limits our capabilities downstream. All right, so we've started going through the system here. We're just going to go through a few of the more prevalent objects in the data model. This is really just to, one, give an idea of the kind of information we're looking for, and two, make it a little more tolerable to look at some of the examples I'm going to show later, because it is a bit to parse through visually. So to get started, we have the vulnerability object. Visually, this is the primary object within the data model. It really serves as kind of like the root. Connected to it, you can see we have, see if I can get a little laser pointer out here. There we go. So connected to it, you can see a few relationships here, like has identity. The expectation, and I'll expand on this in a bit, is that has identity is just CVE identifiers or any other type of identifier that tracks specifically to this vulnerability.
There's an optional relationship here showing known chains. This is just a display that when we have properties in a structured data model like what's provided here, we can do more clever things, like not just identify a vulnerability, but identify what vulnerabilities can maybe be used in tandem with this one to achieve different types of results or impacts or what have you. We also have additional, kind of supplemental, data. So it's always been requested, when we have vulnerabilities, that we categorize them as to what types of sectors are interested in that vulnerability. Say we have a vulnerability known to affect infusion pumps; then we can mark it as related to the healthcare sector, and things like that. Again, just to help data consumers have more of that functional information, to get them to what they want to be able to see. And then lastly over here, we have the scenario relationship, which I'm going to expand on, I believe, in two slides. So, vulnerability identifier, a fairly straightforward one here. Typically it's going to be referenced by CVE ID, but the Vontology isn't intending to limit to any one specific type of identification. You'll see this with the other schemes we have too. Really, this is just an object where typically we will see CVE identifiers, but if organizations or information providers would like to include things like their internal bug tracking IDs, maybe proprietary identification schemes they use before they get their CVE identifiers, or issue trackers, things of that nature, all of that can go here to start building all of those mappings of where information can be found about the vulnerability, or just the way it's been identified in the past. Now, sorry, I had a little issue here. So, in case I didn't mention this: we're not explicitly stating that one identification scheme is mandatory here. This is just a data model.
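To make the vulnerability object and its identity relationships a bit more concrete, here is a minimal sketch of how such a record might serialize. All field names, scheme labels, and ID values below are hypothetical illustrations of the shape described in the talk, not the actual Vontology schema.

```python
# Hypothetical sketch of a Vontology-style vulnerability object.
# Field names and values are illustrative assumptions, not the real schema.
vulnerability = {
    "vulnerability": {
        # One or more identifiers; a CVE ID is typical but not mandatory.
        "identifiers": [
            {"scheme": "CVE", "value": "CVE-2021-0000"},            # hypothetical ID
            {"scheme": "internal-bug-tracker", "value": "BUG-1234"},  # hypothetical ID
        ],
        # Optional: other vulnerabilities known to chain with this one.
        "known_chains": ["CVE-2021-0001"],
        # Optional supplemental data, e.g. sectors that care about it.
        "sectors": ["healthcare"],
        # One or more exploit scenarios (expanded in later examples).
        "scenarios": [],
    }
}

def identifier_values(vuln, scheme):
    """Return every identifier value recorded under a given scheme."""
    return [i["value"] for i in vuln["vulnerability"]["identifiers"]
            if i["scheme"] == scheme]
```

The point of the list-of-identifiers shape is exactly what the talk describes: a consumer can look up the CVE ID, an internal tracker ID, or any other scheme without the model mandating one of them.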
So while it's likely that CVE will be used primarily to fill this data point, there's no explicit requirement that you have to use one or the other. Yeah, so I don't have it represented here, just for simplicity's sake, but most of these objects are absolutely going to have their own internal tracking, things like UUIDs. That's just not represented here for simplicity of the presentation. All right, so going into the scenario object here. I know that's a lot to look at, but I wanted to at least include the visual representation, and I'll talk through the different spokes that come off of the scenario object. The most important thing is that we consider scenarios to be the central object of the Vontology model. This is what stops us from having a flat structure when we describe vulnerabilities. It allows all sorts of additional enumerations in the vulnerability description to add that additional nuanced information, such as the types of impacts that are available in different exploit scenarios, or whether there are multiple different exploit scenarios present. A perfect example of this: most vulnerabilities that rely on victim interaction are reported as having a remote attacker with victim interaction, where the attacker gets their payload put wherever it needs to be to achieve their goals. But realistically, all of those vulnerabilities also have the insider threat possibility: not a remote attacker, but an actual local attacker who simply has authentication instead of victim interaction. Having something like the scenario object here is what allows us to include that in the information provided, should the information provider be willing. Just a real quick description of the different objects we're going to be going into.
Scenarios connect all of the product, barrier, and action objects, which we're going to go through in a little more detail in the next few slides, but they also have a series of properties associated with them, such as the attack theater, so where the attacker's coming from; the exploited weakness; and the evidenced-by source for provenance, like where we got the data to provide this information, just to help prevent people from grabbing things out of thin air and claiming they did the work, or something like that. So, getting into the product object. Now, this is going to be similar to the vulnerability identifier, in that there is no explicit identification scheme we're trying to enforce with the product object. While we do have something over here on the side that says CPE applicability statement, that's there just because NIST and the NVD typically operate within the CPE specification and the CPE applicability language. However, most of this data model, and the product object in general, are put together in a way that can support any type of product identification scheme an information provider is willing to offer up, whether that be SWID or CycloneDX or purl or the CVE program's JSON 5.0 affected sub-schema. As long as there's a schema that can be referenced and a set of values, it can be supported and identified here. The reason that product is connected to the scenario object is that a very common statement from vendors is that while they may all be affected by the same vulnerability, due to the implementation details of their products, there are all sorts of different things to be accounted for between each implementation. The barriers may be different. The possible impacts of a vulnerability could be fundamentally different based off of the product that's affected. So we believe that the appropriate location for product identification is at the scenario level.
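The scheme-agnostic product identification described above could look something like the sketch below: each product entry carries a scheme label plus a value in that scheme. The key names are assumptions for illustration; the CPE and purl strings are made-up examples of their respective formats.

```python
# Hypothetical sketch of scheme-agnostic product identification attached
# to a scenario. Key names and product values are illustrative only.
scenario = {
    "products": [
        # The same hypothetical product expressed under two schemes.
        {"scheme": "cpe-2.3",
         "value": "cpe:2.3:a:example:widget:1.0:*:*:*:*:*:*:*"},
        {"scheme": "purl",
         "value": "pkg:npm/example-widget@1.0.0"},
    ],
}

def supported_schemes(sc):
    """List every identification scheme used in a scenario's products."""
    return sorted({p["scheme"] for p in sc["products"]})
```

A consumer tooled for CPE and a consumer tooled for purl can each pull the representation they understand, which is the point of not mandating one scheme.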
Next one here, connected to scenarios, is barrier. A barrier is one of a series of, we'll call them difficulties, that an adversary would need to overcome to be able to reach a successful exploit scenario. These are things like ASLR protections, sandboxing concerns, or needing social engineering of some type. Most of these are relatively straightforward, but since, like I mentioned, different product implementations can absolutely have all sorts of different barriers and mitigation techniques in place, we know that certain vendors would be more likely to provide scenario-based information if they could split it out based off of product and then based off of the types of difficulties an adversary would need to overcome. So the next object here is the action object. And I suppose the most important thing to say here is that the action object involves a lot of high-level information. So we're talking about what we consider impact methods; easy examples would just be code execution or authentication bypass, the types of data points that are typically provided by information providers but don't really give us the actual picture we're looking for. It's a very high-level concept that causes data consumers to have to make all sorts of assumptions. So we want to be able to communicate those high-level things, but having them in the action object allows us to then point to the next object down for the more granular information that's typically missing in vulnerability descriptions. Additionally, the action object contains this concept called context, which is really just a way for us to explain that these high-level impact methods, and the impacts associated with them further down, occur within a specific space.
The way implementations of products and services are going, it's just getting more and more complex, with containerization and virtualization and cloud implementations in general, and data consumers are usually left kind of guessing where all of these impacts are actually occurring. They know generally what the vulnerable product is, but they don't necessarily know where the impacts are, or if there are secondary impacts, or things like that. Having that kind of information in the action object allows us to be more specific. Just to give an example that I'll expand on in a bit, having this kind of information in the action object would allow us to say something like: a trust failure in the channel allows authentication bypass in the application. While that seems very similar to what we hear today, that information usually isn't structured in a way that lets us programmatically parse it. I mean, it's almost always just free-text strings. So the next one I'm going to go to, and this is the last object we'll be touching on, is the impact object. Impacts traditionally are understood through the lens of the CIA triad: confidentiality, integrity, availability. We do have adjacent concepts that get provided, and that's usually what we tend to focus on in this model. So instead of saying there's an availability impact, we want to specifically say there's some sort of service interrupt, or a crash, or what have you. In addition to those typical logical impacts, we also want to allow the ability to communicate physical impacts. So that would be more along the lines of, we'll just use a vulnerability in a Jeep, and there have been quite a few of those, where there could be impact to assets. So damage to the Jeep; some organizations would care a great deal to communicate that. Even more important, damage or death to humans. I'm sure everybody cares a little bit about that.
But then also other secondary concepts like resource consumption, though not in the typical sense of too much memory being used. I mean fuel, electricity, water. Things that could have financial implications that an organization would care to know about. And then one of the major benefits, we believe, of having impacts be explicitly called out after the action object is simply that having this be something that is mandatory, and also advised, means that information providers would typically not omit this kind of information, which frankly is very useful for deriving all sorts of downstream data types, whether it be CVSS or other prioritization schemes or just vendor-specific criticality systems. Ah, and I almost forgot, the last thing here that's worth noting is that there are two relationships here: results in and does not result in. What that really allows is that organizations who have the information will be able to positively identify impacts, but also be able to state explicitly that a certain impact is not possible. This would help mitigate against all the concerns of data consumers just assuming the worst-case scenario. All right, so that kind of runs us through the general model. What we're going through next is a few examples of how this can be applied practically. For those of you who have access to the slides, which I believe are on the website for the summit, each one of these links will take you to one of the things I'm going to go through here. This will take you to the public GitHub repository where all of this is hosted. So if you want to look at it later, you'll be able to. Now, I have made this a bit easier to read by not including all of the data types, such as the UUIDs. But even so, I know it's a bit of an eye strain, so I'll be going through it here. What I've done is I've picked a random vulnerability from the CVE list. There was nothing explicitly special about this one; I wasn't trying to send any messages or anything.
This vulnerability really just had light information, but kind of enough. And the information that wasn't explicitly available was, for the most part, able to be derived through proof-of-concept code that was provided, which, again, was a whole manual parsing effort that I had to go through. And the whole goal of this is to reduce, and hopefully completely remove, all of that manual parsing by downstream consumers. So you can see here, just to go over it really lightly, it's a buffer over-read vulnerability in a repository that allows a read of sensitive information or a crash. So you have the vulnerability object up here, with nothing really special, but you do see the enumeration you were asking about, just identifying the object itself. The vulnerability identifier is represented by a CVE ID, and an originating product since, excuse me, since this vulnerability in particular happened to be in a library, we can make the claim not just that we know the vulnerable product, but also the originating product, instead of just what the final dependency would be. And then we have the very first scenario. So if we follow this down, you can see for this first scenario, we can derive from this information that it's a buffer over-read, so that would be a weakness of CWE-126. It's evidenced by the huntr.dev link; this was originally reported as part of a bug bounty program, so that's our evidenced-by data. And then you can see right here, we have an attack theater of internet. The reason I have this in bold is to point out that this was actually not explicitly stated in the vulnerability descriptive data or the proof-of-concept code that was provided. I actually had to go back, and based off of the exploit scenario that was being described, I had to infer, and frankly assume, that it was internet.
The reason that's worth bringing up is that if this information had been requested of the information provider, it's a very simple data point that they would have easily been able to associate, had they known it was valuable to downstream consumers, instead of simply assuming we would infer the appropriate response. So I kind of led into this, but this vulnerability in particular relies on social engineering, via a malicious file, against a victim user who has at least user-level privileges within the application. And then the first action is code execution within the context of the application, which leads to two impacts: a logical impact of read and a logical impact of service interrupt. And that's basically just backwards-mapping everything from this vulnerability description. The difference here is that this is structured in a way that we could actually build automation on top of, instead of trying to use AI or some other clever fuzzy logic to pull all this data together to do something with downstream. The last thing I want to point out is that I have two properties here, scope and criticality, that are both bold and with question marks. Same as the attack theater, the point here is that this information simply wasn't provided. But unlike attack theater, this information was impossible for me to derive as an information consumer, meaning that I, like any other information consumer, would have to assume the worst-case scenario, which would likely be: it's the worst possible read, it's the worst possible crash, and move forward with that. And if we derived that all the way downstream to, say, CVSS, we'd end up with a base vector string that's simply way too high, because we didn't have the information we needed to derive those data points properly.
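A rough sketch of how that first scenario might serialize helps tie the pieces together. Everything below is a hedged illustration: the field names, enumeration values, the `does_not_result_in` entry, and the helper function are my assumptions about the shape, not the real schema, and the evidence URL is deliberately elided.

```python
# Hedged sketch of the buffer over-read scenario discussed above.
# Field names and enumeration values are hypothetical illustrations.
scenario = {
    "attack_theater": "internet",        # inferred, not stated by the provider
    "exploited_weakness": "CWE-126",     # buffer over-read
    "evidenced_by": "https://huntr.dev/...",  # bug bounty report (URL elided)
    "barriers": [{"type": "social_engineering",
                  "detail": "victim must open a malicious file"}],
    "action": {
        "impact_method": "code_execution",
        "context": "application",
        "results_in": [
            {"type": "logical", "impact": "read",
             "scope": None, "criticality": None},       # not provided
            {"type": "logical", "impact": "service_interrupt",
             "scope": None, "criticality": None},       # not provided
        ],
        # The provider could also assert impacts that are NOT possible:
        "does_not_result_in": [{"type": "logical", "impact": "write"}],
    },
}

def missing_properties(impact):
    """Properties a consumer would otherwise assume worst-case for."""
    return [k for k, v in impact.items() if v is None]
```

A consumer tool could use something like `missing_properties` to flag exactly the scope and criticality gaps described above, instead of silently defaulting to the worst case.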
So the next thing here, just to show how scenarios work in a practical example: since this is in fact a buffer over-read, we know that there are likely multiple scenarios as far as the possible outcomes. So just breaking out the Vontology model: instead of having a single scenario, an information provider, if they wanted to be fancy and get a little more detailed, would be able to say that in the first scenario everything's the same but only a read would occur, and in the second scenario only a service interrupt would occur. And that really just allows us to have the context of: it's not A and B, it's A or B, as far as impacts go. This may not be valuable to everybody, but there are quite a few academics out there who would appreciate having this type of granularity built into the system. And for what it's worth, scenarios can be used to expand in multiple different directions; I'm just trying to show one example here. Yes. So the approach taken here is simply that when the buffer over-read happens, there's some sort of execution that may occur, and so the high-level concern that usually gets communicated is that there's some sort of code execution that results in an impact. Well, you could say that, and frankly, at a high level, that's a completely fair thing. The real point trying to be displayed here is simply that code execution, as you're kind of pointing out, simply isn't good enough data to build on top of. So we need to go to that next stage to say, well, what does that code execution actually mean? It means I can achieve a write, it means I can achieve a read. All right, so this next one's a little scarier. This is what I consider the theoretical example. It's a traditional stored cross-site scripting vulnerability, so it's not something that's overly complex, but it is something that's generally accepted and understood throughout most of the community.
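The "A or B, not A and B" split described above could be sketched as two sibling scenarios under one vulnerability. Again, the structure and names here are hypothetical illustrations of the idea, not the actual model.

```python
# Hypothetical sketch: one over-read expressed as two mutually exclusive
# scenarios, so consumers see "read OR crash" rather than "read AND crash".
vulnerability = {
    "scenarios": [
        {"name": "over-read leading to disclosure",
         "impacts": ["read"]},
        {"name": "over-read leading to crash",
         "impacts": ["service_interrupt"]},
    ]
}

def possible_outcomes(vuln):
    """Each inner list is one mutually exclusive exploit outcome."""
    return [s["impacts"] for s in vuln["scenarios"]]
```

A flat description would have collapsed this into one impact list; keeping the scenarios separate preserves the nuance the talk is arguing for.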
The point of the slide is not to walk through the model again; I'm not going to make us go through that. But what it is going to show is that when we have all the data points populated in a way that provides at least a baseline understanding of the vulnerability, we can do very nice things through automation. So this example is simply showing that when we have all of the information put together for at least a single scenario for one vulnerability, we can do clever things like programmatically generate vulnerability descriptions. Again, by having the foundational source information, we can do a lot of things downstream, and by not having it, it really impacts our ability to accomplish much with automation, at least in a useful way. All right, so the last thing I'm going to show here, as far as the data model, is a huge, well, it's the whole data model. Actually, that's the whole data model right there on the screen. Usually when I show people this, they kind of cringe back away and say, yeah, that's great, thanks for talking to me, I've got to go do a thing. And you can walk out if you want, but hold on, there's something coming. So overall, the main point I want to bring up with this is: give or take some of these valid values, and I know some of those can change over time as people prove there's a case for them, if I can fit the entire model for describing a vulnerability on one slide and have it still be somewhat legible, the main point is just to prove that it's not overly complicated to describe vulnerabilities. And it's not overly complicated to structure them in a way that we can build automation on. We just need to start actually doing it. So now that hopefully you aren't too scared, but maybe just a little scared from that last slide, this one is coming in to alleviate at least some of that concern. So, what are we working on?
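The programmatic description generation mentioned for the stored cross-site scripting example could be sketched roughly like this. The template, the field names, and the product name are all my assumptions for illustration; the real generator would presumably be driven by the actual schema's enumerations.

```python
# Hypothetical sketch: generating a human-readable description from
# structured scenario fields. Template and field names are assumptions.
scenario = {
    "attack_theater": "remote",
    "weakness": "stored cross-site scripting",
    "product": "ExampleApp 2.1",          # hypothetical product
    "impact_method": "script execution",
    "context": "victim's browser session",
}

def describe(sc):
    """Render one scenario as a consistent, templated sentence."""
    return (f"A {sc['weakness']} vulnerability in {sc['product']} allows a "
            f"{sc['attack_theater']} attacker to achieve {sc['impact_method']} "
            f"in the {sc['context']}.")
```

Because every provider would fill the same fields, every generated description comes out in the same shape, which is exactly the consistency problem from the start of the talk.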
First, not only do we have the model, but we have completed the first draft of a JSON schema that not only allows for communication of the information in a serialized format, but also allows you to check and validate that the JSON information is structured and well-formed. The benefit is that it lets us communicate the information from one organization to another without any issues in between. The second thing we're doing, and I would call this an alpha state right now (I do have a screenshot of it up), is a web UI to allow users or organizations to develop Vontology representations for vulnerabilities. Obviously, we'll be doing this ourselves as well, but having the UI available will mean that people can play with it and learn, give us feedback, et cetera. This is going to be browser-based with a relatively intuitive UI. It is a lot of information to squeeze into a small space, so that is a limitation we have, but what it will do is allow a lot of simple input suggestions. So if a field isn't free text or waiting for an explicit CVE ID, the UI is going to provide a list of possible values someone could supply, so it'll guide them down the path of how to describe a vulnerability. It's also going to point them to information that's missing. The Attack Theater, for instance, is a perfect example of something that could have easily been provided and simply was not, because they didn't realize it was necessary. The other thing this is gonna do, live while they're populating the information, is build up the JSON format on another tab that they'll be able to export, so they can provide it to another organization or simply another entity to look at.
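The "check and validate that the JSON is structured and well-formed" step can be illustrated with the standard library alone. In practice a real JSON Schema validator (for example the Python `jsonschema` package) would enforce the published schema; the required-field set below is an assumption for the sketch.

```python
import json

# Assumed minimal required fields; the real draft schema would define
# the full set of properties, types, and allowed values.
REQUIRED = {"vulnerability", "scenarios"}

def well_formed(text: str) -> bool:
    """Return True if `text` parses as JSON and has the required keys."""
    try:
        doc = json.loads(text)  # must be syntactically valid JSON at all
    except json.JSONDecodeError:
        return False
    return isinstance(doc, dict) and REQUIRED <= doc.keys()

good = '{"vulnerability": "CVE-XXXX-XXXX", "scenarios": []}'
bad = '{"vulnerability": "CVE-XXXX-XXXX"}'  # missing "scenarios"
print(well_formed(good), well_formed(bad))  # True False
```

Running the same check on both ends of an exchange is what lets two organizations trade records "without any issues in between": a record either conforms or is rejected before anyone tries to interpret it.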
And then it's going to allow for importing of that data too. I'm sure all of you loved those graphs I had up there, but they're still not the most digestible thing for the human mind, and if any of you have just stared at JSON output, that's also not the best thing to sit down and read with a cup of coffee in the morning. So this UI is really gonna let users import the data too, so they have a nice interface to look through and better understand the vulnerability as it's represented. The last thing worth pointing out is that this is going to be a standalone, browser-based interface, as I've mentioned. The real value and point of that is that if organizations don't want to just use what we're hosting on, say, the public website, it's gonna be extremely low overhead to basically pick this up from GitHub and drop it into whatever processes and policies organizations may use. That way they can use it internally and maybe integrate it in a way more specific to their needs. So it's gonna save a lot of cost by being simple. All right, so to recap: we went through a bunch of problems, covered a few of the goals the Vontology wants to solve, went through the current structure versus what the Vontology is trying to offer, went through the main components, did some practical examples, and then talked about the schema and the alpha-stage web UI. Anyone have any questions or thoughts? [Audience question] Right, so that's an interesting thought. So in the same way that the Bugs Framework folks are really focused... oh, I'm gonna do terribly at summarizing it. So the question was: how does this relate to the Bugs Framework, because it seems like the other side of the coin to vulnerability descriptions. What I would say is that the Bugs Framework is focused mostly on explaining weaknesses in a really detailed way.
This is, to your point, kind of similar, though I wouldn't say as detailed as the Bugs Framework methodology; this is trying to provide a structure to describe vulnerabilities in much more granularity than they're described today. [Audience: It describes what happened, right?] Yeah, it could be used that way. And one of the good things about the Vontology model is that it's set up in a very plug-and-play fashion, so when there are other data points mature enough to be plugged in, it would be very simple to take the Bugs Framework information and snap it into its appropriate location within the Vontology model. Because, like you mentioned, that does serve the purpose of explaining how we got there, whereas the Vontology is trying to describe what happened after the fact. [Audience question] Right, so the question was: can we automate some of this? Some of that would be done through the UI, which is the whole reason we're trying to develop the UI and why we're developing the JSON schema, because usually data models are very wonderful and easy, but they're also theoretical, and until you can implement them in practice, there's a lot of burden. So we are trying to take a lot of that initial burden of getting things spun up off the table. But as far as the expectations of the type of information we're looking for, one of the main arguments here is that a lot of the effort, complexity, and difficulty is currently being put on data consumers, and they're simply the wrong entities for that effort to be put on. They don't have the expert information and domain knowledge of the products to be making those kinds of determinations, and it really is something the information providers should be focused on, at least improving with the guidance in front of them. [Audience question] Well, right now, obviously, we're still building the UI and the JSON.
We can go through and hand-model some examples and generate the representations to make sure it can accurately do that, but I would say the stage we're at right now is this: once we get the UI up, we're planning to do some outreach to get assistance creating those, first to validate that it doesn't just make sense in our heads but makes sense in other people's heads too, because I'm sure there are some valid values within those lists that other organizations would like to have added, just for sanity's and clarity's sake for their use cases. But we aren't necessarily at the stage now to say this works perfectly for automation. The real issue at hand is that the foundational information is simply insufficient to build on top of, so we're trying to create that initial framework so we can have that foundation and do all the other fun stuff downstream. [Audience comment] Yeah, just someone else's problem. So I would say at this stage we're still focused on the vulnerability description part, because that's really lacking, at least in its consistency. But the way it's set up, if there were a way to plug something in, as long as you can tie it into the vulnerability object, it just plugs right in and you build that extensible model off of it. And in the event something crazy happened and it didn't actually jive with the model, I'm sure we'd be able to build something like that. But I am getting the all-stop, so thank you, everybody, for your questions, and I'll hang around for a bit if you have any more.