Okay, very good. Thank you for coming; we'll get started. So, "Keeping Up with the Joneses" — I did not come up with this title — but we're talking about CVEs, we're talking about security response teams, we're talking about tools, we're talking about trying to address inefficiencies to make our jobs better, faster, and less costly to us and to our customers. So: security response management — risk, cost, and best practices in an imperfect world. Let's talk about that. Keeping our products secure is a requirement for survival. However, the security data available is a flood, and it's also incomplete, sometimes misleading. We need it, but we have to be careful how we rely on it. It can be very inefficient, resulting in high costs, trying to manage the stream of CVEs for your corporation and for your customers. So we need to share best practices, knowledge, awareness, automation, and tools. The agenda today: understanding CVE sources, CVE quality, CVE volume, managing your security response, costs, best practices, solutions, and we'd also like to introduce a tool that we are now sharing with the open source world. The general patch workflow puts us in the center. Above us is an upstream CVE source that is finding the vulnerabilities and gathering the data, and publishing that, hopefully, in the CVE database. You, or me, in the center — we're a vendor, an OEM. We scan the upstream for the CVEs and their status, manage our response to them, triage, match against our products, try to understand them better, fix them, and create the patches for our customers downstream. And then the customers receive those, and test and deploy.
I'll be talking about the items in orange and focus on those, because for the other topics there are whole conferences; I want to address this middle topic of managing the security response. So — everyone's here, and I assume everyone knows what a CVE is. My question is, how many of you have actually looked at a CVE record? Okay, how many of you have never, ever seen a CVE record? My goodness — okay, I don't think I need this slide, but I'll do it anyway. There's the description field. It has the severity score — keep that in mind. It also has the attack vector with the component scores and such. It has a list of affected products, which is the CPE list. It has a list of support links, where people have attached helpful documents — hopefully patches, reproducers, and support material. And it has a category for the weakness. So, just for completeness, that is what a CVE record is. We'll have to keep that in mind as we try to analyze them. The upstream sources: MITRE is the corporation responsible for maintaining the CVE list. They pass the actual data on to NIST, the National Institute of Standards and Technology, which runs the NVD, the National Vulnerability Database, that tracks all the information. The hardware vendors also supply information. The software maintainers and distros track the things that apply to them — their response activities and how they apply to their various releases. There are also many other sources, more ad hoc, more random: mailing lists, websites, forums — some private, some public. And these are places, of course, to find things that maybe aren't quite a CVE yet, or status information that hasn't been published yet in the public forums. So, let's talk about the normal workflow — the one we expect and pray for. It's very, very easy: the community discovers a vulnerability.
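The fields just described can be sketched as a plain data structure. This is a simplified illustration, not the exact NVD JSON schema — the field names and the summary helper here are assumptions for the sake of the example.

```python
# A simplified sketch of the fields in a CVE record, as described above.
# Field names are illustrative; the real NVD schema differs in detail.
cve_record = {
    "id": "CVE-2017-13220",
    "description": "Short text describing the vulnerability.",
    "cvss": {
        "base_score": 7.8,          # the severity score
        "attack_vector": "LOCAL",   # part of the CVSS vector
    },
    "cpe_list": [                   # affected products and releases
        "cpe:2.3:o:google:android:-:*:*:*:*:*:*:*",
    ],
    "references": [                 # support links: patches, advisories
        "https://example.com/patch-for-cve-2017-13220",
    ],
    "cwe": "CWE-20",                # the weakness category
}

def summarize(record):
    """Return a one-line triage summary of a CVE record."""
    return "%s score=%.1f cwe=%s products=%d" % (
        record["id"],
        record["cvss"]["base_score"],
        record["cwe"],
        len(record["cpe_list"]),
    )

print(summarize(cve_record))
```

A summary line like this is the kind of thing a triage dashboard shows before the engineer opens the full record.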
There may be a step where it goes first to a private list at MITRE, a reserved CVE record. In that case a vendor, maybe under NDA, will be working on it and doing the testing. At some point it gets moved to public in the MITRE list, and at that point it's also published to the general world in the NIST list. The vendors work on it and provide a public patch. The patch goes to the vendors who represent their customers, who test it and provide it to the customers. In the end, the customers receive the fix and everything is fine. This is what people expect and hope for. What often happens is this: things are out of order, things are delayed. There's discovery in the community, MITRE may reserve it, and work is done. However, the vendors may publish their patch before it's made public by MITRE. Some people watching those sources will see that has happened and say, "What? Where did that come from?" and start applying patches. Then, at some point later, MITRE might mark it public, and some time after that NIST might also mark it public. So there's a delay. If you're not watching these various sources, you may miss a vulnerability, a patch that has happened, and that is going to be a problem. It's especially embarrassing if your customer finds it first. That's pretty sad — you get an escalation, you get a call, and you have to explain what happened. So the thing is, you have to be careful. The nice workflow doesn't always happen. You have to keep your eyes open, be aware of the vendors, and be flexible when your people find it first. You can't rely on NIST being the latest word. You can't rely on MITRE being the latest word. You have to be vigilant. Now, sometimes it's more complicated than that. I won't name this particular escalation, but it can sometimes be a mess.
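The out-of-order publication problem above can be made mechanical: if you record the date each source first mentioned a CVE, you can flag cases where a patch appeared before the canonical lists went public. This is a minimal sketch under assumed source names ("mitre", "nist", "vendor_patch"); a real tracker would pull these dates from the feeds themselves.

```python
from datetime import date

def disclosure_gaps(timeline):
    """Given {source: first_seen_date}, report non-canonical sources that
    published before the CVE went public in MITRE/NIST, with the lag in
    days. These are the cases you'd miss by watching only NIST."""
    public = min(d for s, d in timeline.items() if s in ("mitre", "nist"))
    return {s: (public - d).days
            for s, d in timeline.items()
            if s not in ("mitre", "nist") and d < public}

# Hypothetical timeline: the vendor patch landed 9 days before MITRE
# marked the record public -- exactly the out-of-order case above.
timeline = {
    "vendor_patch": date(2018, 3, 1),
    "mitre": date(2018, 3, 10),
    "nist": date(2018, 3, 12),
}
print(disclosure_gaps(timeline))
```

Anything this reports is a window during which your customers could have seen a fix you did not know about.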
And it's always the high-profile CVEs where everything becomes a mess. So let's talk about some of these, because they've really brought into view how many small inefficiencies add up to big inefficiencies. Something is discovered and reserved in MITRE. They start working in the background, sometimes for a long time, and sometimes it's made public by the community before it goes to MITRE or to NIST — they may just announce it to the world without warning. People suddenly discover it, and if it's a big, high-profile one, there will be panic from customers, surprise for customers, and a big surprise for you as a vendor, because you did not know about it. Sometimes it takes even longer to solve, and while you're waiting for patches and solutions, you have to deal with your customers. They want to know what's happening, and you may not know yourself; you're trying to find out what's going on too. You're caught in the middle of information that people desperately want answers about. You get into this loop of trying to find the answers, dealing with your customers, keeping everyone happy — a loop of meetings, writing papers, gathering reports, and trying every little patch you can find. You find it's very wasteful in time and very, very stressful, of course. But how you organize your data can really affect how well you can respond to all these extra pressures. In the end, you get a patch, everyone's happy, or happy enough, and you move on. But you've lost a tremendous amount of time — not just the time actually fixing the problem, but time spent just dealing with the problem, trying to gather all the information. We'll talk about that in some detail. Quality of the CVEs — that can be an issue. They can be very brief in their description.
They can be incomplete. They can in fact be slightly misleading in the description. But sometimes that's all you have to work from. CVEs usually have the CPE list of affected products and their releases, and you would hope that would be complete — and for mature CVEs it is pretty complete and accurate. However, if you're working on something new, it can be incomplete, incorrect, or in fact missing. For most recent CVEs, there is no CPE list at all. The record may have missing or only a few content links, so you may have nothing to work with but a description. You then have to start searching the community and asking people: what is this about? Is there a reproducer? What can we do? You also have to keep continually scanning the information, because it's changing — hopefully getting more mature, more correct — but it can take a while for it to reach a state where you can use it. So you need to keep looking, and that's a lot of polling, a lot of time. And sometimes there's a delay in the content updates — we talked about MITRE versus NIST — so you have to be proactive, and you also have to be patient sometimes. Quality really affects people like us who are always about to release. The customer is going to want to know: are you vulnerable to these defects? Well, for the last 20, or the last 400, there's no information besides the description. So what are you going to do? You need to start looking at other ways to analyze this, to know at least whether you're not vulnerable. If you are vulnerable, you can at least mark it and promise that there will be an update at some point — you just have to have something in your hands. But unfortunately, for people releasing, that's exactly the information you do not have from the CVEs. You have to be more creative.
And if you're going to have tools doing CVE work for you — auto testers, auto scanners, auto research tools — you have to be careful, because as I said, the information can be incomplete or inaccurate in big ways, but also in small ways. We've had situations where the last little bit of the version number differed from what was expected, and if your tools rely brittlely on that information, you can badly miss things that should have matched. So you have to build flexibility into the tools you use to assist you. Let me give examples of some CVEs we've had problems with, to give you a sample of the issues that come up. For example, CVE-2017-13220: the CPE, as you can see, said it was for Google Android, and that's probably where they found it. However, the description talks about kernel issues — a very different thing. So you have to be vigilant; in that case you have to go to the description, maybe to the patch that's actually attached — you have to go that extra mile, unfortunately. The next one, -2524, had a CPE claiming readline 6.3 and below, but in fact it was 6.0 and above. Again, you have to take the information and be vigilant about it. For -8872, the CVE was traced to a bug in a patch, but upstream ignored that patch and fixed it in a different one — and no mention, no cross-reference to the CVE was made. So the record may say it's fixed, but that fix never went in; it actually got fixed over there, you hope. But how do you know if that patch is actually in your system or not? Are you vulnerable? Have you been patched? Unfortunately, these are exceptions you have to keep your mind open for. And the final one, -10195: a dark CVE, still reserved — however, there is a patch for that system, but it's for software that's long dead, and the patch will never go upstream.
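The readline example above — a CPE claiming "6.3 and below" tripping up an exact string match — shows why brittle version comparison loses matches. One defensive tactic is to compare only the significant leading components of a version string, so a stray trailing digit in the CPE data doesn't break the match. This helper is a sketch of that idea, not anything from a real scanner.

```python
def version_key(v, significance=2):
    """Reduce a version string to its first few numeric components so
    that a stray trailing component in CPE data does not break matching."""
    parts = []
    for p in v.replace("-", ".").split("."):
        if p.isdigit():
            parts.append(int(p))
    return tuple(parts[:significance])

def loosely_matches(our_version, cpe_version, significance=2):
    """True if the two versions agree on their significant components."""
    return version_key(our_version, significance) == version_key(cpe_version, significance)

# "6.3" vs "6.3.8" agree on major.minor even though an exact string
# comparison would miss the match.
print(loosely_matches("6.3", "6.3.8"))
print(loosely_matches("6.3", "6.0"))
```

The trade-off is deliberate: a loose match produces some extra candidates for human review, which is far cheaper than silently missing a vulnerability.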
So it's a dark CVE: there's a fix, but do you know what got in? And the question your customer will ask, of course: are you vulnerable? This is a dead one that's maybe fixed, probably not, but it may affect your system, so you have to be vigilant even about ones like this that may not be around anymore. Let's talk about the volume of CVEs. It's growing — more than a thousand per month. And you have to evaluate every single one, because even if it looks like it doesn't apply to you, it may, as you saw in the previous examples. You can't just skim it and move on; you have to actually look at it, and that's unfortunately expensive. And it's unfortunate because maybe only 30% of the CVEs, or fewer, actually apply to you, but you have to look at every single one, and it has to be done by someone who is aware of the limitations in the quality of the CVE data. The sheer number is expensive, but the analysis and the expertise are also very expensive. That takes time. And of course, incorrectly categorizing a CVE can have even greater cost: the cost of an escalation, of a vulnerability reaching your customer, of trying to repair that. So it can be very costly to do, but the cost of not doing it can be even greater. On the volume of CVEs, just an example: it's almost doubled in the last two years and is still growing. Looking at our data, there were about 14,600 in 2017. 5,400 of those were actual defects against our particular system, as it turns out, and some of those of course spread across different releases — we have multiple releases, in this case five for that last period. So the volume is increasing, not quite exponentially, but pretty fast. Now, there are tools. There are the system analysis scanners — an example is Nessus. They can be very valuable for targeting production systems.
They have a lot of features around that. They can tell you about known vulnerabilities, but they do not tell you what you are not vulnerable to. And unfortunately the customer is going to want that answer too — what are you not vulnerable to, in addition to what you are vulnerable to — so they can really understand their system and know where they stand. The results are mostly in the category of "needs more analysis, more investigation," so there's still a lot of work after they run. They're good to have, a good backstop for finding things on the fly, but you have to be aware of their limitations. And of course they depend on the information they have, so all the gaps and problems of the CVEs are magnified in the tools. You have to understand what they're trying to do and how to be flexible. Then there are other scanning tools, the build and source analysis tools — examples are Black Duck, the Yocto Project's cve-check, Dependency-Track; there are many that do this. They scan either the raw software or the builds as they go by, so they can be more precise than system analysis because they have access to more information — they can do checksums and such. However, they can trigger on something that is present in the source but never actually built into the running system. They find it; it's a false positive. So you have to be careful about that. They help determine what is vulnerable, but again they don't tell you what you're not vulnerable to — just what they found. And again they rely on the known information from CVEs. We hope they have extra secret sauce and have been analyzing things, but you have to be careful about what comes out of them. So, let's talk about the general topic of security response management. Here's my central thesis: while there's heightened awareness of device vulnerabilities, what is often missing is awareness of the process of managing the security response itself.
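The "what are we not vulnerable to?" question has a simple structural answer: scanners only report hits, so to answer it you must place every incoming CVE in an explicit bucket, and whatever is left unbucketed is your analysis backlog. A minimal sketch of that bookkeeping, with hypothetical CVE names:

```python
def triage_status(all_cves, vulnerable, not_vulnerable):
    """Scanners report hits; answering 'what are we NOT vulnerable to?'
    requires every CVE to be in a bucket. The leftover set is the
    backlog still needing analysis -- the hidden cost discussed above."""
    vulnerable = set(vulnerable)
    not_vulnerable = set(not_vulnerable)
    backlog = set(all_cves) - vulnerable - not_vulnerable
    return {
        "vulnerable": vulnerable,
        "not_vulnerable": not_vulnerable,
        "needs_investigation": backlog,
    }

buckets = triage_status(
    all_cves=["CVE-A", "CVE-B", "CVE-C", "CVE-D"],
    vulnerable=["CVE-A"],
    not_vulnerable=["CVE-B", "CVE-C"],
)
print(sorted(buckets["needs_investigation"]))
```

The size of the `needs_investigation` bucket over time is a direct measure of how far behind your triage is.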
This is all overhead. You're not making money off this. But it's overhead we can hopefully control and understand, at least. It doesn't make money, but it does protect money — from escalations, from liability, and it protects your customers' trust in you. So it costs money, but it can protect money. As for the issues of managing this whole stream of CVEs: the amount of work is growing. There's material coming in all the time, and it's just getting worse. And you probably have a support matrix — not just the current release, but a couple of releases behind it. One of the problems is that the fix for this release and this kernel may be different for that release and that kernel, and that release and that kernel. That's a lot of work, so the support matrix is a big factor in this. Vulnerabilities, again, often apply across different releases. The data is often not well integrated with your other systems. You may have your defect system here, your agile system there, your incoming CVEs here, your patches out there, the engineers over the wall. When things are not integrated, information isn't flowing: there are inefficiencies, lost items, and a lot of time spent trying to find information and polling people — lost time. And if you're lucky enough to have access to embargoed data, you have to track that separately too, and that's another burden. It's valuable, but it's a burden, because you have to make sure you have control of it, that the right people have access to it, and that you can move it quickly. Often that material gets lost in private directories, lost in email, and that's just another way to lose time. There are also companies you can offload your work to.
If you don't have the expertise or the time to do it, the benefit is they can provide missing expertise, bandwidth, or resources, but that pass-through can slow your customer response time. It can also be very, very expensive — some are phenomenally expensive. They're taking on risk and lowering yours, and that's expensive for corporations. So it's probably better to bring some of that in house, if there are ways to make it cost effective so you can do this without outsourcing it. What about defect systems? They're often not the best place to track security issues. The reason is that security issues and vulnerabilities often cut across products, whereas defect systems focus on single products and single releases. So they don't cover everything; it's hard to carry information across and track it at a high level. And you may not even know yet what a vulnerability applies to — it may be vague, still under investigation. Where do you track it? What bucket do you put it in? It has no bucket. That's another reason a defect system may not be the best place. It's also hard to manage embargoed data there. If you've ever worked with JIRA or Bugzilla, they're focused on single products. You can make one invisible, but if you have defects against a release, how do you track that? Do you have a shadow product? And if you have an access list of A, B, C for this and D, E, F for that, how do you combine all the different matrices of access lists? It's very, very difficult to use your defect systems effectively for this. You need something global, something more powerful. So, let's summarize the costs before we go on. Tracking upstream CVEs: that's a cost, and you have to pay it. Creating and fixing defects: you have to do it.
Providing updates to customers — whether it's the actual patches or just responses, conversations, meetings — that's something you have to do. Providing patches to customers: something you have to do. Now let's talk about the unnecessary costs, because this is where I'm going with this. Repeated manual polling of the upstream is expensive. It takes a lot of effort, and you have to do it repeatedly. The same goes for tracking your defect status: you can maybe do some reports, but it's difficult and time-consuming to manually pull out information and then match it against the incoming CVEs. And analyzing all those incoming CVEs — I keep mentioning it — is a huge pain point, because you have to look at every single one. If you have an engineer go in once a month, once a week, sometimes once a day, and re-analyze things, they're spending a lot of time looking around trying to analyze what's in their hands. We've got to do better than that. You also have to track all the patches, reports, and documents. They could be in twenty different directories, lost in email, in many places. That's not efficient; you need to be able to share this material quickly and easily across the team. You have to produce status for customers and for management. And if you're tracking embargoed data, you have to keep track, for compliance reasons, of who knew what, when. That's hard to do without a tool to assist you: you have to do reports, keep track, look around — a lot of effort to manage all that and then prove to your management that you are compliant and have properly disclosed all the information. And then you have to repackage your results: you're going to publish your CVEs on your public site for your customers to see.
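The "who knew what, when" requirement for embargoed data is essentially an append-only audit log: every view, download, or status change is recorded with the user and timestamp, and compliance questions become queries. This is a minimal in-memory sketch; a real tool would persist the events in its database.

```python
from datetime import datetime, timezone

class AuditLog:
    """Append-only record of who touched which embargoed item and when,
    so 'who knew what, when' can be answered for compliance."""

    def __init__(self):
        self.events = []

    def record(self, user, action, item, when=None):
        """Log one transaction (view, download, status change, ...)."""
        self.events.append({
            "user": user,
            "action": action,
            "item": item,
            "when": when or datetime.now(timezone.utc),
        })

    def who_knew(self, item):
        """Every user who ever touched this item -- the disclosure set."""
        return sorted({e["user"] for e in self.events if e["item"] == item})

log = AuditLog()
log.record("alice", "view", "CVE-XXXX-1234")
log.record("bob", "download", "CVE-XXXX-1234")
log.record("alice", "view", "CVE-XXXX-5678")
print(log.who_knew("CVE-XXXX-1234"))
```

With the log in place, proving compliance is a report rather than an archaeology exercise through email and private directories.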
That also takes effort, to gather the information, and it's a place for information to get out of sync. And you have to do this over and over and over again, because everything is moving, everything is changing, everything is hopefully getting better — but you have to keep redoing it, and that's just a huge cost. So, best practices: how do we deal with this? It's expensive for us; I'm sure it's expensive for you. Here are some of the solutions we think we've come up with. Automate as much as possible. Don't expect your engineers to do this by hand — it takes too much time, it's too inefficient. You've got to automate, and you've got to spend the money on automation. When we were starting out, at 200 CVEs a month, we could handle it; that was just an incremental cost we could absorb. At a thousand — my goodness, that's a lot of incremental time, over and over, and it's just getting worse. So you have to do the automation: spend the time, spend the money, get it done. That covers gathering the upstream data, all the change notifications, and polling your defect systems, so you don't have to keep looking manually. Every time you want to know the status, ask the system. Reporting tools: you might as well have something that generates the reports. Why keep doing it by hand? You have the information, and doing it by hand just adds inaccuracies — automate it. And history and audit tracking, so you can do your compliance: show your customers you're vigilant, do your compliance for your embargoed data, prove you're compliant. Have tools do that for you rather than doing it all by hand. Use multiple sources. NIST and MITRE, of course — use those. But stay vigilant about the other sources. As I mentioned, it may be fixed elsewhere: Red Hat may know it's fixed, Debian might know it's fixed, the Bash maintainers might know it's fixed.
You've got to keep looking at these different sources and keep them in mind as you track what's going on, because they may have had more time with it — they may have actually fixed it. Stay vigilant about all the other players and you can stay in the game, stay ahead. And again, you want to find the answers and solutions before your customers do, so you can be the smartest person in the room. Aggregate the data. Don't keep it all separated. Aggregate it so you can run reports, compare information, always be up to date, and have everything integrated: defects, CVEs, status, customers, products. Keep that integrated and easy to build on. More best practices: provide easy access to the data. Have your manager able to run the reports so you don't have to. Be able to auto-generate things on demand for your customer base — all those requests where you get the email: okay, another report, another report. Make it easy to get to, in a safe manner, of course. Make it easy for your field people to find the appropriate information so they can quickly find answers for their customers, because continual emails, continual requests, and continual repackaging of information take time from the people who need to do the actual work and evaluations. Be flexible with the data. When we do our triage of incoming CVEs, we ignore the version numbers; we just go by package names. That's enough resolution to filter a lot of the data down. Then we can know if we're simply not vulnerable, and if we think we are, at least we've cut down the list. Don't try to resolve every single detail at that stage — get the best answer you can at the time and move the data along, so you can separate what you have to deal with from what you don't. And provide tools for CVE inflow triage. As I mentioned before, this is a huge expense, and you've got to do better.
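The name-only triage pass described above — ignore versions, match only package names — can be sketched in a few lines. The CVE entries and package names here are hypothetical; the point is the coarse first-pass filter, not a real matcher.

```python
def coarse_filter(cves, our_packages):
    """First-pass triage as described above: match on package name only,
    ignoring version numbers. Anything with no name match can be set
    aside as 'not vulnerable'; the rest goes on for real analysis."""
    our = {p.lower() for p in our_packages}
    keep, discard = [], []
    for cve in cves:
        text = (cve["description"] + " " + " ".join(cve.get("packages", []))).lower()
        if any(p in text for p in our):
            keep.append(cve["id"])
        else:
            discard.append(cve["id"])
    return keep, discard

# Hypothetical incoming records against a ship list of two packages.
cves = [
    {"id": "CVE-1", "description": "Buffer overflow in readline 6.x"},
    {"id": "CVE-2", "description": "Flaw in a Windows-only driver"},
]
keep, discard = coarse_filter(cves, ["readline", "openssl"])
print(keep, discard)
```

The `discard` list is exactly the bulk you can mark "not vulnerable" in one operation; only the `keep` list earns an engineer's attention.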
There are ways to help these people out, and you need to start doing that; I'll be showing some of them, of course. Provide management of NDA information: keep it accessible but in safe storage, with user restrictions — manage your compliance carefully and have tools to assist, so you don't have accidental crossover. We've been dealing with all this for quite a while, and we reached the pain point where we had to build a solution. We came up with this thing called the SRTool — the Security Response Tool. It solves many problems, and much stress, for us, because all of our really expensive, experienced engineers were spending their time doing all the manual work I mentioned: doing reports, gathering data, re-analyzing, re-integrating information, pulling everything together. It was just too much, and it became too expensive. So we built this, and we're in fact sharing it with the open source world. We're promoting it through the Yocto Project, to have the community join us and take advantage of this tool as well, because it's not just us. We'd like to not spend our time working on this process problem; we need to spend our time actually producing patches and fixes and servicing our customers. So what does it do? Well, the list should look very similar to the list of best practices — that's how it was designed, to address the best practices directly so we don't spend the time doing them manually. Automation: it has scripting to do all the data gathering, going out to sites to collect the data incrementally, driven by cron jobs, so we can set the right frequency and don't need a person pressing a button or doing anything manually. Easy access: we provide a web interface so people can easily walk the data — just follow the links and gather the information. We also have a command line.
If we didn't anticipate some question or answer in the GUI, people can write their own scripts to query the database themselves. Data aggregation: an SQL database, where all the data is integrated and tied together, so everything knows about everything else — up to date, consistent, brought together. Data flexibility: we designed that in, and you'll see some of it. And you can add more scripts; we'd like to do more analysis after we get some more experience with this, but it's enough to get us out of the gate. Triage of the inflow: I'll show some of the tools we've developed to help our engineers manage CVE triage. Finally, NDA management: we keep user lists, so specific information can be tied to an access list. Every person logs in, so we know who everyone is, and if they do a transaction — add a record, download a file, change a status — we know everything that's happening. Have the tool do that. So we have a complete history: we can run reports against it, validate it, enforce it, and really control the NDA information. Here's an example page from the tool. You can see we have a mix of products, a mix of statuses, defects, releases that have been fixed — thank goodness. Everything is linked, everything tied together. This is one CVE that we had to deal with, and it had a different effect on different releases — some we were still working on, some may be backported. At the top it also shows our custom content management: these are custom releases, kind of in parallel — not quite a release, but kind of a release. Our system is flexible enough to handle mixed data like that, across releases, so we can keep tracking it all together. So this is a vulnerability.
Again, it's across a product, across releases. We can drill down to specific releases, down to the specific CVEs, and it's all linkable — very, very easy to find information. And there are special buttons — not on this one, but there's a report button at the top: click it to run a report or an export and quickly get the information out to work with. Briefly, the object model. Data sources: you can have as many data sources as you need to get the job done — certainly the CVE providers, but also your defect system, your sustaining system, your management system, whatever you need. You can add data sources to make sure all the data you need is at your fingertips. CVE: represents the upstream CVE, with links and such. Vulnerability: usually one CVE is one vulnerability, but sometimes several CVEs add up to one vulnerability, so we can group them into one problem — and that's across products. Investigation is a word we came up with: it describes how we're going to resolve that vulnerability for a particular release, and attached to it are the particular defects that track it for that release. And Defect: the representation of our defect system. We use JIRA; we're also adding a plugin for Bugzilla, and whatever system you have can be represented — integrate, tie together, pull, push, whatever you need to do with the defect system, so your CVEs, vulnerabilities, and defects all stay tied together with the data you've gathered. Notifications: if the status has changed upstream, the tool sees it and sends a notification to the manager that something has changed. Or if the defect status has changed — why have a person poll it? Have the notification come in and tell people. That way everything stays tied together without people continually looking for changes.
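The object model just described (Vulnerability grouping CVEs, an Investigation per release, Defects mirroring the defect tracker) can be sketched with dataclasses. The class and field names follow the talk's terminology; the actual SRTool schema surely differs in detail, so treat this as an illustration of the relationships only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Defect:
    """Mirror of one record in the defect system (JIRA, Bugzilla, ...)."""
    key: str
    status: str = "open"

@dataclass
class Investigation:
    """How one vulnerability is being resolved for one particular release."""
    release: str
    defects: List[Defect] = field(default_factory=list)

@dataclass
class Vulnerability:
    """One problem, possibly grouping several CVEs, spanning products."""
    name: str
    cve_ids: List[str] = field(default_factory=list)
    investigations: List[Investigation] = field(default_factory=list)

    def open_releases(self):
        """Releases where at least one tracking defect is not yet fixed."""
        return [i.release for i in self.investigations
                if any(d.status != "fixed" for d in i.defects)]

v = Vulnerability(
    name="V-example",
    cve_ids=["CVE-XXXX-0001", "CVE-XXXX-0002"],
    investigations=[
        Investigation("release-1", [Defect("DEF-1", "fixed")]),
        Investigation("release-2", [Defect("DEF-2", "open")]),
    ],
)
print(v.open_releases())
```

Queries like `open_releases()` are exactly what a cross-release dashboard needs and what a single-product defect tracker cannot easily answer.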
It tells you when something's changed and there's something for you to do, rather than you wasting time looking for things to do. Here's a brief outline, for people who like pictures — I like pictures — of how it all works together. Multiple sources up here. The backend scripts down here, triggered by cron jobs: once an hour if you want to keep your defects up to date, once a day for the updated CVEs. The database ties it all together — SQL; everyone can work with SQL. On top, the web interface, or the custom data scripts to get your data out. Report scripts: reports out to your customers and your management. You can then triage the data and send it to your public-facing database — make sure it's clean — and keep it separate from the everyday working information stored in the CVE and defect systems. We also have a place for bulk data, a place to download files pertinent to specific CVEs or vulnerabilities, and a place to upload data, so you can easily share information between teams rather than having it lost between systems. Now, the incoming CVEs. One of our primary goals was a tool to help triage incoming CVEs. Here are all the incoming CVEs, with their descriptions as you know them; you can follow the link to see the full record. But here's the magic part, our secret sauce. We've been doing this for ten-plus years, and we have a lot of information about which keywords applied to CVEs we were vulnerable to, and which keywords applied to CVEs we were not vulnerable to. We've taken that list and applied it as filters against the information in the incoming CVE record. There may only be a description, so we'll scan the description; if there are CVE fields, we'll scan against those.
If we have information from downloads or attachments, we'll scan against that. We use that database of learned knowledge about what applies and what does not apply to pre-scan the information, so when the engineer comes in to do the triage, we can give them reasons for ("this is a vulnerability") and reasons against, based on what we've learned. Sometimes that is really all you can do: apply heuristics to the incomplete information in the CVEs, so that we can quickly give guidance to the engineer. It says, well, it's got certain keywords, it's probably Windows; or it's got these keywords, it's Linux. We have a couple of ten thousand keywords, I think. That's just the first step, but sometimes that's all you need.

Then you can start peeling stuff off. You can say, okay, everything that is Windows doesn't apply to me; do a search for all those, select them all, mark them not vulnerable, and they're out of the system. You've peeled away a couple hundred right there. Go to the next keyword that you're probably not vulnerable to and peel those away, so you can peel away stuff in groups. Then you get to the ones you have to start looking at: well, we're definitely vulnerable to that, so mark those vulnerable. These definitely need investigation, so pull that group out. That way you can really chop through the list, divide and conquer, and get it into the right state so your next level of engineers can start working with it. So that's how we've been applying our information to make that job, that very expensive job, easier and safer for us.

So, next steps: it's under development, come join us. It's open source. There's a community page at the Yocto Project about the status of the tool, how to install it, and how to run it.
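The keyword pre-scan described above can be sketched in a few lines. This is a toy version under stated assumptions: the real tool carries roughly ten years of learned keywords, while the two word lists here are made-up examples, and the function name is hypothetical.

```python
# Hypothetical keyword lists; the real tool ships years of learned keywords.
FOR_KEYWORDS = {"linux", "kernel", "glibc", "openssl"}      # suggest "vulnerable"
AGAINST_KEYWORDS = {"windows", "macos", "activex", "ios"}   # suggest "not vulnerable"


def triage_hints(description):
    """Pre-scan a CVE description; return (reasons for, reasons against)."""
    words = set(description.lower().replace(",", " ").split())
    reasons_for = sorted(words & FOR_KEYWORDS)
    reasons_against = sorted(words & AGAINST_KEYWORDS)
    return reasons_for, reasons_against


pro, con = triage_hints(
    "A buffer overflow in the Linux kernel allows local privilege escalation"
)
```

With hits pre-computed per record, the bulk peel-away step is just a filter: select every record whose only hits are in the "against" list, and mark the whole group not vulnerable in one action.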
The design is modular, so it's very easy for you to add your own data sources and to implement your business rules against the data: your own reports, your own agile system, your own management system, your own compliance system, whatever you need to do. It's very modular, purposely.

In conclusion: there's a lot of information out there, thankfully, and with knowledge, awareness, adaptability, and automation we can manage this struggle. Because it is a struggle, an everyday struggle, but we can manage it and stop losing money on it. We need to spend people's time on the actual problems, not the process. So let's have tools help us with the process and get on with it.

And links to learn more: we have a mailing list at the Yocto Project on security. There's myself; I work for Wind River, we're the ones who contributed this, and I'm the maintainer. And if you come to the Yocto Project booth, you can see a live demo of the tool, how it all integrates together, and how it might work for you. So that is my talk. Questions? Oh, here, my lovely assistant has a microphone. Anyone like to ask a question? Okay. Oh, going to make her run the whole way. I would come to you, but I have to stay on the stage. Yeah.

Okay, I suspect when setting up SRTool you have to tell it what software you are using yourself. So can you pick that up from the GUI, or choose from a list of CPEs, or from the vulnerabilities that are disclosed, saying we are using this and this? How do you do it?

So what we've done is we've given you a list of keywords, our ten years' experience, to get you out of the gate, and we have ways you can add to that keyword list and ways to remove stuff from that keyword list. We don't know your product yet. We want to move into a world where we have our own CPEs internally for what the products are, with the actual vulnerability list, the package list, and all that.
We haven't implemented that yet, so we're just using the heuristics right now. But that would be easy to answer, easy to apply, easy to bring in. So if you have that need, we can work together, we can make it happen. We know about our problems, we don't know about your problems; that's why we made it modular, to make it adaptable. So work with me, and I'll implement something for you to make you happy, because I'm sure it'll help us too. More questions? Ah, one over here.

So how would the system help me figure out, say, a patch that arrives long after the CVE and doesn't have any reference to the CVE it fixes, like the examples you gave?

So if it came from a data source that you're aware of, you can keep polling that data source for updates. If it came from a more ad hoc forum, or an email list, or some mail from your brother, you can enter it into the system. I can't fully answer that, because it takes vigilance to find that stuff. But if you can find it with tools, if it's in a place that's amenable to access, any one of these aggregation sites or host sites, you can have tools that look for it. And if you find it other ways, you can easily attach that kind of metadata, because we know that no system is perfect or complete, so we made it very easy to attach stuff that you find out after the fact. I'm not sure I'm answering your question, but that's how we're approaching it right now, because there is no good answer for that. I'll also mention that you can create your own events in the system, for CVEs that do not exist yet, or for internal things where you know there's a problem but don't know where it is. You can create those, and if a CVE does come to exist, you can then promote the event to the actual CVE. So you have ways to be flexible with the data, because it's an imperfect world. Yes, thank you. Question.
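The "create your own event, promote it later" idea above can be sketched as follows. This is an illustrative sketch, not the SRTool implementation; the class name, the `SRT-LOCAL-` id scheme, and the `promote` method are all assumptions for the example.

```python
class SecurityEvent:
    """A tracked problem that may not have a public CVE yet (hypothetical sketch)."""

    def __init__(self, name, description):
        self.name = name            # internal placeholder id, e.g. "SRT-LOCAL-0001"
        self.description = description
        self.cve_id = None          # unknown until a CVE is published upstream

    def promote(self, cve_id):
        """Attach the real CVE id once it exists, keeping the local history."""
        self.cve_id = cve_id


# Track a suspicious fix seen on a mailing list before any CVE exists...
event = SecurityEvent("SRT-LOCAL-0001",
                      "suspicious patch spotted on a vendor mailing list")
# ...then promote it when the CVE is eventually assigned
event.promote("CVE-2018-9999")
```

The useful property is that triage notes and defect links accumulate on the internal event from day one, and nothing is lost when the public identifier finally arrives.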
Did I understand correctly that all this tooling is about filtering the information in the CVEs? And for the second step, when thinking about a build from the Yocto Project where all these projects are configured, features configured away, is that left for the investigation, or is there something to help with that part of the problem?

That's the next thing for me to do. That is the next thing, because we know from our builds, from the recipes, from the build outputs from the autobuilder every night, what's in the system, so we can match it up. If you have enough CVE information, and we certainly can have our internal CVEs about the product from our builds, we can gather that pretty easily: the versioning, what's in there, what's in that particular configuration, to match against CVEs that may have no CPE data. So then we have, well, okay, it's bash; that's all we know for now. It's imperfect, and as this gets better, the matches can get better. I have not implemented that yet, because right now my number one priority is to make the engineer who's triaging stuff happy. So that would be my next step, to implement that.

Does that mean, for example, if we put Samba on a product and we configure away the domain controller features, and we get a vulnerability against Samba, will it detect that we are not vulnerable because we configured away the domain controller stuff?

So the quick answer is yes, you can do that, maybe not today, because you know what you're vulnerable to: Samba. We have a whole idea of meta keywords that attach to everything, so even if the CVE doesn't say Samba, but you know it affects Samba, you can have keywords that match that, and you have the extra reference when the CVEs are not completely accurate. So yes, there are ways to match it against the CVEs, or against the keywords where your engineer has said, this is really Samba.
And it can match: okay, this product A has Samba, this one does not, this one maybe. So yes, you can match that. I have not implemented that yet, but it's completely within the scope, the design, and the goals of the system. Any more questions? Three, two, one, it's a wrap. Thank you very much.