Hi, I'm Mark Esler. I'm from the Ubuntu security team, and this talk is about how FOSS projects can improve security by taking proactive steps and responding to security reports.

First, there are a few background terms that we need to define to talk about security. So if someone could shout out, what is a vulnerability? Yeah, a vulnerability is any computational flaw that weakens the security of a system. The important takeaway is that not every bug is a vulnerability. There are many types, or families, of vulnerabilities, and Common Weakness Enumeration, or CWE, is a way to describe specific classes of vulnerabilities. CWEs are defined and organized by the MITRE Corporation. If there is a vulnerability in code, it's helpful to determine what type of CWE it is. Describing it will help communicate the vulnerability to others, and it will help you understand its impact.

CVSS, or the Common Vulnerability Scoring System, is how we assign severity to vulnerabilities. CVSS scores are not perfect; sometimes they do an extremely poor job of quantifying a vulnerability. Nonetheless, CVSS version 3.1 is the metric that is used. It roughly describes how an attack occurs and the kind of impact it has. In this example, the attack vector is adjacent. This means that the vulnerability is not remotely exploitable over the internet, but could be attacked adjacently, through wireless or Bluetooth. Part of this example also shows that availability is highly affected. This means that the resource or system could be denied from running. When we tabulate all these metrics, for this example the CVSS score is a 5.4, which we call a medium vulnerability.

Common Vulnerabilities and Exposures is almost always referred to by its acronym, CVE. CVE is the naming system used to specify different vulnerabilities; it's the common name of a vulnerability. By using CVEs, many groups can talk about the same vulnerability without getting confused. The CVE program is the organization in charge of cataloging CVEs and is the primary resource for CVE data. Upstream developers and projects will use CVEs to communicate vulnerabilities to their users and stakeholders, and downstream groups will also use CVEs for communication.

So let's take a look at how CVEs can be used to communicate a vulnerability. Say we have an upstream software project like SQLite. Downstream of SQLite are projects like Node SQLite 3, and downstream of Node SQLite 3 is NPM, which distributes Node SQLite 3. If there's a vulnerability in SQLite, they will likely notify the Node project before publicly announcing and disclosing the vulnerability. This allows Node SQLite 3 to patch the vulnerability before it is public. Then users can check the security of their packages by running npm audit to see if the CVE has been fixed or not.

CVEs contain a lot of metadata. If there's a vulnerability in your software, it's best if you, the upstream project, can help write this metadata. I'm only going to highlight a few key pieces of CVE information. The CVE description is written in plain language and gives an explanation of the attack, the impact of the vulnerability, what software is affected, and any other relevant factors for the vulnerability. The severity and characteristics of the vulnerability are captured by CVSS. References should include bug reports, analysis reports, and other announcements, and you should also include the CVE identifier.
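As a rough illustration of the CVSS 3.1 notation (this is a made-up vector to show the format, not the exact metrics behind the 5.4 example on the slide), a vector for an adjacent-network attack with a high availability impact could look like this:

    CVSS:3.1/AV:A/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

    AV:A       attack vector: adjacent (Wi-Fi, Bluetooth, local subnet)
    AC:L       attack complexity: low
    PR:L       privileges required: low
    UI:N       user interaction: none
    S:U        scope: unchanged
    C:N / I:N  no confidentiality or integrity impact
    A:H        availability impact: high (denial of service)

Run through the official CVSS 3.1 calculator, this particular vector should come out around 5.7, still in the medium range; changing any single metric moves the score, which is part of why CVSS numbers are best read as a rough characterization rather than a precise measurement.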
The CVE program is sponsored by the U.S. federal government and the MITRE Corporation. The CVE program and MITRE are often confused, as MITRE plays a large role in the history of the CVE program. Looking at this graph, you'll see that there are spikes in the number of CVE assignments. Initially the CVE board members voted on the assignment of every CVE; then, in 2005, MITRE was allowed to assign CVEs on its own. Then, starting in 2017, certain organizations like Canonical could also become CVE Numbering Authorities, or CNAs, who are trusted to assign CVEs directly. Today there are many CNAs run by distros, programming languages, and software vendors. But you don't need to be one of these organizations to request a CVE. Anyone can request a CVE from a CNA or make a request through MITRE. If you're unsure, always check if you have a dedicated CNA first.

Now, not all CVEs are valid. If you request a CVE, make sure it's a real vulnerability. This is an example of a misassigned CVE for the version control software Git. In this example, a researcher thought there was a vulnerability in Git, so they contacted Git security. After being told there was no vulnerability and that the Git features worked exactly as intended, they requested a CVE assignment anyway and wrote a cheeky blog post. This bad information has caused headaches for developers and downstream maintainers who now have to deal with a misassigned CVE. Here's another example, where there's a bug in the Xorg display manager. The bug is real, but it has no security impact, and it was later rejected as a CVE. InfluxDB documents that their database should not be run publicly without an authentication layer. If a server administrator runs the database publicly on the internet without authentication, then the administrator is responsible for the break in security, not the InfluxDB developers. It's always wise to talk to an upstream developer before assigning a CVE. And as a developer, you should clearly document what you believe is and is not a vulnerability. Lastly, this is a short example where a vulnerability was found in Node SQLite 3, but it was assigned to the upstream project, SQLite.

Most CVEs are not misassigned, and misassigned ones are usually corrected once they're found out. CVEs are extremely useful, but not infallible. CVEs are the primary structure most organizations use to monitor the security of their packages. If there's a software vulnerability in your project, you want a CVE assigned so that you can communicate it to your users. Are there any questions so far?

Yep, so a CNA is a CVE Numbering Authority. So, like, Python, I assume Python is a CNA, Canonical is a CNA. At Canonical, we have some software that we maintain, and if there's a vulnerability in it, we want to assign the CVE and help coordinate the response to it. So, a lot of jargon.

Just a follow-up question there, after "what is a CNA": if Canonical is a CNA, do they have any namespacing as to the CVEs they can release, or could they do it for anything? There are scopes. Red Hat is a root authority for all open source projects, so they are scoped very broadly, covering every open source project. At Canonical, we will assign CVEs for things that directly impact software we have developed, or things that we are actively auditing. So if we audit a piece of software and we find a vulnerability, we'll usually assign a CVE then, or if there's something assigned against our projects, we'll assign a CVE.
And I kind of guess that the early CNAs had a very broad authority and over time it's been restricted more. Okay, thank you. Yeah, I don't know too much of the specifics; MITRE kind of assigns everything.

I'm abusing the rules here a little bit, this isn't a question so much as a response, but MITRE is sometimes referred to as the CNA of last resort. If you can't find anybody else, but it's a vulnerability in software, you can always report it to them. Historically they did them all, but of course, as the number of vulnerabilities found increased dramatically, that was not tenable. So they've been trying to federate it as much as possible so that it doesn't get stuck. Thank you.

So, proactive FOSS security. Please raise your hand if you've heard of a security policy. That's good. A security policy is the most important security document a software project can have. Without it, security bugs might be reported publicly or not at all, so please write one. A security policy doesn't have to be fancy. It doesn't have to be formal. Just take five minutes and explain how you want others to contact you, or extend it to include timelines to set the reporter's expectations. LXD's security policy is a model example for writing policies, and they even have a video detailing their security.

GitHub recently added a new feature, still in beta, for reporting vulnerabilities privately. To use it, you simply go to the security settings of a repo and enable private vulnerability reporting. When a reporter submits a private vulnerability, they fill out an intuitive template like the other kinds of bug templates you'd see on GitHub. This creates a private issue tracker and you're able to talk to the reporter. GitHub staff will also assist your project and, on request, can assign CVEs. So my one takeaway is: write a security policy with a way to privately contact your project.

You may also want to look for vulnerabilities on your own. Static analyzers are tools which help identify bugs and vulnerabilities. There are many kinds of static analyzers for different tasks and languages. Fuzzers test how programs respond to invalid or random input. In this example, one of my colleagues is fuzzing their own projects on GitHub when they have spare GitHub Actions capacity. In the resources slide of this slideshow, I have links for more information on static analyzers and fuzzers. Bug bounties are well known, but should be taken as the last step. If your code hasn't been reviewed before, you might be overwhelmed with the number of reports, especially if you are paying for discoveries.
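To make the "write a security policy" takeaway above concrete, here is a minimal sketch of what a project's policy (a SECURITY.md file, for example) could say. The contact address, the acknowledgement window, and the 90-day target are placeholders to illustrate the idea, not recommended values:

    Security Policy

    Please do not report security issues in the public bug tracker.
    Email security@example.org (PGP key below) with a description of
    the issue, the affected versions, and steps to reproduce.

    We will acknowledge your report within 7 days. We aim to release
    a fix and publicly disclose the issue, with a CVE, within 90 days,
    and we are happy to coordinate the disclosure date with you.

Even a couple of sentences like this tell a reporter where to go and what to expect, which is most of the battle.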
Are there any other questions?

So, when a hacker, security researcher, grad student, developer, or someone else finds a vulnerability in your code, what can they do? They could keep the zero-day vulnerability secret for themselves. They can file a public bug report. They might demand money. They may break the news on a mailing list, or they might privately report the vulnerability to your project. Whether you like it or not, the discoverer has the ability to publish vulnerability information however they want. There are many models for making private vulnerabilities, or zero days, public. One extreme is full disclosure, which means that vulnerabilities become public knowledge immediately. This could happen when reporting an issue to a public bug tracker. On the other extreme is private disclosure, where the upstream project, you, receives a vulnerability report and either takes no action or silently patches the vulnerability without notifying affected parties. This is sweeping the problem under the rug. These extremes put users and your reputation at risk. Coordinated vulnerability disclosure is the happy medium between these two, where you're able to work on the vulnerability and roll out patches to affected parties before release.

Communication is key. When you receive vulnerability reports, always keep communication with the reporter open and positive. A credible vulnerability report means that the reporter wants the security issue solved. It means they want to work with you. It's your opportunity to steer disclosure. To be open and positive, you must accept that bugs happen and some of these bugs are security bugs. Admit vulnerabilities and own them.

Vim is an example of owning vulnerabilities. Vim is a popular text editor that many programmers and writers use. The author, Bram Moolenaar, has been running a bug bounty since September 2021. In a one-year span, over 130 bugs have been found and roughly half have been assigned CVEs. Bram is doing a phenomenal job of protecting his users by owning and addressing these issues. On the other hand, some projects do not assign CVEs or warn others that there are vulnerabilities in their code. Kitty is a feature-rich terminal emulator. Because Kitty does not assign CVEs or even flag vulnerabilities as security relevant in their changelogs, I, as a downstream package maintainer, do not know when security issues need to be patched. Kitty's attitude is that every user should use the latest version of their software to be secure. Many of Kitty's vulnerabilities are quite serious, like remote code execution, and users pay the consequences for Kitty's lack of a security policy. So be like Bram: own your bugs, own your vulnerabilities, fix them and protect your users.

This doesn't mean that every report is credible. Even when a report has no security relevance, stay positive and set clear expectations. A thank you should always be the first thing you tell a reporter. Your communication can impact future interactions.

Coordinated vulnerability disclosure is a very large topic, and this is a highly idealized example of CVD. Initially the researcher should reach out to the affected project or vendors and agree on a response. Often the two will agree on a coordinated release date on which to announce the vulnerability; a common example is in 90 days. Other parties may be involved to help mediate and coordinate the response. The time before the coordinated release date is called the embargo period, and no one working on the vulnerability is supposed to speak about it publicly. CVD gives upstream vendors the opportunity to investigate and prepare a patch, and lets upstream coordinate their patches with downstream projects. Using CVD as your vulnerability disclosure model protects your users, since multiple affected parties can release a patch on the same date. I highly recommend anyone interested in coordinated vulnerability disclosure to watch OpenSSF's Preparing for Zero Day from last year's Linux Security Summit. If your project needs to perform coordinated vulnerability disclosure and you want to notify Linux distributions, look into the distros mailing list.
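As an illustration of how that idealized flow can play out, here is a hypothetical timeline. The specific day counts are placeholders; the 90-day window is just the common convention mentioned above:

    Day 0      Researcher privately reports the vulnerability to the project.
    Day 2      Project acknowledges the report, thanks the reporter, confirms the bug.
    Day 7      Both agree on a coordinated release date (day 90); a CVE is reserved.
    Day 7-80   Project develops and tests a fix, with backports, under embargo.
    Day 83     Downstream distributors are notified privately (for example via
               the distros list) so they can prepare updated packages.
    Day 90     Coordinated release: patch, advisory, and CVE details go public;
               the embargo ends.

If the fix is ready early, the date can be moved up by mutual agreement; if it slips, that should be renegotiated with the reporter rather than assumed.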
Other communication tips are to be involved in the CVD process. A CVE's description is the first place most people learn about a vulnerability. As the developer, you likely understand the nature of the vulnerability well, so suggest the description for the CVE to head off miscommunication. Participating in bug reports for vulnerabilities is also helpful, as people investigating the patch will read them. Depending on the severity of the vulnerability, you may also want to announce the disclosure on a mailing list or on your website. Changelogs and release notes should always mention vulnerability fixes between versions and include the CVE ID.

Backports are ports of a patch from a newer version of software to an older version. Sometimes backports are simple, and other times they can bring in breaking changes. They have special significance to security maintenance, as downstream projects often cherry-pick security patches onto older versions of their software. A regression is when fixing one bug introduces new bugs. A regression could happen to an upstream when they are trying to patch a vulnerability, or to a downstream when they are backporting a security patch. As an upstream project, there are best practices you can follow to reduce the number of regressions when security maintenance is taking place. So write security patches with backports in mind to help downstream projects.

This is another example from Vim. In this commit message, Bram is clearly describing the problem and the solution. The commit message is even cleanly formatted and consistent with other Vim patches. Adding a CVE number in the commit message would make it even more useful. The patch is specific to the vulnerability and doesn't contain excess code. Refactoring code or changing style in a patch muddies up the git history and makes backporting more difficult. Bram even includes a test case, so that people backporting the fix can check that it worked.

Are there any questions?

So, I actually did not understand why Kitty's policy of just pushing new code and having users download the new code is problematic. And I specifically want to ask that because that's how GitOps works right now: the expectation is that you download whatever the latest and greatest is. Yes, so in the case of Kitty, they want people to upgrade as soon as there's a new version. So even if you're on a rolling release where you're always downloading new packages, you might not know that there's a new vulnerability, that I haven't upgraded in the last three days and need to upgrade now. So by not telling anyone, even on a rolling release you could be late. And a lot of distributions don't use rolling packages. So with Ubuntu, we have packages and we maintain a specific version for the lifetime of an LTS. So when there's a vulnerability in Kitty, unless there's a CVE, we don't know to take the security patch and backport it to older versions of Ubuntu. And this affects other distributions too. Does that help?

Yeah, so it seems like the expectation is that we're gonna be pushing updates and your client better be downloading them and polling our servers. So it seems to be related to the amount of time it takes for the project to do a release. Is that what changes the policy? Or, the question is, is that a dangerous policy? It is. It might be better to use an example like Apache or something like that. So corporations who use Apache, they wanna use specific versions of Apache because they know that their whole code base works with Apache 1.2.
And they don't wanna upgrade to the latest version because it's gonna create breaks in other parts of their infrastructure. So with a terminal emulator, you probably should keep it more up to date, along with Vim. So in the Ubuntu world, you might be using a snap to always use an updated version of Kitty or Vim. But if you have a large infrastructure that needs this version of Apache, you're not gonna be upgrading unless you're gonna change most of the rest of your server.

Yeah, just to steal the microphone for another question: it's a success problem. As you become a more successful project and start to appear in distributions, you now have more than one branch of active code. You initially begin with a mindset of everyone upgrades to the latest, because you only have one branch of active code, really. You only have one major version that you support, but you never tell anyone you've end-of-lifed everything else; it's just the way it works. And as you get into distributions, suddenly there's a tension. And it goes much like this: the distribution's unhappy, the project's unhappy, and they eventually get better and work it out. The question I was gonna ask is, you mentioned how the patch you have showing here from Bram would have been better with the CVE in it. How do you work it when the time between that commit going in and the release going out is not instantaneous, and therefore you would leak?

Oh, well, you shouldn't be posting the patch publicly until the release date. So you should prepare the patch, and you can share it with other people privately, but as soon as you make the patch public, the embargo is over and everybody knows. So you should have a CVE assigned before that. They might not, and they might post the patch and break their embargo, or they might not get a CVE until after the embargo, but ideally they should talk to their CNA, slow down, and put the CVE in there, because if you're doing security maintenance you're gonna be searching GitHub for CVE numbers, and if you have that in the commit, you, or anyone researching it, will find it really quickly.

So you're having some form of private branch off to the side where you're doing your security work, separate from your normal classic repository? Yes. And if I recall, the GitHub feature gives you a CVE branch to go work in. I didn't know about that feature, that's cool. I feel like it does, but I'm not sure; I might be getting tricked by naming conventions.
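To illustrate what a backport-friendly security commit message might look like, here is a made-up example in the spirit of the Vim patch discussed above. The patch number, CVE number, and details are invented for illustration:

    patch 9.0.1234: crash when completing a very long file name

    Problem:    Heap buffer overflow in file name completion when the
                name is longer than the fixed-size buffer (CVE-2023-XXXXX).
    Solution:   Check the length before copying and truncate the name.
                Add a test case that reproduces the crash.

A focused diff plus a message like this makes it easy for a downstream maintainer to find the fix by its CVE ID and cherry-pick it onto an older branch.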
Is it possible to reserve a CVE number while the details are still embargoed? It is. So, I do CNA work for Canonical. When we get a vulnerability or we get a report, we assess it and make sure it's a real vulnerability, and then we reserve it. And we usually reserve the CVE before we even reply to the reporter, because we wanna give them the CVE number as soon as possible if we think it's a real vulnerability. And when you reserve it, nobody knows what that CVE relates to. You probably wanna use random numbers if you have multiple issues in the same software, so that obfuscates it a little bit. But yes, you can reserve it.

The researcher... sorry, this is for the people at home. So, reserving that CVE ID ahead of time and providing it to the researcher helps build trust and cooperation with the researcher, because depending on what their ultimate public disclosure intentions are, they might wanna write a blog, they might wanna make beach balls with logos, who knows? But it lets them prepare. So it's a form of cooperation and goodwill towards them, saying that on the public disclosure date, this will be how everyone talks about it. Exactly. And just to add to that, a lot of security researchers want to have a history of CVEs to show their work and their importance. It's like a resume for them. I've heard it called CVE coins recently.

So going back to the Kitty thing: in the case of Kitty, or packages that have security policies like this, are distributions more likely to pull those packages out of their repos in order to protect their users? In certain cases we will do that. There's been a recent discussion at Canonical about pulling a specific package; I can't speak for other repos. But this is a case where, if you're gonna use Kitty because it has all these bleeding-edge things, and if you watch their development it's a little haphazard in my opinion, you'd probably want it in something like a snap so you're always getting the recent version of it. And if it's in a snap, it would be like a rolling-release binary. So in the case of Ubuntu, they might pull it out of the apt repo but keep it in the snap repo, so there's only one way for people to get it and maintain the safety. In Ubuntu, we have two repositories: the main repository and universe. We support updating the universe packages, but we have a higher quality bar for the main repository. So if I was reviewing Kitty to bring it into the main repository, I probably just wouldn't allow it. But for certain software we're currently considering, do we wanna keep this even in the universe repo? Gotcha, thank you.

Just thought I'd follow up, as I gave you some I-don't-know info earlier. Yeah, when you use GitHub's security reporting process, they have a "create a temporary private fork" option. So you can go off and work there, do the nice commit, and then fold it back. That's a really nice feature. That's true. Yeah, I think it's pretty new.

So I just wanna give a quick shout-out to the Ubuntu security team. I've been there for a little over a year; they've been great mentors, especially Seth Arnold and Jay Bosberg. And then FIRST, OpenSSF, and MITRE (and I should have CERT listed here too) have wonderful guides and will answer a lot of questions. And a huge thank you to GitHub; there was an issue with the private vulnerability reports that they fixed before this talk, and I appreciate it a lot. If you download these slides, there are links if anyone wants to follow up. And if there are any more questions, I'm available.

So, in the CVE database there is the CVE, the number, the CWE, which is the weakness description, and I forgot the third one. CVSS. CVSS, the severity score. Yeah, what about the software that it affects? That seems to be the thing that everyone is looking for, but how does a security researcher report something like that? So in the CVE metadata, when you're submitting this to the CVE program, you must say what versions of the software are affected, and usually people will say it's affected up until this point. So that will be in the metadata, and it's not so much on the researcher to come up with that, but on the project, to determine, okay, this is the version that's affected, and ideally even find when the vulnerability was introduced.
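To give a rough idea of what that affected-version information looks like in practice, here is a simplified sketch in the spirit of the current CVE record format. The vendor, product, and version numbers are invented, and the field names are from memory and simplified rather than an exact copy of the schema:

    "affected": [
      {
        "vendor": "Example Project",
        "product": "example-lib",
        "versions": [
          { "version": "0", "lessThan": "2.4.1",
            "status": "affected", "versionType": "semver" }
        ]
      }
    ]

The upstream project is usually in the best position to fill this in, ideally pointing at the first fixed version and, when known, the change that introduced the flaw.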
So it's not a question of the researcher looking for a project to poke at; it's more that they're just looking wherever, and then they find it, and then they identify where the code is. But then who do they report it to? Do they report it to the organization that's hosting it, or the organization that they find? How do they find that information? So that information should be in the security policy, and the discoverer of the vulnerability, the reporter, can come at it from a lot of different directions. They might be doing this for a living and make their money off of bug reports and bug bounties, or they might just be some random person who found something and thought it was interesting. And so hopefully they'll go look for a security policy and then know that they can contact the project who develops the code through the security policy. But they might not know to do that, and just post it on a forum or something like that. So, a lot of different directions.

So what I meant was, before that, when they find the vulnerability and they're looking for who to report it to, where do they look? So ideally it'll be the security policy, and if you go to a GitHub repo, or most repos... Before that: they find the code that has the vulnerability. Do they then identify, oh, this code belongs to, I don't know, this program, and the name of the program is X, and they go and look for some project that has that name, something like that? Most likely. To report the vulnerability, well, to find the vulnerability, you probably need to know about the specific program that has the issue and how to attack that specific program. So they should know where it is. There are some cases where it could be more broad, like: there's this configuration, and if I do this thing, things break, and they don't know where to report it. But at least with Ubuntu, we have Launchpad, and you can create private vulnerability reports, and then we'd figure out, well, this is probably for this project. Usually, though, they'd have a good sense of which program, specifically, the code is in.

How do you navigate the case where a researcher is, as you say, doing it as their job, and it clashes with your policy, say Ubuntu saying they don't pay for those things? Yes, so, it's good to always be very upfront and set expectations with reporters and researchers. So for Canonical's disclosure policies, we don't have any bug bounties. Others, like Vim, have a bug bounty program, so that should be stated upfront. But then it comes down to working with the person, building trust. They might want to report it anyway, and it's their right to do with it what they want. If they do that, that's gonna put users at risk, and that might hurt them; they're not gonna be very reputable as a security researcher if they're hurting a lot of other projects, because others won't want to work with them. But more or less, people want people to be secure, and I haven't had an issue with that so far.

Sorry, a follow-up: do you find or know whether offering bounties ends up with safer code, or more code that is dealt with with kind of a finer-comb approach? If you're offering money to find problems, people are gonna find problems, usually. So with Vim, they're finding a massive number of bugs, and the severity of them isn't always that important.
You'd have to do something very specific for the code to really break, and it's not reasonable that someone could really break it, but they're getting assigned CVEs and we're patching them. So the code's definitely getting better and a lot of bugs are being fixed. So yes, but it's better to do your own audits first; more eyes is always better.

To your first question: if you're a maintainer or a project that is considering how to set up how you want to manage vulnerabilities, and you want templates for security policies, the OpenSSF Vulnerability Disclosure working group has a CVD guide for open source projects and maintainers, and in there we talk about some strategies for how to deal with researchers and some researcher motivations. There are a lot of different personas within the research community. And then, if you're super curious, we also have a CVD guide for researchers, on how to work better with open source projects. We try to give researchers advice on how open source projects and communities most commonly work, and try to de-escalate some of the tension that might be there. So those are two good free resources in the OpenSSF's git repo. And then for bug bounties, I don't know that there's a lot of scientific evidence of their efficacy. If you are a very large project and you don't have security expertise on your team, that is an option you can pursue. Those aren't free, but there are a lot of other ways that you could potentially, if you were looking for security help, get somebody to come assist you with an audit and be proactive about finding vulnerabilities. A bug bounty is a choice. There are vendors like HackerOne and Intigriti and others, and they have fairly robust programs; do your research before you enter into any contracts there.

If I could quickly add a couple of points. As far as bug bounties go, they certainly encourage the reporting of bugs. The question is, are they enough value for the money? And at least what I've been recommending to people, and I think this is consistent with others, is: try to get your house in basic order first. There are many reports of people who set up a bug bounty program and suddenly received 3000 reports, because somebody ran a simple tool and found 3000 things, because they hadn't done their homework. And you're spending a tremendous amount of money for very little value. So step one, try to do your homework: run some tools, fix the problems that you can quickly find, and now you're ready; a bug bounty can be helpful for finding the bugs that are hard to find other ways. And for a corporation like where I work, having a bug bounty program makes sense because, A, we sell things for money, so we have some money, and we like to not have our reputation damaged. We're not a free software company or a free software independent developer. So again, you need to understand what you're trying to seek, what your long-term goals are. It is an option. There's some distrust on both sides, from developers and researchers, about bug bounties, because historically some commercial entities have used bug bounties to hide vulnerabilities and secretly pay out under the table, so there's some distrust from the research community. So again, just understand what you want to do. There are a lot of options, and if you're a very mature project, it's something that can potentially put the icing on the cake of your security posture.
Somewhere I'd be interested in the presentation going next would be how a project decides who to share the information with. So I'm a project just being used by people directly off GitHub, and then Ubuntu decides to put it in the general universe space. I'm extending my trust out to Ubuntu, so I'm just picking you because you've got the logo. But that feels like a really difficult one, including your own employer. You get those ethical questions of: if my employer uses my software, do I tell my security team, or are they not within my embargo space? I think that's a complicated, weird space, and I'm guessing projects that do everything you're saying, and get their fifth, sixth notification coming in, suddenly start having to work out who do they trust, what's their circle? And an inverse of that, I think, also gets interesting. You mentioned SQLite and then Node SQLite 3, and this is an aside from your presentation really: when I look at Node SQLite 3, how do I build trust that they work with SQLite and that they are part of that same relationship? Because I'll look at some NPM packages and go, you just randomly put a binary in your package; I am not gonna use this, because there's no way you're ever gonna hear when there's a security issue, or even update. But I don't know that we've got any mechanisms to be aware of who works with whom, who is on whose embargo lists.

Yeah, so NPM would be a very difficult area to maintain, and SBOMs are gonna start filling that in. But there's the distros list: there's a group of distros, they're all on a mailing list, and all of them have been fairly well vetted. And if one of those distros started breaking embargo, like they're gonna release the patch a week early because they think their users are special, they will get kicked off that list. So the distros mailing list is kind of like the big one. But if you're a specific project, it's really gonna be custom to you: you probably know who your downstreams are, and you should be going through the work of figuring out who your downstreams are and who you're gonna talk to. And maybe you talk to the distros mailing list too; they have a lot of very specific hoops you have to jump through to do it correctly. But you'd be watching, like, how do they interact with you during the embargo process? And if you look at Android, everyone's breaking the embargo. So everyone has kind of insecure phones for a month, and then it's released, and it keeps on happening. So that whole supply chain isn't very secure; the open source software world is a bit better with that.

I think the distro space is definitely a special kind of foundational space at the bottom that is sorted out, and it's quite easy to build the trust for that group. But software in the distros is still only a minority of the software across open source. And so I think you're gonna get a lot of cases where it's, yes, I should trust this person, or I should bring this group in. But then how do I know, when I'm bringing that group in, that I'm not bringing in every member of that group? We almost need the equivalent of security policies for the embargo relationships, a notion of embargo. This is kind of where I'm going with the next step in your presentation: I think the project owner needs the notion of an embargo. In fact, even the word embargo I found really useful, for the project person to know to use with the security people.
Because they're like, oh, this is a magic word. It kind of implies that gentleperson's handshake you were trying to explain; you now have a word for it that makes sense. So I think there's a really good next step to a lot of this presentation, and the next step, I think, is teach us how to do that embargo thing. And, like, how do I know that Node SQLite 3 is getting those updates early on?

I don't know, maybe Rob can speak to this better, but as far as I know, there's not a public, standardized way to do that. As an individual, you could look at what's happening in SQLite and whether that's happening in Node, and you could start mapping that and figuring out which projects you want to trust. But it's a massive amount of work for an individual or an organization to do. I'm not following SBOMs too closely, but SBOMs will be a part of that, because then you can say, this CVE is known to affect all these packages, and then you don't even need to look at the downstream and upstream; you just have to look at the tooling.

I think you don't get the embargo from the SBOM; you don't get the relationship on the SBOM side. But you do begin to get a better picture of, this random person's FFmpeg executable in an NPM package, it's there. And then you go figure it out, and then you just distrust it. So you distrust Node SQLite 3 just as much as a random FFmpeg exe. And what I'm wondering is how you then build up trust in Node SQLite 3.

First off, I think that's an excellent suggestion for a future presentation: trying to help coach maintainers and projects on how they can institute this. To answer with some ideas today, so you don't have to wait till next year: you need to understand what your software project is about. You should understand who your downstreams are, not down to minute detail, just generally. Are you making a secure communication server or are you making a paint program? That will help you understand how much time you need to invest, how big is that bread box you need to put this into. And everything amongst the groups that we've talked about today, upstream open source security and coordinated vulnerability disclosure, is a highly trust-based network. It's knowing people; it is earning and building that trust within the community. And just as Mark said, the rules are not publicly published, but if somebody within the circle of trust breaks those rules, they are quickly ejected. Justice is severely meted out for those that violate things. So it's understanding what your software does, who your downstreams are, and what ecosystem you live in.

Now, if you are within, say, the NPM ecosystem, they are going through some efforts to improve security from a package-management-ecosystem point of view, and they have a foundation behind them. So potentially contact those people (I believe they're gonna be carving off dedicated security people for the NPM ecosystem); those would be good people to get a hold of and talk to privately, saying, hey, I have this problem, can we have a Chatham House rule conversation about a thought experiment, and kind of suss out from them how that community might best be able to share things under embargo. And then there are groups like the OpenSSF. The distros list has been around; the distros all operate very similarly and they all cooperate. So reach out to a big brother or big sister there, just to get some advice.
This security community in open source is very friendly from my perspective and really willing to try to help and uplift everybody inside it. And OpenSSF selfishly has some great resources too. Thank you and thank you for the discussion.