Well, I want to welcome everybody today to Madison and my little vulnerability cavalcade. We're here to talk about simplifying coordinated vulnerability disclosure within open source projects.

Hi, everybody. My name is Madison Oliver. I work at GitHub, and I go by @taladrain professionally. I consider myself to be a security transparency and disclosure advocate. I love vulnerability reporting and disclosure. I also love cats and World of Warcraft.

Nice. I'm CRob. I do stuff.

We're here today to talk about CVD within open source projects. We're going to cover some key concepts that are important for maintainers and researchers to understand in this space. We'll talk about why it's important, how to do it, and what makes it hard, and then we'll educate you on how you can help.

So, some concepts and definitions to start, just to lay out the land. Coordinated vulnerability disclosure, or CVD, is a term you might have heard quite a lot. It is the process of gathering information from vulnerability finders, coordinating and sharing that information between all relevant stakeholders in the process, and disclosing the existence of the vulnerability, the mitigations, the fixes, and all the relevant details to various stakeholders, including the general public.

There are a number of principles for coordinated vulnerability disclosure. The biggest goal is reducing harm throughout that whole process. You want to presume benevolence whenever possible: assume the best, trust by default. Try to avoid surprising other stakeholders in that process as well. And you want to incentivize the other party
to do the behavior that you're desiring from them, and we'll talk about that a little bit more later as well. There are a number of ethical considerations and process improvements in this space.

If you're old like me, you might have heard the term "responsible disclosure." That's how this process used to be referred to, and it has dropped out of favor because of the implications of the word "responsible": it carried a kind of value judgment, and we just want to focus on the coordination aspect. So you may still sometimes hear "responsible disclosure," but today it's called CVD.

There's a term that frequently goes along with these disclosures: "embargo." This is a period of time when the issue is kept private. Typically, when a reporter or a project finds an issue, they work very quickly to try to get it addressed, and during that period you're not sharing things publicly. Sometimes, based on the complexity of the issue, a software maintainer may include other people in their project, or other projects that might be needed to help create fixes or at least help coordinate as things are going to go public, and they'll be read into the embargo. But this is generally a time when the information is secret and not publicly known.

The end result of a coordinated vulnerability disclosure process could be something like a security advisory, which is an announcement or bulletin that serves to inform the general public. This is one of the most popular ways, I'd say, to disclose vulnerabilities publicly: by creating and sharing a security advisory. Here's an example of what one might look like.

There are a couple of other terms that go along with advisories today. CSAF is the Common Security Advisory Framework. This is the direction most vendors are working towards for publishing advisories, in an electronic form.
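To give a taste of what that electronic form looks like, here is a heavily trimmed sketch of a CSAF 2.0 advisory document. The IDs, names, and CVE are placeholders, and a conforming document requires more fields (such as revision history) than shown here; check the OASIS CSAF 2.0 specification before publishing.

```json
{
  "document": {
    "category": "csaf_security_advisory",
    "csaf_version": "2.0",
    "title": "Example advisory for CVE-2023-12345",
    "publisher": {
      "category": "vendor",
      "name": "Example Project",
      "namespace": "https://example.org"
    },
    "tracking": {
      "id": "EXAMPLE-SA-2023-0001",
      "status": "final",
      "version": "1.0.0",
      "initial_release_date": "2023-05-01T12:00:00Z",
      "current_release_date": "2023-05-01T12:00:00Z"
    }
  },
  "vulnerabilities": [
    {
      "cve": "CVE-2023-12345",
      "notes": [
        {
          "category": "description",
          "text": "Buffer overflow in the example parser."
        }
      ]
    }
  ]
}
```

Because the format is machine-readable JSON, downstream incident response teams can ingest advisories automatically instead of scraping prose bulletins.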
This evolved from an older standard called CVRF, the Common Vulnerability Reporting Framework. So most of your vendors, corporations, and governments will steer towards releasing via CSAF. That's not necessarily what a maintainer or small project would use, but it's an option; you have the ability to use those standards.

And then VEX is something new. This is a new idea: it's a way for a developer to report whether a piece of software is affected by a vulnerability. There are a couple of different states, like "under investigation," "affected," "not affected" (and why), and "affected, and here's how to get the remediation." So VEX is a newer, emerging standard, and there are a couple of simple ways that you can issue a VEX statement to keep your constituency informed of the status of your investigation as issues become public.

A vulnerability disclosure program is a structured process that might exist at a company, an organization, or a large project. You as a maintainer might be running your own vulnerability disclosure program; you as a security researcher might be interacting with a project's vulnerability disclosure program. These programs typically have very clear guidelines on how and where third parties can notify the overarching project about a security vulnerability, how you as a security researcher are expected to conduct good-faith research, and the process that you can expect, usually including time frames. So this might be where you would see some language around embargoes, when information will be disclosed and what that looks like, and how your report will be evaluated by the project. Some examples of this would be the kernel security team, the Kubernetes security team, the Apache security team, or Red Hat Product Security.
Those are very formalized, either vendor-based or project-based security groups.

A twist on this is the bug bounty program. This is sometimes done through a vendor like HackerOne or Intigriti, but it's a program that is publicly shared for researchers to come in, and they are paid for their findings. If you're a researcher, there are a lot of caveats and rules, so make sure you read the instructions about what the rules are for participating in that program. But this is a way that, traditionally, security researchers are invited to share their findings with an entity and get the issue fixed; the program will potentially provide assistance with the coordination, and they will pay the researcher. Sometimes there are incentives on the back end to fund developers as well. So this is a flavor of a VDP that, if you're a researcher, or if your project doesn't have a formalized team, you could explore.

Safe harbor is a concept that is often thrown around a lot as well. Outside of information security, safe harbor is something that offers protection from liability in certain situations, under very specific circumstances. Safe harbor first really came about in copyright law, where it has existed for years. In the context of security research and vulnerability disclosure, safe harbor is an ethical statement from the organization asking for vulnerability information, giving you, the person reporting that information, assurance that you will not be subject to legal action from the organization. So this is meant to be a protection for the person sharing vulnerability information, who has done that research and found that information, so that they are safe.

And for any researchers in the audience: not every organization provides a safe harbor protection. So if you're going to report to an organization, you probably should understand how they
interact with the research community. You also need to understand what your local regulations and laws are, because some countries require additional steps or do not afford protections against reverse engineering. So just be aware of what the rules of the road are. And I think we both agree safe harbor is the way ethical organizations should conduct themselves when working with researchers. Yeah, and keep in mind your own local laws where you are operating as a security researcher, but also keep in mind the laws that might be impacting the company, who might be located in another country.

All right, so now we're going to take a step back and talk about why CVD is important. This is a little tour through the years in the Wayback Machine; many of you may recognize some of these celebrity logos. This is a string of both open source and proprietary vulnerabilities, starting off with our dear friend Heartbleed, which kicked off this trend of celebrity naming of vulnerabilities back in 2014. This is a trend a lot of researchers use, and it's an easy mnemonic: oh, you see the bleeding heart, that's Heartbleed. But as you can see, these types of celebrity events also generate a substantial amount of media, government, and customer interest: what is this scary thing on the news, on the front of the Wall Street Journal, or on The Register? It up-levels the pressure within your vulnerability disclosure if something like this happens, if it gets branded. Go to the next slide. There we go.

So CVD helps us: it ensures the software maintainers have access to the resources they need to analyze, test, and fix that reported vulnerability as somebody reports it to them, whether it's an academic or professional researcher, somebody on the project, or just an enthusiastic community member. It takes time to understand what this bug report is.
I need to reproduce it; I need to understand what the impacts are. That's why CVD is important: so that the developer can create a correct fix before it goes out publicly. And the intention is, as all these fixes are developed, sometimes you need to read in additional parties, authorized individuals. It could be because somebody has specialized skills in testing or analysis, or maybe you're part of a very large ecosystem, like OpenSSL, where there are thousands of downstream communities that depend on you, so you need time to help stage that information and those fixes so that when it goes public, all consumers have access to that information at the same time.

CVD can take a number of different forms. There is bilateral coordinated vulnerability disclosure, and what I mean when I say that is two parties, the reporting party and the receiving party, working together: nobody in the middle, nobody else involved, just the two entities. As you can imagine, that happens less and less often, especially in open source; very rarely is there only one other party involved, or only one other party who needs to be notified. What happens more often is multi-party vulnerability disclosure, where a number of other stakeholders are involved.
It's more than just one-to-one; it's really one-to-many, or even many-to-many. Oftentimes there's not just one researcher looking into a vulnerability, or one vendor or project impacted by it. So coordinated vulnerability disclosure really aims to bring together everybody who needs to be involved, who should be involved, as early on as possible.

There are a lot of benefits, in my opinion, to coordinated vulnerability disclosure in open source. It gives the opportunity to add vital skills and capacity to the remediation process, for both the researchers and the project maintainers and developers involved. It allows for broader regression testing and patch review prior to public disclosure. If you are working together as a maintainer with a researcher, especially in open source, you often have the ability to share the patch with them and ask them to test it. They have spent so much time already looking at this vulnerability, looking at your project, looking at this code, that if you have the opportunity for that level of collaboration with them, it can ensure that the patches are maybe even better than you had originally intended.

Ecosystems can also prepare and stage these patches and documentation to share with all downstream consumers at the same time. The coordinated aspect of vulnerability disclosure is really about ensuring that everybody finds out at the same time, with the same information. Everything is very clear, very obvious: users know what to do, researchers know what to do, the project knows what to do. And everybody gets this notification, typically and hopefully, when patches are released, so that your end users have a direct action they can take: install this patch. That's the goal.

All that coordination, all that testing, and understanding these complex security issues that are discovered can be hard, and a lot of different reasons go into this. A lot of times, open source is as varied
as the colors of the rainbow: every project tends to handle things differently. They have different programming languages, different testing infrastructures, different ways they engage with their communities. And a lot of times, projects don't accurately convey to the public and their downstreams how to get ahold of them or how they manage security bug and defect reports. So it is sometimes hard for an external party to contact these entities.

And even when you do find the right person: open source typically comes with a "buyer beware," use-at-your-own-risk posture. There is never a warranty, though sometimes there's a support arrangement as part of it ("we will support n-1 versions of the software"). But oftentimes maintainers either retire or move on to different work, and there's no one actively maintaining the project. So when you're mailing a mailbox that is ten years old, potentially there's no one there to pick up the other end of that message.

And open source is all about agility and speed. Coordinated vulnerability disclosure sometimes takes a long time, to make sure everyone's ready when it's time to go, and that doesn't always match up with the timelines a researcher requires or the release schedule of a project. And then, not every developer has the skills to be a security expert. Sometimes they need a friend to help out.
Maybe there's a different person on the project, or they might need to get external help in writing a patch. Not all open source projects have the same level of capabilities and process.

I wanted to talk for a moment more generally about why coordinated vulnerability disclosure is difficult, even beyond open source. I am a huge advocate of reminding folks that behind all of the technology we interact with is a human being (for now, at least). Vulnerability disclosure is truly a human process: it is a person talking to another person. So disclosures can go awry for lots of human-related reasons, like unavailability, inability, emotions, thoughts, feelings. All of that truly does come into play here, and can be a reason why it might go awry, and can also be a reason why it goes very well. It's very important to remember the motivations of all the parties involved, and to give people a little bit of grace. Everyone involved in a coordinated vulnerability disclosure process is typically doing so for good, altruistic reasons, so show some empathy and be understanding; these things can take some time. Just remember who you're talking to: for now, at least, it's still a human being.

All right, so let's get into some of the specifics of how to actually do CVD. Fun fact: Madison and I get the opportunity to collaborate together in a little group called the Open Source Security Foundation, where we're members of the Vulnerability Disclosure working group. This is an elite group of open source community members, vendors, researchers, and academics, and we get together and talk and think about how we can help the vulnerability disclosure process within open source get better over the years.
We've produced a couple of different CVD guides. We currently have a CVD guide for open source maintainers and projects: if you don't have a VDP or a bug bounty program that you pay for, we give you some tools, techniques, and guidance on how you can adopt these good practices within your project, with some tips and templates. In addition to the guide for maintainers, we've also recently released a guide for reporters. So if you are a reporter who commonly shares vulnerability information with open source projects, and you want some tips or best practices on how to share this information, who to share it with, when to do it, what is expected of you, and what you should be able to reasonably expect from them, we have a guide listing all of that as well. All of this is available on GitHub. Right, I've heard of it.

So let's talk about some things that a maintainer on a project can do to be successful through a CVD process, through the venue of memes.

First and foremost, I think the single most important thing a project can do to have a successful vulnerability disclosure interaction is publishing what your process and policy are. You don't necessarily need the exact same rules and processes that a large project like the kernel or Kubernetes has; you might not be that concerned about triaging security defects. But the most important thing you can do, so that you have a positive interaction, is write down what you, your project, and your community will do and how you will react.

Establishing the security team within your project before you receive an incident or a vulnerability report is also incredibly helpful; I cannot recommend that enough. Not every developer on your project, not every maintainer, is a security-ologist. They very often might not have the skills needed to respond to an incident. So identifying people in your project that do have that capability, or reaching out to other community members to fill that need when that
need arises, is incredibly important. And having those channels, those communication channels, those avenues set up before you need them will also really, really help.

Then, within the industry, there is a thing called CVE. CVE used to stand for Common Vulnerabilities and Exposures, but that was too confusing, so now it just stands for itself: CVE means CVE. Thank you, MITRE. Within that ecosystem, a CVE is an identifier of a unique security vulnerability. It was created because we have many different Linux distributions and many different people ingesting open source software, and we should all share a common vulnerability identifier, a number, so that downstream consumers and other developers can understand when we're talking about the same thing: CVE-XYZ means the same thing to the two of us.

Within this structure there is an entity called a CNA, which is a CVE Numbering Authority (not Naming; bad CRob). These are organizations that are kind of like the big brothers and big sisters of the ecosystem. Normally a vendor will be a CNA for their own product set, but within the open source ecosystem, organizations like GitHub, Red Hat, and Google can act as CNAs for parts of the broader open source community. So even though your project may not have the ability to issue its own CVE identifiers, you can find a buddy within the ecosystem to do that for you. They've made some strides to make the process of getting an identifier easier, and there are other routes too: you can go straight to MITRE, or you can go to one of these big brothers or big sisters.

Setting up a means for private intake of the vulnerability reports you are hoping to receive, again before you actually receive them, will be incredibly helpful. A reported vulnerability is truly a threat to your users if the software is left unfixed. So establish a private way so that external entities can
share this information with you, without them just opening an issue on your repo ("zero-day!") and sharing it with the entire world at the same time they're sharing it with you. That gives you, as a maintainer, the ability to respond to it, have that collaboration and communication back and forth, and develop a fix before you share it with the broader community. If you don't have a way to privately receive this information, a reporter is much more likely, or inclined, to share it publicly by just filing an issue on your repo, and that might not be what you want. And there's a significant number of bad actors, security researchers, and community enthusiasts constantly monitoring your software and any change, any issue or PR submitted. If it isn't protected through private means, they will potentially see it and be able to reverse engineer it before you're able to develop a fix.

You want to do this one? Sure. All right, on the same sort of thread: having a way for you, as a maintainer or developer of a project, to create your patch or your fix in a private way is also incredibly important. There are so many folks watching so many repos, and every single activity change on them is noticed. So having the ability to create this privately within the developers of your project, again, allows you to be proactive and gives you a way to handle this before it goes public.

Mm-hmm. And then, if it's part of how you wish to conduct your vulnerability coordination activities: establish that embargo list.
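Pulled together, most of what we just described (the published policy, the private intake channel, the disclosure expectations) can live in a single SECURITY.md at the root of your repository. This is only an illustrative sketch; the addresses, version numbers, and timelines are placeholders you'd replace with your project's own commitments.

```markdown
# Security Policy

## Supported Versions

| Version | Supported           |
| ------- | ------------------- |
| 2.x     | yes                 |
| 1.x     | security fixes only |
| < 1.0   | no, end-of-life     |

## Reporting a Vulnerability

Please do NOT open a public issue or pull request for a suspected
security bug. Instead, use one of these private channels:

- the "Report a vulnerability" button on this repository
  (GitHub private vulnerability reporting), or
- email security@example.org.

We will acknowledge your report within 5 business days. We aim to
publish a fix and an advisory within 90 days, and we will coordinate
any embargo extensions with you before disclosure.

## Disclosure

Fixes are announced through GitHub Security Advisories and our
announce mailing list, and we request a CVE ID for every confirmed
vulnerability.
```

Keeping all of this in one well-known file means a reporter never has to guess where to send a report or what response to expect.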
It's easier to do it ahead of time. Understanding that, you know, CRob should talk to Madison, because Madison is a heavy partner and collaborator with my software even though she's on a different project. It's important to get that established ahead of time so that everybody, again, has the ability to prepare and help assist the end consumer of whatever your software might be.

Determining how you, as the project maintainer, or you, as the security researcher, are going to communicate this disclosure to the community is also incredibly important. There are a number of different ways you can do this. Truly, the easiest is to just publish it on an email list, but any way to get this public works: an issue, a PR, a security advisory, blogs, commits. There are so many different ways to share this information with your downstream users. It's most important to figure out and determine beforehand what your users are expecting: you want to share this in a place they're actually looking. It doesn't do me any good to share this on an email list if none of my developers are monitoring that list; that truly isn't helpful. So again, determining all of this as early as you can, before there is an incident you have to respond to, will help ensure that when there is an incident, it goes as smoothly as possible. And this is something you would put in that security policy we told you about in the very first step.

So those are some of our tips and tricks and an overview of the landscape. From the audience here, what questions or comments do you have? Do you think you're CVD experts now? Going to go out and coordinate some vulnerabilities? What can we answer for you?

It happens every minute of every day. The question was: how likely, how frequent is it that bad actors are monitoring GitHub, GitLab, your repositories, or mailing lists? It happens all the time, and it depends on the level and capability of the threat actor. It is very simple: through
It is very simple through Tools I can scrape a website and then grip through it and there's more sophisticated ways But this happens all the time and it's based off the sophistication of the threat actor If it's a nation-state or organized crime Those people have professional developers that have figured out this problem They understand what your pros are interesting for them and they'll monitor it me even though You might not think your software is interesting Chances are there is someone that sees interest in figure is going to figure out a way how to exploit you in your downstream One of the ways that this ends up being seen a lot in open source Too is this concept of a drive-by CVE where the maintainer or Reporter or somebody who was involved in the coordinated vulnerability disclosure process did not request a CVE for one reason or another Somebody external to that sees it because all of that information is public requests the CVE for it Maybe unbeknownst to the maintainer maybe with their own thoughts feelings or severity Determined that might not align with your your disclosure. So that's that's something I've seen very common Gentlemen back there and then the gentleman in the brown shirt next So the question was around what happens with vulnerabilities discovered and reported to end-of-life software Do you want to start let me start I'm happy to start. So part part of your security policy might be We don't maintain this software below this version that is considered end-of-life. Therefore, we will not fix it So if somebody were to report something to you and you have clearly stated in your policy This is end-of-life. I will not touch it. You can you can default to that and say hey I I said I was not going to address this The the important thing is still that that information does end up being disclosed publicly, right? Not everybody updates to the latest version. 
Unfortunately, there's a lot of very old software out there that is still in active use. So while you as a maintainer might make the decision not to fix something because it's end-of-life, I personally believe it's still important to share that information publicly, and the researcher might too. Yeah, it would maybe be a difference in expectations. And from a project standpoint, it's critical to state how you wish to support this software. Some people, like those who are part of a large foundation, have more resources and longer support tails; others don't. From a researcher perspective, you should do your best to understand what that project's life cycle is. There are other methods: you could go through a coordinating body like CERT/CC or JPCERT/CC, depending. There are ways to do it, but at the end of the day, not all software is supported forever, and if the researcher is not getting action on their requests, you are perfectly entitled to help warn the public and those consumers.

And from a researcher perspective, feel free to talk to Jonathan. He has a lot of experience in this, and he actually works within the OpenSSF to help try to create some new norms on how to do mass vulnerability-finding campaigns. He was actually instrumental in helping us shape the finder guide. Thank you.
Do you see the CNAs eventually adopting purl, or OSV, as a default, first-class-citizen identifier in these CVEs, so that we can better consume this information?

So, today, I do not see that happening. In the future: the way CVE works is that it's controlled by an organization called MITRE, which is a US-based entity, and as a CVE is issued, it goes out into something called the National Vulnerability Database, which is a public resource. Just last week, actually, MITRE and the NVD team stated that they are looking for both private entities, like corporations, as well as open source, to collaborate on how they might be able to evolve the NVD. There are other standards you can use: there's OSV, which is an Open Source Security Foundation project that kind of federates vulnerability identifiers, and GSD, the Global Security Database, which is another open source vulnerability database. So there are other ways you can get an identifier, which as a researcher and as a consumer you probably want to take a look at. But in the future there will be steps: our OSV team within the OpenSSF is going to be getting on NVD's list to talk to, because we've actually had a lot of conversations. CVE and NVD work very well with classic, legacy, large-corporation software; they're not quite as compatible with open source's agile, iterative processes. So I think we'll be able to find a better partnership, but I don't have a timeline on when. In the future, yes; and there are alternatives you can take a look at.

Thank you. Way in the back. So the question is: are we aware of any bad actors, or researchers, using artificial intelligence tools like ChatGPT to find vulnerabilities or exploits? Officially,
I am personally unaware. But being a 25-year security person, I can guarantee people are; if it's on the internet, somebody's going to use it and try to break it. There's not a large body of evidence or research yet, and that's not my personal area to look into, but I'm fairly certain that if they are not, they very soon will be, because it's a very useful set of tools.

Can you hear me? Yes, yes, you can. Justin Murphy; I hesitate to say it, but I work for the U.S. Government. I work with Allan Friedman on the SBOM and VEX work. Oh, now is he going to show up? Yeah, you said the word. That's why I'm here, actually. But CRob, I know you've been involved with some of our working groups, and I appreciate your input all the time. I also happen to work for the branch that handles coordinated vulnerability disclosure, and we'd love to hear from you, or anybody in the audience: what are some things that the U.S. Government can do? We'd love to hear what we're doing well, but more importantly, what are some things we could do better, and things we could do to perhaps incentivize coordinated vulnerability disclosure and be more approachable, and things like that? And I do have a second question about the working group. I'm curious: is it open to U.S. Government participation? If we had a member from our team participate in the Vulnerability Disclosure working group, is that possible?
It's like we planted that question. Yeah, so, to the first question: I have a whole team of excited people that would love to talk to you about how we can better work together to match the speed of open source, and find some middle ground between Classic and New Coke, for example. Secondly, every OpenSSF working group is completely open to anyone that's interested in participating. There are some additional things you can do as a paying member, but a lot of the people that participate in our working groups, and especially this working group, because we get a lot of academics and researchers popping in, it's totally open to the public. All our meetings are public, every meeting is recorded; you can watch thousands of hours of Zoom calls on YouTube, if that's your thing. I partner with Allan a lot, and Allan does collaborate with some of our other groups; he participates in the End Users working group quite a lot. But if somebody was interested in collaborating with that specific group, we would love that. We would love that from everywhere; we talk with Thomas over in Germany quite a lot as well. Anyone that's interested, we welcome your collaboration. Yeah, I appreciate that. Never hurts to help. Yeah.

Actually, just a comment related to that previous question about ChatGPT. We at Snyk disclosed three vulnerabilities with a score of 5.4, and they were not on the NVD at the time; it took about a week. And it was fun just to ask ChatGPT, "What score would you give to this?", copy-pasting the exact description, and ChatGPT gave a 7.5. So the verdict was a high severity. A week later, on the NVD,
they were 7.5.

I also participate in the CVSS working group; I participate in another organization called FIRST, the Forum of Incident Response and Security Teams, which helps curate CVSS. The single greatest failure of CVSS is that a human does it, and it can be a little subjective. They try to mathify it, and it does a good job, but CVSS was never meant to be a description of risk; it was meant to describe how this problem works and give you a kind of severity. It depends on who the analyst looking at it is and what their knowledge of the particular packages is. This is a fight I used to have all the time at Red Hat, because Red Hat would perform one analysis, other people would perform a different analysis, and our customers were caught in the middle. It really depends on who's doing it: what context that analyst has, how experienced they are. Different analysts may see the problem differently: one might say that scope is changed, another that it's unchanged, and you really get into it there. Some people actually reflect on things like the temporal space, considering other compensating controls that should be in place. So yeah, you will see some variance in how different parties score, and you'll even see variance from the same analyst potentially scoring things differently. Yeah, that's very common, honestly. Evaluating severity for a vulnerability is very subjective. The CVSS spec truly does the best it can to outline how you should do that, but much like a comment we made earlier, it was maybe not initially designed with open source in mind.
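One point worth underlining about that subjectivity: once an analyst has chosen the metric values, the CVSS v3.1 base score itself is purely mechanical. The variance described above comes entirely from analysts picking different metrics (scope changed versus unchanged, for example), not from the arithmetic. As a sketch, here is the v3.1 base-score formula from the FIRST specification, implemented directly:

```python
# CVSS v3.1 base-metric weights, taken from the FIRST CVSS v3.1 specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}     # Privileges Required (S:U)
PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.50}       # Privileges Required (S:C)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}               # Confidentiality/Integrity/Availability

def roundup(x: float) -> float:
    """Round up to one decimal place, as defined in the CVSS v3.1 spec."""
    i = int(round(x * 100000))
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(vector: str) -> float:
    """Compute the CVSS v3.1 base score from a vector string."""
    # "CVSS:3.1/AV:N/AC:L/..." -> {"AV": "N", "AC": "L", ...}
    m = dict(part.split(":") for part in vector.split("/")[1:])
    changed = m["S"] == "C"
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    if changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    pr = (PR_CHANGED if changed else PR_UNCHANGED)[m["PR"]]
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * pr * UI[m["UI"]]
    if impact <= 0:
        return 0.0
    raw = impact + exploitability
    return roundup(min(1.08 * raw if changed else raw, 10))

print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # 9.8
print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H"))  # 7.5
```

Change a single judgment call, say S:U to S:C, and the same vulnerability lands on a different score; that one metric choice is exactly where two analysts diverge.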
So there are some areas where using it to gauge severity for open source vulnerabilities might not work the best, because that was not initially how it was designed. And I would say stay tuned: they're about to release the CVSS v4 spec, and it is much more robust, and it more aggressively incorporates the temporal stuff. We greatly encourage people to actually do their own scoring internally, based off of their priorities, and there's a whole section around safety and human life. It's a great evolution. It will never be perfect, but I think they've done a really good job with the next version. So ideally, once that is in place and people are doing that a little more, we'll see a little bit more consistency in the scoring.

Not out yet. Yeah, that's a great question. The question was, Jonathan asked when vendors were going to start releasing CVSS v4 scores. Likely after the spec is released, sometime after that, is what I'd say. That is a big conversation in the working group; they're trying to decide how to roll it out. It's going to require some new training of the analysts, so you know you'll have a little bit of growing pains, but I would expect you'll start seeing commercial organizations issuing CVSS v4 scores probably by the end of this year. 2024 will be a year where you'll have both v3 and v4, and some people will still require v3, I think. But you will see the majority of vendors start issuing v4, probably towards the end of this year. Any other questions while we're here?
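The variance the speakers describe comes from analysts choosing different metric values for the same bug; once a vector is fixed, the base score itself is deterministic arithmetic. A minimal sketch of the CVSS v3.1 base-score calculation makes that concrete, with metric weights and the rounding rule taken from the FIRST v3.1 specification. The example vectors are illustrative, not the specific vulnerabilities from the anecdote above, but the first one does land on the 7.5 that came up:

```python
import math

# CVSS v3.1 base-metric weights (from the FIRST CVSS v3.1 specification)
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}          # Attack Vector
AC = {"L": 0.77, "H": 0.44}                                 # Attack Complexity
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}            # Privileges Required
PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.50}              # (when Scope is Changed)
UI = {"N": 0.85, "R": 0.62}                                 # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}                      # C/I/A impact

def roundup(x: float) -> float:
    """CVSS v3.1 'Roundup': smallest number, to one decimal place, >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, scope, c, i, a):
    changed = scope == "C"
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15) if changed else 6.42 * iss
    pr_w = (PR_CHANGED if changed else PR_UNCHANGED)[pr]
    exploitability = 8.22 * AV[av] * AC[ac] * pr_w * UI[ui]
    if impact <= 0:
        return 0.0
    total = 1.08 * (impact + exploitability) if changed else impact + exploitability
    return roundup(min(total, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N -> e.g. an unauthenticated info leak
print(base_score("N", "L", "N", "N", "U", "H", "N", "N"))  # 7.5
# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> the classic unauthenticated-RCE shape
print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))  # 9.8
```

The subjectivity lives entirely in picking the dictionary keys: an analyst who reads the same bug as Scope: Changed, or as Low rather than High confidentiality impact, gets a different number from identical math.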
You should look, so, my two channels: one is the Security Unhappy Hour, which was a bunch of PSIRT people; we got together and drank and talked about security things like CVSS. And then Chips and Salsa is my corporate shilling of things and talking to security researchers.

Question, yeah. So I heard you mention FIRST, and I sit on the Common Security Advisory Framework technical committee with Thomas Schmidt from BSI. If you're going to be at the FIRST conference, which also happens to be in Montreal, Canada, in a month, we're doing some writing workshops, both for beginners and for more mature users, so we'd love to have any participation if you're curious. A question about CSAF, which you mentioned in your talk briefly: what are you seeing from an adoption standpoint? Is there resistance to it, or are you seeing a lot of adoption? What are you seeing out there? And are there things that we, as part of the technical committee, could bring back that might be helpful in encouraging adoption, things like that? Do you have thoughts?

Yeah, I'll go briefly. What I have seen is it seems to be fairly industry-specific. A lot of the benefits of CSAF, at least in my opinion, are really for incident response teams and those responding to vulnerabilities. It is a little bit harder for downstream users to get a lot of benefit from that information, and it would be kind of hard, I think, for a developer, from their standpoint, to create a CSAF document. So it is very good at this more narrow scope, and if we could make it a little more agile, which seems to be the theme of our talk, if we can make things more agile and more easily usable and consumable by the open source community,
I think it would really grow in adoption.

Yeah, and from a commercial perspective, you'll see it's been slow uptake for people to issue CSAF, and those are people that have security teams and paying constituents that would demand it. From an open source perspective, I've seen very little adoption, because again, it doesn't benefit the maintainer or the project. But one of the goals of our vulnerability disclosure working group, and some of the other efforts, like we're proposing an open source security incident response team, one of the efforts there would be to try to create tooling so that it would be dead simple for a developer: as they are fixing an issue, when they do that commit, it potentially could issue a CSAF document, it could issue a VEX statement. And one of the next steps of the working group, we're trying to figure out, is to start to collaborate with OASIS and the other groups, CycloneDX and SPDX, the other kind of VEX-adjacent and CSAF-adjacent groups. So we're not there yet, but hopefully we can make it very simple for a maintainer and a project to click a button, and out it goes.

Thanks. Do you see any usage of CVRF at all?

It was the same, okay: for years it was basically just Red Hat and Cisco. And even Red Hat, one of the biggest champions of this, and I worked there for seven years, they've only been doing CSAF advisories for about a year, and that's one of the most progressive. My organization, Intel, is just now starting to release CSAF, because our large OEM customers are demanding it.
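To make the "commit fixes the issue, tooling emits the advisory" idea concrete, here is a minimal sketch of what such auto-emitted output could look like. The field names follow the document-level structure of the OASIS CSAF 2.0 standard, but this helper, its values, and the `example.com` namespace are illustrative assumptions, not a validated CSAF producer:

```python
import json
from datetime import datetime, timezone

def make_advisory_skeleton(advisory_id: str, title: str, vendor: str) -> dict:
    """Build a minimal CSAF 2.0-shaped advisory skeleton (illustrative only).

    Real CSAF documents also carry product trees and vulnerability objects;
    this sketch covers just the required document/tracking metadata that
    commit-time tooling could fill in automatically.
    """
    now = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return {
        "document": {
            "category": "csaf_security_advisory",
            "csaf_version": "2.0",
            "publisher": {
                "category": "vendor",
                "name": vendor,
                "namespace": "https://example.com",  # hypothetical publisher namespace
            },
            "title": title,
            "tracking": {
                "id": advisory_id,
                "status": "final",
                "version": "1.0.0",
                "initial_release_date": now,
                "current_release_date": now,
                "revision_history": [
                    {"number": "1.0.0", "date": now, "summary": "Initial release"}
                ],
            },
        }
    }

# Hypothetical usage: a CI hook calls this with data pulled from the fix commit.
doc = make_advisory_skeleton("EXAMPLE-2023-0001", "Example fix advisory", "Example Project")
print(json.dumps(doc, indent=2))
```

The point of the sketch is that almost every field here is mechanical: if the forge already knows the project name, the fix commit, and the report, a "would you like to publish an advisory?" button only needs the maintainer's confirmation.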
They want electronic advisories. So it's a slow adoption process.

Yeah, and I think we've seen that honestly pretty commonly across vulnerability specifications and vulnerability reporting and disclosure tooling. A lot of that, in my opinion, was made for use by the security teams, by the security responders, not necessarily by the engineers or the developers that are working on the patches or fixing things. And in an open source community, especially a small one, those people are one and the same very, very often. So the tooling and the specifications that exist currently weren't necessarily made with them in mind, which I think has led to some of the low or slow adoption, because a lot of assumptions were made, and there's a lot of expectation that doesn't necessarily match some of the reality we're seeing.

Yeah, and I think the key to that is going to be tooling and automation: get it incorporated into pipelines and into the source forges, so that as they're doing their work, it's just a check, "Would you like to do an advisory?", and away it goes. Just lower the barrier to entry, make it as easy as possible. And if you capture the information in the vulnerability report and through your process, it's a pretty simple matter to take key pieces of that and put it into an advisory format.

Any other questions or comments? So we're all coordinated vulnerability disclosure experts now. Everyone's going to go home and put a security policy on their repo. Well, we thank you for your time and attention and your good questions. We hope you all enjoy the rest of the conference. Thank you.