This will be fun. Okay, good morning. My name is Anne Bertucio. I am acutely aware that I stand between you and the Konex coffee break, so I will try and make this exciting and enjoyable, and hopefully you will learn something. We're gonna be talking all about vulnerability disclosure, and I hope we're gonna have as much fun as the people across the wall. I think that's the financial conference, and I've never heard a financial conference be so excited.

So we're gonna talk about what vulnerability disclosure is, and a little bit about why the OpenSSF decided they need an entire working group devoted to it. The working group has made some very specific recommendations, so we're gonna talk about what those are and why they made them, then walk through the life of a vulnerability in that model, and then talk a little bit, from a project maintainer perspective and a user perspective, about the basics of what you need to know.

So let's start from the top with what happens when somebody finds a vulnerability, because they actually have multiple options. One is that they could exploit it. If it allows them to abuse somebody's compute and mine Bitcoin, although that's not as lucrative as it was last week, they can do that. It's an option. The other is that they could sell it. There is actually a market, granted an unregulated market, but there is a market for vulnerabilities, and it does pay quite generously at times, depending on what it is. Wow. Okay, so we're gonna dance it up a little bit. So it does pay quite generously, and it's just something to keep in mind that your researcher has options with their finding, and that option can include a large amount of cash. This is not a bug bounty; we're gonna talk about that in a little bit. The other option they have is that they could disclose it, which is really where we come in.

So what is vulnerability disclosure? It is a process for verifying, documenting, and communicating a known vulnerability. There are many models for this, and not everybody agrees on what the right model is. You can kind of put these models on a sliding scale, and I wanna say, everywhere across that scale, we get a little philosophical in open source. Everybody's coming at this from "I want to protect the user," but there are disagreements about what the right way to do that is.

So, full disclosure. This is where a researcher says, I'm gonna share this as publicly and widely as possible. The thinking being: the sooner users have this information, the sooner they can patch. If I found it, an attacker's probably already found it, so let's get that information out there. That's where you see people putting things on Twitter, for example.

On the other side would be private disclosure. The approach here being: if we keep this as quiet and covert as possible, we can maybe do some silent patching and protect the users from that perspective. This model does not give a lot of credit to attackers. Attackers are very sophisticated; we should keep that in mind. Borrowing from the folks in full disclosure: if a researcher has found it, there is a good chance somebody else has found it as well.

Coordinated disclosure pulls a little bit from both of those camps. It is a model that acknowledges that the project owner has wants, the reporter has wants, the users have wants, and we're gonna need to work together in a way that balances all of those to disclose this vulnerability.

So that sounds relatively simple, right? Why is there an entire working group for this? Why not?
Yeah, it seems straightforward. It's not at all. It can get really complicated, it can get really nuanced, and we're dealing with humans here. So which of those three models do you follow? What are you supposed to do, and when? What if it's really bad, like extra, extra bad? Does that change what I do? What if we can't figure out how to patch it? I just don't know. And what if a company comes knocking on the door and says, excuse me, we sell your project as a managed service, I would like this information early? All of these are complexities that make this process really complicated. And you have to practice over and over and over again. The time to get comfortable with disclosure is not when you have a severe vulnerability on your hands; it's well before that.

And there are open source projects that have really mature, well-practiced vulnerability response processes. OpenStack and Kubernetes are just some examples. And they have that because they've had a lot of time to practice, and they've had people creating and working on those processes who have a lot of experience in this field of response and remediation. Not every project has that. And so the goal of the working group is to bridge that gap: to create those resources, to help those projects have that same kind of maturity, those same really well-oiled response processes, when they don't necessarily have all of those resources.

So when the working group put out their guide, we did coordinated vulnerability disclosure for open source projects. Why did we pick CVD? As Nithya and Brian were talking about, open source is critical. It's not just hobbyists; it's running critical infrastructure. And the thought is, if we can give projects a little bit of time working with the researcher to patch, to cut a release that mitigates the issue, that helps reduce the risk to the user. We can help protect them that way. But the disclosure part, communicating the information out as widely as possible, is really important, particularly in open source. Unlike proprietary software, where you might have a way to poll all your users, where you might have an email list of every person that's using your software, we don't have that in open source. So we really have to make sure that when we know about a vulnerability, and if we know a mitigation as well, we communicate that as widely as possible.

I wanna pause for just a moment and clarify how vulnerability disclosure is different from a bug bounty program. Vulnerability disclosure is a method for reporting findings. There may or may not be cash involved. If there's cash involved for reporting a finding, it's typically because it's part of a larger vulnerability disclosure program, frequently run by a company. So for example, I work at Google, and for some of our open source projects, if you submit a report, our VDP will give you some cash as a thank you, but it's not run as a bug bounty. The ethos here is kind of: if you see something, let us know.

A bug bounty program, on the other hand, is very actively dangling cash, saying please go look. And there will be scopes on that, based on the bug bounty sponsor. So it's not "please go look for all vulnerabilities." It's "please go look for these specific ones, here's our payout rate, you're gonna get really big money if you go look for these types." So you can kind of think of it as turning on a fire hose. The ethos is different.
And if you don't have a really well-oiled machine, a smooth process, if you don't have hardening already applied to your project, and you just jump to a bug bounty program, you're gonna be paying a lot of money for very simple bugs that you could have probably patched yourself. And you might not really be ready to handle the intake of all of those reports. To borrow a metaphor from Katie Moussouris, who's kind of a pioneer in disclosure: there was a situation where a vendor, on day one, had a very flashy, very pretty bug bounty website, a report was sent to them, and they didn't actually know what to do with it. So she made this comparison: you can think about it as opening a restaurant before you have your wait staff, before you have your line cooks. There's a time and place for them, but you need to do the basics first.

All right, so let's talk about this guide. Yeah, it's a great guide. We do have about 20 minutes left, so I'll say this is the abbreviated version. If you head to GitHub, there's a lot here. There's a full guide that walks through everything, there's a runbook, and then there's a bunch of templates. My recommendation would be that, if you're a project maintainer, you find some time when you don't have a burning vulnerability to read through the guide. Then, when you do have one, you can go back to the runbook to help you go through those steps, and hopefully the templates will make all the communication pieces of this much easier.

So let's walk through what happens. Let's walk through what this process actually is. It kicks off when someone finds an issue and they say, I wanna contact the team about this. And of course you might recognize our fine OpenSSF goose, or duck, or, I don't know the difference. Goose, duck, whatever, all right. You might recognize that thing. They're gonna be our security researcher for today. They found something, and they say, you know, I wanna contact the team about this. The first question is, well, who is that team?

So within your project, you probably wanna have between three and five people who are essentially your vulnerability management team. If you only have three to five contributors, it is all of you. And these folks, you know, they don't have to be security engineers. They don't have to be experts in everything security. They're really there for triage and coordination.

We also want our duck to know how to contact this team, and we want that information to be really obvious. If it's buried five pages deep, nested in folders that take insider project knowledge to navigate, like governance slash community slash go here, they're never gonna find it. They're gonna give up before they get to you. The other thing is that we really want to make it easy for them to contact us, and for that reason, email is okay. You know, some folks are gonna feel a little wonky about sending vulnerability information over email, but if we think about it from a risk perspective, it is a bigger risk to never get that information at all. So email is all right. I would encourage you as well to not make your reporter sign up for new tools. If there's a third-party platform and it involves creating a whole new account and going through a whole new thing, they're probably not gonna do it. They've already done you a favor by deciding to report the vulnerability. Let's make this process as friction-free as possible.

So this is an example of a security policy, frequently known as a security.md file. You'll notice there's some specific information in here, and it's pretty brief.
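As a rough sketch, and not the guide's verbatim template, a minimal security.md along those lines might look something like this. The email address and exact wording are placeholders for illustration; the ingredients (how to reach the team, what to include, the acknowledgment and disclosure windows) are the pieces being described here.

```markdown
# Security Policy

## Reporting a vulnerability

Please email the vulnerability management team at security@example-project.org
(placeholder address). Please include:

- A description of the issue and its potential impact
- The affected version(s)
- Steps or a proof of concept so we can reproduce the issue

Please do not report suspected vulnerabilities through public issues.

## What to expect

- We will acknowledge receipt of your report within 3 working days.
- We will let you know whether we can confirm the issue and keep you
  updated as we work on a fix.
- We aim to release a fix and publish an advisory within 90 days of
  the initial report.
```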
It's not, you know, a huge gigantic document. It gets right to the point: it says how to reach the VMT and what to include in the report. An important bit here is information to reproduce the issue; as the VMT, you're gonna want to be able to reproduce that. Another very important part is managing expectations about communication. You'll see here it says we'll acknowledge receiving your email within three working days. You can change that as needed, but think about it almost like a customer service interaction. You know, if you send something off, say, hey, I never got my shipment, and you don't hear anything for a week, you're going to assume that nobody is there, and you're either going to give up or find another way to solve your problem. It's really important to at least say, yes, thank you, we got your email, we're taking a look.

So then we move on to assessment. Not everything is a vulnerability. And as the vulnerability management team, you'll wanna take a look and really decide whether this is just a comment about design. Having poor design that isn't great for security is not a vulnerability. A suggestion for an improvement is also not a vulnerability. Those are things that can be worked out in public. The criteria we're really looking for is something that's not working as intended and gives unintended access or compromises data. Frequently, that falls into the categories of data integrity, availability, and confidentiality.

Once we confirm that it is a vulnerability, we wanna let our reporter know, and we can move on to patching. We're gonna play the role of the VMT over there. If anybody from the marketing group is taking notes, I will be first in line for that sweater. So we're gonna say: thank you for your report, we were able to recreate this and confirm it's a vulnerability. Would you like to be involved in patching this? Now, reporters have lots of different reasons that they report issues. Maybe there's a paper they're working on, some research, but they also might be really invested in what this patch looks like. As part of finding it, they might have a couple ideas of, ooh, this might mitigate it. So you should always ask them, do you wanna be involved? They might say yes or no, but give them the option. The other thing we'll do, as we request a CVE for it, is ask them if they'd like to be credited. Your default should be to credit reporters, but for many reasons, they might not want that. And you'll also wanna respect their preference for how they wanna be credited: their name, any affiliations, anything like that.

So now let's talk about what on earth this 90 day disclosure timeline bit is. It's really important that it's an agreed-upon timeline. So frequently, and this gets to the coordinated part of disclosure, 90 days is kind of standard-ish for how much time the vulnerability management team has to patch and disclose the issue. It is not 90 days from when you start working on it; it is 90 days from when the researcher sends off that first bit of communication. And it lets the researcher know there will be an end date in sight, because, remember how we were talking about incentives and needs, they wanna be able to show their work. So it gives you a little time to find a mitigation, cut a release, and share the information. But if that doesn't happen, the reporter is free to go about sharing their work themselves, going to that full disclosure model. I said it's standard-ish; not everybody agrees.
And if your reporter says, no, I'd rather have this out in 15 days, well, that's the rub. That's just how it is; you have to work together. There are also situations where 90 days is probably a little long. If it's a high severity issue, maybe seven days is more appropriate. But it's really about coming to an agreement on what that timeline is, so everybody has the same set of expectations.

So now we have a CVE assignment in the works, we've got a patch in the works, and we're ready to disclose this. You'll notice I skipped over this bit in gray about embargoes. There is a lot of information in the guide about having an embargo program. Embargoes are really complicated. They add a lot of considerations to your project, and they potentially add some legal ramifications, things to consider in that realm. There's a ton more information in the guide, but I would say for most projects, unless you have a very robust product ecosystem, you probably don't actually need one.

So it's time to disclose. If you're following our handy guide, you'll find a template directory that has a lot of this stuff pre-written for you. But this is what a security disclosure looks like; there's a rough sketch of one below as well. You'll notice, again, it's pretty brief. It's straight to the point and says exactly what's going on. It has things like the versions that are affected and steps to recreate the issue, so that users can recreate it and see if they're impacted. It has remediation and mitigation, and some timelines about when this was reported and when it's disclosed. You wanna save things like "how I found this," or a really deep dive on it, for blogs, videos, talks, all that. This is really about keeping it brief and getting straight to what people need to know.

So back to the communication bit: our sweater-wearing duck tells the other duck, here's our timeline, here's when we're gonna disclose this. They say, thanks so much. The party starts, we're ready to disclose this. That was just amazing timing. Amazing.

So the TLDR here: if you're a project maintainer, really make sure you have a plan before you need a plan. Feel comfortable with what the process is. Have your three to five folks who take in issues and triage them, and have that ready to go before you have the burning issue. Get your security policy in place. If you're a GitHub user, you'll find that they have GitHub Security Advisories, and there are some easy tools in there to help you do that. And also remember that researchers are human too. By coming to you and trying to make that contact, they want to work with you. They want to help protect users and get this sorted. Don't fall for that stereotype of the hacker in the black hoodie, menacing and mean. They're not. They want to see this have a good resolution. And I'd also say: say thank you. This person has come to your project, and they're doing you a huge favor. Even if you don't get it right, you don't have all the answers, or there are little bobbles along the way, just keep saying thank you.

If you are a project user, it's important that you know where you should be signed up to get disclosure information. That might be running a feed off GitHub Security Advisories, or the project might have a mailing list. But if it's really critical to you, make sure you know how to get that information. Also know, as you're using the project, if you stumble across a security issue, where should you go to report it? Where do they do that intake? And of course, in the spirit of open source, if someone on your team is good at organization and communication, is this a role that you can contribute back with?
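To make that disclosure format concrete, here is the rough sketch of an advisory mentioned above. This is not the guide's actual template; the CVE number, versions, dates, and names are all made-up placeholders, and only the general shape (affected versions, reproduction, remediation, mitigation, timeline, credit) comes from the description in this talk.

```markdown
# Security Advisory: [one-line description of the issue]

- CVE: CVE-YYYY-NNNNN (placeholder)
- Affected versions: 1.2.0 through 1.4.2 (example)
- Fixed in: 1.4.3 (example)

## Summary
A sentence or two on what the vulnerability is and what it allows an
attacker to do.

## Am I affected?
Steps a user can follow to reproduce the issue or check whether their
deployment is impacted.

## Remediation
Upgrade to 1.4.3 or later.

## Mitigation
If you can't upgrade right away, describe any temporary workaround here.

## Timeline
- 2024-01-05: Report received (example dates)
- 2024-01-08: Issue confirmed, CVE requested
- 2024-02-20: Fix released, advisory published

## Credit
Thanks to [reporter name, affiliation] for reporting this issue.
```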
And on that note: you don't have to be the security engineer solving everything. If you love spreadsheets and organization, this might be for you. So, I know we're probably right on time, so maybe rather than questions here, I'm happy to step outside and take questions over coffee. But thank you so much. Take a look at the guide, and happy bug solving.