Many of us software developers aren't familiar with how to report a vulnerability, or why it's different from reporting a regular bug. Your first job probably taught you how to report a bug, maybe on your first day, but nobody ever really talks about how to report a vulnerability. Let's fix that. I'm going to talk about what a vulnerability is, what responsible disclosure is and why it's important, how to report vulnerabilities, and I'll give you a few tips on what you can do for your projects to make sure vulnerabilities are reported correctly.

First up, let's talk about what a vulnerability is. The Oxford Dictionary says it's the state of being exposed to an attack. Getting more specific about computing, Wikipedia says it's a weakness that can be exploited by an attacker. These are basically the same thing, except for maybe some of the emotional details, but I get pretty emotional when somebody reports a vulnerability incorrectly on the projects I work on. So maybe they are the same.

A CVE is a standard way of describing a vulnerability across multiple data sources. I like to think of one mostly as an ID with some metadata attached, like an affected product, a version, and a description (I'll show a quick sketch of that shape in a moment). These IDs are assigned by a numbering authority, such as MITRE. Most of the time we refer to a vulnerability by this ID, because not all vulnerabilities get a cool name like Heartbleed. Sometimes these IDs are assigned to issues that aren't actually vulnerabilities. This one in particular is invalid: the reporter didn't understand what Python's virtual environment was and thought it was a shell-escape problem. If they had gone to the project team and tried to report the issue correctly, it never would have been opened in the first place.

This is a graph showing the frequency of CVEs reported per month. I'm not exactly sure what happened in May of 2017, but I'm sure it was a bad month for somebody. I'm showing you this data to stress how common vulnerabilities actually are. If you average it out, it comes to about 1,500 per month, which is about 50 a day. The real number is almost certainly higher, because this is just what's reported publicly, and we know not all vulnerabilities are reported.

So are vulnerabilities good or bad? In my opinion, they're good: they provide a standard way for us to talk about security issues. Software bugs are a fact of life, and it's how we handle them that makes them good or bad. We can handle them in a responsible way. Apache Tomcat, a popular Java web server, is a great example of how CVEs are reported. The project has a security page that lists vulnerabilities for each version, and of course you can go to a CVE database like MITRE's or NIST's to find the same information. The biggest point is that you can find the info. That gives developers and system admins notice to go update or patch their systems.

And the bad? We all know about Spectre and the other speculative execution attacks. Allegedly, some insider trading happened before that issue was actually disclosed. And of course, many of these issues are a major pain in the ass: Spectre, Meltdown, and Heartbleed, for example, caused many folks to work overtime. In a couple of cases the fix actually decreased the performance of the machines it ran on, and that cost real money.

Moving on to disclosure: the Oxford dictionary says that disclosure is the action of making secret information known.
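Quick aside before we dig into disclosure models: here's the sketch I promised of what the metadata attached to a CVE ID roughly looks like. This is just an illustration of the shape of the data, using the well-known Heartbleed entry; it's not any official CVE schema.

```
CVE ID:      CVE-2014-0160 ("Heartbleed")
Product:     OpenSSL
Affected:    1.0.1 through 1.0.1f
Description: The TLS heartbeat extension allows remote attackers to read
             process memory via crafted heartbeat packets.
```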
So responsible disclosure basically means: keep it a secret until the issue can be fixed, within some agreed-upon timeframe. That's opposed to full disclosure, which is: tell everyone, right away. The idea with full disclosure is that it gives users and admins all the information they need to make their own decisions. But it also gives attackers the same information, and in many cases attackers are more likely to be able to exploit the issue, while system admins and developers might not be able to do anything until a patch is released. So in my view, responsible disclosure is the best option in almost all cases. Once information is out, it is impossible to take it back. Even if you think full disclosure is a better model, starting with responsible disclosure still might be your best bet.

The term security embargo refers to the period of time after a vulnerability has been reported but before it's been disclosed to the public. So how long should this period be? Well, it depends who you ask; recommendations are all over the place. Google's Project Zero has a default timeframe of 90 days, but they've published a Microsoft Windows issue in seven days, before the fix was released, which understandably made some people upset. And they've also waited up to eight months, in the case of Spectre, before that issue finally leaked out. The Linux kernel's embargo is something like 19 days: two weeks, plus a few days added on for holidays and weekends. How long does it take you to fix a regular, non-security bug and release it to your customers? Is it days? Weeks? Months? I've worked on projects where it took days, weeks, and months to get fixes to customers. So you can see the timeframe may vary depending on the project. Going back to the Linux kernel: that project has a lot of eyeballs on it. Linus's GitHub repository alone has 30,000 forks and almost 90,000 stars. It's really hard to keep a secret with that many people looking at a project. And of course, Ben Franklin said, "Three may keep a secret, if two of them are dead." I joke about this, but think about how fast information travels.

So how are vulnerabilities actually reported and dealt with? I think you can break this down into three steps: report, fix, disclose. I'm going to talk about each one of these. First, vulnerabilities are reported privately. That's the key point. If you take nothing else from this talk, this is what you should walk away with. It means you don't go to Stack Overflow, and you don't post on some public mailing list or Slack channel. The hard part is that you have to go find out how to report the issue, and each project handles this differently. A lot of vendors will have some sort of security mailing list, maybe security@example.com. If you can't find a page dedicated to how they want security issues reported, you can check bug bounty websites like Bugcrowd or HackerOne; maybe the project has a page there. For smaller shops, where you don't think you'll be able to talk to a security professional, maybe you email support and ask how to report a vulnerability, or where their security page is. I wouldn't send the details of the vulnerability just yet; ask where to report it and make sure you're talking to the right person first. And obviously, as a worst-case scenario, if you're really worried, you could just create an anonymous email account and report it that way.
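Once you've found the right private channel, the report itself doesn't need to be fancy. Here's a hypothetical outline of what a useful report might include; every name, address, and value here is made up, and any given project may ask for different fields:

```
To:      security@example.com
Subject: [SECURITY] Possible reflected XSS in example-app 2.3.1

Product / component: example-app login page
Version(s) tested:   2.3.1
Description:         The "next" query parameter is echoed into the page
                     without escaping, allowing reflected XSS.
Steps to reproduce:  1. Visit /login?next=<script>alert(1)</script>
                     2. Observe the script executing in the browser.
Impact:              An attacker can run JavaScript in a victim's session
                     via a crafted link.
Contact:             researcher@example.org
```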
Again, I want to stress: do not use a public bug tracker. Here's an issue that I helped fix a little while ago. The reporter opened it and then basically ghosted us. Some folks will scan public repositories looking for specific types of issues and then report them to gain reputation and pad their resume, which I'm actually all for; I just think how and where they report them is the problem. In this case, the project has a security mailing list. And it's not just bug trackers. Like I said, I've seen the same type of scenario play out on mailing lists and Stack Overflow. You might think it's easy to delete an issue like this, but it's not. Once the issue was created, emails went out to public lists and Google started indexing the page. The information's out there; you can't take it back. Some bug trackers can be set up with a checkbox or something similar to indicate that a new issue should be private. I would still encourage you to look for the project's security page and see what they recommend when reporting a vulnerability. You never know who actually has access to a bug tracker.

So now the issue has been reported to the product team. Again, this could be a vendor, an open source project, anything. It's up to them to fix the issue, just like any other bug you would open. If it's an open source project and you want to try to fix the issue yourself, that's awesome; just stay in contact with the development team. It's not time to disclose information about the issue yet, so your commit messages and pull request descriptions may need to be a bit sparse.

Once the issue is fixed, the product team will create some sort of release. That could be a patch. With Java projects, we would typically publish binaries, so jar files. Microsoft would include a fix on Patch Tuesday. Source-based projects like PHP might just cut a git tag and call it a day.

Once the fix has been released and made available, it's time to disclose the issue, and this is the first time information is released to the public. Typically the vendor handles all of this, creating the associated CVE and dealing with the disclosure, so you don't have to do anything. But once it's released, you can blog about it and tell everyone what you've learned. You can brag to your friends that you're a security researcher now. Or not: some companies will reward you for not talking about the issue at all.

The Apache Software Foundation has a great, detailed list of steps for dealing with vulnerabilities. It's geared toward the developers fixing the issues, and many of the steps are Apache-specific, but I think it's a great foundation for defining your own process. I've used this list for non-Apache projects with only slight modifications.

So what can you do for your project? Many of us have projects that are open source, have public bug trackers, or have some other public-facing component. One of the easiest things to do is configure your bug tracker with some sort of warning telling people to go somewhere else to report security vulnerabilities. This is what Spring Security, a popular Java framework, does. They're on GitHub, but you can do something similar with non-GitHub projects. Essentially, they've created an issue template, which you can see on the left, and they've stuck an HTML comment in the markdown file. When you create a new issue, which is what you see on the right, the comment is visible to the reporter; a sketch of the trick follows below.
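Here's a minimal sketch of that kind of issue template. The wording and URL are placeholders I made up, not Spring Security's actual template, but the mechanics are the same: HTML comments are legal in markdown, and they show up in the new-issue editing view.

```markdown
<!--
STOP: If you are reporting a SECURITY VULNERABILITY, please do NOT open
a public issue. See https://example.com/security for how to report it
privately.
-->

**Describe the bug**
A clear and concise description of the problem.

**Steps to reproduce**
Tell us how to trigger the bug.
```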
Once you save the issue, though, if that comment is still in the file, nobody sees it: it isn't rendered on the page or indexed by Google. For GitHub projects, you should also set up a security policy; GitHub has a little wizard to walk you through this. Essentially, it's just a markdown file. The goal of any of this is to help point the reporter to the right place as soon as possible.

Another cheap option is securitytxt.org. Essentially, you create a simple text file, very similar to robots.txt, that you place under the root of your website at example.com/.well-known/security.txt. This text file has a few key-value pairs that help reporters discover where to go to report a vulnerability. If you go to securitytxt.org, they have a little form you can fill out that generates the static file you'd put on your website. There are other attributes as well; I think there's a preferred-languages field and a few other interesting ones. Again, the goal is to help people discover where to report vulnerabilities, so they're not disclosed publicly before they're fixed. I'll include a small example file at the end.

That's it. I hope you've learned that reporting security issues isn't that difficult. It does require the extra step of figuring out where to report them, but it is our responsibility to go do that. Thanks for watching this video. Be sure to hit that like and subscribe button below. We have new videos coming out weekly. Thank you.
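As promised, here's a sketch of a minimal security.txt. The values are placeholders; securitytxt.org's generator (and RFC 9116, which standardizes the format) covers the full set of fields:

```
# Served at https://example.com/.well-known/security.txt
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:00.000Z
Preferred-Languages: en
Policy: https://example.com/security-policy
```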