Thank you, everyone, for attending our panel today. Usually, I find that I get these 8 AM slots, and it's really awful, so I'm very, very happy that it's a late slot. So thanks to everyone who has hung in there with us. So over the last couple years, several of us have been talking about vulnerabilities in depth, and we started down this path of saying, does anyone really care about vulnerabilities anymore? And it turned into this conversation: well, if it's a zero day, well, then it's really important, but otherwise, maybe no one would care. So we wanted to get some people here today. Got a pretty large panel of folks, so I'm going to try to keep them reined in as best as possible. And I wanted to get some people that had some good thoughts, but were really easy to manage, that weren't that opinionated, that wouldn't go off on rants. And if you know anyone up here, we'll see what happens, right? So yeah, fail already, thanks for that. So anyways, hopefully this will be a good panel for you. I know a lot of times panels can be a real pain, but we're going to try to make this interactive, get some good topics that maybe aren't discussed as much as they should be. So the first thing with a good panel is getting the right people up here. And I think that we have that. Hopefully you know all these people. If you don't, I'm not going to waste any time of the session introducing their bios. We've got Brian from OSVDB, Steve from CVE, Karsten from Secunia, Art from CERT, Dan from HP TippingPoint, Katie from, I don't know who's whistling for Katie or Dan, Katie from Microsoft, and Alex, I don't know where you are anymore, so I'll just call you Alex. All right, so we're going to kick it off. We're going to try to do four questions, about 10 minutes each. And then at the end, there's going to be some free-for-all. If you really feel like you need to yell at one of the panelists, there's a microphone, come on up. But otherwise, you can hold it to the end, your call.
So the first one we're going to start off with is: does anyone that doesn't work on one of these things really care about vulnerability databases, tracking, or trending anymore? And I'd like to start off with maybe Steve or Brian, you guys want to pop in?

People care, but they don't know they care, near as I can tell. At least on the CVE side, we rarely get many complaints about what's going on with us. Sometimes we have blatant errors in CVE descriptions that we never hear of from anybody. So in a sense, people don't necessarily care. It seems like a really easy job to just comb through thousands and thousands of vulnerability reports every day, most of which have one of the following four properties, which I call the four I's. Vulnerability reports are either incomplete, inaccurate, inconsistent (especially, say, between a vendor report and a researcher report, not like we ever get those kinds of inconsistencies), or incomprehensible. Some of the stuff that we're dealing with comes in in broken English from people who live in the Midwest, poorly formatted advisories where they don't even quite realize what the most serious vulnerability is, and it's buried in a single sentence in a three-page screed about what they had for breakfast or something. That's the kind of raw information that we have to deal with on a daily basis. And unless you really deal with that kind of stuff on a daily basis, it seems really easy. Oh, just scrape all these websites or take in all these emails and then just do a little bit of analysis in two minutes and then push this thing out. Unfortunately, there's a tendency that all of us in the vulnerability information industry have, which is we kind of care about correctness and quality. And in some ways, this is why I think we're having this panel now, because quality comes at a steep price.

So you want to talk about CVE? And are we still working on CVE these days? Just get right out and ask me. Is CVE dead?
No. However, we are going through a change. So we're kind of in a cocoon, and we'll kind of come out like beautiful butterflies or whatever. But there are a couple of realities. First of all, just the raw number of vulnerability reports coming out is increasing significantly. The complexity of the vulnerabilities that come out is really difficult to capture. Yeah, we could go along the lines of other people and just call things memory corruption, which is really code for some kind of buffer overflow that we don't really know how to describe exactly. But at least on the CVE side, what we've been doing is caring a lot about the academic-strength aesthetics of what are the real root causes lying underneath these vulnerabilities. So terms such as memory corruption, we actually try and dig a little bit deeper. What that means, though, is that there's a lot bigger analytical overhead in our pursuit of correctness. And those of you who follow CVE on a regular basis may see increasing levels of precision, increasing levels of correctness, but it's come at a pretty high price, one of which is that the bar of entry to have people on our team gets a little bit high in terms of the technical skills that are required. And then the other price as well, though, is that we're at a constant level of funding, and the vulnerabilities getting reported are both more numerous and more complex. Something's got to give. And in this case, recently, what's been giving is the actual number of CVEs that we've been publishing. But this year, especially, we're working a lot on modifying our processes to change that ultimately. Brian?

Yeah, real quick. OSVDB has two different ways to handle this. The first one we tried was crowdsourcing. So we would put memory corruption in the title and we figure, hey, there's a lot of smart people out there. There's the researcher, whoever else could clear it up. And of course, no one did.
So after a while, we went with our backup plan, which is just let Secunia do it all.

Any comment? I just wanted to comment on whether anybody actually cares about vulnerabilities. It seems like everybody wants to talk about infrastructure, virtualization and cloud and mobile and, puke. So at the end of the day, all these various infrastructures are all just delivering applications. Applications have vulnerabilities. It's the same thing getting delivered. It's still a web application. It's still a web browser. And they're full of vulns. And just because the infrastructure changes doesn't mean the vulnerabilities really change that much.

There's actually a blind spot that exists right now with respect to cloud-based services in general, which is that we as vulnerability information sources don't cover those. If there's a vulnerability in the Google search engine or something like that, maybe it can be used to hack millions of people or whatever, but that's an online service. That's not deployable software that goes into the enterprise. So while I think we've been doing a good job in general tracking trends, that is one area that's a really big blind spot. And it's going to get worse and worse as the adoption of services increases.

I completely agree, because if you look at the trends that have really occurred over the last five years or so, so many of the things that have actually moved are web based, whether it's browser or app. Tracking a trend like SQL injection actually becomes quite fascinating because it actually moves, as opposed to some other stuff from Microsoft, for instance.

With regard to the care, at least we hope that people care and that all the efforts that we make aren't in vain. At least there are some people attending today, so it's not complete zero care. But sometimes we see ourselves as providing a utility, like we're providing electricity. None of us are sitting in here now excited about the light, but we would be complaining if there wasn't any.
And the same when we do the vulnerability databases, when we correlate all the information, when our advisories get out, when they're correct, when everything is as it should be, it seems like no one's really paying attention. It's not the type of thing where people say, oh, that's excellent, that's good. But if we send out an advisory with just a small typo, we within five minutes actually have a ha-ha mail sent to us. So at least with us, we spend a lot of time making sure we get it correct. Everything from the analysis of the core problem and the vulnerability down to spelling.

So I think we kind of live in this, I mean, security people kind of live in this world where each vulnerability, it's a beautiful snowflake, right? Each vulnerability actually, yeah, they are, they're each one unique. And it matters a lot to us as security people, and to security people working inside of giant mega corporations, each vuln matters. But to the broader world, and maybe to some people who call themselves security people but don't actually care, maybe they're talking APT in the cloud, whatever, puke as you said, right? Bingo. Maybe those people, they're thinking to themselves, well, vulns don't pwn people, exploits pwn people. So they really only care about exploitable vulns, they only care about exploits. So with all of these vulnerability tracking databases, I think for a lot of the population, what they really cull out of those databases is what's exploitable or what has an exploit out there, as in, what do I need to get off my lazy butt and deal with right now?

I love that, because I think I saw two tweets on the zero-day in WordPress, and all of you people keep retweeting that Metasploit 4 is out, so thanks, I know. Yeah.

So at CERT, we had the same feelings as Steve and Karsten. We wanted to be correct and count every vulnerability in the world. Within maybe the last year or two, I've come to the realization that people do care.
They do because they want to count the vulnerability, call it something, scan for it. They want to be in compliance, see if they can patch for it. They need to name it something. Sadly, they don't really care how accurate the advisory is, to a large extent. If you're doing something hot, a zero day, and there's special mitigation advice and you gotta do something right, maybe it matters, but I think the big need is just having a label on the thing and being able to talk about that, and it's the same label, so that when we're all talking, you don't have to have eight different IDs. You got one ID that rules them all, which probably should be CVE.

The places where accuracy has come into play, at least what I've seen at CVE: all this root cause analysis, I think, is kind of cool, and I've had kind of a mindset of, well, this may help influence how people think about vulnerabilities as these darling precious snowflakes that each and every one of them is. But when we do get complaints on the CVE side of things, generally it's two things. Either the affected versions of the software, which we might be a little imprecise about, or characterizations of the severity of the issue, ultimately the CVSS score. And that kind of makes sense, right? Because that's ultimately what people seem to care about. What is gonna be the impact to my enterprise? I don't care if this is some, you know, brand new, really, really cool attack that deserves a pony. Is it gonna hurt me or not? This thing is a 9.8, I have to do something. Yeah. There's a lot of that mindset in the enterprise user sort of community.

Can anyone up here, does anyone know how many public vulnerabilities were disclosed in any given year? I think OSVDB might be the closest these days.
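[Editor's note: the triage the panelists describe, acting only on what is exploitable or carries a high CVSS score, can be sketched in a few lines. The record fields and IDs below are invented for illustration and are not any real database's schema.]

```python
# Hedged sketch: filter a vulnerability feed the way the panel describes --
# only act on entries that have a public exploit or a high CVSS score.
# Field names ("cvss", "exploit_public") are hypothetical, not a real schema.

def needs_action(vuln, cvss_floor=9.0):
    """Flag entries worth dealing with right now."""
    return vuln["exploit_public"] or vuln["cvss"] >= cvss_floor

feed = [
    {"id": "CVE-2011-0001", "cvss": 9.8, "exploit_public": False},
    {"id": "CVE-2011-0002", "cvss": 4.3, "exploit_public": True},
    {"id": "CVE-2011-0003", "cvss": 5.0, "exploit_public": False},
]

urgent = [v["id"] for v in feed if needs_action(v)]
print(urgent)  # -> ['CVE-2011-0001', 'CVE-2011-0002']
```

Note how the 4.3 makes the cut on exploit availability alone, which is exactly the "exploits pwn people" point made above.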
Well, if you're talking about just the overall numbers, one of the reasons that we say we have accurate numbers is because we're the only ones that abstract to the level we do, where every single specific vulnerability in every script gets its own ID, and everyone else across the board, CVE, Secunia, everyone, says no, we're gonna lump them together. And yeah, that kind of moves into some of the stats.

Them's fighting words. Yeah. Yeah. There are lots of different ways of counting vulnerabilities. I'm not gonna dispute that you guys have a lot better coverage than any of the other sources up here, because you guys really try and track everything, and you guys put really long hours into it. Also, your analytical overhead in general is very minimal. Your work goes into, let's compose a title, and you have fairly simple ways of breaking things down that get a little bit more complex, at least for CVE. When we're dealing with, for example, shared code bases, and we're looking at two different bugs, we may intentionally combine them. In other cases, we kinda have to look a little bit deeper to figure out if we need to split them or not, because in some cases, if we have two CVEs out there that are duplicates, that's kinda okay, I hate duplicates, by the way, don't get me wrong, but if we have one CVE out there that kinda combines multiple issues inadvertently, then the utility of the CVE goes down if people, vendors, are only fixing like one part of it and not the other part. So that said, the way that you guys have structured things, I think, is really good, because you can be the closest to counting the total number of vulnerabilities that are disclosed, using the way that you count things.

Well, on that note, let's move to the next one, which talks about the trends and the metrics. This is just a natural flow, and plus, you know, it averages 8,000 a year.
Yeah, and my point being, who cares if you can't even put a measurement on the number, or how bad vulnerabilities are? I mean, we're just making stuff up, which... I'm gonna cover that. All right. Is that the next question? Yeah, yeah. So what are some trends in vulnerabilities, disclosures, types, volume that we're seeing, and then, are these security metrics even worth a damn? So if you count this many, what does that really mean?

So the first question is, who out there thinks that vulnerability statistics are helpful and useful, and you actually do currently use them in any capacity? Anyone? That's a lot of people that are wrong. I'm sorry. I mean, you are absolutely wrong. Does this slide help you? I can't see the slide. Yeah, we can't see the slide. Oh yeah, my panel can't see the slide. It's the vulnerability counts from most VDBs. Okay.

One of the things I wanted to bring up about vulnerability stats, just as a quick idea: there were 8,337 vulnerabilities in 2010. Does that sound like a useful statistic? Only compared year over year. It's over time that I think it does become useful. Okay, well, I mean, the media will call up and ask us, how many vulns were there last year? As if they knew what it was the previous year. Right, well, we will get to that, and I will too. So I have to footnote, well, that's according to OSVDB. What about other VDBs? Well, there were 3,648 according to Secunia. Well, wait, why the discrepancy? Well, now you have to get into different kinds of databases. Secunia's database is geared for a very specific use. They have an entire customer base that actually uses their database for day-to-day patching and notification. You know, it's an entirely different system than OSVDB, where we're looking for long-term stats and history, and we abstract the way we do.

Yeah, like, our customers, they would flog me publicly if we started doing what OSVDB does and send out one advisory for every single issue.
What they care about, and the idea of how to use our database, is how many actions do I have to take? So if there are 10 vulnerabilities being fixed by one patch, then they just want one advisory, listing the 10 vulnerabilities and the patch. They wanna know which product is affected. How many vulnerabilities? How critical are they? How do I fix it? And then there's that subset of people that actually care about the core details of each vulnerability, but mostly they just wanna know, how do we fix it? So they don't wanna have 10 advisories that tell them to install the same patch.

Right, and that's why it goes back to that: OSVDB would be horrible, it would be useless in that situation, because it would saturate them. So after that, the next question is, is that figure more or less than 2009's? And so I say there were 7,678 vulnerabilities in 2009, less than 2010's total according to OSVDB. The next question we obviously get is, is 2011 on par with the last couple years? Well, there were 3,427 as of July 25th. This does not appear to be on par. So now all of a sudden we have this one number that we started out with, and as soon as you start putting context around it, and you start even looking one year in either direction, the stat starts to lose some of its meaning.

There's also an assumption in those stats that your analytical capabilities are keeping on par with the publication sources that you're monitoring. So for example, a couple years ago, I think you guys started scraping almost the bottom of the barrel, looking on various sites that no one else was looking at. And that affected your numbers for that year and then the year after. For us on the CVE side, people do a lot of CVE-based analysis, counting the number of vulnerabilities, without recognizing that we don't have the complete coverage that we used to have, due to some of the factors that I talked about earlier.

Not to be a pedantic ass, but the stats don't lose the meaning. Stats are just numbers.
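[Editor's note: the two abstraction levels being contrasted, one entry per vulnerability versus one advisory per remediation action, can be sketched in a few lines. The products, patch names, and field names below are made up for illustration; this is not either database's real schema.]

```python
# Hedged sketch of the counting difference discussed above: a
# per-vulnerability view versus an advisory-per-patch view of the
# same underlying data. All records here are hypothetical.
from collections import defaultdict

vulns = [
    {"id": 1, "product": "AppX", "fixed_by": "patch-42"},
    {"id": 2, "product": "AppX", "fixed_by": "patch-42"},
    {"id": 3, "product": "AppX", "fixed_by": "patch-42"},
    {"id": 4, "product": "AppY", "fixed_by": "patch-7"},
]

# Per-vulnerability abstraction: one record each.
print(len(vulns))  # -> 4

# Per-action abstraction: group by the patch the admin actually installs.
advisories = defaultdict(list)
for v in vulns:
    advisories[v["fixed_by"]].append(v["id"])
print(len(advisories))  # -> 2, each advisory listing its vulns
```

Same data, two honest counts (4 and 2), which is one reason the headline totals from different databases never agree.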
What loses meaning, your problem with meaning, is actually the model. You're inferring meaning from what you're doing there. Okay, so it's not the numbers, it's your quest for knowledge out of the numbers. Right, well, it's the interpretation of that, and that's why I say it's all about context. Well, right. Right, exactly.

And I think that's what Dan, I was just gonna say, Dan Guido's been pointing out lately, of course: which vulnerabilities do you actually need to worry about? I think where the context and the numbers actually mean something is when you then pair them with attack data. And then you can see, are the trends similar, completely different? Yeah, ActiveX disclosure is going nuts and everybody's getting owned with ActiveX. Same thing with SQL injection, but not the same thing with pick-whatever vulnerability type. So that's where I think you get more of a whole picture, because people actually wanna know, how am I getting owned? I don't care if there's a vulnerability out there that attackers don't use.

Yeah, in a sense it's too bad that the media can't ask you smart questions, but that doesn't make the numbers any less useless. Well, the funny part is that they ask these questions, or like, is there anything else you wanna contribute to this? And next morning they wake up and they have a 17-page mail from me explaining all this, and they're like, okay, thanks for your time. The hell if they're gonna write about it. You gotta write smaller messages. It's the case of looking for the keys under the streetlight instead of where you think you kinda dropped them, right? This is the only data that's out there, so people are looking at it.

So real quick, to jump back to my example: if 2011 isn't on par with 2010, the question is why? So all of us, we know some of the reasons. There are trends, phases like cross-site scripting, SQLi. There are certain years where people jumped on the bandwagon.
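[Editor's note: one piece of the missing context in the partial-year comparison above (8,337 in 2010 versus 3,427 through July 25, 2011) is simply that the 2011 number covers about seven months. A naive pro-rating sketch, using the figures quoted on the panel, shows why the raw comparison misleads. The uniform-rate assumption is itself wrong, as the discussion makes clear, which is rather the point.]

```python
# Back-of-envelope sketch: a mid-year count can't be compared directly to
# a full-year total -- pro-rate it first. Figures are the ones quoted by
# the panel; the projection naively assumes a uniform disclosure rate.
import datetime

full_2010 = 8337
partial_2011 = 3427
as_of = datetime.date(2011, 7, 25)

days_elapsed = (as_of - datetime.date(2011, 1, 1)).days + 1  # 206 days
projected_2011 = partial_2011 * 365 / days_elapsed
print(round(projected_2011))  # -> 6072, a crude full-year projection
```

Even pro-rated, 2011 projects well under 2010, so the "why?" question stands; the sketch only removes the most obvious apples-to-oranges error.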
There was the DLL injection on the Windows platform, where everyone was finding software that did that. A couple years ago it was all the image ones. Image ones, yeah. And then the zip ones. So that's low-hanging fruit that'll swing the totals. There's the change in desire to disclose. A while back everyone was like, well shit, if I release an advisory, that becomes free advertising for my company. And eventually these companies realized, wait, we're not getting business. And then researchers are like, oh, ZDI, they'll give me all kinds of cash and hookers and blow for this, you know? So all of a sudden they have a very different desire to disclose, and in the way they do it. There's the volume we know about, and the time we had to dig into it. Like Steve said, there were a few years where I was a consultant and I didn't work a whole lot, enough to live, and I spent all my other time on OSVDB. And those years, our numbers jumped dramatically because I was scraping change logs. I would go through, like, the Apache bug tracker. And if you've ever been in that thing with all of their projects, it's crazy. And yeah, I was the dumbass that actually went and said, okay, I'm gonna search for the word security and start reading every goddamn ticket Apache's ever written with the word security in it. Pull out every denial of service, every stupid little vuln, every race condition, every local permission issue, you name it, and we put it in our database. You still have to make a guess, right? Because half the time it's like six words. Right, some of them were. Fixed permission problem, what does that mean? Does that imply security issues or usability issues? Yeah, so not only did we have to write the entry, we then had to add a disclaimer. It says, due to the vague wording of this, we're not sure if it's a security issue. You know, you add this up, and yeah, all of a sudden the numbers jump.

So then we get to what I call the security metrics factor. Who in here reads the security metrics mail list? Anyone?
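[Editor's note: the change-log scraping Brian describes amounts to a keyword grep followed by a human reading every hit. A minimal sketch, with invented keywords and sample ticket text, illustrates both the approach and why ambiguous entries like "Fixed permission problem" still need a disclaimer.]

```python
# Hedged sketch of the "grep the change logs" workflow described above:
# flag ticket text containing security-ish keywords for human review.
# Keywords and sample tickets are illustrative only.
import re

KEYWORDS = re.compile(
    r"security|overflow|denial of service|race condition|permission",
    re.IGNORECASE,
)

tickets = [
    "Fixed permission problem on temp files",
    "Updated documentation for mod_foo",
    "Patch a possible denial of service in the parser",
]

candidates = [t for t in tickets if KEYWORDS.search(t)]
print(len(candidates))  # -> 2; a human still has to read each one
```

The first hit is exactly the ambiguous case from the discussion: the grep can find it, but only a reader can decide whether it's a security issue, and the database entry gets a disclaimer either way.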
Alison. Yeah, okay, stay the hell off of it. That is the biggest waste of time of academic masturbation you will ever see. As soon as you get close to a real statistic, these asshats like Fred Cohen jump in: what's a vulnerability? What do you mean, we have to define vulnerability now? So he does, he'll go down this path and say, well, you don't know what a vulnerability is. And I say, well, I've got an idea what one is. And then he says, well, there's an infinite amount of vulnerabilities. These stats don't mean anything. I said, well, it's not infinite. And I gave him an example. I was like, I have a 10 line program. There is a finite amount of vulnerabilities in this. He says, no, there's an infinite amount. I was like, 10 lines, it's not infinite, dude, trust me on that. And so he will sit there and argue, and this is just one example. And one way or another, they will figure out a way to make all the stats useless. And you're wondering, kind of, what's the purpose of this list again? Is it to get metrics, or is it just to kind of, like, have a civilized flame war? So long story short, we take all these factors in, and we come to the conclusion of what I think Jack Daniel said: that original stat I gave you is about as meaningful as my cat weighs 134 miles per hour. Without context, these stats mean nothing. Metrics aren't very helpful. I mean, how many of you, like you said, how many of you really cared that there were 8,000-some vulnerabilities last year? You don't. Come on, Alex. This is the perfect time for you. How many of you run all that software? No one.

There's also a broad factor. Again, I don't understand, Brian, why are you getting spun up? Because the numbers are just the numbers. What you're really bitching about is the fact that you don't have a model, right? So propose a model. And then I'll show you five asshats that will tear it down for stupid reasons and a bunch of panelists that will tear it down for good reasons.
And that's what we call the scientific method. I call it academic masturbation.

One of the problems is also that people need to be aware of what they can interpret out of a given number. Like, we have a lot of those cases: oh, there are 10 vulnerabilities in product A, 20 vulnerabilities in product B, which one is the safest product? Yeah. And then they even take the stats, perhaps even from our side. And that's one of the problems we have with VDBs. So people take those metrics and then they just start interpreting shit out of it. Like, oh, they might even add, because it's on the Secunia site, Secunia says this one is more vulnerable than that one. No, we don't. We just tell you there are 10 vulnerabilities in that product, there are 20 in that product. If you wanna start evaluating more, what if I add, for instance, that in the product with 10 vulnerabilities, they're all unpatched? The product with 20 vulnerabilities, they were all patched within a week. So if we factor in time to patch, which one is then the safest product? Some of you may have changed your mind now about which one it is. If I then go and add that the one that has 10 vulnerabilities, they were all classic stack-based buffer overflows, and the one that had 20 issues, they were more complex use-after-frees, which one is now the safest product? Which one would you prefer to use?

Yeah, I think that's a really good point, especially from a big vendor perspective, in terms of, if we've actually put in the due diligence, when we get a vulnerability report, to look for variants of that, and we fix all of those too, for it to be bucketed as something like, oh, well, they had more vulns.
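[Editor's note: Karsten's two-product example, 10 vulns all unpatched versus 20 vulns all patched within a week, can be sketched with a toy exposure score. The scoring scheme below is invented purely for illustration; it is not any vendor's actual metric.]

```python
# Hedged sketch of the point above: raw vuln counts invert once patch
# status and latency are factored in. The scoring is made up for
# illustration only.

products = {
    "A": {"vulns": 10, "patched": 0,  "median_days_to_patch": None},
    "B": {"vulns": 20, "patched": 20, "median_days_to_patch": 7},
}

def exposure(p):
    """Crude exposure score: unpatched issues dominate everything else."""
    unpatched = p["vulns"] - p["patched"]
    latency = p["median_days_to_patch"] or 0
    return unpatched * 365 + p["patched"] * latency

for name, p in sorted(products.items(), key=lambda kv: exposure(kv[1])):
    print(name, exposure(p))
# B scores far lower (140) than A (3650) despite having twice the vulns
```

The exact weights are arbitrary; the point is that any reasonable weighting of patch latency flips the ranking the raw counts suggest.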
But that's not differentiating between vendors who actually do due diligence and find additional variants or additional vectors, whatever, and those who just do the lazy thing and patch one vector incompletely, whatever, and then it shows up in these counts as: lazy vendor only had this one, diligent vendor had a bunch more. So there's no real way to differentiate the lazy from the diligent in this model. Real quick, in case anyone's curious, we're talking about Adobe. I was gonna say, we see it all the time through ZDI. The researchers are actually quite good about testing the vulnerabilities. And I can't tell you how many times they come back: oh yeah, it still works, they didn't patch it, or there's another vector, whatever the case might be. Happens all the time. I do wanna hear from Katie about silent patching.

Hold on, hold on, hold on. I have one more, good mouthful of beer. I wanna claim that metrics are totally fine if you understand them, it's your context, and you wrote the metric. CERT has this awesome metric, and in case anyone doesn't know, we still publish vulnerabilities once in a while. It goes from zero to 180, two decimal points of precision. So you can tell which vuln is more important, because there's a number and you can sort them. Which is totally worthless to everybody in the world except the people at CERT, and actually it only was worthwhile to us years ago, when we used it to decide whether or not to publish document A or document B. It was very worthwhile for that purpose at that time, and that's it. So it is context.

Right, my point is that you have that one line vulnerability and then you have an 87-line disclaimer rider saying, this is what it really means. It's subjective. I mean, a lot of this stuff is. So that's kind of the trick. I might care about some vuln, you don't care about it at all. I might care about Secunia's vulns or Microsoft's. I might not care about all the PHP includes you guys have. Maybe I do.
Maybe I'm a PHP web app developer and I'm... You're sick. That's my point, though. That's how people are getting owned. PHP. Then their metrics are bad, because they're not telling them that PHP matters at the moment. Yeah, that's kind of it. Everybody always asks about Oracle vulnerabilities. Unless you're Litchfield, nobody ever cared. All right, unless you're a pen tester pointing out how broken every deployment of Oracle ever is, nobody cares. But people get owned with cross-site scripting, and, you know, everybody's PHP blog was getting owned left and right four or five years ago. And so, still are, right, exactly. And so that's my point. I like the way OSVDB does it, because I actually know what is the attack surface available to attackers. I think that's important.

That's entirely dependent on who the researchers are who are concentrating on things and what they're concentrating on. Those of us who've been in this industry since about 2005 or something like that remember a Latvian teenager, age 14 or 15, who basically decided to spend 10 minutes testing all the software that he could download and came up with 800 vulnerabilities within the course of like six weeks or something like that. And just a couple years ago, some guy from Debian basically used a super powerful vulnerability detection tool called G-R-E-P. I'm not sure what the acronym stands for. And he found like 500 vulnerabilities or something like that, right? And so we are still very much subject to pretty much the whims and the fads that researchers happen to go through. And even one individual researcher can have a big impact on what these numbers are.

What were your four I's again? The four I's? Incomplete, inaccurate, inconsistent, and incomprehensible. And you need a fifth: ignorant.

All right, we're gonna move on. What are your thoughts on the value of vulnerabilities? Bug bounty programs, vulnerability buying and selling, impact on disclosure?
And we're gonna give it over to Katie, and you can give her your little spiel, and we'll see what Dan has to say. Then I have a question for Katie. Oh, good. Go back to silent patching. I believe the moderator has asked me a question. So the question about bug bounties and that type of thing. So I don't know, how many of you guys saw or heard about the talk that I gave a couple days ago at Black Hat? Okay, all right. Well, I'll just fill you in as I go.

So I think that a lot of security researchers have varying motivations for what they do. It's not all money. How many of you out there who do this for a living, I mean professionally, have figured out ways to mint money on the back end of some financial system? Raise your hand. Liars, come on, there's more of you. Anyway, if money was it, right? There's a lot of unsavory ways that people with the dark arts know how to get money. Now, what folks like TippingPoint do is what we would consider the white market of vulnerability buying, and the numbers don't really end up equaling anything close to what the gray and the black market will pay, right? So there's a lot of researchers out there who think that it's important to get recognition, either publicly or among their peers. So when we looked at the possibility of doing some sort of a bounty program, a nominal fee for vulnerabilities, we looked at the motivations that were out there, and we looked at the motivations of the researchers who are actually finding vulns in our products, because not every vendor has the same profile of researcher that looks at their code. We're a pretty popular target for research, right? Popular, right? Yeah. But we're a pretty popular target for research. Other vendors might have different behaviors and different main motivators for the researchers who look at their products. So we looked at why researchers do what they do with ours.
And what we found was, this past year, about 80% of our vulnerabilities that were disclosed at all were actually privately reported to us. So 80% were privately reported, you know, gave us time to fix the issues, and the other 20% were dropped as zero days. Now, in that 80%, considering there are programs like, you know, Dan's over at ZDI, that would offer a comparable price to a bug bounty had we decided to do one, in that 80%, 90% of those reports actually came directly to us. So even though they could have made a small amount of money, you know, the majority of the researchers who find vulns in our products and wanna give them to us to get fixed actually prefer to come directly to us. So that's what we found when we took a look at that data. Now, we absolutely, you know, are fine with the researchers' ability to make money doing their vulnerability research. And I think there's some great programs like ZDI that are out there that, you know, we love. Actually, the quality of reports that come from these guys is really, really good. So, yeah, no, no problem. But, you know, thank you, but, but hold on. I was gonna agree with that. Agree with me later, hold on.

So, but, so that's what we found when we looked at our data, when we looked at our researchers, right? So instead of doing a bug bounty, because it seemed like, you know, there are lots of ways for researchers to make that money, we decided to do something different, and that's what I talked about a couple days ago. So if you go to www.bluehatprize.com and take a look, we decided to offer over $250,000 in cash and prizes for mitigation research. So we're looking for the next generation of platform mitigations. Top prize gets $200,000. We're gonna announce the winner next year at Black Hat, and the contest has already kicked off.
We've actually already gotten some entries to the contest. So top prize gets $200,000, second prize $50,000, third prize gets, you know, an MSDN subscription worth $10,000, and money, fame, I guess women, I suppose, if money and fame bring women, you know. But that's what we decided was, you know, sort of the best way for us to encourage the research community to do what it does but figure out, you know, ways to mitigate exploitation, because like I said, you know, vulns don't pwn people, exploits do, and we wanted to encourage the research community to work with us like that. Actually, I wanna disagree, because I think Microsoft is also setting a great precedent that they are rewarding not only badass exploits but the ones that are completely weaponized. So, your bug bounty does exist, and it exists in the sense that I write an exploit, it becomes really good, it owns 200,000 machines and becomes part of a botnet. Now you guys offer a reward for information on the botnet, so my motivation is now not just to write a Microsoft exploit but to write a badass one. That's actually a different thing. Yeah, you're thinking about the other reward that we have that has nothing to do with it, yeah. But yeah, in essence, you were still offering money on what is fundamentally a very good working exploit against Windows systems. Nope, that's not it. So, I think you're thinking about the Rustock botnet bounty, that's a completely different thing. That's a quarter million dollar bounty for info that leads to the incarceration of the people who wrote Rustock. Totally different. What I'm talking about is, I just announced this like two days ago, I'm sorry you weren't looking, but anyway, listen. So this is completely different. We're taking this approach where, look, there are open problems in modern exploitation that break our platform mitigations, things that break ASLR and DEP, right?
So return-oriented programming, JIT spray, that kind of thing, there are open problems there that we're working on mitigating. So what we're actually rewarding are, take one of those open problems, right? And these are for memory corruption vulnerabilities. Yes, I know I said it and you don't like it. But anyway, take one of those open problems in the exploitation of memory corruption vulnerabilities and come up with a novel mitigation. So basically next-generation ASLR, next-generation DEP, that kind of thing, you know, SEHOP, that type of research is what we're looking at. And just, just, well, so. He's asking, will the research be made public? So the question is, will the research be made public so that it can be used in other platforms? The answer is, it is up to the inventor. The inventor retains IP ownership of that research. We just get a license to use it, so the inventor gets to choose what the heck they wanna do with their research. They wanna port it to Linux, go for it, my friend. Enjoy, you know what I mean? So yes, if the researcher who wins chooses to make it public, they can do so. They own the IP 100%. From a vulnerability point of view, we can see, though, that to an extent, bug bounties do matter and they do motivate people. I'd made a nice slide and he killed it. So now I'm just gonna describe it like this. Take the case, for instance, of CA BrightStor. It had a fantastic track record from 2004, five, six onwards. There were like 80 vulnerabilities being reported in one of the BrightStor solutions, Laptops and Desktops I think it was called, in 2007, and it actually triggered us. And that was the time when ZDI was paying for CA BrightStor issues, and a lot of them actually came via ZDI. And in the beginning of 2008, as part of our yearly report, we actually went out and said CA BrightStor is a solution we consider to be inherently insecure.
Not only because of the number of vulnerabilities, because, as we already talked about, we can't look at that alone, but also because my research team found a lot of those vulnerabilities ourselves, and we could just see the code was terrible. So we went out and said we consider this product inherently insecure. A while later, ZDI came up, backed it up, and also stated that they would no longer pay for vulnerabilities in BrightStor. After that, how many vulnerabilities have been reported? So either they magically, suddenly just upped the quality of their product, or people just stopped giving it to them and found other places. And I think Adobe Shockwave is an interesting one as well, because that has certainly received a lot of attention lately also. And if I understand correctly, you don't pay for Shockwave anymore either? Well, we had a presentation at CanSecWest where we showed everybody how broken it was, and so after that, yeah. So, and I was also finding a lot of those Shockwave issues, and they have some problems in some of their components. And it's quite realistic to also expect that since ZDI won't pay for Shockwave vulnerabilities anymore, we will likely see a drop there, because people will find another target where they can get money. So to a certain extent, it definitely does motivate people in choosing which target they wanna go for. And this is one of the kinds of metrics that's much more informative about the relative security of a software package than counting the raw number of vulnerabilities that have been disclosed. I can make this quite short. I even questioned ZDI when it came out. If you go back to 2005, this room probably would have looked a lot different. The whole industry was different. The number of reverse engineers and researchers on the planet was far smaller, but it was a very naive position to think that that number was not gonna grow, that a black market was not going to spring up.
And if any of you have ever read Freakonomics, it pretty much proves that people respond very positively when there is monetary reward. And I think now, as ZDI has proven year over year, it's more and more popular. You know, I think we do a good job being responsible and being popular with both vendors and researchers. And if you look at everybody that's got their own vuln programs now, I think it's been proven that it's a model that works. Well, and for us, you know, the model that we chose for the Blue Hat Prize was something where, as a platform provider, we were looking at ways to scale such that we were essentially blocking entire classes of vulnerabilities with some of the research that we hope to get out of this. And, you know, certainly, what Simple was hinting at is, were we gonna share it with the community? And quite frankly, we got ASLR and DEP from the community. Why shouldn't we give back, you know? So absolutely, I think there's room for lots of models here. Every vendor is not the same. Not every vendor is a platform provider. You know what I mean? So for other vendors, other models might make sense, but for us it makes sense to try and make these changes that not only will impact our platform and our applications that run on it, but these are platform-level mitigations that will also help third-party applications on our platforms and mitigate some of those issues. So for us, we're looking at this in terms of sweeping away, or making much more difficult to exploit, entire classes of vulnerabilities. And I think this is a reflection of a growing trend in the area to move a bit more towards not only defense, like you're talking about with the Blue Hat Prize, but also prevention in the first place, right? There are entire classes of vulnerabilities.
We know about these. In the Common Weakness Enumeration, we document them, but we still have like 800 different CWE IDs, maybe 20 different ones for stuff related to buffer and memory corruption errors. I'm excited myself, sorry. I want to get on to the next question. Come on. This is a good one. I want you guys to talk about being the people that track and deal with researchers as well as vendors. Name names, tell us who they are. How do you really feel about working with certain researchers and vendors? And I know you guys are not gonna be shy about this, so who wants to talk about the research quality and vendor response? Okay, so. Brian, should you go first or last on this? Yeah, so I've had a few problems with researchers, and I think I'm the only one out of any of us up here that will actually reply on Bugtraq and Full-Disclosure and call them out on it. And part of that is, you know, yeah, quit being a dick and sending this really worthless information, and also just kind of to teach a lesson that if anyone's reading these lists, strive for a little better accuracy in your reports, because it's not just reflecting on you, it's causing a whole lot of headache on the part of everyone else involved. If Microsoft gets a report, and I know that they've gotten probably hundreds, if not thousands, of these, where there's enough information that they're like, wow, this sounds like it may actually be an issue, but the technical information isn't there, and then all of a sudden they're in this email back and forth and they spend two weeks all to figure out that, well, oh, wait, you have to have local admin privileges to do this, you know? So, to name names, one of the most recent ones for me was High-Tech Bridge, and I'm sure that one or two of you are in the audience. Hi, I'll respond to your mail from three weeks ago when I get home.
You know, they started releasing advisories, and it's obvious they're using them as a way to promote their company, and there's all kinds of really crappy stuff that they're releasing, because they're going after beta products, they're going after real low-hanging fruit. Not only that, but they'll find, like, oh, here's two cross-site scriptings in two different advisories. Oh, and we forgot, or not forgot, we just kind of missed the remote code execution, you know, and the serious bugs in it. And I don't know how many cross-site scripting issues I've seen reported where the error messages clearly indicate RFI or... Or SQL injection, yeah. And they're missing these left and right, and you're looking at it like, you know, if you guys would actually spend some time on this, you would find some really neat stuff, and you're not. And then they also have this habit of, as an example, it's like, oh, we're gonna contact the vendor and we're gonna give them two weeks, and the fact that we typed out the email and the vendor never got it doesn't matter, you know, we're gonna go and release in two weeks anyway. Bottom line is, if you're discovering cross-site vulns, nobody thinks you're cool. Yeah, yeah. Cross-site scripting is really old. It's really kind of lame, and it's one of those that... Ask Jon Oberheide. Yeah, well, if you're gonna do cross-site scripting, just wait every 30 days and do one post with like all 750 of them. Okay, so if you can own a mobile phone at Pwn2Own, then your cross-site is worth a crap. Otherwise, disclose it to the vendor or the website or wherever the hell, ask for some swag and be done with it. Right, and I'm fine with posting it to the list. It's just, don't think that it's anything other than, you know, a novelty for most of these. And the other big pet peeve is SQL injection.
It's like, well, here's cross-site scripting, and they will actually include the script code to exploit it, and you're like, okay, well, this is valid. And then when it comes to SQL injection, they're like, and the proof of concept is bracket SQLI bracket. Wait a minute, that's not proof of concept. That's saying, here's the script and here's the variable, and wait a minute, why couldn't they actually put SQLI exploit code in there? Is it because they're morons, or do they actually think, oh, well, if we do that, bad things will happen to the 87 installs of this software that you've never heard of? You know, either way, it's a cop-out, and yeah, it gets really tiresome, and I want to be clear that High-Tech Bridge has kind of been my whipping boy for the past year, but that's just the tip of the iceberg. You know, if I actually spent time to respond to all of these lame advisories, it would be more than a full-time job. I gave up responding years ago just because of the amount of time that it took to do that. Right, so we spend time responding, but it's to our researchers. We don't do it publicly. We accept about 30% of what is submitted to ZDI. A lot of that is vulnerabilities that we're not necessarily interested in. A lot of that is crappy submissions, and we want to work with the community, and we've seen researchers come up through the years to make those submissions better. That's obviously in our best interest, but to call someone out, I will call someone out and then I will also give them kudos. If any of you were aware of the policy change, the only policy change we've ever had with ZDI, we now enforce a six-month deadline, because there were some vendors that were kind of sitting on their hands. HP. And that's absolutely correct. And so it's actually been phenomenal for HP, because everyone decided, you know what, we're one of the culprits and we want to do this better. One of the other culprits was RealNetworks.
If you go back to last year and you see how many RealNetworks vulnerability advisories we disclosed, there were a lot, and they took that policy change very seriously, and look at how much better their software is. So yes, they were bad, but now they're good. So that's positive. We generally experience that. Like, in the past 10 years I've been involved with VDBs, I actually do think that researchers are getting better. They are getting better at providing the details we need. Don't get me wrong, we're still killing about 25% of what is posted on the lists. But the level of quality seems to be improving. Now, Katie has been baiting Steve and I for a while, so let's go back to the memory corruption issue. That is one trend that is going the wrong way. More and more people are using the term memory corruption. Seriously, if you're a researcher, then it's because you're damn lazy or you just don't really know what it is. There are a couple of valid cases where it's perfectly fine to call it memory corruption, but it's become this thing covering everything from a stack-based buffer overflow to a use-after-free. And we've even sometimes seen it where it's actually just a missing exception handler that results in an application terminating. So it seems to be the standard thing. Oh, I ran a fuzzer, something crashed. I don't really know what it is. Memory corruption, done, sent. That's the kind of stuff we get. And we see the same from vendors also. And it's like, come on. I mean, the vendor should hopefully know what the core problem is. Please tell us, is it a stack-based buffer overflow? Is it an integer overflow? Is it a use-after-free? What is it? Don't tell us it's a memory corruption. So I'll also chime in, because obviously, I'm here representing Microsoft, a vendor, but Microsoft also, we actually do vulnerability research on third-party products. I founded Microsoft Vulnerability Research in 2008 to do this.
And we started releasing advisories on third-party products for vulnerabilities we found and worked with the vendors to get fixed. So we see it from both sides too. We are both the researcher and the vendor. And sometimes we'll also be the coordinator: MSVR will step in and coordinate multi-vendor, super nasty apocalypse kind of issues. And we'll try and do our best to coordinate there. So we feel the pain from all three roles in disclosure a lot of the time. And yes, some of the researchers that we deal with are much more able to articulate their issue than others. But actually, we have seen that same trend where they do actually get better over time. And then... Art, do you see that on the CERT side? We stopped paying careful attention. We stopped counting vulnerabilities. We get maybe 30 direct reports, about 30 a month, so maybe one a day. And we don't run with all of them. But probably half or more of those we go with. The only thing that really bugs us is when we get the researcher who is looking for some extra fame. And their company's not famous enough yet, but maybe if CERT has an advisory, that'll help. So they'll be on us to make sure we publish something that has their name in there. Hasn't happened a lot in the past couple of years, but that used to really annoy me. But do you think that the quality of the incoming reports to you has improved? No, it's all over the place. There are great ones. And there are horrible ones. And I can't measure enough to really say there's a trend in either direction, but my gut feeling is it's about the same. And we actually see something really interesting too, in that a lot of researchers only come to us with one vulnerability ever. And they got lucky maybe, or they didn't like doing vulnerability research anymore. I mean, we don't really know what it is that made them come to us just one time and then disappear. I think a lot of times it's pen testing, same thing. Or accidental discovery.
I mean, you see something crash, you bother checking it. I think a lot of researchers don't look for variants either. I mean, that was a major pain when PHP application vulnerabilities first started happening. You'd have one researcher go, oh, I looked at this PHP golf application with 10 downloads in its entire history, and I found this cross-site scripting in these 10 different vectors, in 10 different parameters or something like that. And then like two days later, some other person, with a completely different report, reports 22 different vectors for the same vulnerability type. And there's a little bit of overlap, but not total overlap. And it makes it very clear that the depth of the research is not necessarily there. Yeah, and one of the last things I wanna say is about Microsoft and the fact that we are in all three roles of disclosure and vulnerability research: the finding, the coordinating, and the fixing side. As finders, when we go to different vendors, we've had to actually prove it just like any other researcher. We've had to prove it to them sometimes by popping calc. This has definitely happened in the course of Microsoft Vulnerability Research, where a vendor just didn't believe us. So we had to show them. But part of that mission for us is actually education for them, right? It's just like any other researcher. It's education like, no, really, this is exploitable, I promise, here you go. And they're like, why is this calculator showing up on my desktop? I don't understand. And then we use that as a way to start a conversation with them about secure development, because we're saying to them, look, we've taken our lumps over the years, we've learned our lessons in the following areas, and we'd like to help you because you run on our platform. We'd like to help you get better because that makes our platform more secure.
So we start talking to them about ways that they can catch these vulnerabilities earlier in the code, but it's an educational process, just like any other researcher who comes to a vendor and says, hey, your fly's down, you might wanna pull that up. We not only say that, but we also definitely try to make it so that they don't keep making the same mistakes over and over again. All right, so we're starting to get the hand signals, but I want to ask you guys. One last thing real quick. Real quick? Just as a heads up, there are multiple vulnerability databases that do this. The data is not public. When OSVDB has a data set, we will make it public, but one of the things that has been fun tracking is what we call researcher confidence, and OSVDB is actually gonna eventually track vendor confidence as well. So a researcher finds 50 vulnerabilities over the year, and let's say 45 are accurate. Well, that starts to give us a success rate percentage for finding a vulnerability, and at least one of the VDBs represented here, and it's not OSVDB, tracks it even beyond that. And when you start to look at these statistics, Steve Christey and I were looking at the data and we're like, oh, yep, we know this guy, yep, that's accurate, that's accurate. And it's amazing that some of these researchers that are well known and liked all of a sudden have a 60 or 70% success rate. How many of you knew that someone has a 30 or 40% failure rate, reporting vulnerabilities that aren't accurate, can't be reproduced, or have something else wrong with them? So down the road, look forward to that, because I think it'll be very telling, not only what we deal with, but for a lot of the big names that you guys recognize, it becomes neat. All right, so we're gonna be going to the next one here, he's telling me no, but I want one comment from Alex and maybe Art on what do you think about CVSS, and that leads us into our wrap-up. Two things that are wonderful about CVSS.
All right, so I'll back up. My problem with CVSS is this: it's an attempt at formalization of something that doesn't exist. I like the ratings, there's nothing wrong with weighting and scoring and trying to figure out how severe something is, but when you start multiplying ordinal values together, you break the fundamental laws of how the universe works. You just can't do that. And you end up with jet engine times peanut butter equals shiny. And you're telling me that the result is shiny. The second problem with it is decimals aren't magic. They're not unicorn poop. You can't just add them really neatly and suddenly it's a ratio scale. It doesn't work that way. So the problem is that it may be right where you have a 15.4 that is actually more severe than a 13.2, but when it is wrong, because you're doing the wrong things with math, it will be really wrong potentially. And that's dangerous. I like it, I just wish they wouldn't multiply things. Just give me a frickin' baseball scorecard thing and let me look at it, because I can look at that and digest it myself. So there are two answers. I have two answers to that. One of them is that there's... Fuckin' you. No, that was last night. I haven't had enough to drink. That's why I'm hoarse. CVSS version three: there are some rumblings within the Special Interest Group about thinking about that. So for those of you who are stuck with CVSS version two, warts and all, if you have any concerns, you can bring them up to me, or I'll name Katie as well, or Art, because we're all, one way or another, at least indirectly involved in this. The other thing is, to address at least some of the limitations, some of which you've alluded to, Alex, there's this thing called the Common Weakness Scoring System, which isn't aimed at vulnerabilities. It's aimed at, when you find a weakness, an indication of the potential for a vulnerability.
It still has multiplying of ordinal values by ordinal values, but it has continuous values built into it as well for those people who are sort of the expert users. I think we need to recognize that most people who are using CVSS, right? They need a score. One way or another, all they care about is the score. They don't necessarily care about a lot of the fancy math behind it. So my hope is that for CWSS, some of our lessons learned can feed into the future of CVSS. All right, with that, thanks for your time. Appreciate it. We'll be around. Find us for beverages, and thanks again. Good job, everyone.
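[Editor's note: for readers following the CVSS argument above, the version 2 base score really is built by mapping each ordinal metric to a fixed decimal weight and then multiplying and summing them. Here is a minimal sketch using the weights and equations from the published CVSS v2 specification; the function name and the spot-check comments are mine, not part of the spec.]

```python
# Sketch of the CVSS v2 base score equation, per the public CVSS v2 spec.
# Metric weights as published in the spec:
AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector: Local/Adjacent/Network
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity: High/Medium/Low
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication: Multiple/Single/None
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Conf/Integ/Avail impact: None/Partial/Complete

def cvss2_base(av, ac, au, c, i, a):
    # Impact and Exploitability sub-scores, then the weighted base score.
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# AV:N/AC:L/Au:N/C:C/I:C/A:C, the worst case:
print(cvss2_base("N", "L", "N", "C", "C", "C"))  # 10.0
# AV:L/AC:H/Au:N/C:P/I:N/A:N, a hard-to-reach local partial disclosure:
print(cvss2_base("L", "H", "N", "P", "N", "N"))  # 1.2
```

Alex's objection is visible right in the code: the metric values are ordinal labels ("Low", "Medium", "High") mapped to hand-picked decimals, which the equation then multiplies and sums as if they were ratio-scale quantities.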