I like to point out that when I worked at the NSA, that was back before we spied on Americans. Or at least before we told everyone we did that. So, yeah, when I was there (I left in 2005, I think) they made a really big deal about not spying on Americans, and you'd get in big trouble, and we had to do yearly trainings not to do that. And I left, and within a year there was all this stuff about how the NSA spies on Americans. And I was like, holy crap, I leave and it just goes to hell. So, yeah, it turned out that they were spying on Americans while I was there. They just didn't tell me. I was upset. Anyway, let's talk about InfoSec. So thanks for having me. I'm happy to be here. I'm actually a St. Louis native, so I just drove in from my house today. Yeah, St. Louis. So thanks for coming, everyone who's not from St. Louis. Welcome. I give a lot of talks, maybe 20 or something, and usually they're more sort of technical talks about research I'm doing. I do give a few keynotes, but I'm probably not quite as good at keynotes as technical talks, so please bear with me. Mostly because I don't really know exactly what a keynote talk is supposed to be about. From going to a lot of conferences, what I've taken away is that keynote speakers just talk about whatever they want. So that's what I'm going to do. So anyway, one of the things that bugs me, and I've been doing InfoSec for 15 years or something, and it's a fun field, it's exciting, and it's sort of adversarial: there's bad guys and there's good guys, and you have to make sure the bad guys don't win. So there's a lot of things that are really fun about it, and it can be technical or not technical, it's up to you. There's a lot I really like about it. But having done it for 15 years, I feel like, I don't know, are we doing this right? Have we totally screwed up? Are we better off now than 15 years ago?
And so that's kind of what this talk is about. And definitely feel free, if you think I'm saying something that's not true, or you totally agree or whatever, just feel free to interrupt me, I won't mind. All right, so that was a great introduction. Just some more stuff about me. I've done a bunch of mobile security. I won this contest called Pwn2Own four times, which I'll talk about a little later. This is a contest where if you can break into a computer or a phone or something that's fully patched, then you win the actual device and some money as well. And that's why it's called Pwn2Own: pwn, like hack. Anyway, I won this thing four times. I wrote a few books. Lately I've been doing a bunch of car hacking, so that's a lot of fun. And then I have some letters after my name, so. All right, so this is basically what happens when I talk to people who aren't in the field and I say I do computer security, or I'm a hacker or whatever. This is what they think of: articles like this, which obviously, you know, are hard to compete with. I mean, look at this. The computer in this article is like a 386 or something. You know it's bull just by looking at it. But anyway, not everyone knows it's bull. So the question I was asking myself when I was putting these slides together is: are we really better off now than in 2007? I chose 2007 as a sort of arbitrary time frame, based on that's when I started giving talks at conferences, but you could choose any time you wanted. So here are talks from Black Hat, which is a big conference, you know, some would say the biggest industry conference for computer security. Some of these talks, and you can tell by the font which are grouped together, came from Black Hat in 2007 and some came from Black Hat in 2013, last year's version. And if you read through them, it's not obvious to me which are which, right?
So on the one side, you've got things like database forensics, understanding the heap by breaking it, static detection of application back doors. Are those things that we cared about seven years ago and solved? Or are these things that we care about now and are trying to solve? And then on the other side, there's some SQL injection kind of talk, CSRF attacks, and things about PDF exploits. Again, are those the ones that we've solved because we talked about them seven years ago, or are these the ones we still care about? So anyone want to guess which side, right or left, are the ones from now, from the conference this year? Anyone want to have a guess? Right side, someone said. Anyone else? He says left side. So it's hard to tell, but it's actually the right side that's the newer talks. So it's like, why are we giving talks at conferences if seven years later we can't even tell whether we've improved, right? It's kind of depressing. And then, of course, the gold standard in computer security, as to whether we're doing a good job, is whether people are getting breached and losing their data. And if you look at the number of breaches we've had, and I didn't even include Target, there is really not a big difference between what was happening in 2007 and what is happening now. Half of these, and if you know your history well, you'll know which ones, came in 2007, and half came last year. So we're still getting breached. By that measure, we're not doing that great a job. Here's two things I've taken off the Microsoft site. I'm not picking on Microsoft; they just had good data, so I went with them. So on the left, I won't even make you guess here, because one says Internet Explorer 7 and the other one says Internet Explorer 8, I think, so you would know. Anyway, on the left is from 2007, a monthly patch update from Microsoft. And these are all remote code executions in Internet Explorer.
On the right is one taken from sometime in the last year. Again, all remote code executions in Internet Explorer. Okay, so we have a new browser. We have the SDLC, we have fuzzing, we have all this stuff. But we still have a nontrivial number of bugs being patched in Internet Explorer. So are we doing well or not? And maybe, even though we're not doing better, at least we understand what we're doing? I don't know. Look at the headlines and what reporters are reporting on in our field. On the left: iPhone flaw lets hackers take over. And on the right: your TV might be watching you. Is either of these more or less scary or frightening? Actually, neither is even important to our lives as security professionals. I would say they're both probably not very important to protecting our customers or our students or professors or whoever your job is to protect. Neither of these two headlines probably matters, but this is what everyone cares about. And I actually covered up the right side of that picture on the left one because it's got my face on it. I actually caused that headline, even though I'm telling you that you shouldn't worry about it. All right, so again, to me it's depressing that you can't tell the difference. Nothing has changed since 2007 as far as these eye-popping headlines. I mentioned this contest. It happens every year. These security researchers get together and try to break into a fully patched system. So maybe this is a way to judge which systems are more or less secure, which ones are harder, whether things are getting better. Maybe in 2007 we were able to do it, and now we can't do it anymore, it's too hard. Well, in 2007: Mac hacked via Safari browser in Pwn2Own contest. And in 2013: Chinese security team exploits Safari security flaw. So exactly the same thing happens every year.
So every year we have the contest, and every year all the things get hacked. Are we improving? By that contest, you can't tell if we are or not. The only difference is it took one guy in 2007 and a team in 2014 or '13 or whenever that was. So I'm depressed. We work hard and we try to protect our users, and I, as a researcher, try to help. But are we making a difference? The sort of thing that I think about, in fits of depression, is that one of the things you learn from all this is that no matter what you do, you can always get hacked. Right? Whether you spend a million dollars on your budget and you've got this huge team and you're doing everything right, no matter what you do, it can still happen. And then you've got some people who aren't really doing almost anything, and they still might get hacked, too. And it's kind of hard to tell from the outside which of these two happened. Right? So you read about Target, or in St. Louis there's this grocery store called Schnucks, and they got hacked. And people ask me, like, oh, those guys are idiots, right? They totally got hacked. And I was like, you can't tell. They might have been doing everything really, really well and did everything right, and they just got hacked anyway. Or they were completely ignoring the problem and got hacked because they were totally negligent. From the outside, there's no way you can really tell. And basically, the reason I got into computer security is because I thought it was fun and interesting. I was actually hired at the NSA not to do computer security but to do math, which is what my background is in. And I was like, nah, this computer hacking stuff looks way more fun.
And so now I'm at the point where I don't think it's really that fun anymore, because if you're a defender, no matter what you do, you can still get hacked. And, you know, lose your job or look bad or whatever your biggest fear is. That can still happen. I work at Twitter, and my job is to try to make sure Twitter doesn't get hacked. I'm doing my best, but there's no guarantee that tomorrow they're not going to get hacked. Right? And likewise for you guys. So as a defender, it's not very much fun, because no matter how hard you try, the people on the outside can't really tell how hard you're trying, and you can still get hacked. And if you're an attacker like me, someone who does research and wants to find new, interesting flaws or write exploits or whatever, it's not even that fun either, because now I hack a car, I hack the latest version of Chrome, and what do people say about that? It's like, oh, yeah, I'm not surprised, everything can get hacked. Well, yeah, I know everything can get hacked, but still, it's hard. So anyway, it's not fun for anyone anymore, and it's sort of a bummer. This is why, when I go to conferences, I do this instead of going to the talks. This is me at Black Hat last year. So what else is wrong with our industry? A lot of people's main job, maybe some of yours, is to make sure that your network or your whole area is compliant with something, PCI or whatever you happen to need to be compliant with. And as we've seen, like I said, the gold measure isn't how many IDS boxes you have or what brand firewall you have. The real measure at the end of the day is whether you've been broken into or not. And if you use that as a measure, compliance is not helpful at all. So look at Mandiant, which is a good company to call if you ever get hacked.
So they help people figure out what happened, recover, that sort of thing, and they're a very large company. Every single client that they helped in 2013 was PCI compliant at the time of their break-in. So, obviously, compliance didn't help them. Compliance is some really low bar that you have to at least meet, but it obviously isn't going to keep everyone out, because here's proof that being compliant doesn't determine whether or not you get hacked. Everyone likes to pick on Target, and you can say, well, Target was totally incompetent, but they were PCI compliant. So by that measure of security, they were fine. And then the worst thing is, I talk about how no matter how hard you try, how great a job you do, you can always get hacked. I say that, but the types of people who can attack you at that point, there aren't so many, right? They have to be pretty advanced or lucky or something. Whereas if you don't do much, then sort of anyone can hack you. The sad thing is, if you look at this study, it says two-thirds of people who successfully attack websites do it just for kicks, right? So that means it was so easy that they could just do it as a hobby, for fun. It's like, ah, come on, if you're going to attack me, at least be, like, China or something. Don't be just some kid who's bored. And then there's one of my personal things. I'm more or less a professional bug finder. I audit code, I look for bugs, I write exploits. Even during my research, I'm looking for problems. So one of the things that really drives me crazy is that, 15 years later, 7 years later, however you want to count it, we still can't find bugs in software. We don't know how to flush out all the bugs. And here's the worst proof ever.
So most Linux desktops, anything that has a window manager, have this thing called the X Window System. And there was this flaw found in it in 2013 that would allow local users to elevate their privileges to those of root. And if you look at the highlighted part, which you probably can't read, I'll read it to you. It says: this bug appears to have been introduced in the initial RCS version 1.1, checked in on May 10th, 1991. This bug is more than 20 years old. It's been in every Linux distribution for more than 20 years, and we never found it. I mean, what are you going to say about that, right? And I've personally looked at X for bugs, a different version maybe. So people are looking for bugs in this, and it's open source; anyone could have looked at the source code, and we didn't find it. How can we expect to secure our web browser or our email client or anything if we can't find this bug in over 20 years? So, why do we suck? Basically, one of the main issues is that no matter how good you are, if your job is to protect your enterprise or university or whatever, you're at the mercy of all the products that you use. And these products that we use are essentially insecure. And the reason, and I'll get into this a little bit more, that these products are insecure is that making them more secure, and I don't even want to say making secure products, because I don't know if we can even do that, but making more secure products, costs more money. Of course, right? You need more people, more resources, more time. All these things cost. And that's fine. But the worst thing is that you can't measure the security of a product. So it would be fine if, say, I had to choose between two document viewers or spreadsheet applications or something.
And I could look, and one had a security score of nine and one had a security score of five, whatever that means. And the one that was a nine costs twice as much as the one that was a five. Well, these are numbers I can think about and assess. Is it really worth it to me to pay that much more for a product that is a nine instead of a five on security? And I could make decisions. Some people who are really concerned about security would buy the one that was more expensive. That would work things out eventually, right? Because the people selling the expensive one would make money and could keep making their secure product, and everyone who cared about security would be happy. And other people would have the cheap product, and they might get broken into, but that would be okay because they didn't care so much about security; they cared more about cheap things. Well, we can't do that, right? Right now, if I take two spreadsheet programs, how can you tell which one is the more secure one? How can you tell who spent more money on making it secure? You can't, right? Even I can't. As a professional security researcher, it would take me a long, long time to figure out whether OpenOffice or Excel was more secure, and even then I wouldn't be 100% confident. So we can't tell, and that is the incentive for a company to say, well, no one can tell how much I put into this product, so I'm just not going to put hardly any work into it. Right? Why would you not do that? It doesn't make sense. Why would I spend all this money? None of my customers are even going to know. So I'll just patch it when there are bugs. So, I mean, this is basically the crux of why I think we're in such bad shape.
And then, of course, there's the defender's dilemma, which I've now encountered as someone who's worked on the defensive side. As an attacker, which is what I've spent most of my life doing, you just have to look around, find a bug, you get in, you're done. As a defender, I need to make sure there are no bugs, and that's immensely harder than just finding a bug, right? All right, so this is more about what you do for a living. You build a system, you go and buy all these expensive IPSes, and you have all these great products that your vendors have given you, or sold to you, excuse me. All of your computers have antivirus and all that kind of stuff. You're totally locked down. You've done everything right. All your systems are patched. Your users are trained not to click on phishing links. Everything is great, and then what happens? Oh, there's a zero day. Okay, and I'll talk more about this, but what are you going to do? Your IPS doesn't know what this is. Your antivirus doesn't know what this is, but it still affects you. So even in a perfect situation, if your attacker is that sophisticated, you're kind of screwed. And it turns out that that user has access to some database, and the attack just looks like normal traffic. And before you know it, you've lost, and you did everything right, which is pretty disappointing. Again, because what about the company that didn't do everything right? Well, they would have lost to a weaker attacker, but they still would have lost. And the reason is that no matter how many security products you buy, all of your users are still using insecure products. They're all still using, you know, maybe Chrome, but probably Internet Explorer. They're all still using Word. They're all still using Adobe Reader.
They're all still using all these things on their laptops or their desktops or whatever, things that have had a history of problems, and that history is going to go on more or less forever. So, I talked about how we can't find the bugs, and this is sort of a defender's dilemma thing, too. I do this thing called fuzzing to find bugs. You just hammer on an application with different inputs until it falls over, and then you may have found a security problem. And as an attacker, I was often asked, well, how do you know when you're done? How do you know that you fuzzed enough? And as an attacker, I was like, well, I know I fuzzed enough when I found a bug. Right? Because that's all I need. I need one bug. I can give a talk about it, or I can write a paper about it, or I can write an exploit or whatever I care to do. But now that I work for a company where I care about finding all the bugs, it's a much harder question. Now when do I turn off the fuzzer? And I don't necessarily know. But here is what Microsoft says. This is from Microsoft's Security Development Lifecycle documentation, on when to turn off the fuzzer. They say, let's see, here's the important part: a minimum of 500,000 iterations, and have fuzzed at least 250,000 iterations since the last bug found and fixed that meets the SDL bug bar. Or, for Xbox, 100,000 bug-free iterations. That means they just draw a line in the sand at some number: we fuzz until we hit this number, and we stop. Well, I don't necessarily agree with that, because what happens if the 250,001st test case would have found a bug, right? But now that I'm sort of on their side of things, I kind of get why they do it. What else are you going to do, right? How do you know when you've tested your product enough before you ship it?
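The stopping rule he's quoting can be sketched as a simple loop. This is a toy illustration only: the `target` function, the planted crash condition, and the small thresholds are all made up for the example, and real fuzzing harnesses (running actual binaries, mutating real file formats) are far more involved.

```python
import random

def target(data: bytes) -> None:
    """Hypothetical stand-in for the program under test.
    Any exception counts as a 'crash'; here we plant a bug
    that triggers on one specific two-byte prefix."""
    if data[:2] == b"\x13\x37":
        raise RuntimeError("crash")

def fuzz(min_total: int, min_clean_streak: int, seed: int = 0):
    """SDL-style stop rule: keep fuzzing until we've run at least
    `min_total` iterations AND at least `min_clean_streak` iterations
    have passed since the last crash-inducing input."""
    rng = random.Random(seed)
    total = 0          # iterations run so far
    clean_streak = 0   # iterations since the last crash
    crashes = 0
    while total < min_total or clean_streak < min_clean_streak:
        data = bytes(rng.randrange(256) for _ in range(4))  # random input
        total += 1
        try:
            target(data)
            clean_streak += 1
        except RuntimeError:
            crashes += 1
            clean_streak = 0   # a new bug resets the clean-run counter
    return total, crashes

# Usage: stop after >= 5,000 runs with >= 2,500 crash-free runs at the end
# (Microsoft's real numbers are 500,000 and 250,000).
total, crashes = fuzz(min_total=5000, min_clean_streak=2500)
```

His objection lives in the loop condition: the rule guarantees termination, but says nothing about whether the very next input after you stop would have crashed the target.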
So they have this sort of arbitrary way to do it. And I have some ideas on how to do it, too, but it's hard, right? It's hard to make secure software. We don't know how to do it, or else we'd be in a lot better shape. So I mentioned zero days, and I'll talk a little more about those. Most organizations that are attacked are probably not attacked with zero days. They're attacked with phishing, or exploits that are a year old, or all this other kind of stuff, because we can't even do the basics. But some organizations are actually pretty secure, hopefully a lot of you guys; since you're here, you care enough about security to be here. You probably make sure your systems are patched and that your users are somewhat smart. I mean, that's another thing. I don't even have slides about this, but I was thinking about it earlier. There's this big thing about, oh, we want to train our users not to click on phishing links and to know the difference between a good executable that they download and a bad one or something. And to me, this doesn't make sense. We're the professionals, right? These other people, they're teaching classes, they're making products, whatever they do; they're not security professionals. They have better things to worry about. If the security of our enterprise depends on them making good choices about which links to click, we're screwed. This is not what we need. We do not need to make sure that our users are perfect, because they're not. You know, it's 3 a.m., they've been up all night, and they get an email; there's no way we can trust them to click the right thing. It's not that they're dumb, it's just that that's not their job. It's our job, right? It's our job to protect them from making mistakes. So to rely on user training, I think, is a huge mistake.
We need to build systems and networks and enterprises that are resilient to users clicking on random stuff. Anyway, that was an aside. Okay, so back to the zero day stuff. Zero days are basically the weapon of the very good attacker that hopefully a lot of us don't ever have to deal with. But it's there, and we need to think about how that affects us as defenders. So just so we're all on the same page, since we probably all at least know what this word is: a zero day is an exploit, a vulnerability maybe, but what we care about is the exploit, that exists against a product and that no one knows about. There's no way you can easily have signatures for it, for example, since it's unknown. You can't have a patch for it, because it's unknown. So you can be fully patched, fully up to date, and still get attacked by it. It used to be, when I talked about this stuff years ago, that people didn't even necessarily believe these existed. Now I think we know that they at least exist. But the bigger questions are: how do you protect yourself against them? Why do they exist? And can we somehow, as a group, make sure that they're not around? So that's the stuff I want to talk about for at least a minute. Basically, what they're used for is to attack targets that are very hard. Say, as an attacker, I've tried phishing them; it didn't work. They had some IDS that stopped my binary or something, I don't know. I've tried to attack them with some Metasploit module; it didn't work. Well, now what am I going to do? I'll attack them with something that they can't easily defend against: a zero day. Of course, zero days aren't always bad, right? If you're in The Matrix or you want to take down an alien spacecraft, zero days are your weapon of choice.
But there aren't that many cases where they can be used for good, though it is possible. So who does use them, besides Jeff Goldblum? Some penetration testers, the very good ones, will use these. Of course, bad guys use them to do bad things. Maybe corporations; I hope not. And it used to be governments, question mark, and now it's governments, period. We know that governments use these as well. And then here are some funny quotes from our community of InfoSec people about zero days. This woman, Raven, is what she goes by. She says zero day can happen to anyone, which is totally true, by the way. She was giving a talk, like me, at a conference like this, and someone broke into her computer while she was talking and did something ridiculous with it. And this is what she had to say afterwards: well, that can happen to anyone. And everyone really gave her the business for this. Like, oh, you can't even secure your own computer, and I can't believe you're giving a talk at a security conference. And it's like, no, she's totally right. She was years ahead of what other people were thinking: listen, I did everything right, so what am I supposed to do against an attack that can't be helped? So anyway, that was something I thought was funny. And then there's this guy, Dave Aitel. He runs a company called Immunity, and one of the things he does sometimes is buy zero day exploits from people to use in his products or as a penetration tester. And with regard to that, a zero day is only a zero day until someone tells. It's like a secret, right? If someone tells the secret to everyone, it's not a secret anymore. So this is his quote: sometimes we get burned, sometimes not. Sometimes the person sells it to him and then tells everyone, and sometimes they don't. And then here's a quote by me, from when I had a little more hair. I say: like all good researchers, I sat on the issue.
So I'm talking about how I had found a bug, and instead of immediately reporting it, I just waited a little bit, and then I reported it at this contest and won a computer. So again, you have to think about where the incentives are for people who find bugs. Where are the incentives to look for bugs, and for people who find bugs to report them, right? Here the incentive for me was to wait and win a computer. If there had been a different kind of incentive to report the bug immediately, people would have been better off, but at least I reported it. Okay, so what do we know about zero days? Well, we know a few examples, and there's a little bit of data on them, but not much. Here's one exploit that I found and I sold, and, you know, this happens, right? So let's talk about it, and about how we can make a system where this may or may not happen again in the future. This happened in 2005, a while ago. I found a bug that allowed remote root access to Linux systems that ran a service called Samba, which some of you may or may not know; it's a service that lets Linux boxes talk to Windows boxes. Anyway, I found it. I called it the baby bug, because I found it when my first child was born and I was on paternity leave, and while he was napping I just looked for bugs and found it. I sold it to the government in August 2006, and then someone else found it in 2007 and reported it to ZDI, which is this company associated with TippingPoint, which makes IDS systems; you can report a bug to them, they'll pay you some money, and then they'll report it to the vendor as well. So think about this. This was a zero day that on many, say, university systems would give you remote root. It was unknown, or unfixed, I should say. I knew about it, and a couple of other people knew about it; there might have been a lot of people, I don't know. Two years went by where no one who could fix it knew about it.
The people I sold it to, who happened to be the U.S. government, had it for 10 months. So it's like, what did they do with it? I don't know. I can take a pretty good guess, but I don't know for sure. All right, so that's supposed to give you a sort of time frame on how long these things last and why they're a problem, right? So here's another one. This one, again, is kind of old, but it's one of the few where we have an actual timeline on how long it lasted. This was a bug, an exploit, against Adobe Reader. It was discovered, we don't know for sure, by some bad guy in 2008, and he sold it in 2009. We, meaning Adobe, saw exploitation happening in January of that year. It was discussed on various vendor mailing lists in February, and a patch was available in March. So here you can tell that the time during which this attacker could use this weapon against users was somewhere between three months and up. Maybe a lot more, we don't know for sure. Which is what I just said here. And the worst case is that once they started talking about it in February, at that point it's essentially not a secret anymore, right? But as a user, you can't patch it. There might be some signatures or something for it, but at that point there's no patch, and the information is out there. So there are going to be exploits; it's a race between the good guys, who can make a signature, and the bad guys, who can make an exploit, or vice versa. So that's even worse, really, because it's out there. So here's maybe the last bit of data that we know about zero days. Again, this is from Justine Aitel; she used to be CEO of Immunity, she's not anymore. From their experience of buying zero days and using them and so forth, she says that the average lifespan of a zero day is just under a year. So that's a long time between a bug being found by them, and maybe by other people too, and it being patched.
So the shortest ones they had lasted about three months or so. The longest one was almost three years. So there was some exploit that they had that worked for three years without a patch or a way to easily detect it. And then I mentioned this company, ZDI. You can give them vulnerability information, they'll pay you a little bit of money, and they will then start the process of getting it fixed. And you can go to their website and see which ones they know about but haven't told the public about yet; they've told the vendor, and they're waiting for the vendor to fix it. So you can see here, if you can read the fine print, this is a huge list of bugs. I can barely read it; in fact, I can't. But let's just say it's a long list of bugs, and they're not getting fixed anytime soon. So there are a lot of known zero days out there, right? And what do we do about that? Again, hopefully that's not a problem we actually have. Most of our organizations can't even keep out the unsophisticated attacker; it's only the really sophisticated attacker that's even going to have zero days. So you have to make a decision about who your enemy is, who you're defending against. Are you defending against the teenagers? Are you defending against the super sophisticated cyber criminals? Are you defending against the NSA, right? And based on who you think is targeting you, you know how to defend yourself. So if you don't care about the NSA, and you only care about keeping things safe from, like, teenagers or rogue students or something, then you can do something a lot different. You don't necessarily have to worry about that. All right, so I already mentioned that the reason all of our security is not so hot is that we use these products and they're not that great. So what do we do?
Who's to blame? I already mentioned that vulnerabilities in products are essentially the root problem, and that vendors have a big incentive to get products out the door. Adding security is an extra cost, and there's no measurable benefit to consumers. Say I want to shop online for books, and I want to use the most secure website, because I don't want my information to get lost. And I'm willing to pay a little more for that, right? I'll go to the more expensive website if it's more secure. But how can I know which is more secure, say, between Borders and Amazon? There's no way to know that. They each add the little "Secured by McAfee" symbol. Okay, great. But as a consumer, you can't make decisions based on which products or websites are more secure, so you can't affect the amount of money that companies spend on security. The other thing is, suppose a company makes a product, and it has a problem, and it leads to a breach. Does that company even suffer? So, like, the Target CEO got fired, so maybe that's good — although I heard he was doing some other crazy stuff, too. But how did that breach happen? What was the underlying cause? What was the vulnerability? Did FireEye screw up? Was it a person who screwed up, or a product? What was the root cause, what company is to blame for that root cause, and did that company suffer? My limited research shows that companies don't ever really suffer from making insecure products.
Like, maybe the company that used the insecure product — Target, in this case — might suffer. Their stock might go down. But the company that gave them the product that led to the breach didn't actually suffer much, I bet. So here's an example. I wrote this iPhone exploit in 2010 that was, like, super awesome — I could just send you a text and take over your phone. And it was a big deal, to me at least. And I thought, okay, I bet Apple's really going to suffer. People are going to stop buying iPhones, because, holy cow, people can just attack me, right? Well, you can look at their stock — that's essentially what Apple cares about; Apple suffers if its stock goes down. So you look at their stock on the day I announced it, when it was in the newspapers and all that. The stock did not go down. In fact, it went up a little bit. So I guess that just proves there's no such thing as bad publicity. All right, so maybe that's just Apple — Apple's the freak of Wall Street, right? So how about Microsoft? When the Nimda worm came out, which was a pretty big deal — companies were shutting down for days at a time — that happened back on September 18, 2001. You can see it right there on the Microsoft stock chart. Again, the stock actually went up when that happened. Maybe the whole market went up that day, I don't know. But still, they definitely didn't suffer much, even though they had produced a product that caused a lot of us IT people a lot of late nights. So if there's no incentive to make secure products, why would companies do it? Well, the answer is they don't really do it.
So the only thing you can do — and basically you need to realize this — is accept that all the products you're going to be using aren't very secure, and try to design your network, your defense, around that fact. Like, okay, I give up: this computer's going to get taken over at some point because it's running Office or whatever. But I'm going to make sure that the bad guys can't exfiltrate data, or can't attack other systems nearby. So there are still things you can do, but you have to take as your starting premise that the products you're using aren't so good. And again, you're not going to be able to keep out everybody. The best you can hope to do is keep out the people you care about. If you care about teenagers, or you care about cyber criminals, or whatever, just try to make your defense good enough that they move on to the next target, right? There are a lot of universities, and if your university is the most secure, then hopefully, if they're not targeting you specifically, they'll move on to an easier one to attack. And then this final note: if your enemy is a government — for example, if you're the White House, or, I don't know, maybe Stanford — if you care about the Chinese government, you're basically going to lose that battle. Because no matter what you do, no matter how much money you spend on defense, no matter how secure you make your products, the government is going to outspend you. And there's a quote here that says countries spend billions of dollars to create new armies and stockpiles of digital weapons. You can't outspend that. They're going to win that battle. So, you know, maybe the government will save us from our insecure products? Right now, there are no laws that say products have to be secure.
We can't even measure the security of products, so we're not going to be able to make laws very easily, I don't think — and it would be us, the people in this room, versus companies like Microsoft and Apple that have a lot of money to spend. Right now there's no system to even make this happen. So, for my toaster, there's a group called Underwriters Laboratories that makes sure it's safe, right? I can plug it in and not worry about it catching fire. But there's no such thing for the security of software. It'd be awesome if there was: "Well, we could release Adobe Reader, but it didn't pass the UL test, so we can't. We're going to have to wait another month and get it retested." There's nothing like that, but it would be cool if there was. Well, maybe the military will save the day? They're not going to do that either. The problem is that vulnerabilities can be used in two ways. Say there's a vulnerability in a web browser: attackers can use it to attack you, or you can patch it and become more secure. But you can't do both of those things at the same time. And the government wants to do both — and they're going to make sure they have plenty of weapons before they start to patch things. So they're not really going to help. And then finally, maybe the reporters of the world will help us see the light. They're not necessarily doing that either. Reporters, like everyone else, are trying to do the best they can. But what motivates them is selling newspapers, right? It's getting page clicks — not necessarily saving the internet from attackers, which is maybe our motivation.
They could write investigative journalism on how compliance doesn't work, or how great the new application sandbox is, or how some company really invested in securing its architecture — but those stories are boring. I wouldn't read those. So they're definitely not going to write them. They're going to write stories about the latest, greatest new attack — like the ones I mentioned about your TV watching you, right? Whether my TV is watching me is sort of scary and creepy, but it doesn't really affect me protecting an enterprise or a university. The biggest threat to, you know, UMSL getting broken into isn't that the TVs are going to start watching the students. That's the last thing they need to worry about — but it's the thing they might start worrying about if they're only reading newspapers and using that to guide them. A lot of the other headlines up here are about mobile attacks, because I've spent a lot of time doing mobile attacks. Mobile attacks get huge press, but really they're a distraction. If you look at all the breaches, none of them are caused by mobile attacks; they're always caused by desktop stuff. Everyone's so scared about mobile, but it's not really what we should be focusing on. And I'm as guilty as anybody. As you saw in this talk, I try to be good, I try to do the right thing and help out, but I'm not the superhero of the internet — I still have to make decisions like everyone else. So there's this thing called stunt hacking. If you read — this is in some magazine — under my name, for my accomplishment, it says "world's best stunt hacker." So what is stunt hacking?
Stunt hacking is showing exploits or doing something that's like, wow, gee whiz, but really has no effect on actual security. An example is the thing about your TV watching you. I would say a lot of mobile security is actually that. And hacking cars is a great example, right? Again, if you're trying to defend your enterprise, the fact that one of your employees' cars might get hacked really should be at the very bottom of the threats you're concerned about. And yet this is the thing everyone wants to talk about. It's like, well, you guys are just talking about the wrong things, you know? So here are more examples of stunt hacking, and sadly, I'm to blame for some of them. The point is that while everyone's over here looking at, wow, look at this new article, did you hear about that latest attack — the real problem is the same old stuff we can't even do right in the first place. Stunt hacking is something that distracts us from real threats, and we should try not to get so distracted. All right, I know lunch is coming, so I'll try to wrap this up quickly, because I'm hungry too. And if you haven't figured it out, that was the section of the talk blaming newspaper reporters — and obviously I'm to blame as well. Researchers aren't really helping that much either, including myself. I report bugs and they get fixed, but it hasn't really improved internet security. So why is that? Well, it used to be that researchers did this stuff just to show off to their friends. But things are changing.
So now researchers want money — they want to sell exploits, they want to get in the newspapers, or whatever motivates them. It's serious business now, right? So when someone like me finds a bug, they have to ask themselves: what am I going to do with it? Well, I could tell Microsoft, if I want to pick on them, and they'll give me my name in a patch release. Cool. Or I can sell it to ZDI for $5,000. That's kind of nice — I did the right thing and I got some money. Or I can sell it to, like, the U.S. government, who in some sense is not a bad guy, but kind of is. Some people would justify that and say it's the right thing to do anyway; I probably wouldn't. But anyway, that's $100,000, right? So it's kind of hard to blame someone for choosing that last option, even though it doesn't help us. And it's even worse if you're some kid somewhere. I might make the responsible choice, even though it costs me money — or I might not. But you can't really blame some kid for making the wrong choice. And the security of the internet definitely shouldn't depend on whether these people make the right choice or not. It should be secure no matter what. I just want to wrap up with some good things that have happened in the last seven years, because so far this talk has been a real bummer, I think. But there is some good news. Heartbleed, I thought, was actually a really good thing. It was bad in the sense that it showed yet another internet disaster, but a lot of good came out of it. One was there was a lot of press about it — it was on NPR and everything. Really? NPR hosts are talking about OpenSSL now? What world do I live in, you know?
But so that was good. It got major coverage. And it wasn't just, oh, there's this thing and now the hackers have us. There were serious issues being talked about: everyone uses OpenSSL — is that a problem? OpenSSL isn't funded — is that a problem? Should we use open source? Does open source work? There were serious things being discussed, and I thought, this is great. That was a positive thing. It wasn't just the same old "oh my God, all the credit cards are gone, what are we going to do?" There was some of that, but there was some really good, positive stuff too. All right. So despite what I said about how bad products are, the security of products has actually improved a lot. There are fewer bugs. Adobe Reader, which used to be a nightmare, now runs in a sandbox, and it's not that bad. iOS is pretty secure; it has code signing. And there are tons of things that make writing exploits hard, which was smart. We gave up on trying to find all the bugs, because we don't know how to do that, and we started doing something we can do, which is engineering. We engineer things to make it hard to write exploits: we give up on the bugs, but we're going to make it hard on you to write exploits. And that's smart. And now basically all products have that. So then why do we still have breaches, and why does someone still win Pwn2Own every year? The products are more secure — we can't really measure that they are, but from my personal experience of trying to find bugs in them, I'll say they are. What's happened is we've reduced the number of people who can write exploits, but there are still people who can do it. So it's hard to tell just by looking at Pwn2Own. This is my made-up graph of the number of capable exploit writers in the world.
So it used to be that almost anyone could do it — like, my seven-year-old could do it — and now only a few people in the world can. And that's better, even if it's hard to tell, because there are still people who can do it, right? Other things — so here's my shout-out to our sponsor, Symantec. I saw this in a newspaper yesterday, and it's smart: Symantec is basically saying antivirus doesn't work. I totally agree. I hear some people coughing — yeah, Symantec. But anyway, I totally agree; this is smart thinking. We need to think beyond that. Everyone has known antivirus has been broken for 10 years, so why don't we move on and do something better? We're also paying researchers now, which is smart — that motivates people. There was this program called Cyber Fast Track, which I participated in. It funded my NFC research and my car research, which was sort of stunt-hacky, but still, it gave me money to do research I wouldn't have been able to do otherwise. Even though cars aren't really a huge threat, it's still a good idea to at least consider the threat. So it was good that money was coming from the government to researchers like me. Unfortunately, they've shut that program down now. It was designed to shut down — it wasn't that they saw car hacking and said, oh my God, what are we doing? But still, it's a bummer. Also, bug bounties have been around for a while, but they're getting better. Bug bounties definitely work. Here's a graph from Google of the number of bugs that were reported to them. At first it was a few, and as soon as they started paying, it went up, and it stayed up. They say they've fixed over 2,000 security bugs from bug bounty reports. That's 2,000 fewer bugs in things like Chrome that we all use and depend on.
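Bounties work because the researcher's choice is ultimately arithmetic. A sketch using the illustrative figures from earlier in the talk (these are the talk's round numbers, not real market prices):

```python
# Rough payoff comparison for a single vulnerability, using the
# illustrative dollar figures from the talk (not market quotes).
options = {
    "report to vendor (name in patch)":     0,
    "sell to ZDI (gets fixed)":             5_000,
    "sell to a government (stays secret)":  100_000,
}

# List the choices from worst-paying to best-paying.
for choice, payout in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{choice}: ${payout:,}")

# The choice that doesn't help users pays 20x the one that does.
ratio = (options["sell to a government (stays secret)"]
         / options["sell to ZDI (gets fixed)"])
print(ratio)  # 20.0
```

The argument later in the talk is just about shrinking that ratio: nobody can outbid a government, but if the defensive payout gets within shouting distance, most people will take the quote-unquote right option.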
The problem is that bug bounties used to be like $1,000, and that's a lot of money for some people, but it's not a lot for the amount of work it takes, I don't think. But payouts are going up, and that's the positive thing to take away about bug bounties. Microsoft now pays $100,000 — that's a lot of money; I would get out of bed for that — and they'd pay $11,000 for a bug in Internet Explorer. Google's paying up to $7,500; Chrome bugs, $5,000. That's some serious money. This guy, whose hacker name is Rainforce Puppy, won the $100,000 from Microsoft. People will definitely do that. The Pwn2Own prizes have gone up as well. Of course, the four years I won were when they were way down at the bottom, but nonetheless, it's good. It's an indication not only that it's harder to hack into computers, so you have to pay more — it's a supply-and-demand thing — but also it's going to motivate people to look for these last few bugs that may be left in these products. Another thing that's happened recently that I think is a really cool idea is crowdsourcing security. TrueCrypt is full-disk encryption software, and 1,300 people donated to a fund to pay for a professional audit of that code. They raised $53,000 and hired a consulting company called iSEC Partners for a five-to-six-week engagement, and it's going on right now — you can download the report they wrote. After Heartbleed, some people were trying to raise money to audit OpenSSL. They're not doing quite as well, but still, I think that's a good idea. We need to put our money where our mouths are. So, wrapping things up: basically, we're not doing that hot. For all we talk about, we still get hacked. A lot of the talks at this conference are about "we totally got hacked, and here's what happened" — which is cool, because we need to learn from these. But at the same time, let's not get hacked anymore. Let's do better.
I don't understand why a bunch of smart people have been in this field for so long and we're still not really any better than we were. The other thing I hope you'll take away is that we're only as safe as the products we use — or at least, we need to realize that we use insecure products and design our security around that fact. Even if you believe everything I said here, and even if Google and Microsoft believed it and said, you're right, Charlie, we're going to make our products secure — that's not going to happen this year or next year. We don't know how to do it. It's going to be years and years before we could feel confident. So plan for what to do in the meantime. And finally, there are some things that are improving, some things that are better than they used to be. We just need to keep thinking about those things and pushing people to keep doing them. So anyway, that's it. Thanks, everyone. I know you guys want to go to lunch, so feel free to just take off — I won't be offended. If you have any questions, you can step up to a microphone and ask. I'll also be doing an online Q&A later, so if you have questions, you can grab me there too. Any questions, anyone? Here's one. So we have a question from an online audience member: what do you think we need to do to get people talking about the right thing and doing the right thing with respect to reporting bugs? Okay, so the question is about how we make people report bugs the right way. I mentioned there are incentives, right? You can report a bug and get your name in a patch release, and that can be good — it can help you get hired; you can put it on your resume.
You can report it to ZDI and get a little bit of money, or you can sell it to someone — a bad guy, or the government, or whoever else would want to use it — and get a lot more money. For me, it's hard to argue against someone who wants to get 20 times as much money for doing the thing that doesn't help people instead of the thing that does, right? It's like, well, I kind of see why you did that. But if the gap were smaller — you're never going to outspend the government; the government is always going to be able to pay more than ZDI or Microsoft or anyone — but if you can at least get it close, so it's like, well, I could have gotten $50,000 for reporting it to Microsoft, or $70,000 for giving it to the government and not telling anyone — then most people, I think, would probably do, quote-unquote, the right thing. So if you can make the incentives more aligned toward doing the right thing, so it's not such a hard decision, then I think we're better off. The next question is, was I banned from using iTunes? No, I can use iTunes right now. The thing I can't do is make apps. The story there is that I found a vulnerability in the way that apps on an iPhone work. Right now apps can't download new code, they can't update themselves, they can't do anything new — everything they do has to have been approved by Apple. But I found a way they could: they could download new code and run it without Apple ever having seen it. And being from Missouri, the Show-Me State, I decided that the only way anyone was going to believe this was if I actually did it. Because I knew if I didn't, Apple would say, oh, that's all fine and good, but when you submitted that app, we would have totally caught it. So I figured I'd just submit it and see if they caught it. So I submitted it — an app that could download new code and run it.
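The pattern described here — an app whose reviewed code is benign but which fetches and executes new code at runtime, so the reviewer never sees what actually runs — can be sketched in a few lines. This is a language-neutral illustration in Python, not the iOS mechanism itself (the real bug was in iOS code signing), and `fetch_update` is a hypothetical stand-in for a network call to a server the developer controls:

```python
# Illustration of review-bypassing dynamic code loading. At review
# time the server can return harmless code (or nothing at all); after
# approval it can return anything, and the reviewer never sees it.

def fetch_update() -> str:
    # Stand-in for an HTTP download from a developer-controlled server.
    return (
        "def payload():\n"
        "    return 'behavior the app store never reviewed'\n"
    )

namespace: dict = {}
exec(fetch_update(), namespace)   # compile and run the fetched code
print(namespace["payload"]())
```

The point is that static review of the shipped binary can't catch this: the interesting code simply doesn't exist until after approval.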
It passed. And I told them about the bug after that — but I didn't tell them I had submitted the app. People did actually download the app, but it never downloaded any new code or anything; it didn't hurt anyone. But when Apple found out about it, they got pretty upset. The funny thing is they found out by reading an article in Forbes. I was out jogging, and when I got home my wife was frantic: "Apple keeps calling." I was like, what? I had no idea the article had even been published yet. Apple's calling me? That never happens — and it hasn't ever happened except that one day. Anyway, it turned out they were really mad. They said it was malware — well, it had the capabilities of malware, but it didn't ever do anything bad. They banned me from the developer program, which meant I couldn't write apps anymore, which was fine because I'm not a developer. But it also meant I couldn't get the beta releases of Apple products, which is kind of a bummer, because I used to find a lot of Apple bugs in those and report them to Apple. Anyway, the ban was for at least a year. So a year went by, and I wrote them an email: hey, it's me, Charlie Miller, my year is up — can I get back in the program? And they never wrote back. So that's that, sorry. I'm also banned from Google as well, even though I've never actually published an app for Google — banned for life, actually. Lifetime ban. Oh, they're happy to have me as a user — they love my data, they suck it down — but I can't make an app for the Play Store. I think that's about it. If you have any other questions, just hang around and I can talk to you. Thanks again.