Hi. I'm Bruce. Are there any questions? I'm serious, actually. Are there any questions? This could be a really short talk. I'm fine, how are you? Not what I expected. Yes. The question was about authentication in wireless networks with a lot of transient devices. But let's generalize that and talk about authentication in any system where things are changing. It could be a wireless network, it could be a wired network; what the medium is doesn't matter. We've got a lot of issues here. There's a bunch of people who believe that we're going to solve internet security if we can just get attribution right — if we can just know who people are, just authenticate people. I'm sure people in this room understand why that's nonsense, but it is a pretty pervasive belief. In general, the more you can control the device, the better authentication you can do. So your cell phone will have better authentication than your computer, just because the company that builds your cell phone owns the firmware; they own a lot more of the network. Or your iPhone, or any of those devices where there's a lot of control. Also think of your Slingbox, or anything attached to your television, or a gaming console, where the hardware and software are under the manufacturer's control. It's pretty hard to add anything, so it's going to be a lot harder to fake authentication. In general, authentication is good. It tends to work in areas where the users want to authenticate. I actually like it when my cell phone authenticates to the network, because that's how I get phone calls. So I'm likely to be on the same side as the system that makes it work. Surely there will be people hacking it, but by and large people are going to want to make that work. I want to authenticate to my email server. I might not want to authenticate to some file-sharing system; I might want to do that anonymously.
So it's going to be a lot harder to build a system like that and make it work. In general — and I did a paper on this with Adam Shostack about 10 years ago — when you start breaking up systems, when you have, let's say, a smart card in a reader and you give the card to the user, when you take the security system and hand pieces out to different parties, you tend to have a lot more problems, because you can't necessarily trust the different parties. It's a lot easier for me to build a security system based on, I don't know, a bank account number, where you produce the number, or a bank card that has a number. That number is just a pointer into my database. I don't care if you hack it — it's my database; you show me the number and I figure out how to make it work. If I give you a smart card with a chip where that chip is doing its own authentication, suddenly I'm handing you a device you could hack, and it's a lot harder to build those security systems. So I don't know if that answers the question. Those are some of the sorts of things that come to mind. Someone asked me about AES. Okay, so people heard the news. I talked about this yesterday at Black Hat, and I blogged about it yesterday: there was a new attack against AES. Actually a very major attack against AES. No reason to panic — you can stay. Because if you panic, you'll never get out of here for two hours. But this is actually a big piece of news. So let me give you two preludes, and then I'll talk about what happened yesterday. AES is the new encryption standard, since about 2000 I think, standardized by NIST. There was a whole competition. I participated; I had Twofish. And an algorithm called Rijndael, which came out of Belgium, became AES. AES is defined with three different key sizes: 128 bits, 192 bits, and 256 bits. The algorithm is the same for all key sizes except the number of rounds is different — 10, 12, and 14 — and the key schedule is different because the keys are larger.
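The key-size-to-rounds relationship just described can be written down directly (these parameters are from the published AES standard, FIPS 197):

```python
# AES variants: the block size is always 128 bits; only the key size
# and the number of rounds differ (per FIPS 197).
AES_ROUNDS = {128: 10, 192: 12, 256: 14}

def rounds_for_key_bits(key_bits: int) -> int:
    """Return the number of AES rounds for a given key length in bits."""
    try:
        return AES_ROUNDS[key_bits]
    except KeyError:
        raise ValueError(f"AES is not defined for {key_bits}-bit keys")

print(rounds_for_key_bits(256))  # 14
```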
So that's what we have. Pretty much everything out there nowadays has migrated from DES to AES. So the question you want to ask when you see an AES device is: AES what? AES-128, -192, or -256 — that just tells you the key length. So that's the AES background. Let me talk a little bit about something called a related-key attack. Cryptographers need to write papers, and one of the things we do is invent threat models that we use to evaluate security. One of these threat models — it was really developed in the late 90s, and I'm sorry to say I think I was one of the people who did it — is a way to attack algorithms based on related keys. Related keys are different keys that are related by some mathematical property. A key and its inverse, for example, would be a pair of related keys. And there's cryptanalysis against algorithms based on related keys. So you might be able to break an algorithm — and I'm just making this up — with 2 to the 80th work and 2 to the 20th related keys. That means you need 2 to the 20th different keys where you don't know what they are, but you know their mathematical relationships. And you know the plaintext and the ciphertext — that's the standard model for breaking cryptosystems, where you know everything except the key — but in a related-key attack, you also know the key relationships. These attacks don't have a lot of application in the real world. It's hard to imagine a system where you can say to an oracle — this is the way we think of it — "Encrypt this. Thank you. Okay, now encrypt this with the key inverse. Thank you." Right? But it's certainly possible. There are some applications, when you use a cipher as a hash function, where related-key properties matter. But largely it's a theoretical attack. Okay?
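The oracle model just described can be sketched in a few lines. This is a toy illustration of the attack *model* only — the "cipher" is a single XOR, not AES, and there is no actual cryptanalysis here; it just shows the interface where an adversary never sees the key but can query encryptions under keys with a known relationship to it:

```python
import os

BLOCK = 16  # bytes

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Stand-in cipher: a single XOR with the key. Real ciphers are far
    stronger; this exists only to demonstrate the oracle interface."""
    return bytes(p ^ k for p, k in zip(plaintext, key))

class RelatedKeyOracle:
    """Holds a secret key; answers encryption queries under key XOR delta."""
    def __init__(self):
        self._key = os.urandom(BLOCK)  # secret, never revealed

    def encrypt(self, plaintext: bytes, delta: bytes = bytes(BLOCK)) -> bytes:
        related = bytes(k ^ d for k, d in zip(self._key, delta))
        return toy_encrypt(related, plaintext)

oracle = RelatedKeyOracle()
pt = b"attack at dawn!!"
c1 = oracle.encrypt(pt)                         # under K
c2 = oracle.encrypt(pt, delta=b"\x01" * BLOCK)  # under K xor 0x01...
# For this toy cipher, the known key relationship leaks straight through:
diff = bytes(a ^ b for a, b in zip(c1, c2))
print(diff == b"\x01" * BLOCK)  # True
```

For a real cipher the ciphertext difference would not be this transparent; related-key cryptanalysis is about finding what, if anything, does leak.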
So what we had yesterday was a group of cryptographers — I'm blanking on their names; they're on my blog — announcing a new attack against AES. It breaks only AES-256, which is kind of interesting. And it breaks only 11 rounds of the 14. So this is an attack against 11-round AES-256. Against 10 rounds it's something like 2 to the 40th work and around 200 related keys. Against 11 rounds it's 2 to the 70th, so it's just barely not feasible, with a similar number of related keys. This is a big deal. It's a big deal cryptographically, because it's a huge improvement over anything we've seen before. It's actually the third in a series of papers that have come out in the last few months improving these related-key attacks against AES-256. Alright, so what does this mean for us? Well, first of all, luckily for us, it means not to panic. Not to panic for several reasons. One, we're not breaking the full anything; we're still at reduced rounds. Two, we're not breaking AES-128, which I believed was the most common variant — though I've been told otherwise, that more people are actually using the larger-key variants. And three, it's a related-key attack, and related-key attacks tend not to be applicable in the real world. But still, this is very startling. What it means is that the security margin we thought we had with AES, we actually don't. There's an old adage in cryptography: attacks always get better; they never get worse. It's trite, but it's also kind of profound. Every attack builds on something else. The attack I wrote about yesterday builds on a paper I co-wrote in 2000, where I broke seven rounds of AES-256 using related keys. And then it's been extended to 8, 9, 10, and now 11 rounds. Right? And attacks always get better, they never get worse. There's no reason to believe this is the last word on the topic. If you can go from 7 to 11, you can probably go from 11 to 14. How many years will it take? We don't know.
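To get a feel for the gap between "2 to the 40th" and "2 to the 70th," here's the back-of-the-envelope arithmetic; the rate is an assumed round number, not a claim about any particular hardware:

```python
# Rough feasibility arithmetic for the complexities mentioned above.
RATE = 10**9  # assume a billion cipher evaluations per second

def years(work_log2: int, rate: float = RATE) -> float:
    """Years to perform 2**work_log2 operations at `rate` ops/sec."""
    return 2**work_log2 / rate / (3600 * 24 * 365)

print(f"2^40: {years(40):.6f} years")  # ~18 minutes of work: clearly practical
print(f"2^70: {years(70):.0f} years")  # tens of thousands of years: not yet feasible
```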
Has anyone done it yet? Nobody's admitted to it. So this doesn't mean people are reading AES traffic. These attacks are theoretical, even the ones with practical complexity. But still, it's time for NIST to start thinking about how to modify the AES standard to get a little more security cushion. It's funny reading back at stuff I wrote in 2000 about Rijndael; I suggested doubling the key lengths. And that's actually not a bad idea. So we'll see what happens. But this is actually really interesting news, so I wanted to tell you that. There's a hand right there. No, it's you. The question is about security and general users. There's a paper I just saw that came out of Carnegie Mellon which proves something we already know: users ignore security warnings. What a surprise. But actually, this is interesting. People have a really good risk barometer; they have a good sense of what's risky. Take the one we all get: the SSL certificate is out of date, or invalid. We all ignore that, all the time. The reason we do is because we know it doesn't matter. The problem with security warnings is that they don't actually warn against anything. They're generally CYA devices: "It's not my fault. I gave them a warning. I'm the innocent programmer; the user's the one who clicked OK. It's not my fault." And we have a problem with that. In a sense, if the programmer can't figure out what to do, why does he think the user will be able to figure out what to do? The user's not smarter than the programmer. So one of the problems we have with putting users on the front line of security is that they're not equipped to be there. They don't have the expertise to understand the security warning you're putting up there. You're giving the user information he cannot process. Now, sometimes we're stuck here. We don't know enough to design the security properly, to make the decisions for the user.
So we're kind of stuck sloughing the decision off on him. But it's not a good solution, because the user can't make a good security decision — largely because the warnings happen too frequently. It's a cry-wolf syndrome. If I yell "Duck!" at you, you duck, right? Because that's a warning you take seriously. But if I started doing it all the time, you'd start ignoring me, because nothing would happen. And we have that problem in so many areas of computer security: the bad stuff looks like the good stuff. The program can't tell, so it chooses not to make a decision and hands the decision to the user, who looks at it and says, "You know, I clicked OK 80 billion times previously and nothing happened, so I'm going to click OK. Or, I don't understand this. I mean, there's a website I want to go to." Someone — I forget who — did a parody of a dialog box: "There's a great website you want to go to, but there might be problems." And the buttons are "Go there anyway" and "Annoying bullshit." I mean, what are you going to click on? You're going to click "Go there anyway." And that's the way users see those warnings. So what can we do? Our job, I think, is to figure out how to give users real warnings. An expired or invalid SSL certificate is not a real warning. "You're going to a known phishing site" — make that not just a warning: "I'm not letting you go there. Go to a Microsoft browser if you want to go to that site; I'm not going to let you." Something like that. Or, "I really, really, really think you shouldn't do this, because it's really bad." An actual warning — and have it be correct. Those sorts of warnings people will listen to, because they'll be real warnings. So as security people, I think this is our problem. We can't assume the users can answer questions that we can't. We have to figure out how to answer the questions. I saw your hand first. What worries me the most about cloud computing? Cloud computing, I think, is in some ways worrisome and in some ways not.
I mean, cloud computing is just a new name for an old thing. It reminds me of timesharing in the 60s. In the 70s and 80s I was using punched tape and the computer was across the city. All these remote systems, or client-server from the 80s and 90s — now it's called cloud computing. Fundamentally, what we're doing is the computing somewhere over there. And what should worry anybody in any of these sorts of systems is trust. Fundamentally, you have to trust whoever's doing the computing for you, in whatever paradigm you have. Cloud computing is Gmail. Cloud computing is Facebook. Cloud computing is any of those systems where your stuff is on someone else's machine. Computing is all about trust. Whenever you boot your computer, you have to trust the operating system, the application software, the network you're on. There's a whole lot of trust built into computing. We don't really think of it that way, but there is. And this is just another layer. So in some ways it's no worse. You already trust Microsoft with your word-processing files; now you go on Google Docs and you trust Google with your word-processing files. It's kind of the same thing. Both of them can have security flaws. Both of them can make mistakes. Both of them can screw up and compromise your data. The effects could be worse the more control you give up. This is the future of computing. Don't think for a minute this isn't what everyone's going to be doing in a few years. I feel like I'm the only person who actually deals with my own mail on my own computer. Everybody I know — not us, but out in the real world — is moving to Gmail and all of those internet-based email systems. I think it's nutty to give up that control, but it's normal. People are moving to Google Calendar, Google Docs, all those things.
Salesforce.com is taking over those systems — which I thought would never work when it showed up. External filtering of email, looking for malware — that's normal now. So this sort of thing is normal, and it's going to be the future. The problem here is trust: you have to trust whoever the entity is, and that's hard. On the other hand, the companies that do this have to be trustworthy, so they're going to go out of their way to secure their systems more than most users will, because their reputation is worth more than any individual user. So I think there's promise there. That's what I think about cloud computing. The worry is that it's going to happen so fast that it won't be secured properly. But it is happening, and you can't stop it. We'll be the holdouts. Let's go to a different part of the room. Way back there. Yes, you. No, don't turn around. Stand up. It's hard to tell what's going on right now in the administration on cybersecurity. There's a huge power struggle over who gets to control what. Often when I see stories in the media about these huge cyber risks, I wonder which agency is leaking it, and for what reason. We don't know how it's shaking out. Obama is going to appoint a cybersecurity head, which I think is a great idea — I'd like it better if the position has budgetary authority. But we don't know who it is yet, and we don't know what's going to be done. We know the military is doing their own cybersecurity. That makes a lot of sense — both offensive and defensive, for military purposes. That's perfectly reasonable. Who controls the civilian government networks? We don't know. There are reasons why you want to centralize some things and decentralize others. So it really remains to be seen; I don't know what's true yet. I think we'll know a lot more when someone's appointed. I actually would rather President Obama appoint someone more political to the head role, and have a technical person under him.
I think one of the problems we've had with past cybersecurity czars is that they've been too technical and not political enough. Because in the end this is about politics, not about tech. If you have a politician who listens to tech people, you get a lot more done than if you have a tech person who tries to do politics — because politics just makes no sense. So we'll see. We don't know. Alright, someone asked me about SHA-3. That's another interesting topic. Okay, yes, I guess. Excellent. So after NIST had such success with AES, a couple of years ago they decided to have another competition, to replace the SHA family of hash functions. Right now — I don't know if people know — we had SHA-0, then SHA-1 with a minor modification, and now there's the SHA-2 family. And NIST wants to scrap those and get something new for the next decade. A really good idea. These competitions are actually a huge impetus for research. Lots and lots of teams all over the world submit algorithms, and they break each other's. We have a little problem, though. NIST — which was the National Bureau of Standards back then — in 1976 did a call for algorithms for a block cipher. They got one submission: DES. In 1986, nothing. In 1996, they did a call for submissions for AES; they got 16. In 2007, they did a call for submissions for a new hash function; they got 64. So you notice the little geometric progression we have here? Which means in, like, 2017, if they call for something, they'll get 256 submissions. These are a lot of submissions; 64 is a huge number. So NIST got 64 submissions for a new hash function. They're calling it SHA-3 — I think that's dumb; I think it should be called the Advanced Hash Standard. I mean, duh. But they're calling it SHA-3. Of the 64 submitted, I think 51 met the submission criteria, and they came from all over the world.
We had an initial conference on the SHA-3 competition — I think it was in February, in Belgium, attached to the Fast Software Encryption conference. And just last week, or the week before, NIST announced the 14 algorithms — I think it's 14 — that have been selected to go on to the next round. You might think of this as the great crypto demolition derby: we put all the algorithms in the ring, they beat each other on the head, and the one left standing wins. It's not really that way, but it sort of is. So there are 14 algorithms left. My submission — along with, I think, seven other people; I forget if we're eight or nine total — is Skein, and we're one of the 14 going on. It's a good set of algorithms going on, which is interesting given these new results against Rijndael — against AES. A surprising number of the SHA-3 submissions use AES. One of the reasons is that Intel's next-generation chips are going to have an AES round instruction, so you can do this a lot faster. So a number of hash algorithms took advantage of that and produced designs that use AES or parts of AES. It's unclear how these new results will affect them. So right now there are 14 algorithms left. I think NIST is going to pick one by 2011 — I think that's the plan — and I think there'll be another narrowing, from 14 to, let's say, five, and then from five to one. This is great for cryptographers, because what cryptographers need are targets. Remember, cryptographers like to write papers? The best way to write a paper is by breaking something. Nobody wants to see your new design unless you have a trail of heads behind you. So the way to get cred is to break stuff. And there are now lots of quality targets out there. There were easy-to-break targets; there are lots of hard-to-break targets. So that's what's going on in the SHA world. Again, there's no cause for panic. SHA-2 is fine.
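And SHA-2 is widely deployed already — it ships in Python's standard library, for instance. A quick sketch of the family as it exists today:

```python
import hashlib

# The SHA-2 family as shipped in Python's standard hashlib. SHA-256
# works on 32-bit words internally; SHA-512 works on 64-bit words.
msg = b"No cause for panic."
print(hashlib.sha256(msg).hexdigest())
print(hashlib.sha512(msg).hexdigest())

# Digest sizes in bytes: 32 for SHA-256, 64 for SHA-512.
print(hashlib.sha256(msg).digest_size, hashlib.sha512(msg).digest_size)
```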
Both of these algorithms — SHA-2 and AES, actually — are optimized for 32-bit words, so they're not as efficient on modern CPUs. When NIST asked for SHA-3 candidates, they asked for them to be optimized for 64-bit words, which I think is a good idea. So that's what's going on in the SHA world. Yes. What's going on with TSA these days? Your guess is as good as mine. Kip Hawley resigned — it was a political appointment, so he resigned when the presidential transition happened. And Obama hasn't appointed anybody to replace him. So as near as I can tell, TSA is kind of in stasis. I'm sure there's an acting head; I don't even know who it is. It's someone who stays out of the news. I haven't seen anything different out of TSA in months. Except I think now you have to take your shoes off and not put them in the plastic bins. So that's what passes for policy change at TSA these days. But I haven't seen any news. Even their blog has gotten boring. When Kip Hawley was around, there were some interesting things in there. So I think they're just treading water, waiting for some political direction. As much as I hate to admit it, I don't think Obama can make any serious changes. I think politically he's not going to risk it, because there's no upside — it's all downside. And this is a problem we have in society when security ratchets up: it's very hard politically to say "Don't do that anymore." Because if something happens, you're dead — you'll be blamed. So I think the TSA security measures are going to be with us for the foreseeable future. Just like we got the photo ID check requirements after — I forget which plane. It was a plane that went down, and it was thought it was a missile, or some kind of bombing. But it turned out to be an accident, so the security measure wasn't needed. But it never went away. So I don't think things are changing anytime soon. I'm just not seeing a lot of movement out of there.
I know there's stuff brewing in other areas of national security — not TSA. But making changes is hard. Doing the right thing is very hard in this case, because doing the wrong thing is more politically expedient. It doesn't matter what party you're from; that's true. So I'm not optimistic about near-term changes. I'm going to take the hand way back there. Yes. No, you. The PKI vulnerabilities announced at Black Hat — I can't comment on them. Is that the universal cert, or is that something else? None of this surprises me, but I don't know what was announced, so I really can't comment on it. In general, you get these kinds of things. Any security system is going to have problems; nothing is magically invulnerable to that sort of stuff. But I'm sorry — I've been busy enough for the past few days. Right there. This goes back to the first question: the notion that if we can just figure out who everybody is, we can magically make the internet secure. It's just not going to work. The more we rely on identity-based security, the more fragile our systems become, because it works very well until it fails. This is why I'm against a national ID card. I think we're served far better, security-wise, by having multiple IDs in our wallet, with different rules for issuance, authentication, revocation, and expiry, issued by different organizations: your passport by the state, cards by your bank, by your library. We have many different cards in our wallets. And the notion that if you just have one card you're more secure, I think, is wrong. Because forging a credential is a balance between how expensive it is to forge and how valuable it is once you forge it. So, sort of paradoxically, by making a single credential harder to forge, you make it more likely to be forged, because that delta changes. Maybe instead of costing $100 to get a fake credential, it costs $1,000.
But instead of being worth $1,000, it's now worth $10,000 or more when it's forged — so it's more likely to be forged. So I really like systems that minimize identity-based security. I like systems that are secure regardless of the identity of who you're looking at. The airline security measures that work have nothing to do with identity. The photo ID check, I think, adds no security; that's an example. So reducing anonymity doesn't help. God — the Diane Rehm Show. She's a radio host. I was on the show maybe six months after September 11th, or maybe it was more. Actually, I was debating a DHS person, so it must have been over a year later. We were talking about ID cards, identity, identity checks. And he said: when you're on an airplane, you want to know the identity of the person sitting next to you. And I said, well, no, you don't. I want to know if he's going to blow up the airplane. If he's not going to blow up the airplane, I don't care who he is. And honestly, if he is going to blow up the airplane, I don't care who he is either. Right? Who he is is completely irrelevant. So I worry about a fetish for reducing anonymity, because I think it makes for more fragile security. Once you build a system that relies on identity as its linchpin for security, you invite identity theft and all the various hacks by which you masquerade as somebody else. You make that the way people are going to break into things, and that's bad. So you're stuck building a backup system that provides security anyway. I'd rather, when possible, ditch the whole identity piece and work on this: I don't care who sent the packet, or why; I want to know what it's going to do. Right? Because if I don't like the packet, I'm going to dump it. And knowing who it is doesn't really help, because even if it's a known person, it could still be a nasty packet — his system could have been hacked. So I do worry.
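The "judge the packet, not the sender" idea above can be sketched as a toy filter. The port policy and payload patterns here are made up for illustration; the point is only that the decision function never consults the sender's identity:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # claimed sender identity -- deliberately ignored below
    dst_port: int
    payload: bytes

BLOCKED_PORTS = {23}          # e.g. telnet -- an assumption for the demo
BAD_PATTERNS = [b"\x90" * 8]  # e.g. a NOP sled -- an assumption for the demo

def allow(pkt: Packet) -> bool:
    """Decide on behavior alone: port policy and payload contents."""
    if pkt.dst_port in BLOCKED_PORTS:
        return False
    return not any(pat in pkt.payload for pat in BAD_PATTERNS)

print(allow(Packet("trusted.example", 23, b"hi")))      # False: bad port, "trusted" sender
print(allow(Packet("unknown.example", 443, b"hello")))  # True: clean payload, unknown sender
```

A known, trusted sender still gets dropped when the packet itself is bad — which is exactly the point about hacked systems.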
There is this fetish in the world today that identity is how we're going to solve our security problems. If we just knew who everybody was — if we knew who all of you were — we'd know who the terrorists are and we'd just arrest them all. And that'd be easy, and we'd suddenly be safe. This is the way people think. It assumes we have a master list of terrorists, which of course we don't, and that we can identify people properly, which of course we can't. So it just doesn't work. Now, lots of places identity makes sense. I have an employee badge. I have an account on my corporate network. Those make a lot of sense. I love identity-based security at my bank — that's good, because I get my money and no one else does. I like that. But in a lot of places it doesn't make sense, and you're better off dumping it. This is something we're going to have to fight, though, and we're going to be losing the battle for anonymity on the net. There's a lot of pressure to reduce anonymity. And of course there's a lot of social value in anonymity. But I think this is going to be a hard battle, and we're going to lose in the near term. And that's unfortunate. Oh, let's go to this side. How about way back there? I am doing fine. Alright, what's next? Alright, you get two questions. Where do I buy my what? My hat. Actually, it's a neat place — the Hat Center in New York. They've been in business about a hundred years, so it is a cool store. I'll say that. No, I have not talked to Obama. It would be kind of neat. But I did an essay about a month ago which started with the rubric: if I had an elevator pitch to Obama, this is what I'd say about how to fix cybersecurity. For the life of me, I can't remember what I wrote. This is actually embarrassing. So I invite you all to read it and pretend I said it while I was standing here. You've heard my voice, so you can probably do the cadence. I wrote it recently.
It wasn't even a long time ago. I am a shadow of the man I used to be. Yes. The question is about privacy — in what context? On the ownership of your data. This is actually a really important trend: privacy and the ownership of your data. We're now living in a world where we tend not to own our data. I get calls from the press all the time about this. They ask: what can people do to protect their data on Facebook, on Gmail, on this service, on that service? And the answer is: nothing. You're screwed. Fundamentally, most of our data isn't owned by us anymore. This gets back to cloud computing, and it's going to get worse: my critical data is owned by somebody else. I mean, identity theft — I used to say, shred your trash. Nobody steals identities out of trash anymore; it's too annoying. They steal them by the hundreds of thousands, out of some database somewhere. And you, the user, the owner of that identity, have no control over that database. So there's not a lot we can do. There are implications for our legal structure. A lot of our laws against illegal search and seizure involve our person, our cars, our homes — the stuff around us. But different laws apply to data that's held by third parties. So what Facebook might have to go through to be forced to release your data is different from what you might have to go through. The rules are changing. You asked about privacy — what I also want to mention is that the notion of privacy is changing a lot. Essentially, the internet is the greatest generation gap since rock and roll, and you need to think of it that way. There's a huge difference between how — I'll use these terms — the elders use and perceive the internet, and how the young people, the internet generation, do. You can see the gap by asking someone, "Do you use Twitter?" You either get "Why in the world would you ever do that?" or "Of course — who doesn't?" Those are the two basic answers.
And that's the generation gap. So there's some research we're starting to see — danah boyd has written some great papers on how young people use the internet. In terms of security, there's a lot more focus on control: young people seem not to be less concerned about security, but for them security is generally about control. They're certainly more open. Everybody under 18 has been dumped on Facebook; they know what it's like. Those things would never happen in a public forum for people who are older. Younger people tend to put more stuff out there; their notions of privacy are a little bit different. And things are changing. The notion that young people are more security-savvy seems to be complete nonsense. In general, young people are extremely socially savvy about the internet, but not technically savvy. I've been reading interviews with people who use the net — random teenagers — and they don't know the difference between data that's on their computer and data on the internet. To them it's the same thing: the data is out there. A lot of them are platform-independent: they can use whatever device is handed to them, and they like that. It means everything they do is in the cloud somewhere. A really surprising thing I learned is that it's not uncommon in high school to give your password to your boyfriend or girlfriend — it's a show of trust. Which means that when you break up with someone, changing your password must be done in the correct order. That's an interesting social mechanism that never occurred to me, but it's not uncommon. And I hear a lot of people, especially older people, talking about how young people don't understand privacy, are doing it wrong, and are getting into trouble. When you think about generation gaps, think about rock and roll, and what the elders said about rock and roll.
The death of marriage, and women running amok, and drugs and sex. They pretty much nailed it, right? But we're all okay. So as a general rule, when you see a generation gap, all the horrors the elders talk about will come to pass — but the younger generation is right that it won't be the death of civilization. It'll be okay. And we're seeing that. What you see in a generation gap is people fired for blogging. Or people applying for college, where the admissions officers go look at lastnightsparty.com or Facebook and deny people a job or college admission because of stuff on there. That's a generation gap. When we live in a world where world leaders send lolcats to each other, that's when we'll know we've passed it. When the U.S. president actually tweets, and the tweets make sense. You've got politicians on Twitter now, but you can see they're doing it wrong. That's going to change. In any generation gap, the younger generation wins, because the older generation dies. So I think I have time for one or two last questions — short questions would be preferable. You, in the way back. Why don't you walk up, and you'll ask the last question. I'll take another question from someone who's closer — you two fight it out. It's Friday — any squid news? I get asked all the time about squid blogging. If people follow my blog, every Friday I post a squid post. And I'll get emails saying, "What is it about squid blogging?" And I write back: I do them on Friday. I get people who say, "Hey, do less squid blogging and more security" — like there's some conservation law, that every squid post I make means one less security post. I assure you that's not the case. In fact, posting about squid is the easiest part of my blogging day; the squid news just keeps coming. I had no idea there was this much squid news out there. But yeah, I do get odd reactions to the squid blogging. So who has the last question? I did ask him to come up — oh, you did. Okay, yes.
It's an excellent question. Right — I'm going to repeat the question. From an engineering perspective, terrorism is easy, yet it doesn't happen a lot. What the hell's going on? Perfectly reasonable question. If you look through the past eight years, administration officials have said that again and again and again. I wrote about bioterrorism a couple of weeks ago, and there was a quote from a U.S. cabinet officer — I forget which one — who said it's so easy to poison the food supply, he's amazed nobody has done it. A couple of things. One, it's actually not that easy. It's easy from an engineering perspective, but unlike an engineering project, if you make a mistake you get arrested. This is, in general, why there aren't many criminal masterminds: it's hard to practice. If you're a good guy, you can practice, and when you make a mistake, you get better. If you're a bad guy and you practice and make a mistake, you go to jail for ten years. It's hard. It is harder. But yes, anybody can come up with a list of steps for a terrorist attack, and each step looks pretty easy. But each step has some chance of failure, and when you compound those chances, success is pretty rare. In some ways, the 9/11 terrorists got really, really lucky — they almost failed a bunch of times. And it's unfortunate that we're basing our national policy, our international policy, on "really, really lucky" — but we are. So, one: terrorist attacks are actually not as easy as they seem. Two: the hard part seems to be not the technical stuff but the people stuff. Getting the people, getting them in position, getting them not to talk, getting them not to screw up — that's a lot harder. Getting people willing to do it. You'd think terrorists are a dime a dozen, but they're actually not. They're pretty rare, despite their marketing campaign. Interestingly, different terrorist groups do have images. Al Qaeda seems like a really lousy terrorist group with a great image.
The IRA was a pretty good terrorist group with a terrible image. Different groups have different ratios there. It's a good question, and I wish those doing terrorism policy would think about it more: if it's as easy as we think, why hasn't it happened? So with that, I'm going to end. I'm going to Room 106. I'll get in; I can't promise all of you will, but I can promise some of you will — I just can't promise any of you in particular. So I'll go there, and I'm happy to talk more, answer questions, sign books, chat — anything. So welcome to DEF CON. Have a great time, and thanks for having me.