Good morning, everybody. Good morning. Did you see Joe's talk earlier? You can see all the weird stuff that happens in the background just to get this thing produced. It's pretty crazy.

Okay, I want to introduce our next speaker, Bruce Schneier. Many of you, I bet, subscribe to Crypto-Gram or read his website, or you see him quoted maybe four or five times a day in various publications. I'm still trying to figure out where your doppelganger is, because I'm sure you've got somebody back in a room just firing off quotes to the press, because by my count you live like three lives, right? I'm barely functional in one. You're flying around, speaking everywhere, writing everywhere. He reviews restaurants and writes food review books. He has a whole secret side to him. He's a foodie, in case you didn't know that. So he's got this whole foodie thing. I don't have enough time just to watch reruns of Lost.

Okay, so we're going to kick it off. Bruce is going to try a new format. It's just going to be questions and answers. It'll last as long as you can keep coming up with interesting questions. Nothing stupid, please. He's a bright guy and we don't want to slow down his brain. We've got to keep it running in top shape. We're all counting on him. I want to introduce Bruce, and then we've got two microphones up here. Don't use wireless microphones at DEF CON, in case you're wondering why. So if you can't yell, just walk up and use one of these microphones so everybody else can hear the question. Otherwise he'll just repeat the question. Oh, and don't forget to turn off your phone, because it's rude. So let's give a big hand to Bruce Schneier.

So, Jeff, the trick is: don't watch television. And I've got secret lives I haven't even used yet. So hi. What? Hi. Who are you? Don't tell me. There are a lot of you and only one of me. That's actually probably for the best; otherwise it would be a lousy format. So hi. Are there any questions? How did I come to Vegas? Yes.
Actually, he's asking about the TSA. I don't know if people know. Actually, who here reads my blog? So maybe about half of you. Maybe a third. Who here gets Crypto-Gram? The rest of you should. Crypto-Gram is really where I publish all of my essays and writings and such. It comes once a month in your email. If you want to sort of know what I'm doing and writing, that's the place to get it.

And over the past week, I've been publishing an interview with Kip Hawley. Kip Hawley is the head of the TSA. His title is TSA Administrator, which I kind of like. I kind of like that government understatement of title, like the Secretary of Defense. You'd think he just takes the notes at the meeting. So he's the TSA Administrator. And in the months of May and June, we did an interview over email, on a lot of topics. And I got to ask him some really good questions. He gave some answers that were at the same time surprisingly forthcoming and impressively evasive. And I think the interview is worth reading. The last installment published today, and there's a link on my blog to the entire thing. And that'll be in Crypto-Gram next month.

So yes, I fly. I still fly. Actually, I flew without ID to get here. There's a paper in Minneapolis doing an article on me and security and airline security. And they wanted, sort of as a human interest thing, me flying without ID. And if people don't know this, the TSA spends a lot of time convincing you that you need an ID to fly, which is of course nonsense, because lots of people lose IDs. And for the TSA to say, well, you got your wallet stolen here in Vegas and you have to stay here for another 25 days before the state of wherever mails you a replacement ID... it's just nonsense. It'll never happen. So you can, and I did this, and it's written up on the web in lots of places, walk up to the airline counter and say, hi, I'm Bruce Schneier. I lost my ID.
You can actually say you're somebody else, if you know they're flying, by the way, because there's no ID, right? They will print you the boarding pass. They'll write "no ID" on it. And you walk through security and you say, hey, you know, I lost my ID. And then they put you through special screening, which in my case meant that I waited 10 minutes while they found somebody to do it. Then they put me through security. They wanded me, and they ran that explosive trace detector swab over my bags. So that was it.

So if you're on the no-fly list, right, if you're so dangerous you can't fly, just lose your ID and fly under somebody else's name. If you don't have a ticket and you know somebody else's itinerary, just get to the airport early and fly as your friend on an earlier flight, so you're gone by the time they get there. You know, I'm just not impressed here. So anyway, there's an interview with Kip Hawley where I ask him basically these questions, like, you know, are you defending us against terrorists that are too stupid to Google "fly without ID"? And he answers them, more or less. So thank you for that question.

Yes, so you're up front, so you get to... I get to hear you. Those in the back, you can either, you know, sort of walk forward or shout, or write it on a piece of paper and pass it forward, or, you know, make up some other way. If you've got a megaphone, yes. This is a question more about the TSA: do they realize how pissed we are at them?

You know, it's interesting. And I try to write about this. You know, security is in a sense a negotiation. What are you going to get versus what are you going to give up? And the important thing about negotiating for security is to understand where to negotiate. You can go to, you know, someone and say, you know, I want more security and I'll pay this. When you walk through airport security, there's no negotiating there. The TSA employees are really just cogs in a machine.
I mean, you can think of them as parts of the security system. They are not decision makers. They are only autonomous in the narrowly defined ways they're allowed to be. You can't say, hey, this is stupid, I shouldn't have to do this. And my feeling is a lot of the TSA knows it's stupid. And I think we see that in the frustration. When you see them being arbitrary or nasty or spiteful, I think a lot of it is their own frustration at being stuck in a system that isn't working, that isn't satisfying anybody's needs. And they can't do anything about it. They can't change it. They can't complain. I mean, they're kind of stuck being a part of the machine.

And also, it's a stressful job. It's very similar... I mean, here we are in Vegas. One of the toughest jobs here in Vegas is being a dealer. Being a dealer is incredibly boring. You have no idea how mind-bogglingly, numbingly boring it is to watch people gamble. And you have to pay attention, right, because they might cheat. And there are lots of casino cheats. I mean, I've given talks on hacking Las Vegas. And that's why you see them rotate out so fast. Even at a craps table where there are five people, they'll switch positions very frequently. And that's to keep them moving, to give them something different to do, to keep them alert. Because otherwise, they basically fall asleep in place doing their job. And they do it as a robot, and they don't notice anything.

And being a TSA screener is kind of like that. You know, look at that, and you'll see if you go through the checkpoints. Look how quickly they rotate through the jobs: you know, walking the person through the metal detector and checking the lights up on top, or screening the bags, or doing some of the special processing, the additional screening. Or, you know, the guy who sits in front of the checkpoints with a box of baggies and says, do you need a baggie for your liquids?
I mean, this is a highly paid government employee giving away ziplock bags. Or the one who just announces, again and again, take off your shoes. And you just look at these jobs. These jobs are mind-numbingly boring. And yet, if they're not alert, what happens is what we see happen. And you see surveys of, you know, 70% of knives go through and 50% of guns go through. I saw a survey with 90% going through. And you wonder why. This is why. So, you know, as much as I don't like it, as much as I think we deserve better, I can really understand some of the shitty treatment we get. Because, man, is that a hard job.

Let's have a non-TSA... I don't have a crypto question. Will VPNs ever go away? No, because we all want to access things remotely. I mean, security's not driven by security. It's driven by what we want to do. And one of the things we learn is that the stuff we build, we're stuck with forever. So, no, these protocols aren't going anywhere. I mean, you'll see better ones. Because the nice thing about a VPN protocol is you don't have to worry too much about compatibility. If your company decides to use proprietary VPN protocol X, you can just do it. Right? Everyone gets the client and you force it to work. So, you know, that's a little better than 802.11, right? Where if I have proprietary wireless on my computer, suddenly I can't log in anywhere. And being able to log in at airports and Starbucks and various companies is part of the point. But, no, there's such a need, especially now that more and more companies are going virtual.

Someone asked me about hash functions. Anyone know what's going on with hash functions? All right. Well, that's not the question, but that's another question. A question is about SHA-1. SHA-1 is the NSA... this is actually kind of fun, because it's NSA stuff. And breaking NSA equipment is kind of like breaking alien technology, so you'll kind of feel cool about it. Right? Because, I mean, it is like alien technology.
It arrives on your planet sort of out of nowhere, and the manual's written in Klingon. You can't read it. And, you know, you sort of try to reverse engineer what these people could be thinking, because you have no idea, because they're basically an alien mentality. And, you know, you try to figure out what the NSA knows.

So, SHA was an NSA-developed algorithm from over a decade ago. SHA-1 was a replacement. And again, alien technology. When they gave us SHA-1, they said they did it to fix a vulnerability they weren't going to tell us about. And we all felt pretty good when we found the vulnerability, right? Because once we knew to look, we started paying attention: okay, what changed between SHA and SHA-1?

SHA-1 is a good hash function. It's been around for a long time. Its main limitation is its length. It's 160 bits, which means its security is 80 bits. Don't ask me, just look that up. You only get half the bits in security, basically. And it survived pretty well. Over the past few years, there's a phenomenal researcher out of China who has been basically beating at SHA-1 slowly. And she's finally gotten to the point where she has better-than-brute-force attacks. You know, not yet feasible. I mean, still in the "more computing power than we have on the planet" stage. But, you know, that's still pretty damn impressive. And we're all pretty damn impressed.

And that's the reason, and this is the question I actually thought you'd ask, we're going to have a new hash function. I don't know if people remember AES. AES is the Advanced Encryption Standard. Back in the late '90s, I forget the year, NIST, which was formerly the National Bureau of Standards, the people who actually gave us DES back in the '70s, said we're going to have a competition for a new encryption algorithm. And it was great fun. I mean, it's basically the cryptographers' demolition derby. We all wrote our algorithms, put them in the center of the ring, beat on each other, and the last one left standing won.
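As an aside on the "160 bits means 80 bits of security" remark: that's the birthday bound. Finding a collision in an n-bit hash takes only about 2^(n/2) tries, not 2^n. Here's a minimal sketch that demonstrates the effect empirically; the truncation of SHA-1 to a few bits is purely to make the experiment runnable, not anything from the talk:

```python
import hashlib

def find_collision(bits):
    """Find two inputs whose SHA-1 digests agree in their first `bits` bits.
    By the birthday bound this takes roughly 2**(bits/2) attempts."""
    seen = {}
    n = 0
    while True:
        msg = str(n).encode()
        # Keep only the top `bits` bits of the 160-bit digest.
        tag = int.from_bytes(hashlib.sha1(msg).digest(), "big") >> (160 - bits)
        if tag in seen and seen[tag] != msg:
            return seen[tag], msg, n + 1
        seen[tag] = msg
        n += 1

m1, m2, attempts = find_collision(32)
# A 32-bit hash falls after on the order of 2**16 attempts, not 2**32.
print(m1, m2, attempts)
```

The attacks mentioned above (Wang's work) beat even this generic bound by exploiting SHA-1's internal structure; the birthday bound is just the floor that every hash function starts from.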
It didn't... you know, it worked out that way, mostly. At the end, there were five good algorithms, and mine wasn't the one that won. The one that won, I'm happy to use, and I think it's damn good. And that's Rijndael, which is now called AES, the Advanced Encryption Standard.

So a couple of years ago, NIST decided they're going to do the same thing with hash functions. We're going to have the AHS, the Advanced Hashing Standard. And it's the same deal. We all go to our corners and develop our little hash algorithms. We submit them to NIST. They all become public. We spend a couple of years beating up on each other, and then NIST picks one. So that's going on now. Submissions are due next year, I think, in the fall. And we don't really know who's working on a hash standard. For the encryption algorithm, we got a submission out of Japan. A few of the security companies submitted one; IBM submitted one. One came out of Australia. One came out of Korea. I think there were 15 total. Now, when we sort of look around at who's working on a hash algorithm, we might have 20 or 30. We might have an enormous number. The Twofish team, which is my group, has reconstituted, and we are developing a submission. We don't have a name yet, so email me suggestions.

And this is likely to be... I mean, aside from this being a really good way to develop a security standard, right? Not by committee. When you develop stuff by committee, you tend to get the lowest common denominator. But have separate groups develop something, and then choose based on peer review and what people want: that's a way better approach to designing a standard. For encryption, when we did the AES, the encryption algorithm, it was a phenomenal learning experience for us as cryptographers. And it was interesting, I didn't expect it. But we as a community learned an enormous amount about how block ciphers work by going through this design process.
I guess this is another NSA secret that we now know. Because we all did a lot of analysis and very little design. And the design was sort of ad hoc; you'd come up with something and you'd publish it and that'd be done. But this sort of multi-year, multi-stage, multi-review design process, I think, taught us all an enormous amount about block cipher design and about crypto design in general. So I would expect some really high-quality hashing submissions.

I mean, in some ways SHA kind of sucks. It is frightfully slow. I mean, it just plods. You look at some of the operations and they are slow, slow, slow. So, you know, when I go into designing these algorithms, what I look at is: the basic unit I'm paying for my security with is the clock cycle. What can I do in this clock cycle that will make it more secure? And we look at, you know, the top Intel chips. The top Intel chips have two processors. What can we do to keep both of them busy constantly that will make this more secure? We have a free operation here. What can we do here to make it more secure? I try to build the maximum security in the fewest clock cycles.

This is a development style you just didn't see 10 years ago. I mean, you were looking more at primitive operations in the abstract, and you wouldn't really differentiate between operations that took one clock cycle or even operations that are free. I mean, there's stuff you can do for free on some of these processors. They don't take anything. You know, for AES, MARS, which was the IBM submission, had a multiply, a 32-bit multiply, which was a great thing. I mean, it had a lot of good security properties, but it was like a dozen clock cycles. And the question to ask is not, you know, is it good to multiply, but is multiplying better than 12 clock cycles' worth of, you know, rotating or XORs or these other things?
Right, it was 12 clock cycles and it stalled some of the pipeline. So there's going to be really exciting stuff in cryptography coming up. I mean, you're not going to see the results of this for maybe four or five years. And that's good, right? I mean, the reason to do it now is that there's no emergency, right? SHA-1 is still fine, to get back to your original question. There are no security problems with using SHA-1 for any applications. But, you know, in the future, we don't know. And the way to do the transition is slowly, deliberately, and properly. And not, oh my god, there's a fire, we must exit. That was a joke, don't leave. Good. You probably can't say "fire" in a crowded... whatever this place is. Right, so that's hash functions.

There's a hand in the middle. Okay, good, now I'm getting some hands. Let's go. Can you speak loud enough? No, you, you, you. Yes. Come closer. Come to the light. Don't come too much closer. I assume people didn't hear that. The question was about privacy. And he's asking, you know, as things get more wired, where's our expectation of privacy? What's going to happen in the future?

This is something that's really important. And actually, this might be the most important thing I'm going to say here. Because here is a point where we really get to make decisions about what society we live in and how the future looks and how the future works. People who were at the conference earlier heard me talk about this. Basically, data is the pollution problem of the information age. Every computerized process produces data. It's a natural byproduct of basically everything we do now. And it stays around. And how we use it, how we store it, how we dispose of it is fundamental to how we live in the future. And these are the questions that, 100 years from now, we as a society will be judged by.
Just like we look back 100 years at the way the industrial age dealt with pollution. And we right now are finally struggling with some really hard issues about pollution. Data's going to be just like that, and it is. So the fundamental problem is that everything we do produces data. When you hear the Big Brother talk, this is where Orwell got it completely wrong. It's not about deliberate surveillance. You don't need to do that. No one needs to follow you. No one needs to watch you. The things you do naturally produce data about who you are, where you are, what you're doing, who you're doing it with. And it's cell phones. It's everything we do on the internet. It's every financial transaction we make. And it's all the cameras. It's absolutely everything.

And this has been true for a few decades. But what's changing is data storage drops to free. Data processing drops to free. So everything now is saved. Amazon can save not only the list of the books you buy, but the books you look at and don't buy. In some ways, we like that. I love it when Amazon puts up suggestions of books I might want based on my purchasing habits. I see books I do want that I didn't know existed before. That's a great service. What we don't like is Amazon selling that data to ChoicePoint or some big data broker, and it gets resold to telemarketers, and then who knows where it is. We tend to like the primary uses of data, not the secondary uses. We like an itemized phone bill, but mailing a copy to the FBI squicks us.

And we as a society really get to make a choice here. We get to decide. And unfortunately what seems to be winning is sort of the libertarian "the market will sort it out" position. But the market won't sort it out, because you, whom the data is about, have no say in this market. According to the laws of this country, data about you isn't owned by you. So when you make a phone call, you can record who you call and how long you talk.
The phone company can too. The phone company owns the data that they collect. You don't own it, even though it's about you. They collected it. It happened in a transaction between you and the phone company. They own it. They can resell it. They can do whatever they want with it. So you're going to have to deal with that. And I think it's really important that we do deal with this. Because our notions of privacy are changing. Our notions of liberty. This is very fundamental to who we are.

This happens to me, and it probably happens to at least some of you. You're on the phone with somebody and you're talking about something. Maybe you're talking about terrorism, or some sensitive government thing. And one of you jokes that the FBI is listening. And it's a joke, and you know it's a joke. But you know it sits in your head, and your conversation changes. Just the thought that you might be being observed changes what you do. The fact that I know this conversation here is being recorded is changing the things I say and don't say. If this wasn't being recorded, there are things I would have said in the earlier interview that I'm not saying, because I don't want them recorded.

So we really are here in the early years of building the information age, and even though some of you in the crowd grew up with the internet, these are still the early years of the information age. We get to decide what it gets to look like. And these aren't technological limitations. These will be legal and social limitations. Either we will have laws about data collection, data retention, data ownership, data accuracy, data audit, or we will not. We'll have laws about police access to data. The Fourth Amendment, which protects those of us who are Americans against the government grabbing our data without a warrant, without due process, doesn't work for data that you don't control. It's written in the Constitution: to be secure in our persons and papers. But our papers aren't in our homes anymore. Our papers are at our ISP. Our papers are held by Google.
Our papers are at our phone company. It's our SMS messages. Those are our papers. But we don't have control of them anymore. So the Fourth Amendment doesn't apply. And recently there have been some court cases here. This is where we as a society get to decide. And really, everyone here should be thinking about this. Because these are problems that in two, three decades we will be looking back on and saying, why didn't we think of this then?

And some of them are hard. Medical data is incredibly personal and private, yet it is enormously valuable in the aggregate. Wouldn't it be neat if everybody's medical data was anonymized and available everywhere? To do massive lifestyle studies or massive follow-on drug studies or environmental studies with that data. What are the effects of living near power lines? Let's pull the health records of everybody who's ever lived near a power line. That'd be neat. How do you do that? How do you maintain the personal privacy of your own health and at the same time get as much of the societal benefit of the aggregate data? This is hard. This isn't easy. Well, we're going to have to do it. Good question, by the way, wherever you went.

I'm going to go to this side. There's a question over there which I can barely see. Yes, you. You're holding something in your hand that's black. You have to be louder. The question was, is society really going to decide? The answer, unfortunately, is no. We'd like it if society would decide. What's going to happen is we'll get what happens naturally. There's a sort of fetish belief that if you leave things alone you get the best result. And some of it comes from libertarianism. Some of it comes from the notion that if you touch it, you screw it up. I'm hoping people here have gotten beyond that. Larry Lessig has written an enormous amount about this. A lot of people believe the myth of the internet, that it was left alone and that's why it's so cool. It's because it wasn't left alone that it's so cool.
It was pushed in a certain direction. So, probably not. I mean, unfortunately, the power is with those in power, and I don't mean this partisan-like. I mean this sort of in general. Corporations like this free and open access to our data. And they all do. So for us to get these sorts of privacy laws, and you can see some of them in Europe, and some are pretty damn good and some they don't have, it's going to take that kind of intervention. But in a lot of ways, that's the way pollution went. You didn't see pollution laws until things got really bad. And even then, I mean, even today it's kicking and screaming to get any sort of laws that protect the commons of the environment, our health, any of those sorts of things.

So likely we make changes around the edges. I got a call, and I'm not going to name the name, from a corporate CIO at a major phone company, who was asking me about what they can do for privacy. And a lot of the things I said they couldn't do, but some stuff they could. And that's where you play. You play around the edges. You try to make this a little less bad. But I expect we will have some privacy left; it's just going to be different. The privacy we get out of the Web 2.0 sites, the Facebooks and the MySpaces and the LiveJournals, is a very different sort of privacy than the one our parents are used to. So you're going to see some changing notions of privacy.

And this is important. A lot of the legal rules around privacy use the secrecy metaphor. So you have a basic right to privacy in this country, but it assumes that privacy equals secrecy. So if somebody knows one of your secrets, you lose your right to claim it's private, because somebody knows. That actually doesn't make much sense. It really makes no sense now. Because the argument is, well, you put it in an email and the ISP knows it, therefore it's not private. But, sir, that's everything. Well, I guess you have no privacy then. So it just doesn't really work.
And this is another example of how the old way of thinking is sort of crashing against the new technology, giving us these results that make no social sense. So unfortunately, that's what we're more likely to see in the near term. And we're seeing some of it now: results that make no social sense.

Anything else? Let's go over there. At the end. Another techie question, about AES, about timing attacks. Who doesn't know what a timing attack is? Let's see if I have to do a little tutorial. You guys are good. Anybody? Wow. No. Come on.

The general class of these, and the phrase I coined for them is side channel attacks, are attacks against, in general, crypto systems or security systems that rely on additional information beyond just the inputs and outputs. Traditional cryptography is: you have the inputs and outputs, and you have the mechanism in the black box, and you try to break it. Side channels are things like timing attacks and power attacks. Paul Kocher did some phenomenal work in differential power analysis. Radiation. I did a paper on Hamming weight attacks. I've seen papers on cache attacks: if you're on a processor with multiple processes, and I'm in another process, I can just watch when the cache gets flushed and learn information. These are all side channel attacks. They are incredibly devastating. There aren't algorithms that survive these. And while you can in some cases build implementations that protect against some of them (the defenses are implementation-based much more than mathematics-based), and again, Paul Kocher does some great work in this. He's at cryptography.com if you want to see his site. He has a bunch of papers on this. It's hard. They often require more access. You need physical access. They're very powerful against smart card chips, for example, because you have the chip. If your badge had cryptographic protection, side channel attacks would be very powerful against it, because you've got the badge in your hand.
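To make the timing-attack idea above concrete, here's a hypothetical toy sketch, not any real product's code: a comparison that exits at the first mismatching byte leaks, through how long it runs, how many leading bytes of a guess are correct. Loop iterations stand in for wall-clock time, and the secret is made up:

```python
def insecure_compare(secret, guess):
    """Early-exit comparison: `steps` counts how many leading bytes matched,
    a stand-in for how long the comparison would run in real time."""
    if len(secret) != len(guess):
        return False, 0
    steps = 0
    for s, g in zip(secret, guess):
        if s != g:
            return False, steps
        steps += 1
    return True, steps

SECRET = b"hunter2"  # hypothetical secret, for illustration only

# The attacker recovers the secret one byte at a time by picking, at each
# position, the byte value that makes the comparison run longest.
recovered = b""
for pos in range(len(SECRET)):
    pad = b"\x00" * (len(SECRET) - pos - 1)
    best = max(range(256),
               key=lambda b: insecure_compare(SECRET, recovered + bytes([b]) + pad)[1])
    recovered += bytes([best])

print(recovered == SECRET)
```

The standard defense is a constant-time comparison (e.g. Python's `hmac.compare_digest`), which does the same amount of work no matter where the first mismatch falls, so the timing channel carries no information.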
They work less well against, say, an encrypted file on my hard drive that you want to break, because you don't have access to the process. But they are very, very devastating attacks. On the plus side, they are kind of high-tech attacks, so your average criminal won't be able to do them, but your more well-funded criminal will.

Anyone want to ask me about voting? Voting has been in the news this week. There is actually some important stuff going on. The California... yes? You want to ask about voting? My favorite voting system is optical scan. By far the best. Because what we want out of a good voting system... this is hard. It's especially hard in this country, because we expect the results before we go to sleep. It's not like India, where you can take two weeks to decide who wins. We will not stand for that in this country. And yet you could probably do better if you allowed that. So voting has to be accurate and fast and anonymous. All the things we need.

Optical scan, for people who don't know the machines: we have them in Minnesota. It's what we use. It's like those old standardized tests from the '70s. You fill in ovals on a piece of paper for your vote, and then you put it through an optical scan reader, which does the tally. So it's nice. You have the piece of paper, and in my state it's both sides. You put it through the reader, and the reader tells you at that point whether your vote was complete. So if you undervoted, if you voted in fewer elections than are on the ballot (your ballot is going to have 30 to 40 different elections on it; another reason voting is so complicated, we vote for so many things at once, all different districts), it'll beep. If you overvoted, if you voted twice, more than you should, it'll beep. And then you have the opportunity to take your paper back, or to say, never mind, let it go through. And then the tally happens in the machine, and the paper drops into basically a safe. So what this does is it gives you automatic auditing.
You have a paper audit trail, which is incredibly important, and you have the fast tally. So this system really is the best one out there. And if you have any pull in recommending voting systems for your local county, city, state, whatever government body selects them, optical scan is the machine to get. So that's your question.

What's gone on this past week is we're starting to see results from the California voting audit. The State of California did a real live review of three voting systems: the Sequoia system and the SSS system, and I forget the third. And this was a real review. They got source code. They were able to do red team, you know, "let's get the machines and see what we can do" types of analyses. The source code review, I think, showed up this morning, or maybe last night, at the California site. I blogged about it this morning. And Matt Blaze has a really good post. He was the head of the team that reviewed the Sequoia code. Really, they basically broke all of them. The other news is there's a new report from Florida on, I think it's also a Sequoia system, where they did a source code review. So we're starting to see more of these.

The annoying thing, and I wrote about this for the California review, is the conditions were really onerous. I mean, you had one month to do the review. The companies, you know, dragged their feet in releasing the source code, so people had less time. They weren't forthcoming. And the state really didn't push. So what I said is, in a lot of ways we're lucky that these voting machines sucked so bad that the researchers found so many vulnerabilities in the short time they had. You know, next time you won't get so lucky. The bad guys aren't going to be limited to a month or three weeks. So while these are good steps, they're really only small steps. The Brennan Center also released a new voting report, on post-election auditing.
So there's a lot of good stuff happening in voting security right now. This is also a tough problem. I mean, other people compare this to ATM machines, and that's not the way to think of it. I mean, voting is something that's incredibly important for like one week every two years. So it's really hard to get people trained on the systems, getting familiar with them. I mean, user interface is extremely important, and very different, because you can't build a system with a learning curve, either for the voters or for the election officials. I mean, you know, when you go vote, and I hope you all vote, the person behind the desk has no more experience than you do. Actually, being an election judge is fun, and it's another thing you can do on election day: be an election judge in your county, in your city. You sort of see how this stuff works. You'd be kind of amazed. So that's what's going on in voting.

All right, let's get a question. There's a question right there. I can't hear a word you're saying. This is actually a really big room. He's getting the microphone. I'll answer another question while he's walking up here. "Are these algorithms functionally better than they were five to ten years ago, given the increase in average computing power? Will there be some point in the future where we will reach a point of diminishing returns?" So I read that, so we don't even know if it's his question. Okay, it's his handwriting. So now we know.

Encryption... you know, in computer security, most of it favors the attacker. Right? I assume that the DEF CON crowd knows this, but we do have to explain this to a lot of corporate security audiences. Right? I mean, you know, the defender has to defend everything, and the attacker has to find one way in. Right? All my source code has to be secure. You have to find one buffer overflow. The odds really favor the attacker in computer network security.
Which you know, because basically every system out there sucks. I mean, we break into everything. And so there's a natural flow toward the attacker. Cryptography is the exception. In cryptography, the math favors the defender. Think about it very basically: if I add one bit to my key, I double my key space. Right? It makes the attacker do twice as much work to break the key. But the defender only has to do, you know, one nth additional encryption work, going from a 64-bit key to a 65-bit key. So if I have a 64-bit key and I double it to 128 bits, I have to do twice as much encryption work, more or less, maybe four times; but there's some small linear increase. For the attacker, the increase is a factor of 2 to the 64th in work. Right? So you get an enormous benefit. And when you look at AES with a 128-bit key, brute-forcing it takes more computing power than there is on the planet, in the foreseeable future. So it's not that you reach the point of diminishing returns; it's that you reach the point of "who cares." His question was, are encryption algorithms more secure today than they were five years ago? And the answer is: who cares? Because the encryption algorithms are so much more secure than every other aspect of the encryption system that that's not where you're going to break it. There have been a couple of articles recently, coming out of court cases, on how the FBI breaks into PGP-encrypted traffic. And it's kind of interesting. They don't break PGP. They install a keyboard logger. Right? Who gives a damn how many bits the encryption algorithm is? Or look up AccessData. AccessData is a company in Utah. It's a really cool company. They build cracking software for pretty much every encryption program. Go look them up. You can buy modules for PGP, for PKZip, and, you know, basically any encryption program out there, they make a cracking module for it.
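The asymmetry described above is just arithmetic; here's a toy sketch of it (an illustration of the scaling argument, not a real cost model):

```python
# Why key length favors the defender: the attacker's brute-force work
# doubles with every added key bit, while the defender's encryption
# cost grows only by a small linear factor.

def brute_force_trials(key_bits: int) -> int:
    """Worst-case number of keys an attacker must try."""
    return 2 ** key_bits

# Adding one bit doubles the attacker's work:
print(brute_force_trials(65) // brute_force_trials(64))   # 2

# Doubling the key from 64 to 128 bits roughly doubles the defender's
# encryption work, but multiplies the attacker's work by 2**64:
print(brute_force_trials(128) // brute_force_trials(64))  # 18446744073709551616
```

At 2**128 trials, brute force is off the table entirely, which is exactly the "point of who cares."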
They don't break the cryptography. They don't brute-force the key. They try the keys in most-likely-key order. You know something? Their list of the top 100,000 possible keys will break a good quarter of every instance out there. Their record is basically 60%. What they do, and this is phenomenal, is, well, a lot of their sales are to the FBI. And so what happens is the FBI, you know, comes into your house and they take your hard drive, and it's got encrypted files on it, and you're of course an unsavory character and the FBI wants to read them. So they go to AccessData and say, can you read these? What AccessData does is they take your hard drive, suck down the entire thing, build a database of every single printable string on your hard drive, and try every one as the key. And guess what? You're running Windows, and Windows, you know, shoves stuff into swap, and it ends up at the tail end of some MS Word file, and four years later it's still sitting there, and what it was was the screen image of what your key was. Or something else. And they have enormous success doing that. So here you have a really good encryption product being broken by, you know, the memory management features of the OS. That should piss you off. But that's what happens. So key length is the wrong discussion to have. What I like to say is, it's like we're putting a huge stake in the ground, hoping the enemy runs right into it. And we can argue whether the stake should be a mile tall or a mile and a half tall, but honestly, the enemy is going to walk around the stake. Right? The crypto is so much better. And whether it's the user interface, whether it's the key management, whether it's the computer security, the network security: something else is going to break first. Whenever I do analyses of encryption products, I pretty much break all of them. I never break the cryptography.
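The "every printable string on the disk is a candidate key" approach can be sketched in a few lines. This is a toy illustration, not AccessData's actual software: the disk image is a made-up byte string, and the SHA-256 comparison stands in for whatever real key-verification step a cracking tool would use.

```python
import hashlib
import re

def printable_strings(blob: bytes, min_len: int = 6):
    """Extract runs of printable ASCII, like the Unix `strings` tool."""
    return re.findall(rb"[ -~]{%d,}" % min_len, blob)

def dictionary_attack(blob: bytes, target_key_hash: bytes):
    """Try every string found on 'disk' as the passphrase."""
    for candidate in printable_strings(blob):
        if hashlib.sha256(candidate).digest() == target_key_hash:
            return candidate
    return None

# Simulated disk image: the passphrase leaked into free space,
# e.g. via swap or an application temp file.
passphrase = b"correct horse battery"
disk_image = b"\x00\x03binary junk\xff" + passphrase + b"\x00more junk\x07"
target = hashlib.sha256(passphrase).digest()  # stand-in for key check

print(dictionary_attack(disk_image, target))  # b'correct horse battery'
```

The point is that the attack never touches the cipher: if the OS ever wrote your passphrase to disk, scrubbing free space and swap matters more than key length.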
Even if the cryptography sucks, it's not worth bothering with. Something else sucks more. And that's annoying, right? Because what I said was that cryptography favors the defender. Unfortunately, the weak link is where it favors the attacker. And that's why you see the FBI reading PGP traffic all the time. PGP is great. But the FBI just goes around it. Yeah, I mean, I run a pretty complex encryption management system on my laptop. I take my laptop to lots of countries, and I don't want some, you know, border crossing to get ugly, or to lose it in a hotel room, or who knows what. And I'm constantly thinking about it: I type my key in, you know, every time, and when is it going to be swapped out to disk? How often do I have to scrub the free space of my hard drive to get it out of there? What works? These are the things I'm thinking about. The question was about my process on my laptop. Actually, somebody asked me this in email, and I said I should do a blog post on this. So I will. But basically I use PGPdisk. I mean, encryption is all about trust. You must trust the people, and I know the people at PGP; I know Jon Callas and the guys there. So I trust him and his team. And I basically divide my hard drive into the things I need immediately and the long-term storage. You know, the only thing I'm not secure against is someone grabbing my laptop out of my hands while I'm using it. It doesn't have that sort of on-the-fly erase or encrypt stuff. But if it's closed, if it's, you know, hibernating or asleep, everything's good. So I set tiers of longer-term and short-term storage. Stuff I need, let's say, once a week, I encrypt with another key, and it just sits in some, you know, back file somewhere. Encrypted zip archives. And I only pull them out when I need them.
The stuff I use every day, you know, my email and the files I'm working on, PGP has sort of other options to make that work. And I use that. The best security for your laptop, of course, is to not put your files on it. And it amazes me how many people carry everything they've ever written on their laptop, or even worse, on, you know, a USB dongle. This is the problem of ever-smaller devices. We are keeping more and more sensitive stuff on ever smaller and easier-to-lose things. Our cell phones might have our calendar for the last five years, our entire contacts list, all of our email, on a little cell phone. Some of us have USB tokens that hold our entire desktop; we just go to a random machine, boot off them, and work. That's very valuable and easy to lose. So I'd like to see encryption on these smaller things. I have nothing good for my Treo, which is a Palm OS device. I try to deal with that by just not keeping a lot of stuff on it. But I haven't seen really any good encryption thing for it. My laptop, I run Windows. Just, you know, yeah, yeah, yeah, but that's where I get my tech support. He's taking notes. This is great. Oh, he's allowed to, he's the press. Now everybody's going to know. That's even better. Oh, I want to see. Who's got a good question? Like a good one. Nah, I don't want to do another crypto question. You'll be judged by your peers on the quality of your question. Okay, cyber warfare: that's quick. There's a really good essay on cyber warfare on my website, from Cryptogram. It's too long to do, you know, now. One of the nice things about writing Cryptogram is I can sort of summarize what I think and never speak of it again. I'll probably never answer this encrypted-hard-drive question again, because I'm going to write about it in the next couple of weeks and then I'll just make you read it. So, cyber warfare: I'm proud of the essay. It's a good one. Find it and read it.
So I don't mean to blow you off. It's just, that's sort of where it all is. Quantum computing is also on there. Oh, this is good. All right, we're going to do it. Wow. Security vendors protecting us from people they accept money from. I wrote about that, and the question is, will your antivirus software ever tell you about the FBI trojan on your computer? You kind of want to know that, don't you? Right? Maybe you should buy your antivirus software from the guys in Finland, because they're less, I don't know. Yeah, that's a big one, right? And we saw that, I wrote about that with the Sony case, when Sony built that rootkit, and Symantec and all the others were basically told to ignore it, and they did. So there's a whole lot of stuff going on there. Did they know what it was? I think they did. It was kind of hard to, you couldn't get straight answers out of them, of whether they actually colluded on purpose or by accident. It seemed like collusion on purpose. All right, I've got time for one more question. There was a woman I heard back there. Say the word again? Virtualization. What do you want me to know about it? The security implications? I mean, it's like anything else. Complexity is the worst enemy of security. As soon as you build in extra complexity, you have to secure it. And it's hard to do. There are going to be all sorts of security holes, just like you have in everything else. So before you leave, I want to remind you again: if you don't get Cryptogram, go to schneier.com and sign up, or get it in blog form. The mailing list, this goes to absolutely nobody; I built it so I don't even get to see it. I'm sorry. I doubt it. No, no, no. So supposedly I was told there's going to be a Q&A session after my Q&A session, which seems redundant. But maybe I'll ask you questions or something. And I don't know where it is, but I was told a guy with a hat would take me there. Now this is entertaining.
Enjoy your conference.