Alright, so if you've heard, the CIA guy is in the newbie track, which seems so apropos. When I'm done here, I'm gonna go out to that room, there's gonna be a table where I'm gonna sit, and I'll chat and answer questions for as long as you guys want, until I get really tired. And according to my schedule, it's a little bit early, so I'll wait, because a couple people are still filing in. What is my talk gonna be on? I don't know yet. The CIA guy might be more interesting than me. These guys think I'm okay. Ah well, it means there's a seat for someone else. You can go talk to that person. She'll be over in two minutes. She'll be over in a hundred seconds. What's my intro music? I know. Is this punk industrial or goth techno? This is grunge? I actually can't tell the difference. It actually doesn't sound half bad. Do you want to spot feds? No. There are probably some here. Let me see: if I'm wearing dark glasses indoors, that can't bode well. That's true. All right, we'll start. I think this is a good time to start. People aren't still filtering in, right?

Hi, I'm Bruce Schneier, and I'll take questions. Anybody? Oh, okay. This person asked about Kerberos and Microsoft. Yeah, it's a vaguely interesting topic. We're not really quite sure what's going on. Kerberos is an old authentication protocol designed at MIT, I think in the early 80s. It uses symmetric cryptography. It allows people to log into computers more robustly than with a username and password. And it has sort of bummed along for the past decade and a half, and in various applications there are free implementations. It shows up in products once in a while. Microsoft added support for Kerberos in Windows just recently, I think in Windows 2000. Kerberos also became an IETF standard, an internet standard for authentication. It's sort of a competitor to a PKI-like standard.
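Kerberos's core trick, a trusted key distribution center handing out a fresh session key wrapped under each party's long-term symmetric key, can be sketched in a few lines. This is only the core idea, not the real protocol (no authenticators, timestamps, or ticket-granting service), and the XOR-keystream "cipher" is a stand-in for a real one, not secure:

```python
import hashlib
import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy XOR keystream derived from SHA-256. Stands in for a real cipher; NOT secure."""
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR keystream: applying it twice recovers the plaintext

# The KDC shares a long-term symmetric key with every principal.
k_client = secrets.token_bytes(16)
k_server = secrets.token_bytes(16)

# To log the client into the server, the KDC mints a fresh session key and
# wraps it twice: once for the client, once (the "ticket") for the server.
session_key = secrets.token_bytes(16)
for_client = toy_encrypt(k_client, session_key)
ticket = toy_encrypt(k_server, session_key)  # opaque to the client

# Each side unwraps its copy; they now share a key, and no password crossed the wire.
assert toy_decrypt(k_client, for_client) == session_key
assert toy_decrypt(k_server, ticket) == session_key
```

The symmetric-only design is the point: the whole scheme needs nothing harder than a block cipher, which is why it predates widespread public key deployment.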
When Microsoft put it in Windows 2000 (and this is interesting because it might not be a security thing), they made it different from the IETF standard. And one of the questions you ask is: did they do this just to break compatibility, which is sort of something they generally do? Or was there some other reason? When you make changes to a security standard, you have to worry about whether it affects security. And there was lots of talk about why they made this change and whether it affected security. As for that piece in Crypto-Gram: I have not looked at it myself, so I don't have any good answers. I haven't seen anybody say this is bad for security reasons. People have said this is bad for compatibility reasons, but not for security reasons.

Ah, I see there's a good question. AES. Alright, who doesn't know about AES? Does anybody need a refresher course in AES? Okay, I'll give the two minutes. In the mid-70s, the National Bureau of Standards wanted to produce a standard encryption algorithm. And they sent out, in the Federal Register, which is this little book of government contract solicitations, a request for candidate algorithms that would become a standard. And they got a whole lot of garbage. They submitted another request for standards, I think of this as like 1976, and they got a whole lot of garbage, and something from IBM called Lucifer. And history happened, and Lucifer became DES, and that became the government standard algorithm for the past 20 or so years. In 1990... let me get these years right... in 1996, NIST, which is what the NBS became, the National Institute of Standards and Technology, not the National Institute of Technology and Standards, which would be NITS, decided that they needed to replace DES. You know, a day late and a dollar short, but they were right. So they did the exact same thing.
They had a couple of conferences in, I think, 96 and early 97, where we would all get together and say what we wanted out of a standard. And these were open; anybody could show up. And lots of people did, some of whom had useful opinions, a lot of whom didn't. In the spring of, I think, early 97, they asked for submissions for AES. The rules of the game were: 128-bit block cipher, faster than DES, key lengths of 128 bits, 192 bits, 256 bits. That's why things like IDEA and Blowfish and others you might be used to were not submitted; they didn't meet the criteria. Submissions were due... so I have the years wrong, because submissions were due like summer of 97, maybe like June 15th or something. 21 algorithms were submitted, 15 of which met the submission criteria. One of the criteria was you had to have working code. So if you produced an algorithm without working code, you were tossed out. Working code in C; someone submitted working code in Pascal. So there were 15 submissions. These came from all over the world: from the United States, from Canada, from Japan, Korea, Costa Rica, Australia, Belgium, England. All sorts of places. From academia, from industry. And this is the AES competition.

Counterpane submitted an algorithm called Twofish, hence the shirt, which we submitted along with the other 14. And it's been kind of neat. I like to think of this as the big cryptographic demolition derby: we all put our algorithms into the ring, we beat on each other, and the last one standing wins. In August of 97 was the first AES conference, where we all got to present our algorithms. Now, this is actually kind of an interesting story. Over the months after submission, before the conference, people publicized their submissions. And we had seen all of them except this one submission called Magenta, from Germany. Magenta was kept secret until the workshop.
It was presented at the workshop and broken during the question session. When a good cryptographer starts a question with, "Can you please put up slide 14 again?", nothing good can ever come of that.

So the first round of the demolition derby happened between 97 and 98. What am I saying? I'm getting the years wrong. It's 98 and 99. No... 97, 98, 99. See, I'm screwing this up. 97 was the submission deadline. Right, okay. No, 98 was the submission deadline. Because in 99, in April, was the second candidate conference, where we all presented papers attacking each other, talking about efficiency on different platforms. And there were lots of good papers. Some of the candidates were broken, for various definitions of "broken." And then NIST crawled into their cave and chose five finalists. And they chose, if I can do this in alphabetical order: MARS, RC6, Rijndael, Serpent, and Twofish. They chose five: IBM's submission, RSA's submission, the submission from the academics in Belgium, the submission from Ross Anderson in the UK, Eli Biham in Israel, and Lars Knudsen in Norway, and the submission from us. And then we had the third AES candidate conference in New York in March, which was basically the second round of the demolition derby. And lots of papers were presented. And what NIST is doing now is they have crawled back into their cave, and they will choose one winner. And we don't know which one it is. They want to get it chosen by August. Crypto is in August, and they'd like to have it chosen by then. We don't know if they will. I think our chances are okay. I believe it's down to three of them: Rijndael, Serpent, and Twofish. Rijndael is the fast one, but risky. Serpent is the slow and conservative one, and we're the compromise. So depending on how I feel in the morning, I think our chances are better or worse.

Now, this will never be as ubiquitous as DES. If you think back to the 1970s, there were no alternative encryption algorithms. DES was all anyone ever had.
So everyone used DES. Now we have triple-DES. We have IDEA. We have Blowfish. We have RC5, RC4. Lots of algorithms... IDEA, I think I might have said that... CAST... lots of algorithms are seeing wide use. So AES, while it will be used, will not become nearly as ubiquitous. But it's been great fun. I mean, I've had the most fun possible as a block cipher cryptographer: actually designing a cipher. We wrote a book about the design and analysis, going through the real work. Not just spending two weeks and saying "I did this," but spending a good year designing and breaking and designing and coming up with a submission. Then watching other people attack it and seeing what they found that we missed, and us attacking other submissions. It's really been a ball. This is enormously fun. So that's where that is.

Who has the next question? I want to... all right, I have things to throw to you guys. This is... it's supposed to be a good tape. It's a demo tape from somebody. And I want to give these out for good questions. If I can get it to him, that'd be cool. There's a question over there.

Okay, let's talk about this question in two parts. The first question is, what algorithms have been cracked? The second is, what products have been broken? And they're very different questions. Now, when you start looking at products out there, those that don't use standard algorithms often use really horrible things. And we see that all the time. We saw that in all the digital cellular algorithms. We saw that in some of the DVD stuff. We saw that in FireWire. There's example after example of a company producing a proprietary algorithm and it being broken. Now, a lot of these we never actually break, because we either never see them or we're too busy. I get stuff sent to me all the time: here's a product, will you look at it? And I usually say I'm too busy. If the government cares, they will look at it.
So they're likely to have broken a lot of the dorky algorithms that people use, the ones that haven't gotten the kind of peer review you'd expect. The larger and more interesting question is what products have been broken. And this is something I've come to in my years of analyzing cryptographic systems. What I find is that no matter how bad the cryptography is and how lousy the algorithms are, something else is worse. It's almost never the math. We saw that with the DVDs, a perfect example: there was this mediocre algorithm with a bad key length, but it didn't matter. You could break the system anyway. Even if the cryptography had been perfect, it would still have been a lousy system. And you find that in a lot of software packages: things break not by beating on the math, but by looking around the edges. And to a first approximation, the math almost doesn't matter. The analogy I like to use is that it's like putting a tall spike in the ground and hoping the enemy runs right into it, instead of building a wall. The spike can be a mile tall, or a mile point two tall, but in the end that doesn't matter, because the enemy is going to go right around it. So for someone like a government, or really anybody who's doing analysis of commercial products, whether they're firewalls or VPNs: I believe they're all breakable. It's just a matter of finding where the overflow bug is, where the bad random number generator is, where the programming mistake is. That's where the mistakes are. And we've really been sold a false sense of security. I'm probably guilty of selling this myself, having written Applied Cryptography, saying things like: the mathematics will protect you. It's actually not true; the mathematics doesn't. The mathematics is actually really good, but there's so much around it that undermines it that in a lot of ways the mathematics doesn't matter. Who wants this? You want these? If you don't want it, there's somebody else. I'll take that first. I remember saying that three years ago, but okay.
The general question is: where is, let's say, the NSA compared to the academic world? It's sort of an interesting speculative question. Where are we compared to them? You can look at this theoretically. The NSA is a one-way lens, right? They take everything we do, and they give us nothing they do. So sort of by definition they're ahead. Even if they know only one more thing, they're ahead, because they're not telling us that one thing. More interestingly, you can think about what they're doing. Their job is really bread-and-butter breaking of production ciphers. Stuff that's used by militaries, by governments: they break it. And that's where they put a lot of resources. In the academic community, we don't do that with the same single-mindedness they do. And they've also been doing it a lot longer. Is that my phone or someone else's phone? So the odds are that they are better at breaking block and stream ciphers. Just because they're better at it: they've been doing it longer, they do it more, they have more expertise, they have a bigger body of knowledge that we don't. What's a break in academia is not a break at the NSA. I could produce a break that has a complexity of 2 to the 100th, and we can say this AES candidate is broken. But as far as the NSA is concerned, I've done nothing, because they can't read traffic with it. What they care about is: can I read traffic in something resembling real time? So I believe they are ahead in that. In the esoteric protocols, authentication protocols, commerce protocols, sharing of content, academia I think is very far ahead, because that's a business thing. That's not something the government needs. There's a lot more research done in business, a lot more things are fielded, a lot more things are tested. So I believe we're ahead there. In public key cryptography, either there's parity or they know some things we don't.
It's clear, if you look at the sorts of research they were funding in the 70s, that there are some areas they went into that we haven't. Whether that was fruitful, I don't know. It's very hard to speculate about public key cryptography, because those are really number theory questions, not little techniques and tricks to break things. So I'm not sure where we are compared to the government in public key. Ten to fifteen years might be a reasonable number. We might come back here 15 years from now and find we're no further ahead than we are today. So maybe 15 years of research could yield no results. We don't even know if there are results out there for us to find. That's another question to speculate on for a while.

Any other questions? Let's go from this side of the room. Let's go right in the back. I'm only going to talk a little bit, because I actually don't want to talk about the company; that's kind of shilling to you guys. Basically, what we're doing now is internet burglar alarm services. The idea being, from what I just said: all of these products have flaws and problems, and if you don't have real-time monitoring of your security, it's not going to do you any good. Otherwise we're like a bank saying: our vault is so good, we don't need a burglar alarm. That never happens in the real world. So we're building an internet burglar alarm for companies that need to monitor their networks and don't have the expertise. And I'll stop there, because while I'd love to talk about it, I don't want to do it here; you guys didn't come to hear me talk about my company. All right. How about over there?

Wait a second... the question is why the submissions to AES were relatively free. Oh, God: fame and glory. This came up during the AES process. In the beginning, some people said you can't restrict it to free algorithms, because no one will submit. As it turns out, we got some extremely high-quality submissions.
Many groups, including IBM and RSA Data Security, a number of universities, companies, NTT in Japan, submitted and gave away their algorithms, which probably represented man-years of work. Why did we do it? One, because it's way fun. And I assume companies did it for the fame and glory. I mean, getting chosen as AES would be cool. But really, if you like doing ciphers, this is what you do. And if you make it proprietary, no one will use it. You know, we sort of learn that again and again. And I believe everybody who's good at cipher design understands that: in order for a cipher to be widely used, analyzed, and trusted, it has to be free and available. So NIST, I think, made the right choice, evidenced by the fact that they got 15 high-quality submissions. They didn't get stuff tossed off by kids; they got real people doing real work. And that was kind of neat.

Okay, you've been holding your hand up. You have orange hair; that counts for something. Well, first, I'm just glad you're here. Well, thank you. You're my favorite audience member. A message that's been wrapped in separate protocols? For example, it's like an image, and then did a PGP, and then... yes. Okay. Separate. Okay. Number two is, can you describe how that can be used to develop a... If I could invent a new algorithm on the fly, I'd probably write a paper. So I really can't. How about a new algorithm, a new way of doing public key? How likely is that? Probably not very. Well, this is one of our problems in cryptography. Pretty much all of our public key algorithms are based on the same piece of mathematics: the difficulty of factoring large numbers, or the discrete log problem, which is basically the same; the same solutions apply to both. And nobody actually knows that factoring is hard, for any reasonable definition of the word "hard." It could be quite possible that in a few years somebody invents a way to factor numbers easily. And then we have no public key cryptography anymore.
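That dependence on factoring is easy to make concrete. Here's a toy textbook RSA round-trip (tiny primes for illustration; real keys use primes hundreds of digits long, plus padding this sketch omits):

```python
from math import gcd

# Toy textbook RSA with tiny primes; real keys use primes hundreds of digits long.
p, q = 61, 53
n = p * q                    # public modulus: 3233. Factoring n is the whole ballgame.
phi = (p - 1) * (q - 1)
e = 17                       # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)          # private exponent (modular inverse; Python 3.8+)

m = 42                       # a message, encoded as a number < n
c = pow(m, e, n)             # encrypt: c = m^e mod n
assert pow(c, d, n) == m     # decrypt: m = c^d mod n

# Anyone who can factor n back into 61 * 53 can recompute phi, and then d.
```

The last comment is the whole worry above: an easy factoring algorithm turns the public `n` straight into the private `d`.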
We'd be dropping back to some really icky, ugly things. It's a bad single point of failure. I don't lose a lot of sleep over it, but weird things have happened. We've had Fermat's Last Theorem proved. And that was proved using elliptic curves and modular forms. I mean, forget what that means. Someone found a bridge between two very diverse areas of mathematics and was able to pull the math from one to solve a problem in the other, in a very new and surprising way. Now, one of those two areas shows up in cryptography. So it's certainly possible that things like that are going to happen. So a new way of doing public key cryptography that had nothing to do with prime numbers would be useful. I'd like to see that. I'd like to see it now, so that in six years we might actually even trust it. But, yeah, I don't know how likely it is to happen. Public key, you see, is based on this one hard problem. That's what you get. Back, back there. Talk loud, though. I don't like NTRU. NTRU is dumb. Or is that a rebuttal? No, it's not a rebuttal. I'll take the one in the back first.

Now, these are devices to break public key systems. Let's talk about them. TWINKLE is just an optical computer that's optimized for factoring. There's really interesting engineering there; there's not a lot of science there. Quantum computing is more interesting. A few years ago, someone, I think Peter Shor, invented the notion of a quantum computer. The idea would be: we don't need to use a von Neumann architecture. We can do stuff with quantum mechanics and waves and, you know, build a computer. This was all done on paper. No one had actually built anything at that point. And then the question came up: what the hell is this quantum computer good for? Well, surprise. The first algorithm invented to run on a quantum computer was for factoring large numbers very fast. So, break RSA.
The second thing, which I forget, that a quantum computer turned out to be good for is breaking discrete log algorithms. So basically a quantum computer is, almost by definition, a public-key breaker, if you could actually build it, which you can't. And more research has happened. There is practical work on building quantum computers going on right now. I think it's Sandia that has a quantum computer that can factor the number 15. So anybody using 4-bit RSA should worry. And in theory, in, you know, my lifetime, your lifetime, somebody's lifetime, a quantum computer will be built. There's no real reason why it won't be. It's a simple matter of engineering; the math is done. We don't know how it will work or what it will look like. But this is interesting: in general, a quantum computer will speed up any key search by a square-root factor, which means for any symmetric algorithm it effectively cuts the key length in half. So if right now a 64-bit key is vulnerable, under a quantum computer a 128-bit key will be vulnerable. Now, a 128-bit key will not be broken classically in anything resembling human lifetimes, or species lifetimes. So a 256-bit key will be secure in a quantum environment to the same degree. So it's not devastating. We're not going to lose everything if a quantum computer is invented. But it would be a big deal. And thankfully people are working on it. And that's good.

I said I'll take this person next. No, I'll take him. You have a tape. Okay. I keep forgetting about these. Oh, didn't even get close. Yes. Where do I see PKI going, and when do I see it getting there? Oh, what a question. I wrote about this already, and if you go to my website, you can read about it. How'd I do? Actually, you know, seriously though: I do a monthly newsletter called Crypto-Gram, which, if you don't subscribe to, you should subscribe to. Tell the people next to you to subscribe. Thanks. And in there, what I do is answer questions like that, so I don't have to answer them ever again.
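The square-root speedup arithmetic above is worth seeing on its own. This is just the generic quantum-search bound (the Grover-style result), not a statement about any particular machine:

```python
from math import isqrt

# A k-bit key has N = 2**k possible values. A classical brute-force search tries
# ~N keys; a quantum (Grover-style) search needs only ~sqrt(N) = 2**(k/2) steps.
def effective_bits_against_quantum(key_bits: int) -> int:
    return key_bits // 2

for k in (64, 128, 256):
    print(f"{k}-bit key: ~2^{effective_bits_against_quantum(k)} quantum search steps")

assert isqrt(2 ** 128) == 2 ** 64                     # the square-root speedup, concretely
assert effective_bits_against_quantum(256) == 128     # why 256-bit keys still hold up
```

So doubling the symmetric key length exactly cancels the quantum advantage, which is why the situation isn't devastating.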
And over the months, I've done lots of essays on these sorts of topics. And I did a long essay with a colleague, Carl Ellison, on PKI, really trying to bring down the hype and talk about the realities of using PKI and what it means security-wise. It's a long essay. It's a good essay. It asks a lot of very hard questions, and it attempts to answer them. And in the end it isn't very satisfying, which is cool. And I do urge people to read it, if they're interested in PKI, especially if their companies are rushing to PKI because it's the buzzword du jour, which it is. So rather than me spending an hour on it, and I could do an hour, it's worth reading. My opinion hasn't changed from what I wrote, which I guess is the important thing I should tell you.

I will take a question from this area. Oh, the one... give me a... what? Give me a tape. That's true. I'm sorry. I'm going to hurt somebody. Okay, I'll... yes, yes, you. No, because they know they're better cryptographers. Do the other cryptographers get jealous? Cryptography is a weird field, because it's by nature antagonistic. You invent something, and someone else breaks it. And then you break their break. So all cryptographers have thick skins. You have to. You publish, and your stuff gets shot down. Even the best cryptographers. Adi Shamir wrote a paper, maybe five, ten years ago, on a public key scheme, and it was broken in the days before the conference. So he gets up on stage in front of everybody and says: hi, I'd like to discuss a very interesting failure. You get used to that. That's what cryptography is. I just feel my expertise is less in new cryptography and more in explaining cryptography to other people. Do I still care about privacy? Yes. So... I'm not sure how that fit in. I don't see a lot of jealousy. Of course, maybe I'm not paying attention. Oh, back there. Is there any way to protect against keyboard logging? That's a good general question.
We'll answer the general question. This is one of those breaking-the-crypto-around-the-edges things. How would we break PGP? An easy way to break PGP is to put in a keyboard logger. You not only get the messages, you get the passphrases. And the question is, how do you prevent this? It's really hard. If you have a general-purpose computer, you can't. If you had a closed box... you'll never put a keyboard logger in an ATM machine, because you can't get inside it. But you can always put a keyboard logger or a screen capture device or any of those things onto a general-purpose PC. If you don't have secure hardware, you're not going to prevent those sorts of attacks against the I/O, because the I/O is inherently insecure. If you're a trusted insider, you can do lots of things. But for normal... this is the problem you see with content protection. The DVD kind of protection scheme worked okay as long as the algorithm and decoder were in a player, in a closed box that you couldn't get to. As soon as they made a software player, you could reverse engineer it, and you can do whatever you want. As soon as you put a security device onto a general-purpose computer, you can circumvent it. You can't stop that. The content people are still figuring this out. Eventually they're going to get it; it may take a while. And my security has the same problem. I can't prevent it: while I'm here, someone at home could be putting a keyboard logger on my home computer. I can't stop them. I can't detect it. It's just a big hole in security. You have to trust the platform. You see that problem again with the digital signature law. How do you know... this is something I did in my book. I would say: to sign a message, the signer computes message to the d mod n. That's complete bullshit. I have never computed m to the d mod n in my life. My computer computes m to the d mod n. And it is an article of faith that when I click "sign," my computer does that.
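The operation itself, by the way, is tiny; it's the trust that's hard. A toy version of m to the d mod n signing (tiny textbook primes, and no hashing or padding, which real signatures require):

```python
# Toy textbook RSA signature: sign with the private exponent, verify with the
# public one. Illustrative primes only; real signatures also hash and pad m.
p, q = 61, 53
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

m = 99                       # the message, as a number < n
sig = pow(m, d, n)           # what "click sign" is supposed to compute: m^d mod n
assert pow(sig, e, n) == m   # anyone can verify with just the public key (e, n)

# Nothing in the math proves the machine signed THIS m rather than another one;
# that trust lives entirely outside the equation.
```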
I have no clue what my computer does when I click "sign." Actually, I don't. It could not sign at all. It could send a copy of the plaintext to you. It could sign a completely different message. I'm trusting what it tells me on the screen. I don't know what's going on inside the processor. This is a big security problem, and something we don't know how to solve. General-purpose hardware has a lot of security risks.

I'm going to take a question from the person I saw standing over there. That was you. Okay. I'll tell you about quantum cryptography, which is what you're asking. This is about 15 years old. A basic quantum physics lesson, a really basic one: in quantum mechanics, when you observe a particle, you change its state, by definition. Naively, that's a cool way to detect eavesdropping, because if someone eavesdrops, they observe the particle, so they necessarily change its state. As it turns out, you can build a cryptographic mechanism based on this property. You can exchange key bits and know they are secure, because if someone had observed them, they wouldn't be what they were. Forget that; I'm going to elide all those details. And it works. People have been doing this. Last I heard, the people at British Telecom have a quantum key exchange going over 10 kilometers of fiber optic link. This really does work. I was asked, oh God, that was six months ago, by a reporter covering this: does this matter? Isn't this cool? And the answer, to me, is of course no. This is not something I need. I'm not sitting here saying: oh, my cryptography isn't working well, I need quantum cryptography. And then we're back at the mile-tall spike. Yes, you can now make the spike 10 miles tall. Good for you. That's not going to solve my real problems. So this is really cool science. I don't see a lot of practical applications. No, down here. The question is about distributed computing. We're seeing a lot of good cryptanalysis happening.
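To rewind for a second to that quantum key exchange: the bookkeeping half of it (the BB84-style sifting step) can be simulated classically. This models only the protocol mechanics; the actual security comes from the physics, which no classical program reproduces:

```python
import secrets

# Classical simulation of BB84-style key sifting (bookkeeping only, no physics).
n = 256
alice_bits  = [secrets.randbelow(2) for _ in range(n)]   # raw key bits Alice sends
alice_bases = [secrets.randbelow(2) for _ in range(n)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(n)]   # Bob guesses a basis per photon

# If Bob's basis matches Alice's, he reads her bit; otherwise his result is random.
bob_results = [ab if ba == bb else secrets.randbelow(2)
               for ab, ba, bb in zip(alice_bits, alice_bases, bob_bases)]

# They publicly compare bases (never bits) and keep only the matching positions.
keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
key_alice = [alice_bits[i] for i in keep]
key_bob   = [bob_results[i] for i in keep]

assert key_alice == key_bob     # no eavesdropper here, so the sifted keys agree
print(f"kept {len(keep)} of {n} raw bits")   # about half, on average
```

In the real protocol, an eavesdropper measuring in the wrong basis disturbs the photons, so comparing a sample of the sifted bits reveals her; that's the part the physics provides and this sketch cannot.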
Distributed.net breaking DES and some of the RC5 challenges. The question was: is there any useful cryptography that could happen that way? This is something I've been talking about with people. You'd think there would be: figuring out good S-boxes or good functions, doing really big brute-force searches for good things instead of for keys. We haven't found anything yet that would be useful. When we do, we will tell you guys, so whoever's doing the distributed computing can actually do it. I'd like there to be something, but I haven't really seen anything. I should give you a tape.

That's not a crypto thing; somebody asked me about my new book. Oh, cool. I have a new book coming out. I have flyers for it sort of over there and over there, and I think they're also in the lobby by the bar. After this is over, I'll get to the table; I'll have them there. I have a new book coming out, either August or September. I can't get the publisher to give me one answer, so it's one of those two. It's already on pre-sale on Amazon. The book is called Secrets and Lies, and it's a book really about the philosophy of security. A lot of the things I say up here: what technologies can and can't do, how they work, the limitations of cryptography and computer security and biometrics and PKI, what the world looks like because of this, and then how to survive anyway. One of the things you learn when you go out into the real world is that the world's a pretty dangerous place. Anybody could kill anybody else. Yet we all do pretty well. And looking at why this is so has lessons for computer security. So I talk about that. It's actually a really cool book. I'm happy with it. I hated it for a while. Then I liked it. I like it again. So I think it's a really good book, and I'm happy to have written it.

Back there. Another AES question. Someone close that door so I can hear this man. Someone who's in the doorway should close the door. Hi. That's a good AES question that I ignored, because it was dealt with.
A couple of years ago, when NIST said they would choose a winner, the way they put it was that they would choose "approximately one" winner. The idea being they might choose more than one winner, for some kind of resilience rationale. At the last AES workshop in New York, there was almost uniform denouncement of that idea: they should not choose multiple winners. The benefit of a standard is that there's one of them. And if we as cryptographers can't pick one standard, how can we expect implementers to? So while NIST reserves the prerogative to choose more than one winner, I believe they will choose only one, because they were told by everybody that we want just one. Thanks. I ignored that because it was dealt with, but people who heard me last year might not have known the resolution.

Right here, over there. Don't know. I mean, certainly an algorithm that you don't need a computer to use seems useful. Not for me, but for people. Solitaire itself has problems, so I don't recommend using it. Yeah, I plan on updating it in my copious free time. Now that I have the book finished, I might look at that again. It needs a lot of work. It's a hard problem. The problem of designing an algorithm that you can use without a computer, and that is secure against computers, is an interesting academic problem. I mean, it seems to have a lot of interesting spycraft-like applications. But, you know, I don't know. So the moral here is: don't use Solitaire, and I am going to update it sooner or later.

I will take the red hat. Any developments on the encryption regulations expected? I haven't heard anything more than you've heard. I don't have any good secret knowledge. The question always to ask is: what does it mean? The devil's in the details with those things. I just wrote up what I heard in the press. I know nothing new. I'm really waiting until I hear some real data before I write it up again. Sarah Flannery is basically the question.
This is the Irish 16-year-old who invented a new way of doing public key cryptography and got a lot of press. This is a good example of the media getting it completely wrong. I mean, the story is basically true. She did invent something. It wasn't really that big a deal. She seems like a really smart person, and she's probably going to be a really good cryptographer eventually. She seemed to understand that the media blew it out of proportion. It was not a major breakthrough. It has since been broken. It was very close to something that was invented by a French cryptographer before that, which she didn't know about. So there's not a lot of news there, except that there wasn't news. But the media does that: they run a big story, and then when they get it wrong, they never actually go back and say, you know, this is what it really was. So a lot of people never knew how that resolved. The algorithm has been made public. In Crypto-Gram, I think a few months ago, there's a paper by her about the algorithm and the break. That's how you do cryptography. She did it right: the algorithm and the break. That's all part of the deal. That's back to the thick skin. If you design an algorithm that someone breaks, that's good, not bad. We all learn from that. And she's learned a lot from the experience. And the media grabbed it and blew it out of proportion.

Right. No questions for a second. I want to talk about attack models, because this is something I've been thinking about in the past few months. It's something where I think cryptography has just gotten it completely wrong. When you hear about cryptography, you hear about it in terms of the military eavesdropping model. You have Alice and Bob. Alice sends a message to Bob. Eve is the eavesdropper. So Alice encrypts the message so Eve can't read it. That's the way we've all learned about cryptography. And we've taken that model into computer security.
Because when you buy computer security, it's pretty much always sold as a prophylactic. Encryption prevents eavesdropping. Firewalls prevent network access. PKI prevents impersonation. Now, in the real world, not in the military world, you have a very different model of security. Most security is not a prophylactic. The reason we are secure is primarily because of detection and response. Look at the entire legal system: the justice system, the police detecting a crime, either in progress or after the fact, and some kind of response that includes prosecution and punishment. A burglar alarm is detection and response at work. In the real world, a much better way of doing security is detection and response, not prevention. Now, in the military, prevention is what you had. You had a very different model. You couldn't detect and respond. You worry about military powers, war, death; it's a big deal. You need very strong preventive countermeasures. And that's why the military model puts such a large emphasis on encryption and physical security. You never see alarms in warfare; that doesn't make any sense. You prevent. You prevent the enemy from doing whatever they want to do. But in business, you don't think about preventing threats. You think about managing risk. And I believe this is a conceptual shift we're starting to see right now in computer security, because real people are now looking at the net. The net was invented by geeks who thought: solve the problem technically, prevent the threat. And the military model fell into that. So I believe we're going to see a lot more in the next few years of these sorts of detection-and-response ways to deal with computer security, ways that presuppose that your preventive countermeasures will fail. This is sort of why I started the company. But this is the meta-thinking.
You have companies like Visa who say: we have a couple of billion dollars of credit card fraud a year. They could spend their time preventing the threat. But instead they say: it's only 12 basis points; we're managing the risk just fine. So you can have insecurity in business systems if you're making money. If you're doing risk management properly, you're happy to be insecure. E-Trade is a good example. They have a very bad password authentication scheme. It's very easy to hack. But they're making lots of money. They would rather have a low barrier to customer acquisition than high security, because it's more profitable for them. And that's the right way to manage risk. Insurance is another example. The reason we have fire suppression equipment in this building is not because the Alexis Park likes you all, but because their insurance company demanded it. Insurance tends to drive security in a lot of industries. It's why you have burglar alarms in buildings. It's why you have guards. You get a reduction in insurance rates if you have a certain car alarm. And as computer security becomes more of a business tool and not just a geeky thing, you're likely to see many more of these risk management tools. Because if you're a CEO, a secure computer is one you've insured. You actually don't care, as long as you can insure against the risk; then you know exactly what your exposure is. This computer initiative will cost me this much money: some in technical countermeasures, some in procedural countermeasures, some in insurance. And then you're happy. And that's the way you decide whether you deploy ATMs, or whether you build a gold mine in the Congo. That's how you make all of these decisions: based on risk management. Because risk is a good thing. In the military, risk was a bad thing. But in business, risk is why you go into business. There's a real parallel between the internet and the American West.
The American West was a very dangerous place. (I have some books to give away; I'm going to throw these at people.) In the American West you had very rough justice, justice by the fastest gun. You had some very spectacular public breakdowns of justice: Tombstone, which is still being talked about today. But people flocked there in droves because of the opportunity. The risk was worth the opportunity. And that's what you're seeing on the net. And that's the kind of mindset that says: if I can get away with being less secure than you, I'm going to make more money, and that's worth it. So what does getting away with being less secure mean? It means taking more risks. It means managing your risk better. And I see that conceptual shift happening. And it's interesting to see, because to me the genesis of the net was in the military mindset and the techie mindset. So I think that's sort of cool. Now, no one asked that question; it's a really bizarre question to ask; so I just answered it anyway. Oh wow. Should I go with the yellow legal pad, which is obviously designed to attract my attention, or the person in the hat? How about the person in just a normal white t-shirt? What's the biggest secret? What's the biggest lie? I'll go with the yellow pad. Where's the yellow pad? I don't know where the yellow pad went. What a mistake. The question was: if the priority is detection and response, where does that leave identification and punishment? I think that's part of the same thing. Detection actually has a lot of steps. Especially on the internet, detecting that you're under attack is often a big problem. You often don't know you're being attacked. Then detecting how you're being attacked is also very different from detecting who's attacking you. So depending on what you want to do with the information, there are different amounts of detection you have to do.
So if you want to punish, which is actually probably a good idea, you're going to need to detect not only that you're under attack, and in what manner, but also by whom. And the internet has a lot of trouble doing that right now. It's probably going to change. It's going to have to change, because as normal people get on the internet, they expect a more normal life experience. And getting hacked every two days is not a normal life experience. So there's going to be a premium on detection and prosecution. Which is sort of bad news for a lot of people, but good news for making the internet a better place to live. I don't know if I answered that; that's the best I can do. Oh no, over there. Yes. Weren't you the next person? I'll do you first. Who's the guinea pig I want to talk to? You. The question is about internet threats, pointing out that most threats come from the inside. This is a tough question, because there's no good information. The conventional wisdom is that the biggest threats are internal. I don't think we actually know that. The best data we have is the CSI/FBI Computer Crime Survey, and that's based on 521 surveys. We have no hard data. Certainly internal threats are nastier, because you have an attacker in a better position to do damage, with more knowledge, with more opportunity, probably with more motive. But then again, you have more attackers on the outside. So I'm not convinced that one is worse than the other, because I actually don't know. The other part of your question: a lot of the threats against computer systems don't come from computers. They come from people, from social engineering. And I believe there is a misplaced priority on solving the computer threats while ignoring the human threats. And the human threats are much worse, much subtler, much nastier, and much harder to fix.
Social engineering will always work as long as we're a magnanimous species. People want to be helpful. And those threats, I think, are much harder to deal with. So I believe there is a misplaced emphasis in the industry on technological fixes to social problems. But the internal-versus-external comparison I don't know if I buy, simply because there isn't enough data. The next question is about liability, in the sense of, for example, E-Trade. Their password security system is bad, but they get more customers. Whose liability is it if false trades occur, if money is lost? And how does that affect management, in terms of risk, if the liability is somewhere else? That makes a lot of sense. In business, what you care about is liability. The reason you don't care if your credit card number is stolen is that it's not your liability. It's different if your debit card and PIN are stolen; that's your liability. E-Trade is a good example. The person I spoke to told me: if there were a problem, we would eat the cost. So they would be accepting the liability. Certainly in any system where you as a consumer have the liability, you care a lot about the security. If you don't have the liability, you don't care. Liability takes a lot of years to sort out, and I would expect liabilities in e-commerce and computer networks to sort themselves out. That's why we're seeing the first insurance models for computer security. That's all a matter of buying and selling liability. The insurance industry knows how to do that; they've been doing it for hundreds of years. This is a new field, but it's the same liabilities. If you're Bayer and you buy insurance against someone dropping cyanide into your aspirin tablets on a supermarket shelf, you're also going to buy insurance against someone putting something on your website. It's the same sort of risk.
You can buy that insurance. But yes, it's a really good point. The question to ask about any security system is: who has the liability, and how does it move? The credit card system is a good example: a very complex system of shifting liabilities, from the merchant to the acquiring bank to Visa to the issuing bank to the customer. And at each step of the way, it is well defined who owns the liability if something goes wrong. Now, we fluff that off a lot in computer commerce, but that is going to have to be fixed. That's sort of the rush to the internet, people being sloppy. It will be fixed. You need to know who owns the liability at each point. I'm going to go way back there. Yes, you. Oh, biometrics. This is something I cover in the book. I like biometrics for some things. Biometrics are really strong because they're very difficult to forge physically. On the other hand, they're weak because they're easy to forge digitally. So if you have a biometric reader with a guard standing in front of it, and the guard watches you put your finger on the reader and the light goes green and you're allowed in, that is an excellent biometric system, because it is really hard to make your finger look like somebody else's. If the biometric system doesn't have any authentication at the entry point, it's very weak. Imagine a biometric system where I put my finger on a fingerprint reader on my computer, and it goes out to a website, and it's authenticated, and I'm allowed in. That's very easy to forge, because while it's hard for me to make my finger look like yours, it's really easy for me to capture your digital fingerprint and inject it into the stream. A physical analogy: instead of a guard looking at your face as you go through the turnstile, there's a little Polaroid camera. You take a picture, you slip it through a slot, the guard looks at the picture and says, yeah, let him through.
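That remote-fingerprint weakness can be sketched in a few lines. This is a hypothetical, deliberately naive server of my own invention (all names made up): it compares raw template bytes with no challenge-response, so anyone who has captured the bytes in transit can replay them.

```python
import hashlib

# Hypothetical naive server: stores a hash of the enrolled fingerprint
# template and accepts any request carrying bytes that hash to it.
ENROLLED = hashlib.sha256(b"alice-fingerprint-template").hexdigest()

def server_accepts(template: bytes) -> bool:
    """Accept a login if the raw template bytes match the enrolled one."""
    return hashlib.sha256(template).hexdigest() == ENROLLED

# Legitimate login: Alice's reader sends her template over the network,
# and Eve records the traffic.
captured = b"alice-fingerprint-template"
assert server_accepts(captured)

# Replay: Eve resends the captured bytes. No finger required.
assert server_accepts(captured)
assert not server_accepts(b"some-other-template")
```

A less naive design would have the server send a fresh nonce and require the reader to sign it in tamper-resistant hardware; a raw biometric alone authenticates nothing remotely.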
It's really hard to make your face look like somebody else's, but it's really easy to hand over somebody else's Polaroid picture. So biometric access control works when you can control the entry point. Biometrics are not a panacea, they are not cryptographic keys, and they will not work remotely. Another problem is that they have terrible failure modes: when you lose them, you're done. This is a big deal. Getting a new key regularly is important for security. You can never get a new biometric; it's yours for life. If there's a biometric database and it's compromised, you're done. That was a good question. Thank you. I still have these books to give away. This is actually a pretty good book. It's called Hack Proofing Your Network. Yes. The question was about SET and credit card processing on the net. I've said this for years; whoever heard me speak four years ago heard me say this. SET was completely useless. There's no reason for it. SET is the Visa/Mastercard credit card protocol for the net. We already know how to do credit card transactions without the card being present; you can buy things over the telephone. And something like SSL gives us a secure pipe, so we know how to do that on the net. So SET doesn't serve any purpose. I don't believe anything will replace SET, because SET never served a purpose: we already have card-not-present transactions, and SSL is good enough. Amazon.com is happy to take your money even if you don't use SET. Even if you don't use SSL, they're happy to take your money. As well they should be; they're in the business of taking your money. They don't want to figure out reasons why they shouldn't take your money. And I don't think Visa gets out of the loop, because Visa is way too big. You can't get players like Visa out of the loop. If digital certificates get issued, they'll get issued by Visa. Visa is not getting out of the loop.
Visa is on debit cards now. They're getting into more loops. They're on travelers checks. I think they're being investigated for antitrust. All right, I'm going to do one more question. Now all the hands go up. Oh, that's never going to work. Oh, man. This side. I was going to pick a girl. You're not a girl, but I'll pick you. The question is about the RSA patent expiring and how it's going to affect things. I believe: not at all. It's an enormous non-event. A few years ago, the Diffie-Hellman patent expired, so any protocol that cared about patent infringement moved to Diffie-Hellman. So we already have lots of free public-key cryptography out there. It's the same thing. Now we'll have free RSA as well, but I don't think it's going to make any difference at all. I see it as one big, huge non-event. All right. I have to clear out, because the cDc is coming in here. Before I go: there are book flyers there and there. Supposedly, though I haven't seen it, there's a table in the center of that room, which is where I'll be if I can get out. If I can't, I'll get there eventually. But I won't be answering questions on the way, because that'll just never work. And I believe they want you all to clear out so they can do all sorts of cool stuff. Is that correct? So you all have to leave. Thanks a lot.