Good afternoon. I'm Alan Davidson, the director of the Open Technology Institute here at New America, and I'm delighted to have you all here with us this afternoon at the launch of our new cybersecurity initiative. And I'm delighted to be here with our guest technologist keynoter. Bruce Schneier is somebody who really needs very little introduction, at least for some of the communities who are part of this conversation. He's the author of, I think, about 12 books now, plus or minus, including the forthcoming Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World, coming out next week. And he's a renowned computer security researcher. I first met Bruce in the late 90s, when I was a cub attorney in the crypto wars of the late 90s. I guess we have to say the first crypto wars now. And even then, it was very clear that Bruce had a very unusual talent, which was explaining really difficult technical concepts to a Washington audience. And since then, he has distinguished himself in this way. So we'll get to test him a little today as well.

To start off, Bruce, there's a terrific quote by Robert Cringely about technological development. He says: if the automobile had followed the same development cycle as the computer, a Rolls-Royce today would cost $100, get a million miles per gallon, and explode once a year, killing everybody inside. It feels like we're in that exploding-car part of the story now, or at least some people would say that, with this steady drumbeat of attacks on major retailers, Target, Home Depot, attacks on the devices that we all carry around with us, the celebrity hack attacks where people's most personal photos and information leak to the world, the Sony attack. More and more it feels like our ability to build great technology has outpaced our ability to keep ourselves safe when using that technology.
As somebody who's been working in this space for a while, do you feel like something has changed? Have we reached a sea change? Is there a meltdown happening? Is the car exploding?

I don't think anything has changed, and that quote might be a decade and a half old, so it shows you how long we've been paying attention to this. If anything has changed, it's the way the press has been reporting this and the number of us who are using these systems. If something the magnitude of the Home Depot hack had happened a decade and a half ago, it wouldn't have been a whole lot of credit card numbers, because it wouldn't have been the same network. I think in all aspects of our society, our ability to do something outstrips our ability to deal with the consequences. We tend to rush forward with things and fix the problems later. That's what we've done pretty much all through the history of our species, and it works better when you're dealing with systems of smaller scale. The problem is that now our systems are getting so big, so inclusive, so complicated, that this one car exploding once a year killing everybody is becoming all the cars exploding once a year killing everybody, which is much worse. So I don't see a change. I really think this is a natural progression of what's been happening. To the extent people do see a change, I think it's because it's being reported differently now; the way the popular culture is looking at it, the way the press is looking at it, has changed.

For a lot of people, they're going to say, well, that's not a very tolerable world to live in. So I guess a natural question becomes: why is this so hard? Why can't we protect people online? Why is it so difficult?

The Internet is the most complex machine mankind has ever built. That's the basic answer. Complexity is the worst enemy of security. Securing complex systems is hard. It just is.
And as scientists, we actually don't know how to build secure computer systems, let alone a computer system attached to the Internet and run by a user, which makes it infinitely worse. These are actually hard problems. And in our rush to connect everything to the net, we've kind of ignored those hard problems. And that was fine when the most important thing you did on the net was discuss Star Trek. It got a lot worse when you started doing banking, and then critical infrastructure. So this actually isn't an easy problem. Take something that Admiral Rogers, sitting in this very chair, said in answer to my question: that surely we can come up with some legal framework to make this work. What I said to him is that it's not the legal framework that's hard; it's the technical framework. These are extremely hard technical problems. I don't know how to build a secure system, so I can't make your systems secure. The best I can do is make them okay, and then try to build enough buffer around them so things fail gracefully, so you have some sort of resilience. And then we muddle along. Now, you're right: as technology gets more powerful and more intrusive, this gets increasingly intolerable. Your alternative is something like the FDA, where it takes millions of dollars and multiple years to approve a new drug. We could do the same thing for a new version of Windows. But we as a society have decided we don't want to stifle innovation that much. We don't want a world where only a very big, very powerful, very well-funded company can design software, because of the regulatory regime. So we're making these decisions.

So let's posit this world where, okay, we've chosen to have openness. We've chosen an innovation-friendly approach to this problem. What are some of the things we could do to mitigate the harms? Because I think a lot of people do look at this and say, like I said, it's not a great world to live in.
If you can't trust your cell phone, if your company can't trust its infrastructure, are there things we can do to make it better?

There are certainly success stories. Credit card fraud is a great success story. In the early years of Internet fraud, customers were on the hook for the money. The credit card companies would refuse to believe the attacks, and you'd have lots of individual liability. We've now moved to a world where, if you have a credit card, you don't really worry about Internet fraud. If it happens, it's caught automatically and you're given a new card. We don't make the fraud go away. It's still extraordinarily expensive, and in a sense we're paying for it through fees. But all the security happens in the background, and we have some resilience. So we don't make the fraud go away, but we make the aftermath of it tolerable. And you can imagine the same sort of thing being done in other areas. Now, some areas are harder than others. Sony was an example of an extraordinarily massive attack. You could easily argue they could not have defended against it, and I think that's true. But they could have responded a lot better than they did. There are other times where maybe we need more protective defense. So it's going to be a patchwork, depending on the threat.

What's your advice to the average consumer out there?

Agitate for political change. This is a hard one. I'm asked a lot, how can I protect myself on the Internet? And the problem we have is that most of us are not technically savvy enough to do a lot of that, so we rely on others. Most of us store our email on Google; we use Gmail, or somebody else. Now, I can't call up Google and say, I would like you to add these four security measures to improve my email security. They would say, no thank you, click. So we don't have the ability to secure a lot of our own stuff.
If you had a credit card number at Target and it was stolen, there was nothing you did wrong; Target did something wrong. So in this world, we are increasingly entrusting our data to third parties. My cell phone company knows I'm in this room. Why? Because my cell phone is on, even though it's on mute, and the company needs that information to deliver phone calls. That's incredibly invasive. It's mass surveillance, but that's how the cell phone system works. And we are increasingly giving this surveillance information to companies for good reasons. People like Admiral Rogers are perfectly happy to get themselves a copy, which he does under lots of different authorities and different programs. Solving that is not easy. There are technical solutions, but they tend to be around the edges. Any solution I give you that involves leaving your cell phone off and at home, you're actually not going to follow, because that's a dumb solution. It's like not having a Facebook account: you can choose not to have one, but then you're kind of a freak. So I think Admiral Rogers was right, even though I'm sure we disagree on all the details, that we need a legal framework. If we're going to protect our data, protect our privacy, we need some rules, because we're not going to protect it by not sharing our data. That's just not a viable option.

So, yes, and certainly that's where the Open Technology Institute has been coming from too. But even if you had that framework, even if we all voted with our feet and our votes and we got a framework that we felt adequately protected us, put the right kinds of rules in place for the NSA, dealt with encryption, and we could talk about those issues, you would still be in this world where, okay, there are better rules for government access, but what do we do about the bad guys still out there, or a different set of bad guys, folks who are not obeying the rule of law?
Well, this is where I think the feedback can work in our favor. If indeed we had organizations like the NSA refocused on keeping us secure and keeping our data private, I think we would get a lot of mileage from their knowledge and research.

So you would say there actually is a role for the NSA?

There definitely is, and they wear two hats, attack and defense. We normally hear about the attack hat, but there is a defense hat too, and actually, Admiral Rogers talked about it a little bit when he discussed some of the things he might do to protect critical infrastructure. We're all living in one world. The traditional NSA jobs were attack their stuff and defend our stuff. That worked really well during the Cold War, where you could attack a Russian communication system, a Russian radio, and defend a U.S. military radio. That kind of failed with the advent of the Internet, because there's no such thing anymore as our stuff and their stuff. We're all using the same stuff. We all use Microsoft Windows and TCP/IP and Cisco routers. In order to defend our stuff, you necessarily have to defend their stuff, and in order to attack their stuff, you necessarily have to leave our stuff vulnerable. So you have to choose between security and surveillance. Now, largely, we seem to have chosen surveillance. And again and again, we see secret NSA hacking tools being used against us. An example: in the first few months of the Snowden documents, one of the stories I wrote based on them was about an NSA program called Quantum. This was actually the thing, when the Guardian was negotiating with the U.S. government, that the NSA most desperately didn't want us to talk about. This is a secret program of packet injection, which is an attack technique. The NSA uses it to great effect, but it's not an NSA secret. The Chinese government uses it.
There are companies that sell the capability to third-world governments around the world. There are hacking tools that do it. We are all vulnerable to this attack. So while the NSA can use it to attack legitimate U.S. enemies, by leaving that hole open, we are also vulnerable. What I would like is for us to collectively decide that our security is more important than that surveillance technique, and then we can use the expertise inside the NSA, inside academia, and the force of law to make some of these changes work, to try to secure all of us against this technique. And others: I could go on and on with a list of techniques that were once NSA secrets and are now commonly used. Take the story from last week that the NSA has great hacking techniques that drop malware onto your hard drive, not where you think it is, in the boot sector. Very complicated, but basically they have a technique to attack computers such that even if you reinstall the operating system, your computer remains compromised. That's kind of neat. I mean, as a taxpayer, go team. But after this was released, I did some research, and there are a few papers in the academic literature on the same technique. So this secret NSA technique is actually a preview of what the criminals are going to do three years from now. And that's a way to think of all of these techniques: all right, we've got three years, let's work on a solution. The NSA has two choices here. They can keep using the technique as long as they can, hoping that we never figure out how to defend against it, or they can help us defend against it, because it's coming at us really fast.

Now, Admiral Rogers would say, and did say, that there is great value that's come out of these programs, out of these techniques.

I think he said there's value. I may have injected "great," I'm sorry.

We'll go back and look at the transcript. Okay, there's value, though. And this is the question.
There is value from lots of things that we as a society decide not to do, for all sorts of reasons. But we have to decide. We either get the value and pay the cost, and the cost is our vulnerability, or we don't get the value and we get the benefit of being secure. These are your choices. You don't get to secure ours and listen in on theirs. That's what you're not allowed to get. That's what having an interoperable Internet across the planet denies you. If we said only the U.S. gets to use the Internet, then sure, ours is different, and you could make these trade-offs. One world, one technology, one decision.

So in the meantime, we still live in this world where there are these high-profile attacks, where we do have a somewhat vulnerable set of tools that we use. You said a little while ago that part of the response has to be to recognize that and be resilient against it. How does the average consumer do that? How do we talk to the average consumer?

The average consumer doesn't. This is what I said. The average consumer is putting their pictures on Facebook, is putting their email on Gmail, and the average consumer really can't make the decision and is blindly trusting the service providers. That's the way it has to be. And that's okay; I'm not saying this is bad. My mother has a much better computing experience now that most of her stuff is in the cloud. My father still can't use a computer, but that's another story. We want to push that technical expertise onto somebody else, because we can't do it ourselves. And so we're blindly trusting. And all these companies, of course, want you to be secure against everyone except them. Google spends a lot of money now making sure your data is secure on Google's platform from everyone except Google, whose job it is to spy on you and make money off that. You understand that. And all companies are basically like this.
And this is going to be a problem.

I want to make sure we have time to turn to the audience. One other question, a kind of broader question about how we have this kind of conversation in Washington. What you're talking about is a fairly subtle conversation about trade-offs, about the kinds of different things that we might expect from our technologists, from our national security establishment. How do we get more people involved in this debate? How do you have this debate? How do we find more Bruce Schneiers? How do we clone Bruce Schneier? How do we find more people who can be translators between the technical world and the policy world?

I thought that was your job. Okay, so this is the problem, and I think it's the problem for a lot of technological issues on the Hill: subtle discussions of policy are hard to have. Admiral Rogers said we need a legal framework. I had at least 20 minutes of rebuttal to that, but that conversation really can't happen, because "we need a legal framework" is very simple, and the devil's in the details. And I've been trying for a couple of decades now (how long have you been doing this?) to have these conversations on the Hill, with more or less success. I think some of it's generational. The generation born to the Internet will understand it a lot more than the generation who had it forced on them at age 40. So there's a generation gap here. And the nice thing about generation gaps is that the younger generation always wins, because the older generation dies. You can't lose a generation gap if you're the younger generation. And I think that's good. I think the younger generation has an intuitive feel for how the Internet works, what its values are, what its risks are, in a way that the current people in power don't. So I think some of this will just shake itself out. That doesn't help us fighting the second crypto war right now.
But I tend to be short-term pessimistic and long-term optimistic.

Questions from the audience? Let's see, we've got mics over here. And how about one up here? Oh, sure, we'll go back there since you're there, and then we'll come up here.

A great presentation. I'm out at the Naval Academy Cyber Center, where we focus on making every midshipman, about 1,000 per year, at least conversant on these subjects. And I would like to push back on something you said about the issue being technology. Technology diffuses like it always does: everyone eventually gets automobiles, everyone gets aircraft, at least many countries get nuclear power. If the crypto diffuses, if the machines diffuse, then won't power come down to the human factor? Which workforce is the most savvy about cybersecurity? Which country's populace is most savvy? And then, third and perhaps most dramatic, which population and workforce can function in a degraded Internet? So I'd like your comments on that. It really is a human factor in the final analysis.

I think that's perceptive. There is a lot of human factor here. We've been saying for a couple of decades that people are the most insecure part of any security system, and I think that's still true. Sony was attacked through a phishing attack: an email sent to a person. That's how the attackers got in. There definitely is a terrain in cyberspace when you think about actual cyberwar. If we were to engage in cyberwar against North Korea, well, they have, like, 12 computers. They have an inherent defense, because they're not reliant on cyberspace at all, whereas we are very reliant. And so that's a very human factor. As for the different savviness of populations: in a lot of these attacks, your security doesn't depend on your average, it depends on your weakest link.
So, for example, with the phishing attack against Sony, it wouldn't matter how good the average person was at recognizing it as an attack and not clicking on the attachment, because it only takes one person to infect the network. So when you have a system where you're reliant on the security of the weakest, then training, getting the average better, doesn't help. With something like retail fraud against credit cards, where security depends more on the average, then yes, you're going to see a more savvy population being more resilient against that sort of attack. So I agree that human factors matter a whole lot here. But that has to be tempered with technology. I tend to think that solutions that require educating the users are doomed to fail, because I've met actual users and I'm not too optimistic here. I'd rather have solutions that work regardless of the user. And if you look at where we've gotten security right, where we've gotten safety right, something like the automobile, we rely as little as possible on the user. We try to build safety systems that work even if the user is somehow not acting in his best interest. I'd like to see more of that in cyberspace. And in that way, a lot of the cloud works that way.

Question up here?

I wanted to go into the legal framework just a little bit. I was reading a Global Network Initiative report on mutual legal assistance treaties that just came out at the end of January. And one of the points that was made over and over again is that because of this what's-ours-is-theirs-and-theirs-is-ours effect of the Internet, there are opportunities for using these MLATs as ways of getting other governments to do things the way that you want them to. And I wonder, is it naive to think that you could hold a country to international human rights standards by saying, if you do that, we will honor this MLAT, or not?

I think that actually is a savvy way of looking at things.
And you can see that in other areas. You can see it in things like child labor or bribery or money laundering. There are lots of examples of international treaties where countries are named and shamed for not adhering to an international norm, and we've made great strides in reducing whatever the bad thing is through that. So I think there is value there. How to do it, hopefully there are people in Washington who know way more about that than I do. But I do think that is a way to deal with some of these nations, especially when it comes to cybercrime. We are now living in a world where some countries are safe havens for cybercrime, and that's difficult and bad, especially because you don't have the same geography that will give you a defense. The fact that, I don't know, Sub-Saharan Africa is very far away from the United States is not a defense in cyberspace, whereas it's a great defense in the real world.

Other questions? Anything? How about one right here in the middle? Anything from the Twittersphere? We'll go to the Twittersphere next, but how about right here in the middle?

Someone from the Twittersphere posted a picture of Admiral Rogers' face when I asked him my question. Completely awesome. This is why we love the Internet.

Hi Bruce, thank you. I'm going to ask you a question as a security engineer. Helen Nissenbaum makes a philosophical differentiation between technical or information security practice, where the objective is to mitigate vulnerabilities that might harm people: if a car hits somebody, you look to see how you can improve the car, the roads, and maybe the traffic rules. And cybersecurity, she says, is driven by national security interests, in which attacks on networks are presented as urgent, imminent, and existential threats to a significant collective, namely the national population.
This is then followed by justifying bending democratic rules and stepping outside of political procedures. So the question is, what do we do with attacks? Do we see how we can improve the system so that they don't happen again? Or do we turn them into a national security problem, in which case we justify putting a bug on all the cars in the world and profiling all the drivers, hoping that that will maybe mitigate the crashes? So my question to you is: is there any security engineering left that sits outside of the national security project that you have outlined? Can we do security without necessarily seeing it as an attack on nations? And can we think about whether Americans should decide on all of the global Internet infrastructure and how it should be governed? Thank you.

I think most security happens outside that national defense framework. Think of all the security research, all the security products: they tend to work under a crime metaphor and a hacking metaphor, not a national security metaphor. Every one of you works at a company that's bought a whole bunch of stuff and has security in its network; the things you have at home, these are consumer items, these are business items, and they're defending against hacking. They're defending against cybercriminals; they're defending against, you know, Anonymous. That's the paradigm. The national security paradigm is relatively new. It's not even a decade old; it didn't even arise right after 9/11. And it is, you know, taking over, and I think that's bad. But primarily, most of the good stuff doesn't come out of the national security establishment. It's not stuff that Raytheon is building or the NSA is funding. It's companies that exhibit at the RSA Conference that are just trying to sell security, U.S. companies, foreign companies.
I think the important point you made, and a really hard one, is the first half of your question: whose job is it, what framework are we operating in. Let's take the Sony case. It's a really interesting example, because here it is, November 24th, and Sony gets massively hacked. And it could be either a couple of guys or the government of North Korea. Now pause for a moment and consider a world where you don't know whether it's a couple of guys or the government of North Korea. That's really freaky. And the legal framework of defense depends on which one it is. So whose job is it to defend Sony? Well, if it's the government of North Korea, we can have a conversation: should it be the military, should it be somebody else, who should do it? If it's a couple of guys, we can have that conversation as well. It took the administration three weeks to point a finger at North Korea. Nobody believed them at that point, but it took them three weeks, and it was extraordinary for them to do that; it was a big deal. Okay: whose job is it to defend Sony before we know who attacked Sony? The real question is, you're being attacked, you have to defend, and you've got, oh, I don't know, 10 milliseconds to figure it out. What legal and social norms operate before you know who's attacking you? In the real world, you could tell your attacker by the weaponry. If we walked outside and saw a tank, we would know the military was involved, because only militaries can afford tanks. That easy heuristic fails in cyberspace, because everyone is using the same stuff. So we have to decide what rules operate. This is actually not easy; it's a very hard question. Do we assume, and Admiral Rogers would like this, that it is a military threat until proven otherwise, or do we assume it is a criminal threat until proven otherwise?
Or do we have some new interstitial mode, where we don't know and some rules in the middle apply? We have absolutely no idea how to do this sort of policy. If you read David Sanger's great piece on the Sony attack in the New York Times, I think it was late January, he spent a lot of time talking about how hard it was for the administration, not just to figure out who did it, but to figure out how to respond, and whether to announce it. It was extraordinary. This is really hard. And the administration had two issues. One, there are different ways to attribute attacks, and attributing is hard; knowing who did it is hard. We know that the NSA had some secret evidence. They probably had taps into the North Korean government that allowed them to blame North Korea. But they couldn't reveal that evidence, because that would compromise sources and methods and ongoing intelligence operations against North Korea, which we probably all agree is a good thing for us to have. So, one: can the U.S. government convince itself who did it? Two: can it go to the attacker and say, we know you did it? Three: can it go to everyone else and say, we know they did it? Those are three different things, increasingly hard. So here we are at number three, being expected as a country to approve retaliatory action when we cannot be shown evidence that it's right. And we're at a low period of trust here. We're coming off the Iraqi weapons-of-mass-destruction thing; we're not really trusting secret evidence. This is going to happen again and again. These are extremely hard policy questions, and I don't have answers here. I'm not going to tell you what to do. We have to figure this out.

Do we want to do one really quick one? One really quick question, from the Internet? Go ahead.

Well, most of the Internet is really enjoying quoting Bruce and posting pictures. And some of them have referred to Star Wars in talking about New America, cyber, and the future of things.
But the main question that keeps coming up is what you think of the news last week about the firmware hacks, and what you think that means for broader Internet architecture.

Yes, we can take another minute for that.

Very quickly, and I wrote a piece on this, it's on my blog from last week: this is subtle. On the one hand, we're seeing very sophisticated targeted attacks against legitimate targets. And again, go team; this is the NSA I want to pay for, the one that's going to penetrate the networks of people we don't like and eavesdrop on the traffic. On the other hand, the NSA tends to have a very broad conception of whom to attack, including things like the Belgian phone company, and stealing every single SIM card encryption key that a Dutch company made, and, I don't know, Angela Merkel, and the oil company in Brazil. Random systems around the world. So maybe the problem is that this definition of whom we should attack is too broad. On the third hand, these are techniques that are going to make their way into academic papers and into criminal toolkits, not because of the Intercept article, but because research continues, and while we might be a little behind the NSA, we're not nearly as well funded, but we're not dumb, and we trail along. So, again, we have to decide now: do we want the NSA to keep using these techniques for attack, along with the better ones that have been invented since then? Or do we want the research turned to defense? These are policy decisions we have to make. I'm in favor of defense. I think that the good guys outnumber the bad guys in society by an enormous amount, that in society all technologies can be used for good or bad but the good uses are better, and that the price of freedom is the possibility of crime. We accept that because the goodness is so good.

And do you worry, even as you think there's this role for defense, that the trust deficit has become too great?
I mean, when you see major governments involved in these kinds of things, like undermining encryption standards or stealing SIM card data, will anybody trust the defense approach?

We're living in a low-trust era of our society; presumably that's going to be fixed, and it'll get better with time. But yes, that is a huge problem, and I do worry about it. That's what I asked Rogers: how do we trust you? And he talked about a legal framework, and he already has legal frameworks, and I'm sure that helps.

Well, what's your advice for him? How does he get trusted?

I think more transparency. I think we have to accept that we're living in a more transparent world, and that's the way it goes. You do it yourself, or Snowden does it to you.

On that note, please join me in thanking Bruce Schneier.