Welcome to New America, a non-profit civic enterprise dedicated to preserving foundational American values in a time of rapid technological change. I'm Kevin Bankston. I'm the policy director of the Open Technology Institute, which is the tech policy and tech development wing of New America, where we are focused on building a stronger and more open internet for a stronger and more open society. I want to thank you all for coming here today and braving the heat, or for tuning in via the webcast or C-SPAN, for today's panel event, National Insecurity Agency: How the NSA Surveillance Programs Undermine Internet Security. Since the first Snowden leaks last summer, almost all the controversy around the NSA has been focused on its programs to collect phone records under Section 215 of the Patriot Act and its monitoring of internet communications under Section 702 of the FISA Amendments Act, and on the debate over how to reform those surveillance statutes. Yet the NSA is also engaged in a wide variety of conduct that, in our view, is fundamentally threatening the basic security of the internet: secretly undermining essential encryption tools and standards; inserting backdoors into widely used computer hardware and software products; stockpiling vulnerabilities in commercial software that we use every day, rather than making sure those security flaws get fixed; building a vast network of spyware inserted into computers and routers around the world, including by impersonating popular sites like Facebook and LinkedIn; and even hacking into Google's and Yahoo's private data links. Finally, though, Congress is starting to pay attention to how the NSA is threatening not just our privacy, but cybersecurity itself. In June, last month, the House overwhelmingly voted to approve two amendments to the Defense Appropriations Bill that would defund the NSA's attempts to undermine encryption standards and to insert backdoors for surveillance into the communications technologies we rely on. Those amendments were sponsored by Representatives Zoe Lofgren and Alan Grayson, and backed by a broad bipartisan coalition. Today, after brief pre-recorded introductions from both of those lawmakers, who are today flying back from their July 4th vacations, we're going to focus our panel on these issues, which have until recently been mostly ignored by policymakers, even though they were a central focus of the recommendations from the president's own review group in December. This discussion, focusing on the costs of the NSA programs to our overall internet security, is a follow-up to our panel discussion earlier in the spring about the economic and foreign policy costs of the NSA programs overall, and previews the release of our paper later this month, Surveillance Costs: The NSA's Impact on the Economy, Information Security, and Internet Freedom. So with that, cue the representatives, please.

Hi, I'm Congresswoman Zoe Lofgren. Thank you for inviting me to be a part of today's discussion. I regret I can't be there with you now to talk about this important issue, but on June 19th this year, the House took a big step towards shutting the back door on unwarranted government surveillance. By a massive bipartisan margin, 293 to 123, the House agreed to an amendment that prohibits the government from searching Americans' communications and data without a warrant, and from requiring that device manufacturers or service providers create backdoors in their products or services for surveillance purposes.
As many of you know, and as you're discussing today, when an individual or organization builds a backdoor into their product or service to assist with electronic surveillance, they place the data security of every person and business at risk. It's simple. If a backdoor is created for law enforcement purposes, it's only a matter of time before a hacker exploits it. In fact, we've already seen it happen on more than one occasion. For example, in May of 2014, it was reported that a major security flaw was found in software used by law enforcement to intercept communications that allowed a hacker to listen in on any call recorded by the system. Fortunately, the amendment passed by the House was a worthwhile step forward and will make a meaningful difference, but our work is not done. This amendment in June was the first time that Congress had the opportunity to debate and vote on the distinct issue of the Fourth Amendment and the NSA. We need to continue pushing to protect private information and data security, and we need the Senate to follow suit. Because when the House of Representatives had the opportunity, finally, to vote on it, the result was overwhelming. The House stood up for the American people and for the Constitution. That is something we can all celebrate. We sent a strong signal that if the government wants to collect information on U.S. citizens, get a warrant. Thank you for your hard work on this important issue, and I look forward to working together with each of you to keep pushing for a safer, more secure Internet.

Thank you, Congresswoman Lofgren. And next up, Representative Alan Grayson.

Thank you for inviting me to share in this panel on the NSA, and thank you for all the good work that you do to protect privacy and security in America and throughout the world. Listen to me. If the Chinese government had proposed to put a backdoor into our computers and then paid a company $10 million to make that the standard, we would be furious. We'd be angry. We'd do something about it. But what about if it's our own government that does that? That's exactly what the NSA has become: the best hacker in the entire world. And when they put a weakness into the architecture of the software that everyone uses, what they're doing is making it a weakness not just for their benefit, but for the benefit of anybody who comes along and knows about it. And that's a crying shame. We are entitled to our privacy as human beings. Many of our economic activities cannot be done unless they can be done with some degree of security and safety. The protection that the NSA is purporting to provide to Americans is actually being undermined by the NSA itself. That has to end. That's why I'm happy that many of you joined me in passing two amendments recently, which represented the first substantive limits on the NSA's ability to insinuate itself into our software for improper purposes. One was our Science and Technology Committee amendment, which said that NIST no longer has to be a short-order cook for whatever the NSA tells it it wants to do. And the other was a parallel amendment on the floor of the House, which passed unanimously among Democrats and Republicans, for the same purpose. These are the first steps that we're taking to take back our privacy, take back our own security, take back our freedom. And I welcome your help in doing that.
It's one of the greatest endeavors of modern life to make sure that we can preserve modern life against the encroachments of Big Brother. I'm Congressman Alan Grayson. Thanks again.

Well, thank you to both representatives for taking the time to tape those messages, and also to start a much-delayed conversation about the NSA and security, a conversation that we're going to continue today. And I'd like to invite the panelists to please come on up. And if you're wondering what Representative Grayson was referring to, about $10 million being paid to somebody to undermine security, we're going to explain what he was talking about. All right. Joining me on the stage, in alphabetical order, I believe, are Joe Hall, who is the chief technologist at the Center for Democracy and Technology, and who I was lucky enough to get to work with while I was at that organization. Danielle Kehl, policy analyst here at New America's Open Technology Institute and the author of our upcoming paper on the costs of the NSA programs, including the costs to security. David Lieber, who is the privacy policy counsel for Google here in Washington, D.C. Bruce Schneier, noted security technologist and author, fellow at the Berkman Center at Harvard and here at OTI. And amongst his many books and articles, including one that you can find outside, he has done some of the original reporting based on the Snowden documents about the NSA's impact on security while working with the Guardian. And then finally, we have Amie Stepanovich, who is a senior policy counsel at Access here in D.C., working on several of these issues. We're going to break up, just to tell you where we're going with this, we're going to break up the conversation to talk about four sets of things that the NSA has been up to, along the lines of our upcoming paper and along the lines of the handout that those in the room might have picked up in the front. First, we're going to talk about the undermining of crypto standards. Second, the insertion of surveillance backdoors into products and services. Third, the NSA's stockpiling of vulnerabilities in software. And fourth and finally, the range of offensive tactics that the NSA is using, taking advantage of several of those tools we've already spoken about. After spending about an hour on those issues, we'll spend a few minutes batting cleanup, talking about any issues or policy recommendations we missed. And then we'll turn it over to you guys for some questions. So, starting with the issue of undermining encryption tools and standards. There's been some reporting about the NSA taking a variety of steps to weaken the encryption tools that we and businesses use online to keep our communications secure. Representative Grayson made reference to that, as did the president's review group in its recommendation number 29, talking about the importance of encryption to ensuring the security of our communications online and the continued health of the internet economy. So, I'm going to, I think, start with Amie to explain what the heck happened. What did the NSA do? Who or what is NIST? And why does what they do matter?

Sure. So, the NSA, many people don't know, actually has two different, very, very different missions. The first is signals intelligence. This is the mission that most people are aware of, and the mission under which they conduct all of the surveillance operations that you've been hearing about pretty much ad nauseam for the last year. However, their second, lesser-known mission is called information assurance.
And this is the mission under which the NSA is supposed to be promoting security standards and encryption protocols, pretty much making sure that all of your communications stay secure. It's under the information assurance mission that the NSA communicates with NIST, the National Institute of Standards and Technology. NIST, for those of us in DC who love acronyms, deals with many, many things. They set standards across the board, in so many different types of businesses and jobs, not only encryption. But one of the things they do is set encryption standards. And under a law called the Computer Security Act that was passed in the 1980s, they coordinate with the NSA and the NSA's technologists, in their information assurance mission, on these encryption standards. However, the Computer Security Act, which was actually very well drafted and made after a lot of collaboration between security experts in kind of the formative days of the internet, was preempted by a law that was passed in 2002, that being a really key date in surveillance laws because it was post-9/11. In 2002, the Federal Information Security Management Act came along. And it actually had language that was not as fine-tuned as the Computer Security Act, and it allowed the NSA, or really allows the NSA, if you look at it closely, to come in and to undermine the encryption standards in a way they weren't able to, or probably weren't able to, under the previous language. So under this law, NIST is required, absolutely required, to consult with the NSA on all encryption standards. The amendment that Representative Grayson actually alluded to earlier passed out of the House in the FIRST Act. This is primarily an act that funds science and technology research, and it has not made it to the Senate yet. But in that bill, an amendment was added out of committee that says that NIST is no longer required to consult with the NSA on standards. They are still able to, and this is in recognition of the fact that the NSA has a lot of funding, a lot of experts, a lot of really smart people who do this work, and they shouldn't be prevented from being able to help and to assist. But NIST is no longer required to consult with the NSA on encryption standards, which means that there's going to be a lot more accountability if those encryption standards get undermined. Later on, as part of the Defense Appropriations Act, a second amendment, again alluded to by Representative Grayson, actually is supposed to prevent any funding from being used by the NSA to undermine encryption standards. So not only, if the FIRST Act passes, will NIST no longer be required to consult with the NSA, but when they do consult with the NSA, the NSA cannot act to make us all a little bit less secure.

Perhaps Bruce can explain to us why these lawmakers are seeking to reduce the NSA's influence over what NIST is doing. Can you talk about how the reporting indicates that the NSA, in fact, undermined the standards that NIST was setting?

It's actually surprisingly complicated. The NSA does a lot of undermining of fundamental technology, in various ways: from the underlying mathematics, to intercepting Cisco equipment as it's being shipped from Cisco to the customer and inserting backdoor chips. So undermining happens all through the process. What we're looking at here is where it happens as products are being built: standards, protocols, things that affect every instance of the product. So it's encryption standards, it is implementations, it's software.
And in all of these, we have examples of the NSA going in and deliberately weakening the security of things that we use so they can eavesdrop on particular targets. We have one example of a mathematical random number generator. That was a NIST standard that was modified by the NSA to put in a backdoor. There are a lot of standards where this didn't happen. That's actually a very risky place to do it, because that's likely to be discovered. This actually was discovered in 2006; we didn't know who did it. We had some suspicions, and it wasn't until the Snowden documents that we had more of the story. More likely, you are going to see NSA backdoors in places you can't actually see. So you might imagine an operating system in your computer or your phone, an encryption product or program that you use, that is somehow modified so that it is not as good as we think it is. That'll be much harder to find, much harder to pin on whoever did it. In a lot of cases we'll find these sorts of bugs, and they look like mistakes. They could be mistakes, they could be enemy action. They could be enemy action by the US, it could be enemy action by somebody else. We don't know which programmer did what. So this very act of undermining not only undermines our security, it undermines our fundamental trust in the things we use to achieve security. And it's very toxic.

So it would seem that undermining the standards not only undermines the standard itself, but undermines trust in the process whereby we achieve these standards. I'm curious if someone could speak to the issue. I mean, so we were talking about this random number generator, a piece of code that is a part of many products used widely across the internet by civilians like us. Can someone speak to the issue of RSA and its role in this, and the $10 million that Representative Grayson mentioned?

So this gets a little complicated as well, but bear with it. So the subtitle of the panel is, it's complicated. It's complicated, yeah. So, this flaw in a random number generator. Random number generators are extremely important in encryption. In encryption, which is essentially complicated math to make things totally unreadable, you have to be able to generate very big numbers that no one else can generate; they have to be random. If you have a flaw in a random number generator, that means someone may be able to predict the key. Essentially being able to, without much work, decide, okay, here's the shape of the key to your house, and then go and cut that key and break into your home.
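To make that house-key analogy concrete, here is a minimal Python sketch of why a predictable random number generator breaks key generation. It uses the standard library's seeded generator rather than the actual compromised standard, and the seed value is purely illustrative:

```python
# Toy illustration: if an attacker knows (or can reconstruct) the
# generator's internal state, every "secret" key derived from it can
# simply be recomputed. Not the real flawed NIST generator.
import random

def generate_key(rng):
    # Derive a 128-bit "secret" key from the generator's output.
    return rng.getrandbits(128)

defender_rng = random.Random(1337)   # predictable internal state
key = generate_key(defender_rng)

attacker_rng = random.Random(1337)   # attacker replays the same state
recovered = generate_key(attacker_rng)

assert recovered == key              # no brute force required
```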
And the NSA apparently did this specifically with one particular random number generator. It was very hard at first to tell what the extent of this could be. We knew that this random number generator had been used in a lot of popular products, and not only that, but incorporated in a piece of software that other products use en masse. And one of the unfortunate things that we found out, and a lot of this stuff is, I'm really glad we know this stuff. It's very scary, but I'm better for having known it. We learned that there was a contract signed between the NSA and the company that makes this popular piece of software, RSA, which, I believe, stopped being an acronym at some point. Like KFC. Yeah, hold on a second. And the NSA had paid them $10 million to make it the default choice. And you could be gracious and say the NSA was tired of configuring all of its crazy number of millions of computers, and it just wanted it to be set up out of the box when they grabbed this piece of software. But no, it's the default set across the whole product line. So anything that incorporates this thing would use this flawed random number generator. Now, I think, as far as I can tell, I saw a report recently that very few products out there in the wild, at least the ones you can measure by testing web servers and things like that, use this. They use other sources of randomness, other random number generators. And so from the point at which we learned about this to now, people have, and this is sort of one thing, if you don't know cryptographers like Bruce, you learn really quickly that they're some of the most paranoid people in the world, computer security people being just a little less paranoid. And apparently many of them have moved en masse to change the technologies they use, away from ones that have this unfortunate flaw in them to ones that we at least don't believe have flaws in them, and that have sort of stood the test of time against a lot of people banging away at them.

Thanks, Joe. I wanted to turn to David from Google and talk about what you think this activity by the NSA means from a company or user perspective, and also what you think it means about what the government's perspective is on the use of encryption.

Thanks, Kevin. Yeah, and I think what has been truly surprising maybe is the extent of the efforts to circumvent and/or undermine encryption; the fact that those efforts were undertaken is maybe a little bit less surprising, given the NSA's mission. But I do think it's important to sort of take a step back and, from a broader context, understand what the government's current view, and particularly the intelligence community's view, is about users' use of encryption. There are minimization procedures under Section 702. And what those minimization procedures say is that, notwithstanding a requirement to destroy wholly domestic communications, encrypted communications, whether they're used by US persons or non-US persons, can be retained indefinitely at the direction, in writing, of the NSA director. And I think it just sends an unfortunate message that the use of encryption is inherently suspect, particularly in the aftermath of what we've seen with large-scale data breaches. That's not a positive moment for users or for companies. And I think it has the potential to bleed over, not just into encryption, but into security tools that we offer and that others offer. I don't know that users commonly distinguish necessarily between encryption and other security tools. And so while end-to-end encryption tools might be difficult to use and hard for ordinary users, there are other things that companies do; for example, we do two-factor authentication that's relatively easy to use and implement. And if the perception is that all these security tools are going to ultimately be undermined or are exploitable, I think that creates disincentives for users to take advantage and avail themselves of those tools. And as a result, with future cybersecurity incidents, there's the potential to exact greater harm than there would be if users were actually paying attention to these issues and being more cautious about how they interact with products and services.

Well, so I'm curious. Moving forward, what are the policy options, the prescriptions that we've seen so far, on how to deal with this issue of the NSA undermining crypto standards? Danielle, Amie?

I can start. So I think Representative Grayson talked about this to an extent.
So one of the things is this key relationship between NIST and the NSA: maintaining this statutory requirement that NIST consult with the NSA, and then the NSA being able to take advantage of that to undermine certain standards. That's very dangerous. It's dangerous because the standards themselves are used by developers and in lots of commercial products. And so it's not just what we'll talk about later, where they pick a particular product and try to insert a backdoor into it; it's actually the standards that are used in a variety of things. And it's also about our reputation as a standard-setter, which is something that the United States has been a leader on for many years, or I guess actually probably since the beginning of the internet. And so part of it is making sure that there's not a requirement in our law that allows the NSA to take advantage of NIST. And there's also, I think, the other side. I mean, NIST is a body that needs to rebuild its credibility. And they've begun to do that to an extent. So they've started reviewing their own policies and guidelines. They claim that they didn't know what was happening in 2006 when this compromised standard was issued. But they're now sort of looking through all of these things, because they're facing a trust deficit right now. They need to rebuild that so that the US can continue to be a leader in standards, and so that developers and ordinary users are going to trust what they say.

I saw Bruce had something to say.

The fundamental issue here, and we're going to see it again and again in these couple of hours, is broad versus targeted. The issue is not that the NSA is spying on whoever the bad guy is that they want to spy on. The issue is that they are deliberately weakening the security of everybody else in the world in order to make that spying easier. So when we look at solutions, the solutions are always going to be on the order of: do the targeted attack, and not the broad attack. The broad attack is what hurts everybody. And as I think Representative Lofgren said, once you build a weakened anything, you can't guarantee that you are the only person to take advantage of it. Once you do any kind of broad attack, broad surveillance, you suddenly start losing control over what you're doing. It's not the target; it's the fact that it happens broadly.

So Bruce, you also mentioned, you actually wrote about, in fact, I think we handed it out at the front desk, a particular policy solution to this issue, where you said to break up the NSA. Could you talk about what you meant by that?

Well, it's a little bit along the lines of what Amie talked about. The NSA right now has two missions that are jammed into one agency. There is the attack-them and the defend-us. And those were pretty complementary missions all through the Cold War, because you had the same basic expertise to do both. But their stuff and our stuff were different. Tapping a Soviet undersea naval cable had no effect on US communications. And you were able to keep those two missions under one roof because they were physically separate in what they did. What's changed with the internet is that everyone uses the same stuff. You can't hack the Soviet random number generator without affecting all of us. So those missions now collide. And that's where the problem is.
So my view as to how to go forward is, I think we need a much more formal breaking apart of the security mission, the infosec mission, the information assurance mission, which protects communications of the United States and of the world, protects standards, makes us all safer from all the attackers out there, from the targeted espionage mission, the surveillance mission, of going after the bad guys. Now additionally, if we get into more complication, that espionage mission is now too complicated, because it has two components as well now. During the Cold War, it used to be very simple: we would spy on enemy government communications. The rule was agents of a foreign power; we would eavesdrop on them. That changed after September 11th, and now the surveillance is against pretty much everybody, everybody in a country. We get all of the telephone calls going in and out of Bermuda. Not just government ones, everyone's. So not agents of a foreign power, everyone. We get the phone call metadata of every American. And so these measures, these broad surveillance measures, government-on-population surveillance, I think are much more a law-enforcement-like mechanism. The government-on-government espionage, the Cold War, older stuff, that's a military mission, that's great. Government-on-population surveillance is much more of a law enforcement mission, and I think it belongs in a law enforcement agency, not in a military agency. And that's broadly the way I wanna divide things up, to be more in line with what we imagine the rules and regulations governing these different activities should be.

And it's worth noting the president's review group agrees with you on several of those points. Moving on, as a transition to our next discussion about backdoors in various products and services, as opposed to backdoors in encryption, I was hoping maybe Joe could take us on a brief history lesson. Because it seems like, when it comes to backdoors into crypto, we had this debate before, in the 90s. They called it the crypto wars. The government wanted to have something called a Clipper chip in secure devices so that the government could have lawful access to the data that was encrypted. And eventually that didn't happen. Could you talk a little bit about that? Because it seems like we won the crypto wars, but then the NSA kept fighting them in secret for about 15 years.

Encryption is a wonderful thing, but for the longest time it used to be sort of entirely the purview of the US military, under the NSA. And so one of the crazy things that happened in our history is that people started to learn about it. There were independent discoveries of fundamental cryptographic methods, methods that the NSA and others working in the military had discovered a decade before. But now you have academics and other people discovering these things and realizing, oh geez, we're gonna have a computerized and networked future. We might wanna have some privacy, some confidentiality, some security associated with that. We need to have these kinds of methods outside of pure military control and in the hands of civilians. And so there was this tension going on, and what the administration at the time proposed was something called a Clipper chip. This is essentially a chip that had an encryption key on it, where that key was escrowed with the government. The idea being, the key was actually split and cut into two pieces, and there would be two parts of the US government that would have them.
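As an aside, that split-key escrow idea can be sketched in a few lines. This is a hypothetical XOR split, not the actual Clipper scheme, and it also shows why the arrangement is only as safe as the escrow itself, since the two shares together reproduce the key exactly:

```python
# Toy two-agency key escrow via XOR splitting (hypothetical; the real
# Clipper design was more elaborate). Either share alone reveals
# nothing about the key, but both together reconstruct it.
import secrets

device_key = secrets.token_bytes(16)   # the escrowed device key
share_a = secrets.token_bytes(16)      # agency A's share (random mask)
share_b = bytes(k ^ a for k, a in zip(device_key, share_a))  # agency B's share

reconstructed = bytes(a ^ b for a, b in zip(share_a, share_b))
assert reconstructed == device_key     # whoever obtains both shares gets the key
```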
And then, if you were doing something bad, or they suspected you were doing something bad, they would go, presumably, get a warrant, get probable cause if they had evidence you were doing something bad, and then be able to listen in on your encrypted communications, which would otherwise sound like complete gibberish; it would be like white noise. That's what it would sound like. They would go with this lawful order and get this key, and because they then have this escrowed key, they could then get access to that data. I believe Bruce was on this, the risks-of-key-escrow paper, I believe. So this amazing group of experts, one of whom is up here right now, wrote this extremely compelling paper that basically said, look, here are the problems, technically, with keeping copies of keys around in places where you think only the government can get access to them. And in fact the EFF, the Electronic Frontier Foundation, commissioned and built a cracker that would, I believe, or this was a DES cracker. I think I'm getting things mixed up, whatever. So the idea being, through an existence proof, we were able to argue that, look, this is not a good idea and it's not gonna work. There are other ways to get access to this stuff. And in fact, if you ever wanna check out a cool book, read Steve Levy's Crypto. It talks all about this back-and-forth war between advocates of very complicated math in the civilian sector and people that thought that that's only gonna make the world a very, very horrible place, because bad guys will be able to hide stuff from the US government, who has this duty to oversee the entire world. At least back then, if you think about that. Now, what it turns out is, we won the crypto wars, not only on the key escrow front but also on the export front. The US government would not let you export very, very strong encryption technologies for many years. And then a bunch of wily coders and deep thinkers essentially put a bunch of very secure crypto code on newsgroups. And if you don't know what a newsgroup is, you're gonna have to look it up later, because it may be before your time if you're young. They put it on newsgroups that people from around the world could get access to, and when that happened, there was no sort of notion that we could keep this within US borders. There was no assurance that that would happen anymore. So essentially that war stopped, because one side stopped fighting, and we were happy to move on to other battles in the advocacy realm. But what seems to have happened is they decided, well, we'll fight it in a way that they'll never even know. We'll do things like undermine these encryption technologies. Intercept routers on the way to their customers to put things in there, so you're not even messing with the math, you're messing with a physically soldered hardware component. And it turns out that they've been doing just massive amounts of things that they don't describe at the level of detail that I think I would want to read, at least in the publicly released documents, and I know Bruce has seen others, and who knows.

Well, I mean, it seems that in the arguments against the Clipper chip and for allowing export of strong crypto, there was an economic argument and a trust argument. The idea that if we're going to be transitioning a lot of our communications to these networks, if we actually want them to be used, if we want to have confidence in our transactions and grow that information economy, we actually need it to be secure.
That is the same argument that many have been making in response to what we're learning about the NSA's insertion of backdoors, not just into the crypto standards but into a variety of software products and a variety of services. And I was hoping maybe Danielle could introduce us to what we've been learning in the past year about those backdoors.

Sure, and I think Joe just described sort of the transition from this public attempt, to insert a backdoor into all products with the NSA holding the key, to this private effort. So what they did when they lost the public battle, as it turns out, was they turned to the companies and said, let's figure out a way to develop relationships, to leverage product design, to convince them to make it easier for the NSA to get access. And the idea being, only the NSA would have access, and I think everyone up here can explain to you why that theory is not necessarily sound for security. So what we've learned in the past year is that the NSA spends about $250 million a year on a program called the SIGINT Enabling Project, and this is one of multiple different programs that have been revealed, where they look to leverage these relationships with companies to covertly influence product design. So it's to develop relationships, and I think the words are to shape the global technology marketplace, to facilitate other types of collection. So it's this idea that they can convince companies to make it easier for them to get access to their products: inserting backdoors into commercial IT systems, into encryption, into end-user devices, into 4G technology. The goals of this project are really wide-ranging, to try to get access in as many different ways as possible. And so this is a very private and sensitive way to get the companies on their side, to let them insert backdoors into their products. So we know that they're doing that. But one of the other things we've learned is that it's not always with the knowledge of the companies that they're inserting backdoors. Joe mentioned intercepting foreign-bound network routers; we learned that they are intercepting Cisco routers to insert backdoors into them. And this is, I think, the tip of the iceberg of this idea that the NSA wants access to the commercial products it might need to monitor targets. But these are also the products that we all use every day for our communications and for various different activities online. And so it's this idea that they wanna insert a backdoor that only they will know about, so that they can access the information they need, or they can insert malware if they want to, and they can kind of do whatever they want. So that's kind of, that's one of the, up here, there we go, there we go. I think that was a sign, I should stop talking.

I mean, so it seems this is also a debate that we've had a version of before. There's a law called CALEA, the Communications Assistance for Law Enforcement Act, that required the phone companies back in the 90s to engineer their systems, and it was later applied to some broadband providers as well, to make their systems wiretappable. And there's been discussion in the past few years of expanding that to apply to other online services and products. Joe, can you talk to us about that debate, and what some of the arguments were that you and others in civil society and in the security world had against that proposal?

Sure. So in other words, why are backdoors bad for security?
Yeah, up until about June 15th of last year, or June 5th, sorry, of last year, which is when the first Snowden leak was made public, the FBI had been pushing very strongly, internally to the Obama administration, for, essentially, the argument they made was, they're going dark. The FBI's going dark. And what that means is, back in the day, all they had to do was get a warrant and use telephonic wiretapping. It used to be as easy as attaching a couple of alligator clips to a phone line and just listening in on the call. It got a lot more complicated over the years, with circuit switching and packet switching, all sorts of crazy stuff. It got to the point where they passed a law, which we call the wiretap law, not really the wiretap law, but it's the tech wiretap law: CALEA, the Communications Assistance for Law Enforcement Act, that said any provider of telephone services essentially must have a way to wiretap people. You must be able to respond to a law enforcement request to wiretap this stuff. The FBI has been saying, people don't use phones as much anymore. They're using WhatsApp; they're like, I play Clash of Clans and sit there and talk to people via that; and there's a variety of ways we communicate these days. Now, over about two years, the FBI had been arguing, we need some sort of fix, we need some way to make these things a little more bright, so not going dark, but getting a little brighter for them, so they could actually get access to this stuff. And what surfaced was essentially a proposal to wiretap all software. And essentially, no one ever actually saw the proposal except for a couple of reporters, who basically said that the FBI could come to you as a maker of a piece of software with something called, I believe, a wiretap assistance order, but you'd have to see the text of the law to understand what it was. And they'd say, look, we need to get access to this stuff, please do it. And if you said, well, geez, the product's not designed to do that, it's gonna take us a while, they'd say, okay, well, make sure that in the future, when we come to you, you can just, with a knob, turn on the wiretap capability for this stuff. So it's sort of a way of putting you on notice that you needed to build a backdoor into your products. Unfortunately for them, this got leaked to the press, and it got leaked in a very absurd way, where you saw proposals like, well, okay, when you get served with this order, you're on notice, you need to wiretap your users. If you don't do it, you'll get fined $10,000 a day, and it'll double every day, which, if you do some basic math, gets to, like, all the money in the world within three or four weeks. Totally ridiculous. I mean, yeah, that's pretty strong, but it's pretty ridiculous.
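For what it's worth, a quick back-of-the-envelope check of that doubling math, with roughly $75 trillion of gross world product as an assumed stand-in for "all the money in the world":

```python
# $10,000 on day one, doubling daily, against ~$75 trillion (assumed).
fine, day = 10_000, 1
while fine < 75e12:
    fine *= 2
    day += 1
print(day)   # 34 -- the daily fine alone passes world GDP in about a month
```

The daily fine alone, never mind the accumulated total, crosses that line in roughly a month, so the speaker's estimate is the right order of magnitude.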
CDT organized a group of experts who wrote this gorgeous paper, that I can point you to, and you can look for it, called CALEA II: Risks of Wiretap Modifications to Endpoints, or something like that. And it made a really compelling argument, and I'll shut up in a sec. The first was, this is a bad idea; putting backdoors in products is essentially fundamentally undermining the structure of the universe, if you think about it in a physical-reality sense. And what do I mean by that? Well, everything you do online involves communication, and to the extent that you want some integrity, to know it came from somebody, to know that it hasn't changed since it came from that person, you're gonna be using products that use encryption or other kinds of security features. That's not gonna work anymore, because they will have these backdoors that no one can prove can only be used by good guys. Although the random number generator may give it a run for its money, for technical reasons. And then, but the most compelling argument was, it's not gonna work. All the things that you seem to wanna wiretap these days, think of the Firefox browser or the Chrome browser, these are things where the source code is available. You can get it, you can build it yourself. If they put a backdoor in it, it's very easy to just cut that piece of code out, recompile it, and turn it into a piece of executable software without the backdoor. Moreover, if you can't do that in the US, that means all the secure product development will go to other countries, and we lose out on all that capability. These kinds of things will still be available. You can't suddenly erect a treaty that says everyone needs to be able to wiretap all software and all communications all the time. Anyway.

So I'm reminded of a particular example in the telecom context, where in the mid-2000s, like the US, Greece had systems for lawful intercept of its phone system, and they eventually discovered that some unknown adversary, rumored to be the CIA, rumored, I don't know, had actually compromised the lawful intercept systems there, and had been using them for a long period of time to actually spy on the highest echelons of the Greek government, including its prime minister. So a good object lesson in how these backdoors can actually backfire. Any other thoughts about the security implications of backdoors?

I can talk a bit. The funny thing about this is, Bruce has written an essay on almost every single thing we've talked about, and they're great. The fundamental issue, again, is: should we compromise the security of everybody in order to access the data of the few? And in order to believe that that's a good idea, you have to believe that only you can use that compromised path, that in some way, no one else can use it. The Greek example is an example where that is not the case. Representative Lofgren mentioned another example. There are lots of examples where this global compromise is used by other people than you expect, weakening security. And you also have to believe, and I don't think this is true, that the value of this path to the few outweighs the security of the many. And you have to believe that. I think that security in our communications, in our data, in our information is vitally important to all of us. There's a wide variety of threats out there: government threats, criminal threats, foreign threats, domestic threats. And security is one of the ways we protect ourselves. And what the FBI and the NSA are asking is, our mission trumps that. That we want access to that person so badly that none of your security matters, or at least it matters less. So when we talk about harms, I mean, how the NSA harms security, this is it: they harm security because they believe their need for access to the few trumps the need for security for everybody. That story from Greece involved a US product, and Greece didn't want the feature; the lawful access feature wasn't wanted.
It was just in the code. So it happened to be there. It came with the product, wasn't turned on; someone snuck in, turned it on, used it. So here's a government having its communications breached because of a backdoor it didn't even want. That's the kind of thing we have to worry about. We put a backdoor in; three years from now, criminals are using it. Well, now what? I don't think this is a difficult trade-off to make. The problem is the NSA is not equipped to make it. This has to be made in public, at higher levels. That's why I like seeing some of these bills being proposed that actually have Congress making these decisions. At least we have a chance of them recognizing that security trumps surveillance.

Well, so we do have, now, the president's review group in recommendation 29. I'm nerdy enough to have favorite recommendations in that report, and 29 and 30 are them. Urging the US government to make clear that it won't, that the NSA won't, mandate that any product change, that a vendor of a product doesn't have to change the product to undermine security and enable surveillance. The Lofgren amendment, co-sponsored by Representative Massie and, again, a pretty broad bipartisan coalition of folks, went even further than that, and would have said that the NSA cannot mandate or even request that a product vendor or service provider weaken their product to enable surveillance. That amendment was vocally supported by Google, amongst a variety of other companies, trade associations, and civil society groups, including my own. But I was curious, David, if you could talk a bit more about why Google chose to support that and what your concerns are about backdoors generally.

Yeah, so I mean, I think that particular amendment actually addressed two backdoor issues. One was with respect to requiring companies to essentially build those security vulnerabilities into their products, and the second was with respect to the backdoor search loophole. And that was, I think, an important but perhaps overlooked component of the original USA Freedom Act that was introduced by Senator Leahy and Representative Sensenbrenner. And just really quickly on the backdoor search loophole: Section 702 prohibits the intelligence community, I should say, from intentionally targeting the communications of US persons or people that are in the US. What it doesn't really speak to is what happens when the communications of US persons are incidentally collected. And we learned a little bit more about that from the Washington Post article that appeared yesterday, about just how extensive that collection is. And I think it just reinforces the importance of the amendment, because under current law, effectively, the intelligence community can turn a blind eye to the fact that there is a large cache of US persons' communications that are being collected and then are being searched without the protections that the Fourth Amendment would normally afford. And this is something, I think, that's been really core to Google's advocacy here in Washington for quite some time: that there should be an ironclad warrant-for-content requirement. That's something that the Supreme Court at the very least hinted at in some of the dicta, I guess, in the Riley opinion from a couple of weeks ago.

That's the searching-of-cell-phones case, yeah.
The cell phone, searching incident to arrest, yes. And so we thought it was important; this was a welcome and unexpected opportunity to weigh in in support of both the backdoor search loophole component, but also to prohibit the use of funds, albeit for just one year, to require companies to build in these sorts of backdoors. And it would seem, maybe a year ago, this sort of language might have seemed unnecessary, but now it's actually really important to sort of restore trust that these sorts of things are not being requested and/or required of companies. So I think it's a positive step, but I think there's obviously more to be done. Again, this is an appropriations bill. It was an amendment to a bill, and it's unclear whether it's ultimately gonna survive the entire appropriations process.

I commend to you, if you haven't read it yet, the story in the Washington Post yesterday, on Sunday. I think it's gonna get them their next Pulitzer on this topic. But any other comments or thoughts about the backdoor issue before we move on to the issue of stockpiling vulnerabilities?

I wanna say one more thing about trust. We're talking a little bit about trust and how this destroys the trust. I think it's worth talking about exactly what the trust was. It's not that we in the tech community trusted that these products were secure, that they were invulnerable, that they didn't have vulnerabilities that allowed hackers in. We know that security is hard; vulnerabilities are everywhere. What we did trust is that these security technologies, these products, these standards, these protocols would rise and fall on their own merits. That they would be what they were advertised to be. Not that there was some government hand secretly sneaking in and twiddling with the knobs. That's the failure of trust, and it's a big one. And it's something that we in the United States have to deal with as we're trying to sell our products overseas. Other countries are saying, well, why should we buy this US thing? The NSA has probably dinked with it. You're lying to me when you say that this product is secure. You have been forced to make changes, and you've been not allowed to talk about it. We know this has happened. We know this happened with Microsoft. We know that Microsoft has made some unknown changes to Skype to make it easier to eavesdrop. We don't know what they are. We don't know how they were done. We know that they happened. Now, how is that going to play in an international market? Germany recently kicked Verizon out of a large contract because they didn't trust that Verizon was behaving in its customer's interest. They didn't trust that the NSA hadn't come in and forced them to do something and then lie to Germany, their customer, about it. That's the betrayal. And it's a big one, because we as technologists like to believe that technology rises and falls on its own merits.

And this is, again, drilling back, I think Bruce alluded to it earlier, from the broad, broad targeting of everybody to the more targeted. So eliminating backdoors, and trying to make it so that the NSA can't insert them in products and services, isn't going to get rid of the targeted surveillance that they're trying to conduct. We've talked about many different ways the NSA has of conducting surveillance on legitimate targets that it's been able to show probably have foreign intelligence information.
This just eliminates their ability to spy on everybody at any given time. Which is really what we're continually trying to do: take it away from everybody as a target, to let's look at who the real targets are.

So it perhaps makes them, as another commentator has said, fish with a pole rather than with a net, as it were. Spinning off of Bruce's comment about how we don't expect our products to be perfectly secure, we just don't expect them to be intentionally insecure. Most software does have flaws in it: bugs, vulnerabilities, these things called zero days. What we learned in December, in a great exposé in Der Spiegel, which some of us are starting to wonder whether it came from a source other than Snowden, was of the NSA's massive catalog of vulnerabilities in a wide variety of widely used products, hardware and software. And basically they can pick and choose and go, oh, the target has that, here's a vulnerability for that. Bruce, can you help us out with, like, where did those come from? And what the heck is a zero day anyway, and where can I buy one?

Did you do that? All right, so let's talk about software for a second. So, I mean, software is incredibly complicated, everywhere. And we as scientists, we as a community, we as technologists do not know how to write secure code. We do our best, but all software contains bugs, contains vulnerabilities. You know, every month you get a dozen or so updates to your Microsoft operating system. Those are all fixing bugs, closing vulnerabilities. Those vulnerabilities can be used to attack systems. And remember, earlier I talked about the NSA's dual mission; Amie started with that. The NSA's dual missions: protection and attack. Vulnerabilities can be used for both. If you discover a vulnerability, you call Microsoft, say, Microsoft, you have this vulnerability, Microsoft fixes it, we are now all safer. Nobody else can use that vulnerability to attack systems. You discover that vulnerability, you call up a criminal and say, hey, look what I found, and that vulnerability is now used for attack. It's being used to break into systems, steal money, steal passwords. Now, we in the security community recognize that the way to improve security is by continually researching, finding, and fixing vulnerabilities. Now, the NSA can play either end, right? They have two missions. They can play defense, use those vulnerabilities to make things more secure, or they can play offense, keep those vulnerabilities in their back pocket and use them to attack systems. But remember, targeted versus broad: those vulnerabilities affect everybody. They're in an operating system. They're in the internet. So now we have this question. What should the NSA do? And there's been debate about this, right? Should they hoard them to attack the bad guys, use them to fire cyber weapons? We can come up with all these reasons why you might want to keep them. But by keeping those vulnerabilities secret, we are now vulnerable to them. Or should they fix them, right? If you fix them, you fix the computers of the good guys and the bad guys. If you hoard them, you can use them, and anyone can use them, to attack the computers of the bad guys and the good guys. That's the fundamental debate. And again, the question comes down to, what's more important, security or surveillance? Is it the surveillance of the few that beats the security of the many, or is it the other way around?
Well, so we've learned that the NSA has a very large catalog of these vulnerabilities that it is stockpiling and using for its own offensive or foreign intelligence purposes. One of the alternatives is simply disclosing them immediately, or something in between. Danielle, you've done some research on this in preparation of our paper. What have you seen out there in terms of the discussion of how the NSA should handle this?

So I think that this is something that comes up in the president's review group report, but it's come up many times before, and there's a really great paper about the idea of lawful hacking, by Steven Bellovin, Matt Blaze, and a couple of other folks. And what they talk about is this challenge of what's the best and most ethical way to get access to communications for lawful purposes. And so one of the big challenges is, you're always gonna find zero days, you're always gonna find some kinds of vulnerabilities. And so the tendency, when there's this tension between the offensive and defensive capabilities, might be to say, well, we might need all of these, which kind of ignores the fact that, since you're going to keep finding security holes, you're just going to continue to come up with an ever-longer and ever-growing list of these holes. And so what they talk about is what a responsible practice looks like, where, when you find a vulnerability of some kind, you disclose it immediately unless you have a very compelling and immediate need to use it. So if you are looking for something specifically at that moment, and there's a sort of high national security reason, you might be able to use that vulnerability. And then later, as soon as you've used it to get what you needed, you disclose it to the company, so that the company can patch it, so that all the ordinary users, who are also open to attack because of it, can have their software or their products patched. The other thing that they point out, which is very true, is that software patching isn't immediate. So even when you find a vulnerability and disclose it, you can actually continue to exploit it, as a law enforcement agency, for a short period of time, until you sort of run out, and then you go and look for another way to get in. And this is a very complicated issue, because of course there's something strange about the idea, at all, of exploiting vulnerabilities to get access to information. But the idea is that this is inevitable, this is gonna happen, and we need to figure out a reasonable way to deal with the problem, while still recognizing that there may be legitimate law enforcement or national security needs. The president's review group says the same thing. They say the default should be disclosure of vulnerabilities, sort of immediate disclosure, and then only for a very compelling reason, following a senior interagency review process, might the NSA be able to withhold a vulnerability so that they can use it. What it says they should not be doing is holding on to them, accumulating their own arsenal of vulnerabilities, and not letting the companies know, because that means that general cybersecurity is weakened just in case the NSA might need a vulnerability at some point for some target.
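The disclose-by-default practice described here can be summarized as a few lines of policy logic. This is only a sketch of the recommendation as recounted above, with hypothetical inputs and helper functions, not any agency's actual process:

```python
# Sketch of disclosure-by-default: use a vulnerability only under a
# compelling, reviewed need, and disclose it afterward regardless.
def handle_vulnerability(vuln, compelling_need, senior_review_approves):
    if compelling_need and senior_review_approves:
        use_for_collection(vuln)   # narrow, time-limited exploitation
    disclose_to_vendor(vuln)       # the default: report it so it gets patched

def use_for_collection(vuln):
    print(f"temporarily exploiting {vuln} against a specific target")

def disclose_to_vendor(vuln):
    print(f"reporting {vuln} so a patch can be shipped to everyone")

handle_vulnerability("example-flaw", compelling_need=False,
                     senior_review_approves=False)
```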
And so, I mean, it's this all-or-nothing approach again, where there's no recognition of the fact that it's actually bad for everyone's security that these holes are out there, that these flaws aren't being disclosed, that no one is telling the companies so that they can patch them. The companies themselves are also looking for these flaws so that they can responsibly patch them, and the government is saying, no, no, we have this information. And this came up in the debate about the Heartbleed vulnerability. The question is, did the NSA know about the vulnerability? And if they did, why didn't they disclose it? Was it because they've also been looking for ways to exploit OpenSSL for years so that they can get access to things? And so, I mean, that is a serious allegation, and it's a serious challenge. And of course, they talked about a disclosure process that they have, but they didn't really say very much about the details of it, or about what constitutes an extraordinary circumstance.

Yeah, so there was this story, which was denied, that the NSA knew about Heartbleed. It seems that that is not true. But in response, the White House said, by the way, though, we actually do have an interagency process to decide when to disclose vulnerabilities, and we've had it for years, and we are now in the midst of reviving it, or revitalizing it, in response to the review group's recommendations. That would seem to imply that they weren't fully following it before, or something. But I'm curious, what do we know, Amie, about this so-called vulnerabilities equities process?

So, you've touched on a lot of it. We know that the NSA has a stockpile of vulnerabilities. We actually know that US stockpiling of vulnerabilities is one of the main drivers of the market for vulnerabilities. It actually raises the price, because the US is willing to pay quite a bit of money for vulnerabilities in things it can exploit. So, we have this process that they came out with. They kind of went, oh my God, Heartbleed. Oh my God, people think that we knew about it. What can we do? Let us dust off this really old thing that we probably haven't been using for a really long time and say that this is going to be the process by which we figure out if we're going to reveal vulnerabilities so that they can be patched. It's a multi-level weighing process, where they're looking at our vulnerability versus their own security needs. Now, again, we come back to the NSA's dual functions, and we see over and over and over again, whenever they weigh information assurance against surveillance, the surveillance side wins. So, it's very unclear how this weighing process is going to play out. And actually, one of the reasons it's unclear is that there's no transparency built into it. So, I think one of the key things that we continually talk about throughout this is the need for greater transparency in how these things are being applied. And they haven't talked about whether any numbers are going to be made public about this process: who's going to be aware of how many vulnerabilities they turn over every year, how many they keep back, how many days on average they keep things back. These are core questions that need to be answered. Things that can be made public, numbers that can be made public, without great risk to national security, if any risk at all. And it's not built into this process, which is just inherently tilted in one direction from the very beginning, because the NSA values its surveillance mission so highly.

Bruce.
One of the things we haven't touched on is the very international nature of this. I mean, it's not the NSA or nobody. Lots of countries are looking for vulnerabilities, and the government of China is doing the same thing. There are cyber-weapons arms manufacturers. One is called Hacking Team, out of Italy, which sells software that uses vulnerabilities like these to break into systems, to governments like Ethiopia and Kazakhstan, governments you actually don't want breaking into the communications of their citizens. So, as we look at these vulnerabilities, find them, fix them, we're not just making security better for us. We're making security better for a lot of people in the world who need security to stay alive, to stay out of jail. And the international nature of this makes it very subtle. You'll hear a lot of arguments that we have to hoard vulnerabilities, because if we don't, China will, and China will win. That's a very zero-sum, arms-race argument. But it fails to recognize that every vulnerability we allow to remain is a potential chink in our armor. And as long as we are a highly connected, highly computerized, highly internet-enabled society, we are fundamentally at greater risk than the government of China is, or the government of Ethiopia, or North Korea. So defense is really much more important, not just in general, but to us specifically, because of this very international nature. Yeah, I just wanted to add, I do think it's encouraging that the administration is taking up this vulnerabilities equities process. If we're talking about our favorite recommendations, this is one of mine from the review group's. But I do think it's important to point out. Number 30, people. Number 30. There are real differences. And I think, if nothing else, we've learned about the importance of language, and of trying to understand and divine the intent and the meaning of what the intelligence community is saying, based on the written word. The review group's recommendation in this regard was to disclose unless there was an urgent and significant national security interest. And in the aftermath of the accusations that the administration had exploited Heartbleed, they said that there was a strong bias toward disclosure unless there was a clear national security or law enforcement interest. That's very different. Those are two very different standards. And so, to Amy's point, what would help to inspire confidence that there is a strong bias toward disclosure is to have more transparency, because this is easily quantifiable, in terms of the circumstances under which a vulnerability is disclosed, or whether it's stockpiled and used, or even temporarily stockpiled and used. So I think there's a lot to be done on this front. And I think it's encouraging, again, that the administration is undertaking this vulnerabilities equities process, and seems to have done so before they were accused of exploiting Heartbleed. But at the same time, I think there are a lot of questions that remain about what the standard actually means in practice. David, correct me if I'm wrong, but the review group actually says they should be used rarely. Like, that is the word that they use: rarely should they not be disclosed. And they also say immediate disclosure, very strong language. Do you know whether Google has received any disclosures under the so-called vulnerabilities equities process?
Not that I know of, but, you know, the whole concept of information sharing is a little bit more tricky these days as a result of what's happened. They would use a cut-out anyway. By the way, there's a markup tomorrow in the Senate Intelligence Committee of the new cyber information sharing bill. We should all know about that. So Bruce kind of anticipated my closing question, which was going to be: how do you counter the argument that disclosing vulnerabilities is like unilaterally disarming? But I think the answer is, you blow up a bomb, you don't have any more, you've blown up the bomb, you can't use it again. You disclose a vulnerability, you get a patch, no one can use it. So, moving on to our final category, just sort of a catch-all category: now that the NSA has weakened encryption and has these backdoors into a variety of products and has a bunch of vulnerabilities in other products, what are they doing with all of that? And it seems that what they're doing is building a very large network of compromised computers and networks across the planet that they can then use to conduct surveillance. And it seems that a big part of this is something called QUANTUM. And I didn't really understand the QUANTUM stuff. This is something that Bruce has done an extensive amount of reporting on, and I'd love for him to fill this in as well. But I didn't really understand it until Joe explained it to me. So I was hoping Joe could explain briefly what this QUANTUM business is. Awesome, because I was just explaining Bruce's article to you. So that means I've done my job well. My number one job is to be able to explain things to people in a way they can understand. And Bruce, jump in at any point. My number one job is to write things so people can explain things in a way other people understand. It's so important. Oh my gosh. So QUANTUM is this really scary thing. And it's so scary and so complicated that it's easy to be like, oh, what? Okay, I'm going to fall asleep. So if I see you fall asleep, I'm going to yell at you right now, for the fun of it. But QUANTUM appears to be this: the US government can respond quicker than any website you go visit, for example. So your browser says, hey, I want to go to cnn.com. They have stuff in the internet that can respond faster than the actual cnn.com can. That's what we call a race condition, which means the NSA is trying to beat the response from the actual thing you wanted to get access to, with their stuff. So this is where surveillance gets really strange. Surveillance, you tend to think of it as, oh, I'm watching a bunch of stuff flow by, I'll jot down some notes about what this person's saying or whatever. This is offensive. This is an active kind of surveillance. And what we mean by that is that they're actually changing stuff. They're actually changing communications to do this. And so one example is, if you happen to be using the Tor browser, and if you don't know what that is, it's a very awesome anonymity tool that you should all look into. If you're using the Tor browser and you go someplace, they have stuff in the internet. It's hard to know what the stuff is, because the documents very clearly don't describe it, maybe because that's even too sensitive to write down. But it appears that using the Tor browser is an indication, to them, that you may be a bad guy, for example. You also may be looking up contraception in a place that doesn't allow that, or something. You may be doing things in there. Which makes you a bad guy. Yeah, you're a bad guy, right?
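To make the race-condition idea concrete: on an unauthenticated protocol, the client simply believes whichever answer arrives first. Here is a toy sketch of that logic in Python, with made-up delays standing in for network distance; this is only an illustration of the concept, not anyone's actual tooling.

    import queue
    import threading
    import time

    answers = queue.Queue()

    def real_server():
        time.sleep(0.05)              # the real site is "farther away"
        answers.put("the page you asked for")

    def man_on_the_side():
        time.sleep(0.01)              # the injector sits closer and answers first
        answers.put("a forged page carrying an exploit")

    # Both responses are sent; the client renders whichever arrives first.
    threading.Thread(target=real_server).start()
    threading.Thread(target=man_on_the_side).start()

    print("client renders:", answers.get())   # prints the forged page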
They can respond so fast and poke a hole into your browser. So they basically use one of these catalogs. They've weaponized this catalog. It's not just a database of vulnerabilities that may be out there that they haven't fixed yet. They've actually operationalized it into tools that can poke holes in your stuff and establish a beachhead on your computer, and do things later, or do things right then. And so this is kind of scary, because if you think about it, if you just happen to type the wrong thing in, or happen to have the wrong book report assignment or something like that, you may, in fact, get a hole poked into your system by this vast set of infrastructure that the NSA has, using that set of vulnerabilities and a variety of very, very clever, and what must be just awesome, network techniques. So the internet's really complicated. I mean, I could spend a whole day talking about how complicated the internet is. But to be able to have this kind of global reach into what people are doing, and it's not everything, but it seems to be a substantial chunk of what people are doing on the internet, that is remarkable. And it's the kind of thing where engineers think, hey, I'm designing this thing to make your communications confidential between here and here. There may be a bad guy listening, but we're going to design it with that bad guy in mind so we thwart that bad guy. Unfortunately, the kind of bad guy we don't often think about is one that has infinite money and has global insight into everything that's happening. And that's exactly why we've had to reevaluate the way we design systems. And so, for example, I'll save that for later. Well, so let me try and sum this up. So the NSA has compromised a bunch of routers, a variety of ISPs, a whole lot of... Something! Or installed some crap. Has a lot of vantage points around the global internet, and it's watching for targets, whether it's someone using the Tor browser, or someone searching for a particular thing, or someone using a particular IP address, or who has a particular cookie. Yeah, it takes a trigger. And then it jumps out in front of that person's communications, pretends to be the site that they're looking for, and uses that opportunity to inject malware into their computer. Is that right? Yeah, it actually could work that way, yes. Which is what Joe described to me as crazy Cylon stuff, when he first described it to me. Some of the sites that are being impersonated are major US companies. LinkedIn is one that's been named, Facebook is another. I gather that they've at least attempted to spoof Google. David, how do you feel about that? That's awesome. Yeah, no, it certainly doesn't... Again, it's one of these things that just doesn't inspire confidence in the use of products and services. When people use a product like Facebook or Google or another service, they expect that it's going to be legitimate. And these sorts of reports are baffling, and they're disconcerting. And I think, because of where it came in the sequencing of revelations, these sorts of things were no longer a surprise to people, which I think shows how far we've come in terms of our understanding of surveillance programs and how they work. I'm not aware of any of these sorts of incidents, at least ones that haven't been publicized, but given that it's happened to other companies, there's certainly the possibility or even likelihood that it may be happening with our services.
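Part of why this impersonation works is that plain HTTP gives the browser no way to tell forged bytes from real ones. Against a certificate-verified TLS connection, winning the packet race is no longer enough, because the injected response cannot complete a handshake with a valid certificate for the hostname. A minimal client-side sketch in Python, using example.com as a stand-in host:

    import socket
    import ssl

    def fetch_over_verified_tls(host: str) -> bytes:
        # create_default_context() verifies the certificate chain and the
        # hostname; a man-on-the-side injector that merely answers faster
        # cannot pass this check without a valid certificate for the site.
        context = ssl.create_default_context()
        with socket.create_connection((host, 443), timeout=10) as raw:
            with context.wrap_socket(raw, server_hostname=host) as tls:
                request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
                tls.sendall(request.encode())
                chunks = []
                while True:
                    data = tls.recv(4096)
                    if not data:
                        break
                    chunks.append(data)
        return b"".join(chunks)

    print(fetch_over_verified_tls("example.com")[:80])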
Bruce, are we making sense of QUANTUM, or do you have anything to add about how you'd describe it? More or less. Again, think of it as broad versus targeted. The way we normally think of hacking is, there's some hacker sitting at a computer somewhere who is accessing some network, trying exploits, sending email, trying to break in, and then using that connection to get more access. This is something very different. This comes from broad access. The NSA has agreements with telcos and backbone providers to put equipment in the middle of the internet that watches everything go by, and when it sees something that triggers, and it could be anything, they will use QUANTUM to inject data into the stream. In this one example we're talking about, it injects data in such a way that allows the NSA to take over that computer. So it is a targeted attack made possible by this broad surveillance system. There are a good dozen different QUANTUM programs that do different things, but it's all: we're monitoring everything, looking for specific things. Now, this is something the NSA can do because they have an agreement with AT&T to put this computer between the user and Google. And it doesn't always win, but once in a while it can respond faster than Google and fool the user. Now, you'd think the NSA can do this and we can't, but actually we can. This is not a new trick. This is a hacker tool. You can download it. It's called Airpwn. Airpwn works on wireless networks, where we actually can get that privileged position, but it's the exact same thing. This is a way hackers have of taking over your computer when you're on the wireless of, I don't know, this institution, maybe. So we have a choice here, right? We can build the internet to make this attack not work. We can do it. It isn't hard. You just have to do it, and it makes us safe from hackers, from criminals, from foreign governments, from everybody who might use this, including the NSA. Or we can leave this massive vulnerability open, allowing the NSA to use this broad surveillance to attack presumably legitimate targets, while at the same time leaving us all vulnerable. So it's this kind of behavior that has led a lot of US internet industry representatives to express concern and dismay, and to be worried especially about the impact on the trust of foreign customers. You had Mark Zuckerberg personally calling President Obama to complain, and then, after a meeting with the president, complaining that they're not doing enough to reform these practices. I forget Apple's particularly strong words. Microsoft, at some point, likened the NSA to an advanced persistent threat, a security term that's usually reserved for Chinese military hackers or the Russian mafia. And then of course there was Google, where a couple of lead engineers, after learning about how the NSA was attacking Google specifically, said, and I don't think there's a delay on C-SPAN so I won't say the word itself, but basically said F the NSA, on Google+. It wasn't an official Google statement, but they were pretty ticked off. What were they ticked off about, David? What did the NSA do to you guys? Yeah, so this was the Washington Post that reported that the NSA was tapping the private links that connect our data centers, and I think we expressed outrage about it. On the sort of continuum of likely to unlikely, in terms of this happening, candidly, I think we probably thought it was less likely. We weren't right about that.
And we've been working pretty assiduously now to ensure that the traffic between our data centers is encrypted, and I can say that we're pretty much all the way there. You can never say you're 100% of the way there, but we've been working pretty aggressively, and I think the Post article noted that even before the Post reported that particular revelation, we were working to encrypt the traffic between our data centers. But that was a particularly troubling and disconcerting revelation, because there are mechanisms, including those that Congress authorized under the FISA Amendments Act of 2008, that enable the intelligence community to seek information through the front door, and to do so in ways that just weren't envisioned or countenanced by previous types of FISA surveillance. It really is a loosening of the requirements. And so to see the extent of their efforts to go and tap the links between our data centers, to obtain traffic in ways that weren't targeted, and that swept up millions, hundreds of millions, of communications, I think it just reinforced our responsibility to redouble our efforts and to do as much as we can on the security side, notwithstanding anything that Congress might do to limit the ways that the NSA can conduct surveillance. Well, so it seems that, beyond just policy responses, one of the other key responses is armoring up, trying to improve the security of your services to counter these particular threats. Amy, I know you've been working on a project in regard to that. Can you tell us more about what companies, what we should expect of companies to be doing at this point? Sure. So when I came to Access, we talked a lot about transparency reporting and how absolutely vitally important transparency reporting is. One of the reasons for that is that we now have this window into the NSA's activities, provided in large part by Edward Snowden, but it's time-limited. We only know what we know from the documents he was able to provide to us from his time there. We're not going to know what's happening next month, next year, five years from now. We need ways, into the future, to keep that window open, or at least as open as absolutely possible, so we can continue to have this dialogue and this conversation about the extent of NSA authority. But that's not enough, because transparency reporting really only provides you with numbers based on when the government goes through official judicial processes to get information, how many times they ask a court to provide them with information on their users or on their accounts. So what we are looking at is all of the different times when the government doesn't go through judicial process, and actually taps into the fiber of the internet and tries to get communications that way, and what needs to happen to make sure that all of your communications, everyone in the room and everyone watching on TV, are protected. And so we've put forth what we're calling the Data Security Action Plan, which has been signed by a lot of forward-thinking internet companies, including Twitter and DuckDuckGo. We actually have another big announcement coming tomorrow, teaser alert. It's also been signed by leading civil society groups: OTI, CDT, the Electronic Frontier Foundation, the Liberty Coalition, a broad range of groups, saying that here are seven things that companies can do, if they're going to collect information on people, on each of you, to make sure that information is properly protected.
So unauthorized users, foreign governments, the NSA, bad actors, criminals cannot get a hold of it. And it includes things like encrypting data when it goes between data centers and when it's flowing over the internet, making sure that data at rest is protected, making sure that your passwords are strong and that you're moving toward a two-factor authentication system. Really core, common-sense things, seven pieces of really common-sense practice that we're finding companies across the board aren't engaging in. And we think that if these seven things can become a floor for internet security, then you can start moving forward: here's the bare minimum of what's accepted, now become inventive and protect people's information even more robustly and try to think of new ways to protect it. If you're interested, that's at encryptallthethings.net, where we have those seven things located, and we're trying to promulgate that and keep it moving. So it seems that there are a lot of things, frankly, that you need to encrypt; if you actually want to be safe, you need to encrypt all the things. You need to encrypt between you and the website. You need to encrypt between you and your email server. You want email servers to encrypt between each other, and Google actually just recently released a transparency report showing a lot of servers that are not doing that, and I think shamed a few of them into turning that encryption on. There's also end-to-end encryption, and Google just recently put out a plugin to help enable you to use end-to-end encryption for your email on webmail. Bruce or Joe, can y'all talk a little bit more about, putting aside what the companies are going to be doing, what we as users can or should be doing to try and protect our own privacy against the NSA or anyone else? So again, we'll talk about bulk versus focused. If the NSA, if the FBI, if the Chinese military wants into your computer, your personal computer, they're probably going to get in. They're almost certainly going to get in. We as security people cannot defend against a well-funded, well-targeted, sophisticated attack against a system. We're not able to do that. Attack is easier than defense. So that's really not what we're trying to defend against here. What we're trying to defend against is bulk surveillance. Can the NSA, the Chinese, the criminals get into everybody's computer? Can they do it in bulk? Can they do it efficiently? Can they do it automatically? Can they do it on a broad scale? And there, there's a lot we can do. We talked about encryption. That will protect your data as it's flowing from one place to another. There are going to be ways to get at it; if the FBI gets a warrant, it gets a lot more complicated. But in the normal case of bulk surveillance, that doesn't happen. If your data is easy to grab, it's going to be grabbed; if it's not, it won't be. There are things you can do there. There are ways you can protect anonymity, and lots of different tools. The issue is going to be that a lot of the data that's being collected is not able to be protected in this manner. It's what's being called metadata. Metadata is really data that the system needs in order to operate. So you can encrypt your email, but the from line, the to line, the time of day cannot be encrypted. You can have a secure voice conversation, but who's talking, how long they're talking, and when they're talking cannot be encrypted.
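That metadata point is easy to see in miniature: even when the body of a message is encrypted, the routing information stays readable, because the system needs it to deliver the message. A small sketch in Python, using the third-party cryptography package's Fernet purely as a stand-in for something like PGP:

    from email.message import EmailMessage
    from cryptography.fernet import Fernet   # pip install cryptography

    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(b"the actual contents are protected")

    msg = EmailMessage()
    msg["From"] = "alice@example.org"    # visible to every server that relays it
    msg["To"] = "bob@example.net"        # visible
    msg["Subject"] = "(encrypted)"       # visible
    msg.set_content(ciphertext.decode())

    # The headers, who is talking to whom and when, stay plaintext by
    # necessity; only the body is unreadable without the key.
    print(msg)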
Your location: your cell phone is a location-tracking device. We could secure that, but then you couldn't receive phone calls. The system has to know where you are. So this data cannot be protected by actions you take, because the system needs it. So when I talk about what you can do to protect yourself, the single most important thing you can do is agitate for political change. There are a lot of tech solutions, and we're going to talk about them, but they are fundamentally around the edges. This is a political issue, and the solutions will be political. So that is the most important thing you can do. And with that, you can talk about the technology. Yeah, and I can't emphasize that enough. Laws move slowly, law and policy move very slowly, but they're a critical, critical component of fixing this in the longer term. Standards, so, the people who decide how your computers work and how things work on the internet, move just a little bit faster than laws. And so something that we're doing at CDT, and the ACLU is doing as well, is making sure that we're present in the conversations that the internet engineers are having, and saying, look, it's not just a spook thing, it's not just an industry thing. It's also something that regular people have an interest in and care about. But getting to the tech specifically, I like to think of this in terms of hygiene. You can go about your life not caring about your hygiene, not caring about how you smell, what you look like or whatever, and you're not going to have as good of a time as someone who might be more sensitive to those kinds of social norms. It's a little different on the internet, in the sense that I like to talk about digital hygiene. What things can you do to sort of keep your house in order in a digital sense? There are a variety of things, but I'll mention a few in passing. A VPN: it's three letters, it stands for something that's more complicated. But essentially, if you have one of these pieces of software and turn it on, all the traffic leaving your computer is encrypted. So if you go to a coffee shop or an airport, you'll often see free wifi that won't have a little lock next to it, like your home network probably does, or should, in my opinion. That means that even though you have to click on some terms of service or pay a little bit of money or whatever, the communications you send from your computer aren't encrypted. If you use a VPN, at least all the communications going out right there locally are encrypted out to some other point, and then it looks like you came from New York City when you're in DC, or something like that. That helps you, but it protects you only from the people who might be trying to subvert you locally, at your coffee shop or your airport. Another big one, and some of these sound like they're maybe not NSA-level protections, but they all add up to making you less smelly in your digital life, so to speak, is a password manager. So I know three passwords, and really I only know one, but I have 1,200. Some of those I haven't used in many, many years, but they're all completely randomly generated. I never have to think about them. My password manager, and there are a bunch of different types of those tools, manages all of that.
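The core of what a password manager does is easy to illustrate: generate a long, random, per-site secret and remember it so you don't have to. A minimal sketch with Python's standard secrets module; real managers add encrypted storage and syncing on top of this:

    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 24) -> str:
        # secrets draws from the operating system's CSPRNG, unlike the
        # random module, so the output is suitable for credentials.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    # One unique password per site; reuse is what makes breaches contagious.
    vault = {site: generate_password() for site in ("mail", "bank", "forum")}
    print(vault)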
The EFF, the Electronic Frontier Foundation, also makes a really handy plug-in for Firefox called HTTPS Everywhere. That means the long URL line in your browser will go from being HTTP to HTTPS. That S means secure; it means it's encrypted. The lock will turn on and you'll see it. This plug-in from the Electronic Frontier Foundation, dynamite technologists over there, makes sure that if a site offers an encrypted connection, and they know about it, your browser uses the encrypted connection instead of the unencrypted one. There are a variety of these and I could talk about more, but I'll shut up. Those are important ones. Thanks, Joe. So we've talked about a variety of technical solutions. We've talked about a variety of policy solutions. I have one more policy thought to throw out, and then I'll open it to you guys for any closing thoughts, and then we'll open it to questions. One issue we didn't talk about was a policy response to this offensive hacking by the NSA. And this is an issue that we also see in the context of law enforcement, and we're finally starting to have, above board, for the first time in many years, a conversation about what the rules of the road should be when the government wants to hack into a computer. Right now we have a computer crime law, the CFAA, that has a pretty broad carve-out for law enforcement and national security. And we're only now starting to see a few court decisions about when it is okay for law enforcement to use a vulnerability to break into your computer remotely. And we're starting to see a debate, a discussion, in the advisory committee of the US courts about what warrants should look like if you're going to use a warrant to break into a computer. But we haven't, in the context of the NSA discussion, had a debate about what the rules should be if the intelligence community wants to break into computers. And stopping short of actually making a policy recommendation there, I will simply say that that is a discussion we need to have, and it hasn't yet begun except in the law enforcement context. The ACLU, amongst others, has done some really great work on that issue. But on that, I'll leave it to you guys, if you all have any other ideas, thoughts, policy recommendations, or closing sentiments, before we open it up to questions. Thanks for coming. The fact that you came means you care a little bit about this. If you didn't understand it, come ask us. We'll explain it to you more. It's a little complicated. And he'll translate it into a television sci-fi concept. Absolutely. All right, questions. And do we have, who's working the mic? All right, great. Right there, front row. We know this guy. Yeah. Hi, so my name is Chris Soghoian. I work for the ACLU. A lot of the surveillance you guys described today relies on the assistance of companies. And the assistance that we're so scared about is when companies are forced to subvert the security of their users. The QUANTUM stuff that you described, for example, subverts our security, but probably relies on the voluntary assistance of phone companies. It's tough to imagine a court order forcing AT&T to install these malware probes everywhere in their network, particularly given that they wouldn't be installing the probes for a specific target computer. They just put them there and use them on an ongoing basis. But the subversion of security that troubles me the most is when companies subvert the security of their users voluntarily. And we've heard a lot about how companies have really beefed up security in the last year.
And Google in particular has really beefed things up. But in some places, you're still providing voluntary assistance and weakening the security of your users. And the one example I want to highlight here: if the police get a warrant and they seize a cell phone, they can go to Google, and Google will unlock that phone for law enforcement. And to Google's credit, they insist on a warrant when other companies might do it with less. But there's no law requiring you to have the capability to unlock phones or to circumvent the lock feature on the screen. And I'm wondering, a year after Snowden, if you're now thinking about whether that's a feature that should still exist, or whether you should be taking it away. I think many of your users who enable that phone lock do so with the expectation that only they will be able to remove it, and the fact that the police can get a warrant and ask you to remove it may surprise and anger some users. Yeah, I mean, my response is brief, which is, I don't serve in a compliance role for Google, I think as you know. But I actually haven't heard about that before. I'm happy to take that back to our law enforcement team and ask that question. I'd be happy to say that I really think that disk-level encryption is a key technology, and that enabling disk-level encryption on things like phones is the kind of thing that would make me very, very, very happy to see. It may just be that you haven't followed that. Absolutely. Well, it's weird with iOS, right? It's like, some things are encrypted and some things aren't. And I know that there are practical problems, that it takes a long time to do certain things. But it would be nice if you had to sort of turn that off. I'm not a product guy. I'm just a nerd. Hopefully they'll go with that choice. First of all, a lot of really cool cloak-and-dagger stuff here, so thanks for that. I'm going to go and watch Sneakers tonight when I get home. With respect to what's going on, I mean, Joe, I think you make a great point about the password managers and two-factor authentication. I think a lot of the people in the room and at home already use that type of stuff. What types of activities and steps have the companies themselves taken, post-Snowden revelations, to make our communications more secure? And I'd be remiss in not asking Joe and Kevin to also discuss perhaps ECPA reform, because after 180 days, our electronic communications protections significantly decrease as well. So I was hoping you could address those two issues. So I can just talk about some quickly. Certainly we've seen more SSL, more encryption on the web. And we've seen what's called, gosh, there's got to be a better word for this thing, I don't want to use any of the nerd words, but whatever, I'll call it ephemeral keyed cryptography. The idea being that often when you use encryption, the key's the same forever, in some cases. But a lot of web properties have been moving away from that, and Google is often at the lead of this. They're using a model of encryption where you have one key per session. So if you come back tomorrow and start up a new web browser, the key that encrypted your stuff is not the same as yesterday's. It requires a little bit more work on the side of the companies, as you may know, Carl. But it's worth it, and it's often not that much more expensive than other kinds of stuff. And I'll shut up.
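What Joe calls ephemeral keyed cryptography is usually known as forward secrecy: each session runs a fresh key exchange, so a long-term key captured later can't unlock previously recorded traffic. A sketch of the underlying idea with the third-party cryptography package; X25519 is one common choice here, and in practice TLS negotiates all of this for you:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_session_key(my_private, peer_public) -> bytes:
        shared_secret = my_private.exchange(peer_public)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"session key").derive(shared_secret)

    # Fresh key pairs for this session only; they are thrown away afterward.
    client = X25519PrivateKey.generate()
    server = X25519PrivateKey.generate()

    k1 = derive_session_key(client, server.public_key())
    k2 = derive_session_key(server, client.public_key())
    assert k1 == k2   # both sides agree on a key that never crossed the wire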
So the place to look is EFF, which has a good scorecard of the major internet companies and seven or eight different things that they should be doing to encrypt the web to protect their users, and who's doing what. So that is the place to look. It's continually updated, so you can see who's doing what. And then you can look at the history: who did it recently, who did it a long time ago. And that's a good way to get a handle on which companies are doing which things to protect the security of their users. And I would be remiss if I didn't add that a certain civil society technologist is offering personal incentives to certain types of organizations if they move to encryption by default, mainly SSL. It's a good incentive to do it. In response to the ECPA reform question, very briefly, because it is an important issue: the Electronic Communications Privacy Act is a law from 1986 that was really our first digital privacy law. But it's so broken at this point, because it was based on a lot of assumptions about how technology works, such that the emails you have that are less than 180 days old require a warrant issued by a judge based on probable cause, but emails older than that require only a subpoena signed off by a prosecutor. And in fact, under the DOJ's reading of the law, they don't even need a warrant for your email, even if it's less than 180 days old, if you have opened it, or if it is in your draft folder, or if it's in your sent folder. So the really incredible takeaway from that is, under current law, the most protected email in your email account is everything in your spam folder, because you haven't opened it yet. I love it, and everything you haven't read because you're too busy. Don't read your emails. Don't read your emails. So there's one practical tip of things you can do to protect your security: don't read your email. Mark it unread. I'm glad we're here for this. And so many of us, in a coalition effort led by CDT called Digital Due Process, a coalition of companies and organizations, have been pressing for many years to try and reform ECPA, starting with a single clear rule: if you want somebody's content, your email content or whatever, stored content with a provider, you need a warrant. And we think this follows a basic principle for the digital age: what you store in Dropbox or Gmail or whatever should receive the same protection as the files that you keep at home. But right now we're in a very frustrating place, where we have a bill in the House, sponsored by Representatives Yoder and Polis, that actually has a majority of the House sponsoring it at this point, whatever the magic number is, 218-plus, whatever. And it's still not moving. And so we're in, I mean, from my perspective as someone who's been working on these issues, on both intelligence and law enforcement, for a long time, a weird, bizarro world where it seems like NSA reform has more heat than what should be a really uncontroversial fix to the law enforcement digital privacy law. But the momentum is still building. At some point, the leadership and the committee leadership are going to have to move this bill, because the tide is unstoppable. Knock on wood. Kevin, I'll just add, I mean, I think ECPA reform truly is the lowest-hanging fruit on the surveillance tree. And there's a reason why a majority of Congress supports the bill and it enjoys broad bipartisan support from both Republicans and Democrats.
And I'll point again to the Riley decision from a couple of weeks ago, where there was a passage where the Supreme Court was saying that some users aren't sure whether the data they store on their cell phone is actually stored locally on the phone or stored remotely. And they said it really doesn't make any difference for Fourth Amendment purposes. It's a unanimous Supreme Court. So, you know, it's not a fait accompli, but the Supreme Court is sending signals that, to the extent that type of case comes before it, they will hold that there should be an ironclad warrant-for-content requirement. But what we're seeing, I think, in a different context, with the debates around the limits that might be imposed on the NSA, is that, well, maybe that warrant requirement isn't so ironclad, and maybe there are circumstances where the NSA should be allowed to search communications that they've already collected. If the data is lawfully collected, the argument goes, then there shouldn't be any restrictions on the ability to query it. I think that skips a step in the analysis, because it focuses just on what happens to data after it's been collected. And, you know, it's a significant constitutional moment at the point that data is collected. So I think that's really important. One thing, Kevin, I probably shouldn't mention in case my overseers from Mountain View are following this at all: the plugin that you referred to before, we actually released the source code for End-to-End, which hopefully is going to be a browser extension for Chrome that, if it works right, will seamlessly enable end-to-end encryption using OpenPGP. We're not quite there yet. So we're kicking the tires and encouraging other people. We have a vulnerability rewards program, so security researchers who discover any vulnerabilities or problems with the source code, we're encouraging them to report those to us. The NSA pays 10x, though. Yeah. One last thing to add, I think, is that a lot of the things that we've talked about today are things that security researchers have known or suspected for a long time. And so one of the good things in the past year is that this is actually coming out into meaningful public discourse, which creates a much greater opportunity for what Bruce highlighted, which is political change. Because, you know, it's very clear now that a lot of these laws are outdated. It's very clear that these are things that affect real users. And as we keep getting more stories like the one on Sunday, that there's a lot of collection happening that's incidental, that makes people uncomfortable and makes people want change. And they can actually talk about it now in a responsible and well-informed way. And that's very positive for moving the political process forward and for seeing reform on a wide variety of issues. And so I think, and I know Kevin has said this before, this is sort of the beginning, this year, of what will be many years' worth of fights on a lot of these issues. And it doesn't mean that it's going to be easy, or that the changes are always going to be the ones that people, especially in the advocacy community, love. But it means that a lot of these conversations are happening, and they're long overdue. Yeah, it's kind of like the tinfoil hat crowd was right, but you should put the tinfoil around your cell phone, not your head.
I mean, I don't know, I come at this from the experience of previously working at the Electronic Frontier Foundation and suing AT&T and the NSA, based on whistleblower evidence from 2006 that the NSA was sitting on AT&T's network and sucking up everything and then filtering out the things that they thought they wanted. And being looked at like we were crazy conspiracy theorists. And so it's certainly been validating to have that, you know, in all the papers of record, and finally, at this point, admitted: yes, the NSA is sitting on our domestic internet backbone. Now we actually need to do something about it. First of all, thank you all so much for this. It's been very interesting. My name is Katie McAuliffe, and I'm with Digital Liberty at Americans for Tax Reform. And I will say it is indeed the week of ECPA. There are two other events this week, one at Cato on Tuesday and one on the Hill on Thursday. But I have a question for you that I've written down, because it is indeed complicated. So what I wanted to ask is, how does the NSA target bad actors, if any kind of weakening or strengthening of security affects the entire world? So it's been said that the NSA has the ability to conduct government-to-government espionage, but it was also said that often we don't know which program is the underminer of encryption. And then how do we, slash the NSA, find foreign or criminal bad actors? Does this also mean that we don't know who is poking holes in our different browsers? So really, how does the NSA target? And when I say we, how do we find out who "we" is? Yes. So let me ask a quick clarifying question. Quick clarifying question. Do you mean how do they do it now, technically? Or do you mean how would they do it if we encrypted everything? So actually, it'd be great to answer both of those, but that would probably take forever. But I guess what I'm curious about is that you all have said that the NSA does have ways, besides getting everyone's information, to get at everything. And I was wondering what those ways are, to actually target bad actors and bad government actors. So it's the same techniques that the criminals use to target, for example. You break into a network. So the criminals wanted to get into Target, the corporation; they wanted to steal credit card numbers. They broke into the network, they did that through a partner, they used standard hacking techniques, and they grabbed the data and left. That is what the Chinese government did. A couple of months ago, we indicted five Chinese military officers in absentia for doing exactly that same thing to five U.S. corporations, stealing data for the Chinese government. This is something we believe the NSA does. You want to target North Korea, you hack into their computers, and you target them. So there are lots of targeted targeting techniques for targeting targets, that everyone uses, and we could talk about the technology of those. But that is what's done. And that's very different from targeting everyone, going after everybody. So you sort of asked, what does the NSA do? Near as we can tell, there is a series of filters. So the NSA will put a computer on the internet backbone, and this is nothing that the Chinese don't do in their own country. This is not NSA-specific; we just know a lot of NSA details here. So don't think of this as magic NSA technology. This is what any well-funded government is going to do. Russia does the same thing.
We'll do a broad collection of everything, and then very quickly, based on names, based on keywords, based on topics, cull out the stuff they don't care about. You're watching cat videos? We don't care, get rid of that. To try to focus on things that they're interested in. And in that winnowing process, I mean, there are going to be signal-to-noise problems. You're going to get things you don't care about, you're going to lose things you care about, but the hope is that you do pretty well. Now, last weekend we had a very interesting story from the Washington Post: that the end result of that entire funnel was reports given to NSA analysts. Here are communications that have passed all of these filters: they're not American, they're on bad topics, they're from bad people, whatever, here it is. And what we learned is that about 90% of that stuff is about innocents, including Americans. The filters actually don't work all that well, even with all of that filtering. I don't know if that answers the question, but that's basically the process. I think it does, because, I mean, it's just something about enhancing those filters so that we actually find targets that... I mean, if you look at our successes in law enforcement and terrorism, they don't stem from looking around, saying, aha, there's someone suspicious. They stem from following the leads, the kind of police and intelligence work you see in movies and television. We're going to go after that guy: who's he talking to, what is he doing? Those are things you don't need broad surveillance for, normal investigative procedures that start with a target and figure out what's going on. Those are the successes. And we see, from a variety of review groups that have looked at these broad surveillance programs, that there actually isn't a lot of value in looking at everything, looking for someone saying the word bomb. I just made that up, actually, but it's probably true. So I could look at everyone saying the word bomb, and if you say the word, I'm going to start watching you. That has extraordinarily low value, because random people say bomb all the time, and the people who actually blow things up don't say bomb at all. If one person in a million is a real plotter, even a filter that is right 99 percent of the time will bury every genuine hit under thousands of false alarms. These bulk systems don't work, and they're incredibly costly. The big discussion here, and we didn't really talk about the ineffectiveness, what we talked about is the cost: insecurity for the rest of us, to enable those broad surveillance programs. No one is arguing here that there isn't a valid intelligence mission, that there isn't a valid espionage mission, or that targeted surveillance with a warrant by the FBI isn't a great idea. But what we want is transparency, oversight, accountability, a presumption of innocence, and the ability to protect ourselves from all threats. Did I sum up well? I think so. I'll just add, I mean, I think that in a way, part of what we're debating, and what Bruce keeps coming back to, is that we used to live in a world of retail surveillance. You would pick a target based on some sort of suspicion, and then you would surveil that target. Now we've reversed that into wholesale surveillance, where you collect on everybody and then you decide who to target. And ultimately, that change happened without us overtly having a discussion about whether that shift in the way we investigate people made sense in terms of the trade-offs we were making. And it's the discussion we're finally starting to have now, far too late. Thank you. I'm a former member of the British Parliament.
We want to get you on the web, so be sure to speak into the mic. Forgive my country for having exploited you for 200 years, although we did abolish slavery a bloody sight quicker than you did, but I'm not going to talk about that, any more than you're going to complain about one of the members of your committee. Please start paying tax to the United Kingdom. We are in deep trouble, and we'd like you to actually pay tax towards us for all of the money you are taking out of the country. Now, having said that. That'd be you, I think, that's not me. But to a serious point: I chaired the Defense Committee. I was on the Defense Committee for 30 years. I chaired it for eight years. I was moving up the hierarchy for a long, long time. And one thing I learned: morality in politics is important, but not too important. What you have to do is protect your society. And if you are being confronted by evil bastards who are using every trick available to make life difficult for us, extorting money, putting us in danger, the idea of responding to that with an excess of morality seems to me, as we would say in the UK, bonkers, stupid beyond words. It's difficult to say that, but when I was on the Defense Committee, we knew who the enemy was. They were playing nasty, and we would have been absolutely pilloried if we did not play nasty too. So you're talking now, listening to somebody who has a perspective that's not a very nice perspective, but it is a realistic perspective. You've had your big inquiry, which some of you think hasn't been good enough. You know that your intelligence services play dirty games? Thank God they do, because if they did not play dirty, as the other side does, then the bigger problem you would have would be exploitation and the possibility of political and economic disaster. So if I do appear a little bit off-message, it's based on 30 years of experience. I headed election observation missions through the OSCE on 25 occasions: all of the stans, Russia, evil countries, not evil people, evil countries. And I knew firsthand, through almost all of my 36 years and three months in parliament, what it was to fight the dangers to our country and our alliance. I'm glad to hear that we've had a strong degree of realism. There should be a greater degree of realism. I'm not defending every nasty thing that your government has done. I'm certainly not defending your Mr. Snowden, who's buggered off to that great democracy in the world, Russia, which I still call the Soviet Union. So I don't think we need any lectures from them, or from people like that. If we have to play dirty, we don't admit it, but we have to play dirty. Because I'm absolutely certain of the consequence of playing it decently, as though you're playing football, and not that the English are very good at that. Congratulations, the US got farther than we did, which wasn't very difficult. But frankly, I am in no doubt that if you have to play dirty in world politics, you've got to do it. Oh, the question: how did you tolerate me speaking for so long, full stop? Well, I did want to allow you to finish, because, for one, it's not an uncommon perspective, but also I wanted to hear it all so I could fully comprehend exactly why we threw a revolution. But I do want to reflect on what you said about making arguments about morality.
And in fact, I think much of this discussion, and the discussion we've been having, that we had in the spring, and that's the focus of our paper, is trying to step away from a moral argument, or even a personal privacy argument or a civil liberties argument, even though, as a civil libertarian, that's the one that most motivates me. And to talk clear-headedly and clear-sightedly about all of the various costs of these programs that we're not talking about: the cost to our internet security, the cost to our economy, the cost to our foreign relations in other respects, the cost to our internet freedom agenda around the world. I think there is a whole raft of reasons to be concerned about these programs, completely separate from concerns about civil liberties or the morality of those who are engaged in them. And so, moderator's prerogative, that's my answer to that question. So that argument is fundamentally a fear argument. And I could summarize it in one sentence: terrorists will kill your children. That is the argument. We must do all of these awful things, because otherwise terrorists will kill your children. It's an argument that shuts down debate. You're right, it's an argument that wins over every other possible argument, because it's an argument that can't be argued with. Now, the problem is that that argument short-circuits any discussion of: are the things you're doing actually effective? Do they do any good? Up here, we're not making a morality argument. We're making an efficacy argument. We're making a cost argument. Because yes, there is a threat, and the bad guys don't play by the rules, that's fine. But what does that mean the defense should be? There are actually many threats in society. We have been talking about the threat of government overreach, actually a very serious threat. In the United States, you're eight times more likely to be killed by a policeman than by a terrorist. I mean, terrorism is not the only thing to worry about: automobile accidents, I can list dozens and dozens of threats. And we're trying to balance them. And we balance them by looking at costs and benefits. Up here, we have talked about the costs. If the costs of broad surveillance are greater than the benefits, we shouldn't do it. Even if the bad guys are bad guys, the bad guys aren't going to go away. The question is, what is the best way to deal with them? The argument we're making is that there are more effective ways to deal with them. Not that we're going to be moral and they're not, and they're going to win; that's dumb, that makes no sense. The question is, what is the efficacy of the various tactics? What is the variety of threats? And what are the best ways that we as a society can deal with them? And in order to get to those arguments, you actually have to dampen fear. Because once someone says, terrorists will kill your children, all that discussion goes away. No congressman will vote against something when someone says, if you don't do this, terrorists will kill your children. We saw this from the administration: there will be blood on your hands if you don't vote for this. That's never explained, that's never justified, but as soon as it's said, the fear sets in. And one of my great worries right now about reform is that if we ask Congress to oversee the NSA, we will get a more permissive NSA. Because right now, Congress is scared. Not just scared of the terrorists, scared of being blamed if something happens.
Getting beyond this fear is the single most important thing we can do to move society forward. And honestly, this might take a generation. You and I might have to die before more sensible people take over government. We simply can't be terrorized. And that's exactly what Bruce is explaining. We have to be able to stand up, in some cases against very large political pressures, in the face of low-probability events, and argue very soberly that it's not worth it. This gentleman right here has been raising his hand very high for a while. Correspondent for the Europolitics newspaper. I was just wondering, has this issue, the encryption and internet security aspect more than the surveillance aspect, appeared on the radar of other countries around the world? Like, for example, in Europe, which is considering its whole data privacy regulatory framework at the moment. And sort of a follow-on to that: it seems to me that the reason the NSA can do this so extensively is because all of the companies involved are US-based. Does this create an incentive for more, say, European companies to develop software that has encryption in it that cannot be hacked into by the NSA, because they're not subject to US law? Danielle? Yeah, so first of all, I think some of the stories have talked about not just the US intelligence agencies, but other intelligence agencies, including the British, actually doing this. But most certainly, one thing that we have learned when you look at the costs, like the economic costs to the United States: we've seen a huge rise in foreign companies in Europe and elsewhere claiming a sort of competitive advantage, claiming that they have more secure products, or that they have products that haven't been tampered with, and they're using this as a way to lure business, which is incredibly profitable. And so, beyond what we talked about today, the cost to internet security specifically, and how in an attempt to protect security we may actually be weakening our security, we're also doing it at a great economic cost. There's the cybercrime cost, and the amount of money that we're spending on all these programs in order to weaken our own security, but there's also what we're doing to American companies. And that's, from a purely US-focused perspective, a serious problem, because we are driving customers away from the United States. And that doesn't always mean that we're driving them to more secure alternatives; we're just driving them to what they believe are more secure alternatives, and that's also a key point. Just because it's not a US product doesn't mean that it's more secure. But if you believe that the US government is interfering with US products, you may be more likely to try your luck elsewhere, which is kind of one of the big challenges. So I think we have time for one more question, right there. Hi, Matt Stoller with Congressman Grayson's office. So a couple of weeks ago, a representative from BAE Systems went on CNBC and said that there was a cyber attack on a hedge fund, and their stock popped by roughly 2%, and the FBI formed a partnership with a think tank called the Center for Financial Stability, and there was a lot of discussion about cyber attacks in the financial space. Then, I think it was last week, BAE Systems said that, in fact, they had made a mistake: there was no cyber attack on a hedge fund.
It was a training exercise that they had confused for a real attack; it was essentially their own training exercise. It's complicated, we told you. Well, I mean, that probably helped their business. I don't know what happened there, but there's a lot of money in saying cyber security is this big problem, right? And if you don't know anything about technology, and I'm not a coder, I don't really know that much about it, you know, how much of the sort of fear of these cyber attacks, how much of that is just profitable for entities to push for their own security businesses? How much of it is legitimate? How well is the NSA doing in terms of defending the country from these kinds of attacks? And then, how do you measure these risks against other risks, climate change, nuclear terrorism, and so on and so forth? I don't have a framework for how to think about this. So when I'm thinking about political action, when we're thinking about policy questions, you can certainly say, let's carve out, let's have warrants. That tends to be a good idea, and has been ever since the Magna Carta. But how do you think about these new, really novel institutional threats? Okay, 30 seconds, go. Wow, it's complicated. You know, I mean, there's a lot going on. Yes, there's a lot of profit motive, a lot of profit making, a lot of fear mongering, but a lot of the threat is real. We tend to, for example, exaggerate the terrorist threat and underplay the criminal threat. So you will find distortions on both ends. I mean, cybercrime is enormously profitable. It's a very big deal. It's a very big business. Companies are not doing enough to defend themselves. But on the other hand, a lot of the other threats are over-hyped. There's an enormous security-industrial complex supplying cyber weapons to the U.S. military and being a lobbying force for some of these draconian laws, but at the same time, there's real stuff that needs to be sold to real companies. The NSA is not doing a lot to defend the country, but that's really not their mission. Their job is to defend military and government networks. They have not been tasked with defending the broader internet. That's probably a good thing. So we really can't judge them on that. There's a lot going on here. How to compare this with climate change? Your guess is as good as mine. Climate change is probably the single most catastrophic threat our species is facing, but it's 100 years out. We as people cannot do threat analyses 100 years out. We can barely do it to the next harvest. We're just not equipped as a people to do that. That's why this is complicated. There's a lot going on, a lot of moving pieces, in profit making versus real threats. Is that 30 seconds? Close enough, and it brings us full circle: it's complicated. Thank you everyone for coming today. I really appreciate it, as does our panel. Thank you. Thank you, panel. Appreciate it. Thank you.