All right, hey everybody, thank you for coming out. I'm very excited about today's talk, which is co-sponsored by the Journal of Law and Technology and the Berkman Center for Internet and Society. Before we get started, I just want to make a quick shout-out for an event we have going on tomorrow at lunch, featuring Hank Greely from Stanford Law School talking about whole genome sequencing. It should be a very interesting event as well, so hopefully some of you will make it to that. Do I have Langdell South? Langdell South? Okay, excellent. So Chris Soghoian has had a very impressive career in just a few short years. He's currently the principal technologist and a senior policy analyst with the Speech, Privacy and Technology Project at the American Civil Liberties Union. He's also a visiting fellow at Yale Law School's Information Society Project. He finished his PhD at Indiana University earlier this year, focusing on the role that third-party service providers play in facilitating law enforcement surveillance of their customers. He used some pretty impressive techniques to gather data, including making extensive use of the Freedom of Information Act, suing the Department of Justice pro se, and several other research methods. His research has appeared in the Berkeley Technology Law Journal, and it's also been cited by numerous federal courts, including the Ninth Circuit. Between 2009 and 2010, he was the first ever in-house technologist at the FTC's Division of Privacy and Identity Protection. And prior to joining the FTC, he was a fellow at the Berkman Center and co-created the Do Not Track anti-tracking privacy mechanism that's now been adopted by all of the major web browsers. So without further ado, Chris. Thank you all for coming today. I hope that it was to see the content of this presentation and not for the free food. I was a grad student until recently, so I understand the allure of food.
So before I get going, I think I need to give you a disclaimer, which is that I've been at the ACLU for about a month and a half, and the ACLU does not yet have an official position on the trade in security exploits, or in fact on the cyber war issue generally. Hopefully we will move in that direction, but for now, what you will hear today are my thoughts and my thoughts alone. I've also been asked to ask: are there any ACLU members in the audience? Woohoo, all right. If you are not a member and you are financially constrained, we have student memberships, and I highly encourage you to get on that train. All right, so that's out of the way. So this image may be familiar to some of you. This is Ahmadinejad walking through a nuclear plant in Iran. For many years, Iran has had nuclear ambitions and has been moving towards acquiring the capability to manufacture nuclear material. Western nations, particularly the United States and Israel, are not terribly excited about this. And so one of the primary responses in the last few years has been the use of cyber attacks and other methods to try and disrupt the capabilities of the Iranian government. In 2010, a computer virus known as Stuxnet first surfaced, and it was revealed that this code had infected computers in Iran, and it was pretty sophisticated. What it had done is reach the control systems that were running the centrifuges in a plant. These centrifuges are supposed to run at a set speed, and this code caused them to speed up and slow down, and speed up and slow down, until they blew up. Pretty cool stuff. Now, the code that was created was very sophisticated, and for a while no one knew who had created it. Obviously, it was created by someone with fairly significant in-house resources, but we didn't know which government was behind it, or in fact whether it was a government at all or some other sophisticated organization.
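The sabotage logic described above, oscillating a centrifuge between speed extremes to induce mechanical failure, can be sketched as a toy simulation. Everything here is invented for illustration (the speeds, the crude fatigue model, the function names); it bears no relation to the actual Stuxnet code, which targeted Siemens industrial controllers.

```python
# Toy illustration (not actual Stuxnet code): a controller that alternately
# drives a centrifuge far above and far below its rated speed. The RPM
# values and the "stress" model are hypothetical.

NORMAL_RPM = 63_000   # hypothetical rated operating speed
HIGH_RPM = 84_000     # sabotage: spin far too fast
LOW_RPM = 120         # sabotage: slow to a crawl, then repeat

def sabotage_profile(cycles):
    """Alternate between extreme high and low speeds on each cycle."""
    return [HIGH_RPM if i % 2 == 0 else LOW_RPM for i in range(cycles)]

def accumulated_stress(speeds, rated=NORMAL_RPM):
    """Crude proxy for mechanical fatigue: total deviation from rated speed."""
    return sum(abs(s - rated) for s in speeds)

# Running at the rated speed accumulates no stress; the sabotage profile
# accumulates stress every cycle until (in the real attack) the rotor fails.
assert accumulated_stress([NORMAL_RPM] * 6) == 0
assert accumulated_stress(sabotage_profile(6)) > 0
```

The point of the sketch is only that the destructive payload is a control-logic trick; the hard part, as the rest of the talk argues, was the exploit code that got Stuxnet onto those machines in the first place.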
However, in June of 2012, David Sanger at the New York Times broke the story that Stuxnet was the work of a joint effort between the Americans and the Israelis. There's a huge backstory to this, and the Americans blame the Israelis and the Israelis blame the Americans for the further spread of the code, which was supposed to run only on a few select computers. A lot of the press coverage and a lot of the focus by scholars since then has been on the code that ran on the computer, that is, the code that did the speeding up and slowing down of the centrifuges. What I want to focus your attention on today is actually the code that got Stuxnet in the door. The Iranians didn't bring this in themselves. It passed through various networks. It's not entirely clear whether it was delivered on a USB thumb drive or whether it bounced through a few computers, but once researchers analyzed the Stuxnet code, they determined that four zero day security flaws were used in Stuxnet. Now I'm going to explain exactly what a zero day is. A zero day is a security flaw that is not known to the software vendor, right? The companies, the Microsofts, the Googles, the Apples of the world, routinely and frequently find out about security flaws in their products. Maybe they're told by a researcher, maybe they discover the flaw being used in the wild, maybe their in-house security team learns about it. Once they discover the flaw, they will engage in some process to fix it: they will develop a fix, they will test it, and then they will roll it out to their customers. A zero day is the technical term for a flaw that is not known to the vendor. Well, how do these things float around?
Well, what's happened in the last few years, and I'll get into this in this presentation, is that governments are now acquiring these security flaws not to patch systems, not to make the internet more secure, but rather to deliver offensive code that may be used for warfare, for espionage, and for law enforcement surveillance. So Stuxnet used four of these: four flaws that Microsoft didn't know about, and that Microsoft only found out about once Stuxnet surfaced in the wild. This is a paper that was published a couple weeks ago by some security researchers from Symantec. They define a zero day as a cyber attack exploiting a vulnerability that has not been disclosed publicly. What they note is that there is no defense against a zero day attack. By definition, the software vendor doesn't know about it and so hasn't deployed defenses that would allow consumers to protect themselves. A zero day is like a nuclear weapon in the security community, because there's nothing you can do to patch your systems. You are, by definition, vulnerable. Now, Stuxnet isn't the only cyber weapon used by the US that has come to light. Earlier this year, a second tool known as Flame was discovered. Flame was not a destructive weapon. It did not break into a system and cause physical harm or destruction of infrastructure. Flame was a highly sophisticated virus designed to exfiltrate, to steal, information from a select group of computers. From a technical standpoint, Flame was actually even more impressive than Stuxnet because of the particular exploits that were used, but we don't need to get into those. And just to make it clear that this is not solely a US phenomenon: in January of 2010, it was revealed that Chinese state attackers had compromised the computers of Google and several other large US companies in what's now known as the Aurora attack. The Aurora attackers used a zero day security flaw in Microsoft Internet Explorer.
So again, there was nothing that Google or some of these other companies could do to protect themselves against the use of this flaw. All right, so I've presented this: these flaws are being used in state sponsored attacks. All right, so meet Charlie Miller. Charlie is probably one of the best known computer security researchers in the country. He used to work at NSA, then he went private. He worked for a company called Independent Security Evaluators for a while, and Charlie made a name for himself by constantly breaking the security of the iPhone. There's a contest in Vancouver every year called Pwn2Own, where researchers attack an up-to-date device, an iPhone or a laptop running the latest software with all the patches. If a researcher can penetrate the device and gain administrative access on a machine that should be fully patched, they get to keep the device, and I think they get about $10,000. Charlie won the contest so many times that they finally excluded him to give others a chance. And then, I think either last year or this year, Charlie was actually kicked out of the Apple Developer Program because Apple was unhappy with his repeated attacks against their systems. Charlie is, by all accounts, a stellar security researcher. A couple years ago, Charlie published a really enlightening research paper at an academic workshop in which he described the vulnerability market, the market for security exploits. Charlie revealed in this paper that a few years before, after he left the NSA, he had found a security flaw in the Linux operating system, an operating system used in many computers around the world. He found this flaw and he sold it to an unnamed government agency for $50,000. Charlie doesn't reveal in the paper who the customer was, but he was kind enough to include a copy of the cashed check as an attachment to his research paper.
Charlie said in an interview shortly after that the government official "said he was not allowed to name a price, but that I should make an offer. And when I set the price at $80,000, he said okay, and I thought, oh man, I could have gotten a lot more." So Charlie used the money to buy a kitchen, at least according to his paper. This is actually not Charlie's kitchen; this is just a kitchen I found on the internet. But Charlie did say that he used the money to outfit his house with a new kitchen. The reason I bring this up isn't because I'm particularly concerned with kitchens, but because I'm interested in the sale of these exploits to the government. The government didn't buy this security vulnerability from Charlie because it wanted to patch its own systems and then tell the Linux community. It bought it so it could use it for offensive purposes against some target. Now, Charlie didn't know who the target was. Charlie was just paid a substantial sum of money in exchange for providing this information under a non-disclosure agreement. This is the government buying a flaw without the intention of fixing it, with the intention of exploiting it. Charlie also added in a later interview, "I don't think it's fair that researchers don't have the information and contacts they need to sell their research." What Charlie was complaining about at the time was that, in his view, researchers were not receiving the full monetary benefits owed to them; they weren't receiving a fair price for their goods. The government had all the information, the researcher didn't have very much, and maybe the government was offering $50,000 when the true value was $200,000 or $300,000 or a million dollars. And in the grand scheme of things, $50,000 or even a million dollars is a very small price compared to the cost of traditional weapons systems. A rocket is a couple of million dollars; a security exploit is super cheap in comparison.
Well, back then, in the years when Charlie was selling this information, there were only a few legitimate options for a researcher who found a flaw. One option was to participate in the bug bounty programs that many software companies offered. At the time, Mozilla, who makes the Firefox web browser, gave researchers $500 if they came to the company with a flaw rather than publishing it to the world. Google's Chrome team offered between $500 and $1,337 for the same kind of information. Some of the computer geeks in the audience may realize that number means something special. But these were really, really small sums of money. I mean, to researchers who can make $10,000 a month doing penetration testing or something like that, this is beer money. This was really just to incentivize good researchers to do the right thing. If researchers didn't want to follow the bug bounty path, the other thing they could do was work with a couple of companies that ran what you might call a managed disclosure process. There were two of these, iDefense and ZDI. What you would do is give them the exploit or the vulnerability, and they would give you somewhere in the ballpark of $5,000. They would immediately tell the Microsofts or the Googles or the Apples, the company whose software had the flaw. But in the meantime, while the fix was being developed, they would tell their corporate customers, the Bank of Americas and the Merrill Lynches, so those companies could defend their networks and watch for exploitation before the patch was ready to go. And then finally, once Apple or Google or Microsoft had developed the fix, everything would be announced, the researcher would get credit, everyone was happy, and the researcher would pocket $5,000, $10,000, $20,000. Now, of course, one other option, and one that was quite popular with many researchers, was just to publish the information to the whole world and not get any money.
This was called full disclosure. You might do this if the company had a reputation for not fixing flaws promptly, or maybe you were worried that the company might threaten to sue you and try to stifle the disclosure. And so for years, the real debate amongst computer security experts was between responsible disclosure, which is telling the company, and full disclosure, which is telling the whole world. The only real way to make any money was to go down the responsible disclosure path, either directly to some of the companies or through these managed disclosure processes, but the sums of money were relatively small. So this is CanSecWest, that conference I talked about where the Pwn2Own contest takes place. This is CanSecWest in 2009. These are two of the top security researchers in the community, Alexander Sotirov and Dino Dai Zovi, holding up a sign. I'm not sure if you can read it, but it says "No More Free Bugs." These two researchers got up on stage, actually during a presentation by Charlie Miller, the iPhone hacker, and said that they were sick of giving away their labor for free. This is Dino Dai Zovi shortly after that protest. "Vendors have been getting a freebie for a while," he said. "Why would I want to sit down and volunteer to find a bug in someone's browser when it's a nice sunny day outside?" It's a good point, right? Dai Zovi is one of the top researchers in the field. He's got better things to do than give his labor away for free, particularly if there are other people willing to pay for these exploits on the market. At the time, Dai Zovi worked for a company called Endgame Systems. Endgame is a shadowy defense contractor located in Atlanta, Georgia. Any of you who have heard of the HBGary incident that happened last year, where that sketchy security company pissed off Anonymous, was hacked, and had all its emails made public, may know a little bit about this company, because Endgame's name shows up in some of the emails.
So this is actually one of the emails to Aaron Barr, who was then, I think, the president of HBGary Federal. Aaron was the gentleman who really pissed off Anonymous. This is an email from Chris Rouland, who was then the president of Endgame Systems. You can see, next to the arrow: please let HBGary know we don't ever want to see our name in a press release. Endgame wanted to keep an extremely low profile, and in fact, after the HBGary emails came out, the entire Endgame website was taken offline. If you go to Endgame's website now, all you'll see is a logo; that's it, all the information is gone. However, there was a slide deck included in one of the emails sent from Endgame to HBGary, and this is one of the slides from that deck. You can see that they offer subscriptions to their clients, many of them government clients, and what I want you to see is that in the Maui tier, the first-level tier, they offer original vulnerability research, right? They also offer exploit toolkit development. Why are people buying this? Because this company is delivering weaponized security exploits to the government. Now, Endgame is not alone in this. There's actually a fairly large and established market of defense contractors supplying these services. But what was interesting is that while Dino Dai Zovi was on stage complaining about No More Free Bugs, his employer, because he was the chief research scientist at Endgame at the time, was offering exploits at high dollar values up the defense contractor chain and directly to US government agencies. So really, No More Free Bugs wasn't a complaint about the piddling sums of money being offered in the bounty programs, the $500 and the $1,000. It was, I think, a war cry. It was a warning that basically said: look, these things are worth $50,000 or $100,000, and we are sick of leaving value on the table.
Either the software companies step up and offer the same prices that our government clients will offer, or we're going to start selling more and more to the government. Now, since then, some companies have responded. In fact, Google paid out $60,000 just a couple of weeks ago for a single exploit. But at the end of the day, the companies are never going to be able to win a bidding war against the military, because as we all know, military budgets are huge. All right, so that was 2009. Let's fast forward to 2012 and see what's happened since. Well, in February of this year, I gave a talk at a researchers' summit in which I described the sale of these things in fairly clear terms and said that the people engaged in this business were cowboys, that they were fueling a market that was in essence a ticking bomb. This was the first time there had really been a public talk about these sales, calling out some of the specifics. After that, Forbes ran an exposé on the industry and was able to interview two people, or two companies, engaged in this business. This was really the first time that some of the price lists detailing the sums of money came to light. And what you'll see as I go on is that the people who are involved in this, at least the ones who are foolish enough to talk to the press, are not the kinds of folks you want to bring home to meet your parents. So this is a gentleman who calls himself the Grugq. I don't know if you can see all the details here, but you'll see a computer; on the left, you'll see a martini; and on the right, on the floor, maybe you can see a bag. In it are stacks of Thai baht, Thai currency. This is a photo that this gentleman gave to Forbes: a cocktail and a bag full of cash. The Grugq is a middleman, a buyer and seller of computer exploits. He doesn't buy them and sit on them; he acts as a broker, introducing those who have the flaws to the clients who want to buy them.
He said he takes a 15% commission on sales and that he refuses to deal with anything below mid five figures, because it's just not worth his time. Now, the Grugq is an independent agent. This, by contrast, is a group of gentlemen who work for a company called VUPEN. VUPEN does not act as a broker; they do all of their exploit development in-house. Rather than buying things from individual researchers, they have a team of researchers who find their own flaws and then sell them to government clients on a subscription basis. This is the CEO of VUPEN speaking to Forbes in March of this year: "We wouldn't share this information with Google for even $1 million. We don't want to give them any knowledge that can help them in fixing this exploit or other similar exploits. We want to keep this for our customers." And so when Dai Zovi and his colleague held up their sign and said no more free bugs, they were saying they weren't getting enough money. But that's not what VUPEN is saying. VUPEN is saying even a million dollars is not enough. They feel their customers have a right to flaws that Google will never learn about, and they will do everything possible to keep Google from learning about them, so that the flaws never get fixed. The CEO added: "We don't work as hard as we do to help multi-billion dollar software companies make their code secure. If we wanted to volunteer, we'd help the homeless." These guys are making things really clear. They're not in the business of making the internet secure. They're in the business of delivering weaponized security flaws to government clients. On September 1st of this year, The Washington Post published a story on this industry, interviewing me and a couple of other folks. This was really the first time the existence of this market had been described in a mainstream US publication. Forbes was big, but The Washington Post is much bigger.
Prior to this, this was really the stuff that had been surfacing only in the computer security community. A couple of weeks after The Washington Post published their story, VUPEN updated their website and published a set of guidelines for the governments they will and won't sell to. Their guidelines are fairly permissive: they will sell to any government that they are not prohibited from selling to by law. So if US law or European law prohibits them from selling to a government, they will not sell to it, but anything else is open season. They sell to partners of NATO, which doesn't mean just NATO members but NATO members plus all their partners, as well as to some Asian governments and to the Australia-New Zealand alliance. Some of the NATO partners on their approved list include Azerbaijan, Turkmenistan, Egypt, Morocco, Qatar, and Pakistan. The Asian members include Indonesia, Burma, and Vietnam. These are not nice governments in many cases. At the same time as this is going on, so we have growing awareness about the sale of exploits, but the exploit is only the thing that gets you in the door. Once you hack into someone's computer, what do you do if you're an intelligence agency and you want to get information? Well, you need additional code. You need code that will steal the information for you. So this is a gentleman named Martin Muench. He has kept a fairly low profile. He's one of the leading German security researchers, and for years he led the development of something called BackTrack, an open source Linux distribution designed for penetration testing and other security research. His tools are widely used by legitimate security researchers; his stuff is a name-brand product for many in the security community. But many folks don't realize that Martin has branched out since that fairly helpful development work for the security community.
He went on to a company called Gamma, a British and German company that makes a product called FinFisher. FinFisher is a highly sophisticated spyware suite sold to government agencies around the world. This is a screenshot from the FinFisher sales video, available initially only to their clients but leaked by WikiLeaks last year. You can see that the surveillance target is having a Skype conversation with his partner. The video then pans to a shot of a law enforcement official in a control room, where they're able to monitor the video, audio, and other information from that Skype conversation remotely. FinFisher's software is designed to let law enforcement agencies easily and remotely spy on the computers and conversations of targets. Makes sense. The New York Times published a story about Gamma and the FinFisher software in August, after the software was found to be used by the Bahraini government, the Qatari government, and a few others. Gamma told the New York Times that they sell FinSpy, their mobile software, to governments only to monitor criminals, and that it's frequently used against pedophiles, terrorists, organized crime, kidnapping, and human trafficking. That sounds good, right? I think all of us are probably against crimes against children. We think organized crime is a bad thing, and that those engaged in it should be investigated following appropriate legal process. But there's a problem, and the problem is this. On the list of priorities for the Turkmenistan government, child pornography is not particularly high. For many of these governments, child pornography and corruption are not particularly high priorities, usually because the rulers themselves are corrupt. But pretty high on the list are investigations of journalists and of dissidents, right?
If you're in the ruling party in, say, Turkmenistan, journalists are a far bigger problem than child sexual predators or those exchanging child porn online. And these tools work just as well against journalists and dissidents and human rights activists and those kinds of folks. And so what we end up with is the sale of this software to governments that engage in fairly oppressive activities, governments that do things like this, or like this, or like this. This is Egypt, by the way. Many of you may recognize this photo of a woman being beaten by the Egyptian authorities. So Gamma, the British company that makes FinFisher, actually did supply their software to the Egyptian authorities. This is a copy of the invoice sent by Gamma to the Egyptian authorities for a suite of surveillance products in June of 2010. What's interesting is that the Egyptians didn't actually buy anything from Gamma. At the same time as Gamma was offering these products for sale, they gave the Egyptians a demo version of the software, and using that demo version, the Egyptians were able to spy on the conversations of activists and dissidents in Egypt. Why buy when you can get it for free? But clearly Gamma has enabled some pretty nasty regimes. Now, Gamma denied ever selling anything to the Egyptian government, but the software and the sales demonstration information were found in the offices of the state security forces in Egypt. After the government fell, activists went in and were able to liberate documents from there. In August of this year, researchers were able to pinpoint the locations where Gamma's FinFisher software was in use. It turns out that the command and control server, the server that infected computers call back to in order to report the conversations being spied on, had a fairly unique fingerprint.
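That fingerprint test can be sketched in a few lines. The marker string below, "Hallo Steffi", is the odd greeting researchers reported seeing in responses from suspected FinFisher command and control servers; treat everything else here (the function names, the minimal probe request) as a hypothetical illustration of banner fingerprinting, not an operational scanner. And of course, probing machines you don't own may be illegal.

```python
# Sketch of banner fingerprinting: connect to a host, grab its raw
# response, and check for a telltale marker string. Illustrative only.
import socket

FINGERPRINT = b"Hallo Steffi"  # distinctive substring reported in C&C replies

def matches_fingerprint(response: bytes) -> bool:
    """Return True if a server's raw response contains the telltale marker."""
    return FINGERPRINT in response

def probe(host: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Send a minimal HTTP request to host:port and test the reply.
    Any network error is treated as "no match"."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"GET / HTTP/1.0\r\n\r\n")
            return matches_fingerprint(sock.recv(4096))
    except OSError:
        return False
```

Repeating a cheap check like this across many addresses is what let researchers map where the software was deployed, which is exactly what the survey described next did.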
And so researchers could hit essentially random computers on the internet and test for the existence of this fingerprint. Using this fairly basic technique, researchers were able to identify the presence of Gamma's software in 10 countries, including places like Ethiopia, Qatar, Mongolia, and Turkmenistan. Interestingly, the United States was on the list, but the US computer running the command and control server for Gamma's software wasn't a computer at a company, a university, or the FBI. It was actually a server running in Amazon's EC2 cloud. So someone was renting a server from Amazon using a credit card. We don't know who this was; Amazon has not revealed whether the person got kicked off or whether they have any information about this. They haven't said whether they've referred this information to the authorities, although presumably if this is a law enforcement agency there wouldn't be much of an investigation. But Gamma's software is being used, or at least has shown up, on computers run by some fairly nasty governments. In October of this year, just a couple of weeks ago, the issue of the vulnerabilities and the issue of the spyware came together when Bloomberg broke a story. Up until then, in all of the cases we knew about where this FinFisher software was found on the computers of activists, it had been delivered through what we call social engineering. This is when someone receives an email that says, click here for a photo of an activist being beaten, and they click on it and it infects their computer. They might see a photo, but there's usually some malicious payload. According to Bloomberg, there was an instance of an activist in Dubai who was infected with FinFisher software, and that installation was delivered through a security exploit, a zero day in Adobe Flash. So here we have the first confirmed instance of the use of this software coupled with a security exploit.
Now, this was an exploit that was being sold by VUPEN to its partners, to its subscribers. After Bloomberg broke their story, VUPEN denied providing it to the government of Dubai, of the UAE, and said: well, look, we may have had this flaw, but maybe other companies had it too; we have no monopoly over this information. Since these stories first started breaking, some governments have started to take an interest in this issue. The Europeans have really been on the leading edge here. The European Union first banned the export of surveillance software to Syria, and then followed it up pretty soon after with a ban on the export of surveillance software to Iran. Shortly after, towards the end of this summer, British privacy groups applied strong pressure to the British government, because Gamma is based out of the UK, and the British government has now said that it is regulating the export of Gamma's FinFisher spyware and that it requires a license. It's not clear who is getting a license or what they're doing about previous sales. And then Germany's foreign minister spoke out at the beginning of September and said that he was uncomfortable with the sale of this technology. All right, so I think it's fair to say that this industry has a bit of an image problem, but that's not surprising, right? The kinds of people who are engaged in traditional arms sales are generally not very nice people, and unsurprisingly, the people engaged in what I would describe as digital arms sales are not particularly nice people either. They're interested in a sale and don't really care how their products are being used. So this is the Grugq, and he is right now, for better or worse, sort of the spokesperson for the industry. He has defended his practices and what he does, and said he doesn't really care about the consequences. This is one of his colleagues, Ben Nagy. They both work for the same security company. I'll read you this quote because I think it's great.
So he's describing why he does what he does: "I do it for the money, because I like it, and because most of the time I don't need to wear pants. I spend approximately no seconds of any day worrying about the imaginary ethical implications of every little thing I do, and I'm not particularly unique." This is Martin Muench, the head of Gamma's FinFisher division, quote: "Given that a can of fizzy drink or a car battery can be abused and used as an implement of torture, it is of no surprise to anyone if our products can be abused too," right? As Gamma describes it, their products are like a screwdriver or a knife or some other generic product you might find in your house, with good uses and bad uses. In their view, they are not on the hook for the uses of their technology when it is targeted against activists or journalists or other parties who we may think should not be subject to government surveillance. All right, so how does this come into play with the law? Given that I'm at a law school, I added a bit of legal material at the end of this talk. This is Harold Koh, who was until a few years ago the dean of Yale Law School and is now the top lawyer at the State Department. This is from his first ever speech on cyber war, which he gave in September at Fort Meade, which is where the National Security Agency is based. I'm going to read you this: "The U.S. government undertakes at least two stages of legal review of the use of weapons in the context of armed conflict: first, an evaluation of new weapons to determine whether their use would be per se prohibited by the law of war; and second, specific operations employing weapons are always reviewed to ensure that each particular operation is also compliant with the law of war." And why does this matter? Well, there's now this thing called Cyber Command, which is based out of Fort Meade and is a joint organization.
All the Army, Navy, Marines and Air Force have people stationed there. It's run by the director of NSA, and this is the headquarters for all of the military's cyber operations, and the military of course is engaged in cyber operations just as the intelligence agencies are themselves. And so it's interesting to look at what Harold Koh has said, which suggests that the activities the government is engaging in are getting some kind of legal review, but also that the individual weapons themselves are getting legal review. And so when I read this I thought, okay, this is a good thing. This means that, in theory, someone signed off on the use of four zero days in Stuxnet. So the individual services, the Army, the Navy, the Air Force, have their own rules for the acquisition and use of weapons, and none of them but the Air Force has really addressed cyber yet. But the Air Force last year published an updated document that addresses how they consider the acquisition of cyber weapons and what kind of legal review is required. They define weapons as devices that are designed to kill, injure, disable or temporarily incapacitate people, or destroy, damage or temporarily incapacitate property or material. All right, so the Air Force definition, while somewhat broad, still defines weapons based on things that cause harm. Their document also defines cyber capabilities as any device or software payload intended to disrupt, deny, degrade, negate, impair or destroy adversarial computer systems, data, activities or capabilities. So Stuxnet would clearly be a weapon or a cyber capability from the perspective of the U.S. Air Force, because the centrifuges were destroyed.
If someone were to engage in a denial of service attack that brought down a bank's computer, that would be a weapon or a cyber capability in the Air Force's view because of the disruption or the denial of service. But their definition specifically does not include a device or software that is solely intended to provide access to an adversarial computer system for data exploitation. So the tool that gets you in the door is not considered a weapon, and the tool that you run on the computer once you're in is considered a weapon. So I'd like a show of hands. Everyone who knows how to shut down a computer, raise your hand. All right, put your hands down. Everyone who knows how to hack into a computer, raise your hands. All right, a couple of hands. I think what I'm trying to convey here is that the disruption and denial part isn't that difficult; it's getting in that's hard. Now I'm not gonna say that the average person can write code that will speed up and slow down centrifuges. That's clearly some high tech stuff. But by the definitions they have of cyber capabilities and cyber weapons, they're focusing on the part that's pretty easy, and the part that's difficult, the part where there are scarce skills in the market, where the exploits are very difficult to develop and they're selling for high prices, those things do not fall under the existing definitions of cyber capabilities and cyber weapons. So to the military, flaws are not weapons. Which means that exploit sales are not currently regulated as weapons. There are existing frameworks for regulating the international sale of arms; Wassenaar is probably the most well-known of these. This is an agreement between many countries, but none of the signatories of Wassenaar currently includes exploits under the agreement. Now there are discussions, primarily being led by the British, to expand Wassenaar to include some surveillance technologies. But my understanding is that the exploits themselves are not being considered as part of this updated framework.
So this is a gentleman who runs a company called Netragard; they're based in Boston. Netragard is one of the more open players in the security exploit business. They're just down the road from here, and they openly admit that they buy and sell exploits, but they say they're the good guys because they only sell to the US government, unlike VUPEN, who will apparently sell to anyone with a check. So the CEO of this company posted to a mailing list in August and said that he thinks the market should be regulated, and he acknowledged that they're selling bullets and that the computers are the guns. This is the first time we have a vendor of these security exploits really publicly acknowledging that what they're engaged in relates to war and calling for regulation. Now I don't know if this is because he thinks it should be regulated or if he thinks it will just disadvantage his competitors, whose market will be shrunk. I don't really know what's going on there, but these guys do see themselves as a legitimate player. I've spoken with him by email a few times, and he tells me that every year he gets a legal review from outside counsel of his practices. I've asked him to publish the memo so that there can be independent analysis by interested parties. I think we all know that when memos are written in the dark, sometimes the standard isn't very good. I think the torture memo written by John Yoo is probably a good example of that. What I want to conclude with, before I take questions and have a conversation with you all, is this. This entire industry, while it's been in existence for probably a decade or more, hasn't really received much sunlight. Regulators don't really understand that it exists. Policy makers don't understand that it exists, even while there is a lot of discussion in DC about cyber war and cyber capabilities. That discussion is in many cases a fighting match between DHS and NSA over who will have authority.
It's about who will get budget, and it's about which defense contractors will get a lot of money to deploy technologies that will protect systems. Many policy makers in DC seem to have this idea that NSA has this magic pixie dust that they sprinkle on computers and then that gets them in. And there's not really any discussion or awareness or understanding of how the NSA is getting into computers. And the part that concerns me the most about this is that an organization like NSA always has two sides: it has the offense and the defense. NSA is supposed to hack into the computers of opposing nations and engage in intelligence collection, but they're also supposed to defend the computers of the US government. And when you have a single entity engaging in offense and defense, there are gonna be conflicts, right? And what's happening here is that the offense folks are winning. And the reason I say they're winning is that for the exploits and the vulnerabilities that NSA and the other government agencies are stockpiling to be effective, we all have to be vulnerable. Microsoft doesn't issue patches only to US companies or only to US computers. When Microsoft learns of a flaw, they release a patch and give it to the whole world. And if the whole world gets the patch, then the flaw that NSA or CIA or the Army, Navy, Marines or FBI paid for isn't gonna be very useful. We cannot rely on the Chinese or the Iranians not updating their security software. And so for those vulnerabilities to be effective, the rest of us need to be kept in a state of vulnerability. And I think that is a very dangerous thing, and I don't think that policy makers really understand the fact that our military and intelligence community are essentially exposing us to flaws that they could fix, or that they could tell Microsoft about so Microsoft could fix. So I'm gonna stop and take questions now. And then, do you wanna do this, or, whatever you want, okay.
And then there's a mic, and if you raise your hand I think it'll come to you. Yeah, so this is being recorded, just so everybody knows, so please speak into the mic. Also, if you have to get to class or something, feel free to go. What's your best guess as to where this is going? I mean, many of the people in a position to actually regulate this may not even know how to send an email, in terms of the disconnect. I mean, I think it's a really valid point that policy makers in DC do not have a firm grasp of technology. If they did, we wouldn't have seen things like SOPA. I think one of the things you need to understand is that right now there's a large fear about China and cyber war in DC, and one of the reasons for that is that many defense officials, former generals, have quit, and they go and work for defense contractors, and they get sent to the Hill to go and lobby members of Congress. And you'll have a 60 year old general come in and say, we're under constant attack, we need to do something to protect our country, and that guy won't really understand technology, and the person he's speaking to, the member of Congress, won't really understand technology, and so the end result of the meeting is that members of Congress are scared, but they don't really know what they're scared of, other than that it involves China and the internet. Before Newt Gingrich came into power in the mid 90s, Congress used to have an Office of Technology Assessment that provided nonpartisan scientific advice to members of Congress, and Mr. Gingrich thought that that was a waste of money. I think they were getting probably 20 or 30 million dollars a year. He thought that that money could be saved, so he killed them. He killed the entire office, shut it down. So the end result is that Congress doesn't really get much in the way of independent advice on science and technology issues.
There are only a couple of offices that I know of that have an in-house technical advisor, and so I don't think you can expect to see a sane, nuanced understanding of these issues from many congressional offices anytime soon. I think what will eventually happen, my fear, is that at some point there will be an incident where a US computer, maybe a government computer or critical infrastructure, or even a large corporate system, is hacked using a zero day that our intelligence agencies knew about and didn't tell anyone about because they were using it themselves. And I hate to use the rhetoric of cyber Pearl Harbor, which Leon Panetta used just a week or two ago, but one of the things to remember about Pearl Harbor was that we knew about it and we didn't say anything about it. Our folks had recorded information and didn't tell anyone to prevent it. You know, I think that there may be a time when there's something similar here, and then I think there are gonna be folks on the Hill who are gonna be really upset to learn that NSA could have stopped this but didn't, because they wanted to keep exploiting the flaw. I think this gentleman over here, and then we'll work this middle over here for a second. Sorry, the gentleman in the picture. Do you think there are cases going on where the government comes to Microsoft or Google or whoever and says, don't fix this? You know, look, I spent six years doing a PhD on government surveillance; I'm obviously a paranoid person. But there are enough security flaws that exist in Windows. The NSA doesn't need to go to Microsoft and say, don't fix this. There are always more to be found, and if the price of a zero day is $100,000, that's a rounding error in the federal defense budget. They can buy plenty. Now I should be clear, they need to keep a lot of these things in the barrel, because cyber capabilities are unlike any other form of weapon.
If you have a warehouse full of bullets, you know with some reasonable confidence how long it'll be before the gunpowder goes stale. If you have a nuclear warhead, you know that every 10 years you need to maintain that warhead to make sure it doesn't go bad. If you buy a security vulnerability in Microsoft Windows and then two weeks later a Spanish researcher finds the same thing, your flaw is worthless. This thing could work for a week, or it could work for a year or two years. There's just no sell-by date on these flaws. And so, you know, if you talk to the researchers who are involved in this, one of the things you hear is that the contracts are structured so that they get a payment on day one and then royalties every month that the flaw has not been fixed, to incentivize them not to tell anyone else, but also to defend against, you know, spending $100,000 and then losing its value a week later. So I don't think they need to call up Steve Ballmer and say, keep this quiet, because there are enough flaws that haven't been found. Do you wanna work down that side? So aside from more transparency and better policy at the NSA, do you think this is a market that can even be meaningfully regulated? Because we're talking about informational goods, and these are the best security researchers in the world, and if they wanna make an illegal transaction, how could you possibly stop them from doing that? So it's tough to regulate the researchers, but I think you can regulate the buyers. Defense contractors have to follow the law, right? They may not like it, they may not like the fact that U.S. law prohibits bribing foreign officials, but we have the Foreign Corrupt Practices Act, which seems to be quite effective in some cases.
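As a quick aside, the royalty-style contract structure described a moment ago, an up-front payment plus a monthly royalty for every month the flaw survives unpatched, can be sketched as a toy payout model. The function and all the figures here are hypothetical illustrations, not actual contract terms:

```python
# Toy model of the exploit-contract structure described above: an up-front
# payment plus a monthly royalty for each month the flaw stays unpatched.
# All figures are invented for illustration only.

def total_payout(upfront, monthly_royalty, months_unpatched):
    """Total paid to a researcher under an upfront-plus-royalty contract."""
    return upfront + monthly_royalty * months_unpatched

# A flaw independently rediscovered and patched almost immediately pays
# only the up-front fee...
print(total_payout(100_000, 10_000, 0))   # 100000
# ...while one that survives a year keeps paying, which is what gives the
# researcher an ongoing incentive to stay quiet.
print(total_payout(100_000, 10_000, 12))  # 220000
```

The structure shifts some of the rediscovery risk just mentioned, a $100,000 purchase going worthless a week later, back onto the seller.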
If the government is buying these things directly, then the government is subject to its own regulations, and if the government is going through middlemen, then those middlemen, if they are private or public companies, are gonna be subject to regulation. Obviously, researchers can do what they want, and researchers outside the U.S. especially will be able to avoid these rules, but I think the buyers could be limited. You know, right now we don't know very much about the scale of the market, and so I haven't been calling for any regulation outlawing the market, but what I would like to see is reporting of sales, right? So in the surveillance space, we've just learned in the last six months how many requests the phone companies get. They get 1.5 million requests a year in the U.S., requests from police to cell phone companies. It's really useful to learn about the scale of activities if you want Congress to pass legislation based on the facts on the ground rather than hand-waving, and I think that if we want any chance that Congress will pass wise legislation in this area, it would be really useful for them to understand the scale of the market: how many exploits are being bought and sold, whether they're coming solely from U.S. researchers, or what percentage of exploits are originating in Europe or in Asia or in Africa. Because my gut feeling is that there are a whole lot of foreign researchers who are selling these things, and they're currently selling them to the U.S. because we're paying the most, but they have no natural allegiance to us, and if the Israelis offer them more money, I think they'll probably take it. But we'd learn more through reporting, yes. Yeah, I'm coming from a country that was subject to the U.S. strong crypto export ban in the 90s, and I can tell you that was a joke, even though there was an effort then to regulate strong crypto.
So this gives me the same doubts about the effectiveness of doing something about this market through government regulation, and I would like to ask you to assess government regulation's effectiveness in comparison to the actual market processes that are able to control and fix these holes. There are whole industries that make millions, billions of dollars fixing security holes and making computer systems more secure, and there is also an arms race with independent researchers and so on. As you mentioned, the expiry date is unknown on software exploits and backdoors and all that stuff, which means the market is working, is able to somehow control at least some of these activities. Why is the market not good enough in this case? So I don't think the market is functioning, and I think one of the reasons the market cannot function and will never function is that Microsoft gets less value out of a vulnerability than NSA does. There are presumably a large number of flaws waiting to be found and fixed in Windows, and Microsoft doesn't make any money by taking one of those off the table. All that's on the line is their reputation, and is anyone really gonna stop using Microsoft Windows or Apple because the Egyptian government has purchased a flaw in the software? Probably not. And so Microsoft may feel like they have a duty to do this, but they're certainly not gonna get $100,000 or $500,000 or $1 million or $3 million of value out of it. But if NSA can buy one of these flaws for a million bucks and use it to steal the blueprints for an Iranian nuclear facility, I think the US government would probably say, wow, that's a pretty good use of a million dollars.
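To make the asymmetry just described concrete, here is a toy calculation, with entirely made-up numbers and made-up function names, of why a software vendor and an intelligence agency value the same flaw so differently:

```python
# Toy model of the value asymmetry described above. All numbers are
# invented; the point is the structure of the incentives, not the figures.

def vendor_value(total_reputation_value, latent_flaws):
    # Fixing one of many latent flaws buys the vendor only a sliver of
    # its overall "our software is secure" reputation.
    return total_reputation_value / latent_flaws

def agency_value(intel_payoff, success_probability):
    # Expected value to an agency of the intelligence a single working
    # exploit could yield.
    return intel_payoff * success_probability

v = vendor_value(total_reputation_value=10_000_000, latent_flaws=1_000)
a = agency_value(intel_payoff=50_000_000, success_probability=0.2)
print(v, a, v < a)  # the agency can rationally outbid the vendor
```

Under any numbers shaped like these, the agency's willingness to pay dwarfs the vendor's, which is the speaker's point about bidding wars.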
I mean, given the obscene sums of money that are spent by the military right now, a million dollars for a flaw that gets you into an adversary's computer is money well spent, and so I think that companies are never gonna be able to win a bidding war against governments, their budgets aside, simply because each party doesn't get the same value out of the good. I don't think there is a real market right now to buy the flaws on the good side. I mean, Google is paying some money, but that's just to help the honest researcher; it's to incentivize good behavior. Google is not claiming that this is taking flaws off the bad market and bringing them to the good one, and I think that the other companies out there are not really playing at all. I mean, Microsoft still doesn't have a bounty program, as far as I know. Facebook is offering, I think, a few thousand dollars. Mozilla is offering a couple thousand dollars. These are just not really equals. The antivirus companies all have products they're selling, but realistically the AV companies are snake oil salesmen. They protect you against the flaws that they already know about, not the ones that governments are using. And this summer, after the Stuxnet story came out, the CTO of F-Secure, a Finnish antivirus company, wrote a Wired op-ed in which he said, look, there's no software that you can buy at Staples for 50 bucks that's going to protect you against the government. That's not the business we're in. We will protect you against flaws that we know about, flaws that are likely being used by scammers, but we are not in the business of protecting you against state sponsored attackers, because we don't have the resources either. Next, wherever you want. How about in the front? I had a two part question.
The first part is, I actually was volunteering at the ACLU in the fall of 2009, working on some Freedom of Information Act requests relating to fusion centers, and our request was denied in the early months of 2010, which was very sad for us. So I wanted to know, first, what kind of legal mechanisms are you using, or hoping to use, to effect more open and transparent reporting of sales of these cyber weapons? And the second question I had pertains to Harold Koh's statements about the two-tiered review. Is there anything that the government is not touching in terms of cyber weapons? Is there anything that has been put off the table? Because in my experience with reading government standards of review, it seems that you can have as many tiers as you want, but nothing is actually going to be removed from the pool of available weapons based on those tiers. So that's my bias, but I wanted to hear what you had to say. So on the first question, we have two real tools in our arsenal. Tool number one is to file FOIA requests and then sue when those FOIA requests get denied. And the second tool is to go out drinking with the lawyers who work for the companies and the government and get them to leak stuff to us. I am more skilled in the latter; my colleagues are more skilled with the former. I've filed two lawsuits; I'm waiting for the second one to come back. The first one I did myself and lost, so you don't really want me engaging in any more pro se litigation. We will more than likely be filing FOIAs in this space, but there's no guarantee that we're gonna get anything, and in many cases filing FOIAs with the NSA is a waste of time. If you're interested in this obscure area of law, there's something called a Glomar response, which is the "we can neither confirm nor deny the existence of these documents" response. That's like the standard form letter for every NSA request.
As for Harold Koh's speech, my understanding is that the military right now does not feel constrained by the existing law of war framework for cyber. If anything, they feel like it gives them free rein to do pretty much anything they want. So as to what they may have taken off the table, we don't know; that again is an area ripe for FOIA requests and litigation. And if it's happening on the State Department side, we might have more results than if we file requests with NSA. But Cybercom is basically run out of NSA. So I think there are a couple more hands over here. How much time do we have left? Let's see, we've probably got another five minutes. Okay. So let's do one more, the gentleman in the center in the blue, and then one more over here. So, two linked questions. You mentioned the Pearl Harbor incident, and there's the crypto precedent from World War II. As you know, we broke the Germans' ciphers, we broke the Japanese ciphers, and quite consciously chose to sometimes not act on information we received through broken ciphers, because then the Germans and the Japanese would have known we'd broken their ciphers, because that would have been the only way to have received the information. And it was deemed worth it to let a convoy be sunk or something like that, because the overall benefit from having access to the unencrypted data was greater. And it seems like that's a possibility here. So the first part is, do you think that maybe the government has done this cost benefit analysis and is saying, yeah, the public's vulnerable, but it's worth it if we can get into Iranian computers, et cetera? And the second piece is, let's say you've got the NSA's magic pixie dust: what would you like to see? What's the optimum scenario that solves these problems? Those are some really tough questions. All right, so let me try and see if you like the response. Your Enigma example is an interesting one.
So Enigma was the crypto system used by the Germans during World War II, and it's not really the best analogy, even though I was the one who introduced the Pearl Harbor example, because the Germans had their own system and the Japanese had their own system and we had our own system and the British had their own system. Today, everyone's using Windows. So in this scenario, it's almost as if everyone were using the Enigma machine, and the question is: do we reveal that we have this vulnerability in Enigma, leading the Germans to stop using it and us to stop using it too? Or do we keep it quiet and exploit it? Everyone in the world, for the most part, is running Windows, Linux or macOS, and the most popular commercial distribution of Linux right now is Android. Everyone's using US software, and the machines on Chinese desktops and Iranian desktops and North Korean desktops are more than likely Windows XP machines, or even Windows 7 machines, or Macs for people with money. The analogy doesn't work, because our opponents are not using different software from ourselves. With regard to the second question (we can talk about this more later): what would I like? This is something that's been known about for a decade, and you go to the computer security conferences and the people who are in this trade are the ones giving the keynotes, and they're treated like rock stars. The Grugq, that scumbag whose photo I've shown, sponsored an exclusive party at DefCon in Vegas this summer; 100 people were allowed in, and access was highly sought after. Some of these other guys are sponsoring parties too. These folks are treated with a great deal of respect in the community, and everyone knows about it, but no one talks about it, and that part sickens me. And so one of the things that I've done is just try to get people to talk about it. I think that this is something that should see sunlight.
Ideally, if I were king for a day, there would be a conference in a year or two with a bunch of law professors debating proposals for how to deal with this. I'm not gonna claim that I have the solution, but like so many issues that come up in DC, these are policy problems with a thin shell of technology around them, and policy makers get to the issue, are confronted with this tiny bit of technology, and then throw up their hands and walk away because they don't understand the technology. What I'm trying to convey is that the fundamental hard issues here are not about the technology. They are public policy questions, and I think those need to be addressed by international law experts, by people who do law and economics. I mean, if this is a functioning market, then maybe there are ways we can tune it or adjust it, and if it's a broken market, then maybe there are ways we can fix it. I don't claim to have the solution, but I do think that a bunch of other smart people who didn't know about it before should be looking at this. Do I have time for one more? Yeah, the gentleman in the red over here. You work at Berkman, you can talk to me later. So at the end of this, what's a noob to do? Lawyers, et cetera, people who are like, oh, I patch OS X. Are we basically at the mercy of everyone else, or is there anything an individual can do? I mean, right now, you have no reason to worry, because you're a law student. No one is gonna burn a $50,000 exploit to hack into your computer. But you're at Harvard Law School; chances are you'll end up employed when you're done here, and maybe you're gonna have documents on your computer in a few years that are worth $50,000. Look, the people who are hacking into computers in DC are not just targeting the White House. They're targeting the think tanks that actually come up with proposals.
They're targeting the law firms that are hired by companies to come up with policy proposals, because the big secret in DC that is not really a secret is that policy makers don't come up with their own policy. It's created outside the government and then laundered through executive and congressional offices, and I don't mean that in the financial laundering sense. But realistically, the computers of most foundations and think tanks and law firms are not very secure, and if I were a state actor and I wanted to find out what the US government was gonna be doing in terms of energy policy, I'd be hacking the law firms and lobbyists that do energy related lobbying right now. And a zero day will work really, really well against those guys. In many cases, maybe even social engineering is enough, but once you get to that point, no, there's nothing you can do to protect yourself. I mean, that's why these things are worth as much money as they are. If they didn't work, they wouldn't be worth the money.