Before the next talk I want to ask you one thing. Did you know there have been crypto wars 2.0? Who has knowledge of that? Okay, I see about three, four hands. So most of us hackers didn't even know about crypto wars 2.0. And it seems like we kind of lost it. So what are we doing now? One step to take would be to stop law enforcement hacking. And how can we do that? And this is why Christopher is on stage. He will tell us a little bit about exactly that topic. Thank you very much. Thank you all for coming. This is a topic I've been researching for, I think, three or four years now. So I'm really excited to finally be able to talk to my community about this. So let me start by saying just a little bit about myself. My name is Chris Soghoian. For the last four years I've been employed at the American Civil Liberties Union, a major NGO in the United States. And I'm a computer scientist. I advise the lawyers who do our surveillance cases. So I work hand in hand with the lawyers who sue the FBI and the NSA for spying. This is actually near the end of my time in that job. I'll be leaving the ACLU at the beginning of January to spend a year in Congress advising politicians on technology. And so I want to emphasize one thing before I begin my talk, which is that not only am I not speaking on behalf of my current employer, but I'm definitely not speaking on behalf of any future employer. Government hacking is an extremely controversial topic. There are mixed feelings even within the hacker community and within civil society. And so I recognize that my own views are definitely not mainstream, even within the community of people who fight the government and fight law enforcement surveillance. All right. So disclaimer number two. There are actually three disclaimers. This is a great quote from a famous journalist in the United States. I just want to read it real quick. The trouble with fighting for human freedoms is that one spends most of one's time defending scoundrels.
For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all. And really what he's describing here is the fact that many of the court cases that define our basic privacy rights come from cases involving drug dealers, people smuggling alcohol, and pedophiles. And so it can be very unpleasant for people to engage in these cases. But if you wait until the government is using its powers against journalists and freedom fighters, by that point the case law is settled. And so if you care about our rights, you have to roll up your sleeves and get into some pretty unpleasant fights. And that's in fact what I've done this year. I volunteered in four child porn cases for the defense, in my personal capacity. I took time off from work. I didn't take any money. And I went and volunteered in these cases because I wanted to understand how the FBI hacks. The unfortunate fact is that child porn cases are the only cases we really know about where the most innovative and troubling techniques are being used. And if you want to help decide what the law of hacking is going to be, this is where the action is. It's probably the most difficult thing I've ever done in terms of forcing me to confront my feelings about the state and about the criminal justice system. I understand that there are many people in this community who would not go that far. And it certainly kept me up at night on many evenings. Because I volunteered for the defense in several cases, I've actually seen things that are not public. I've seen documents that are still sealed. I have copies of some software that was provided to the defense and that's still not public. And what I want to emphasize is that everything that I'll be describing today is based on public information.
Although I've worked as a volunteer expert for several defense teams, nothing in this talk will rely on any information learned in that context. All right. And then disclaimer number three, this is going to seem very American centric. And I apologize for that. Given the audience here, I'm an American. I live in the United States. And the fact is that my government has been more open about its use of hacking than many other governments. And that's not necessarily because we are a more open society. I think it's more that the FBI has been caught a few times. But the end result is that we've learned a lot more about law enforcement hacking in the United States than many other countries have. And I think there are many lessons that can be learned by what has happened in the U.S. even if you're not an American. All right. So what's the point of this talk? This is not a talk where I'm going to give you the history of law enforcement hacking. This is not a talk where I'm going to describe in real intimate detail how the technology of government malware works. The purpose of this talk is really to advance the debate around law enforcement hacking. When I first learned probably five or six years ago that the FBI had a dedicated team of hackers, my instinct was that I didn't like it. But beyond sort of the initial ick factor, it took me a while to figure out why I didn't like the idea of the government having the ability to control people's webcams or microphones or break into their mobile phones or laptops and steal information. It took me a while to come up with my own feelings, my own arguments. And the unfortunate thing is that many people in civil society in trying to push back against government hacking authority have not advanced our arguments very much. Our arguments are very basic and really it's been focused on protecting the privacy of the targets. 
The government has violated the privacy of the pedophiles they're investigating, and that's a total loser in the political sphere. And so the purpose of this talk is to advance us beyond that. If the best argument we have against government hacking is tied to the privacy of pedophiles, we're toast. Even if we think that everyone in our society deserves basic privacy protections, this is not a winning argument for people in Washington, for people in Brussels, and in capitals around the world. And so if we would like to see laws passed, if we would like to see restrictions on government hacking or even a prohibition on it, we have to come up with arguments that politicians and the public will embrace. To win the fight against government hacking, we actually have to change the debate. And so the purpose of this talk is to begin the process of reframing the debate around government hacking away from a privacy issue and toward one focused on collateral damage, and in particular the harm that governments impose on innocent third parties, the harm that governments impose on the internet at large. All right, so I'm going to run through, I think, six different areas where hacking causes problems. And the goal of this talk is really to equip you, and to equip those who will continue this debate, with arguments that will actually work. So that if you're seated with someone who thinks that hacking pedophiles seems like a really good idea, you can come up with convincing arguments as to why, even in those circumstances, it's not a good idea. So the first problem with law enforcement hacking is secrecy. When the protests in Ferguson, Missouri first took place a couple of years ago in the United States, I think many people were shocked to see images like this in news articles and on TV screens. Really what we have here are military technologies being used by law enforcement.
For those who've been watching this space for a while, this is not a new phenomenon, but certainly in Ferguson, this was the most high-profile demonstration of the militarization of law enforcement. Just a few months back there was an armed standoff in Dallas, Texas, where the police ended up using a bomb disposal robot to kill an armed target. Of course, in recent years, law enforcement agencies around the United States and around the world have acquired drones and other sophisticated, formerly military tools. And I think at this point many people have probably heard of stingrays, or as they're known in Europe, IMSI catchers. These are surveillance devices that were first designed for the military and the intelligence community and have trickled down to state and local law enforcement. So this phenomenon has been described quite well by Radley Balko, a libertarian author and journalist in the United States. The phenomenon really is the militarization of police: the fact that sophisticated technology designed for the military, designed for the intelligence community, eventually trickles down to law enforcement. And the problem with this is that it's not just the tools that trickle down, it's not just the arms that trickle down, it's not just tanks and machine guns and armored personnel carriers and vests and helmets; it's also surveillance technology. Military and intelligence surveillance technology trickles down to state and local law enforcement. But because this same technology remains in use both in the military and the intelligence community and in law enforcement, it also comes with a cloud of secrecy. That is, law enforcement tries to keep everything about their use of these kinds of technologies secret, because they don't want to tip off the bad guys, and also because they want to be able to keep using it in the military and intelligence community context.
And so what we see time and time again, whether it's stingrays, whether it's hacking or other innovative surveillance technologies, is their use by law enforcement comes with massive secrecy. And so the FBI has had a dedicated hacking team since at least 2001. It wasn't until the early 2010s when EFF got a bunch of documents and put them online. And when I was reading them, I stumbled on this phrase, the remote operations unit. It wasn't until I think 2013 that I first learned the name of the FBI's hacking unit, 12 years after it was created. The fact is this unit has operated in near total secrecy. And this secrecy doesn't just affect the public's awareness of who is doing the hacking, but we're seeing pervasive secrecy entering our judicial system. Court orders related to government use of these technologies are routinely redacted if they're ever released. They're sealed routinely. Defense lawyers may not know how their clients were identified or arrested. Judges may not know what they're being asked to authorize and may not know where the evidence in their own cases came from. We really have this pernicious cloud of secrecy shrouding the criminal law enforcement landscape because the government wants to preserve the secrecy around these tools. Separately, because of the desire to preserve the secrecy, we also have a circumvention of traditional legislative oversight. There has yet to be a single congressional hearing focused on law enforcement hacking. Even though for more than 15 years law enforcement agencies have been engaging in hacking, we need to have a debate around this, but a debate hasn't happened in part because they don't want to highlight their use of this technology. All right, so number one, the problem of secrecy. Number two, mistakes will be made. Hacking tools are designed by humans and deployed by humans, and humans are not perfect. So the first kind of mistake that will take place is that innocent users will be hacked. 
And this is not a theoretical issue. This has happened. So in 2013, in one of the first bulk hacking operations, the FBI went after, I think, 23 sites on the dark web, one of which was Tormail. Now, while I think sites one through 22 were focused on contraband activity, mainly child pornography sites, Tormail was a service used by many legitimate users: journalists, activists, or people who just like to maintain some privacy online. And although the government got a court order from a judge authorizing them to hack 300 particular Tormail users, the way they deployed their malware led to them hacking innocent users. Essentially, anyone who visited the Tormail homepage for a few days while the FBI was engaging in their hacking operation would get a piece of malware from the government. Now, I don't believe that the FBI intended to do this. I actually think they made a mistake. But the fact is that they accidentally exploited vulnerabilities in the browsers of innocent people, deployed malware to their computers, and then never told them about it. Those individuals never got a letter in the mail, no apologies, and to the extent that the malware ever caused damage to their computers, the government never volunteered to clean up the mess. And so as hacking becomes a routine tool, we should expect to see these mistakes take place, and we should expect to see more and more innocent people getting hacked. And when that happens, the government will shrug their shoulders and say, oh, not our problem. In the Tormail case, for the first year, they refused to even acknowledge that the FBI had been behind the hack. It was all nudge, nudge, wink, wink. And so there's a complete lack of accountability when the government makes mistakes.
Second problem in the context of mistakes being made is that when the government agencies use zero-day vulnerabilities or use exploits that target zero-day vulnerabilities and they make mistakes and they get caught, those zero days will be thrown out into the wild. This is, again, not a theoretical phenomenon. And when it has happened, time and time again, the agencies responsible have disappeared. They've taken no responsibility. They've not paid for any cleanup or compensated the parties that have to actually clean up the damage. And so we have three high-profile examples of this. When the U.S. and Israel deployed their Stuxnet malware against Iran, they exploited several zero-days in the Windows operating system. And when their Stuxnet malware was discovered and then publicized, criminals took advantage of those same zero-days while people on the Internet were waiting to get the patch or waiting to install the patch. Separately, earlier this summer, an entity we believe to be Russia released several of NSA's router hacking tools under the name of the Shadowbrokers. When that happened, what happened from the NSA? Nothing. NSA said nothing. They did nothing. It was the engineers at Cisco and Juniper who had to, you know, work overnight and try and quickly develop fixes and roll them out. And then just last month, an unknown entity, probably a law enforcement agency, got caught engaging in a bulk hacking operation targeting a dark web site called Giftbox, which is a child pornography site. And a Firefox zero-day was released into the wild when that operation was detected. Who had to clean up the mess? Mozilla. When governments lose zero-days, it's the Internet that has to deal with the consequences. And it's the companies and developers who build the software who have to deal with the collateral damage. And so I really think the analogy to think of here is a bit like an oil spill. Right? 
The oil companies tell us that they will work hard to ensure that there will not be accidents. But there are, of course, always accidents at oil drilling sites. And when those accidents take place, it's the people who live in the community who have to deal with the consequences. Right? It's the people who are fishing in those waters or who live on the coast. They're the ones who have to deal with the mess. The CEO of the oil company, his or her children are not eating fish from that water. And so I really think we should be thinking of the government loss of zero-days in the same way. They are forcing the costs of their mistakes onto the Internet at large. All right. Problem number three, trust. The FBI has a bit of a tricky problem when it comes to deploying malware. In the event that they're not able to do a drive-by attack or a watering hole attack, where they know a web page that someone is going to visit or log into, and they're instead looking for a particular user, they have to get their malware onto that user's device. And the tool of first choice will always be phishing, because it works so well. But most reasonably sophisticated targets are not going to open up an e-mail if the from address says fbi.gov. This is an obvious thing. No one will open an e-mail from law enforcement. And so they have to go undercover. Law enforcement has to trick someone into opening up an e-mail and clicking on that attachment and looking at the PowerPoint or the PDF file. So how do they do this? They need to impersonate parties who are trusted in our society. They need to impersonate journalists, which the FBI did in 2007 when it was trying to identify a teenager who had called in a bomb threat to his school. They impersonated the Associated Press, sent an e-mail to the teenager saying, hey, we're the Associated Press, we're writing about you. Please see the attached Word file, which is a draft of the story we've written.
Let us know if there are any mistakes. The kid double-clicks on it, malware installs, he gets arrested. But what about the press? What about the collateral damage caused to journalists if sources think that when they're contacted by a journalist, it might be the government going undercover? This fall, Citizen Lab released a really devastating report showing how an Israeli company, the NSO Group, had provided iOS malware to the government of the United Arab Emirates. This got a lot of publicity because the word on the street is that the vulnerability cost a million bucks. What really slipped below the radar was that the NSO Group, as part of their deployment infrastructure, was using a bunch of look-alike domain names. Domain names that people might click on because they look somewhat legitimate: there was a Facebook domain and there was a WhatsApp domain and a Google domain, but there were also two domains that looked like the Red Cross. Now, I understand why governments might want to impersonate the Red Cross, but I would hope that we can all recognize that we do not want militaries, intelligence agencies, or law enforcement agencies to impersonate medics in our society. Medics and doctors play a vital role, and if you are worried that your doctor is secretly an FBI agent, you will not go to them when you need help. You will not tell them about your drug addiction or your suicidal thoughts. Even though it might be temporarily useful, the risk and the harm to the trust in our society could be devastating. Another example: just a few years ago, Flame, which is sort of the cousin of Stuxnet, was discovered in the wild. We believe this was the work of the US government, and Flame utilized a novel hash collision technique to allow its operators to impersonate the Microsoft Windows Update service.
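Update mechanisms resist this kind of impersonation through code signing: the client verifies a signature over the update before installing it. Below is a minimal sketch of that verify-before-install pattern. Note one loud assumption: real update channels use asymmetric signatures (RSA, ECDSA, Ed25519) so clients hold only a public key; this sketch substitutes a shared-secret HMAC purely to stay self-contained and runnable.

```python
import hashlib
import hmac

# Stand-in signing key. Real update systems use asymmetric keys so the
# client ships only the public half; an HMAC is used here only to keep
# the sketch runnable with the standard library.
SIGNING_KEY = b"vendor-signing-key"

def sign_update(payload: bytes) -> bytes:
    # What the vendor's build pipeline does when publishing an update.
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()

def install_update(payload: bytes, signature: bytes) -> bool:
    # What the client does: verify first, install only on success.
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # reject unsigned or tampered updates
    # ... only now would the client write the update to disk ...
    return True

update = b"browser-v50.bin"
sig = sign_update(update)
print(install_update(update, sig))          # → True
print(install_update(b"spyware.bin", sig))  # → False
```

The Flame attack worked because an MD5 hash collision let the attackers mint a certificate that chained up to Microsoft, which is why the strength of the hash function in this scheme matters as much as the secrecy of the key.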
Now, we've all permitted Google and Apple and these other tech companies to deliver automatic updates to our browsers and automatic updates to our computers, and these update mechanisms rely on code signing. We only allow Google to deliver updates to Chrome, and we only allow Microsoft to deliver updates to Windows. Well, what if governments can leverage that update mechanism? What if they can impersonate Microsoft or Google or Apple and deliver spyware directly to our computers? People may turn off automatic updates, which I think many of us don't want. We don't want to go back to the old days of Windows XP, where people were not getting security updates. And in the Apple-FBI case this spring, one of the arguments that the government made was: look, we're being really nice to you. We're asking you to write the software, but if you don't want to do this, we can come back and demand your source code and your code signing keys and we'll do it ourselves. So we've seen a clear threat from law enforcement in the US that they believe that automatic update mechanisms are a fair target for law enforcement surveillance programs. So the trust issue, I think, should worry many people. All right, problem number four, the economics of surveillance. So I love Chris Rock, the American comedian. And possibly my favorite Chris Rock stand-up routine is the one where he talks about his views on guns. Of course, American views on guns are very different from European views on guns, and I'm not taking a political position here about guns, but I just want to use Chris's routine to really drive home a point. He says that he thinks that guns should be legal but bullets should cost a million dollars each. And the idea here is: let's make it really expensive for people to shoot others, and then, okay, it might happen every once in a while, but they'll only do it when it's really, really important. And I sort of view surveillance in the same light.
I know that governments are going to want to hack, and I know that governments are going to want to spy, but if we make it expensive, they'll have to focus their resources on the people who are really, really important, the real threats. And my concern is that the costs have gotten a little bit too cheap. This is one of my favorite quotes from a judicial opinion, from a court decision in the United States. This is a famous American judge, Judge Posner, talking about the economics of surveillance. And he says that technological progress poses a threat to privacy by enabling an extent of surveillance that in earlier times would have been prohibitively expensive. And what he's talking about here is the economic cost of surveillance. When the government has to send a team of five agents to follow your car, they only have so many agents. There are only so many people they can simultaneously surveil. But when a GPS tracking device in your car or in your phone can enable that same degree of surveillance, a single officer from his or her desk can spy on hundreds or thousands of people. Suddenly, the government can spy on more people than it could before, because technology makes surveillance easier and cheaper. So the most high-profile FBI hacking operation to date, the Playpen operation, the one that I described at the beginning, where I volunteered in a few of these cases: we now know that in that case the FBI hacked more than 8,000 computers around the world, about a thousand in the US and the rest abroad. So let's do a little back-of-the-napkin math here. We don't know for sure that the vulnerability the FBI used was a zero-day, but let's assume that it was a Firefox zero-day, and the average price seems to be about $100,000. That seems like a fair number for this conversation. $100,000 for a zero-day in Firefox, divided by 8,000 targets, equals $12.50 per target.
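The back-of-the-napkin arithmetic works out like this (the $100,000 price is, as said above, an assumption rather than a confirmed figure):

```python
# Assumed market price of one Firefox zero-day, in USD (an estimate
# from the talk, not a confirmed figure).
exploit_cost = 100_000
# Computers the FBI is known to have hacked in the Playpen operation.
targets = 8_000

cost_per_target = exploit_cost / targets
print(f"${cost_per_target:.2f} per target")  # → $12.50 per target
```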
So my concern with the economics of hacking is that if the government hacks enough people, hacking not only becomes an attractive way of surveilling, it becomes the cheapest way of spying on people. Two or three officers can conduct one of these operations and hack thousands, tens of thousands, or hundreds of thousands of targets. And as long as the operation happens in a relatively short period of time, they'll be able to get that many people before the software industry finds out, develops a patch, and deploys the patch. And so my concern is that when they hack enough people, surveillance becomes so cheap, hacking becomes cheaper than even a single hour of law enforcement overtime, that this will become the tool of first resort. Hacking will be the first tool in the toolkit that they reach for, before they go undercover, before they try to investigate the old-fashioned way. My concern is that hacking is making spying far too cheap. All right, so that's the economics of surveillance. Problem number five, cross-border hacking. So it's not just the FBI that is engaging in law enforcement hacking. As Joseph Cox revealed in one of his stories a few months ago, the Australian police have engaged in hacking of Tor users, which led to them hacking some people in the United States. Companies like Hacking Team have sold surveillance technology to governments around the world. And so this technology has been used not just for domestic surveillance by governments, but for cross-border surveillance. And the most high-profile case involves the Ethiopian government hacking into American journalists of Ethiopian heritage living in the Washington, D.C. area. EFF is currently engaged in a lawsuit against the Ethiopian government, but it's an uphill struggle. This is not just a phenomenon of your government hacking you. We are now about to enter a world where plenty of governments will hack across borders.
So you might ask, what's the problem with this? Maybe this is just where it's going. Cross-border law enforcement hacking raises a couple of really thorny issues. So when this incident happened a few years ago at UC Davis in Northern California, it produced a very, very iconic photograph of a police officer pepper-spraying nonviolent protesters. This photograph, of course, went viral and captured the world's attention. This officer ended up losing his job. And the controversy around it ended up costing the president of the university her job, too. So there were consequences associated with this event, in part because the people who lived in that community were disgusted by what happened. And the students at this university were outraged. And that led to political pressure and political accountability. Now think about what happens when a government other than your own engages in an activity in your country that you don't like. So a classic example here is the US government's campaign of drone assassination in Pakistan and Afghanistan. Now, the average person on the street in Pakistan is not happy about the fact that the American government is dropping bombs in their towns. But people in Pakistan don't vote in Iowa and California and New York. There's nothing that they can do about it through the normal powers of the political process. You cannot vote foreign police out of office. And so, you know, while I'm not completely comfortable with what the FBI is doing, I at least have a vehicle as an American voter to register my displeasure, to petition my government to change the rules. But there's nothing that I as an American can do to stop the Australian government, or the French government or the Italian government, from using these kinds of tools in my country. And I think we're going to find out that cross-border hacking is going to be the most problematic and the most legally difficult form of law enforcement hacking. All right.
And then the last problem area associated with law enforcement hacking is what I call the digital security divide. And really what that boils down to is that we are not all equally vulnerable to surveillance. Some of us use devices that are more secure than others. Some of us use web browsers that are more secure than others. Some of us have up-to-date software; some of us don't. Think about the average iPhone user. They have a 600-euro device in their pocket that gets automatic software updates, supported for three or four years after they buy the device. They have automatic disk encryption by default, and end-to-end encryption of text messages when communicating with other people who have iPhones. This is a device that out of the box is pretty damn secure. Now think about the situation with Android phones. With the exception of the Nexus series and now the Pixel series of Android phones, most Android phones rarely receive security updates. And if they do receive them, it's often very late. Android phones still do not use end-to-end encryption by default for text messaging or voice or video communications. And many Android phones still do not use disk encryption by default. Even though Google has required it of newer phones, there's still a carve-out for slower, older chipsets. And so the end result is that many people who have Android phones are more vulnerable to law enforcement hacking. Now, if Android phones cost 600 euros and iPhones cost 600 euros, then I'd say let the market decide. But the fact is that Android dominates the middle and low end of the market, which means that the most vulnerable in our society, minorities and the poor, are more likely to be using devices that are easier for law enforcement agencies to hack. So that might mean that to hack a middle-class banker, the government needs a zero-day in iOS, while to hack a poor immigrant, the government can use a two-year-old exploit that they purchased for $5,000 online.
My concern is that law enforcement hacking, because of the inequality of software security, will actually further perpetuate the existing inequalities in our society. All right, so for those six reasons I described, there are serious collateral harms associated with law enforcement hacking. Even if you think that there are justified reasons for the state to hack, you should at least now see that it's not a clean technique with no harms caused to third parties. So the title of this talk is stopping law enforcement hacking. How do we stop this practice, or at least restrict it? Option one, of course, is to legislate, to pass laws, to regulate this. As I said before, the FBI in my country has been hacking for more than 15 years. There's never been a law passed to regulate this. There's never been a congressional hearing, and it's only really in the last couple of years that the courts have started to struggle with this phenomenon. We need legislation, but that's really tough, and it's particularly tough in the current climate. So in the U.S., there have been no laws at all. In the U.K., they just recently passed the most sweeping piece of surveillance legislation in decades, one that clearly authorizes hacking. In the U.K., they call it equipment interference. But I think part of the reason why that legislation sailed through so easily, and why the government got all the hacking powers they wanted, is that we haven't done a good job of articulating the problems with hacking. And so that means that civil society, if we're going to get laws passed to regulate or restrict hacking, we have to do a better job of how we talk about it. So we cannot change the law until we change the debate. And as long as this is a debate about the government violating people's privacy through hacking, we lose. We have to talk about damage to internet trust. We have to talk about the government losing zero-days.
We have to talk about the government hacking innocent people. We have to talk about the government hacking poor users more than rich users. That's the only way we create political support for hacking legislation that will benefit the internet and restrict these tools. Otherwise, any hacking legislation will give law enforcement everything they want. All right. So the legislative landscape may be a little bit depressing. What are some other ways that we can restrict or prevent law enforcement hacking? Well, this is a technical conference, and this is a community of nerds. Let's talk about how tech can restrict government malware. So we can do a better job of increasing the security of the platforms that we all use. If we make our devices more hardened, if we make our software more secure, then it will be more difficult for governments to hack. They will have to spend more money. And when they do lose a zero-day, it will really hurt them. So one of the big problems in this space is that the privacy software we use is often such a soft target. In many ways, privacy software is often more vulnerable than regular off-the-shelf software. And I really think there's no better example than Firefox. I think this photograph really sums it up. So the firefox is another name for the red panda. And this panda is barely hanging onto its branch; with just a single push, it would fall down. And really, that's about the security posture of the Firefox browser. Firefox is not hardened, which means that although there are well-known techniques that Mozilla could employ that would make it harder for the Firefox browser to be hacked, they have not employed those techniques. So this is a chart that Mudge, the famous American hacker, put together with his Cyber Independent Testing Lab.
And this chart compares Chrome, Safari, and Firefox based on the exploit mitigation techniques that they've deployed, things like ASLR, heap protection, and stack guards. You can clearly see here, and this is from a few months ago, but you can see that Firefox was lagging behind the other two browsers. Many of these are simple compile-time options that can be enabled with a few changes in the build process and that make it significantly harder for an adversary to hack Firefox users. And I should be clear, this is not just about the hundreds of millions of Firefox users. This is about people using the Tor browser, which is a variant of Firefox. So one of the key techniques that Firefox has been missing, and in many ways is still missing until now, is the security sandbox. And the impact of the security sandbox is that it makes it harder for an attacker to use a single vulnerability to take control of your computer. I apologize for my really shitty Photoshop skills. This is not my area of expertise. In many ways, the Chrome browser, although it is clearly the most privacy-invading browser, is also the most secure browser. And Google has spent a huge sum of money to really armor their browser and make it more difficult for governments to exploit. And really, we can see the fruits of that in Google's bug bounty program, where they pay researchers for successful compromises of their products. This is from the Chromium blog, the Google Chrome team's blog, in 2012, talking about a chain of vulnerabilities that a researcher named Pinkie Pie delivered and got a prize for. So how does one get full remote code execution in Chrome? In the case of Pinkie Pie's exploit, it took a chain of six different bugs in order to successfully break out of the Chrome sandbox. In the same month, Sergey certainly earned his $60,000 Pwnium reward: he chained together a whopping 14 bugs, quirks, and missed-hardening opportunities.
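Mitigation charts like that one are produced by statically inspecting the shipped binaries. As a rough, illustrative sketch of the idea (not the lab's actual methodology), a few of these properties can be read straight out of an executable's headers. The helper below assumes a 64-bit little-endian ELF and checks only three things: whether the binary is position-independent (a prerequisite for full ASLR), whether the stack is marked non-executable, and whether RELRO is present:

```python
import struct

# Program-header segment types from the ELF specification and GNU extensions.
PT_GNU_STACK = 0x6474E551  # this segment's flags give the stack permissions
PT_GNU_RELRO = 0x6474E552  # present when the loader can map the GOT read-only
ET_DYN = 3                 # position-independent executables are ET_DYN

def elf_mitigations(data: bytes) -> dict:
    """Report a few exploit mitigations visible in a 64-bit little-endian ELF."""
    assert data[:4] == b"\x7fELF", "not an ELF image"
    e_type = struct.unpack_from("<H", data, 16)[0]
    e_phoff = struct.unpack_from("<Q", data, 32)[0]
    e_phentsize = struct.unpack_from("<H", data, 54)[0]
    e_phnum = struct.unpack_from("<H", data, 56)[0]
    nx, relro = False, False
    for i in range(e_phnum):
        # Each Elf64_Phdr starts with p_type (4 bytes) then p_flags (4 bytes).
        p_type, p_flags = struct.unpack_from("<II", data, e_phoff + i * e_phentsize)
        if p_type == PT_GNU_STACK:
            nx = not (p_flags & 1)  # bit 1 is PF_X: an executable stack defeats NX
        elif p_type == PT_GNU_RELRO:
            relro = True
    return {"pie": e_type == ET_DYN, "nx": nx, "relro": relro}
```

Real checkers look for much more (stack canaries, fortified libc calls, ASLR entropy), but the point stands: these properties are cheap to verify from the outside, which is exactly why their absence in a major browser is so visible.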
And so what we see here is that in the case of the Chrome browser, researchers cannot, in many cases, take over the browser with one vulnerability. They need six or ten or a dozen. Now, to be clear, it is possible for researchers to find six or ten or a dozen bugs or vulnerabilities, but that's certainly more difficult. And if we force governments to up their game, it'll make exploitation more expensive. And it'll mean that when they do get caught, they'll maybe be more reluctant to engage in those operations in the future. And so, as I said before, the Tor browser is currently a variant, a fork, of the Firefox browser. And so the lack of a sandbox in Firefox directly affects the Tor browser and those users who depend on Tor for their safety and security. So one method, as I said, is to harden the software that we all use. Another method that we could employ to make hacking more difficult is to actually target the specific methods that we know governments are employing. So in the case of Tor users and governments, governments are not trying to steal Bitcoin from users. Governments are not trying to install ransomware on your computer. If you're a Tor user and you are likely to get hacked by law enforcement, the one thing they want more than anything else is your IP address, and then your MAC address. So they need to gain the ability to execute code on your computer. They want to learn your IP address, and then they want to send it back to an FBI or GCHQ computer. And so if we want to make law enforcement hacking more difficult, why not focus on the way that they hack, or rather, on the information they're seeking to extract? And that's in fact what some folks are doing right now. Just a couple of months ago, the Tor project announced that they had deployed a new experimental technique. It uses something called Unix domain sockets. But essentially, the Tor browser, if you're using this experimental build, cannot talk to the internet.
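A Unix domain socket is a network-style endpoint that lives at a filesystem path rather than on an IP address, which is what makes this design possible: the browser's only channel becomes a local file, not the network. Here is a minimal sketch of the mechanism on a POSIX system (the names are illustrative; this is not Tor's actual code):

```python
import os
import socket
import tempfile
import threading

# A Unix domain socket is addressed by a filesystem path, not an IP address,
# so a process whose only channel is this socket has no direct route onto
# the network.
sock_path = os.path.join(tempfile.mkdtemp(), "demo.sock")
ready = threading.Event()

def local_daemon() -> None:
    # Stand-in for a local proxy daemon: accept one connection, echo it back.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as server:
        server.bind(sock_path)
        server.listen(1)
        ready.set()  # signal that the socket exists before the client connects
        conn, _ = server.accept()
        with conn:
            conn.sendall(conn.recv(1024))

t = threading.Thread(target=local_daemon)
t.start()
ready.wait()

# Stand-in for the sandboxed client: it talks only to the local path.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
    client.connect(sock_path)
    client.sendall(b"hello over a filesystem socket")
    reply = client.recv(1024)
t.join()
print(reply.decode())
```

A process restricted to a path like this can still be exploited, but the payload has no TCP/IP endpoint of its own to phone home through directly.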
All communications go through the Tor daemon. And so there's no way for malware on that computer to call home. We just saw the first experimental builds from that project released, I think, a week or two weeks ago. And so that is a direct effort to take on the exploitation techniques that law enforcement agencies are using. There are also projects like Qubes and Subgraph, projects that are trying to build penetration-resistant operating systems, or at least a more secure operating system. And the idea there is that we will get hacked, so how do we survive a hack? And in both cases, with both Qubes and Subgraph, although it might be possible for law enforcement to hack your browser, that won't necessarily lead to the discovery of your real IP address, because the browser is contained in a container or a virtual machine that cannot see the real IP address. And then this brings me to my final technical point. The Linux community has not been great at embracing security techniques. There's like three people over there who really care. So there was a devastating article written last summer, a year ago, in the Washington Post about the toxic relationship between Linux kernel developers and the developers of the grsecurity project. So I've been using Linux since I was, I think, 10 or 11 years old. I remember as a child debating with friends and family members about which operating system was better. And I always sort of felt superior as a Linux user. It's embarrassing that so many of the exploit mitigation technologies like ASLR were designed first in the Linux community but are still not deployed by default in the Linux community. Windows has taken the best of our ideas and deployed them to their users. Apple has followed. And the fact is, we don't need to wait for next-generation R&D to make exploitation harder. The grsecurity project and others have pioneered some really amazing technologies.
But then there's this toxic, toxic relationship between the security community and the kernel community, which means that basic protections that other operating systems have already deployed are still missing from mainstream Linux distributions and from the mainline Linux kernel itself. Now, things are getting better after that Washington Post story came out. It's really a long, fun read, and it's not every day that a 5,000-word story about the Linux kernel appears on the front page of the Washington Post. So I strongly recommend that you read Craig Timberg's story. Everyone in that story comes away looking like shit. There are egos as far as the eye can see. So things are getting slightly better. There's a kernel hardening project now that's trying to strip off individual features from grsecurity and get them upstreamed. But our community needs to do a much better job of getting these mainstream security technologies upstreamed. It shouldn't be this easy for governments to hack the users who depend on our technologies. And you know, non-technical users can only be expected to do so much. We have created these tools for them, and many of them still get hacked successfully by governments even though they are doing the right thing. We've delivered software that lets you shoot yourself in the foot a bit too easily, or that self-destructs too easily. And we know how to build hardened software, because many of the people in this room have custom builds of software or custom patches installed. We need to make the default software that we're delivering more secure. We know how to do this, but people need to get over their egos and their hatred of each other. In many cases, it really seems like these communities have hated each other for so long that they've forgotten why they originally started hating each other.
And I'm hoping that in two or three years, Ubuntu will have some of these protections and Debian will have some of these protections turned on by default. All right, so to wrap this up: even though the title of this talk is Stopping Law Enforcement Hacking, we're not actually going to be able to stop all law enforcement hacking, because it's too useful, and governments like tools that work. But we can make it more expensive. We can make it much, much more expensive. And as I think I've outlined and explained, law enforcement hacking is not just cheap. It may become the cheapest form of surveillance when deployed at scale. And if we don't do something about the cost, I think we're going to see hacking not just be an obscure tool used for special cases against special targets, but become the first tool, because it works so well. Thank you very much. So now it's time for questions and answers. Does anybody have a question? Yes, at microphone three, I see one. Hello, I'm very content with what you say, but I'm also kind of confused. You say that it's about law enforcement, and I would argue that Australian hacking in the U.S. is not law enforcement, it's espionage, and drone bombings are not law enforcement, it's warfare. So please help me with that misunderstanding, or tell me what I get wrong, or where is the distinction between law enforcement and other activities of the government? Yeah, when the Australian police hacked Tor users, that wasn't an espionage case. That was a case designed to identify people who would then be arrested and put in jail. That's law enforcement. Yes, the Pakistan drone example, that's definitely either the military or the spies. But the reason I brought up that example is simply to show that when a foreign government does something in your country and you don't like it, you have a lot less ability to get things changed. You cannot call up your elected official and tell them to stop doing it. Thank you.
A question from the internet? The internet wants to know: does the government ever remove the malware from a PC? For example, in the case of TorMail, if they found out, oops, not the person we are looking for. My understanding, at least in the three or four bulk hacking operations that have become public so far, is that in none of those cases was the malware permanently installed on the computer. So it ran, it collected some information, and it sent it home. There are scenarios where law enforcement have asked courts for permission to install more permanent or persistent malware on computers, malware that would collect video footage or webcam information for a period of like 30 or 60 days. In that case, the software would stay on the computer until it was removed. But again, we know very little about how these technologies are deployed. In the U.S., law enforcement has had malware for 15 years, and we know of maybe ten or fewer cases where it's been used. I don't think this team is sitting around twiddling their thumbs with nothing to do. I think they're very busy. But most of the operations they engage in remain sealed or hidden from the public. And so we don't know enough about how these tools are used. Number two, please. Do you believe that in the United States there's a public perception that government hacking might be good and civilian or black-hat hacking might be bad, and that might contribute to why we're not able to legislate against it effectively? I mean, I certainly think that the average person probably doesn't know that law enforcement hacking is taking place at all. Most Americans are busy trying to put food on the table or get their kids through school. But I think to the extent that you have a conversation with the average person, yeah, no one is going to be sympathetic to criminals hacking. And like many forms of violence, you know, it's not okay for me to tase someone, but the government is supposed to have a monopoly on violence.
And maybe they should have a monopoly on hacking. If that's a thing that is to happen, then we need to have a debate about it. But as I've tried to explain in this talk, just as law enforcement use of so-called less-lethal weapons like tasers can have collateral harms (they can accidentally kill people sometimes), so too can law enforcement hacking lead to unintended consequences. Number four. I wonder, in many cases, terrorism is used as a reason for espionage, especially in France and Europe now. I wonder if fighting the narrative around terrorism is a good strategy to fight this idea of law enforcement hacking. So the question is whether we should use the terrorism rhetoric, or whether we should allow the other side to use the terrorism rhetoric? It's not fighting the idea of law enforcement hacking, it's fighting the reason why people, the government, want to use law enforcement hacking, which is often terrorism. I mean, I think the average person doesn't want terrorist attacks to take place. The average person also doesn't want pedophiles to be able to do horrific things to children. I think if we allow the debate to be framed as the government going after bad people, then we lose the debate. The conversation really needs to shift to: even if the government has good intentions, what are the secondary effects of their techniques? And, you know, if they're looking for a terrorist in an apartment building and they burn down the building, that's a problem if there are lots of innocent people living in that building, in addition to what they do to the target. It's very tempting to engage on the specifics of how the government is using it, whether to discuss the pedophiles or the terrorists or the drug dealers, and if we get sucked in there, it's a trap and we lose. We have to stay back and focus on the harms to the internet and the harms to innocent people. Another question from the internet, if there's one.
Would it be feasible to initiate a class action lawsuit against the FBI or other agencies? And would we find sufficient evidence, or enough people, for such an initiative? I'm not a lawyer, and so I don't know. But it's hard enough to sue the government when you can prove what they're doing. It's even harder when you cannot. And as I've described, in many cases it's really difficult just to prove which agency was doing the hacking, even when they get caught. Number three, please. I've been talking to someone in the police, and he was saying that we already decided that the police can quietly go into people's homes sometimes and search everything they want to. So why would we not just make legislation that says they can do the same to computers and other devices? But I was thinking, well, maybe there's a little bit more on, say, your phone than there would be in your home when it is searched. How would you view that? And how would you convince him that it might be some other way? Yes, I think you bring up two really interesting questions there. The first is that there was a public debate, at least in my country, about whether, when, and why the police should have the ability to search people's houses. That debate took place at the founding of our country, because we were concerned, or my forefathers were concerned, that the British had abused their authority to conduct general-warrant searches of entire neighborhoods. So we've had a debate about that, and as a result of that debate we got laws passed, and there's a specific legal framework that governs when the police can kick down your front door and search your living room. We haven't had that debate around law enforcement hacking. Instead, the existing search tools, the existing search authorities, are used for a very different kind of search.
And when you think about why we need a debate, and why we actually need specific hacking rules: if the government fucks up a normal law enforcement search, maybe they search the wrong house, maybe they shoot your dog or ransack your living room, but the harm is limited to a relatively small area. Maybe they get the apartment above you instead of your apartment. But in law enforcement hacking cases, when they make a mistake, they could be searching an entire neighborhood. They could release a tool onto the internet that criminals could then use to hack innocent people. And so with modern bulk surveillance technologies, the worst-case scenarios for when they make mistakes are so much worse than a traditional physical search of a home that we do need to have a conversation about this. We do need to ensure that those we elect to office are politically accountable for enabling this kind of technology. And then we need to keep it under close watch, because when they make mistakes, and even when they don't make mistakes, they can still harm innocent users on the internet. Number two, please. Okay. Don't you think that it's still pretty unlikely, if you use Linux, to be hacked by the government, because it's way less affordable for them, because there are way fewer Linux users, so you're a way smaller target? So the question is, do I think that Linux users are safer because you're a small enough minority? No. There are enough Linux users that it's cost-effective for the companies who sell these tools to governments to sell a Windows version, a Linux version, and a Mac version. And we've seen the FBI deploying all three flavors of malware in hacking cases to date. So this idea that you're somehow safer because you use Linux is, I think, a myth. If you want to compile your own version of your browser with custom compile options to make yourself a little bit more unique, yes.
In that case, maybe you can achieve protection by using a custom build, but not just by using a Debian or Ubuntu ISO image. Thank you. A question from the internet, please. The question is: what is your opinion, regarding law enforcement agency hacking, on the accountability of evidence acquired by hacking, and thereby its usability as proof in court? So there's an entire debate to be had around how reliable malware-derived evidence is, and that's something that's being litigated in the United States. The FBI in the Playpen operation did not use TLS to transmit the information it collected back to the FBI server, and one of the arguments the defense lawyers made was that it could have been tampered with as it was sent back to the government's servers. There are forensic standards, established by the industry and by the government, for searching laptops and extracting data from them. There are no forensic standards for malware, and because the government wishes to preserve the secrecy around their tools, they are very reluctant to even disclose the shellcode that they're using, let alone the exploits that first break into a computer. So it's very hard for the defense in a case to really understand what took place, and what mistakes may have been made while the search was going on. Number four, please. Speaking of framing the debate, I think there's one point that wasn't really made, and it's the collateral damage that is done by the zero days. If you have a zero day, it doesn't even matter if it gets out somewhere. Somebody else could, in the meantime, while you're using that zero day, have developed the same capabilities and used them against a target.
In the end, police forces are actively putting the public, and with public I also mean other government agencies, the army, hospitals, power stations, and so on, in jeopardy of being attacked with the same measures that the police already know of. So it's totally irresponsible for any law enforcement agency to use any zero days at all. That's my point. I think it's an important point for framing the debate. So I'm definitely aware of that argument, and it's certainly an argument we made in the Apple versus FBI case in our ACLU brief. I don't think it's a winning political argument. If there were only one zero day in the Firefox browser, then yes, if law enforcement discovered it and didn't tell Mozilla, it's possible that another entity, a foreign government or a criminal gang, could discover that same vulnerability, and by not telling Mozilla we would all be left vulnerable to those foreign governments using the same tool. That argument makes sense when there's one zero day to be found in Firefox. But let's be honest, the Firefox browser is a target-rich environment when it comes to vulnerabilities. It is not a particularly secure piece of software. It's very complex, very old software, and they're still paying off their technical debt. And I'm not going to say that zero days in Firefox are a dime a dozen, but there are certainly enough of them that I don't think it's as convincing to say that governments are leaving us vulnerable. I get the rhetorical strategy, but I think the argument of governments losing zero days and then those zero days being instantly used by criminals is a much more convincing argument, and we now have enough examples of that taking place that I think it makes sense to focus there, because everyone, even the least technical person, understands that the government makes mistakes. But thank you. Number three, please. I think your point about changing the debate is as important as it is often missed, especially in this community.
I think that we focus on technical solutions to social problems very often, and we kind of miss this part. So thank you for this very much. But I do have a question. I think part of this debate is language also. And so the question is, do you think that perhaps we should find a better word for maliciously using, you know, exploits to break into things than hacking? And this is an honest question, right? Because I think that with this word being used in two different meanings, we miss a lot of opportunities to explain the difference between a hacker who is opposed to law enforcement using and abusing exploits or the militarization of exploits and the people on the other side of this discussion. I think that makes this discussion, you know, muddied and harder to actually engage in and to actually change the debate as you suggested. Thank you. Yeah, I think that's a fantastic point that you bring up. The power of language is extremely important and often overlooked. In the U.S., for example, the FBI came up with their own special term for malware, which is called a network investigative technique. And every time the defense lawyers use the term malware in court or in briefs, the government freaks out. They say, no, no, no, malware is something that criminals use. When we do it, it's called a network investigative technique. So from my position, if a gun is held by a police officer, it's a gun. And when it's used by a bank robber, it's still a gun. The person who's holding it doesn't change the physical object, but the Department of Justice has contorted themselves into a strange shape to insist that their malware is not really malware because a judge who didn't really understand what she was authorizing gave them a piece of paper. 
I think that for them, there are a few benefits to redefining the term malware. The first is that many judges who are being given applications for a hacking warrant may not really understand what they're being asked to authorize when the application uses this clinical, made-up term. Until two years ago, a Google search for network investigative technique would reveal nothing. And so a judge encountering one of these applications for the first time would have no way of knowing what they were really being asked to authorize. And still to this day, you don't see zero day or shellcode or exploit anywhere in the warrant application. And so they really use this clinical language to make it seem like what they're doing is not a big deal and doesn't have any risks, and they don't talk about what happens if they make a mistake. And so I think that's really useful when it comes to convincing judges, or at least making sure that judges don't know what they're being asked to authorize. And then, yeah, I think that also impacts the political debate, when there is one. Our community needs to do a better job of using language in our favor. We unfortunately often let the government define the terms, and then we use their terms, whether it's the increased adoption of intelligence-community terms into our surveillance debate, things like implants and operators. I think we use the terms because we think they're cool, but then we miss the opportunity to take the power that comes from defining them in favorable terms for our side. So we all need to do a better job with that. But thank you for bringing that up. So thank you very much. And I have to close this talk now. Give a warm round of applause. Thank you.