But I don't want to go into a whole lot of biographical material, because there's so much of it that it would take half an hour. But Cory Doctorow is here. I want to welcome him to our home, and prepare to be amazed.

Thanks. Thank you. Thank you all very much. Thank you for coming. Thank you, Emmanuel. Oh, you got the slides up. There we go. So, hi, folks. It's a genuine pleasure and honor to finally get to HOPE. I've been trying to come, as Emmanuel said, for more than a decade. I've had planes canceled, I've been kidnapped by my wife for my birthday weekend, and lots of other reasons I haven't been able to come. And it's such a pleasure and an honor to be here at HOPE in New York after all these years of admiring it from afar and reading 2600 since I was a teenager. After this talk, just as a heads-up, they've given me a really big, generous block of time, so there's going to be lots of time for Q&A. There's an audience mic, and I like to alternate between people who identify as female or non-binary and people who identify as male or non-binary, so I'm going to call on you in that way. So here's a heads-up to start thinking about questions now. A long rambling statement followed by "what do you think of that?" is technically a question, but not a good one. So start thinking about that.

So as you can see from the slide here, I want to talk today about different kinds of arguments, because in the world of information security, hacking, civil liberties, and technology, we have a lot of serious, deep, significant, good-faith arguments: arguments about whether we should stress the utilitarian, instrumental benefits and call it open source, or the moral benefits and call it free software. Those are good, meaty arguments to be having. But there's a different kind of argument, the denialism kind. These are debates manufactured by people who profit from controversy over something that's not actually controversial. The canonical example of this is smoking. Cancer denial was the template for all the denial movements that followed, and if cancer was the warm-up, the megadeaths that followed showed that there was no limit to how many people's lives you could destroy just by pretending that something true was instead controversial and unknown.

The AIDS denial movement was a particularly toxic version of this. There was a German doctor named Matthias Rath who owned a big vitamin company and who claimed that AIDS wasn't caused by HIV; he claimed it was caused by vitamin deficiency. And he ran ads in South African newspapers saying things like, why should South Africans continue to be poisoned with AZT? That would be bad enough, but he was a crony of the president of South Africa at the time, Thabo Mbeki, and along with the South African Ministry of Health they adopted the official policy that AIDS was a vitamin deficiency and that antiretroviral drugs would not be the standard treatment for AIDS in South Africa. And because it was supposedly a vitamin deficiency, it also wasn't treated as a sexually transmissible illness. During that period, 300,000 people died. And the HIV-positive share of the population, which stood at 1% when this started, had climbed to 25% by the time South Africa recognized that AIDS denial was wrong and that AIDS is caused by HIV.
And part of the playbook for denial is that when people say that the thing we know is true is true, you have to silence them. So when Ben Goldacre, the eminent epidemiologist, wrote this up for The Guardian, Matthias Rath sued him for libel, and The Guardian spent hundreds of thousands of pounds before they eventually prevailed and were able to publish the story. And so that is a template we see over and over again in public debates: the bad-faith argument where the experts all agree that something is true, and yet in some important official circle we act as though it's still an open question. You know about climate denial, obviously. But there's another kind of denial that this community has to deal with extensively, and that's Turing completeness denial. It's a funny name, but it's real. There are a lot of expressions of Turing completeness denial: the denial of the fact that computers can run all the programs we can express in symbolic language, and that we don't know how to make a computer that can run all the programs except the ones you don't like.

A good long-running example, one that I'm going to return to a few times in this talk, is digital rights management, or digital restrictions management: the idea that you can build a computer that you can send a ciphertext to, and then send the keys to, and then ask the computer to make sure that its owner never sees the keys and doesn't get to save the cleartext when the keys and the ciphertext are combined. In security, we have a technical term for that security model, where you give the keys to the adversary and then hope the adversary doesn't find out what they are. We call it wishful thinking. If we made the best safe in the world, we wouldn't keep it in the bank robber's living room, and if you make the greatest DRM system in the world, it will fail, because you are sending your adversaries the keys. It's not Alice and Bob and Carol, where Alice and Bob trust each other and Carol is trying to man-in-the-middle their communications. It's just Alice and Bob: Bob is Netflix and we're all Alice, and Bob wants to send us a scrambled movie and make sure we can't save it for offline viewing. Anyone who wants to can become Alice just by getting a Netflix subscription, which I know went up by like a dollar this month, but it's still not hard to become Alice. Alice can be bored grad students with their own tunneling electron microscopes and a whole bunch of undergrads hanging around with nothing to do, like a bad smell, who can go through the code one line at a time and through the computer one register at a time until they find that key. And so DRM fails all the time, but we pretend that it still works.

Or crypto backdoors. This is a subject where everything old is new again; we're back to crypto backdoors. The idea that we know how to make a cryptosystem that keeps all our secrets perfectly and forever, except when we need it to fail catastrophically, and then it fails catastrophically for just one minute, long enough for the good guys to descramble the message, and then it reasserts itself and starts working again. We know that doesn't work, if we know anything about crypto. And yet, amazingly, there's a live debate about it.
Serious people, people who get paid lots of money and who get to make decisions, say with perfectly straight faces that we know how to make that cryptosystem, and it's just that the nerds are hiding it from us.

Or privacy denial: the idea that we can collect lots of data and retain it indefinitely, and that it won't leak, and that if it does leak, nothing bad will happen. The idea that if you have nothing to hide, you have nothing to fear. That it's the same thing for you to drive down a street in a car with a license plate as it is for that car to have its license plate photographed every hundred yards, and for those photographs to be run through OCR, and for the records of all the places you've been to be stored indefinitely. These are clearly not the same thing, and there are clearly harms that can arise from the latter, and yet we treat them as though they were equivalent. We say that everything that's not secret is therefore not private. But there are lots of things that aren't secret that are still private. I know what you do when you go to the toilet. It's not a secret, and yet it takes a very special person not to want to close the door behind them. I know what your parents did to make you. It's not a secret either, but most of us choose to do that in private, and that choice isn't anybody else's business. Privacy isn't that no one knows your business; it's that you get to choose who knows your business.

And the problem with denial is that it begets nihilism. Denial matters because the things that we deny are real. Climate change is real. Cigarettes cause cancer. HIV is not a vitamin deficiency. We can't make computers that run all the programs except for the ones that piss you off. And so the non-solutions demanded by denial don't solve the problem, and the problem festers and gets worse. And because we're not solving the underlying problem while pretending that we are, the non-solution is actually worse than doing nothing at all. It's worse than doing nothing about cigarettes and cancer to think that smoking light cigarettes won't give you cancer, because it takes people who think they're mitigating their harms and puts them in exactly as much harm's way as if they'd gone on doing the same thing. It's like someone whose car has faulty brakes. If they know the brakes are faulty, they can take steps to remediate that fault: they can drive more slowly, they can increase their following distance. But if you don't know your car's brakes are faulty, you find out in the hardest way possible. Telling people their problem isn't a problem makes things worse, worse than if they knew it was a problem and did nothing at all about it. People who think AIDS is caused by vitamin deficiency have unprotected sex even when they know they're HIV-positive; that's how you go from a 1% to a 25% infection rate. If you deny climate change, you build houses on floodplains, and then the floods come, and it's worse than doing nothing at all. And if you deny Turing completeness, you go around telling artists sweet lies about the day that will come when the internet will somehow be worse at copying. That day is never going to come. This is it, right?
July 22nd, 2016. Remember this day, because it doesn't get harder to copy from here on in. Your grandchildren and great-grandchildren will marvel at how hard it was to copy stuff in 2016. They'll say, tell me again, grandma, tell me again, grandpa, about 2016, when you couldn't go into a CVS and buy six thumb drives for a dollar, each of which could hold all the words ever spoken and all the paintings ever painted and all the songs ever sung and all the movies ever made. Tell me about when the retirees hadn't all gone to free classes at the library to learn the magic incantation: movie name, space, BitTorrent. So this is it. It doesn't get harder to copy.

And so, because of Turing completeness denial, we go around telling artists that someday it will be harder to copy than it is now. We tell them that the reason they get so little money from Spotify is that their audience listens to music the wrong way, and not that the billions the streaming services hand over to the labels are simply intercepted and pocketed by those labels, with none of the money passed on. And so we end up pointing artists at the people who love their work, and not at the people who sit between them and the people who love their work and take the lion's share of the money. It's easy to see how that won't solve anyone's problem.

Or with crypto denial, we build out infrastructure and embedded systems, forever-day systems, systems that are never patched, that have vulnerabilities deliberately introduced at the time of manufacture to allow for lawful interception, and then we hope that somehow those won't be exploited by bad guys, whether those bad guys are officials or people who can successfully impersonate officials to the device. Or in the world of privacy denial, we encourage people, and we fund them, to found businesses predicated on the idea that if you just siphon off ever-larger caches of people's personal information as they move through time and space and the internet, you'll somehow be able to monetize it in larger and larger sums before it leaks and destroys those people's lives, and that you can flip your company before they realize what you've done and sue you for everything you have.

And the problem with this is the old-lady-who-swallowed-the-fly problem: these non-solutions beget further non-solutions. The problem is still there, the non-solution hasn't worked, and we can't admit that it hasn't worked because the premise was wrong. So we say it hasn't worked because we haven't tried hard enough. Your light cigarettes aren't light enough. The reason your DRM is breaking is that we haven't made it enough of a felony to tell people about the problems with it. The reason we have privacy problems is that our terms of service aren't thoroughgoing enough; if only we make people admit, when they use the service, that there are no privacy problems, then there will be no privacy problems. By using this service, you agree that I'm allowed to come over to your house and punch your grandmother and wear your underwear and make long-distance calls and eat all the food in your fridge. If you make people agree to that, the privacy problems will somehow go away, because they will have agreed that there are no privacy problems.
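To make the license-plate example from earlier concrete, here's a toy sketch in Python. The plates, times, and places are all invented. Each row is an individually public fact; retention plus one query turns those facts into a dossier.

```python
# Toy illustration: individually "public" license-plate photos become a
# movement dossier once they're OCR'd, timestamped, and retained forever.
# All data here is invented for the example.
from collections import defaultdict

# Each row: one roadside camera read (plate, timestamp, camera location)
reads = [
    ("ABC123", "2016-07-22 08:01", "5th Ave & 23rd St"),
    ("XYZ999", "2016-07-22 08:03", "5th Ave & 23rd St"),
    ("ABC123", "2016-07-22 08:44", "health clinic parking lot"),
    ("ABC123", "2016-07-22 19:15", "union hall"),
    ("ABC123", "2016-07-23 01:02", "residential block, Queens"),
]

# Indefinite retention + grouping by plate = everywhere you've been, forever.
dossier = defaultdict(list)
for plate, when, where in reads:
    dossier[plate].append((when, where))

print(dossier["ABC123"])
# [('2016-07-22 08:01', '5th Ave & 23rd St'), ('2016-07-22 08:44', ...), ...]
```

No single read is a secret. The harm lives in the join and the retention.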
And so we spend more money, we take more elaborate steps, we subject people to more rigmarole to non-solve the real problem, and it starts to feel like too much trouble. It starts to feel like a fact of life. These cigarettes are going to kill me someday, but what the hell, it's too late to quit now. Or: there's already so much carbon in the atmosphere, there's no point in not enjoying ourselves in the few years we have left. Or: the entertainment industry is going to insist that DRM be baked into all of our technology, and we won't have a future without their content, so why not just go along with it, and maybe we can convince them not to make it as terrible as they did the last time around. Or: my data is going to leak no matter what, so I might as well hang out on Facebook and get invited to a few parties before the information apocalypse hits. So this creates nihilism, the idea that there's no future.

But eventually it becomes undeniable. At a certain point, no matter how much denial and FUD there is, the problem becomes obvious to everyone. With privacy, we're reaching that moment. The US government treats the Computer Fraud and Abuse Act as though it makes violating terms of service a felony, so privacy researchers have their hands tied when they discover flaws in the systems we rely on and give our data to: they're not allowed to report them. We can see the impact of that unfolding every day, and more and more people know about it every day.

Or the Digital Millennium Copyright Act, which under Section 1201 subjects people to criminal liability for investigating systems with DRM to determine what impact they have, how much information they're gathering, and how that information is being used. So long as we make it a crime to break DRM, so long as we make it a crime to tell people about the flaws in it, we make DRM into an attractive nuisance that spreads and spreads. Section 1201 of the DMCA doesn't stop people from breaking DRM, but it does give manufacturers the legal right to sue people who break DRM, even for legal reasons. Philips put DRM in their light sockets and updated their firmware so they would only accept Philips light bulbs, and reconfiguring your light socket becomes a potential felony, even though switching light bulbs is not piracy and isn't against the law. What company would not take the US government up on a sweet offer like that? Add a thin skin of DRM to any product, and we will give you the legal tool to sue anyone who removes it, even to do legal things.

And so DRM metastasizes. DRM is now in every kind of device. Last year, the US Copyright Office held hearings on the problems with the DMCA, with Section 1201, and with DRM, and they heard from researchers who were finding DRM, and being frustrated in their ability to report on its flaws, in tools as varied as pacemakers, thermostats, cars, medical diagnostic equipment, baby monitors, insulin pumps, cat litter boxes, smart light bulbs, and these little beauties that debuted at CES this year: the Internet of Things rectal thermometer. In 2016, we have DRM up the ass.
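And remember the threat model from earlier: every one of those gadgets ships the ciphertext and the keys to the very person it's supposed to be keeping secrets from. Here's a toy sketch of that assumption in Python, with a one-byte XOR standing in for a real cipher. No real scheme is this simple, but they all share the shape.

```python
# Toy model of the DRM assumption, not any real scheme. Bob (the studio)
# sends Alice both the scrambled movie AND the key that unscrambles it,
# then asks Alice's own computer to hide both from Alice.

KEY = 0x42  # delivered to Alice's machine; XOR stands in for a real cipher

def scramble(movie: bytes) -> bytes:
    return bytes(b ^ KEY for b in movie)

class AlicesPlayer:
    """Runs on hardware Alice owns; must hold key + ciphertext to play."""
    def __init__(self, ciphertext: bytes):
        self.ciphertext = ciphertext
        self.key = KEY  # the "secret" now lives in Alice's RAM

    def play(self) -> bytes:
        # To render even one frame, the player MUST combine the two...
        return bytes(b ^ self.key for b in self.ciphertext)

player = AlicesPlayer(scramble(b"the movie"))
print(player.play())      # b'the movie'
# ...and since Alice owns the machine, nothing stops her from reading
# the key out of memory: the bored grad student's afternoon project.
print(hex(player.key))
```

The safe is in the bank robber's living room: to render even one frame, a machine the owner controls has to hold the key.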
So the privacy and security implications of these things being off limits for auditing, and proliferating without audit or investigation, are figuratively thermonuclear, but they're also literally, potentially lethal, because you put computers in and around your body all day long, and there are only more of them coming. You and me and everyone we know who grew up with earbuds, we've logged enough punishing audio-hours that there will come a day when we're going to have hearing aids, and those hearing aids aren't going to be beige, retro, hipster, analog, transistorized hearing aids. They're going to be Turing-complete computers in your head, and depending on how they're configured, they will allow you to hear everything, or give you superhuman hearing, or stop you from hearing things selectively or on a blanket basis, or tell other people what you're hearing, or make you hear things that aren't there. And it all comes down to whether you have the right to investigate them, and, having investigated them and discovered flaws, the right to remediate them.

Now, at a certain point, as all of this becomes more and more apparent, as DRM spreads from one realm into another, as the problems with reporting privacy issues under the Computer Fraud and Abuse Act become more pointed, we'll hit peak indifference: the moment when the number of people who care about this stuff can only increase. There was a moment like that with tobacco, a moment at which the number of people who'd watched a loved one die of lung cancer was only going to go up. There will come a moment when climate refugees can no longer be ignored, because they will literally be washing up on your shore. And there will come a moment, and I think we're at the threshold of it, when privacy breaches destroy the lives of millions of people every week.

Remember the Office of Personnel Management breach? They were collecting information on Americans who wanted security clearances to work with the government and the military. To get a security clearance, you have to tell the government all the things that might be used to blackmail you: my mom attempted suicide, my brother's in the closet, I have HIV and I haven't told my family. All of that information was stored by the Office of Personnel Management for over 20 million Americans, and it was breached, and the best guess is that it was breached by the Chinese government. So there are a lot of people who today feel like they're terminally compromised by the consequences of privacy denial, and you can see peak indifference at work.

This year I went to a RAND Corporation multi-stakeholder war game where they brought in civil liberties advocates and lawyers and spooks and security experts and CSOs and lots of other people, and they gamed out different scenarios: all the banking collapses because of a zero-day, or all of our network infrastructure dies. What do we do? They put you in a little multidisciplinary team and ask you to game out possible answers, and periodically someone comes in and says, well, there are riots in Philadelphia. What are you going to tell the president?
And under those hothouse conditions, people suggest a lot of different possibilities, and a lot of them had real negative privacy implications. And I noticed something: whenever the really bad privacy stuff came up, all the spooks and three-letter-agency and cop types in the room said, no, we can't do that. I couldn't figure it out, until one of them mentioned the Office of Personnel Management. Oh yeah, right. Like a liberal is a conservative who's been mugged, a privacy advocate is a spook who had their personal information raided by the Chinese government from the Office of Personnel Management.

And it's not just spooks, obviously. There are people who've had their information breached in lots of compromising ways. Not just the Ashley Madison breach: increasingly we're hearing from people, sometimes minors, who've been victimized by voyeurs who spy on them through their own security cameras, their phones' cameras, their webcams, and extort them into performing live sex acts on camera. People whose attackers piece together enough information from lots of disparate breaches to come up with novel attacks. It turns out that, like the environmental movement, scammers reduce, reuse, and recycle: they never throw away a piece of data that can be used again. So we have information from old breaches like Target being merged with information from new breaches. And last Christmas there was a rash of stories in the Wall Street Journal and the Financial Times about homes in New York and London being sold out from under their owners by scammers who had pieced together enough information from different breaches to get duplicate deeds.

So over time, people are coming to realize that just clicking through an agreement doesn't make the privacy issue go away. And over time, people are coming to realize that having computers designed to treat you as an attacker, and to prevent you from reconfiguring them, so the manufacturer can make as much money as possible from you, is a colossal disaster. That making it a felony to reconfigure your device to serve its owner rather than its original manufacturer is a real, significant problem. And new constituencies are coming into the tent every day.

John Deere was one of the targets of that Copyright Office hearing last summer. John Deere tractors are computers that you put in fields. When you drive them around your back 40, they gather extremely detailed soil density data, because they have torque sensors on the wheels and centimeter-accurate location sensors. By the time you finish plowing your field, you have extremely detailed soil density data that you can use to automate your seed broadcasting. And John Deere locks up this data in their tractors with DRM. That DRM prevents you from getting diagnostic information out of the tractor, so you can't fix it yourself or take it to an independent mechanic who might use third-party parts instead of the very expensive John Deere parts. But it also stops you from getting your own soil data out of your own tractor. The way you get that data is to buy it back through John Deere's partners, seed partners like Monsanto, who sell it to you in a package. And so now we have magazines like Modern Farmer taking up the cause of breaking DRM. If that's not peak indifference, I don't know what is.
Peak indifference is the moment at which nihilism can and must be averted, the moment at which our job, as people who care about this stuff, moves from convincing people that there's a problem to convincing them that they can do something about it. That they can quit smoking, call for emissions reductions, install crypto software, jailbreak their devices. It's the moment at which you tell them the names of the people who personally benefited from their immiseration; you tell them that it wasn't an emergent property of the system that put them where they are, but the depraved indifference of named individuals who chose to put them in that situation by deliberately engineering a false controversy about the thing the experts knew was true all along. And when that expensively sown doubt finally collapses, that is the moment when, if you can catch it, you can move people from indifference to making a difference.

So, for example, I'm on the board of a charity called Simply Secure. Simply Secure creates usable interfaces to crypto for normal people and publishes them under free and open licenses, along with fully documented process materials that are also free and open. Simply Secure's premise is that the reason privacy tools are hard is not entirely that security is hard, although that's a piece of it, but that before peak indifference, the people who knew enough about technology to understand that they should be taking steps to protect themselves were people who were already deep geeks, because understanding how all this data could be pieced together required real technical nous. And when your entire audience for a tool is technologically adept, you might as well assume technological expertise when you develop it.

But back when desktop publishing was new, all the typesetting software on the market assumed you were already a typographer, and being a typographer is a hard job; people train for years to be typographers. It turned out that when we assumed that normal people might want to set type, we could collapse about 95% of what was hard about being a typographer into something a civilian could use. There's still an irreducible 5% in typography, just like there will always be an irreducible 5% in security. If you're Ed Snowden, you're not going to be able to use an off-the-shelf tool with its default configuration and be private. But there will be something that helps all of us be private, even if we don't have technical skills. That's Simply Secure's mission, and we describe it as making tools so easy your boss can use them. Because the idea that tools should be so easy your mom can use them, in addition to being sexist, is horribly inaccurate: nobody designs tools for moms. We assume that moms will contort themselves to fit whatever dumb assumptions we've made about how they want to use tools, and so moms have to be ninjas to make their technology work. But bosses get to call up their CIO and say, I don't care about this security business, I'm at a Days Inn in Tampa, and as soon as this kid is finished playing Angry Birds on the laptop in the lobby, I want to sit down and access my email, and you make it work, dammit. So we need to make tools so easy that entitled jerks who've never had to accommodate themselves to technology can use them, and then the rest of us will get to benefit.

So. I'm not a ventriloquist.
I can't talk while I'm drinking water. So: how do we know which rules to break, and what to do once peak indifference hits? We need to have principles. As the eminent cryptographer Alexander Hamilton once said, if you stand for nothing, what will you fall for? Just because some rules are bad, it doesn't follow that rules themselves are bad. You need principles to guide your work. You need a way to defend those principles against people who want you to compromise them, and one of the people who will want you to compromise your principles is almost certainly you, in the future, because there will come a time when you feel hopeless or tired, and all of those calls from outside will ask you to change your principles. And in addition to a way to defend your principles, you need a way to amend them, to keep them up to date and make sure they reflect current reality as technology changes.

And we have a really good example of this from our community: the GNU project, the free software project. The free software movement has an overarching purpose, to make computers serve people rather than making computers into tools to enslave people. And it has principles that embody that purpose: the rules of free software, which number from zero. Freedom zero, the right to run the code; then the right to understand your code, the right to modify your code, and the right to distribute your modifications. And it has a tool to make those principles stick: the GPL. Once you license software under the GPL, there are no backsies; you can't change your mind about it. It's irrevocably licensed. Which means that you might start a business full of high-minded ideas about making the world better, and you can maintain those high-minded ideals even later, when your investors start putting pressure on you to close off your source code. You literally cannot comply with a request to do that. Not with a gun to your company's head. Not even to make payroll for 30 dear friends who quit their jobs at your urging and bet their mortgages and their kids' futures on you. The idealistic you who was present at the start has defended future you from your own weakness. And what's even better is that once your investors understand that you can't change your mind, they won't pressure you. There's no point in asking people to change things they are literally incapable of changing; it's a waste of time. They can't fire you and hire someone more pliable; there's no course of action that puts the GPL toothpaste back in the tube.

This has a technical name in economic circles: a Ulysses pact. When Ulysses was sailing into siren-infested waters, the accepted security protocol was to fill your ears with wax so you couldn't hear the siren songs that would lure you into jumping into the sea and drowning. But Ulysses was a hacker. He wanted to explore forbidden territory; he wanted to hear the siren songs, but he wanted to remain safe. So he said to his sailors, lash me to the mast, so that once we sail into those waters, no matter how badly I want to jump in and drown, thanks to the siren songs, I can't. Future Ulysses was defended by early Ulysses from his own moment of weakness. You license code under the GPL for the same reason you throw away the Oreos the night you start your diet.
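In practice, that Ulysses pact is just a notice at the top of every source file. This is the FSF's standard suggested boilerplate, with a made-up file name and author; once a copy ships under these terms, the license on that copy can't be clawed back:

```python
# frobnicate.py -- part of Frobnicator, a hypothetical example program.
# Copyright (C) 2016  A. Hacker
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
```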
You throw them away not because you're not strong enough to resist Oreos, but because being that strong involves being realistic: recognizing that there will be a future you with a weak moment, with a moment of low blood sugar, and using the strength of this moment to armor yourself against the weakness that all of us experience from time to time. And the GPL is especially resilient to attack, because the most obvious way to weaken it is to argue that software isn't copyrightable and therefore the license is invalid, which is exactly the free software movement's victory condition. Heads I win, tails you lose.

Now, the early pioneers of the net set out to build a decentralized, open system to let anyone talk to anyone else without anyone being able to stop them. But along the way, we accidentally built the world's biggest surveillance system, a surveillance system so advanced that now, when there's a coup or an uprising, the chances are pretty good that rather than shutting the internet off to stop it, the powers that be will leave it on to spy on it. No one is the villain of their own story. The net's pioneers went from don't-be-evil to surveillance capitalism not by twirling their mustaches and deciding to sell out, but by making one tiny compromise at a time, each one only a little distance from the last. Because as humans, we really only sense relative differences, not absolute ones. You know that exercise where you have a gray stripe in between a black stripe and a white stripe? If you cover the black stripe, the gray stripe looks darker, and if you cover the white stripe, it looks lighter, because we really only perceive the relative differences between things. And so when you make a little compromise, you evaluate the next compromise you're asked to make not against the position you started from, but against the position you've come to. Any compromise can be arrived at in a series of sufficiently small steps, each of which seems harmless.

So there is this great project underway to reverse all that, to harden the internet against surveillance, to re-decentralize it. And if we're going to keep the network we're building through the re-decentralization project from de-re-decentralizing, we need rules and tools to guard us against our future selves and our moments of weakness. Rules that will defend us now, while we're pirates, from the admirals that some of us will inevitably become.

So along those lines, I'm going to propose two rules today, two ironclad principles against which we can evaluate any potential solution to any problem we face in the future. Rules that will make us fail gracefully. The first: computers should be designed to obey their owners. When a computer receives two conflicting instructions, one from its owner and one from a remote party, the owner should always win. The second: true facts about computer security should always be legal to disclose. When people rely on a computer, and you know that it isn't suitable for them to rely on, there should never be a legal barrier to telling them that the computer they're relying on can't be trusted. And the way we're going to make those stick, the way we will armor ourselves against future moments of weakness, is by building them into the legal code and the moral code of our networks, by making them our principles and our rules. We'll make them license terms.
We'll make them conditions of membership in consortia and standards bodies. We'll make them conditions of regulatory approval. At EFF, we've just asked the FDA to tell medical device manufacturers, implant manufacturers, that as a condition of having their devices certified to go into people's bodies, they have to promise never to invoke Section 1201 of the DMCA against security researchers who discover true facts about flaws in those devices. Thank you. We've asked the FCC to make it a condition of their new set-top box rules, so that as 80 million households transition from those terrible cable boxes in their living rooms to new, independently certified boxes that can be built to open standards, the definition of open standard includes this: to comply with the standard, you have to make a legally binding promise never to attack people who discover flaws in those devices, devices that are camera-equipped and microphone-equipped, that sit at the boundaries of our networks and live in our living rooms. We should ask the Department of Transportation to make it a condition of certifying firmware for cars. Every regulatory forum should have the rule that as a condition of approval, you can never threaten people who know true facts about the dumb mistakes you've made.

We can also incorporate these into the definition of open standards, and the Open Source Initiative, one of the great keepers of definitions, has in fact amended its definition of open standard to say that when you certify something through which the DMCA or laws like it can be invoked, it is not an open standard unless you also make everyone who implements the standard give a legally binding promise not to attack people who implement new features for it, or people who discover and report flaws in it.

Now, there are places where this is a live issue today. The World Wide Web Consortium is the very long-running and correctly respected organization that has been one of the forerunners of open standardization on the web and one of its great flag bearers. But they've made a dumb mistake. They've allowed the Hollywood studios and the major browser vendors to corner them into incorporating DRM into the core standards for HTML5, which is envisioned as a mechanism for replacing apps as the control interface for actuating and sensing tools as varied as airplanes, nuclear reactors, pacemakers, and your home thermostat. By incorporating DRM into that standard, they are potentially making those systems off limits to external auditing, to disclosure, and to the addition of new features that unlock functionality that has been arrogated to the manufacturer at the expense of the user.

At EFF, we tried to get them not to do this. We joined the W3C and made a principled argument against it, and it fell on deaf ears. But we've since made another argument, which is that the W3C already has policies about other kinds of government-imposed monopolies. They have a patent policy that says that if you join the W3C, you have to promise never to use your patents to attack people who implement its standards. We've asked them to extend that policy to say that if you join the W3C to make DRM, you must promise never to use the rights you get under statutes like the DMCA.
And the DMCA has international equivalents all over the world, because the US Trade Representative is patient zero in an epidemic of shitty internet law that has spread to every country America trades with. So we've asked the W3C to adopt this as a rule: if the W3C makes a standard that gives you the right to sue people under one of those laws, it has to make you promise not to use that right as a condition of participating in the standard. We tried to get it through in the last round of voting, and we didn't. But now we have a much larger constituency supporting it, organizations as diverse as Oxford University and the Royal National Institute for the Blind, who are correctly worried that if you make it a felony to add functionality to tools without the manufacturer's consent, you will wall off people who want to add and extend functionality to allow for accessibility.

And there are ways that I want you folks to help. This campaign is underway right now; it's occupying the majority of my time, and you can help in two meaningful ways.

The first: if you're a security researcher, please sign on to our petition asking the W3C to take this step. Ron Rivest has signed. The core Tor team has signed. Bruce Schneier has signed. I want you to sign. And the way you sign is to send me an email: cory at EFF.org, C-O-R-Y at EFF.org. Tell me your name. Tell me what country you're in, because we want them to know how many different territories support this; it's the World Wide Web, after all. And tell me if you have an institutional affiliation, like a company or a university, that can accompany your name, and we will add you to that petition. You should go look at it. It's long and awesome, full of some of my personal security heroes, all of whom have come together to tell the W3C that it is a terrible mistake, and terrible for the future of the web, to introduce a weapon that allows companies to punish security researchers who discover embarrassing mistakes they've made.

The other way you can help is by auditing these implementations. The standard is called EME, Encrypted Media Extensions, and there are EME implementations already in the wild. The major browser vendors all have them: there's a Microsoft one, a Google one, an Apple one, and a Mozilla one. The Google one was just audited by an Israeli researcher; Israel is one of the only industrialized nations that doesn't have an analog to DMCA 1201. And this researcher discovered that it had showstopper bugs that he thinks might have existed since 2010. As we know, requiring people not to disclose bugs doesn't mean they don't get independently discovered; it just means the last people to find out about them are the people who rely on the systems. So if you're looking for something to audit, we want you to take a good hard look at these tools, and contact us if you find something that you worry might expose you to liability for disclosing. Info at EFF.org is our intake address, and we want to hear from you, because we want to be able to make the case to the W3C that they're living in a fool's paradise if they think they can be secure by passing rules against auditing their own implementations.

So: your rule-breaking needs principles, like these two principles, that computers should obey their owners and that true facts about security should always be lawful to disclose.
We need these simple, minimum viable agreements, these rules for rule-breaking, principles you can be so hardline about that they call you an extremist. If someone's not calling you an unrealistic utopian purist about these rules, you're not trying hard enough. And you need tools. You need your own Ulysses pacts, ways to defend yourself against future, compromised you. The werewolf's sin isn't that he turns into a werewolf on the full moon; it's that he doesn't lock himself away before the moon rises. Your trick is not to stay pure. You will never stay pure; we all make slips. Every vegetarian meets a vegan someday. Every vegan meets a fruitarian. Every fruitarian meets a breatharian. Your trick is to anticipate and correct for the impurities that are sure to come. And once you have these principles, they can inform everything you do.

Hang on, let me see if I can make this scroll. Oh, I guess not. I'm going to shut down the slides for now.

So I have an announcement to make. You may have seen it on Thursday: the Electronic Frontier Foundation announced that we are suing the US government to invalidate Section 1201 of the DMCA. Now, DMCA 1201 is what I've been talking about during this talk. It's the rule that says that breaking DRM, even for lawful purposes, even for legitimate purposes, can be a crime, and that disclosing true facts about defects in DRM can be a crime. And we're representing two clients who represent those two facets: the right to reconfigure a device in lawful ways, and the right to report true facts about its security.

The first is someone you've probably heard of, a guy named Bunnie Huang. Does everyone know Bunnie Huang? Yeah. So Bunnie broke the Xbox when he was a grad student at MIT, and he went to the general counsel for MIT, and the general counsel said, you're on your own, kid; you decided to poke the Microsoft bear on your own time, and it's not MIT's job to defend you. So he came to us at the Electronic Frontier Foundation, and we made sure he could publish his work. He's since gone on to be a hero of reverse engineering and of product development and of security. Our other client is someone else you've probably heard of, a Johns Hopkins researcher named Matthew Green. Matthew Green has an incredible, storied history of working on some of the most important security accomplishments in our field; most recently he was part of the team that audited TrueCrypt.

Now, Bunnie we're representing because he made a product called NeTV. NeTV is a shim that lets you insert graphics and overlays into an HDCP-protected signal, the DRM on high-definition video, and it was really clever: he figured out how to do it without breaking the DRM, so there wasn't any legal risk in making it. But he wants to extend NeTV into something called NeTVCR, which will let you make fair uses of high-definition video by capturing it and saving it in the clear. And Matthew Green we're representing because he has research projects he'd like to undertake that he believes the DMCA threatens. He wants to investigate the security of industrial-grade encryption devices used to secure cryptographic keys for purposes like protecting credit card and ATM transactions, and he has a grant from the National Science Foundation to investigate the security of medical record systems.
And he wants to investigate the security of medical devices, toll collection systems, industrial firewall and VPN devices, wireless communication systems, and the systems that connect vehicles to one another and to their surrounding infrastructure. You can see, kind of a priori, that lurking flaws in devices like these are a serious threat to the economy and to the hundreds of millions of people who rely on them every day. So we really want to make sure that Green is able to independently validate their quality, because the bad guys who abuse those devices don't ask permission to investigate their flaws. We need to be able to investigate all the things.

Our lawsuit challenges the constitutionality of Section 1201 of the DMCA. We argue that restrictions on which code you can write infringe the First Amendment, that disclosing security vulnerabilities is a constitutionally protected form of speech, and that fair use can only be enabled if you can acquire the tools to make those uses. And a funny thing happened the day we filed that lawsuit: a starter pistol was fired, because the status of the law became indeterminate that day. The world around us is full of computers designed to disobey and control their owners, to secure monopoly rents for their manufacturers, to metastasize the inkjet-printer business model into devices as varied as tractors and insulin pumps. And risk-takers and entrepreneurs have been slavering to break DRM. If DRM costs a user $10, then a DRM-defeating device should be easily sellable for five. As Jeff Bezos once said to the publishers, in a moment of unusual candor: your margin is my opportunity. And even before we win our suit, even as it progresses through the legal system over the next year, two years, maybe even ten years if we get to the Supreme Court, we're changing the risk calculus for entrepreneurs and their backers, giving people who don't care about the ethical stuff, but who for purely selfish reasons want to break DRM, a good personal reason to take the risk: if they break DRM, their business might end up being lawful when we prevail. And that's the final lesson I want to give you today about designing disobedience to defeat peak indifference: there are people who will support you on principle, and you should welcome them into your fold, but it never hurts to arrange things so that people can support you out of their own self-interest, even if they don't give a damn about the principles.

So, as you may know, I worked for EFF and then quit my job to write full-time and make up stories to help you pass the long, arduous slog from the cradle to the grave. And a few years ago, I went back to EFF. I went back after seeing that the Mozilla project had decided they needed to implement DRM in order to stay viable, and after losing that argument with the Mozilla people. When I saw that, it made me really upset. It combined with being the father of a young child to make me realize that the future we were bequeathing to the next generation was not the one I'd hoped for, where computers say, yes, master. It was the future we'd always feared, the one where computers say, I can't let you do that, Dave. And so I went back to the Electronic Frontier Foundation to work on a project we call Apollo 1201. Section 1201 of the DMCA is the rule that makes it a crime to defeat DRM.
And like the Apollo project, it's a 10-year mission to do something everyone thought was impossible. Our 10-year mission is to end all the DRM in the world, forever. This lawsuit is the first step, because as the idea of breaking DRM becomes more legitimate in more places, all those countries that passed laws at the behest of the US Trade Representative will start to question whether that was the best idea. After all, suicide pacts are supposed to be mutual, right? If the US said its industry wouldn't participate in some potentially profitable line of business on the condition that Hungary or the UK or France or Canada or Australia made the same promise, and then the US goes ahead and starts businesses that do exactly that, well, why keep the law on your books? And without the DMCA, without laws that make it a crime to remove DRM for legitimate purposes, who would make DRM? What's the point of making a thing that hides keys in tools that you hand to your adversary? If there are no structural impediments to breaking it, people will just break it, for the same reason that if you keep your safe in the bank robber's living room, you will wake up and find all your money gone.

So how can you support this? I've told you about supporting us in our W3C work: cory at EFF.org; tell me your name, your institutional affiliation, and what country you're in. You can support EFF. The Electronic Frontier Foundation is a member-supported nonprofit, and I've never seen an organization squeeze a dollar till it hollers more. We are remarkably effective for the small amount of money we spend; we take on multi-billion-dollar adversaries with very, very small money, and we win and we win and we win, and you can help us. The ACLU has just launched a lawsuit to invalidate key sections of the Computer Fraud and Abuse Act. They have a booth downstairs, two doors over from EFF's. If you're not an ACLU member, you should be, especially in this season of voter suppression, a key campaign the ACLU is working on. And if you're associated with a learning institution, a university, a high school, we have a new campus network we're building out. My colleague Shahid Buttar is running it, and we're going to build a generation of people who care about this stuff and work on it. Send me an email, cory at EFF.org, or send it to him, shahid at EFF.org, S-H-A-H-I-D at EFF.org, and he can help you set that up on your own campus, and we'll create a generation of young activists right at the moment we need it.

And so that's my talk. That's what I want you to know today: that we can make a difference. People ask me, they say, you're a science fiction writer, are you hopeful or pessimistic about the future? Are you optimistic or pessimistic? Optimism and pessimism are a kind of prediction about the future, that things will get better or that things will get worse. And if there's one thing I know as a science fiction writer, it's that science fiction writers suck at predicting the future. So I don't try. But I don't think you need optimism or pessimism to want to do something.
After all, if you believed that tomorrow we would figure out all this computer stuff, that we would find a way to make computers our honest servants rather than tools for magnifying inequality and injustice, you should get up every morning and do everything you can to make that come true. And if you think that tomorrow these computers around us will end up enslaving us and being tools of oppression, you should get up every morning and do everything you can to stop that from happening. So rather than being optimistic or pessimistic, I ask you instead to have hope. Hope has been tarnished in the last couple of election campaigns, but I think there's still some life in it. Because hope, as this conference will tell you, is what you do when you think there's something you can do to make the situation better. It doesn't mean you know how to solve the whole thing; it means you know what to do next. Hope is why, if your ship sinks at sea, you tread water: not because most of the people who tread water when their ship sinks are rescued, but because everyone who was rescued treaded water until rescue arrived. If there's a step you can take next that will carry you to a place where there might be another step, then you have hope, and you can do something.

And so I'm going to ask you today to pledge yourself to hope, to do something. Every day you can wake up and say to yourself: I've made some compromises, I am tarnished. I'm not using free and open-source software in everything I use. I'm giving money every month to phone companies whose mission is to destroy the free and open internet. I'm giving money to ISPs that want to make net neutrality a thing of the past. I buy my crystal-prison devices from the Cupertino company that wants to make DRM the alpha and omega of all future computing. It's true, we all make compromises, but so long as you have hope, you have a way to free yourself from those compromises. And if you find yourself feeling so stained that it's hard to get out of bed, I have a little exercise for you, one that Danese Cooper, who you may know from free and open-source circles, twigged me to: add up the amount of money you spend every month on companies that want to destroy the future you want to live in, and give that much money every month to the organizations trying to save it. Not just EFF, not just the ACLU, but all the organizations that have cropped up to do this work.

And so with that, I'll take your questions. I'd like to start with someone who identifies as a woman or non-binary, and then we'll move to the men. Where's the mic? I can't see anything. In the center, in the middle. So if we could start there. And again I remind you: a long rambling statement followed by "what do you think of that?" is not a good question. So, are there any people who identify as female or non-binary who'd like to start the questioning? I know it sucks to be put on the spot like this. The thing is that otherwise it's just a total sausage fest. It's true. All right, let's start with a dude, and we'll move on, please. Oh, wait, wait, thank you. I can't see from up here, thank you. Thank you very much. Go ahead.

What do you find to be the most effective strategy for people who are non-technical, who are swayed by fear arguments made by the opposing side? When you're talking about vulnerability disclosure, or surveillance, or... Particularly encryption. Encryption. Oh, so you're talking about crypto backdoors.
So I think that if you can explain to people that we have a new fact in the universe, one we've never had before, that we can take these pocket distraction rectangles and use them to turn cleartext into ciphertext so scrambled that if all the hydrogen atoms in the universe were turned into computers, and they did nothing but try to brute-force the key until the end of time, we would run out of universe before we ran out of key space, and then say to them: we know how to make that, but we don't know how to make the version that stops working at the moment we need it to stop and starts working again the next moment, then you can help them understand that the backdoor is a technical non-starter. And we can say, look, there are lots of times when it would be great if we could break crypto for a good reason. You may have seen at ShmooCon there was a presentation by a cryptographer whose colleague had died in a head-on collision without leaving his keys with anyone, and so his wife lost all of their photos, and the business he ran had no recoverable keys. It would be great if we could figure out how to do that, but we don't know how. And in the absence of that, if the systems we rely on are broken, then things like firmware updates for your pacemaker and your car can't be trusted. The stakes are very high when we compromise our crypto, and it's hard enough to make this stuff work as it is; we can see the disasters unfolding all the time.

But it's a complicated and nuanced argument, and of course the other side has a really simple, un-nuanced argument, and that argument is: these poindexters know how to do it, but they just love terrorism and hate America. And it's really hard to counter un-nuanced, bullshitty arguments with a nuanced it's-a-little-more-complicated-than-that argument. But it's the argument we have to make, and I think we make it one person at a time. And to the extent that peak indifference is the moment we're at, time is on our side, because as the years go by, the number of things you can point to where crypto failures have done something bad only grows. Hey, remember last year, when there was a demonstration in some former Soviet basket-case republic, and everyone had their identities captured with IMSI catchers in the square, and when they went home they got a text that said: dear citizen, you are recorded as a participant in an illegal demonstration, which is why we turned your heat off in February using your internet-connected thermostat? And they were able to do that because there were lawful-interception backdoors in all of it. Then you can make the point that lawful interception allows the lawful authorities to intercept, and that right now, in a big piece of the Middle East, the lawful authority is ISIS. The software can't tell whether the government is ISIS; the software just knows that it's the government.

And then the other argument I use sometimes, especially with people who like detective movies, is this: you ever watch a detective movie where someone gets a license plate number, and they say, how are we gonna find out who that plate belongs to? Oh, I know a guy; I'll buy him a drink and he'll look it up.
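As an aside, the key-space arithmetic at the top of that answer is easy to sanity-check. Here's a rough back-of-the-envelope sketch in Python; every constant is an order-of-magnitude estimate, and the 512-bit key size is an assumption for illustration, not a property of any particular product:

```python
# Back-of-the-envelope check of the "run out of universe before you run
# out of key space" claim. Every constant is a rough standard estimate.
ATOMS_IN_UNIVERSE  = 10**80      # hydrogen atoms, observable universe
GUESSES_PER_SECOND = 10**9       # each atom-computer tries a billion keys/s
AGE_OF_UNIVERSE_S  = 4 * 10**17  # ~13.8 billion years, in seconds

total_guesses = ATOMS_IN_UNIVERSE * GUESSES_PER_SECOND * AGE_OF_UNIVERSE_S
key_space = 2**512               # assumed key size: 512 bits

print(f"guesses so far:   ~10^{len(str(total_guesses)) - 1}")  # ~10^106
print(f"keys to try:      ~10^{len(str(key_space)) - 1}")      # ~10^154
print(f"fraction covered: {total_guesses / key_space:.1e}")    # ~3e-48
```

Even with every atom guessing a billion keys a second for the entire age of the universe, you've covered roughly a 10^-48 sliver of the space.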
Well, if that's the level of security we have for things that, if you're a cop, you know the password to, do we want that level of security for your thermostat, your pacemaker, the camera in your living room that's on when your kids are naked? All of those things, do we want that level of insecurity? So I know it's hard. It's a hard challenge. Thank you for your question. Thank you.

Next question. Thank you. I just want to say first, when smile.amazon.com came out a while back, the first thing I did is I hooked EFF up to that. Oh, thank you for that. Every one of my Amazon purchases contributes to EFF. Thank you for that. That's really great. I really appreciate that. It's a charity I could be very proud of. I think you should be. I'm very happy to hear it. It warms my heart. I am also an annual donor to EFF despite being on the payroll. And MIT Media Lab actually pays my fees at EFF. I'm an activist in residence at the Media Lab, so I don't cost EFF anything. And I take some of the money that I get for working at EFF and give it back to EFF as a donation. It's a very worthy charity.

So I guess the government had its werewolf moment with the internet. I thought the internet was originally created to prevent renegade governments from taking over. So I really liked the idea that you had about personal motivations of possibly government officials, CEOs or whatever. And I'm wondering what you can do to even extend that, so that it just makes so much sense for each of those people to make it better.

Yeah, I'm always on the lookout for the kinds of agreements that we can make that lock stuff open. So, you know, Brewster Kahle, who runs the Internet Archive and created the first two search engines, he has this idea that we can build peer-to-peer software that runs in JavaScript and only uses calls that are made by Google Docs. And so to block it, you have to block Google Docs. So we can kind of use Google Docs as an inhuman shield for keeping this stuff running. I really like that kind of idea. When Stewart Butterfield was still running Flickr, he had an API rule where he would allow total unfettered access to his API for any competitor that would allow total unfettered access to theirs. And so basically, if you promise not to lock up your customers, then you can siphon off as many of our customers as you can convince that you're doing the right thing. And it would be great to see that kind of thing formalized as a license term, or like a fair-trade mark, or something that Consumer Reports got behind. I've talked a lot to Consumer Reports and other organizations like them about maybe having a stamp of approval that doesn't say this thing is secure. It says that whatever problems there are with this, there won't be any structural impediments to disclosing them. So like calling up manufacturers before you review a product and saying, do you consider your device to be an effective means of access control under Section 1201 of the DMCA? And if they say yes, you put a little red X next to the product that says: if there's a vulnerability in this, people might go to jail for disclosing it, and then you won't know about it until it's too late, because you and everyone else who bought one just got hit by some rando in an opportunistic attack.
So I think that there are lots of ways that we can knit these kinds of normative, legal, and technical combinations together in a way that helps us grab little pieces of territory and wall them off from being closed off, just one little piece after another.

I guess my biggest worry is when vertically integrated monopolies are able to form. New York City is about to implement this major public Wi-Fi program, but I believe it's just one monopoly vendor who gets it, and I think there's something very wrong there.

Yeah, we have an overall problem with monopolistic forces in markets right now. You know, we talk about consolidation in the internet, but it's not just consolidation in the internet, it's consolidation in the private prison industry and oil and hamburgers and agribusiness; everything is consolidating. The internet is an epiphenomenon of it, but it's also enabled by it. You know, the internet has made it possible for businesses to coordinate monopolistic practices across territories in ways that wouldn't have been sustainable before. The other thing it's done is it's made it possible to stabilize societies that would otherwise be made unstable by the great inequities of this system. So, you know, Thomas Piketty, in Capital in the Twenty-First Century, keeps returning to the amount of inequality in France in 1789, when they built the guillotines, as kind of the moment at which you know you've reached peak inequality. And he says, but we're past there now, right? And one of the reasons we're past there is that we have mass surveillance as a means of stabilizing otherwise intrinsically unstable situations. You know, the Turkish roundup of 6,000 judges and 15,000 academics, and now it seems like there's 50,000 people who've been put on lists in Turkey: that was made possible by mass surveillance and by IT. Anyway, thank you very much for your question. Thanks.

Hey. Hey there. I'm Crowbar. I just want to say thank you so much for your talk. Like, you have such great ideas. And I just think it's awesome that you have these concrete, research-based, techy things that you know and do, but you're also a fiction writer, like a completely different world. And as someone who's a writer and has started getting into fiction, I just think it's cool. And, okay, question: what inspires you to write fiction?

So there's lots of reasons to write fiction. The more I do of it, the weirder fiction gets as a prospect. Like, if you think about it, responding to fiction is deeply irrational, right? The people to whom bad things happen in fiction, they're not real. And so you shouldn't care about them, right? They're literally inconsequential. The yogurt you digested this morning with breakfast was alive and now it's dead. That's more tragic than Romeo and Juliet, right? Because they were never alive, right? And so there's this interesting thing that happens, some kind of hack on the way that we experience empathy with other people, and I think it's that we have a kind of simulator that we use to experience other people. You meet someone and you start to build a model of them in your mind, and you can interrogate that model later. You can ask yourself, what would my mom think of me now, right? You can tell when someone that you love would be proud of or disappointed in you. You're also able to do things like decide whether you can trust someone, based on your model of them.
And I think that part of your brain is super naive. I think that it doesn't know whether it's receiving input about imaginary or real people, because it can form models of people you've never met, right? I was once at this iCommons event in Rio, and Jean-Baptiste Soufron, who ran Wikimedia France, ignored all of the warnings not to go on the Copacabana beach with your laptop, and four guys with machetes robbed him of his laptop, right? And then he came back and he said, if you see four guys with machetes, you should be really careful. And none of us saw the four guys with the machetes, but we were able to form an opinion about people who may not have existed, right? I think he was telling the truth; I can't imagine why he wouldn't have been, but sometimes people lie, right? And we were still able to form opinions about them. And so I think that when you read fiction, you build up a model in your mind of imaginary people, and that's how we experience empathy, right? We subject the people in our model to it. When someone says, oh, you know, Fred fell down a flight of stairs and broke his leg, you kind of take your model of Fred and tumble him down a flight of stairs and imagine what he would go through, in the same way that if you think of yourself as inviting Fred out to dinner, you ask yourself whether or not he'd split the check with you. And so that's how we experience empathy, and we experience it for these imaginary people. And I think that that's true of writers as well as readers. So I think when you start writing, it feels absurd. It feels like you're putting on a puppet show for yourself: hey kids, let's go on a quest! That sounds great, you know? But this weird thing happens when you're writing where suddenly you start to know what your characters are going to do. And I think that that is driven by the model in your brain getting enough information that it's making inferences and telling you, in a kind of subliminal way, what you can expect these imaginary people, which it doesn't know are imaginary, to do. It's kind of like you're breathing your own farts, right? This thing is getting data from itself, but it doesn't know it. So, I went to summer camp with a guy, and it was very sad: he was profoundly epileptic, and one of the last-ditch things that they do if you're experiencing brain damage from continuous seizures is sever your corpus callosum, the electrical bundle that joins the two hemispheres of your brain, to stop seizures from spreading from one side to the other. And after he had this surgery, which was, sadly, shortly before he died, if you covered one of his eyes and then asked him about something, he could tell you what it was but not what it was called. And if you covered the other eye, he could tell you what it was called but not what it was for. But when he spoke it aloud, his ears formed a free-space acoustic loop between the electrically isolated parts of his brain. Just like all that research you keep seeing at hacker cons about air-gapped machines that smuggle information out through their fans or through ultrasonic chirps or whatever, your brain can form free-space loops between parts of itself that normally don't directly talk.
And I think when you write, you form a free-space optical loop between the words that you're writing and the part of your brain that creates subliminal opinions about people, and you end up having these parts of your brain that normally don't talk to each other, talking to each other. And it's really exciting, and it allows you as a fiction writer to invoke not what something is, but how it feels. Before Orwell wrote 1984, during the peace dividend after the war, if you had said, well, we've got all this money lying around, we've just done all this amazing technological development during the war, we've got money lying around because we don't have to buy tanks anymore, what should we do with it? Someone might have said, okay, well, let's put cameras everywhere, because now we know how to make cameras and TV, that all came out of the war. We'll put cameras everywhere and we can find all the crime. It's like the Sermon on the Mount: His eye is on the sparrow. We will be men as gods. We will know everything going on everywhere, and we'll find all the crime. And in 1947, that might've seemed like a rational argument. It would be hard to counter it; anything you could say against it, like "that would feel creepy," would be kind of fuzzy. But after 1949, when Orwell published 1984, suddenly we had an adjective. We could call it Orwellian. We could say that the experience of living under surveillance is a thing that you can know intrinsically and atavistically by reading this book, by having your emulation engine, your empathy engine, spin up and make you feel vicariously what that kind of surveillance is like. And I think that that makes fiction this incredibly powerful political tool, as well as a very powerful aesthetic and cognitive one. And incidentally, I think one of the great fights that we have in fiction, which is about fan fiction, arises from this phenomenon as well. For you to enjoy a book, you have to emulate the characters. And once you've emulated the characters, they don't stop being emulated just because you close the book. Your emulator is naive. And so after you finish the book, those stories go on in your mind, and it's natural that you'll want to write them down. In fact, if the characters aren't still live in your mind after you finish the book, they probably weren't live to begin with, and you didn't enjoy the book. So for the book to be successful, you have to emulate the characters. And so the writer has the characters emulated in their head, and you have the characters emulated in your head, and the writer and you have a dispute about who has the true future history of those characters. And I think that's where the tension springs from. And I think if writers came to understand that fanfic only occurs where people have experienced your book fully, maybe some of that tension would erode too.

Next question. How are we doing for time, by the way? We have lots. Okay, yeah, next question. Go ahead.

Thank you, Cory. Thank you for saying that, because certainly I had a lot of empathy for Marcus Yallow during his exploits. He's mine. You may not... No, no, no, no, no, you're fine. But that's actually a good segue to what I was gonna ask. Namely that I think empathy is hugely important in debates of this nature, and that a lot of the conflict in this, you know, when it comes to talking with real people, is a lack of empathy or a lack of understanding.
And even when you start to explain these things to people, you either get people who don't understand them, don't want to understand them, or do understand them and think, oh, that's nice, but don't think it's such a big deal. What I've certainly found in my own experience to be the best convincing tool, at least on the receiving end, is when you have your worldview shattered. I say that very casually. I've had my worldview shattered by very visceral experiences, and it's kind of hard to go back to the way things were after that. Like when you mentioned the example of the people in your simulation who understand, after this OPM hack, that they can't go back to their previous way of thinking. So I think of that as the best way to move the debates forward on this. How would you recommend, I don't know, casually shattering someone's worldview while minimizing the potential consequences?

So, I mean, I think that peak indifference is a really interesting idea on these lines, right? Because think of carbon, right? All that carbon is out there in the atmosphere, and there's probably nothing we can do about it. It's there, and whatever is gonna happen as a consequence of that carbon is gonna happen. And so the argument we're having is not about the carbon that's already out there, it's about the carbon we might emit. And so we're in a kind of race to see whether the consequences of carbon arrive soon enough to stop us emitting so much more carbon that the harms become irreversible, or whether the lag between the action and the consequence is so great that we put ourselves in terminal danger. And the same is true of private information. And I spent the last 15, 20 years, just like so many of you, trying to get people to care about the potential consequences of privacy. And in some ways, the benefit we have now is that we failed so terribly. There's so much private information out there now that we're having privacy Valdezes every week. And we can use those as touch points, rather than hypotheticals, using actual stuff that happens. That hack in San Francisco that made all the news stories, about the family who had an IoT baby monitor that was very badly secured: it was one of the ones that researchers had presented vulns on at DEF CON, which the manufacturer had still failed to patch. And the toddler, who called the baby monitor "the phone," kept saying that the phone in the room was scaring him at night. And the mom walked past his room one night and heard a stranger's voice swearing at the kid and terrorizing the kid. It was one of those monitors you could steer the camera on with an app, and when she walked in, the camera swiveled to look at her and a stranger's voice said, uh-oh, mom's in the room, right? These non-hypothetical, hair-on-the-back-of-your-neck-stands-up moments: if nothing comes of those moments apart from the ruination of those people's lives, it's actually worse than if we use those moments to convince other people to be safe, right? At the very least, maybe we can salvage something of it. When life gives you SARS, you make sarsaparilla. You know, I hate to use the word exploit, but we can build on these experiences to stop them from recurring. The most tragic thing is for us not to learn the lesson of these moments.
And our hope has gotta be that we can race ahead of the harms of privacy breaches, to armor ourselves, to defend ourselves against privacy breaches before it's too late, before there's so much privacy carbon out there in these giant leaky databases that it's kind of too late for us to salvage it, before there's too much infrastructure built out that's designed with privacy as an afterthought, or not there at all. And I think that, you know, there are multiple ways that we can get there in addition to campaigning. I think there might be market forces on our side. I think of class-action lawyers as a kind of Monte Carlo attack on the American judicial system. They're trying different fact patterns and different legal theories on judges to see if they can finally get a company to internalize the full cost of a breach. You know, Home Depot's breach: they paid about 33 cents per customer for breaching their data, and then everyone got a gift certificate for credit monitoring. But, you know, something like 1% of those people will probably have their houses stolen as a consequence of it, so really the remedy should have been 1% of all the real estate owned by everyone who'd ever shopped at Home Depot. And when that award comes down, and it may, if we have long enough, sustained enough litigation attempts over these breaches, then, you know, the underwriters who insured those companies will suddenly show up at the doors of every company they've written a policy on and say, we'd like to talk to you about what you're collecting and what you're retaining, whether or not you have a business case for it, how much we're gonna charge you for it, and whether it's still worth your while. You know, there was a time when insurance companies knew nothing about fire prevention, and the way we got sprinklers every 10 feet was not that builders just developed them out of the goodness of their hearts; it was that you can't build a building anymore and get it insured unless you have the sprinklers. Even without the building code, the sprinklers would be there, because the insurers would insist on it. And one of our great risks, I think, in using markets to recover from this privacy situation we find ourselves in, is that forced arbitration is becoming a more common feature of those license agreements that we click on, and the courts have generally favored forced arbitration as something that is binding. Forced arbitration says that if you're harmed, first of all, you can't join a class action, and second of all, you can't sue in a court. You go to these private pseudo-courts that are owned by the people that you're suing, and those courts overwhelmingly find in favor of the people who pay their bills. Forced arbitration is in the Pokemon Go agreement. You have 30 days, by the way, after signing up for Pokemon Go to send an email to them asking to be removed from forced arbitration. It's in Google Fiber agreements: if you have Google Fiber, you have 30 days from the change of their terms of service to opt out of their forced arbitration. It's in the Airbnb terms of service. It's in T-Mobile's terms of service. And forced arbitration could short-circuit this by requiring all of us in society to pay the cost of those breaches, because when people lose their homes, or their lives are destroyed, or their children are terrorized, if the company whose indifference caused that to happen doesn't pay for it, it's not like the cost goes away. It becomes a social cost that the rest of us pay.
And so if forced arbitration is allowed to continue to proliferate, I think we'll end up in a situation where it's very likely that the costs of these bad decisions will fall more and more on us to pay, rather than being internalized by the firms. But if we can get out from under forced arbitration, I think that insurers and activist investors, right, people who put money into these companies, are gonna show up and ask the board of directors, ask the C-suite, to defend their data collection and data handling practices. And right now, one of the reasons that surveillance is so cost-effective is that companies do the surveillance and then governments raid their troves, right? If you talk to cops or spooks or people in the military, they'll say, well, I trust Uncle Sam because he signs my paycheck, but I don't trust Google or Amazon or Facebook. And if you talk to people in Silicon Valley, they'll say, well, governments are incompetent, but Zuck isn't gonna take dumb risks with my personal information, because he's got too much riding on it. The reality is that, so long as governments rely on being able to raid Facebook and Google and all these other companies in order to amass their databases, governments are never gonna pass laws limiting their surveillance. There's no such thing as private surveillance or public surveillance; it's public-private surveillance. The Stasi had one spook for every 60 people in East Germany. The NSA has got about one spook for every 10,000 people in the world. They've achieved a two-and-a-half-order-of-magnitude increase in the efficiency of their surveillance by having private companies do the heavy lifting for them, by having us do the heavy lifting for them, right? Our distraction rectangles are also microphones and cameras that know all the people that we know, know all the things that we talk to them about, know all the places that we go, and we foot the bill for it. Every month we pay the bill for it, and that's what makes surveillance cost-effective. It's like a kind of mass-scale version of the Cultural Revolution in China, where they'd shoot your father in the head and send you a bill for the bullet, right? We pay the cost of it. And so if we can force firms and states to internalize these costs, then we could approach a solution not just through principle, but through markets. And to the extent that we can convince people who don't care about principle that we're right, I think we get a lot further than if we have to rely on principle alone. As Upton Sinclair said at the start of my slideshow there, nothing's harder than convincing a man of something when his paycheck depends on him not understanding it. Thank you.

Next question. So my question goes back to your comment about the situation in which you were with a bunch of spooks, a bunch of people all over, and you were doing a simulation. And you said that sort of the brakes in the simulation were the spooks, because they'd had their stuff stolen. And so I'm curious whether they've participated in a lot of these conversations more, having sort of experienced the visceral tragedy of having your personal information in the hands of people who you don't think will do good things with it. And sort of more generally, do you think that there are people who will join the conversation after they have that experience?

Yeah, I think so. I think that the risk is nihilism, right? So all of these people understand now that it was terrible that their data was collected.
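As an aside, a quick sanity check on the Stasi/NSA spook-to-population comparison in the answer above, using only the two ratios quoted in the talk; the figures themselves are the speaker's, not independently sourced.

```python
import math

# Ratios as quoted in the talk above:
STASI_PEOPLE_PER_SPOOK = 60      # one Stasi agent per 60 East Germans
NSA_PEOPLE_PER_SPOOK = 10_000    # one NSA spook per 10,000 people worldwide

# How many times more people each spook "covers" under the NSA model:
gain = NSA_PEOPLE_PER_SPOOK / STASI_PEOPLE_PER_SPOOK
print(f"efficiency gain: {gain:.0f}x, i.e. about 10^{math.log10(gain):.1f}")
# -> roughly 167x, about 10^2.2: a bit over two orders of magnitude,
#    which the talk rounds up to "two and a half".
```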
And I think that if you make reference to it, it's a touchstone. I don't spend a lot of time with spooks or cops or people in the military, but every now and again my path intersects with them, and when you invoke OPM, they immediately understand what you're talking about. And they become really het up about it. They feel, correctly, betrayed by incompetence and indifference on the part of something they had really implicit trust in. And so the risk, though, is that that betrayal just turns into nihilism, right? It's all screwed up, there's nothing we can do, I guess this is what our future looks like. And if we can convince them that there is an intercession they can make, right? In that moment when you have that conversation, if you can say: this didn't occur as an emergent property of networks. Someone made a bad decision that they should have known was bad, and if they didn't know it was bad, it's because they deliberately chose not to research it, because they could guess what they would find, which was that, for example, their favorite contractor wouldn't be a good handler of this data. And we can take steps. It is not intrinsic to computers that they retain their log files forever, right? Do you remember, in the old days of web servers, when every IT manager would eventually have to go into a data center at two in the morning because they forgot to write a log-rotation script and the server had shut down because nobody deleted the logs? Logs used to be like the pubic lice of the internet, right? We couldn't get rid of them. And then one day someone was like, hey, what about the surveillance capitalism thing? No one came down off a mountain with two stone tablets saying thou shalt subject thine logs to supervised machine learning, right? Those were decisions people made, and they're decisions we can change. And that's why I'm interested in figuring out ways to convince people that something can be done, convince them that it's in their economic interest to do it, convince them that it's in their political interest to do it, convince them that legally it's what they should do. And finding, in that Larry Lessig way, law, code, norms, and markets to make a change, right? We got rid of smoking, or largely did, right? I used to be a smoker. I'm not a smoker anymore, and in part I'm not a smoker anymore because not being a smoker got a lot easier and being a smoker got a lot harder. And the way we got there was by changes in markets and code and law and norms: fewer people would have you around, it cost more, the penalties for violating the rules got stiffer. I remember we used to ride the Greyhound bus, and we'd go into the bathroom and smoke, and the driver would pretend they didn't see it. Now they take you off the bus in cuffs. That is not a good response, but it certainly reduced the amount of smoking that took place, right? And normatively and legally and economically, it's all changed. And so I think we can get there.

Five more minutes. So, last question, make it a great one. No pressure. I'm sorry. Make the question great again. Right, make the question great again.

Many, if not most, eight-to-ten-year-olds are given the little distraction rectangles to carry around at all times. Do you have any ideas on how to raise their awareness without scaring them, et cetera?

Yeah, how do you raise kids to be privacy-aware without scaring them, or just... Or even with scaring them, frankly. How do you... What do you do?
It's tough. So, you know who my rabbi is on this stuff? It's danah boyd, right? And she runs this thing called the Data & Society Research Institute. She's amazing. She wrote this book called It's Complicated, which is this incredible book about how young people use networks. Her doctoral work was 15 years of studying, as an anthropologist, how marginalized young people use networks. And she has a lot of insights. First of all, kids generally do value their privacy. They just value it against the wrong people, right? They're really worried about the bully at school and their parents and their teacher getting into their stuff, and they don't think about cops, governments, and future employers. And one of the structural problems we have with helping kids develop good privacy instincts is that our first line of defense for kids on networks has become the censoring proxy, right? As though there's a boiler room somewhere big enough to hold enough prudes to separate all the pages on the internet into pages that are safe for kids and pages that aren't, or as though you could write a regex so gnarly that it could do this on an automated basis. And the thing about a censoring proxy is that it's also a surveillance tool, because in order to make sure that bad clicks don't go through, you have to capture all the clicks. And so we perforce punish children who take any affirmative step to protect their privacy. So I don't know how we get kids interested in privacy. I mean, I kind of know: we tell them to do it and we encourage them. But there's a certain intrinsic interest in privacy that kids have, and it's privacy from us. And I think that kids understand that actions matter more than words. When we tell kids their privacy is as precious and irrecoverable as their virginity, but then tell them that any affirmative step they take to protect it is punishable by immediate expulsion from school, they understand that we're talking bollocks, right? If you want your kids not to smoke, you can't tell them that smoking is bad for them while lighting a new one before butting the old one out in the ashtray. And so if we raid our kids' privacy and treat it as though it has no value, we will never convince them that it has value. So what if our school curriculum was based on teaching kids to jailbreak every device, to circumvent every proxy, and to become hardcore privacy ninjas, as opposed to becoming obedient and compliant users of surveillance? At least the kids who did care about privacy would do something about it. And I think that it's tough, because zero tolerance plus the Computer Fraud and Abuse Act means that kids who do try that in school are apt to get expelled, if not arrested, if not jailed. But I think that there's a middle path, and I've written about this a little. We could develop curriculum around this, because every teacher and every student knows that that censorware doesn't work, that it overblocks and underblocks, that in a network of hundreds of billions of pages, a 1% error rate means that there's more horrific pornography that kids can see than all of the pornography that everyone had seen up to the year 2000 in the entire history of the human race, and at the same time there are hundreds of Libraries of Congress worth of things that are not pornography that are blocked, and not in a kind of Gaussian distribution.
It's lumpy around things like reproductive health, LGBT issues, and so on. And every teacher has had the experience of doing a lesson plan and then having the thing that they were gonna show the kids in the afternoon blocked over the lunch break, and having to hand out photocopies and have the kids do standardized test prep or something, because the lesson is gone. So what if, instead of asking kids to break this stuff, which they do anyway, we said to them: we want you to be anthropologists of this stuff. We want you to go around and interview other kids and document how they break it. Document its unsuitability for use in schools as a technical matter; it doesn't work, and kids who wanna get past it can. Then interview students about things that have been over- and under-blocked, things they wish they hadn't seen but did see, things that they needed to see but couldn't. And interview teachers about ways that it's interfered with their lesson plans. And then teach them to use MuckRock and to file FOIA requests to find out who supplies their software, what else those companies do, and how much their school board is paying them. Teach them to use public-records requests to find out who the vendors' other customers are, because censorware is not developed for schools. Censorware is developed for autocratic regimes in the Middle East and the former Soviet republics in Asia, and then it is repurposed for white-collar workplaces, prisons, and schools, right? And so war criminals capture all of our children's clicks and warehouse them in insecure places, and kids can document that. That's a thing that kids are capable of discovering and documenting, and those are important informational skills to learn. And then present it to your board of education, present it to citizens' meetings, present it to the PTA, present it during constituency meetings for people who are stumping for city council in this election season, and say: we found out that our school board is diverting X hundreds of thousands of dollars to censorware that kids trivially circumvent, that blocks all of this stuff we needed to see, that let us see all of this stuff we wish we could unsee, and the companies that get to spy on everything we do are the ones who allowed Muammar Gaddafi to spy on his people and make enemies lists and figure out who to execute, right? That, I think, is an exercise that can help kids.

So thank you, we're out of time. All right, thank you all very much. Thank you, Emmanuel, and the whole HOPE team. This is great. It's a lifelong dream come true. Support EFF, support ACLU. Thank you. Everyone, please, please stay.