So good afternoon. My name is Jonathan Zittrain, and I'm so pleased to welcome you to this interestingly titled session on defending an unowned internet, for which at least two of the panelists have asked me what it means. So that might mean that there's curiosity elsewhere. Before we begin, let me say that this is being recorded but not webcast live. So be aware that what you say is being preserved for the ages in a transient medium. As opposed to under what other circumstances? Perhaps being surveilled from afar by those who would do us ill. You just set it up, I put it in the hoop. So what we thought we would do is start with a conversation among our four panelists and then open it up to this brain trust in the room early and often. And we have the microphones with which to do that. If you do want to speak up when the time comes, feel free to identify yourself. Or if you're feeling anonymous, you may don a Guy Fawkes mask or simply decline to identify yourself. So maybe I owe an explanation for a little bit about what we might mean by an unowned internet. And one way to get at that might be something that is going on right now with one of our panelists' sites. Ben Wittes, you run and founded the Lawfare blog. Is that right? Yes, with Jack Goldsmith and Bobby Chesney. And that's L-A-W-F-A-R-E, Lawfare. Correct. As in the law of warfare. And warfare over law. And warfare fighting about law. It's kind of a double entendre, as it were. So Ben, in addition to being a senior fellow at the Brookings Institution working on such issues, has this blog, which has been under attack from IP addresses associated with the Netherlands since? Well, so there were some attacks in December. And then they recurred again over the weekend. And they knocked us out for a few hours on, I forget whether it was Saturday or Sunday. And we should just take a moment to be clear. When we say attacks, we mean a lot of traffic. 
Yeah, flooding of traffic from particular IP addresses in the Netherlands. And we don't actually know more about what it was than that. Except you have received an apology over Twitter from the Dutch ambassador? Not from the ambassador, but from, this was back in December when we disclosed that the IP addresses were in the Netherlands, and somebody associated with the Dutch embassy tweeted his regrets. To be clear, I don't know that the attacker is in the Netherlands. All I know is that a server or a set of servers in the Netherlands appear to be the platform. Yeah, no, for all we know, the attacker could be in this very room, like in Agatha Christie. And if the attacker is in this room, please stop. So this is a long way of saying, when we talk about an unowned internet, we're talking about, to me at least, and I suspect panelists may have different views on this. But to me, I'm thinking of the internet as collective hallucination, a set of protocols that brings together different networks for which there's no one or even small group of owners, and for which, therefore, when problems come up, there is no customer service line to be put on hold with and then to be told they can't help you. There's no CEO to complain about how overpaid the CEO of the internet is. I don't know how much it would take to overpay the non-existent CEO of the internet. And for which I guess we look at something like what Lawfare is experiencing, which is of course quite typical. 
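The flooding Wittes describes is the basic denial-of-service pattern: far more requests per second from a set of source addresses than a site can absorb. One common first line of defense is per-source rate limiting; the sketch below is a toy illustration in Python (the class name and thresholds are hypothetical, not any particular product's implementation):

```python
import time
from collections import defaultdict, deque

# Toy per-IP rate limiter: refuse requests from any address that exceeds
# max_requests within a sliding window of window_seconds. Thresholds here
# are illustrative only.
class RateLimiter:
    def __init__(self, max_requests=100, window_seconds=10.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Evict timestamps that have fallen outside the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # looks like a flood from this source; refuse
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=1.0)
results = [limiter.allow("203.0.113.5", now=t) for t in (0.0, 0.1, 0.2, 0.3, 1.5)]
print(results)  # the fourth burst request is refused; after the window it recovers
```

Real mitigation is harder than this, since a distributed attack spreads traffic across many addresses, which is part of why sites end up sheltering under large hosting providers, as the discussion turns to next.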
And if we fast forward 10 years, in addition to all of the very direct and obvious questions that we will be hearing about today that have to do with government surveillance, government intervention, treatment of the internet space as a space where the armies operate and as a space for war, in addition to all of that, there's the question that if 10 years from now, because of attacks like this, everybody pretty much ended up sheltered under the wing of Amazon Web Services or one of three other worldwide conglomerates that can offer you protection from attack, and they'll take care of everything, so that you think you're surfing the web, but what you're doing is visiting one of four entities. One question is, well, first, would that now be an owned internet? One in which we actually do have a number of CEOs who could exercise some customer service discretion. And if so, is that a problem? And how might we relate to that? And I'm hoping that Ebele, who's here, may be able, in addition to speaking to some of the usual stuff having to do with dealing with governments and relating to them in the social responsibility space, also to speak to that. So that's at least my take on our rather capacious topic. And for that I just wanna now just go through and have a brief introductory moment with each of our headlining participants. So Yochai, let's start with you. Your frown is beckoning. So you are the winner of awards from the Electronic Frontier Foundation, from the Ford Foundation, from Public Knowledge, a Lifetime Achievement Award from Oxford University. So you're done, you can leave. Oxford has a long lifetime. And I guess it's the lifetime of the recipient, not of the giver, but probably. Are you intimating that that's shorter? The singularity is near, but that is beyond the scope of our panel. 
So Yochai, anything more you wanna say by way of introduction to situate yourself on this topic, and an idea you wanna put forward as far as what concerns you most about the state of an open or unowned internet in 2014? I think the concern or the worry that I have as I look at the last few years, and I don't think it's only the question of NSA eavesdropping and surveillance. But more generally, I think the concern I have is one in which we're seeing a series of changes to structures of ownership, to locations of data in the cloud, to processing of big data, to the smaller number of companies, to surveillance as a business model as well as surveillance as a control model. Where the question becomes whether the set of disruptions that broke through a whole set of systems of control that typified the 20th century, and that we associated with the internet, we associated with the web, as cutting across a whole set of control systems, that were the source of creativity and innovation from places that weren't supposed to be there, of democratic participation, of autonomy and cultural creativity and creating identity. The whole class of technologies that cut across traditional media and economic organizations, et cetera, and really were the foundation of the freedom that I think we all celebrated over the last 20 years. The concern I have is that the same clusters of technological capabilities are now being recreated as control systems. And the potential was always there and the concerns were always there, but that becomes the concern. Whether they become control systems. So it's not so much unowned in the sense of a single company or not, but that the habits and practices of use, both at the edges and at the core, become habits of surveillance, information environment manipulation, and control by whoever is in the position to do that. And would you have just a quick example of how that could bear out if the trend line goes in the direction that you'd not like it to go? 
Fast forward five, 10 years, what would that look like that would be different from now? At one level, it's the sense that if you had a moment from the perspective of democratic citizens thinking that they can talk to each other, form communities, speak, be active, be edgy, be disruptive, that the sense that everything you do all the time is susceptible to search and analysis and constraint. And if it turns out to be threatening to a sufficiently powerful source of power within the state, particularly in the United States, the national security establishment, that becomes something you have to worry about, and you can't participate, and you basically go back into the mid-20th century mode of cursing about something that you disagree with but not really believing that you can do something about it, because you're concerned about retribution. And the flip side of that is that the quality and refinement of marketing or video, for example, becomes such that we see a recreation of the passivity of the overwhelming majority of the population, and what we get is Netflix replacing the three networks. But fundamentally, the experience of visual cultural immersion is one of passive consumption rather than shared creation. Got it. So it's kind of a combination of people on their best behavior for fear of acting out against the prevailing norms as established by the powerful, and basically WALL-E, where people in the very long middle of that movie are sort of just floating around and drinking milkshakes. My dim memory of WALL-E. I obviously need to watch it on Netflix, and my medium would be my message. Bruce, why don't we turn to you, because my sense is you may have some fellow traveling to do with Yochai's concerns. But you are known, and I think your general tagline is, noted security expert and also troublemaker, for whom, as I had said, I had to make an effort to try to compete on shirts with Bruce, and I lose once again. 
But tell us a little bit about how you've come to this topic and even to the chair you're sitting in. When I think about the owned internet, I think about the dangers of it, that there's enormous social value to commons, to unowned space. It's not just the internet. We see this in the physical world, that the physical places people congregate are malls, or Zuccotti Park in New York, places that are treated as public spaces. We believe they are like sidewalks, but malls are not like sidewalks. You can be banned from a mall. There can be rules about the mall; the mall where I live in Minneapolis has a rule about teenagers not being there at night without their parents. So suddenly what we perceive as a common space becomes an owned space with rules that are set by companies, by for-profit enterprises. And similarly we might think of Facebook as a commons, a place where we go to talk to our friends, where we couldn't imagine that someone can say, hey, that topic is unacceptable, that photo is unacceptable, your account is now unacceptable, because that doesn't happen in our conceptualization of the space. But because it's owned, because it is not a commons, that happens. Apple can decide what apps are acceptable for sale at the iTunes store. We certainly imagine it is like a free market, and there are apps and you can buy them, but that's actually not true. It is an owned market, and it is controlled, and there are rigid rules, and if your app doesn't meet the rigid rules, it is off. And it's not like a government where there's legal recourse, where you might be able to challenge that. The rules are rules. They can decide to let you challenge, or they can decide not to. 
So when you look at this notion of an unowned internet, you have to look at the enormous value of it: that things can come out of places we don't imagine, that innovations, that marginal groups that the majority might find unacceptable are allowed to congregate and speak. And now we have to deal with the fact that some of them are actual criminal organizations, and some of them are just minorities in countries where they're not tolerated, sexual minorities, religious minorities, cultural minorities, because of the commons nature. And that's why, as we lose that commons, and I'm talking about it from a corporate perspective, you also can lose it from a government perspective too: surveillance and censorship and all the things that you expect not to happen in a commons area. I mean, there are protests you can have outside Harvard Yard that you can't have inside Harvard Yard. Even though we walk back and forth freely like there's no difference, legally there's an enormous difference, and I worry about that difference on the internet, because you often can't see it. And I gather you have this worry despite the fact that you might agree with the claim that this day in 2014 the internet offers more freedom, more affordances to people to express themselves, to gather, to petition than at any time in the past. The slope of the curve is still going up, yes? The slope of the curve is going up in some countries; in others it's not. It really depends where you are. We happen to be very lucky. If you were in, I mean this is not internet, this is a cell phone, but if you were in Ukraine and participated in a protest, you got a cell phone text message saying, hey, we noticed from our location tracking that you were there, and we know that. And that's an incredible chilling effect from the fact that someone is controlling the space. 
But for your initial story to have the foreboding that it carries, you're making the further claim that what we see in certain regions could end up being the trend worldwide. I think there is more of a trend toward control, certainly more surveillance. The trend I worry about the most relevant here is the corporate trend, that corporations are controlling what we conceive of as our public conversation space. Email is very much an unowned internet. We can send emails to anyone we want, and there's no one controlling it, in a way that Facebook messages aren't. And as Facebook messages take the place of email, or Twitter, or whatever system we're using, it is an owned system, and it's often an owned system where we are the product being mined for someone else's profit, where the motivations are very different. And that's really what is concerning about the unowned internet versus the owned internet. Ebele, maybe I should skip over Ben for a moment and come to you, because you are here at the moment, I guess this very instant, representing all of the corporate world on this panel, for which both Yochai and Bruce have laid a number of issues at your feet. So tell us what it means to direct business and social responsibility at Yahoo. Sure, so I direct the Business and Human Rights Program, which is really focused on the human rights that we at Yahoo impact most directly through our business, the human rights of privacy and free expression. And when I say privacy, I'm talking about privacy vis-a-vis government requests for user data. So essentially what I do is at the sweet spot of everything that we're talking about now. And I think what's interesting about these conversations is that when I first started this job in September of 2008, when I would tell people what I did, they would say, oh, that's about China. And I'd say, you know, actually this is not an issue that's specific to China. This is an issue that we face everywhere in the world. And people would say, ha, ha, no, that's not possible. 
I mean, there are good countries and there are bad countries. And the conversation around the NSA has been a gift, because when I would say, no, there's actually no such thing. I mean, there are ranges of good, there are ranges of bad, but when it comes to governments being interested in access to your data, I wouldn't say that there are good countries and bad countries. So this conversation has been very helpful, because I think it has helped people recognize that, certainly from a company perspective, this is an issue we have faced all over the world, and in fact the countries that people consider to be good countries will often have more scope to do things than some of the countries that we normally consider to be bad countries have. In terms of corporate control, I find it very interesting, because one of the things that we are told, related to government access to information, is, we've heard people say, well, if you're in a particular jurisdiction or country, you, company, should push back. And one of the things that my job is around is, how do we, in a rule of law context, determine the best way to protect our users while still respecting the law, because I think you have to have that respect for the law. When I hear people talk about the power that corporations have, and then at the same time I hear people saying, well, to the extent that you think of a law as bad, you should commit civil disobedience as a company, I think that that's a very interesting space. Because when you think about how corporate social responsibility started, it was because companies were not respectful of the law. And so it's interesting at this juncture to have conversations around, well, corporation, you should disobey the law. I mean, do we really want a corporation, with shareholders, with administrators who are elected by no one, essentially making rules that are in conflict with governments, which presumably are elected by citizens? 
And I think that is a very real conflict. And I think talking about that in a way that's an intelligent way, as opposed to saying things that really aren't feasible, that is sort of the sweet spot of what I do. So your point in part is, if a company like Yahoo were to be doing some privacy practice to which many people objected, and some plaintiffs' lawyers in Beaumont, Texas immediately wanted to file a class action suit, and the FTC is interested, those who are privacy protective would say, Yahoo, how dare you think you are mightier than the Federal Trade Commission or than the district court in Waco. And you would say, no, no, we're not, we're not, we'll litigate. But at the same time, if the entire nation of China, through its non-elected representatives, should ask Yahoo to do X, Y, or Z, those same people might be saying, Yahoo, hold fast, barricade the door, ignore the order. Right. And that seems strange to you. It is inconsistent. Uh-huh. And I think it's inconsistent. I think it's also, you have to think about, do you really want a world where companies pick and choose what laws they adhere to? I think you can think of scenarios where you'd say, yes, that totally makes sense. But you can think of many, many other scenarios where that would be an incredibly bad thing for human rights. I mean, imagine if you had, think of whatever country in your mind is a bad country. Don't tell us, we're gonna guess. And that country had a company and they came over to the U.S., and they said, you know, in my country, we're sort of against this thing that in your country is a good thing. And we have decided in our own wisdom that we're just gonna kind of disobey your law, because we are above the law. I don't think you want a world where companies determine for themselves which laws they follow and which laws they don't. 
And cutting a little bit to the chase, over the time you've been grappling with these kinds of issues, have you come up with a framework that you figure is not a bad one for reconciling these things? So I think a couple of things. I mean, one, we have joined an organization called GNI, which is the Global Network Initiative. And it's a multi-stakeholder organization. It has academics, Berkman is one of the members. It has NGOs, so human rights organizations. It has companies, it has socially responsible investors, really to help us think about these very difficult issues. I don't think anybody can say, this is a rubric that all companies should follow, but what we can say is, you need to understand, before you enter a market, you should understand the laws that are relevant to your business that could have an impact on human rights. You should do human rights impact assessments, and to the extent that you do those and you've identified risks to human rights, you should put into place risk mitigation measures. And some of those, so I can say, particularly in Yahoo's case, we learned a lot from China, and so post that, when we entered Vietnam, for example, we did a human rights impact assessment. We determined, yes, it is a net benefit to offer Vietnamese language services, but in the case of Vietnam, what will we do? We will not put data on the ground. We will not, in terms of Vietnam, we will not have people on the ground who have access to user data, and we will not have people on the ground who have editorial control. And we will make the service subject to another country's law that we think is a little bit less restrictive, so that we can make decisions that aren't directly impacted by Vietnamese law. And whose law did that turn out to be? Singapore, which is, anyone who knows anything about Singapore law, particularly anything that's going on now. I mean, who hasn't been sued in Singapore, right? 
I think you will start to see the difficulty of it as a company. I mean, you can pick a law that's better than other laws, and so Singapore, because they are, certainly at the time were, very interested in business, and so they had laws that were different than Vietnamese law. But wait, so are you revering this practice, the practice of serving up the Vietnamese service from Singapore, kind of like flagging a Panamanian tanker, or incorporating in the Cayman Islands? So I wouldn't say it's revering, but I think this points out the difficulty that companies face in trying to do the right thing in this space. So I mean, and that's why I said, I don't think that there are solutions that are perfect for every company in every context, but I think doing the work of thinking about the problem and trying to organize your business in a way that is as respectful of human rights as possible, that's the best that companies can do. And I think the other piece is being very much engaged with NGOs who do this work, so that, because there are always crises, as crises come up, you can make decisions on the fly that are informed by people who understand the human rights implications. Got it. And last question before we turn to Ben. If an engineer were to burst excitedly into your office and say, Eureka, we have figured out a means of encrypting everything on the wire and on the server for all of the communications of our Yahoo customers, so that even we can't read their email. Even for quality assurance purposes, we must assume we cannot guarantee quality, but their email will be secure. And that means that government requests could fall like snowflakes around us and they would only melt. Is that, I don't know why I thought of that metaphor. It's the melting we're hoping for. Is that an exciting moment for you, or is that kind of like an aw-crap moment? 
Well, let me ask you, I mean, so to the extent that someone is using the interwebs to commit a crime, do you want a world where that person or persons can use the internet to hide? I mean, do you want a world? And it's funny, because whenever I do this, there have been like Twitter things where people say, ooh, let's see how long it takes before someone mentions child pornography, as if that doesn't really happen. Has anyone here worked at an internet company? Okay, so to the extent that you've worked at an internet company, if you know anything about security or if you know anything about sort of the backend, it's crazy, because people think child pornography is not a thing; it is actually a thing. It is a huge thing, and even the smallest services, as soon as you come up, if you are a service where people can save images or people can save video, it's a thing. Like, people will find your service and they will use your service to do that and all sorts of other things. So I guess my question is, as a community, do you want to create a space that is completely immune from any sort of law enforcement? And I don't know, maybe- I think we call that a question in the form of an argument. Yeah, but I would really add, I mean, and there may be people who want that. Yes, and the question you're pointing to then is, if we try to make it as simple as: the internet we want is one in which two or more entities can exchange bits with zero opportunity to inspect it in the middle as part of the fabric of the internet, or even to crack the endpoints and inspect it, without really doing some analog, physical-world shoe-leather work, is that an appropriate horizon to aim for? Or is it a world in which, with warrants, with process, stuff can be found? I don't think that people who say that's what they want would still want it if they had a world where that was actually true. 
So it's one thing to have services where that's possible, but to have a complete internet where that is the way the internet works, I don't think that most people who say they want it actually want it. But having said that, I think there are services that offer a certain level of security. I don't know that a commercially available service that's subject to the law can do that. So for example, if you are operating in the U.S., you cannot create a service that makes it so that the government cannot serve you with legal process. Interesting claim, interesting claim. I think it may actually be different in Europe, where there are data retention requirements, and the U.S. actually, oddly enough, are... No, you don't have to retain data necessarily. That's what I mean. But you do have to, you cannot create a service where you don't respond to legal process. I'm not so sure about that in the U.S. I don't know that the crime dog has discovered in the U.S. yet the lever of saying you may not create such a service. But luckily we have some experts who can answer this who aren't uncomfortably also in a moderating role. No, no, no. I think... No, no, I think maybe I'm misunderstanding. There, you cannot create something in the U.S. and say that I am immune to legal process. Oh, no, no. I agree. That's what I'm saying. The legal process can land, but the question is, to put it in a ridiculous analog metaphor, could you build a safe for which, if you don't have the combination, an attempt to drill through destroys the contents inside? And I imagine as the bank, you could be in receipt of a subpoena that says show us what's in the safe, and you're like, I'd love to, but you know, we wired it in a really good way. The engineer was so excited. We can't get into it without the combo. So, and I, because I'm thinking of the example of, and I cannot remember the name of these two email services that were created. I know that... Lavabit is one. Yes. And exactly, but where is Lavabit now? 
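The "wired safe" metaphor maps, in software, to encryption under a key that only the client holds: the service stores ciphertext, and when legal process arrives it can hand over only opaque bytes. The sketch below is a toy, standard-library-only illustration of that design (the function names are hypothetical, and the SHA-256 keystream merely stands in for a real vetted cipher such as AES-GCM; do not use this construction in practice):

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 over key || nonce || counter, in counter mode.
    # Illustrative only; real systems use vetted ciphers, not this.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def seal(key: bytes, plaintext: bytes) -> bytes:
    # Encrypt with a fresh random nonce; return nonce || ciphertext.
    nonce = secrets.token_bytes(16)
    ks = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def open_sealed(key: bytes, blob: bytes) -> bytes:
    # Reverse of seal(): split off the nonce, regenerate the keystream, XOR.
    nonce, body = blob[:16], blob[16:]
    ks = _keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, ks))

# The client keeps the key; the service only ever stores `blob`.
client_key = secrets.token_bytes(32)
blob = seal(client_key, b"meet at noon")
assert open_sealed(client_key, blob) == b"meet at noon"
# Without client_key, the service can only surrender unintelligible bytes.
```

Whether a commercial provider may lawfully architect itself this way is exactly the point being argued in the exchange above; the code only shows that the architecture is technically straightforward.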
Because the owner, because the people, the owners said, we do not want to be in a position of responding to process. It's because they were in a position to answer. So that's because they hadn't designed the service to not be in a position to answer. I see. But this is a great opportunity for a moment to bring Ben into the conversation in an unexpected way for him. Because Ben, your expertise, especially of late, has been really mastering some of the questions around the NSA, the Snowden revelations, and the actual scope and nature of the activities there as best you can. And I know we're gonna lean on you for that, but I'm curious if you have a view on the question Ebele put about what are we aiming for here? What kind of internet do you want? Well, so I am sort of agnostic, honestly, about whether the internet... You're unsure the internet exists. Well, I'm unsure of the question of how threatened I would feel by either the internet being truly unowned in the sense that you describe it, in a perpetual way, or by the internet ceasing to be unowned in the way that you also describe. What I'm not agnostic about is the question of whether you could have, meaningfully, over a long period of time, an unpoliced internet. And I know I'm gonna make myself very unpopular in this room by saying this, but let me speak on behalf of the Leviathan for a minute. And the idea that you are going to create, over the long term, a giant space of human interaction in which people engage in all sorts of communications, transactions, financial transactions, combat, games, and sex, and you are not going to have any meaningful source of authority except self-policing, I have three things to say about that. One is Somalia. The second is the FATA in Pakistan. And the third is the tribal areas of Yemen, right? Ungoverned spaces don't work. And they actually really stink. And they're terrible for the people who have to live in them. 
And I do not look at the current set of controversies and say the problem here is that we are moving in a direction of too much authority. In some parts of the world, we are definitely moving toward too much authority. But in some parts of the internet, the problem is that we have too little authority. And I think you have to look at the system, first of all, in terms of its component parts. And secondly, in terms of the aggregate level of goods and evils that can be produced on it, and the proper answer to the amount of governance. And here when I say governance, I don't mean it in the sort of utopian sense of the people coming together and governing themselves. I mean it in the sense of actual government and authority. The proper amount of that is not zero. So that's where I would start. I wanna say, just candidly, I come to this conversation from a very different place than some of my co-panelists do. I am not an internet theoretician in any sense. I am a journalist by background, and I got involved in this whole area originally in the early 90s, when I became inexplicably fascinated by an institution that then nobody, really nobody, had ever heard of, called the Foreign Intelligence Surveillance Court. And I was fascinated with it because I was a young civil libertarian journalist who couldn't believe that it was not COINTELPRO. For those who weren't young then because they didn't exist then, do you wanna just say a word about COINTELPRO? Oh, COINTELPRO. Because that sounds like what's inside our laptop, kind of thing. And maybe not. My COINTELPRO was an i7. So COINTELPRO was an intelligence program in the bad old days of spying on civilian dissidents and civil rights leaders and all kinds of other people who should never have been subject to espionage domestically in this country. 
And I was incredulous that there was a secret institution authorizing electronic surveillance, and by the time I got interested in it, also physical surveillance, in a secret room in the Justice Department. And I actually, I believe I'm the only journalist who's ever walked inside the FISA court, which happened because I asked, and the people who were running it were so proud of it that they kind of looked at each other when I asked and said, well, sure. And that hasn't happened since. And were you convicted? What'd you do? Looked around. I mean, it's actually just a room. Yeah, right. You know, this started a long process for me of thinking about why is it that an institution like that, that from the outside we look at so skeptically and that seems so offensive to the way we imagine civil liberties to work, is, to the people who work on it, work for it, work with it, a matter of extraordinary pride. And over time, I confess, I have come to identify with them. And I don't believe, to answer your question, to return to your question, that if we set up the world that you describe, in which the average transaction was not subject to meaningful enforcement in any way, I think that would be a horrible world to live in. And whether it's done for utopian reasons or for malicious reasons, the result would be the same. Can I just say, I'm just very, very curious. Are there examples in either the real world or the virtual world where you've had humans all come together with absolutely no rules? Burning Man? And you had children and you have sexual minorities and you have women, and with absolutely no rules, everyone has been safe? I just, I- Great question, and Yochai and Bruce are nearly jumping out of their seats to answer it. The question isn't no rules. The question is the relative weight of a particular set of organizations and institutions. Relative to others, to social arrangements, to this kind of local arrangement, to that set of technical standards, how much of what we do. 
The idea that the national security establishment that brought us the torture program, the Iraq war and its fundamental strategic error, the failures in Afghanistan, and the domestic surveillance system is the correct location where, with pride, you will define the accurate position on the range from absolutely zero policing to a police state is laughable. Oh, and I think that- That is not what, so in my conversation, I think what I started off by saying is that this is something that we, and I personally, had identified as an issue. So sort of the NSA's, the US specifically, had been identified as an issue. I think what I'm responding to is, there cannot be, not that there cannot be. I am always struck when people say, well, we would like to create some sort of thing where it cannot be policed at all. Because then my question is, one may disagree with the way the policing is being done, and I do, I really do. But is the solution then no policing? And if we say there should be policing, I think then this is where I go back to having a realistic conversation about what that should look like. Well, and I should say, and then Bruce will definitely be getting in. I should say that invoking extremes can sometimes not be very helpful in a conversation, because it becomes a straw person that nobody's advocating for. On the other hand, it can sometimes be helpful, because it can be a way of testing to see what page this group is on the same of. That sentence didn't end the way I wanted it to. It's a preposition at the end, you should never do. But it's a way of figuring out, for example, really, what is the horizon that we would aim for? Do we have shared values and the devil is in the details? Who should hold the sheriff's badge, and to whom can it be entrusted, and what limits should there be on the watchers? Versus, is it entirely possible to build a new configuration, because of the dynamics of this space, that might not need that kind of thing? 
So I know Bruce, when I think of this unowned internet that we would like, what I think about is the internet before, like, 1997, before the FBI figured out how to use it, before corporations figured out how to monetize it. That's fundamentally the unowned internet; when people say this is what we want, that's what we want. The internet that was created by the government? Right, the internet was created by the government, completely clueless, unpoliced because the government had no idea. And we know about the court cases, for those who paid attention. You know, Steve Jackson Games, where the FBI seized their servers because they wrote a game that talked about hacking, and it wasn't hacking, but they didn't know the difference. We had a lot of very clueless government. It was only recently that governments figured out how to police the internet. So the unowned internet, that thing that you're saying would be so bad, we've experienced, and we kind of came out okay. Of course the degree, scope, and amount of activity has changed dramatically. And I think it's a fundamental error, and we all do this: we think of the internet as a separate thing. An internet that is unpoliced, so what does that actually mean? The internet is used by people. People are always policeable. We're looking at one particular communications channel, and even today, quite a lot of internet crime is solved through non-internet police investigation. It's not that the child pornographers are eavesdropped on; we figure out who they are, we go get them. It's that people are followed, and normal police procedure works. 
So Bruce, just to nail this down here, let me ask: are you then saying you're okay with a zone where the bits move among the parties, not surveillable no matter how much legal process you throw at it, and from whatever sovereign source, because you figure there's analog policing and other tactics that governments could use, basically as if they were users of the internet and not privileged; that's sufficient for you to exercise the sheriff's prerogative? It's a couple of things. One, basically yes, although there are digital techniques too; what we saw from the NSA's TAO catalog was a whole lot of government hacking techniques. The benefit is these are targeted against the bad guys, against someone. And you're cool with that. I think that is the right way to do things. What I dislike is surveilling everybody to get at the few. But fundamentally, we have to- Wait, wait, wait, wait, this is so key. You're saying so long as it can be hacked, the government can do it, but if it couldn't be hacked, you shouldn't be able to present a subpoena and get it. You could present a subpoena, but technically, like you said, you should be able to build a system that is subpoena-proof in some way, and I can- But if it isn't subpoena-proof, then go to town. So it should just be like the cat versus the mouse, and may the best one win. Which is effectively the arms race that's happened all through human civilization. So that's not- You have to decide whether the good parts of society outweigh the bad. Criminals can use roads; criminals can use cars to get away from crimes. We could decide to put a governor in a car that will keep speeds under 50 miles an hour, and I can invent reasons why it would make us safer, because it'll prevent crimes. We decide not to do that, because the positive value of the technology enormously outweighs the negative value. But if you could hack the car to do it, you'd be fine. If you, so targeted, targeted police investigation. 
We get a subpoena, we get a warrant, go after the person, serve it against their home, against their computer. Those techniques are what we use, and we have a lot of security built in to ensure that that is not abused, in countries that have those. So we know how to secure ourselves from government abuses in that way. We don't really know how to secure ourselves from government abuses in the broad surveillance, the broad control context. But- What do you think of as the fishing expedition? The fishing, but you know, there is this phrase, that the price of liberty is the possibility of crime. We have to decide whether the positive benefits of anonymous communication, of free communication, of secret communication outweigh the negative. I think throughout our human history, it's been shown again and again that the value of that is so great that we're willing to live with the crime that comes with it. And it's something we have to decide. I wanna let Ben and Yochai get in, and then we'll open it up. Ben. Go ahead. No system sufficiently complex and open to permit people freedom to change and learn and experiment can be perfect. Imperfection is a core dimension of freedom. And so when you talk about imperfect policing, the thing that Bruce was focusing on was not so much the arms race and subpoena or not, but the bulk and everything, and the ambition for perfect knowledge, and therefore the potential for perfect control, and at least the fear of perfect knowledge and perfect control. As opposed to accepting systematic, sometimes even stochastic, and we don't know where the imperfection will be, but imperfection pervading the system is a central part of how we avoid authoritarianism. And so yes, that means sometimes terrible things will happen. 
Decrying the failure of cities, crime waves in cities, the failure of community, and all of the niceness of the pastoral ideal has a lot to do precisely with the scale and imperfection, but also with the creativity and freedom, associated with cities. And until we accept, so that's why I reject the idea of zero policing, but I also reject the ideal of using every piece of technology and law possible to be able to prevent any single crime. We need imperfection, we need it there all the time, and we can't do without it. And you can argue that social progress is based on it, that being able to break the law is how we improve a society. That's how we get gay marriage. That's how we get legalized marijuana, through people trying it and saying, you know, that wasn't so bad. If you had perfect enforcement, we could never improve a society. I'm just cognizant that you also wrote a piece in which you said, thanks to these many technologies, true outliers, Ted Kaczynski types, may be in a position to wreak far more havoc than the Unabomber did. And I remember your piece ended with, and that's why we're all screwed. I'm paraphrasing. And I think this is the fundamental paradox we have. And so I think of it as a question of scale. And I'll just make some numbers up. If we in a community decide that 10 robberies a week is okay, right, that that crime rate is acceptable, that means we can have so many robbers. But if robbery now gets more efficient, so that a robber can rob 10 times as many houses as before, we can now have only one-tenth the robbers to keep the same stasis. 
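Bruce's made-up numbers reduce to a one-line formula: if a community tolerates a fixed rate of robberies, the number of robbers it can tolerate is that rate divided by how many robberies each robber can commit. A minimal sketch of that arithmetic (the figures are invented, as he says):

```python
# Sketch of the scale argument above (numbers are made up, as the speaker
# says): a community tolerates a fixed number of robberies per week, so the
# number of robbers it can tolerate falls as per-robber efficiency rises.
def tolerable_robbers(tolerated_robberies_per_week, robberies_per_robber):
    """How many robbers can exist before the tolerated crime rate is exceeded."""
    return tolerated_robberies_per_week / robberies_per_robber

# Baseline: 10 tolerated robberies per week, one robbery per robber.
assert tolerable_robbers(10, 1) == 10
# Technology makes each robber 10x as efficient: only one robber is tolerable.
assert tolerable_robbers(10, 10) == 1
```

The point of the division is exactly the paradox he describes next: as per-actor damage grows, the tolerable number of bad actors shrinks toward zero, pushing policy toward prevention.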
So as the amount of damage a bad guy can do goes up, the number of bad guys we can tolerate goes down, and the more we're pushed into this very draconian preventive policing, which is fundamentally futile. Now, I'm not sure of a way out of this, but either I've just described a fundamental limit of technological advance on par with Darwin's law, or I've made some mistake here and we can figure out how to keep a free society even though there are weapons of mass destruction. That's the weapons of mass destruction debate: even one weapon is so damaging that we must do everything possible to prevent it. That is fundamentally impossible. So I mean, just to be clear, I am not talking about a world of perfect enforcement. I think what you guys are talking about is a world in which there are very significant zones of impunity, and in which there's no capacity for enforcement at all. The concern, and I think you each, using different examples, brought up examples that don't really support your point. So a road system is a highly policed system, and it's policed by surveillance. It's policed both prospectively and retrospectively, and there is a lot of preventive enforcement. And every vehicle on it is advertising its identity through a very crappy pseudonym, often one that you select yourself. And every vehicle on it is licensed and tagged, and every person driving the vehicle is licensed and insured. So you have a whole lot of mechanisms there that make sure that the road system, not that there's perfect enforcement, but that there is capacity for enforcement. The example of a city, which I think is a very good analogy for the internet: think of New York City or Boston in the 1970s and 80s. There was a big problem that a lot of people perceived with underpolicing, right? We of course didn't call it surveillance, because surveillance is such a nasty word. But we felt that there weren't enough police on the street watching things. 
So we invent nice words for it when we conduct surveillance in public places. We call it community-oriented policing, right? And we call it broken windows policing. We don't call it dragnet surveillance. But what it amounts to is large numbers of people representing authorities standing around, watching spaces to make sure that nothing is amiss. And they deter what you would call eccentric behavior, or potentially, you know, of course they deter crime, and they deter a lot of other things too. There's good and bad associated with that kind of policing. And what we're talking about really, when we talk about whether you're going to build the capacity for that kind of policing into the net or not, whether you're gonna build it into the architecture, is whether you want the capacity for that kind of calibration of surveillance. I don't mind the word surveillance. I think there's lots of good surveillance, there's lots of bad surveillance, and it's not enough to call something surveillance. You know, if you go to the CDC and you talk about disease surveillance, that's a good thing, right? They want surveillance. And so to me, it's not enough to say that we're increasing surveillance. You have to ask: are you increasing surveillance of a type that threatens liberty? Are you increasing surveillance of a type that protects liberty, by precluding or diminishing the sort of activity on the part of non-governmental actors that meaningfully threatens it? And I think, I accept both of your analogies, but I don't think they support the points you're making. And the one thing that I'll add to that, I agree with everything you said. The one thing I would add, because of my role in human rights, is that we get a lot of conversations around the internet as a safe space. And so specifically, not just in the US, although it's a problem in the US, it's a problem around the world, around women and the internet. 
So if you look at numbers around women having voices on the internet and what happens to them afterwards. So, and it's a very interesting phenomenon to see: when women are public on the internet, what happens to them afterwards? So the threats of rape, all kinds of things that go on in sort of that space. And so when I think of a completely unpoliced space, I think of a space that is less safe for women, that is less safe for sexual minorities, that is less safe for unpopular opinions. And so that to me is a direct threat to human rights, and it's a direct threat to free expression, if you have an environment where there is no authority. Now, that does not mean that I don't think that there are abuses, and that doesn't mean that I don't think that there should be strict control on the controllers, but creating a type of space that is incapable of being controlled, I think, is a direct threat. It's just a very interesting reminder that conformity can lie on either side of this crude spectrum we're talking about. That if what we want is creativity and unself-consciousness, a police state isn't really known for that, nor is a zone in which it's really hard to, yeah. I know you want to go to the room, but just a little bit of facts, since I actually did the research on this about three years ago: there's not a single competent study that relates the decline in crime either to community policing or to big data. There's a lot of self-serving belief in both of those, but if that's the theory, then there's better data on the use or non-use of lead paint than there is on community policing or big-data policing. Look, my point isn't about the effectiveness of community policing. What else could it plausibly be? No, my point. I took his point to be that depending on how you label it, the public might swing largely behind it or against it. That's certainly true. 
And also, no, no, but also that the proper amount of surveillance to secure a space will, to some degree, depend on, we don't generally think about the security of open spaces and public spaces as areas where surveillance plays no role. But I guess Yochai's point is that that seems to be, in many cases, the case, that surveillance does not play a role in securing the space. That the value comes from investigation of events after the fact and the deterrence effect. And the meta question may be to what extent whatever disagreement rests here lies with how effective surveillance is. Because it may turn out that even if it were effective, it would then have that many more bad qualities that would make you skeptical of it. And if it's not effective, then your point is it's not effective, so don't do it. So either way, don't do it. Yeah, and I think when I think of it, I'm not just thinking about surveillance. I'm thinking about some sort of rules, some sort of government. The ability to intervene. An ability to intervene and to enforce laws that we as a community decide are relevant for us as a community. Great, let's open it up. I see a hand here, a hand here, you were up earlier. Yes? Can I ask two questions? Oh, just one to a customer, please. Well, the tiny one, just a clarification question, is, I heard a lot about subpoenas and how it should be subpoenaable, but then destructible. But so then, I guess my question is, could the subpoena include a password? Like, you must provide a password so that it can be unlocked. And you mean directly to a user. This isn't to an intermediary now. Oh yeah, to a user who might be at risk of incriminating himself. Yes, yes, yes. I don't know if anybody wants to speak. I didn't understand the question. So the question is how a subpoena comes in, or? 
No, well, I should let you clarify if you want, but my understanding was there have been these instances in which people in the American context have been asked to divulge their password so their hard drive can be properly looked at by authorities, maybe at the border or in some other circumstance with a warrant. And if the person refuses to divulge, should that be protected, say, under a Fifth Amendment privilege? Is that what you're thinking? I can talk a bit about that. I mean, in general, I think this is an example of technology outpacing law, that our laws were very much written in a different time, when the things that were important to us were physically close to us. So laws protect our bodies, our homes, our cars, and there are different rules. And for the things that are close to us, there are more stringent rules for the police getting access. We're now living in a world where the things that are close to us are on Google servers. And the rules haven't caught up to that. So when you think about being asked to divulge your password, the equivalent is being asked to divulge your papers, your email, the things you're protecting. But the technology is that it's this disembodied thing. And actually, the rules are different if it was a password or if you used a thumbprint, a biometric, which is ludicrous on the face of it, but because the rules are antiquated, they haven't caught up. But would you harmonize that by making it then harder to get the thumbprint or easier to get the password? So to me, conceptually, those things behind that password are my papers. And the rules should be equivalent to if that was a stack of paper I'm holding in my hands. But let's just make it simple. Fine, they get a warrant. They have probable cause. Three magistrates have signed off on it just to be sure. Now, must you, in a just world, give up the password on pain of being thrown in jail? The rules should be the same as giving up that stack of paper. 
The password equals a stack of papers. In which case you could be sent to jail, then, in Bruce's world, for not giving up that password. I want the rules harmonized. It shouldn't matter what the medium is. Yep. Yochai? I'd like to clarify. It's more like subpoenaing a memory rather than a piece of paper that is in your possession, which is kind of where the Fifth Amendment and incriminating yourself come into play. And that's another reasonable analogy. And it's true. Quite a lot of things I used to remember are now stored in computer files. I've forgotten how to remember things because I don't have to anymore, except I don't have a search. Yep. Any of you on this? Nope. I actually completely agree with Bruce. Note the moment. We will not see its like again. And I think doctrinally, by the way, the way it's been worked out is whether it's treated as an external thing or an internal thing; a prosecutor can give qualified immunity for the fact that you knew your password, for instance, which would tie your identity to that hard drive that otherwise doesn't have your fingerprints on it. You would be immune from that link being used against you in the law enforcement context, but they get to use all the contents of the drive against you. I think that's tended to be, so far, the way the doctrine has worked out in the US. Over here. And there are two other mics too, so feel free to summon them and then we can rotate around. Yep. A lot of the discussion has been about sort of social order and crime, and it strikes me that absent from the discussion has been the role or responsibilities of citizens. And as I think about it, I have a sense that some of the tension in the scenarios that are being described has to do with looking at the technical aspects of a moral and ethical set of issues. 
And so I wonder, sometimes when I think about it, my sense is that the solution to a lot of these problems is actually moral and ethical education, such that fewer people, in the tension that Bruce described, actually want to be criminals in various senses. There are economic issues; maybe then they don't need to. And so I wonder to what extent you all think there are parts of the equation that fall on the side of the surveilled, or the citizens, that are part and parcel of solving some of these issues. Any thoughts? I think a lot of the work, again, coming back to the question of the city and the decline in crime, a lot of the work has tried to parse out, and there's not very clear evidence on what the components were, what has come with changes in economic need, changes in culture, changes in social norms across multiple domains. So the real answer is there's no 'the solution.' Part of the problem is trying to load any single system with being the solution. It will tend to go overboard and either fail or overreach or both. So there's no question that some level of a sense of one's own responsibility, to look around, to be safe in certain ways, to observe things, is part of the story both in the real world and online, but it's also not gonna be the solution. The only solutions we have are multiple, intersecting, and they will be imperfect. There is, I guess, the interesting, the strong version of that point might be: in a world of perfect enforcement, where to stray is to immediately invite correction and punishment, it's really hard even to inventory whether one is ethical, because you never know if you're restrained because of the Skinner box or because of your own compass. But again, that is a far world from the one we have now. Microphone over here, yes. 
Okay, from my point of view, most of the crimes, like the economic and physical things, would be in meatspace, and traditional policing can probably take care of that, and what's frightening about internet surveillance is that it allows more to go into the regulation of speech and thought. So if you accept the hypothetical that the internet were only speech and thought, would you be willing to accept a rule-free zone for speech and thought only, leaving out the economics and meatspace implications? Well, can I just ask back for a moment? Sounds like it might be for Ebele in particular, but let me just ask back. Is it only speech and thought when somebody sends that invective-laced email that is the 47th of 48 that afternoon, and there's absolutely no way to figure out who's behind it and how to stop it? As a hypothetical, let's say, just as mine was a hypothetical, because meatspace does come into it. Yes, of course. Let's say yes. Uh-huh, Ebele? So, I mean, I'm gonna start by saying, unfortunately, that's not the world we live in, but putting that aside, let's think about a world where, to your point, someone can send harassing emails, or someone can go on someone's Facebook page and write all sorts of stuff about what a horrible person you are, horrible, horrible things. Now, do I want a world where there isn't some sort of mechanism, and it may not necessarily be surveillance, do I think that's a safe world, where someone can use the internet to harass someone? I mean, no. Now again, is surveillance the answer? No, maybe it's community rules that make it so that that person can no longer use the service. But I think that we are beyond the space where the internet is just about thoughts and feelings and penalizing thoughts. I think people can use the internet to do things that have actual harm. 
I think even more importantly, it's not just that we don't live in that world; I don't think we can possibly live in that world. I mean, human beings are fundamentally a species of communicators. That's what we do. I mean, that's kind of why we're all here right now. That communication always has real-world effects; you can't separate it. And whether it's emotional or commercial or political, that's what happens. I mean, the world we live in will be a world of communicators, and that communication will always have real-world effects. You can't separate them. I also, so I agree with both of what my co-panelists just said. Nice, I know. But I wanna commend your question, because I think it's actually a very challenging one; it forces you to actually focus on how you do or don't value espionage as opposed to law enforcement. And I just wanna take a moment to pause over that. Law enforcement is fundamentally about figuring out criminal activity, but a lot of espionage is actually just about finding out what people are saying and thinking. It actually is a purely communicative capture. And I think that if you isolate it, you say, okay, we're not gonna think about any crimes. We're not gonna think about any damage that you could do. It's a purely communicative endeavor. Then you sort of have to look and you say, well, first of all, I'm not sure how you do that, because people plan crimes, right? The communication may not itself be the crime, but you still have some law enforcement interest in capturing the communication. But now think about the true national security situation. You're trying to figure out what another government is doing. You're trying to figure out what options you have going into a negotiation. The mechanism by which you do this is by capturing and listening to communications, by stealing people's secrets, right? 
And that is not necessarily, the transmission of those secrets may not be criminal. It may not be improper. And it may be purely reasonable to wanna know what those people are saying. I think that question, how you weigh that need, and I will talk about it as a need, against the desire for as open and free a platform as possible, is a much harder question if you take out the activity that you itself wanna prevent, wanna stop. And I think your question's a really good way of thinking about what the value of espionage is. I'm really happy you brought up the espionage question. You mean happy with a lowercase H. We can see the crocodile tears. And particularly because you talked about the need of governments to find out what other governments are doing. And I think this really places the finger in the right place about what it is about the series of revelations we've gotten from the Snowden leaks and the follow-up releases of documents, which have to do with the blurring of the lines around what everybody understands to be espionage, which is why the anger over listening to Merkel's phone, as opposed to large-scale population surveillance, was bizarre. And the idea of centers of power that have a particular set of capabilities, in terms of imprisonment, in terms of extremely disruptive investigation powers, transitioning from espionage, listening to another government, or from tailored bugs based on a subpoena or not, to large-scale, systemic, population-wide awareness, really changes for the entire population what it is that it's about. And it's that idea. 
On one hand you have the blurring of the civilian and the military, through, on one side, terrorism, and the expansion of the sense of the cyber attack to something that really is the same set of systems, as opposed to distinct military and civilian systems; the complete annihilation of the distinction between combatant and non-combatant, who's a potential threat and where they're located. And all of these really feed into a conception of espionage that is about total observation of entire populations. I think that breaks down the espionage framework, and the way that you've described espionage really helps to see that, and that's the deep concern. So there's a lot in there, and let me just say a few things. One is, the NSA has always sought to vacuum up everything available to it within its lawful authority, and sometimes exceeding its lawful authority. There is simply nothing new about the idea that the National Security Agency takes a vacuum cleaner approach to overseas communications. And Ben, you're fine with that. You'd be happy to see that continue. Yeah, absolutely. And you'd be happy to see it continue to new technologies. How many people are quantifying themselves in some way? Anybody with a Fitbit, a Basis band, a Jawbone? All right, the cutting edge is here. The rest are like, I don't know what you're talking about. Product placement moment. If they're hoovering that stuff up too, is that okay? If they're getting a sense of whether this room is getting riled by the increased heartbeats? Hang on a second, because if the question is, do I have a problem with the acquisition of Fitbit information? Now, I can't imagine what foreign intelligence value that may have, but I can't. As opposed to Angry Birds? And I'm wearing one. So, but if the question is, do I have an in-principle problem with the acquisition of Fitbit information, the answer is no. If the question is, do I have a problem with surveillance against people in this room, the answer is absolutely. By who? 
Surveillance by, can the UK surveil this room? You know what, the UK, there is no rule that says the UK, I mean, except that GCHQ may have an arrangement with NSA that they wouldn't do a thing like that. But if the Russians are conducting operations against your Fitbit, that's maybe in violation of US law, just as half of what we do is in violation of Russian law, but that's the world of espionage, and there is just nothing new about it except the volume of information available for the taking to groups that wanna take it. What is profoundly different, and this is where I agree with you, is that the general engagement on the part of the average person with devices that will tend to involve communications internationally, which may cause that person to be subject to incidental acquisition in the course of that vacuuming up of a lot of stuff, is dramatically greater than it used to be, and those bodies of data, the foreign and the domestic, are much more intertwined than they used to be, and so the impact of NSA activity on the average person is theoretically much greater. Now, why does this not scare the pants off me? 
Right now I'm not at home; I live in the District of Columbia, and I have absolutely no assurance that the DC police are not at this moment knocking down the door of my house and going through my stuff, and yet I am really, really confident that that isn't happening. And the reason I'm really confident that that isn't happening is, A, that the rules actually forbid it, and I know what the rules are and they forbid it, and number two, that there is a series of compliance mechanisms that I am confident, if it were to happen, would kick in and allow me recourse, or in retrospect prevent it from happening again. And that is exactly the same reason that I'm confident about my Fitbit data: not that it's not somehow showing up in some NSA database, but that it's not meaningfully being used in a way that threatens me at all. So I think there is an enormous difference that comes from the fact that there's so much more data, that we as citizens in this era produce so much more data as we go through our lives. Everything involves computers; computers produce data. The difference can be seen in the Canadian story from last Friday. The story was about the Canadian NSA, I'm blanking on the acronym, CSEC or the other way around, looking at Canadians at the airport. What that report really was about is that the Canadian researcher, who's a tradecraft developer, coolest job title ever, was trying to figure out if you could take massive amounts of surveillance data and figure out which IPs were airports, hotels, coffee shops, offices, and if you could use the data of individuals appearing geographically, because you know the locations, to find people. And there's a scenario about a kidnapper; it almost doesn't matter. What's different is, surveillance as we conceive it is 'follow that car,' right? There's a person, follow him. 
Surveillance today is: we know some things, so let's go backwards in time, because we have already surveilled everybody, and see what we can learn. Now, we can argue there is enormous value in the kidnapping hypothetical that was in the Canadian report, and in terrorism. And you can argue that there is a lot of very bad value in what happened in, it's not Estonia, in Ukraine, where you had a chilling effect on protesters. But it's fundamentally different. It's not just more of it; the more of it is making a change in how it works. And that's really what we're talking about: whether that change is on the whole good or bad, and how we can take the good things and not the bad things. So, quickly. I think it's a really important change in the way things work. It's the change from not using big data to using big data. And I'd like to push back. There's sometimes a flavor, particularly from the left-hand end of the panel looking at the stage-right end, that no, the problem is when it's done massively, to everybody. But I think we have to be realistic about this. In the early days of the web, if I wanted to follow up with you by going to your web pages afterwards, I would have to go to each of you on the panel and say, do you have a card with your homepage on it? I would pick up these four cards, take them home, and type the URLs into my computer. That's how I would follow up on you. Or I would get there indirectly, following little links. But that isn't what I do now. I don't pick up your cards. I know I can just type "unowned internet Harvard" into Google and you'd be one click away. Everybody does this without thinking. People do not collect cards. They don't write down URLs. They Google for everything, everywhere. This is the way we do anything.
And when we do that, what we're using is the Google database, into which Google have hoovered up everything. The reason that works is that Google, A, hoover up everything without looking at whether it comes from an American citizen or an American person or a person at all. They just hoover the whole thing up. And then some machine, which is completely dumb and just sits there, takes my query about the unowned internet, runs it through that index, and pulls out four pointers to four people's home pages. Now, we do that all the time, to do anything. If you're asking a policeman to go out and solve crimes, and you're asking him to use the old method, to follow the links out there by hand, well, hello: he's going to be hopeless. He's never going to keep up. So I put it to you that your policeman has got to be able to use Google. And that means your FBI person has got to be able to use a version of Google. If they haven't got the ability to do a search, where necessarily the way a search works is by having everything they know and then finding the eigenvectors of it, basically, then they're never going to compete. In other words, for that to work at all, you've got to start by hoovering everything up. So I put it to you that instead of objecting to people hoovering everything up, what we should instead say is: okay, some people are going to have access to a very, very powerful system, if there's to be any chance of any progress at all against crime. So then let's think completely differently about how we limit that. Let's think about which of the Google searches you'll allow people to do.
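The "hoover it all up, then let a dumb machine answer queries" model described here is, at its core, an inverted index. A minimal sketch, with made-up pages and URLs, assuming nothing about how any real search engine is built:

```python
# Minimal inverted index: map each word to the set of pages containing it,
# then answer a query by intersecting the sets for its terms.
pages = {
    "https://example.org/a": "unowned internet panel at harvard",
    "https://example.org/b": "kitten pictures and more kittens",
    "https://example.org/c": "harvard panel on surveillance",
}

index = {}
for url, text in pages.items():
    for word in set(text.split()):
        index.setdefault(word, set()).add(url)

def search(query):
    """Return URLs containing every query term (AND semantics)."""
    results = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*results) if results else set()

print(sorted(search("harvard panel")))
# -> ['https://example.org/a', 'https://example.org/c']
```

Note that the index is built over everything before any query exists, which is exactly the property the speaker says a Google-class search, and by extension a Google-class investigator, cannot do without.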
Maybe I don't want people to be able to type in my name, but I do want them to be able to type in "terrorism." If Google can detect a flu epidemic before anybody else, just from what people do in a search engine, then similarly, access to all the data NSA has, with the right machine, is going to be very powerful at detecting really bad things happening, without any of the conversation being about individuals at all. So I don't think we need to address here the question of how to limit an individual's privacy being deliberately blown because some policeman has asked for my stuff by name. No, it's when they do a very sophisticated search and at some point the machine comes up with a pattern: this pattern is emerging, this is what we really should be worrying about. At that point, after all of that has happened, is when they need to go, maybe to a court, before the pattern gets broken down into dealing with actual people. In a way, I feel we were complaining that the laws were made in pen-and-paper technology times, but now we're talking as though we're merely pre-Google. So I'd like the panel to think: how would your remarks change, bearing in mind this Google-database syndrome and the way it will work? Got it. So, yep. I don't think my remarks would change, and this is what I would focus on: having due process. The technology is there. The capacity to do it exists, and it exists for private actors. To say that private actors have that capacity but government actors don't, that's just never going to happen. And I would be afraid of a society where private actors had more capacity than a duly elected government.
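The flu-epidemic point, detection from aggregate query volumes without any conversation being about individuals, can be sketched as a simple spike detector over daily counts. The counts and the threshold below are invented; this is just the shape of the idea, not any real system's method.

```python
# Toy aggregate-only anomaly detection: flag a spike in a query's daily
# volume without ever looking at who issued the queries.
import statistics

daily_counts = [100, 104, 98, 102, 99, 101, 180]  # last day spikes

def is_anomalous(counts, threshold=3.0):
    """Flag the latest count if it sits > threshold std devs above history."""
    history, latest = counts[:-1], counts[-1]
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return (latest - mean) / stdev > threshold

print(is_anomalous(daily_counts))  # -> True
```

Only when a pattern like this fires would, on the speaker's proposal, anyone go to a court to break it down into named people.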
So, having said that: if you know that's the case and the technology is out there, what we have to think about, going back to your point about the role of a citizen, is what we as citizens think the due process around those things should be. Yes. So far you seem completely aligned in not wanting to eschew certain kinds of collection by type, dragnet versus specific, but then to have careful protections on who can use that database and what can be done with it, all that kind of stuff. And I just want to add that almost everything you said, in substance if not in tone, sounds like a senior official of NSA right now. I mean, this is exactly- Tim's like, you take that back. Right, no, no. Imagine you had that job. And I think that's a really interesting convergence of ideas. Look, there are two questions that, when you put them together, are the whole ball game. One: what is the substance of the rules? Two: what are the compliance mechanisms, and how much confidence do you have in them? You have to have compliance systems that you believe in, and everything else is a set of policy questions about what you do and don't want to allow government to do. One of the big questions that necessarily animates that, if you're serious about it, is that Google and Facebook and Yahoo get to do a lot of those things. How much does that or doesn't that change your sense of what an analyst at NSA should or shouldn't be allowed to do? And it is interesting that in another context, the US Supreme Court has said that by the time a particular technology enters the public mainstream, such that somebody can do it with tools from Radio Shack, it's awfully hard to ask the government to obtain a warrant to do the same thing. There was that thread going through the remarks, too. And this is an interesting argument. It's the question of whether we limit collection or limit analysis.
Now, my guess is it's never going to be all of one versus all of the other. There will be something in the middle: limits on collection and limits on analysis. I think your point is exactly right. If you don't trust the enforcement mechanism, and if there are secret laws, if the best Yahoo can ever say is "we protect your data to the extent that we don't have to lie about it," that's disastrous. There's no trust. Secret laws destroy that enforcement mechanism. And there's also the question, on the corporate side, of what is allowed. We live in a world where we basically give corporations carte blanche to do what they want. But we can easily think of things we might not want. I mean, Google knows what kind of porn all of you like. Are they allowed to serve images and ads based on that? They don't have to actually push your buttons; they just tickle them a little bit. Is that okay? I never knew you were an ad man, Bruce. So are we okay with that? With individualized advertisements based on your porn search history? We have to start thinking about those sorts of questions. The psychological manipulation that results from this massive data collection by for-profit entities is going to get seriously creepy, if it hasn't already. Government collection gives us plenty of reasons to talk about the power of government, but the power of non-governmental entities also enters this equation very profoundly, in ways that in our sort of all-regulation-is-bad environment we tend not to talk about. Just a timekeeping note: we have five minutes left in the formal program. With the indulgence of our collective sheriffdom, I'd like to say we might go over by five minutes. Is that okay? If there's a stove on or a crock pot burning, feel free to deal with that. And that will let us bring home the two remaining questions with the microphones that are already out there.
And Yochai, you were going to say something. Tim put something huge on the table, so I'd like to probe your intuitions just a little bit further. Set NSA aside for the moment; let's say you were making the much more basic point about the policeman and big data, let's call it that. Where would you stop? Would you be okay with a federal database that included a full record of all GPS location data from every single cell phone in the country, tied to each person, subject only to subpoena for use? That's one. Two: an equivalent of Google Desktop, where every single file on every device you have that holds any document is cataloged and searchable, also in a collective database available for analysis by the police officer, subject only to similar constraints. Are both of those, or one or the other, a point beyond which you would say: don't let the policemen create their own version of Google? I wouldn't want to argue over exactly what is collected. I wouldn't want to do anything unless we have a completely different way of filtering what they do with it. In other words, build me a system: put all that data, whatever it is, including, hey, here, take my heart rate, my activity, my position, in some vault in some deep mountain where you can run very powerful analysis on it. But then every time somebody wants to use it, there is a huge process they go through. The people who want to do it come in through one court, and within that court there are people who fight against them, to try to persuade the court that the data should not be used in that particular way.
And you've also got another, completely separate agency, which is independent, appointed somehow, and observable. For example, suppose you make another court which includes a German judge, okay, and a Brazilian judge to start with, and some of them are my friends, and two people appointed by the EFF. And you, you'll be on the court too. Okay, you could be on a short list for this. So suppose we put you on there. I would trust you to go in, and I'd want to set you up with the power to take it apart if you think you're not really being shown the whole story. You would have huge power, okay? Huge power to investigate. Typically, one in twenty times, you do the long follow-up: okay, let's get in there, how did they use it, what was it actually used for, was anybody's privacy violated? I'd also note that if you accumulate all that data, there are obviously massive, massive risks of criminal use of it, of inside jobs, so there are huge security issues around it. But with enough process and technology, I think his answer to your question, Yochai, is yes. I'm imagining it sitting on massive checks and balances like have never been seen before. This is really, to me, very helpful, because it helps locate the skepticism. I'm skeptical that such an institution can be perfected. And given imperfection, I want imperfection on all sides. I understand that the institutions are not perfect, and I understand the policing won't be perfect. To my mind, the whole idea of rejecting a general warrant is accepting a partially blind police, knowing that we can only have partially effective institutions to control a perfectly all-seeing police. And they're, I'll say very quickly, two very different scenarios. If only there were one.
No. One is the police saying, tell me who was using Tor at Harvard at the time that bomb threat was emailed. That's one. The other is the system itself saying, based on its programming, hey, look, I have something interesting, do you want to see it? And those are very different. With the first, we know how the system is working; the algorithm is not obscure. The other is Minority Report-style: this is a bad guy, but I can't tell you why. You seem to be skeptical, especially of the second. I don't know why. Okay, knowing that we're on the indulgence of the entire room, I call upon the last two places where the microphones are parked to quickly assert what you'd like. I think one thing that's clear from this discussion is that the decisions we have to make around surveillance are gray. They're not black and white. So there has to be an ongoing discussion; it's not fixed. As things evolve, we have to continue to make these decisions. And I think this points out a really clear difference between surveillance of, say, "follow that car," the policeman on the street, or even cameras everywhere, and NSA-style mass surveillance. The difference is one of visibility. The policeman on the street is visible to everyone. Even the camera is visible. But very few of us will ever know if we were surveilled by the NSA. And therefore we don't have an experience of what this surveillance is and how it's affecting us. This, I think, is one of the key things you touched upon, and really an essential part of how we can decide what the right balance is. So, to put out an example: what if the NSA were required, after each search, once the related criminal proceedings were complete, to notify the person who was searched? So you may already have won. Yeah, so the NSA searches you on some suspicion.
On some suspicion they search your name. A few years later, when all the related criminal proceedings are settled, they send you an email: by the way, we searched your name. So we'll leave this as a question for everybody to think about. And of course it's a specific example of a more general possibility of disclosure, embargoed disclosure: we would just know this will come free and be known X years hence. Could that be a check of sorts? And whether it makes us feel better that since no one has told us we're being surveilled, maybe we're not, I don't know. But good question. Last point over here, wherever the microphone is, yes? One, two, and then three. Oh man, holy cow. I will go for the middle person, who appears to be hiding. You don't want to speak at all? Ah, quickly, yes, quickly each. I wanted to ask about the new dimension of surveillance: not so much data collection or access, but attacking communications infrastructure. Belgacom, QUANTUMTHEORY, MUSCULAR, those kinds of programs. What's happening here is not the conventional paradigm of increasing collection or increasing access capabilities. These are outright attacks, infiltrating 100,000 routers all around the world, facilitating mass surveillance at the click of a button. So the question is: if we continue to conceptualize surveillance in terms of these conventional human rights, do we lose out on this whole set of new threats that subvert basically everything we do on the web? Got it. Hold that question; it's something Bruce has written somewhat extensively about, and he may have something brief to say on it, but let's get the other thought in first. A lot of the discussion has been about state-sponsored surveillance, and in fact Bruce already alluded to this: I had been planning to solicit the panel's opinions regarding what I would call community-sponsored surveillance.
An example would be what followed the Boston Marathon bombings: at least a dozen innocent people had their lives turned upside down when they were accused of participating in the bombings, following a crowdsourced surveillance effort by 4chan and Reddit and others. So, to follow up on Bruce's and the other gentleman's comments, I would say that state-based surveillance may not be the only type of surveillance we should talk about. Great topics broached in these last questions. Let's just work backwards, real quick. Yochai, this one is maybe for you, because you've been such a proponent of, and thinker about, community-based and shared solutions. How do you put that in the context of the marathon? So, actually, to both of these latter points. I think one of the things we've learned over the last few years is that imperfection touches everything, including social activities and social production. Our skepticism needs to be applied to everything, and our concern too. I do think there are reasons to privilege state surveillance, because of the particular powers of the state and the way in which the state can really destroy. And by privilege, you mean privilege it for scrutiny and restriction? Yes, privilege it in a sense of. That's a different kind of privilege than, yeah. Privilege it in the sense of spending as much time on it as we have here. As to the point about disruption of, or attack on, basic systems: I think this is actually a wedge opening for everyone. The idea that you would have a national system concerned with communications security mounting an attack on the basic infrastructure of communications is the most powerful example of the risk of colonization: of the narrow concern with national security colonizing all of the subsystems that create the world we exist in.
Professions, standard-setting processes, markets, technological models: the thing we get in an open society is that we exist in multiple systems that are linked but not fully overlapping. One of the most critical things about those programs is the colonization effort: taking the national security concern and subsuming through it all of these other systems, losing all of the diversity of constraint and contingency that these multiple systems, which don't actually fully control each other, give us. That I see as something that can't really be solved by process. It is a profound challenge, and something the state is quite unique in its power to do, or to be forced to abstain from, at least among the states that are politically accountable. Got it. I would have just said that. We hereby incorporate, or do a soft link to, his remarks as yours. Yes. I thought the last question was interesting, because it demonstrates one of the reasons why I think there are serious issues with state power, as we've talked about. But I think it's also an example of what happens when there isn't state control. And by state control, I mean a formalized system of enforcement. There's a reason it's called a witch hunt. This is not a new scenario: people deciding that they are the law. There have been posses since the beginning of time, people who have decided that they will enforce law and order. That scenario is why you need some sort of authority that we as citizens have decided will take on the ability to enforce laws, because that is what happens when individuals take it upon themselves to enforce the law. Even as companies often rely on the crowds to make their own decisions about what to filter and whatnot, right? Well, not in a legal way.
For example, if we decide as a community that this particular site is about, I don't know, kittens, and we don't want people posting stuff that's not about kittens, then you have the community saying: we have decided this is the kitten website, and we don't want anything else. But I do not think you want the community to be in charge of enforcing actual laws. I think that's a bad scenario, for a lot of reasons. Uh-huh. Last word to Ben. I want to touch briefly on the question from the gentleman in the back right, and push back a little bit on its premise, which is that nobody knows whether they're subject to this sort of mass surveillance. I think the answer is: if you want to know, there's actually a remarkable amount of public law at this point that gives you a very good sense of the circumstances in which your data is and is not being collected, and, when collected, how it's being treated. That's within the American context, if you are an American citizen; within the EU context there's even more information; and actually, within the US context, there's a fairly detailed amount of information even if you're not a US person. Now, what it does not have is your name on it, telling you that we are or are not conducting surveillance against you. But if you look at that body of law, that body of practice, and that set of interactions between the FISA court and the executive branch over both the 702 program and the 215 program, you get a pretty good idea of what's likely involved and how your data likely does and does not get swooped up and used. It is totally implausible to me that a foreign intelligence agency is ever going to put out the list of people whose materials it has and hasn't collected. That's a fantasy.
What isn't a fantasy is the idea that you might have sufficiently clear public law, with the secret legal procedures gotten rid of and enough general statements of policy and procedure declassified, that you could look at it if you wanted to and figure out how it interacts with your life. There are areas where we're not quite there, and a lot of areas where we actually are. Well, we are "here" in the sense of being at the end of our panel. I think the authorities have been turning the room temperature up over 100 to make sure we leave. I will say that all of us, as we think about past, present, and future, maybe have a tendency toward a little bit of status-quoism. I know I do: hey, now is a pretty good time, I like now, and I worry about the future going in the wrong direction. And often, when we're asked to test our instincts against possibilities that haven't happened yet, how unfamiliar they are is maybe the biggest variable in whether we find them acceptable, which also means there's much we could get used to that might end up being the new normal. So with that, please join me in thanking our panelists and questioners for a wonderful discussion. Thank you.