OK. Well, good evening and welcome to this talk and conversation with Bruce Schneier and company on the occasion of the release of Bruce Schneier's book Data and Goliath, which is available for purchase in the corner of the room next to the water. So if you're thirsty, why not buy a book? So that's available up there. I should give you the threshold warning that this is being recorded and will likely later go public. So what you say will not immediately be used against you, but later. So be aware of that. And that following this session, there is a reception in the Hark pub downstairs celebrating the book. Which will not be recorded. That we know of. That we know of. But more exactly, we will not be recording it. And due to a set of complex factors that have to do with the HLS pub being a shared space, there will be drink tickets available not for sale, but for free for those who are here, but a limited number of tickets. So if you were looking to get loaded tonight, I hope you have already started. And I was thinking I could hand out a few for good questions as they're asked so we can get started. But otherwise, they are alienable. So, short half-life on eBay, but another way of profiting from tonight's visit. So OK, with that said, I thought what we would do is start with an introduction and a brief conversation with Bruce, work in our respondents by way of introduction and reaction, and then open it up quite early to everyone in the room for reaction. And by introduction, I should say I'm Jonathan Zittrain. I teach cyber law here at the Berkman Center. And I am so pleased to be talking with Bruce Schneier. Bruce, how best to describe you? You're kind of the lone wolf of cyber and security and cryptography, cryptology, tomato-tomato difference between the two there, I don't know. Save that. The phrase I normally use is security technologist, and I like to work at the intersection of security, technology, and people. Security, technology, and people? And I consider my career an endless series of generalizations, trying to figure out, starting in cryptography, the math of security, then computers and networks. That could be the title of your next book, An Endless Series of Generalizations. Computers, networks, then writing about general security technology, then the economics of security, the psychology of security. My previous book I think of as a sociology of security, and here I am, I will call this the political science of security, maybe, all in this attempt to sort of figure out what's going on. And in any security system, you're always looking at the bigger system. Got it. And also, if we're thinking appellations, you will now forevermore have the appellation New York Times best-selling author, Bruce Schneier. Actually, it's New York Times number six best-selling author, Bruce Schneier. Very precise. And that is odd, because you get that for one week, and you can put it next to your name every time you write a book for the rest of your life. The only problem is, 15 years from now, people will be like, what's the New York Times? I wonder what it was. That is such a 90s thing to say. That's true. It's true. We're trapped. So all right, well, oftentimes a jeremiad such as yours will have the danger part. Be afraid, Will Robinson. Be very afraid. And then some solutions. Your book absolutely focuses on solutions a lot of the time.
And I was wondering if we might invert the usual order of things on the table and presume that pretty much everybody in the room is either utterly scared already about the use of their data, information, knowledge about them, whether by companies or the government, or resigned to it. But one way or another, the problem has been laid out there. I'm curious if it might make sense to start with: if you were given shell access to the U.S. Code, or to some other form of regulatory or normative intervention, I would love to hear what you'd be most anxious to do to balance between David and Goliath. So fundamentally, I think the problem is power. Data is an empowering thing. And when the powerful have it, they become more powerful. And that is at the heart of the problem of government access to data and corporate access to data. When I look at solutions, they tend to be on the order of reducing the power imbalance. Ways to regulate what corporations can collect and do, ways to regulate what governments can collect and do, ways to give the people more privacy, raising their power. And I think that is, in a very general framework, how you balance things. This gets very difficult because the same technology is used by everybody. So any solution I propose for the United States government has to work for the Chinese government. Any solution I propose for an iPhone has to work no matter who has that phone, good guys and bad guys. That makes the normal solutions of let's give these people power and not those people much harder, because you can't draw technological lines. We're all using TCP/IP and Microsoft Windows and the same stuff. Is there an example of a kind of solution that isn't sensitive to that fact, that sounds like a decent solution to some of the problems that you talk about but that fails that test, that because the Chinese are out there doesn't work so well? So there's been a lot of discussion in recent years about putting back doors into systems. The problem statement's pretty easy. There are communication devices being used by criminals. The FBI gets a warrant, can't listen to the device because of technology. And James Comey has said, solve this problem for me, please: put in some kind of lawful access system that allows me to go in, with an appropriate authority, and get at the text. That statement presumes a bunch of things. It's a technological process, but with authorization. In any type of system I build like that, the Chinese government is gonna go in with their authorizations, which are presumably a lot easier to come by, and then use that same access to round up dissidents. So suppose I'm James Comey, let's just continue to play out this example because it's very much in the news and probably has application to other sorts of technologies and back doors. Suppose I'm James Comey and I'm persuaded by that. I realize this is stretching the hypothetical quite far. But I'm like, darn it, Bruce, you're right. I don't think it's in the interests of the United States to have the Chinese have instant access to the communications of dissidents or of others. And they can do it if we can do it. What then should we do? Or is it just game over, give up? Game over, give up is a bit much. There are, I think, in the future, going to be technological barriers to governments doing things. And in general, it's a very basic philosophical statement that all of our infrastructure can be used by good guys and bad guys. Cars can be used as getaway vehicles.
Terrorists eat at restaurants before they do their terrorist attacks. And the reason society survives is, at a very fundamental level, there are more good guys than bad guys. And we put up with the fact that bad guys use our infrastructure. And there are lots of ways to catch criminals, stop terrorists, that might not involve this particular technology. And there are times we're gonna have to accept that technology will put limitations on things, in the same way technology aids other things. Yes, I'm still Jim Comey and it still sounds like you're telling me I have to give up. You don't sound like Jim Comey anymore. You have to give up on certain types of access. Just like you have to give up on being able to fly. I mean, there are certain things that you can't do, that the technology, the laws of physics, of math, of information theory don't permit. And when those things are true, we go around them and try to do other things. So if you draw out the dotted line, the vector of technological progress, is what you see then a range of technologies for which right now the government is at kind of an apogee of the kinds of access it can expect, but is going to get less and less? This might be called the going dark hypothesis. Or do you see more and more ways, given that now our coffee makers could for all we know have microphones in them and have that telemetry going back to some mothership, so that you can be like, coffee, Earl Grey, whatever Captain Picard says, I screwed up the joke. But what is it, Earl Grey? Tea, tea, not coffee. Yeah, you would think that by the 24th century it would be like, here's your tea. You always ask for the same thing. But anyway, there's gonna be so many different vectors that aren't secured that actually, Jim Comey, the answer might be, don't worry about the iPhone. There'll be eight other ways to get in. I think that's right. When we look at technology, it's sort of a truism that everyone wants you to be secure except from them. So Google wants you to have great security except from Google. Dropbox wants you to have great security except from Dropbox. The FBI: you'll have security except from us. In most of our systems, there's gonna be someone who has access to the data, because we want that. We want the convenience of having someone else do our storage and do our processing. And my guess is there'll be so many ways into data, into our lives, that we cannot protect technically, that we have to protect legally, that his worry that the technology will lock him out is very short-sighted. I think he has nothing to worry about. The opposite of going dark is the golden age of surveillance. Your cell phone will not work unless it knows you're in this room. That's a level of surveillance that Comey didn't have 20 years ago. Is there anything else you want to put on the table before we open it up to our respondents? I'm good for now. Excellent. So we have Joe Nye, Sarah Watson, Melissa Hathaway, and Yochai Benkler. Yochai, why don't we start with you, with either anything you already want to react to from what Bruce has said, or any question you'd particularly like to put to Bruce? So going back to the last round, the concern with not being able to gain access and what the baselines are: it seems to me that what you're describing is that in the rush to solutions, we've very much jumped over the baseline, because the assumption of going dark assumes some baseline of full transparency, which has never been there before.
So I wonder to what extent, in what you're seeing in this, would you see this kind of pushback as a form of cynicism, a form of stupidity, a form of just sort of near-termism where you can't even remember what was yesterday? Such that, essentially, if you look at this question of, let's stop for a moment and let's assume that everybody has very strong encryption, Google, et cetera. If you actually look at the ways in which state powers, with authorization as it is, are able to see whatever they want, much more than they ever could before, is that the baseline that we need to deal with, so that the concern really isn't Comey, the concern really isn't the FBI, the concern is: is there anything we can do to create even partial replication of prior darkness? I think it's a good question. I do think there's a lot of near-termism. You go back 20, 30 years and you weren't carrying your tracking device with you. And the FBI, I think, was looking at hypotheticals. I think they have in their mind these examples, which never seem to happen. There's a kidnapping and the only way into the data is the cell phone, which might make a great TV show, but it's not the way the world works. These are the four horsemen you're talking about? Yeah, the four horsemen of the information apocalypse, right? And this is something from the mid-90s, and it was terrorists, drug dealers, child pornographers and money launderers, I think. Kidnappers. Kidnappers, that's right. Maybe there were five horsemen back then. I learned it from you, bro. That was great. Probably there were five; these were examples that I always used. The horses had a foal. And I don't see, I mean, yes, I think these hypotheticals sound scary, but when you sort of tease them apart, you don't see it. So can I actually use that to get you to talk about a different part of the conclusions, which I think is more general and foundational? The question of fear. You have this very powerful claim about fear driving policy and fear driving politics. And you talked about the book before last as being about the psychology of security. How do you, where is your Archimedean point outside of the dynamics of fear of the moment that allows you to say, this is irrational fear that is driving us into an insane arrangement, and this is, yeah, be very afraid? I think a lot of it has to do with frequency and salience of events, that we tend to react out of fear for things that are horrific, things that play heavily in our imaginations. So, looking at cybersecurity, we tend to overestimate the fear of cyberterrorism and grossly underestimate the fear of cybercrime. And I think we're building policies, in an effort to stem what we think is this cyberterrorism risk, that increase crime, that make our systems more vulnerable, in the main because maybe we're afraid. I mean, I did an interview today on Fox News and they're actually all worried we're going dark in Yemen. And I told them Yemen's in chaos. People living there are going dark. It's not about our intelligence. It's the way the world is. So I think it's the common threats we have to worry about. And stepping out from fear is really doing the math, which human beings are terrible at. I mean, one, two, three, many. That's the joke, but it goes the other way too. One half, one quarter, one fifth... one in a million and one in a billion is pretty much the same number to us. It's almost never.
But when you look at policies being driven by one in a million versus one in a billion, they're not the same number. We should belatedly introduce you, by the way: Yochai, author of The Wealth of Networks, teacher here at Harvard Law School, and sometime participant in Chelsea Manning's defense team, for which it might make sense at some point to say a word or two about whistleblowing, which you have a bunch about in the book as well. But at the moment, why don't we turn now to Melissa Hathaway? Why don't you tell us a little bit about your background and anything you'd like to ask or react to from Bruce? So my background is, I'm here as a non-resident at the Belfer Center, and most people know me for the fact that I led the cybersecurity strategy for both Bush 43 and for Obama. And now I really kind of track and write about what's going on in our cyber insecurity. It's sort of a roadmap of things that I think need to be done, on which Bruce and I tend to agree on many things, which might shock people in Washington. And that is shocking because Bruce is seen as a creature of the anti-establishment and you're seen as a creature of the establishment? Probably, yeah, I think so. But we see eye to eye on many aspects of it. So today, Bruce and I talked a little bit about something I would ask him to comment on. It seems to me that we're talking about Director Comey wanting a backdoor into the current communication system to deal with, or potentially to sell on, the fear of terrorism. Could you talk a little bit about the future of the internet of things, and what we're going to see from our personal connected devices to our houses, to our cities and to our infrastructures, and the worries that you might have for Data and Goliath going forward? So I write about this. I mean, to me, the internet of things is fundamentally the internet of sensors. That these things- S-E-N. Sensor, yeah. Not C-E-N, censor. Right, right, S-E-N. I got that part. Okay, no, good, good. That these things, whether they are internet-enabled thermostats, light bulbs, sidewalk squares, an enormous number of different things talked about as having internet connectivity, are sort of fundamentally sensing things about their environment and potentially actuating on their environment. I mean, a thermostat would be a great example: a little sensor, very simply sensing temperature and then controlling a furnace. A lot of these things are going to sense us. They will have either a face recognition capability, some way to sense who's in the room, what we're doing, because the more they can sense, the better they can do. And then, to get back to it, everyone wants that: you'd have security except from them. The companies and systems that are building, installing, operating these systems are gonna want our data. We might actually want them to have it. I mean, we might want our front door to know who's coming in and who's in the house and who isn't. I mean, we could certainly imagine these things. I see a lot of security risks in this, mostly because the computers in these things are where desktop computers were in the mid-90s, which is nobody's thinking about security at all and just hoping for the best. With the added problem that these things will be so unimaginably cheap that there will not be a way to upgrade the system and install a security patch. We see this even in things like our home routers, which are very cheap devices, embedded systems; vulnerabilities will stay in them until you throw the thing away, because there's no mechanism.
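[An aside to make the sense-and-actuate description above concrete: here is a minimal sketch of the control loop such a thermostat might run. Every name in it (read_temperature, set_furnace, report_telemetry) is a hypothetical stand-in, not any vendor's real API; the telemetry call at the end is the "phoning home" the panel is worried about.]

```python
# Minimal sketch of an internet thermostat's sense/actuate/report loop.
# All names here are hypothetical stand-ins, not any vendor's real API.
import random
import time

TARGET_C = 20.0    # desired room temperature
HYSTERESIS = 0.5   # dead band, so the furnace doesn't rapidly cycle

def read_temperature() -> float:
    """Stand-in for the hardware sensor; simulated here."""
    return 18.0 + random.random() * 4.0

def set_furnace(on: bool) -> None:
    """Stand-in for the relay that actuates the furnace."""
    print(f"furnace {'ON' if on else 'OFF'}")

def report_telemetry(sample: dict) -> None:
    """Stand-in for the upload to the vendor's cloud. This line is the
    surveillance side effect: the sensing that makes the device useful
    is the same sensing that leaves the house."""
    print(f"telemetry -> vendor cloud: {sample}")

def control_loop(iterations: int = 5) -> None:
    furnace_on = False
    for _ in range(iterations):
        temp = read_temperature()                 # sense
        if temp < TARGET_C - HYSTERESIS:
            furnace_on = True                     # actuate
        elif temp > TARGET_C + HYSTERESIS:
            furnace_on = False
        set_furnace(furnace_on)
        report_telemetry({"time": time.time(), "temp_c": round(temp, 1),
                          "furnace_on": furnace_on})
        time.sleep(1)  # a real device would wake far less often

if __name__ == "__main__":
    control_loop()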
This is the Jim Gettys worry, yes. And I do worry about that. Who has access to the data? And then power really comes in here. Getting your data out of Fitbit is hard. But it's... So you have to pay extra for Fitbit to give you your data. Now, at least they're willing to give it to you, and many medical devices will maintain that the data is theirs, proprietary, even though it's, like, your heart being monitored, and you can't get it, because that data is power and the companies collecting it want it. So that's a big security worry and a privacy worry. Suppose Melissa were recruited back into the administration and made the relevant czar, and now you're having a meeting with her in one of the executive office buildings. Let's just see that conversation. Like, suppose she's agreeing, at least as far as the problems you're laying out: is there something an American administration can do about this, or is looking to public policy the wrong place to look? I think public policy is the place you have to look. I mean, what we're seeing are market failures: the market rewards these things being cheap and sent out quickly, not being secure. So let's see the conversation. What would you like Melissa to do? Fix that. Wow. All right, Bruce. Well, I have the full availability of market levers. What are the best market levers, do you think, with which I should fix these two markets? Because I actually think that we have two markets, and there may not be a market failure. We are perpetuating the market of insecurity by our desire for and seduction by these technologies that are cheap, functional, and all of these things. And so we want it bad, so we're getting it bad. And then we're perpetuating this market of the fix, and patch, and pray that we can fix it after the fact. So what would you propose that I do as the market intervention, when I actually think the market is working? So I don't think it is. I don't think the market is encapsulating our values. And there's a sort of a very strong philosophical issue here. I mean, I'll tell you a more general statement: we know when you ask people whether they value privacy, they invariably say they do. When you ask them at the cash register to pay a dime more for a privacy device, they invariably don't. And the question to ask is, which me is more accurate? The libertarian will say this is the one that matters, that the cash register matters and what you say on surveys when you think about it doesn't. You appear to be valuing the surveys more. I believe that what people say when they consider the issues is what people want. I think there are a lot of hidden risks that people might not know about. Because it's not just a risk to you of your thermostat being insecure; there's actually a risk to us as a community, as a country, as a world, of billions of insecure devices being used to attack each other. So this really feels like an environmental problem. So one more crack at this though: suppose Melissa is told by her hypothetical boss here, I am so hungry to do more executive orders. I love executive orders. What do you want to have done? So I think the kind of regulations we need are regulations that mandate levels of security, levels of protection. So if you're building a coffee maker. Right. You can't market that coffee maker until. Until, and then we have to figure out what these regulations are. But like you can't market a pharmaceutical unless it has a level of testing and a level of assurance. This is the first time I've heard the pharmaceutical industry as a model.
Actually, it's not a bad model. The pharmaceutical industry is not a bad model, and, sorry to say, the financial industry isn't a bad model. There are very strict, we can argue how effective they are, but there are very strict regulations on what these industries can do and what they can offer to the general public. There are no such restrictions on systems that operate on our data, that produce data, that process data, and more and more, that's what we're looking at. Now this all came off the internet of things. Let's just push this one more step and then we're gonna go to Sarah, which is: obviously the distinction between hardware and software is diminishingly small. Are you saying that basically anybody writing software that, if it's popular enough, could be a critical environmental threat ought to have that software reviewed to see if it meets certain security standards? Or we need to build a system where it's not gonna be that kind of threat. I mean, right now internet refrigerators can be used to launch denial-of-service attacks. We haven't seen it yet. This is a chilling prospect. Oh! But. Just seeing if you're awake. And the question is, what do we do about that? We either have to accept that, or we have to build resilience in some way. I hear you throwing the red meat, sorry. But am I earning drink tickets early? I'm sorry. But I just wanna make clear, for those who will be talking about this later on Reddit: did you just call for security standards for software, such that if you fail to meet them your software cannot get out the door? I think we're gonna have to get to that eventually. The market will not solve those problems. We can leave early. We've just made news today. I agree with that. You agree to be made news, or you agree that we need security standards for software? I agree that we made news and that we need security standards for software. And not because I just convinced her. I actually proposed it earlier today. So I think there's a need for it. This is lightning-fast regulation. That's great, yeah. But I think that we need an internet Underwriters Laboratories: just like we had the UL for energy and for things that plug into the grid, we need the UL, the underwriters, here. And for those who are not totally up on standards and such, who only know that occasionally, if they flip over something they're trying to open because it's broken, it says UL Listed, and they don't know what that means: what does that mean? So it means that anywhere you go in the world, you can plug in a device and it's not gonna blow up or cause a fire. And we created those standards; basically the United States was the model, because of the Boston fire and the Chicago fire, and we had to create a standard of care for the physics, for the next-generation product that was gonna actually enable our way of life. And we are at that place for the internet now, and we need to have a standard of care that is physics-based, I think, so that it's not going to be the next widget that takes down the internet or creates the botnet, et cetera. But I think there's a second model that you could look at, that we actually talked about today, which is the FDA safety and efficacy model. And so those two models could be something that you could look at. Yochai is slowly having a litter of kittens in the corner, but I want to get to Sarah. Yochai. I'm loading in so many ways I can't even... I'm sorry, I'm breaking the phone. You don't seem that sorry, but go ahead. Here. I'm just pretending.
We were just talking about how fear leads you into way bizarre solutions. Let's get this straight. By Jonathan's classic law school Socratic method of moving the hypothetical one little step at a time, the frog is completely boiled. Standards set by the government for software, as a model that will allow us to withstand the Chinese internet. You're pushing back, for example, in the book on the shift to a sovereign internet. You're pushing towards resilience and recognition that fear is an overstatement, and that we need essentially to be willing to absorb more damage and live with it in order not to give up on all those standards. What are going to be the costs to innovation, to distribution of power? Who are going to be the companies or organizations that are actually gonna have the economic heft to mobilize around standards and comply with the standards? It's a program for a highly captured process, for a high degree of concentration, and a killing of the social production of software. Where did we get PGP and GPG? Where did we get Tor? It didn't come from companies complying with government standards. I would slow down before taking software standards and certification and comparing them to standards for electric plugs, which need relatively little innovation so that everybody can innovate around them. So I think it's 100% correct, and the devil's in the details. I agree that if we have those kinds of rigid standards, you may end up with something like the drug industry, where it takes 10 years to approve a new version of Microsoft Windows, and maybe that's a good thing, but probably not. We do want faster innovation, but we're gonna be in a world where, if we have critical systems and uncritical systems on the same network, we're gonna have these revenge effects, that things will be able to take down each other. I don't see this as, it's odd, because I don't see this as fear-mongering. This seems like a perfectly reasonable extension of what's going on today, just moved forward as it is naturally. I agree you can't do the same kind of regulation you do for the FDA, because software is just different, and even Underwriters Laboratories doesn't work, because electricity is so much simpler. But I think there is something in the middle that is going to make us a lot safer. How do I convince someone to pay 30 cents more for a thermostat? I see no other way to do that than to have a regulation that says things like: you have to pay your workers minimum wage, and you have to have some level of security. One possibility, and maybe we've already caught some sparks here, but one possibility of course has been floated as liability after the fact, if it can be traced back to the maker of the code, for which then you want to get insurance against that liability, and the insurer then says you gotta write code that meets certain standards. And actually that's Underwriters Laboratories; it was the insurance companies that drove that. Security was half of it, privacy was the other half, and I suspect Sarah will be able to speak or ask a little bit to that, or whatever she wants to speak or ask about. Sarah Watson, fellow at the Berkman Center, technology critic. Feel free to introduce yourself further in any way that fits, and you're up.
Yeah, well, so focusing on the fear and the kind of governmental aspect of this, we have not focused on developing what the harms are in this kind of data regime that we're developing. That is the kind of: okay, we're talking about the refrigerator. What is my concern with how my refrigerator repopulates my groceries, or is telling Google about how I consume and therefore influencing future personalization in other ways? I think, to that end, the big question for the solutions that we're trying to get at, to me, is about trying to figure out how to do more of that accountability and transparency about how those systems actually work. So we haven't really touched on that, and I think there's a standards question at play there as well. How do we even kind of intervene and figure out how data is being used? I think that's still one of the biggest open questions. You've outlined how data is potentially being used in ways that we already know and have uncovered and have investigated, but I still think that there's plenty to uncover, and systematically uncover, to have this larger discussion about the normative question, which is: how should it be used, and how do we want it to be used? This feels like the notion that our data is being used to judge us in many different contexts. Our data is being used to decide simply what ads we see on the internet. Our data is being used to decide what offers we see, so we know credit card companies will give us different offers depending on our data. More and more retailers want to know who we are when we step into the store, or certainly step onto their website, to offer us different prices, different levels of service. We've seen some examples of this. In more extreme cases, our data is used to decide whether we get a mortgage. Our data is being used to decide whether we get a job. More and more companies are hiring data analytics companies to trawl through a candidate's social networking and come up with an employee reliability score. At the extreme cases, data determines whether we can fly on an airplane, and the former director of the NSA, I'm blanking on his name. Is that Alexander? No, before him. Hayden. Hayden said, we kill people based on metadata. That is, your data decides whether someone drops a drone on your house. And most of these analyses happen in secret. They are opaque to us. We cannot, when we are told you didn't get this job, find out: what is it about my data? Is it correct? Does it accurately reflect me? And we have these systems, and we allow them, for efficiency reasons. It makes a lot more sense to have a mortgage market where, instead of me going to the one bank that knows me personally and the vice president making a decision probably based on all sorts of prejudicial factors, I instead get a number that I can take to any bank in the country. Well, this gets back to the regulated industry thing, which is: we have the banks; we have healthcare, which is a heavily regulated industry, because certain types of healthcare data are not to be used in certain ways. And insurance underwriting is still regulated; certain types of data are not to be used for insurance underwriting. But I think there are so many novel uses of data that we're talking about in the kind of big data context that have no accountability whatsoever, especially in terms of the kind of innovation space. Yeah.
Joe Nye, you are a distinguished service professor at the Kennedy School, longtime public servant at the highest levels of government, and I guess a student and scholar of power, among other things. And so Bruce's emphasis at the beginning that this is really about power must have been catnip to you. It is; that's why I've fallen under Bruce's spell over the years. No, I first will put in a plug for the book up in that corner. You'll find a book which I blurbed and said you must read it. So I stand by that. But having stood by that, let me ask Bruce a couple of questions. If you look at the beginning of the book, you make a strong point that there's not much difference between bulk collection and surveillance. The government says bulk collection, where I don't look, it's just metadata, that's different from surveillance, where I do packet inspection. And you point out that, look, I can learn as much or more useful information through just the bulk collected data, if I have enough databases, as through going inside the packets. But I guess the question I have is, what does that mean for where we go now? I mean, there are various proposals that the government not store data and not collect bulk data and so forth. And that has some cost. You argue the costs are not very high for the government. But at the same time, there's an enormous amount of bulk collection and comparison of metadata done by companies. And so if you ask, what are we really protected from? Our privacy is being invaded constantly. It's, as I think you say somewhere in the book, the little brothers rather than the big brothers that we should be more worried about. And there's a chapter near the end in which you talk about principles and rules for companies. But I was talking to somebody in the advertising business who said, you think of Facebook as a communications company; Facebook from our point of view as advertisers is a de-anonymizer. What do we need Facebook for? We first decide, looking at zip codes and other things from bulk data, we want to know who this person is. Then we find them on Facebook and we know who they are. Not only that, we learn all their friends and their likes and dislikes and so forth. So that's the world we're living in. So as we focus on trying to save our privacy by saying that the government can't collect bulk data, and yet the companies can, are we just kidding ourselves? I mean, are we spending enough time on little brother as opposed to big data, big brother? And in particular, if the whole system is built on advertisement, as you put it, or you quote this phrase, which is fairly well known: if it's free, you're the product, not the consumer. And if we are living in a world which is built, as far as it is, on the fact that it's advertising, that we all are products, how do you get out of that? I mean, I liked your chapters on principles for government, and I agree with you about no back doors and so forth. But on the chapters on what you do about the commercial model and little brothers, I didn't come to rest when thinking about it after I finished. The reason advertising is the business model of the internet is because we kind of backed into it: our laws regulating corporate collection and use of data didn't keep up with technology. The internet appears, it's non-commercial, it becomes commercial. People are used to not paying for things. Advertising is the model that seems to work; it's easy.
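[The advertiser's "de-anonymizer" workflow described above is, at bottom, a database join on quasi-identifiers, the move made famous by re-identification research: a few fields like zip code, birth date, and sex are together nearly unique. A toy sketch, in which every record and field is invented for illustration:]

```python
# Toy sketch of re-identification: joining "anonymous" ad records to a
# named dataset on quasi-identifiers (zip code, birth date, sex).
# All data here is invented; the technique is the well-known one.

anonymous_ad_records = [
    {"zip": "02138", "dob": "1963-01-15", "sex": "M",
     "interests": ["wine", "travel", "security books"]},
]

named_profiles = [  # e.g., scraped social-network or voter-roll data
    {"name": "J. Smith", "zip": "02138", "dob": "1963-01-15", "sex": "M"},
    {"name": "A. Jones", "zip": "02139", "dob": "1984-07-02", "sex": "F"},
]

def quasi_id(record: dict) -> tuple:
    """The handful of fields that, together, are nearly unique."""
    return (record["zip"], record["dob"], record["sex"])

lookup = {quasi_id(p): p["name"] for p in named_profiles}

for record in anonymous_ad_records:
    name = lookup.get(quasi_id(record))
    if name:
        print(f"'anonymous' record is {name}; interests: {record['interests']}")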
There are no regulations at all governing what can be collected and used, and it doesn't have to be that way. There's nothing about the internet that inherently says, look, it has to be an advertising model. There are lots of other ways to fund it. One of them is to pay. We believe that Facebook gets about $9 a year out of each of us; that's our value, and the reason there's this big company is not because our data is so valuable, it's because they have so many, many people. And this again gets into: what are our values? What do we want? In general, people tend to like giving their data to companies in exchange for direct benefit. Now, I actually really like it when I go on Amazon and they give me lists of books I might wanna buy based on books I've already bought. I've gotten some really good books that way. We tend to dislike more secondary uses, as the data gets further away from the transaction. That's when the creepy meter starts rising. And now the question is, what do we as a society want? Do we want certain data not to be collected? I like getting real-time traffic information from Google Maps, but should Google be allowed to store that? That's surveillance data; that's where I've been. And maybe we will say, you can have it at the moment, but you have to delete it. This is us as a society, as legislatures, I think, putting restrictions on collection, on use, on retention, on accuracy, on how it's disposed of. The Target corporation keeps all of the data about the things we purchase; hackers break in and suddenly we don't know where it is. And so now there are liabilities to keeping all this data. So I actually think that, like government, these things have happened because we haven't been paying attention to the effects of these things. But first, if you go back to something you said earlier, that people won't pay another dime at the cash register, and you have a whole system which is based on Facebook earning $9 from each of us, how do you get to where you wanna be from where we are now? This feels to me like any other time we as a society have outlawed a business practice. Whether it's bribing public officials, which we've outlawed, child labor. I mean, these are more extreme examples, but we have as a society said, yeah, I know the cheap prices were great, but we're gonna force companies to raise their prices by prohibiting things they've done. So this is kind of a Fair Credit Reporting Act model, that says if somebody is gathering financial information, creditworthiness information, the consumer has a right to know what is held about them and to correct it, and that sort of thing. And that kind of thing I think would go a long way to making things better. I think that still requires understanding how the data is used down the line, right? So in the kind of current discourse about the corporate use of data, it's all about advertising, but I think it's really hard for people to understand the harms that are involved there, which is: it's just ads, or I have nothing to hide. And so kind of the next step is trying to tie the harms closer to people's sense of what matters, which, for the cases of using it for loan underwriting, is a question of equality and access. And to simplify, is this kind of fear-mongering okay with you, Yochai, compared to other fear-mongerings? Yeah. So a couple of different things, if I may use this.
One of the remarkable things about the first, let's call it 45 minutes of this conversation is how little we've spent on the moment that triggered this moment in American concern with data and surveillance: the Snowden revelations and the role of government. We've been talking a lot about the fear of private companies and how government can respond to them. Rather than also, I mean, Joe mentioned it a little bit, we mentioned it here and there, but not as central: the breakdown within the government systems that comes from the fear and the complexity of the cybersecurity and intelligence system, the industrial relationship, and the development of these products. So what we have here is this weird setup where, when you look at government surveillance, it fails dramatically in its ability to set up oversight and transparency and actually deliver something we want. When you look at companies and markets, they fail dramatically at delivering something that we think we, in our better-self moment, when we're thinking in the long term and answering a survey, let's imagine that that's what we do when we answer a survey, really want. And then we're at the cash register, we can't help ourselves, we end up paying the 10 cents less. How do we deal with the fact that both of these systems are so imperfect that every time we lean into one or the other, we say, okay, let's rely on the market services, so let's not have the government hold the bulk data, let's have the companies hold the bulk data. Is that going to be the solution? If we say the companies don't work, we say let's have the government lean in on the regulation. One thing that was, in some sense, missing for me, and by the way, I want to associate myself with what Joe said: that's a great book. It's a really important book and you should all read it. As should everybody else who bumps into this in any other context when it's recorded and made public. And Bruce thinks it might already be on the torrents. Anything's possible. I think so. One thing that the setup of the last solution set left out, I thought, was the one mechanism that we've seen online developing that was neither state nor market. So that's what I was mentioning earlier with regard to the initial encryption that was widely available: PGP, Tor, some aspects of Mozilla browsers and some plugins. The whole class of essentially social production, where people, not everyone, but enough people, actually develop systems that are not market-oriented and don't need that model, and that are not state-oriented. Because otherwise I think we find ourselves stuck. There's this little vignette in your last chapter or so where you describe the iPhone as the good guys, they have a model that's not based on selling, and Google as the potential bad guys, because they have the selling of information. The inversion of our model of who is dangerous and who is not, when we're thinking about innovation, creativity, the more proprietary versus open model. I thought I read a book about generativity on the internet the other way around. But that's somewhat the other way around. But let me follow up with one other question about the interaction between business and the government on a particular issue which is mentioned in the book. In Europe, the court, which is part of the government, has said the citizen has a right to be forgotten, so he or she can go to the company, Google, and say, drop this. That was a youthful indiscretion or whatever.
There's also a danger, though, of that becoming a little bit like Stalinist history, where when somebody lost power in the Politburo, you cropped the photo, this is all before Photoshop, and the person suddenly vanished from the podium of the Politburo. And if you wind up allowing citizens to tell companies, destroy evidence about me, how does it not become a way of distorting history? There are certain things, political positions, that somebody who wants to be elected president took 20 years ago, and it's inconvenient. They want to get rid of them. What are the limits to this right to be forgotten? In the book, you're sympathetic to the right to be forgotten. I'm sympathetic to pieces of it. This is actually a very hard problem. And this is something, I think you've said this: what does it mean that the top 10 hits on Google are your effective biography? And why should a private for-profit company be in charge of your official biography? And as these companies become infrastructure, the right to be forgotten, in the way the ruling was, was not against the newspaper that published it. It was against Google. And because of its position in the search market, it was not against other search engines that weren't as popular. It was against Google. There's a lot, we need to talk a lot about what it means when your reputation is owned by a for-profit venture and they can do what they want with it. I mean, I don't think that the ruling was correct, but I don't think we can do nothing. This is a very hard problem. And it is fundamentally, I think, the power dynamic. Some law school professor wrote. Is that him again, or is that another one? Some law school professor wrote that you could require not the deletion of history, but an equivalent of an electronic asterisk saying, this is contested. So that when you call up this fact, or factoid, or whatever is contested, in fact you see that it is not unalloyed. Why not something like that, rather than deletion? I mean, that feels like a good idea, but who's, do you remember who said this? Someone said that instead of the right to be forgotten, where you can go to Google and say, remove this data, you can go to Google and say, don't search anything on my name. That basically you either accept Google or you don't get it at all; you can't pick and choose to be either in or out. Do you remember who did that? No, I've not. That was someone's, I thought it was a really clever idea. I'm not findable by it. Right, it's sort of really giving a cost to the person who's saying, delete this data, because now he's completely gone. But this to me is just one of the- I wonder how it works for Mike Smith. Yeah. This to me is one of the skirmishes when we're looking at data and power. Who has the data, who controls how it's presented, how it's used, what's relevant, what's not, what's deleted, what's not, is incredibly important, and I don't think we're having these discussions at the right level. We're just sort of letting the companies, and by extension the government, sort of do what they can based on the technology, instead of deciding what we want as a society to be possible. So let's democratize this conversation, and somebody hurried with a hand; feel free to tell us who you are and spell it for Google later. But hello, my name is Eric Scace and it's spelled S-C-A-C-E, and there's only one, so it's an easy hit. I'm very sympathetic to the concerns that have been outlined here, but now I'd like to ask a question as an agent provocateur.
You know, people get driven by fear and greed, and when I look at the concerns that we've expressed, I don't see the fear or the greed aspect that will actually cause policy change or behavior change on the part of organizations. For example, it's like: I don't like to be surveilled, but I'll buy a car that monitors what I do, because I like it being semi-autonomous. I like having the diagnostics and so forth, and my purchasing decision is overwhelmed by characteristics that have nothing to do with privacy, even though I don't like the data collection. And you're saying that there's a nano-economics by which the way to talk that person out of a poor decision is through fear and/or greed. Yes, well, that's the thing that would motivate a regulatory change, for example. So there was an advocacy for regulatory change, but there's no pressure behind it. It's like, we're concerned about privacy violations because corporations are collecting information and then, for example, it gets lost. So Target had a data breach and my personal information of some type was disclosed. Did I feel any particular harm? No. How many other people here were affected by a Target or other similar breach? Did you have a personal harm? No. So there's no crowd with pitchforks going looking for policy change in this area. How do we get the crowd with the pitchforks and the torches to make a change actually occur, as opposed to sitting around and having a good, yes, theoretical policy discussion? Torches and pitchforks, I don't know. Melissa, this is for you as a sometime policymaker. To what extent might we even know what counts as good policy, but it's just not gonna happen because the pressures aren't there to bring it about? So I think that it's about having a conversation about what's really happening, and we're not having a responsible conversation in any capital anywhere around the world about what's happening with our data, and what are the capabilities of our government, and what is our industry doing with the data. I would venture to bet that 75% of the people in this room are using Google or have Gmail, right? It's way higher than that. Okay, right. Gmail right now. Okay. So let me just say that, if you're looking at it, the government is using fear of how we are using the technology, and the companies are using greed, because they wanna make money on your data. So why aren't we talking about that? The $9 for Facebook and how they're making the money, and the advertisements, and why you walk down aisle eight in Walmart and you get your coupon and you're excited about it, as opposed to afraid about it. And seriously, I was just recently in Germany and I talked about this, and it's like, I had location services on because I'm using Google Maps, and all of a sudden they knew that I was on Lufthansa, and the next thing I know I'm getting pushed advertisements, because they know I like food and wine, and here's the three restaurants in the general block that I'm going to be in that you might wanna eat at tonight. And you could have heard the whole audience gasp, because that's a level of collection and personal privacy invasion that's unacceptable. So why aren't we talking about this? And so that's where I'm at. You can't have responsible policy or effective regulation until you've had a conversation about what's really going on, and how is that affecting my sense of privacy, my freedom of maneuver, and my democratic values as a country, in this country. We don't have the same values all around the world.
Sarah, you're gonna say something? Yeah, I was just gonna say, I think we often couch the Google Now personalization thing as a privacy invasion, but it's as much an autonomy invasion, and I think figuring out ways to expand that part of the narrative is gonna bring out the pitchforks. Got it. Although these tend to be things we don't notice until they're gone. I mean, this is hard. Autonomy. Autonomy, you don't notice how valuable autonomy is. You don't notice you lost it until you're forced to. You don't notice how valuable it is until you don't have it anymore. I'm ignoring your joke. Okay, enough. My name is Jessica Mink and I'm a data archivist, but I'm not gonna ask about that, because I use computers like all the time. And the first question is: one thing we haven't talked about is regulating, or rather enabling, people to be able to access all of their own data, without limits, if it's your own data. And you're all fans of that. Right, so I figured you were. It just never came up exactly. And a way to make it happen is the question. And one other thing, that's just a note, is that Google Scholar lets you claim publications, so that if you have a name that might belong to multiple different people, you can clarify which ones are your specific ones. I claim Bertrand Russell. Anyway, but I don't know much about it, but they only let you choose ones where your name is involved. So you can't just pick a random name and have it happen to be linked to your Google account. Of course. You can sign his books though. They never ask for ID if you wanna sign your books at some bookstore. Never. So the question is, are there ways to regulate access to personal data by the person who it's about? I mean, the FTC is starting to try to do that, especially in the context of data brokers, and advocating for Acxiom to put forth the profile so that you can see what the data about you is and see all the inferred information about you. But I would push it further, which is meaningful information, which is: okay, enough that I know that they have this information about me, but how is it used after the fact? That's the limitation there. Got it. We'll go for a question here and then up to that zone. Yep. Christmas Jackson. I'm a national security fellow out of the Harvard Kennedy School. We've talked a lot about, and you guys all have brought up, different actions, and in fact you brought up the Snowden incident. The Snowden releases caused a discussion within the government that forced us to understand how our data is being used, what it's being used for, and to re-look at the ramifications of legislation that had been put forth. And that's potentially gonna be renewed in these next few months. That's right. With that, how do we start to see that same thing, and you've all touched upon it, happen within industry? Because we don't understand what's being collected. We don't understand the algorithms. All of that's considered proprietary, just as the government had held things as classified. So how do we cause a thoughtful conversation when we don't understand the ramifications of the legal paperwork we're clicking on every time we click? So your statement gave us one possible answer, which is a whistleblower within the data broker industry. And what all of these things are is transparency. And the reason we're having this national conversation: it was Snowden, and what Snowden forced is transparency. Here's what's going on.
And if we have that level of transparency within the data broker industry. Is the problem that we don't know, or the problem that we don't care? Well, I think one feeds the other. We don't know, so we don't care. I think the problem is we don't know. I think it gets back to the scope of actually trying to define how it's being used, which is still the thing we don't really have our hands or our arms around. So there are two ways. You could have a whistleblower within the data broker industry, who would do an extraordinary amount of good for the conversation. You could have mandatory transparency legislation, the type of things Sarah was talking about: US companies have to tell people these things. But transparency is the first step here. And this gets back to the conversation. We can't have a conversation until we know the facts. Let me actually use this to try to tease out a little bit more, because you're interested in power. And I wonder to what extent the premise, which is that the Snowden disclosures and the extensive public coverage, some of which you wrote, resulted in a moment that felt like change could come. And yet one of the things you do so well in the book is show the redundancy of the systems that are in place, such that any discrete regulatory solution ends up hitting, it's like the gecko's tail: you'll cut that off and the thing will keep running. And so to what extent is the problem that we don't know, and transparency is the solution? And to what extent is the problem that data transmits so much power to two extremely entrenched systems, state and market, whose power is too resilient for the happenstance of mobilization of the population around a particular problem? So this is again back to very general philosophy. I do believe we will solve this problem. I don't believe that this is sort of the end of privacy in society, that we are gonna move towards these totalitarian, powerful surveillance entities. I think transparency is the first step. I mean, yes, the system is extraordinarily entrenched, and point interventions I think are much less effective than those on the receiving end want you to believe, as they scream as they're being pinpricked. But I don't see any other way to move forward than transparency, a discussion, figure out what we wanna do, and then figure out how to do it. It might be that the powerful are so entrenched that it can't be done. That feels unlikely to me. I think it's gonna be really difficult, but I think we've done harder ones before. It is fascinating to see the technologist placing hope in the legislative process through the mobilization of people. And when I talk to ACLU attorneys, they want the technology to solve the problem. So maybe we should. Right, and the law professor with a sufficiently bleak view of both the market and the government that the last refuge is Tor running on GNU/Linux. Yeah, pretty much. Not an absolute. Absolutely not. Yes. Hi, my name is Alice Alice-Mitt, and my next question actually ties in real good about alternatives, because we keep talking about how this is a problem, but what are the alternatives in terms of solutions? And what got me thinking about it is, there was a professor, after the whole Patriot Act was passed, who would upload to the government everything he did every second. I mean, even when he went to the loo, he recorded that and uploaded it on the government's website.
So I wanted to know, in terms of your work with, like, Data and Goliath, how much have you done on open and free software and Linux, and how are those alternatives playing into the field, or are they even playing in the field? I'm just kind of curious. This is Yochai's point, that there is a huge benefit to systems being built outside the two entrenched power structures. My worry right now is, simply because of the way the net works, that those solutions tend to be around the edges. The surveillance coming from your cell phone, the metadata, the data that has to be unprotected for the system to operate, is so great that a lot of these tech solutions don't really solve the problem. And I think they're very important and a part of the solution, but it really is, right now, a policy problem. We're living in a world where policy can subvert technology, where the United States, and I'm actually not making this up, can have secret court rulings on secret law and issue secret orders to companies and do secret things and then lie to you about it. That fundamentally means that any tech produced is inherently suspect. And I can't build technological solutions. And that includes the artisanal tech lovingly crafted by the distributed free software community. Well, because it's running on proprietary hardware, and there are ways to subvert layers down. I mean, we're talking about building open-source hardware, but then now it's the closed-source build tools. I mean, it really is turtles all the way down. So, because of this secret law. This is why I wrote, when these things came out, that the NSA has poisoned the internet. And this is really how: by subverting everything about the trust, we can't build anything trustworthy on the platforms. Bruce, you are now competing with Yochai in the bleakness department. Yochai had an early lead, but you're catching up quickly, speaking of turtles. Yeah. Okay. Hi, my name is Sarah Williams and I run the Civic Data Design Lab at MIT. I'm also an urban planning professor at MIT. And my work tries to look at how we can use data to create a public good. And so examples of what I mean by that are: if we have all the information about where Google searches for traffic, we actually can create better traffic patterns and create kind of smarter routing through cities. Or, another example is a lot of the healthcare data. If we had access to certain parts of it, we could actually look at the kind of environmental conditions that are causing those problems. So, you know, on the one hand, I'm really concerned about security. And on the other, I'm really interested in regulations around how to use or access this data for a public good. And I just wanted to ask the group, you know, how do we create regulations that also allow data to be leveraged in this way, as well as protect against some of the harms that it can cause citizens? I guess I'll start. I do talk about this; I believe that what you're talking about is one of the fundamental problems of the information age. This idea that our data in aggregate is incredibly valuable to us. And medical data I think is the best example: putting all of our medical records in one database and letting researchers at it would be incredibly beneficial to society. Yet, whoa, that's dangerous, that's personal. And, you know, something as simple as real-time directions from Google, it's the same thing. Really valuable, yet very personal. And I think we really have to address those individually and figure out mechanisms.
And I don't have an answer here for how we get those good things. There are so many good things in our data in aggregate, and so many bad things. But you don't see any technological silver bullets, like differential privacy or some other way of salting and hashing data so it's useful in the aggregate but not in the specific? There are point solutions for point problems, but there's no general solution. So there will be some tech. And it could be as simple as: there's a guy with a regulation sheet, and you go to him and say, hi, I'm a medical researcher and I want you to run this query. And he says, okay, that looks good, and gives you the answer. So there are ways, there are policy ways, there are tech ways. And I think what you said is core. That is the core issue of this entire conversation. Even the NSA, if you take them at face value, what they're saying is, give us all your data and we'll keep you safe from terrorism. I don't believe it, but that's their argument. Your data together is valuable to you. How do we enable them to do what they need to do as national intelligence, as law enforcement, in a way that doesn't allow them to abuse that? Any other thoughts on this question, Yochai? Yeah, and again, I'm just digging deeper into your book on this. I think one of the things you raise is the lack of necessity of completeness in order to achieve a high degree of the benefit. So Julie Cohen, when she was writing on privacy, talked about semantic discontinuity; others talk about imperfection. But I think the point you were making, which I think is important as a class of solutions, is the persistent imperfection of the collected data. So if I wanna go do research, I'll go do it through a private browser, maybe using StartPage instead of Google, to get one class of results, and that'll be enough. And you were talking about how sometimes you use someone else's purchasing card in order to mess with the data. So this trade-off between the quality of the overall insight and the imperfection of the model seems like a class of solutions that you were talking about in the book, an important part of the solution. And the question is whether there's enough noise to confound or obscure to some extent the individualized aspect, with enough clarity in the overall to get you the results. Any other thoughts from our panel? Bruce, maybe you can bring us in for a landing. It's an especially good question to ask you, because you write about one op-ed per hour. What's your next book? So I tend to be in a year of denial. I'm spending time thinking, and I'll say this since we were competing for pessimism, about catastrophic risk. The chapter that got pulled out of the book, because it didn't really fit, was about how this plays out in the future. How we deal with a society where individuals get very large destructive power. The extrapolation of giving people power is a nuclear bomb in everybody's back pocket. And how does society respond to that? How do we build robust systems, secure systems, when the Five Sigma guy can ruin it for everybody? And the Five Sigma guy is not, like, some corporation called Five Sigma. The outlier. Right, the extreme outlier. And I don't think it's even gonna be a terrorist. Terrorists are rational within their frame of reference. It's more the kid from Columbine, who's just gonna behave in the most extreme way possible.
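As an editorial aside on the differential-privacy idea raised in the exchange above: Bruce's "guy with a regulation sheet" who vets a researcher's query has a rough technical analogue, a data curator who answers aggregate queries with calibrated noise, so the answer is useful in the aggregate but not in the specific. The same principle underlies Yochai's point about deliberately injected noise. Below is a minimal sketch of the standard Laplace mechanism for a counting query; the record layout, field names, and epsilon value are illustrative assumptions, not anything said in the talk.

```python
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Answer "how many records match?" with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the true count by at most 1), so Laplace noise with
    scale 1/epsilon suffices for this single query.
    """
    true_count = sum(1 for record in records if predicate(record))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical medical records; the fields are made up for illustration.
patients = [
    {"age": 34, "diagnosis": "flu"},
    {"age": 70, "diagnosis": "diabetes"},
    {"age": 58, "diagnosis": "flu"},
]

# A vetted researcher's query: how many flu cases are in the database?
print(private_count(patients, lambda r: r["diagnosis"] == "flu"))
```

Smaller epsilon means more noise and more privacy; and because every additional query spends more of the privacy budget, someone, whether the human with the regulation sheet or an automated equivalent, still has to track who asked what. That is why this is a point solution rather than the general silver bullet the question was probing for.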
And just to telegraph it, have you gotten beyond the point, where I think you were last at, but I just wanna put it on the record, where the civil libertarian board member of the Electronic Frontier Foundation was like, the only way to deal with this catastrophic risk is through significant intrusion upon personal liberty? Actually, I don't believe that, because I don't believe that will actually deal with the risk. So you have moved since then. I mean, the best solution I think is to become the Borg, because that gets rid of the Five Sigma guy, because we're all doing the same thing. Would that be the check, then: when you have a low-probability but high-disutility event, you change your view about bulk collection and surveillance? So I believe, so surveillance tends to only work during the conspiracy phase. So I'm gonna make up a scenario: that someone invents a biological way you can kill everybody. Just sort of pretend it's there. There's a point where it's impossible and you don't have to worry about it. There's a point where it becomes possible, but only the people with the right ethics know how to do it. There's a point where a conspiracy can do it, and a point where anybody can just push a button. That's called the science fair project. Right, I mean, that's the, you got the bioprinter, click here to kill everybody. Warranty void. At these points, surveillance is irrelevant. Surveillance is actually only temporarily useful in the area where you need a conspiracy. So think of, I don't know, the Fort Hood shooter. No amount of surveillance will stop him. He's just a lone actor doing something. But so for threats in that zone, the answer to Joe's question is yes? For threats in this zone, where threats temporarily reside, yes. And then the reason that's not very satisfying to you is that the threats are all gonna end up at the science fair project. I'm not actually doing anything useful here. I need a better solution. So the title of your next book is We're All Doomed. It can't be, that doesn't sell. It's gotta be, it's gotta be... It comes back to incentives, doesn't it? It's gotta be We're All Doomed Unless You Read This. Too much fear ruins the greed. It's greed and fear. Greed and fear. Well, we are all doomed unless you join us at the reception, but first join together in thanking Bruce and our panelists.