I'm happy to introduce Neil. His book is called Intellectual Privacy: Rethinking Civil Liberties in the Digital Age. The Coop is offering copies for sale here today for $25; they accept cash and cards. So if this is of interest to you, I highly recommend picking it up. It's a great book. To tell you a little bit about Neil himself: Neil is an internationally recognized expert on privacy, information law, and freedom of expression. Neil teaches at the Washington University School of Law in St. Louis. He went to UVA for his undergrad and law school, and clerked for Chief Justice Rehnquist before going into academia. His book builds upon a lot of scholarship. He's been a real leader in this space and in the connection between privacy and expression and the First and Fourth Amendments. If you ever study this space, you'll get to know his work quite well. So it is my great pleasure to introduce Neil. Please join me in welcoming him to the stage. Neil.

Let me turn my microphone on before I speak. Thank you for coming. It's great to have such a great crowd, and hello, people of the internet. I know at least one of my students, a former Berkman Fellow, is watching. So hi, Brett.

I want to talk to you about a couple of big ideas about the way privacy and freedom of speech fit together. But first, I want to tell you a story. Does anyone know what this is? Depending on the crowd — if I were to give this talk to, say, law students, they would all know what it is. For those of you under 25, this is called a typewriter. Law students know what it is, but they've almost never used one, certainly never for serious academic purposes; maybe they played around with one in a craft project. And when I tell them about typewriters — when I tell my children, who are younger than law students, about typewriters — these devices are shrouded in a kind of antique mystery and a sort of retro cool. That is, until you tell them about this stuff, which was how you used to have to fix errors on the early typewriters. This is liquid paper. In England, where I grew up, it was called white-out. It's a kind of white paint. Because, remember — and the digital revolution has been so transformative that we almost have to remind ourselves how this old tech used to work — if you made a typo on the early typewriters, you actually had to paint over the word. Every typo you made, you'd paint over with liquid paper, and if you were lucky, or clever, you'd wait till it dried. If not, the hammers of the typewriter would strike it, you'd get a little avalanche of sludge coming up, and you'd be typing a gray G for the rest of the day.

I'm 42 years old, so I'm just old enough to remember having used one of these things for my first academic paper — actually just down the road, in the Lexington and Bedford school systems, so it is wonderful to be back in Boston. And I'm just young enough that I had no patience for this stuff. So I wrote my first academic paper on a typewriter. Actually, my mother had to do some of the typing for me, because I'm still not a very good typist. She doesn't type my papers for me now; I type my own stuff. But I was old enough to have used a typewriter, and young enough to have realized that this was nonsense and I wanted something better.
My bribe from my parents for moving to America at the age of 11 was one of these things. I asked my parents for Christmas for a word processor for my Commodore 64 — which, to be frank, I had actually gotten as a games machine; that's the joystick. My parents bought me this, which is Bank Street Writer, also available with an optional box that had a spell checker attached. And the beauty of a word processor, the beauty of Bank Street Writer, is that you didn't have to retype every draft. Every draft could be saved, and you could edit just a few words, or edit an entire paragraph. You could cut; you could paste without using scissors and glue. And this meant that the records of your communication — in this case an academic paper, but really anything you'd want to write, anything you would have typed on a typewriter — would be preserved.

This is what the interface looked like. I have to confess, this being a technically sophisticated crowd, that this was not my version, because this is a cracked version — but it was the only image I could find on Flickr. And this, I believe, is the Apple II interface; I couldn't find the Commodore interface online, and my Commodore 64 is a distant memory. But the beauty of this was that you could enter the text, and the text would be saved — if you were lucky and the five-and-a-quarter-inch disk drive worked — and you wouldn't have to retype everything.

This is the switch from analog to digital communications technology. It was a fairly trivial event in my life, but collectively it is an event of tremendous significance for us as humans. Because for generations, for centuries, the technologies we have used to communicate, to exchange ideas, to read, to learn about the world have been based upon paper. We have letters, we have newspapers, and we have books — just a random book, I promise. In our lifetimes, these technologies have been supplemented, and increasingly supplanted, by digital versions. Instead of letters, we have emails. Instead of newspapers, we have web pages. And instead of books, we have ebooks.

Now, we typically think about these technologies as implicating the freedom of speech. Of course, anyone with a computer can now connect to the world, can express their viewpoints, can learn about the world. But the digital nature of these technologies — the fact that, by design, they are engineered to create records: records of the data, records of data about the data, records of the transaction — also implicates privacy. More is now known about our communications, about our expression, about our reading, than ever before.
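[Editor's illustration: a minimal sketch of the "records of data about the data" point, using only Python's standard library. The addresses and subject are hypothetical; the point is that even the simplest digital message carries a transactional metadata trail alongside its content, by design.]

```python
# A toy illustration of "data about the data": even a minimal email
# carries transactional metadata alongside its content.
# Addresses here are hypothetical; Python standard library only.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "reader@example.com"    # who was speaking
msg["To"] = "friend@example.com"      # who was listening
msg["Subject"] = "A half-baked idea"  # what it concerned
msg.set_content("What if free speech depends on privacy?")

# The body is the "speech"; the headers are the record of the
# transaction, generated whether or not anyone ever reads them.
for name, value in msg.items():
    print(f"{name}: {value}")
```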
The old people in the room will recognize this slide, which is boring and tired. And I promise I have never in my career used this slide before — well, I've used it giving this talk at other places, but before that, never. I used to be the person who would groan loudly when the "On the internet, nobody knows you're a dog" cartoon from The New Yorker came up. So why is it here? It could be a sort of retro move. But the idea in the 1993 internet was that the internet was a technology of anonymity. TCP/IP and the associated protocols, the web protocols, by themselves didn't create a world of tracking and surveillance. But that world is the one we have now. And The New Yorker ran an updated cartoon with the same dogs earlier this year — a little longer in the tooth, a little grayer in the fur: remember when, on the internet, nobody knew who you were? The dogs that used to impersonate women in chat rooms are now sad that they've been outed for the porn dogs that they are.

OK, so why does this matter? I want to tell you a story about technology, but also a story about human values. This is a technological revolution, but I think it's important that the revolution have human values built into the technology — not just the values in the code, the ones and zeros, but important values like freedom of speech and privacy, and also equality and liberty and other sorts of things. So there are two ideas. The first idea, which I don't want to talk about too much today, is that when free speech and privacy conflict, free speech should usually win. The second idea is much more important: free speech requires a measure of what I call intellectual privacy. So what do I mean by intellectual privacy? Intellectual privacy is the shield for our intellectual explorations when we are making sense of the world and generating ideas — when we are reading, when we are thinking, and when we are speaking in confidence. That's the argument in a nutshell.

So back to the first idea: when free speech and privacy conflict, free speech should usually win. The story of privacy in America begins, if not in this building then very close to this building, with the publication of "The Right to Privacy" by Warren and Brandeis in the Harvard Law Review in 1890 (4 Harvard Law Review 193). Warren and Brandeis were enlisted by Warren's wife, Mabel Warren, who was rich and beautiful and deeply anti-Semitic, and who hated Brandeis because he was Jewish and wouldn't let Sam play with Louis after their wedding — even though Sam and Louis had been best friends who met at this law school, even though Sam had helped Louis when Louis's eyesight failed by reading his assignments to him, and they had graduated first and second in their class with grades that would not be duplicated in their lifetimes. Louis and Sam wanted to hang out together, so they wrote this article to protect Mabel, who was offended by the press incursions into her dinner parties, into the coverage of their wedding, and into her friendship with a woman who later married the president, Grover Cleveland — even though she was much, much younger than the president and had been his ward from when she was about 14 until she married him at about 19. So presidential politics haven't changed all that much.

Warren and Brandeis wrote this article, and it took a while to take off. But their argument was that the common law should recognize a right to privacy. It's familiar to lawyers. But it put privacy on a collision course with the emerging right of freedom of the press — which Brandeis himself also helped to promote when he was a justice in the 1920s and 1930s. And that's how American law has tended to think about privacy: a tort cause of action, preventing or remedying invasions of privacy by the press for publishing the truth — truth that people wanted to read. That's inconsistent with the modern First Amendment. But I want to talk much more about another way of thinking about privacy and freedom of speech. Because I think the old form of privacy, tort privacy — except maybe in the special context of revenge porn and non-consensual pornography — is not one we should spend too much time thinking about, and I think it has handicapped the way we think about the relationship between privacy and speech. OK, so intellectual privacy.
The protection from surveillance or interference when we're making up our minds about the world, when we are generating ideas. We live in what is by most measures the most speech-protective society in the history of the world. There are all sorts of prohibitions on the government stopping us from engaging in a vast number of hateful, harmful, offensive, libelous, blasphemous, in many cases defamatory, and just plain rude kinds of expression. But we haven't given very much attention to the processes by which we come up with our ideas in the first place.

Most of our most cherished ideas were once deeply subversive: the equality of the sexes, the equality of the races, the idea that the people should be in charge of the government and not the other way around, the idea that people should have freedom of religion and not live under some sort of authoritarian theocracy — though that one actually also started in Boston. These ideas were once deeply subversive, deeply dangerous, deeply threatening, and people, including people in this city, died in order to promote them. But we have little or no legal protection for the processes by which we develop those new and interesting and subversive and deviant and dangerous ideas. And it's hard for us to claim that we really have a system of free speech if we can't come up with anything interesting to say. If everything we say is just a louder version of things that have been said before, perhaps issued in a more bombastic way on cable news, that is not the kind of free speech culture I think we should have. And certainly it's not the kind of free speech culture that resonates with the best traditions of First Amendment theory.

OK, so there are two claims here. First, intellectual privacy matters. At this point, this is a normative claim: you either agree with me or you don't. But you might be saying, "OK, Richards, I agree with you about intellectual freedom and about subversion and dissent, but I don't see the relationship between surveillance and the chilling of the development of ideas." And so there is a second, empirical claim: that surveillance chills intellectual privacy. We have a number of sources of evidence — intuitions as well as proof — that surveillance does chill intellectual privacy. Of course, there is the cultural specter of George Orwell and the metaphor of Big Brother from 1984. There is also some interesting empirical proof.

This is a study from the University of Newcastle in England. So I want to give you some useful facts to take home with you. The first fact is that English people drink a lot of tea — particularly in the north of England, where I am from. I'm from the west side, not the east side, but the climate is equally oppressive and burdensome, and hot tea with milk is an important way of getting through the day, at least until beer becomes a solace at a certain point. The University of Newcastle's psychology department drank a lot of tea. They had a faculty honesty box where you would contribute your share. They had a number of teabags, which they counted, and an amount of milk, which they provided with the money from the honesty box, and they measured that too. So based upon the number of teabags and the amount of milk being consumed, they could measure how honest people were being in contributing to the kitty.
There was a sign on the kitty that said, please give 20p, or whatever it was, every time you make a cup of tea. And here was the variable: they changed the background of the sign. On even weeks, it was flowers — rhododendrons, hyacinths, pansies, daffodils. On odd weeks, it was eyeballs. And as you can see from their study, on the eyeball weeks there was more honesty from the psychologists — particularly week five, where there's a scary vampire guy, and also week one, where I believe the image is Pierluigi Collina, the referee with alopecia who used to referee World Cup finals. Perhaps there's a Collina effect contributing to the result as well. But the intuition here, the suggestion here, is that when we are watched — when there's even a sensation that we're being watched — we act differently.

Second fact about the world: people who work at Applebee's steal a lot of money from the cash register. One of my colleagues at Washington University studied the implementation of IT monitoring technology at Applebee's restaurants to deter employee theft, and found that when the employees were aware of the technology and the technology was deployed, they stole less from the cash register. So maybe you're saying at this point: well, good. Surveillance means people break the law less. They engage in less bad behavior; in fact, they contribute more to the common good. Surveillance makes us engage in more good behavior and less bad behavior — what's the problem? Well, the problem is that when it comes to our society, there is such a thing as bad behavior, but — as Justice Powell memorably put it in a Supreme Court opinion dealing with defamation — there is no such thing as a false idea.

And there is some evidence, also from Boston, from a colleague at MIT, that surveillance chills our ability to search on the internet. What Marthews and Tucker did was measure Google search trends before and after the Snowden revelations. And they found two things. They found, first of all, that after the Snowden revelations, people searched less for terms like "bomb making," "dirty bomb," "al-Qaeda," "terrorism," "TSA," "Logan Airport security plans," "Ayman al-Zawahiri." And you may be thinking, well, that's good — we don't want people learning how to make dirty bombs. But here's the interesting finding: they also found, globally, that people searched less for terms like "am I gay," "hair loss," "anorexia," "bulimia," "divorce lawyer," "Alcoholics Anonymous." In other words, they searched less for things that had nothing to do with the NSA's stated purpose of preventing terrorism. The awareness of being watched — the fact that the illusion of search privacy had been breached — meant they searched less for socially embarrassing, potentially controversial things. In First Amendment parlance, we call this a chilling effect, and this chilling effect was actually caused by NSA surveillance.
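[Editor's illustration: a minimal sketch of the before/after comparison at the heart of the Marthews and Tucker finding, using the unofficial pytrends library to pull Google Trends data. This illustrates the intuition only, not their actual methodology; the search term and timeframe are chosen for the example.]

```python
# Sketch of the intuition behind the Marthews/Tucker measurement:
# compare Google search interest in a sensitive, non-terrorism term
# before and after the Snowden revelations (June 5, 2013).
# Uses the unofficial pytrends library (pip install pytrends);
# illustrative only, not the study's difference-in-differences design.
import pandas as pd
from pytrends.request import TrendReq

SNOWDEN = pd.Timestamp("2013-06-05")

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["anorexia"],
                       timeframe="2013-01-01 2013-12-31", geo="US")
trend = pytrends.interest_over_time()  # weekly interest, 0-100 scale

before = trend.loc[trend.index < SNOWDEN, "anorexia"].mean()
after = trend.loc[trend.index >= SNOWDEN, "anorexia"].mean()
print(f"mean search interest before: {before:.1f}, after: {after:.1f}")
# A persistent drop after the revelations, for terms unrelated to
# terrorism, is the chilling-effect signature the study reports.
```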
The third fact I want to leave you with is that jihadis watch a lot of pornography. This was a fact the NSA discovered because it was monitoring the web surfing — and particularly the porn surfing — habits of radical jihadis. Not terrorists, not al-Qaeda leadership, not al-Qaeda members, not people on no-fly lists who were considered to be a threat: merely dissidents, people who were critical of American foreign policy from an Islamic or radical Islamic perspective. And this is a declassified Snowden document — I suppose if it weren't, my talk would be very short, and I'd be sent back to wherever I came from. But then, I guess Logan was my immigration port, so there would be a certain symmetry to that.

The NSA wished to disclose the porn habits of these jihadis, these radicals, in order to degrade their authority in the information war. I see that one of the authors of the study I mentioned just came in — Alex, you missed your slide; it was just up. So this is the spreadsheet that identifies a number of the targets. What do we have here? We have people who are clerics, who are academics, who are preachers, who are critical of US policy, who are US persons, and who are "radicalizers." I don't know who you think of when you put those variables together; I think of this guy, who was all of them. Martin Luther King, Jr. was surveilled by the FBI in the 1960s because they believed he was destabilizing the South because he was a communist. So they surveilled him. They found out, first of all, that he was destabilizing the South because he honestly believed it to be a deeply evil regime, one that violated people's human rights and was ungodly. But they also found out that he was having an affair. They listened to his phone conversations with a woman, or women, who were not his wife, and they sent him the tapes in an envelope, with a note which basically encouraged him to kill himself to save his family the embarrassment. The most important political dissident in American history — again, I'm English, so I don't like to think about the guys from the 1770s, but certainly from the last 100, 150 years, the most important dissident in American history — surveilled by the government, which sought to deter him through that surveillance. He died before he had to struggle with the issue, before the news could be broken.

So let me talk briefly about three special dimensions of intellectual privacy, then say a little about what should be done, and then we'll take questions. The first dimension is thinking: the process of thinking, the freedom of thought. What is this girl thinking? Well, we don't know — she's thinking very hard, though. We could ask her. We actually asked my son what he was thinking, and he gave us this. I didn't make him do it — it wasn't "go and draw a picture of your brain so I can use it in a talk." He actually came out of the playroom one day with this. It's a map of my son's brain, and it has lots of things that are important to a seven-year-old boy: play, and soccer, and family, and football, and Star Wars, and reading; a very little amount of sleep, which, if you know Declan, is true; and awesomeness, stupidness, and screaming, which, if you know Declan, is also true.

This is actually quite reliable, but we can be more systematic: we can just look at your search records. Think for a moment about everything you've entered into the search bar — Google, or Bing, or whatever your search engine of choice might be — over the course of your life. Think about all the wanderings, all the musings, all the embarrassing things. And then, if you have time tonight and you're logged into your Google account, you can just look it up, linked to your name. It's actually quite amusing, as long as you're in private — amusing to see all the things that you didn't know you wondered about, that you weren't sure about, the things you wanted to see and find out and learn.
But we are creating a record — in this case a partial transcript of the operations, of the wanderings, of the musings, of a human mind. And it is currently not especially protected by law. Google itself recognizes that searching and browsing are thinking. When it launched the Chrome browser, it used the tagline "browse the web as fast as you think." Using the internet, using Google services, is a kind of thinking, a kind of special activity.

Browsing also involves reading, which I want to talk about now. So what is this woman reading on her Kindle? Well, if she was an English woman — or maybe Scottish, by the hair — on the tube in London a few years ago, she was probably reading one of these books on her Kindle. E.L. James's publicist was quite clear that the Fifty Shades of Grey publishing phenomenon was one enabled by e-readers. She said that a woman — and it was almost all women in the early days of the explosion of reading of these books — sitting on the tube can read this book on her Kindle, or her Nook, or her Kobo, or whatever, safe in the knowledge that the people next to her and across the train from her have no idea what she's reading. She couldn't have brought an 800-page paperback volume of bad S&M-themed pornography onto the tube in those early days. Now anyone can read this stuff anywhere. But it was the privacy, the physical privacy, afforded by the e-reader. And there's a problem, because the Kindle creates an illusion of physical privacy at the same time as it is engineered by Amazon to record detailed, minute, granular dimensions of the reading habits of its users. Not just what book, but what page, how fast, whether they finished the book or not; not just what books they might like, but what books they actually do like, what books they have read — or, in the case of this book, maybe highlighted, or read and reread and reread particular passages. That kind of granular data about our reading habits has not been available before.

There's some law on this point. When Robert Bork was nominated by President Reagan to the Supreme Court in the mid-1980s, he denied the right to privacy — he, of course, meant abortion rather than the kind of privacy that I'm talking about. But Michael Dolan, an enterprising Washington City Paper reporter, knew that Bork used the same video store as he did, Potomac Video in Northwest D.C. So he went to the video store and said to the clerk, "That Bork — he's kind of a piece of work, isn't he?" And the clerk — I mean, I'm casting political assumptions upon video store clerks, but I believe the behavioral evidence bears this out — said, "Yeah, he really is." "Now, Bork says he doesn't believe in a right to privacy." "Oh, really?" "So can I have all of his movie rentals?" "Oh, sure." And so the clerk printed off, on his dot matrix printer, all of the movie rentals that Robert Bork had watched. And I know how you want this story to end, right? You want it to be sort of like Fifty Shades of Grey — Bork is really into, say, gay German erotica. But it turns out the most salacious thing was a lot of John le Carré novels made into films. There was also Sixteen Candles, but we believe that was rented by his sixteen-year-old daughter, not by Bork. Even so, it's not very juicy. What is juicy is the final line of the article. Dolan said: this was a really fun project, and now I've got a source.
There are 537 elected representatives in Washington, in the houses of Congress and in the White House. I wonder what they've been watching. Three weeks later, Congress passed the Video Privacy Protection Act, which protects the confidentiality of movie watching. And because it regulates the cultural practice rather than regulating VHS, it still applies today, and it has been interpreted to cover not just DVDs and Blu-rays but also your Netflix queue.

I mention my daughter because — actually, she asked why I was up in the office so much these days. I said, well, I'm working on the book. What's the book about? I said, well, this chapter is about the privacy of reading. And she said, well, why does that matter? So I gave her a short version — maybe not much shorter, a slightly shorter version — of what I've just said to you. And she said: "That's brilliant. I think intellectual privacy is really important. And it matters so me and Declan" — that's her brother — "so me and Declan can read inappropriate books." And I couldn't think of a better way of putting it myself. The right to read in a free society is the right to read inappropriate books, and the choice to read inappropriate books should be the reader's. It should not be monitored or tracked or surveilled. Bookstores and librarians should be the enablers of our inappropriate activity, our inappropriate intellectual activity, rather than the watchers and the monitors and the gatekeepers.

So let's say we've done some thinking and we've done some reading, and we have a half-baked idea about the world we want to share. Now, good practice at this point would not be to just post your wacky idea on a blog; you want to confide in a friend, and you might use the telephone or email to do that. This is Roy Olmstead. Olmstead was a former Seattle cop turned bootlegger during Prohibition. At one point he was the largest employer in the entire Puget Sound area. The federal officials — Eliot Ness and those guys — knew that Olmstead was a bootlegger and knew he was running the operation out of his house, but they couldn't get a warrant, or they chose not to get a warrant. So what they did was tap the phone line at some point between his house and the telephone company, and they found all sorts of evidence that Olmstead was engaged in bootlegging. They indicted him, they prosecuted him, they convicted him, and he appealed to the Supreme Court on the ground that there should have been a warrant before they obtained the information. The Supreme Court affirmed the conviction, but this guy again — Brandeis — dissented, repurposing his notion of the right to privacy from the 1890s and arguing that the Fourth Amendment should be read more broadly in a time of changing technology, to protect a person's thoughts, feelings, sensations, emotions, ideas: sort of the genesis of this idea of intellectual privacy that I've been working on.

Does anyone know who this is? This is Smiling Bob. He used to be on late-night TV — and maybe even not-so-late-night TV — advertising Enzyte, "the natural male enhancer," sold by a company called Berkeley Premium Nutraceuticals. Now, what does this have to do with intellectual privacy? It turned out that Enzyte didn't work, and it was really a credit card scam using this product as bait.
Mr. Warshak and his mother — a heartwarming tale of a mother-and-son business partnership — ran this erection-pill and credit-card-scam scheme together. They were investigated, and the government obtained their emails without a warrant and used the emails to prosecute them. The Warshaks appealed, and the Court of Appeals for the Sixth Circuit held that the Fourth Amendment requires a warrant before the police obtain emails. But this is only a Court of Appeals decision; in much of the rest of the country, the warrant requirement is not clearly established to be the law. There is some good news from this perspective: in the Riley decision last term, and in the Jones decision before it, the Supreme Court hinted that it was going to treat digital records with more protection, and it held in Riley that when the police arrest you — should the police arrest you, pursuant to a warrant — they have to get a separate warrant in order to search your smartphone, because of the importance of our mobile phones to our communications privacy and, implicitly, though they didn't use the term, to this idea that I've been calling intellectual privacy.

So, briefly, what should we do? We need to think, first of all, beyond tort privacy. This is a Manchester United player who, under English law — which is much broader here than US law — was able to obtain an injunction gagging the press from reporting the details of his affair with a reality TV star. I think that model — the Mabel Warren model, the Ryan Giggs model — is the wrong way to think about privacy and speech. We should think instead about protecting intellectual privacy through law. Now, you might think: surely Richards's intellectual privacy is just for pointy-headed intellectuals sitting in their ivory towers stroking their goatees. Well, first of all: large melon head, no goatee. But intellectual privacy is no more just for intellectuals than intellectual property or intellectual freedom is. Intellectual privacy is for anybody with an intellect, which is to say, intellectual privacy is for everyone. These are not my children — these are actually Woody Hartzog's kids, but they were very excited by the book, and he sent me this picture. So we should protect it across the board. We should recognize that intellectual records — records of our searching, thinking, reading, communications — are sensitive records, and give them more protection under our law than other kinds of records.

We need to reject the idea that privacy is binary — always on or always off, known only to me in my heart of hearts or, as Warren and Brandeis put it, proclaimed from the housetops and known to the entire world. Most information exists in intermediate states: it is known by some people, but not all people. And we need to recognize this fact in crafting information rules for the digital society, not keep a binary distinction between the public and the private. We need to recognize the importance of confidentiality — something lawyers realize enhances their relationships. Clients tell us information because they know we're not going to blabber. They know we're not going to use it to market to them, not going to sell it to the highest bidder or discharge it as an asset in bankruptcy. Because of the promise of confidentiality, the client tells us more information and better information, and we are better off as a result. But ultimately we need to think beyond law.
It's a strange thing for a law professor to be saying, but this problem is far too complex and far too important — the values are too important — to leave to the lawyers. We need to develop a professional ethics, and this professional ethics needs to be adopted by technologists as well as by users. Google's famous "don't be evil" mantra is a useful start in this respect, but it's by no means a complete or finished article. I think that when it comes to intellectual records, technologists could take a page from librarians, who were the first information professionals and who developed a robust body of theory protecting both the privacy and the confidentiality of patron records, under documents such as the Library Bill of Rights — which is over 70 years old now — and through entities like the American Library Association and its wonderfully named Office for Intellectual Freedom.

In developing an intellectual and information ethics, I think we are starting to see some evidence of this: we're starting to see some companies compete on intellectual privacy. The Mozilla browser is engineered according to a manifesto which protects privacy, promoting such values as limited data — collecting only the data that is necessary to accomplish the transaction — putting the user in control, and having settings that users can actually use. And of course DuckDuckGo, which is familiar to many people in the room, operates a search engine service under something less than fine-grained individualized surveillance. It works the way that search engines used to work seven or ten years ago, when we had no trouble finding things.

Some responsibility has to be placed upon users, too. Often technology goods are marketed on the Steve Jobs approach: we just invent something and persuade people that they want it — and this can be a deeply invasive type of technology. So users have to be more sophisticated. But I do think this is where law can come back in: I think we need to develop something like a doctrine of consumer protection for information products, and I believe the Federal Trade Commission is starting the process of doing that. More fundamentally, I think the novelist Zadie Smith put it best: in the Anglo-American world, we race ahead with technology and we hope that the ideas will look after themselves. My book — really, my work — is an attempt to give some attention to those ideas, to ensure that the society we are building contains not just numerical values but human values, and that we build these values into the technology before the technological environment and the social practices surrounding it stabilize, before it becomes too late because a world of surveillance has become the new normal. And one of the values that we have to be sure to enshrine is intellectual privacy. So I'm happy to take questions. Thank you.

Jones is walking around with a microphone.

Hi, Neil. I'm Dan Gilman — I'm visiting, actually, from the Federal Trade Commission, where I think some people are starting on some of these things, though I know not precisely how, which is part of the question. So I'm utterly persuaded by some of this. I'm familiar with the NSA paper and the change in search terms, and there are other reasons to think that surveillance can have suppressive effects in certain ways.
But what kind of surveillance, in what context, and how much — that all gets harder to tease out. So part of the value here, it seems to me, is in describing and articulating an interest that we have and ought to be aware of, for both public reasons and private ones. But part of the task seems to be going back to this FTC task, as you say, and then to the narrower legal task — that is, wrestling with this the way we've wrestled with commercial speech doctrine. How do we operationalize our notion of intellectual privacy, and how do we move not just toward seeing this as a priority to be protected, but toward weighing it in a way that lets us trade it off against other sorts of interests? How do we start to make this more technical and operational?

Yeah, great question. So obviously the goal here, as Dan says, is to identify the value. Someone said to me a couple of weeks ago that this idea is the word they were searching for in the Snowden revelations — a word to put to the intuitive sense that surveillance is bad, that a world of total transparency is not the world that we want. But as you say, someone in the United States — increasingly the Federal Trade Commission — has to make sense of these values and operationalize them. I think one thing we should do is extend the protections of the Video Privacy Protection Act across the board. I think we should cover books, which federal law doesn't cover — much to President Clinton's dismay during the impeachment scandal, with the books bought for him by Monica. I think we need to extend it to communications; we need to have ECPA reform. And I think what we need to do is identify the types of data — and also, in a big data world, the types of inferences, which may or may not be based upon sensitive data, so a two-tiered approach — the types of data and the types of inferences or uses that are particularly menacing to intellectual privacy, and at least build in a pause, if not a restriction; at least build in some sort of friction, so that the decision gets made and some rational thought is given to the practice. But some uses, I think, should be prohibited.

At the level of the FTC — there are some non-lawyers here, so: American privacy law hasn't operated on a global scale across the economy the way it has in every other industrialized economy in the world. There is a sectoral approach, which means some sectors of the economy get protected — maybe banking privacy, financial privacy, or videotapes, or telephones — but others, like books, don't. And what's happened is that the Federal Trade Commission, in its ancient mission — ancient by American administrative law standards — of protecting consumers, has filled this vacuum and has developed a jurisprudence of sorts: a series of consent decrees and investigations and a few enforcement actions against companies that have engaged in bad data practices. What needs to happen is that that body of law needs to continue to develop, and we should just find more people like Julie Brill, the commissioner, and put them there. We are being videotaped, after all.

No — I think what we've seen is a development of law on the metric of deceptiveness, right? From an external perspective, you can do anything you want as an American, but don't lie about it. That is the story of American political scandals, at least.
Privacy policies have emerged as an industry practice, and false statements in privacy policies — practices that don't line up with the policy — have been found to be deceptive. The FTC has also been developing a security jurisprudence; I think security is an important part of the story as well. And the third place, which could be the most profitable, is the other prong of FTC jurisdiction, which, as Dan knows better than I, is unfairness. What we need to do is develop some notion of what kinds of informational uses are appropriate and what kinds are inappropriate, the way we've decided what kinds of labeling requirements on industrial-economy products and other sorts of goods are safe and appropriate or inappropriate. Now, I'm under no illusions — this is going to take a long time. But think about the last major revolution we had, the industrial revolution. It took us 50 years or more to really grapple with the problems of unsafe working conditions and consumer products and pollution, and we're still grappling with them. So I'm under no illusion that we can pass a couple of laws and solve intellectual privacy in particular, or privacy in general, for the digital economy. But I think we need to start working, even if we realize there are going to be some hard value choices. And the more work we do along these lines, and the more we recognize intellectual privacy and other privacy values as values, the more that may enable us to create these islands of protection — and actually to allow more deregulated uses of data in other contexts, if they don't threaten the values that we've identified.

Hi, I'm Saul Tannenbaum, a local blogger and activist. Fascinating talk. But one of the things you didn't do, that one often hears in these sorts of talks, is talk about privacy from whom. And that often rat-holes a discussion, because when you say, well, the NSA shouldn't surveil, the response is, well, Google knows anyway. You started your talk talking about the First Amendment, which is a protection from government, and then you spent some time talking about protection from corporate surveillance, where there is no real First Amendment protection. Could you tease that out and explain why you went that way?

Yes — you see what I did there. I actually used the words "free speech" rather than "First Amendment." I think all too often American law pivots on this very hard distinction between the state, which scares libertarians, and private actors, which a lot of libertarians love but a lot of progressives hate. We tend to call this distinction public and private, and we tend to call information public and private too, as if everything is either public or private — public things we care about and private things we don't. And the reality of information practice is that the line between the state and corporate entities — as any quick perusal of the Snowden revelations will reveal — is a very blurry one, not just in terms of data flowing in both directions, but in terms of technologies being developed by the state and deployed by private actors, and technologies being developed by private actors and deployed by the state. So if we focus only on problems of the government, or only on problems of corporate actors, I think we're missing something really big, which is that both of these things are going on at the same time. I think it's instructive.
About a year ago, the White House issued a big data report, and it said: you people in industry need to do a better job dealing with the consumer data you're collecting and using; we're particularly concerned about some of the discriminatory uses of big data. And the NSA is mentioned once in this massive White House report on big data practices. I'm sure the White House didn't forget about the NSA — the NSA works for them. At the same time, a lot of the big tech companies wrote an open letter to President Obama after the Snowden revelations — Microsoft and Google and Apple and others. They were concerned that, particularly in overseas markets, for some reason the Dutch or the Belgians aren't as okay with American technology companies and American spy agencies sharing data with each other, and it was hurting their business competitiveness against other countries' corporations. So they wrote the letter — the great line, from Brad Smith, the general counsel of Microsoft, is: "Mr. President, people won't use technology they don't trust." They didn't mention their own back-end data practices. And I think what's going on is that everybody recognizes that the other guy is the problem — we were talking about this this morning, right, with censorship — everybody recognizes that the other guy is the problem, and no one is minding their own backyard. And because the technologies, the practices, the kinds of discrimination are the same, it makes sense to at least envision the problem at a societal level that transcends the old-fashioned public-private divide.

For particular solutions, you're exactly right: the First Amendment is going to require more of the government than our commitment to freedom of speech will require of private actors. But I don't think we should cease caring about free speech just because individuals are doing the censoring rather than the state. I'm doing a separate project right now about free speech in mediated environments. I spent two days last week with a large technology company that runs an online gaming service, and they moderate, in a very sophisticated way, user expression in that service. Private kinds of content moderation slash censorship are becoming increasingly important, and I think we need to care about those as well. I could write another book about that. But that's why I did it this way: I think we need to look at the entire problem all together.

David Larshall, Berkman Center. Thanks for this talk. A lot of your examples are about media that can be passively consumed — books, movies, web search results. I'm wondering how you think about where social media fits into this picture: Twitter, Facebook, where there's both a follower relationship that's often made public, an explicit part of the platform, and where part of the medium is the participatory nature, the interaction. Is there a way of having privacy in those spaces?

Yeah, I think there is. The hard thing about new media, new technologies, is that they do tend to blur the boundaries. That's why I've used the older examples — because they tend to illustrate not just the values, but the ancientness of some of the values I'm talking about, the fact that people have been worried about and thinking about these values for a very long time.
I think for new platforms — maybe Twitter or Snapchat or what have you — it's actually a good thing that we're seeing the development of new kinds of media and new kinds of technologies, new ways of engaging with the world and learning about the world. I think we need to build privacy, where appropriate, into those technologies. But the hard thing is that it really depends; without an example, it's difficult to say. So let's use Twitter and Snapchat as examples, because we know those technologies. Twitter, unless you have a private account, is a largely public platform, and I don't think there's a need for privacy in things you spread to the world under your Twitter handle. However, at the back end there's an awful lot of important privacy to do with anonymous or pseudonymous speech — which is why I think Twitter's development of transparency reports and its protection of user privacy against government requests matter, as does the existence of a fairly robust privacy policy saying: here's what we collect, here's what we know, here's what we do. Those are promises, and if you break them, the FTC will pay you a visit. With a technology like Snapchat, I think it's interesting because it shows that people do actually care about digital expression that is private, or controllable, or moderated — somewhere between fully public and fully private — and that is evanescent rather than stored and logged forever. And I think Snapchat's own privacy struggles are a direct result of a mismatch between users' expectation of a private or pseudonymous or evanescent communications technology and the engineering difficulties of delivering that. So I'm happy to talk more about this, but I do think we should build the human values into the new technologies, where appropriate. There might be technologies we want to be entirely public and some that we want to be entirely private — like making sure encryption remains strong — but it's going to be context dependent, so it's hard to answer in the aggregate. It's a great point.

Hi, I'm a computer scientist here at Harvard. I wanted to follow up on the question relating to privacy from whom. You gave some nice examples in the presentation of studies that show the chilling effects of surveillance, but the examples you mentioned all involve surveillance by an authority figure: after the Snowden revelations, people became aware of surveillance by the NSA; with the paying for the tea, it was a perceived authority figure. My question is, has it also been established, through similar sorts of studies, that there's a chilling effect of surveillance even by non-authority figures? So, for example, in the context of interactions with service providers and technology companies, independent of the possibility that they might share the data with the NSA or the like?

Could you repeat the last part of that again? Sorry.

So again: studies showing a chilling effect of surveillance when a government-like authority figure is not part of the picture.

Yeah. Well, a number of the examples I gave involve not the government. The difficulty is that there's not as much rich empirical evidence here, whether qualitative or quantitative, as we would like. Actually, I believe there's quite a bit of good quantitative evidence, because I believe a lot of the tech companies are studying this fairly closely with A/B
testing and various things, but they're not necessarily publishing these sorts of studies — a notable exception being Facebook's mood study, which met with widespread disdain and fear as a result of the potential power it revealed. But actually, maybe even the mood study is a good example of the ways in which interfaces, broadly, can be engineered to either encourage sharing or discourage sharing, or really to manipulate behavior. So no — and if there are computer scientists or social scientists who would like to study this more, I'd love to talk to them, because I think we're blundering forward into this digital future, sometimes at a thousand miles an hour, under the Zuckerbergian mantra of move fast and break stuff and disrupt. And I'm worried that while some of the disruption is good, we're disrupting some very important and long-held human values — and we don't actually know whether they're being disrupted or not, as collateral damage for the Apple Watch or whatever the latest trinket or bauble is going to be. Particularly as technologies like what's called the Internet of Things, or personalized robotics, or drones infiltrate our lives, we are engaging in a massive natural experiment in which we have almost no data. So the no-data argument can actually run in the opposite direction: who are you — not you, because I'm sure your work is lovely — but who are you, the technologists, to build these massively disruptive technologies and to idolize disruption as a sort of mystical force of all-powerful goodness, when actually things have been pretty good? Maybe we should look before we leap. So I think we do need to moderate some of the rhetoric coming out of the VC-funded marketing departments of internet startups, and to be sure that the internet — really, the digital society we are building — is a society that we want, one that is consonant with our values, whatever we decide those ultimately to be. It could be a world of no privacy, if that's what we want — but we should make that choice on our terms, rather than on Sand Hill Road or in Palo Alto or Seattle.

Time for one more question — right here.

Hi, John Stubbs, I'm a fellow at the Berkman Center. I'm just wondering why there isn't more demand for this. I think that kind of plays out in your slide about the shiny object: I get an MP3 player, I'll give you whatever you want. Do you think we're on a trajectory where more demand is being built through greater understanding of what some of these technologies mean and how surveillance and data mining work? Or has everybody kind of signed on to "we just want the singularity, let's get there as fast as possible"?

Should we vote? This might be a self-selected room on the singularity, and maybe I don't want to vote on that. There's a higher percentage of singularity worshipers here than maybe anywhere else — though Google's got a lot. So why don't people demand this? I think people do care about this, but just as we don't have the evidence, we don't have — we were discussing this before lunch — the cultural framework to understand what we're building. There's a wonderful quote from the Magna Carta that I saw on the way in. We have 800 years, this year, of experience in building legal and cultural models to restrain the power of the state.
We have 200 years of thinking about corporations. But this internet thing is new, and I think people are anxious. They want to build technologies, they want technologies they can use to make their lives better, but they don't have the time, the technical understanding, or the framework with which to tell the good from the bad. So I think we are in a time of great anxiety, and you're seeing that reflected in things like Snowden and in the reaction to the mood study. But that anxiety hasn't yet coalesced around the issues that we think we should care about — other than that people like free stuff. Though that's a whole separate issue for another day.

Great. So your own publicly disposable but non-trackable version of intellectual privacy can be picked up right here. Please join me in thanking Neil Richards so much for taking the time to talk to us.