So good afternoon, everybody. We'll get started. For those of you who don't know me, my name is David O'Brien. I'm a senior researcher and the assistant research director for privacy and security at the Berkman Klein Center. And before I introduce our speaker today, I have a few housekeeping notes to share with you. First, we are webcasting this event today, so please be aware that we're both broadcasting live and recording it. The video will show up on the Berkman Klein Center's website. And second, for anyone who is watching remotely, hello, and welcome to the room; we will be monitoring tweets. If you have a question or a thought, we'll keep an eye on those. If you tweet at @BKCHarvard, we'll make sure to weave it into our Q&A session. And today, it's my pleasure to introduce all of you to Woody Hartzog. Woody is a professor of law and computer science at Northeastern University. He's also a faculty associate this year at the Berkman Klein Center. His research focuses on a really interesting area, in particular on evolving conceptions of privacy, data protection frameworks, and the rules around automation, artificial intelligence, and robotics. He's one of that rare breed of law professors who also holds a PhD in mass communication. And if you study this field, you're likely already familiar with a lot of his work. I count him certainly among the leading scholars. He's written numerous articles that I think have been great contributions to the field. You can find them in places like the Yale, Columbia, California, and Michigan law reviews. He's also written in The Guardian, Wired, the BBC, and many other places. Just last week, Woody was down on the Hill in DC, testifying before the Senate Commerce Committee on federal data privacy frameworks. And today, Woody's here to talk about his latest book, which you see here on the screen, Privacy's Blueprint, published by Harvard University Press. I personally can't recommend this book enough. Anytime someone asks me, I'm really interested in this, what should I read, I point them to this. And you all should, if you haven't already, take the time to read it and meditate on it. And so, without further ado, Woody, come on up; we welcome you to the stage. Thank you so much, of course. I appreciate it. Thanks, David, and everyone at the Berkman Klein Center for hosting me today. It's really a pleasure to be able to come and talk to you. Let me make sure I don't have a hot mic here. So I want to start my talk with a series of stories. The first one is a story about a user interface that asked people if they would give their permission to share their contact information, including information about their network of friends, with third-party apps that they were downloading or using. And as many of you may know, that data made its way to the firm Cambridge Analytica, which then spilled over and began an international debate, I think, about the ways in which we collect and use information. A second story has to do with this photo, which I found when I was doing research for the book. This is a photo from a hacked Wi-Fi-enabled baby monitor. And the disturbing thing about this photo is not necessarily that it is a hacked photo, but that it was found, I found this through a story on Ars Technica, on a search engine dedicated to archiving photos of hacked baby monitors. And so it's not just one photo. There's a search engine for it if you're interested in it. 
And then the final story that I wanted to talk about, which many of you are likely familiar with, is the story of the fight between Apple and the Department of Justice and the FBI over whether Apple would make an alternative version of its iOS software for the iPhone that would allow law enforcement to bypass encryption. Now, what do all three of these stories have in common? They are stories about the design of information technologies. And in my talk today, I'd like to make three points, the same three points that I try to make in the book. The first is that design matters for privacy more than we, lawmakers and policymakers in particular, have admitted. The second point is the thesis of the book, which is that privacy law should take design more seriously. And I'm going to talk a little bit about what that means. And the third is that a design agenda should have roots in consumer protection and surveillance law, in contrast to some of the prevailing wisdom about how we should regulate data and information technologies, which looks at things from a data protection perspective. And I'll compare the two and what I think is different about them. All right, so first, design matters for privacy. The reason that I think design matters for privacy is that it is everywhere, it is power, and it is political. What do I mean by that? Well, first, design is everywhere. This was one of the most interesting things about writing the book, looking for anecdotes and examples to prove the point: once you start seeing the ways in which design matters for our privacy, you see it all over the place. And a really good example is this. Does anyone know what this is? Does anyone recognize this? What is this? Snapchat, right? This is technically marketing material for Snapchat, but it's a mock-up of the user interface for the social media app Snapchat. OK, without knowing what this is or what this does, what do you think this app is used for? I've given this talk to a lot of non-academic audiences, and my favorite thing is to ask people who have never even heard of Snapchat: what does this do? Close, the seven seconds matters. It allows you to take a photo, and after seven seconds, the photo is supposed to disappear on the other end. It's supposed to be an ephemeral media service. And this is what it was when it first launched. And you can guess that precisely because it's got the scroll wheel. Without any directions, we know exactly how long it's supposed to last. You can send it to other people. And then there's another design element here that's maybe not so subtle. What sort of photos do you think Snapchat is telling you this service could be good for? Sexual ones, right? Snapchat, when it first came out at least, was known as the sexting app. And perhaps it's no secret why. We've got some very carefully cropped photos here where you can't tell whether these women are wearing clothes or not. And the invitation, the cumulative effect of the design, is: you can share this on our service because it's safe, because it won't last forever. It's temporary. You see all the signals there? Temporary. So you can be a little more risqué on this service than you could on other sites. And all of this is conveyed with pure design, without any sort of explicit instruction. Design decisions are all over the place. What is this? Everyone probably knows what this is. Google Glass, right? Google Glass, one of the more infamous wearable technologies. 
People might have been able to handle, I think, a screen over their eyes. But there was one design decision that may have doomed the Google Glass project. What was it? Oh, the vibration there. I didn't even think about that one. Actually, I was thinking about the camera, right? You don't know when it's on. You don't know when it's on? Although technically, there was an interesting design feature there. There was a red light that was supposed to come on whenever it was active. But that didn't seem to comfort people too much, right? Because there was a camera. Now, the concern about the camera might ostensibly seem silly, because of course, every single one of us already has a camera on us at all times. But there's actually another design decision that makes this camera different. And that's the fact that it goes around your ears, and you can wear it at all times. It reduces the cost of accessing the camera from having to, and now it seems trivial, reach in, pull it out, open the thing, and take the photo. Google Glass used design to reduce the cost of taking photos. And that made people intensely uncomfortable. Right, and surreptitious, right. The idea. Right, exactly. It's not just a cost; it was about visibility as well. People might not notice the red light even if it was on. And so there were websites dedicated to "glassholes," as they were called. And it was met with a lot of backlash because of that design decision. And then many of you may recognize this icon. Does anyone know what this is? Well, it certainly indicates something nefarious, doesn't it? Or at least some kind of subterfuge. What is this? Private browsing. This is incognito mode for the Google Chrome browser. And whenever you open up incognito mode, you see this, I don't know, shady character, someone who doesn't want to be detected, right? Someone who has got the hat pulled down low and the collar pulled up. And the idea is, if you don't want to be seen or recognized, you are now in a context where that's possible. Now, without saying anything else, you might think that you are more protected than you actually are. Because of course, that's not how Google Chrome works, right? Just because you're using incognito mode doesn't mean that no one can see anything that you're doing. The website that you're interacting with can see what you're doing. Your ISP can still see what you're doing. Incognito mode just removes the traces of what you've been doing from your own computer. And to Google's credit, they actually have a series of disclaimers about what incognito mode does right below this icon. But if those weren't there, you might think, ooh, now I'm in private surfing mode, right? No one can see what I do online, all because of design. So design is everywhere. Design is also power, meaning that it does several different things. It makes things easier or harder. It affects transaction costs. And it gives off signals. It tells us something about how the technology works or about the relationship that we're getting into as users of the technology. One of my favorite research projects that I looked at for this book was one conducted by Harvard's own Leslie John with Alessandro Acquisti and George Loewenstein at Carnegie Mellon. They brought research subjects in and had them answer a series of questions. And the questions were relatively sensitive. So if you can see this, it says, have you ever smoked marijuana? 
Have you ever cheated while in a relationship? Have you ever driven when you were pretty sure you were over the legal blood alcohol level? And this is just a taste of the kinds of questions that they were asked. Basically, have you ever committed a crime? And what's the first thing that you notice when you sit down and look at this interface? Comic Sans font. No one in history has ever taken seriously anything written in Comic Sans. What else? Yeah, the devil icon and the "u," right? It's funny, right? The devil icon is a devil, but it's an emoji devil, a cutesy devil. And it says, how bad are you, with multiple exclamation points. The idea is that it's cutesy, right? And yeah, binary answers here. Yes or no, right? There's no in-between. There's no skip option here. Forced choices. But the overall effect is that they want to ask, how bad are you, right? But what they're really saying is, you're bad. You're just a little bad, right? Everyone's just a little bad. You can admit it here. Now, here's the control group. Gone is the Comic Sans, replaced with a respectable sans-serif font. Gone is the "how bad are you," replaced with the respectable imprimatur of Carnegie Mellon University. And this research project really illustrated the point to me, and I want to quote it exactly, because I don't want to misstate their conclusions. They call this the non-frivolous interface and this the frivolous interface. And the scholars said that relative to the non-frivolous interface, participants in the frivolous-looking survey, which is this one, which asked identical questions, were on average 1.7 times more likely to admit to having engaged in risky behaviors. For example, the scholars said, a participant in the frivolous-looking survey was, on average, 2.03 times more likely to admit having ever taken nude pictures of himself or a partner. And they concluded that people, it seems, feel more comfortable providing personal information on unprofessional sites that, arguably, are more likely to misuse it. Every single design decision makes a certain reality more or less likely. That is power. Finally, that brings me to my last point. Design is political. It's always political. When I first started giving this talk, I got some pushback from people who said, why are you obsessed with the design of information technologies? Why wouldn't you just regulate misuses of that technology? And we see this playing out in debates now. So people say, you wouldn't regulate a knife. You regulate stabbing people. Knives can cut vegetables. They can be used for very useful things. But they can also be used for bad things. And so we should regulate those uses, not the technologies themselves. And it often gets distilled down to this maxim: there are no bad technologies, only bad users. I don't think that's true. I think that because every single design decision makes a certain reality more or less likely, every technology that is designed to have some sort of effect on the world is inherently political, maybe just a little bit. Even ignoring the realities that technologies are likely to lead to is a decision in and of itself. And so what that means, I think, in the way that this plays out, is that lawmakers should not ignore the design of information technologies. To do so is actually a choice to ignore its effects. 
And maybe those choices are good or not, but let's not make them under the pretense of neutrality. OK. The second part of the book argues that privacy law should take design more seriously. And the reason why is that if you look at privacy law, there's actually a design gap there. There are several holes that I think ignoring the role of design has left in our jurisprudence. Now, I teach information privacy law. And it turns out that even though it's infinitely complex, a lot of data privacy law can be distilled down to three rules, just three. And I'll summarize them now, so you never have to take information privacy law. The first one is basically: follow the FIPs, what are known as the fair information practices. The fair information practices are the building blocks of modern data protection law. They are things like access, notice, choice, data minimization, procedural safeguards, things that we all recognize. They are the fundamental building blocks of statutes like the Fair Credit Reporting Act, the Privacy Act, and most notably the General Data Protection Regulation in Europe. The FIPs have become, amazingly for a concept as broad and diverse as privacy, almost an international language of privacy, which is remarkable. And so the FIPs are what we tend to embrace. But the problem with the FIPs is that they almost always distill down into this notion that we should give people, people as data subjects, control over their personal information. This is a compelling concept. It drives a lot of our national conversation about what our data protection law should be. The problem with control, and this is one of the major points of the book and something that I feel really strongly about, is that control isn't workable as a solution, as a regulatory strategy for privacy. And here's why. The way that control gets instantiated is through this thing. Everyone recognizes this. It's a toggle switch. Can this app collect your geolocation information? Green means yes, gray means no. You click one button, boop, done and done. Fair enough, simple enough, except it's never just one. So we say, OK, I've got a few decisions to make here. Compass calibration, that seems good. I think I want that. Find My iPad, that's useful. It helps me find my iPad when I've lost it. I can remotely brick it. That's good. Location-based alerts, I don't know what that is, but maybe it's good. Location-based ads, I don't know how that's different from location-based alerts, but maybe that's good. Setting time zone, share my location with whom. So we've got a series of decisions that we have to go through here. But let's say we take the time, we make the decisions, we go through, we pour a second cup of coffee, we work through it, and we say, OK, I think I'm set. Oh, god. Now I've got lots more decisions to make. The problem with thinking about privacy as control is that it simply doesn't scale. We don't have the resources, because we would have to keep fiddling with these switches forever and ever, every day, for the rest of our lives. If we conceptualize privacy as control, then we are gifted with so much of it that we choke on it. And the way that happens is through design. The second rule of privacy is: do not lie. 
This is how the US first entered privacy regulation in the 1990s, through the Federal Trade Commission's authority to regulate unfair and deceptive trade practices, where they said, listen, we don't want to regulate too much because this is a brand new technology, the internet is a little baby, we don't want to crush it. So what we're going to say is, do whatever you want, but if you promise something, just make sure that you do it. Fair enough. The problem with the do-not-lie ethos, though, is that technical truths can get embedded all sorts of places and mean nothing. If you wanted to hide the location of a dead body, where's the first place you would put it, where you know no one would read it? In the terms and conditions. You'd put it in the privacy policy, because it would be guaranteed to remain unread. So the do-not-lie rule doesn't help us when technical truths are embedded, through design, in places where we know they're never going to get read. Or what happens is that we oversimplify. We use design to show a pop-up that says, by the way, here's a thing, but there's only so much information you can convey in a little pop-up. How many of you, when you are in Europe and you go to a website and the I Agree banner pops up, this website uses cookies, you're like, yes, yes, yes, fine, whatever. Click, and you agree. Technical truth, worthless for privacy. The final area in which privacy's design gap manifests itself is in the rule of do not harm. This is sort of the final rule. And it's manifested in the Federal Trade Commission's authority to police unfair trade practices. And the do-not-harm mentality says, well, as long as you don't physically or emotionally or financially hurt someone, then you're fine. But the problem with modern privacy harms is that they don't always manifest themselves as clearly tangible harms. Rather, it's death by a thousand cuts, a little information here, a little information there. And each discrete disclosure isn't necessarily enough to rise to the level of what the law would recognize as a tangible harm. And so when data breaches happen, for example, information gets slowly compromised bit by bit. But it's difficult to say any one of those rises to a particular level of harm, unless you have your bank account compromised or something like that. And so the design gap manifests there. Okay, so what do we do about it? This is the next part of the book, where I propose a theory, a design agenda for lawmakers, and the way in which I lay it out has three parts. One, we have to identify values. What are the values that we want design to further? Two, what are the boundaries that we want design not to cross? And three, what are the tools we can use to effectuate that? In terms of values, I say your mileage may vary. There are multiple values that you might want to see represented in design. For my part, I chose three of what I think are the most important values, at least for privacy, and it's actually not control. It's trust, obscurity, and autonomy. Trust, I think, should be one of the key components of modern privacy law, but it's something that we've forgotten about: when we disclose information to people, we often do so within relationships of trust. There are promises that are made, expectations that people have about discretion, whether you're going to reveal certain kinds of information; about honesty, what sort of things do I need to know? 
Not what's buried in the fine print, but what you should tell me even if you don't want me to know. Protection: are you going to store my passwords in clear text? Please, no. And finally, what I think is one of the most important values or concepts that's been missing in privacy law thus far is the concept of loyalty, which is not putting the data processor's own needs in front of the interests or needs of the data subject, or at least not unreasonably so. This is in harmony with Jack Balkin and Jonathan Zittrain's concept of information fiduciaries, which we can talk about a little later, and I argue that these values should be reflected in the design of information technologies. Another value that I think is less established in privacy law but equally important is the concept of obscurity, the idea that if information or people are unlikely to be found or understood, they are, to a relative degree, safe. And we live our lives surrounded by zones of obscurity. When all of you walked here today, you walked down the street probably thinking that that walk wasn't going to be broadcast on the Times Square jumbotron, right? We use that kind of risk calculus to adjust ourselves all the time, and the problem is that surveillance technologies, what I call seeking technologies, can eviscerate that obscurity. There are biometrics and other sorts of surveillance technologies that can significantly alter the risk calculus that we use every day to judge whether it's safe to pick our nose or gossip in the hallway or do random things that we're pretty sure most people are never going to see. And then finally, the third value that we should seek to further is autonomy. And I draw a distinction between autonomy and control. Sometimes people use those two terms synonymously, but I don't think they're the same thing. Control could serve autonomy. But as I tried to demonstrate earlier, if you're given too much control through design, it actually is corrosive to autonomy, because it inhibits our ability to make decisions, and it works as an interference because our silence can be taken as acquiescence. Okay, so at the end of the book, I make the argument that the design agenda should have roots in consumer protection and surveillance law. And I say that if you look at what we've already created, not within data protection law but within consumer protection and surveillance law, we can pull from a few standards already: deceptive design, abusive design, and dangerous design, which are potential boundaries that we can draw from. If we create an app that says "add friends," the idea is that it's not going to go through your address book, automatically upload the entire address book, and send invitations out to everyone. This is the Path case, where the Federal Trade Commission brought a complaint against the social network Path. We should also avoid interfaces that are unduly malicious or abusive or manipulative. And what I mean by that, one of the things that has been interesting and fun to try to collect, is all the examples of what designer Harry Brignull calls dark patterns, which are user interfaces designed to make people do things that they might not want to do, or to channel them into certain kinds of behavior. And this is one that I think is called confirm shaming, right, which is here at the bottom. It says, "No thanks, I'd rather pay full price for delicious tea." If you want to decline, you have to click on that. 
And here you have shaming: what if your neighbors knew whether you voted or not, right? So people's personal information is leveraged against them in these sort of abusive ways. Here, one says "stop losing customers," and then to decline it says, "No thanks, I'm fine with losing customers." There was one that I found for a health app, and to decline it you had to click something like, "No thanks, I'd prefer to bleed out." And of course we laugh it off, but over time that sort of wheedling and chipping away at our resolve can have an effect in the aggregate. Then there's the use of things like double and triple negatives, like "I don't not disagree to this thing." It's not deceptive, but it is abusive, right? It leverages our own cognitive and resource limitations against us. And then finally, there are certain things that are outright dangerous. Spy cams, I think, could be outright dangerous. This is a camera, and it's designed for one thing only, right? It is to surveil you in a way that you don't know you're being watched. In the book I talk about a toothbrush that has a camera in it; it's a spy cam. That's designed for one thing: bathroom voyeurism. And that, to me, might be too dangerous. And so what are the responses that we can have here? Well, we can have a soft response, a moderate response, or a robust response. The soft response involves funding. It involves education efforts. It involves trying to work with academics and industry to create standards around the design of information technologies. And we see certain efforts underway to do that right now. There could be moderate efforts. Moderate efforts might involve simply being more aware of design in the frameworks that we currently have. And so in the book I talk a lot about how promises are being made through design all the time. Yet judges, when interpreting what the agreement is between users and websites, very rarely look to the design. They almost exclusively look to the terms of use. Even though things like padlock icons are basically design-based promises of some kind of protection, right? They say, well, this will protect you in some way. And we need to be a little more critical about what design is trying to tell us. And then finally, and we can talk about this in the Q&A a little, some of the robust responses could include financial punishment for creating certain kinds of technologies, or perhaps outright bans. We ban spyware. We ban trafficking in spyware. We might want to consider that model a little more. One of the things that I've been vocal about is facial recognition technology, which I view as probably the most dangerous surveillance technology ever created. And I think that it's so dangerous that we need to move beyond simply trying to get consent for uses of facial recognition and start thinking a little bit more about robust responses and what those might look like, including possible moratoriums or outright bans. And then in the final part of the book, I take the blueprint that I propose and show how it might work within the context of social media, how it might work with hide-and-seek technologies, and how it might work in what I call the flaming dumpster fire of the internet of things. So that's it. I'll wrap up now, and I'm happy to move to Q&A. Excellent, thank you, Woody. All right, so that was really a terrific presentation. Thank you so much. 
Now we'll go over to Q&A, and actually we have ample time for that. So start to think about your questions. I had a couple that I wanted to get us going with, and I'm gonna put on my libertarian hat as best I can here. It seems a little bit like what you're saying is that we need to be a little bit more maternalistic or paternalistic when it comes to design choices, from the top down more or less. But you also mentioned that autonomy and control are still important factors here. I mean, where does consent fit into this? We know how it fails, right? We know that people don't read things like terms of service. But where could there still be places for consent, or how do you think about that? And then the other thing I'd like you to address, if you might, is manipulation. That's a wonderful topic. It's incredibly broad and very subjective, perhaps, in my view. How do we think about that too? Okay, that's a great question, thank you. So in terms of consent: somewhere along the way in the 1970s, when data protection frameworks were first being conceived and we first started to realize that data being aggregated posed potential issues, we started thinking about what the framework should be for regulating data, and one of the frameworks that came out of that was this concept of informed consent. The idea that people should have a choice as to whether their data is used or not. And it seemed like a really good thing, because who doesn't want more control and more choice? And it works at least somewhat well in other areas, like with informed consent as a cause of action for surgery. So anyone here that's had a medical procedure done recently, someone came to you, they sat down, they said, listen, this could be pretty risky. Are you sure you want to do this? Here are the alternatives. And then you sign off and you say, I give informed consent. I teach torts; we use consent as a defense to battery. It's not as though we just drew this concept out of thin air. Here's the problem, though. We didn't think it through well enough, because consent only works within a limited context, within a limited environment. And if you think about the areas in which informed consent theoretically works, they share three common features that make it at least a possibly workable concept. One of those is that informed consent is asked for infrequently. So when we give consent for surgery, we don't have to make that decision very often, right? So we actually have time to think about it and say, okay, this is a big deal, I should consider this. The exact opposite is true for data environments, where we are given a relentless onslaught of "I agree" options, right? And we press that button over and over and over, multiple times a day. And so it doesn't work there. Another precondition for what I call gold standard consent, in work that I've done with Neil Richards, is the idea that the harms that we envision are visceral or easy to imagine. So when you give consent for surgery, you can imagine someone holding a scalpel and sneezing and slipping and cutting, right? That's a very visceral thing that I can actually envision in my head. I'm like, that's scary. When you consent to contact on the football field, you can imagine what might happen if it goes wrong. You get hit very hard, right? We can see it with our eyes. The exact opposite is true in the data environment, where once our information is disclosed, the flows in which it travels are largely invisible to us. 
And so it becomes very difficult for us to do any meaningful risk calculus consistently. It's basically guessing, right? It's just wild guessing. And then the final precondition for gold standard consent, I think, is that the harms are discrete, and what I mean by discrete is that each individual decision point has a consequence that is significant. So if I make the wrong decision about surgery, I could die. If I make the wrong decision about football, about getting hit on the football field, I could get seriously injured. There are all these reasons why I might want to take this particular decision seriously. None of that exists in the data environment, because each time we're asked, we're usually giving just a little bit of information, right? Every time, just a little bit. Maybe my geolocation, just this one time, right? Maybe my name and address, just this one time, right? And we have difficulty having the right incentives to take each "I agree" button seriously. And what we do is we fail to do it every time, right? We collectively gloss over it, and then you look up and you've glossed over it 400 times, right? And so consent, I'm not convinced. Now the pushback to this, and the pushback that I've gotten, is, do you think people are idiots, right? You have no faith in humanity; you think we don't have the capacity to decide for ourselves what we want to do. And my response to that would be: it's not that I don't have faith in humanity, it's that so much is being asked of us, it's unrealistic. And so we chose a regulatory strategy that just doesn't work at scale. If the preconditions for effective consent were there, it might be different, and there might be a place for it, but it requires a really difficult conversation that we haven't had yet. And that difficult conversation is: do we start prioritizing requests? So we could have a place for consent, but not everybody gets to ask for it, because right now our requests for consent are weighted equally, right? So Google gets to ask for consent, some random app gets to ask for consent, and they're all weighted equally. And the kinds of requests, can I have all your information, can I have just your geolocation information, can I have just your name and address? All those requests are also weighted equally. And if we want to prioritize what I think is a finite resource, then we're gonna have to start categorizing. It's gonna have to be like bankruptcy, where everybody gets in line, right? We'd have a pecking order for consent, and we haven't yet had that conversation, and it's a hard one, right? I mean, who's to say that geolocation is the most important thing to ask for? Who's to say that the app that I spend most of my time with gets priority on consent, right? I mean, how do we even begin to construct that? But it's a place we need to be. And in terms of manipulation, so I talk a lot about wrongful manipulative design, or abusive design, in the book, and that's hard. It's something that we've been struggling with for a while in the advertising context, because manipulation, by a less pejorative term, is trying to get people to do the thing that you want them to do, which is advertising. And that can actually be a good thing, right? It can inform people and give them lots of options. And so the best examples that I've seen so far, one of them has to do with design, if you look at the Consumer Financial Protection Bureau. 
Unlike the Federal Trade Commission, the Consumer Financial Protection Bureau has a three-pronged consumer protection mandate: unfair, deceptive, and abusive trade practices. And abusive trade practices are those that leverage people's own limitations against them. And there, we might be able to start identifying the constellation of abusive practices, the way that when negligence law was first a thing, nobody knew quite what it was, but now we've sort of zeroed in on it. And it could be things like the use of triple negatives, or double negatives; the use of negative option marketing, where you stay enrolled unless you opt out; or the use of false choices, these sort of false binaries. There are these patterns that I think we've consistently learned to identify, but that's gonna take a lot more conversation, I think. Questions? We have mics running around here. When one reaches you, please state your name and a question. Hello, my name is Emiliano. I want to know whether you think substantive provisions can be used to regulate design. I was thinking of the California Consumer Privacy Act, which has the equal services provision, and I think most of the designs that you showed at the end would be affected, because if you make a choice about your information, you cannot be given different services. So do you think this is a good way of regulating design? What do you think about that? Yeah, so I'm of two minds about these sorts of substantive rules. One of the big fights that we're having now in data security is whether we're going to have clear rules. And I think industry, generally speaking, can respond to a lot as long as the rules are clear, which is something they've advocated for. But there's also a virtue, I think, in having a reasonableness approach, which has a lot more flexibility, because my fear about having very specific substantive rules is that we ossify things relatively quickly, particularly with the design of information technologies, which can change quickly. And so unless we do so within a structure that has things like sunset provisions and mandatory re-uppings, I think with very specific substantive rules, which generally I'm in favor of, we risk ossifying, and the more specific we get, the more we allow through omission. Whereas if we create responsibilities to avoid, for example, abusive design or unreasonably dangerous design, then I think we're relieving a little of what we ask of users and putting more risk on companies, but companies are the ones that, A, are designing the technologies and, B, I think, are able to make changes to them. I wanna go back to the last phrase you used in your last sentence, unreasonably dangerous design. How do you decide what's reasonable and what's unreasonable? Such a good question, thank you. The short answer is there are a number of collective strategies to decide what's reasonable and unreasonable. One of them: in tort law, negligence is acting unreasonably under the circumstances, so as to create an unreasonable risk of harm to others; that's the whole of torts. What is reasonable? It depends on context. So the short and easy answer is that you have to look at the context, and the ways in which we inform that are that we look at things like custom and industry standards, we look at perhaps international standards, and we can look at things like reasonable alternative design. 
This is one of the ways in which we decide, under products liability law, whether something is a defective design, an unreasonably defective design. The reasonableness test, generally speaking, is broad and undefined, but it's not something that the law is unfamiliar with. It would require, I think, a lot more serious conversation, but I think we could get there over time, iteratively. Wait for the mic, please, so folks can hear. But then you disadvantage new industries that do not have any kind of custom or standards yet. Yes. And therefore, you're kind of slowing down progress, in a sense. I don't see much of a difference between that and Walmart being allowed to sell in India, and then India changing its law once Walmart has become popular, so that you can only sell things that are produced within the country. And we're certainly not a third-world country that wouldn't abide by some grandfather clauses. Well, so the question of data localization, I think, is an interesting one, but I would distinguish this from that. And would I be imposing a cost on companies? Absolutely. I mean, I think that we need to be clear here that when we're talking about design rules, we're talking about rules that limit what industry will be able to create. My point is that maybe that's the cost of admission, given how dangerous these information technologies are. And that if we can't do it safely, then maybe we shouldn't do it at all. So, children are a protected class. I'm wondering what you think in terms of these concepts being applied to both children and families, in terms of the issues and then the appropriate response. So, I agree. Generally speaking, one of the reasons that I am so critical of consent structures is how much we ask of people, even in COPPA, which I think is a well-meaning law and in many ways has been at least effective in its ability to support claims and produce some sort of consequences. But, A, I think it's important to consider children specifically. So I would agree with the idea that it's worth differentiating people, not just children, but lots of different populations that suffer the effects of design differently. So people of color, women in certain instances, there are lots of people whom design affects differently within privacy. And I think it's worth having a legal system that is sensitive to that. And I think that when we're talking about children, one of the things that has motivated me in this entire field is the idea that the way in which we grow up is through certain zones of privacy, so that we can have room to play and experiment and fail, which is how people develop. And so I would support getting rid of this idea of consent here because, at least in contract doctrine, the problem with consent regimes for children is that, A, they don't have enough practice making those sorts of decisions, and B, they probably haven't had enough time to accumulate the wisdom that would help them calibrate the right decisions. And so to the extent that I'm critical of consent, I'm particularly critical of consent in the space of children, and I would actually encourage, and maybe even be able to justify, design protections for children even more than for the general populace. Oh, I'm sorry, the mic's back here. Hi, thank you for the presentation. And name first, if you might. My name is Aran, I'm an LLM student here. 
So, I think consent is closely related to choice, and how much choice do users have when the tech companies, with the services they provide, are increasingly becoming monopolies? And maybe the GDPR is disappointing in that sense, because it's being criticized for having a disproportionate effect on small companies; it doesn't really matter to the large tech companies, and they're also monopolies. So what would you advise to other jurisdictions who are working on comprehensive data protection law to avoid this from happening? So, I gather this is about the net effect on competition of data protection frameworks like the GDPR. One of the things that was built in was an attempt to increase consumer choice, and generally speaking, I should say that even though I'm highly critical of consent regimes, I'm very much for data subject rights: rights of access, rights of deletion, and the one which I think is meant to be pro-competitive, data portability, the idea that you can just wholesale download your data and take it elsewhere. Now, that's an insanely complicated thing, but to the extent that we could encourage that, I think that we should encourage data subject rights as a way to empower people. I am supportive of choice in that sense, in terms of a range of options to choose from. The problem is when we ask people to choose to transfer rights away; that's where it becomes really problematic. So people should be protected, I think, regardless of what they consent to or choose. In other words, I just want a higher floor than what we seem to have right now. So I'd focus on data subject rights, and also maybe getting serious about anti-competition law if we want to, but that's a different conversation, and I'm not a competition law scholar. Hi, Michael Rand from New York City and Columbia University. And my question is, I know here in Cambridge there's a firm called IDEO, and of course there are a lot of other places too. I was curious if you had gone to or had any interaction with any of the design people, studios, et cetera, in thinking about solutions to this and identifying the problems, since design and theory are their bread and butter. So, I have given this particular talk at multiple technology companies, and I've had the privilege of meeting with a lot of their designers and engineers afterwards. And it's always immediately a humbling experience, because the complexity of the problem is always presented very starkly. But it is something that I have thought about and I hope to continue to think about. One of the things that I would love to see, and am generally supportive of, is these dashboards that are being created now for people to be able to take stock in a way that isn't tied to a single synchronous flashpoint, to exercise deletion rights, and then to make sure it does what people think it should do. Hi, my name is Alicia. I noticed that we've talked a lot about design in terms of user-facing design. So I was wondering about the relationship between design in the sense of software engineering choices at lower levels and privacy. Yes, so a lot of the focus of the book is on the UX and UI, and less about design several layers down in the stack. I think that that's equally relevant and probably is a bigger component of this. So when I talk about privacy by design, or privacy and design, I actually mean all of it. 
But in terms of what I focused most of my energy on for the book, it was the user-facing stuff; that's where I see the biggest gap, because I think there are a lot of other scholars doing really great work on ways to engineer design solutions into other parts of the system. So segmentation and de-identification solutions, I think, are really interesting. Back-end encryption solutions. A lot of this also overlaps with data security law, generally speaking, which is my next project. So that's a long-winded way of me saying, I think this is relevant, but I haven't thought about it as much specifically for this book. Time for maybe three more questions if we go quickly. Andy Summer, I'm an ALI fellow. Can you talk a little bit about the enforcement mechanism that you envision for this regime? Do you think there are existing agencies that would be responsible for enforcement? Would you give a private right of action? Yes, I would. So my short answer on enforcement of the design agenda that I propose in the book is sort of letting a thousand flowers bloom. I like the overlapping jurisdiction between the Federal Trade Commission and state attorneys general, which is why I suggest the consumer protection approach. While I think that establishing a US data protection authority could be beneficial in some ways, I view it as relatively limiting in other ways, because privacy in my mind is about more than just personal data and the harms that come with the aggregation of personal data. I would like to see civil rights law brought into this conversation a little bit more. There are other values implicated in design that I think we could bring to bear, like accessibility concerns, that I would want to bring into it. And so the Federal Trade Commission is the agency that I identify as most easily able to immediately implement some of the suggestions in the book. They would require an additional grant of authority under some circumstances. And then judges are actually part of the audience of my book, because they oversee a lot of disputes, and if we had a private cause of action, they would oversee more, which I would encourage. For them to be more cognizant of the role that design plays in shaping people's expectations, I think, would be an important step forward. In the case, I don't know if you want to put the slide back up, of the user interface that presented you with the choice of getting 10% off or not, depending on whether you gave them your data, do you believe that that was just a misleading interface, or do you think that's an abusive choice that should never be presented to a user in the first place? Yeah, so I think that, if we addressed it, we would probably address it through the soft response, right? It's the idea that, in the aggregate, that sort of thing can wear us down. That particular one in isolation might not necessarily be violative, but the use of things like double negatives might absolutely be. I was just wondering whether you felt that the practice of giving a discount in exchange for giving up your data is itself an abusive practice. Is that what you were aiming at? Or is it just the user interface for making that choice that you were aiming at? So I have to say, I don't love that practice. And the reason why is this: 
I don't like the idea that the more you disclose, the more a company can take, without any sort of trustworthy relationship rules imposed. Now, if we did it within the context of a trustworthy relationship, I might be okay with it, because then we've got loyalty, protection, honesty, discretion. But I don't like it, and this leads into a larger debate about whether we should sell our data. So will.i.am, and I think someone in California, have been saying we should be paid for our data. I think that's a really dangerous frame, frankly, and I would push back against it, because when you take something as broad as privacy, which incorporates human flourishing, civil rights, well-being, and so many other things, and you distill it down to a market transaction that sort of implies you can now do what you will with this data, I don't think we're gonna like what people get at the end of that. And I also don't want privacy to be something that only, I mean, the flip side of that is that people who can afford it don't have to take the discounts. I don't want privacy to be a luxury good. Maybe one last question from me. So, you were just on the Hill last week, and right now Congress is of course debating the future of our privacy laws in this country. If you could make a prediction right now about what things look like a few years from now, is there no change? Is there a little bit of change? Do we see a lot of change? And how long do you think it would take to implement a shift in the direction that you suggest in your book? So that's a really great question. I think the answer to that question depends upon what happens in the states, because this thing could break two different ways, I think. So there are a couple of bills now floating around the states, and the number one question here, you should all know, I think is preemption. It was the thing that kicked off the hearing, and it's apparently the only thing that seems to matter right now. And so what that means is that we have to watch very carefully what the states are doing. If it breaks like the data breach laws broke, so California introduces the first data breach notification law in 2003, all the other states look around and say that's a great idea, and they start passing their own data breach notification laws, but they're all more or less the same, right? I mean, there are important differences, and I don't wanna overgeneralize, but we've been living with the framework of 50 different state data breach laws for a few years now, and the economy has not crumbled apart because of that. If privacy law breaks the same way, you're not going to get as big of a push for federal legislation, and we might actually end up continuing our 50-state framework of privacy with a little more legislation. If, instead, a couple of states go out on a ledge and create wide variants from, for example, the California Consumer Privacy Act, or, even worse, something that would be incompatible, in other words a company literally could not dual comply with multiple state laws, then the odds of federal legislation go through the ceiling. Then you're gonna see change, because I think there's an appetite for regulation now, in the wake of Cambridge Analytica, in a way that I haven't seen before. If you had asked me this question two years ago, I would have said you won't see federal legislation for 20 years. Now I think it's really dependent upon what the states do. Massachusetts just introduced a bill, so. And how long would it take? 
My agenda is severable, so you could break it off into lots of different parts. There are things we could, you know, flip the switch on now, and the Federal Trade Commission, judges, and state attorneys general could easily implement them. You could pass a small, you know, upgrade bill for the Federal Trade Commission to do a lot of what I'm asking for in the book. And then some of the stuff that I'm proposing, like outright bans or moratoriums: San Francisco just introduced a moratorium, I mean an outright ban, on facial recognition technology. I don't know whether it'll pass. More moratoriums have been proposed on facial recognition in Washington State and in Massachusetts. And so there are bits of it that are percolating even now. So actually, I'm optimistic. Could you say more about why you mentioned earlier that you think that facial recognition is the most dangerous kind of surveillance? Sure, absolutely. So even among biometrics, I think it's the most dangerous technology ever invented. And there are three reasons why. One is that there is a legacy of name-face databases that exists that doesn't exist for other biometrics, right? So facial recognition is almost plug and play, because Facebook is the largest name-face database in the world, I would assume. And because biometric surveillance typically relies upon both a database and an implementation, one of them has already been fulfilled there, whereas with things like iris recognition or gait recognition, those databases have to be created. So facial recognition is already ahead. Number two, it's a lot harder to hide your face, right? So if you wanted to engage in surveillance countermeasures, there are ways to do that for other biometrics; this is really hard. And in fact, there are laws that sometimes prohibit hiding your face. And then the third reason is that of all the external physical traits that are central to a person's identity, it is your face, right? So if, a week from now, someone mentions, hey, I saw a talk by Woody Hartzog, and you conjure that up, you probably will think of my face, right? Not my hands, not my gait, not my ears; these are all biometrics, but you will think of my face. And so it's something I think on a conceptual level is worth protecting even more. And so that's why I think it's so uniquely dangerous, particularly given the incentive for creep here. So facial recognition technology, thus far from what I've seen, has three sorts of benefits: the benefits we don't know about yet; the benefits that are stated, which are great and difficult to deny; and then what I call modest benefits. The modest benefits are stuff like being able to unlock your phone with your face rather than a thumbprint, right? It's a modest improvement. Or maybe being able to have your face recognized in a series of photos so that you can organize all your photos according to the people's faces that are in them, something like that, right? They're useful, but we were getting along; it's not fulfilling some long-unmet, desperate need for society. There are benefits of facial recognition that might fulfill that. Sometimes when I propose a moratorium or ban on facial recognition technology, the first question I get is, why do you hate missing children? Why don't you want the bad guy to get caught? 
Facial recognition offers the promise of being able to find anyone, anywhere. The problem with that benefit is that to achieve it, we will have to give up everything. We will have to have surveillance on every corner, right? If we want that, we will have to sacrifice so much in terms of implementing an infrastructure to realize it. And that's when the real potential for abuse comes to life for facial recognition. And so, the way in which I cash out the net benefits, I don't see any way that we come out ahead embracing facial recognition. I think there's lots of room for disagreement in this debate. There are many people who say that, with effective controls, it could be an effectively regulated technology, but I just, for several reasons, view it as uniquely dangerous. I thought we were gonna end on an uplifting note. Sorry, I'm sorry. I think that most decidedly takes us down. Woody, thank you so much for this really thoughtful talk. Thank you all, I really appreciate it. Thank you.