Our first connection talk will be about the hardest challenges in designing privacy technology solutions. Our speaker is currently a visiting professor at Harvard's CRCS, but is usually based at Carnegie Mellon University (CMU), where she's a professor of computer science, technology and policy, and where she's also the founder and director of the notorious Data Privacy Lab. Please help me in welcoming Latanya Sweeney. Yeah. It's great to be here. The operative word there, of course, was notorious. And many of you have your own arguments as to why we were notorious, or have been notorious. I have to tell you, I've been working in data privacy in particular for about 10 years. So here I am at yet another privacy conference, but I can tell you it's unlike any privacy conference I've ever been to. And this notion of starting with a fundamental concept, of how do we take a space and look at the design issues in that space, and view this notion of private space and public space from that vantage point, I think is very, very interesting and very powerful. So many kudos to the organizers for doing that. In the first panel, we saw immediately this linkage between physical space and cyberspace. And I couldn't help but sit in the back and realize exactly one of the interesting, notorious kinds of activities that I think we're known for: why I always fuss about people who talk about privacy in cyberspace when I talk about data privacy. And it's exactly the kind of enlightenment that Jay-Z mentioned I had. That enlightenment was that the equivalent of what we do in data privacy is really cosmos space: we look at the world as a series of parallel universes to the physical space in which you live. As you're sitting here listening to these talks and participating, there are different spaces that are capturing data and information about you continuously, for which different designs, different walls, and different ways that that data can move to other places exist.
And all of these are parallel snapshots of the same moment, place, and time about you. So if we look down through that parallel space, we can track you in various ways through it. Some places have barriers and we can't see you, but we can infer things about you from the other views that we have. It's a really amazing perspective. And that's the world that we live in. And so often a remedy that might exist in physical space, the notion of "I'll put up a wall, and will that wall have plaster or will it have glass," evades conversation in the world of parallel spaces. So to whatever extent I can help in the worlds that you're conveying, I want you to understand a little bit about where I'm coming from. One thing, for example, is that we don't talk about public and private. We see that as a total continuum. You'll see lots of conversation about semi-public and semi-private. And we don't talk about one use, which we did here, but we do talk about multiple uses. So when you put this together, you end up operating in a three-dimensional space: access, public versus private, is one axis; the uses you make of the data are another dimension; and another dimension, of course, is simply the space in which the data are captured. And of course it all operates in time and space, so there are some additional dimensions. What we've been really fortunate to be able to do, whenever there have been these big privacy problems that we've gotten involved in, is to craft really sweet spots. That is, the discussion is often about some tension between privacy and utility, and we've been able to find these nuanced solutions in technology and policy that give society both privacy and utility. That has been sort of our hallmark. And that is actually what I say to you is the goal. It's not one or the other. It's how do you find the sweet spot between them?
So since I've been here at Harvard, I've had an incredibly wonderful time. I've been engaged in what I call the privacy rethink. I thought it was quite unique, except in fact I find that everyone is going through a privacy rethink in many places, and that this is the age of the privacy rethink. And one of the reasons that's happening is big data. It's the combination of these universes that we talked about, these parallel universes, collapsing. The data from them are simply becoming enormous. The detail that any single database would have about you is absolutely amazing. And it's driven by the likes of pharmacogenomics; computational social science, which wants to put together your phone information and your loyalty card purchases and your cable viewing habits; and even things like national health databases. These rethinks are happening within legacy environments, that is, the kinds of policy structures and notions that we're used to. The things that are coming up for debate and discussion are the cornerstones of what historically have been privacy protections for data, and those would be questions about de-identification, questions about informed consent, and questions about a regime of consent and notice. And so you'll see all around that these things are being rethought. But what's more exciting to me, and I think really resonates in the space of this conference, is a rethink happening at the architectural level too. Technology and society have bold new frontiers, and when we think in that way we can say, well, how can we move away from conversations about de-identification, informed consent, and consent and notice, and move ourselves into a whole new realm? And then we leverage technology and these new developments as ways to move us forward. And so I call these the architectural-level rethinks.
And we're already starting to see some publications and some traction on that level. Let me just mention three of them very briefly, because it's not so much these architectures that I think are of interest to you, but rather what are the components, the design pieces, that go into trying to architect across these parallel universes. So open consent, made very popular by George Church at Harvard Medical School, is the idea that as a researcher I'm going to ask you for your genomic information. But unlike researchers of the past, I'm not going to ask your consent. I'm not going to make you any promises about the privacy of the genomic information you give me. And I'm not going to make you any guarantees whatsoever. Instead, I'm going to give you a contract, and you're going to sign away liability, acknowledging that I'm not going to be responsible. He says it's just too hard to figure out what the risks are for you. And so you're going to assume all the risk, and you'll assume all the liability. It sounds a little kooky; at first I laughed. And then I looked at the Personal Genome Project website. They currently have a thousand people signed up, and hundreds of people whose data are currently online and available, including medical information as well as their genomic data. Another one, which isn't so popular, and which many of us question whether it can actually be used, is called the trade secret model. In this model, again, the researcher is saying: I want your genomic data, but I don't know what might happen to you, and so forth. So here's what we're going to do. We're going to enter into a contract, and I'm going to treat your genomic information like a trade secret. That is, as long as it doesn't become public, we're good. And if it does become public, all bets are off. Many of us consider this a kind of tongue-in-cheek arrangement, because, first of all, I'm leaving my genomic information everywhere.
And eventually, you know, within five years, much of your genomic information will be part of your health data. And that's a whole other discussion, about all of the hundreds of places that your health data go. So to what extent could it ever be kept secret? And then my colleagues here at MIT have also offered up a model, so we also already have a horse in this game, called privacy-preserving marketplaces. It's the idea that you should design data sharing arrangements as markets, and arrange the market components so that you can make some guarantees that you insulate or compensate subjects for harms. So those are the current three leading models of the rethink, to whatever extent they may or may not help you. But what are the components that it really comes down to? What are the pieces that we get to design around? What causes the challenge? Over the last 10 years, every big problem that we've worked on, and we've worked on almost all of them, from surveillance to genomics to face recognition and so forth, has always come down to the interplay of these components. So let me arrange them and begin to look at them. First, data subjects. Historically, in the legacy environments, data subjects have had virtually no say. They're not very empowered, and so their decision-making power has historically been very limited. One of the things we see in the conversation of this group, though, is the exact opposite. Here is a place where individuals have been empowered in the decision making about what data they will share or not share in certain environments. So they're empowered. But they're vulnerable, because they can still suffer economic harms. We may not always realize what those harms are at first glance, but over time they become quite evident, and some experts will predict them ahead of time, and so we worry very much about the economic harms of the data sharing arrangement.
We also say they're irrational, and we mean that in the economic sense. For those who follow Alessandro Acquisti or Tyler Moore or other scholars in the economics of privacy, they show us over and over again that no matter how much we tell you what the risks are and the harms that can come to you, we as humans have a tendency to discount those risks. We act irrationally at the point where we have to make a privacy decision, and so I use the term irrational. Then there are the technology developers. I'm a computer science professor, and many of the technology developers that I talk about are actually students of mine. We don't mean to, but we often produce students who give us back new challenges to privacy. It's just a simple, interesting issue that design decisions made by technology developers have policy consequences. There's a tremendous amount of power in the hands of technology developers, but power that we don't actually teach them how to understand or manage. A very quick example, one I first brought out many years ago, and as the cases mount up we all kind of laugh about it: when Sony first came out with the video recorder, it automatically recorded sound along with video, and it still does. So anytime I record your video, I'm also recording sound, and there's no mute button. The profound effect of this, of course, is that under wiretapping laws in the United States, what you're allowed to capture with a camera as video is really different from what you're allowed to record as sound. And there have already been cases where someone trying to videotape for one purpose has been prosecuted for wiretapping. A very simple design decision, which for one penny, for one cent, Sony could have made differently, would have totally changed the way that construction happened. Or, to what extent will it push back on wiretapping law and begin to change our expectations about these recordings?
Policy makers. One thing I can tell you about policy makers, and I don't think it's a big secret, is how our work at the Data Privacy Lab came about. Almost without exception, the earlier projects happened as follows: a big disaster happened that made it to the front page of the New York Times or the Washington Post, somebody in DC was getting ready to take some kind of action, they knew they needed to react quickly, and we would get a phone call to try to find a solution to that problem. And so what we've learned is that policy makers will react. So the technology developers are developing freely, perhaps with no real thought about the implications of adoption and so forth. Am I over time? Over time. That's okay. So: policy makers, belief systems, who gets benefits, and the legacy environments. Let me stop there; I don't know if I have time for a question or two. [Question] I'm Zaynette. I wanted to ask about the false belief about the trade-offs; I find that to be fascinating. A chance to briefly tell us what you were going to say about false belief systems? [Sweeney] The false trade-off? That one, yes. So the quickest way to answer that question is probably to go back to this: almost all of these debates have this false belief that you have to operate along the trade-off curve, that you're looking for some optimization point. And so this creates a false belief that there's no answer, that you could not have both privacy and utility. And the standoff is that, in these highly polarized debates, whichever side believes your point of view will help them win, they are for you; and once they believe they'll win, they don't try to seek a better solution. So you end up with these interesting standoffs. Okay, thank you.