That's it. I have no slides, which I apologize for, so you'll just hear me talk. I work at Google on the global public policy team, and in that role I lead our policy efforts on privacy globally. I'm coming at this from a very different perspective, and I have very little to say that's new, which is going to make this quite different from the previous two talks, but that in itself might be illustrative of the different perspective a practitioner brings.

When I got the invitation to participate in this panel, I looked at the title and, because I didn't read it carefully, skipped over the word "risks" and saw "the beauty of a hyper-public life." Why would anyone invite me to talk about that? I can't actually see much beauty in it, because to me the framing of a hyper-public life implies a Paris Hilton-like existence. It implies that you're publishing stuff about yourself constantly, streaming it to the world, and making not just your identity but your behavior, your identifiable behavior, available for everyone to watch. Then I read the description of the panel, and it actually implied something quite different. It implied a data-driven life, and that's much closer to what these guys are talking about, right? I want to make that distinction very explicit, because from the perspective of somebody practicing this stuff day to day in the policy discussions that are actually happening now, that distinction can be blurry, and it's really important, I think, that we draw it cleanly.

There's a tremendous amount of good that can come from data-driven services, and I would say that is the beauty of what we're talking about in this panel: predictive analytics and what that type of analysis can provide the world. The examples people from Google cite all the time are things like Google Flu Trends. Does everyone in the room know what that is? I won't explain if you all do. Looks like yes. We continue to build those types of tools out; Dengue Trends is another. We've recently made the underlying data set we use to build those tools accessible via an API, very much in line with some of the things Adam was saying. There's a tremendous amount of public good we can get at with big data analysis. There's also some baseline economic good: a study out of Europe last September found 100 billion euros of consumer surplus from advertising. In the privacy community we like to demonize advertising, but that's a tremendous amount of value created for consumers above and beyond anything anybody is paying or generating in revenue.

I typically think about these predictive tools in two ways, and there's one design question in here for the room. The first is predictive analytics for you based on your own behavior: Amazon, Netflix, recommendation engines. Those systems have been designed to say very clearly, this is the data we are using to give you a prediction about what you might like. It's very clear to the end user what's going on, and the privacy concerns are much more minimal. The second type is a prediction for you based primarily on other people's behavior. Into this I would classify things like search, things like flu trends, and the soda machine Adam was talking about, where you're saying: a lot of women happen to like this soda, we think you're a woman, here's a soda. We can talk about the benefits of each of those types of predictive models, and about how we should or shouldn't be using them, but I think it's fair to say that both create a tremendous amount of value.
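As a purely illustrative aside, the two flavors of prediction distinguished here could be sketched in code roughly as follows. The function names, toy data, and logic are all hypothetical, not any real product's algorithm: the point is only that the first type needs just your own data, while the second needs an inference about who you are.

```python
# Hypothetical sketch of the two types of predictive analytics (toy data only).
from collections import Counter

def recommend_from_own_history(my_purchases, catalog_by_genre):
    """Type 1: rank suggestions using only this user's own behavior."""
    favorite = Counter(genre for _, genre in my_purchases).most_common(1)
    if not favorite:
        return []
    top_genre = favorite[0][0]
    already_owned = {title for title, _ in my_purchases}
    return [item for item in catalog_by_genre.get(top_genre, [])
            if item not in already_owned]

def predict_from_population(population_choices, my_inferred_group):
    """Type 2: suggest whatever people inferred to be 'like me' chose most."""
    counts = Counter(choice for group, choice in population_choices
                     if group == my_inferred_group)
    return counts.most_common(1)[0][0] if counts else None

# Type 1 is transparent: the inputs are visibly my own purchases.
print(recommend_from_own_history(
    [("Dune", "scifi")], {"scifi": ["Dune", "Foundation"]}))   # ['Foundation']

# Type 2 rests on a guess about me ("women"), made from other people's data.
print(predict_from_population(
    [("women", "diet soda"), ("women", "diet soda"), ("men", "cola")],
    "women"))                                                   # 'diet soda'
```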
So then you ask: what's the risk? Since we're talking about the beauty and risks of a data-driven life, let me just rephrase it. I actually think the risk is a hyper-public life. The risk of all this data-driven living is that at some point all of that data gets attached to us in a way that collapses it into a single context. All the contexts of your life collapse onto each other, the data is re-identified and attached to your identity. This is the underlying fear that a lot of people concerned about privacy are advocating on the basis of. I think there's a piece of that framing that is missing, which oversimplifies the statement a little, and I'm going to posit a potential solution that is similar to what folks have talked about here.

A small digression: I got on the plane to come out here Wednesday night, and at SFO, which has free Wi-Fi, I turned on my Kindle and tried to download a book I had purchased the night before. The SFO Wi-Fi let me down; I couldn't get the book before boarding, so I stuck with my existing library and stumbled on a collection of Jonathan Franzen essays. Now I'm curious how late I am to the game: do people in the room know the Jonathan Franzen essay "Imperial Bedroom"? Okay, so I wasn't that late. It's a great essay about privacy from 1998, written after the Lewinsky scandal, and a really interesting, quite counterintuitive commentary; I'd encourage you to go read it. At the time it was written, I was totally unaware of these issues. He says something really interesting in that essay: without shame, there can be no distinction between public and private.

The reason I find that interesting is that if you think about what actually causes somebody to feel shame, I don't think it's absolutely true that without identity you can't feel shame; you could feel shame even if your behavior were anonymous. But for the most part, the more identifiable you are while doing something quote-unquote shameful, the more likely you are to feel that shame, because you're going to feel the perceptions of all the people around you who know what you've done.

And that brings me to the posited solution. The design problem ahead of us is how to construct and manage multiple identities, and how to preserve and manage some degree of anonymity. I know a lot of people in the room, and we've already heard it today, will say that anonymity is effectively dead, that it will become trivial to re-identify you; see Alessandro's work on face recognition and social security numbers. There's definitely a degree of truth in that. But I don't know that we've explored all the policy options available to us in that space.
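To make concrete why re-identification of "anonymous" data can be trivial, here is a minimal sketch of a classic linkage attack, in the spirit of the research tradition Alessandro's work belongs to. Every record, name, and field below is invented for illustration; real attacks work the same way at scale.

```python
# Toy linkage attack: an "anonymized" dataset with no names can be joined to a
# public dataset on quasi-identifiers, recovering identities. Invented data.

anonymous_health_records = [
    {"zip": "02138", "birthdate": "1945-07-31", "sex": "F",
     "diagnosis": "hypertension"},
]

public_voter_rolls = [
    {"name": "Jane Doe", "zip": "02138", "birthdate": "1945-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "birthdate": "1962-01-15", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birthdate", "sex")

def reidentify(anon_rows, public_rows):
    """Yield (name, sensitive value) wherever quasi-identifiers match uniquely."""
    for anon in anon_rows:
        key = tuple(anon[q] for q in QUASI_IDENTIFIERS)
        matches = [p for p in public_rows
                   if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # a unique combination pins down one person
            yield matches[0]["name"], anon["diagnosis"]

print(list(reidentify(anonymous_health_records, public_voter_rolls)))
# [('Jane Doe', 'hypertension')]
```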
So if you have otherwise anonymous data, are there policy requirements we could put on the data controller so that there is a punishment for re-identifying users in that data set? This is the sort of thing Jane Yakowitz at Brooklyn Law School is proposing, and it's an interesting idea. I think we could be exploring some policy options there. And on the design side, there's a lot we can do to think about managing multiple identities in different online spaces, which would augment some of the work folks have talked about today around designing spaces that enable users to express different sides of themselves.

Let me ask one question, and then I'll come in and play. I used to say I'd play Oprah; now I guess I'll say I'll play Katie. Not a big pop-culture group. And for those of my age, with my hair color: Phil. Here's the question. We heard before mentions of multiple publics, right? Rather than the notion of one public sphere, we in fact have the opportunity to create many, many publics: to create and join those publics, and to act with those publics. That's an important idea, and it's both related and unrelated to the idea of multiple identities. Mark Zuckerberg has said, and I tend to agree with him on this, that we have one identity, and that when the separate identities we try to keep come into conflict, we have a problem. I think the real identities we're trying to keep out of conflict are our internal selves, our real selves, and our show selves; when those come into conflict we've got problems. But we'll put that aside for a second. In the EU there's talk of trying to bring in a right to be forgotten. That brings out a whole other tension, because if I'm the one remembering you and I'm now ordered to forget you, then my free speech is impinged upon. So I'm trying to understand here; you're surprising me with your fear of the collection of data and its association with a person. In a minute I'm going to get to its association with a thing, but we'll get there. Going a little further on that: you're fearing that we're all turned into Paris Hilton, that we're all made more public. Is that what I'm hearing you say? And if that's the case, stipulated, your honor, but what is it you want to do about that?

I'm suggesting that the hyper-public identity is constructed from the collapse of contexts beyond the expectation of the individual using whatever technology it is. This is why you continue to see so many privacy concerns come up: people just aren't aware of the variety of contexts that are collapsing on their mobile phone, for example. You're moving from place to place, and all of those contexts and the people in them are becoming one. I don't think many people actually want that. I think they want a separation between the data that's used to personalize an advertisement for them and the data they're sharing with their friends. People want some separation there, and they want the ability to maintain and control that separation. What I'm suggesting is quite different from the right to be forgotten: it's the ability to understand and maintain those different silos of data about you and your behavior.

That's the crux of the discussion, and it's worrying in a sense, isn't it? The data is there. If Dana were here, I would call on her now.
She helped me immensely in my research for my book in separating the gathering of data from the use of data. If you try to restrict the gathering of data, you're telling people they cannot hear something they've heard, they cannot learn something they've learned, they can't know something they know; it becomes absurd after a while. But if you say you can't use this data in a certain way, that's a different issue. I'm not sure where you come down on that, because in a sense you're saying you want to be able to silo the data, and so...

Effectively restrict the use, right? If you have otherwise anonymous data, you could say this data may be used for research purposes or for product-development purposes, but the second you go and re-identify it, you have violated a law or some other policy you could put in place. That limits the use of otherwise anonymous data so that the risks of re-identification and of context collapse are minimized.

Okay, I've got to tie this back to Google, of course, because I would think that as a matter of principle, Google would want as few hard-to-interpret, hard-to-know restrictions on data as possible. Context is very difficult. If it's not obvious what the context is when the data is collected, then having to go back and find out the context, or intuit the context, or be told you should have known the context, creates a liability for the likes of Google, doesn't it?

That's the design challenge, right? It's helping users understand that and maintain those identities. And that's where, when I talk about the two forms of predictive analytics, Amazon and Netflix and recommendation engines have done something really good in helping users understand that data. I don't think search engines or the soda machine, for example, have gotten it quite right yet.
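As a thought experiment, the use-restriction idea discussed above could be expressed in code. This is a purely hypothetical sketch: the AnonymousDataset class, the ALLOWED_USES set, and the purpose strings are all invented, and real enforcement would of course be legal and organizational rather than a runtime check.

```python
# Hypothetical sketch: "restrict use, not gathering" as a policy gate on data.
from dataclasses import dataclass, field

ALLOWED_USES = {"research", "product_development"}  # permitted purposes

@dataclass
class AnonymousDataset:
    rows: list
    audit_log: list = field(default_factory=list)

    def access(self, purpose: str):
        """Grant access only for a declared, permitted purpose; log every use."""
        if purpose not in ALLOWED_USES:
            # e.g. purpose == "re_identification" -> a sanctionable violation
            raise PermissionError(f"use '{purpose}' is prohibited for this data")
        self.audit_log.append(purpose)
        return self.rows

ds = AnonymousDataset(rows=[{"query": "flu symptoms", "region": "CA"}])
ds.access("research")              # fine: a permitted use
# ds.access("re_identification")   # would raise PermissionError
```

The design choice mirrors the talk: the data is gathered and available, but the accountable act is the declared use, which is auditable after the fact.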