Welcome to the New America Foundation and to this event this morning. This is a Future Tense event; Future Tense is a collaboration between the New America Foundation, Slate magazine, and Arizona State University. And I am delighted to welcome Evgeny Morozov here today. He is a former Schwartz fellow here at the New America Foundation, and he's the author of a book that we're going to discuss today called To Save Everything, Click Here. His previous book was called The Net Delusion: The Dark Side of Internet Freedom, and that book won the Harvard Kennedy School's Goldsmith Book Prize in 2012. He's a contributing editor at The New Republic, and he's written for numerous outlets. He was recently a guest columnist for The New York Times. He has a regular monthly column in Slate, which is syndicated all over the world, and he has written for the Financial Times, The Economist, and The Wall Street Journal. And for any of you who have not followed his Twitter feed, join now. Because although he appears mild-mannered, I can assure you that on Twitter there are no holds barred.

So we just want to have a conversation today, and we're going to leave some time for questions and answers as well. The first question I actually want to ask: you've written two books about the internet in fairly rapid succession. The first, The Net Delusion, was talking about the internet more globally. But in this new book, you're saying the internet doesn't even exist. So what was the transition from The Net Delusion to this current book, and how did you get from point A to point B?

Sure. Well, I think they are very similar in their goals. The first book built mostly on my practical experience as someone who was working in the nonprofit sector, essentially, before I got into writing. I was working for an NGO that was based in Europe, and we were very excited about the potential of social media to change the world. We were at the cutting edge of using social media, going to places like Central Asia, or Central Europe, or Belarus, Moldova, the former Soviet Union, and basically trying to experiment with what you could do for democratization. So what I tried to do in The Net Delusion was to pursue two different lines of inquiry. One was to understand why we, as policymakers and thinkers in the West, have come to expect so much from digital technologies and social media and blogs, from what we call the internet. And the other point I tried to make and investigate was: what were some of the costs and consequences of relying on infrastructure built, essentially, by the private sector, in this case Silicon Valley, to promote democracy? And when I was writing this book, there was a lot of excitement about the so-called internet freedom agenda. The State Department, Hillary Clinton, at that time, gave several very prominent speeches on internet freedom. There was a lot of excitement about how Twitter could help us spread democracy following the protests in Iran. There was a lot of excitement about Google and Facebook and what these companies could help American policymakers accomplish abroad. And what I tried to show in the book is that there are hidden costs attached to relying on such technologies. There may be all sorts of backlash and pushback; authoritarian governments will start building their own platforms. So I tried to investigate the hidden costs of essentially delegating problem solving to Silicon Valley. In the second book, I think I continued the same two inquiries, but in a very different setting.
So I shifted my attention from authoritarian states to, I would say, liberal democracies. So there is almost nothing in the book on China, Russia, or Iran, but there is a lot on how we think about reforming our governments, how we think about solving problems like, say, obesity or climate change here in Washington or in Brussels. So there is still the continuation of this older agenda of trying to understand the costs of delegating problem solving to Silicon Valley. I talk a lot about things like self-tracking as a way of knowing how healthy you are. And there is a lot about relying on automated fact-checking, for example, as a technology to supposedly improve or solve some of the problems in our public debate. So there is still that line of inquiry, where I'm trying to understand what would happen once we delegate some of this problem solving from public institutions to individuals or private companies. And to me, an even more ambitious intellectual pursuit is trying to understand how we have come to hold the view that the internet has some kind of coherent logic and philosophy, and that we can learn lessons from the likes of Wikipedia and then apply them to reshaping, say, our political institutions. If you look at the rise of pirate parties, for example, in Europe, which is something I discuss in the book, there you can clearly see that they see themselves not as a coherent entity with a coherent ideology. They see themselves as a platform, where anyone can come and edit their talking points, and essentially some kind of position or ideology will emerge. And that's a very different way to think about politics than we've had before. And it's a way that is directly influenced by various internet projects and platforms. And the need for such a project to emerge is justified in public debate by the idea that we are living in very unique, revolutionary times, where the success of projects like Wikipedia or Bitcoin or Skype or whatever has shown that new kinds of interventions are possible. And they will all be, more or less, like the internet: decentralized, hating hierarchies, transparent. So what I tried to investigate was how we came to perceive this need to reform, but at the same time, how we also came to believe that we are living through this unique and revolutionary period marked by the rise of the internet, which I put in scare quotes throughout the whole book.

Well, so this brings us to, I think, what you identify. There's a phrase in the South: you catch more flies with honey. I would say sometimes you catch your flies with a hammer, which is part of the great fun of reading your book. Because I think much of what you do in this book is take on some of these ideologies and the particular purveyors of them, many of whom are based in Silicon Valley. But we have an entire punditocracy right now that focuses on the internet, doesn't use the scare quotes, and does see it as a coherent thing. You've gotten a lot of blowback for being very critical of them, in part, I think, because their message is very optimistic and it tells people what they want to hear. And we've always loved technology in this country in particular. How do you see your role as a critic in poking holes in some of these ideologies? And also give us a quick overview of one of the main ideologies you talk about in the book, which is solutionism.

Sure.
Well, that's a lot of questions; I could spend the whole hour. So with regards to criticism, there are, I think, several strands to what I try to do. First of all, I'm very explicit about it: it's to go beyond the existing methodologies of how we talk about the internet. One of the existing paradigms is, for example, cyber law. It's a set of tools that emerged in law schools in the mid-90s around the principle that cyberspace is out there, it exists, and it needs its own laws; and now we can go and understand what those laws should and might be, and we'll shape cyberspace the way we want it. I don't like it as a methodology. I'm trying to understand cyber law as a historical project, and I try to understand the historical origins of many of the concepts we currently take for granted, in order to show that there are possibly other ways to think about how to arrange technologies, and that there are alternative paradigms that would not posit that there is something called cyberspace out there that needs its own laws and regulations. So, again, to give you an example, let's stick to this idea of cyberspace to illustrate what I have in mind. We tend to think that, okay, cyberspace is an idea that William Gibson dreamt up in 1982, and then suddenly we just reached for it as a natural description of reality, because we thought it was a very convenient way to talk about digital technologies. But that's a concept with a very long history. Before William Gibson used it in 1982, there were already people at what became the Media Lab at MIT, which at that point was actually called the Architecture Machine Group, talking about "data space." There were a lot of urban planners and architects who jumped on the idea of cyberspace in the late 80s, because they thought it would help them articulate their visions for the future of space. And people like Manuel Castells jumped on the idea of the information society; he had spent the thirty years before writing about cities and urban sociology. He's an urban sociologist by training. In the late 80s, a company like Autodesk, the software company, actually tried to trademark the term cyberspace. All of those histories get hidden, and we end up in a situation now, in 2013, where the Defense Department has a unit called Cyber Command whose job it is to defend cyberspace. And you have serious people wearing serious military uniforms going to work every day, convinced that cyberspace exists.

But it's a shorthand, isn't it? I mean, there is this realm.

It's a shorthand, but what I'm saying is that it's not a natural shorthand. There are alternative ways in which you could actually go and engage in cybersecurity that wouldn't assume that there is a separate realm out there, and that that realm needs to be policed as heavily as it is policed now. So I'm doing my small Foucauldian genealogical inquiry into the origin of those concepts, because I think those concepts now influence policy in ways that we do not understand. And that's why I go after concepts like openness as they're being applied to ideas like open government. Because if you go and start tracking where the "open" in "open government" comes from, you'll see that the "open" in "open government" in 2013 is a very different kind of open from the "open" in "open government" in 1993.
Whereas in 1993, open government stood for more accountability and more transparency about what the government does, and for giving citizens control over some of the decision making, now it stands for releasing data in formats that are friendly and portable and easily accessible by anyone, without necessarily asking questions about where the data comes from or whether it actually matters. If the North Korean government tomorrow starts releasing data about its labor camps in data-friendly formats, that would count as a revolution in open government by today's terms, even though the only reason they'd be doing it is propaganda, right? And I don't like this openness; I want the proper openness we had before, which was all about politics, which was about asking serious, important questions. That, I think, is the model of openness we need to stick to. But what happens is that, of course, once you start remodeling open government on this set of ideas implicit in open source, you no longer see that something has happened at the discursive level, right? So I see my task as a critic very clearly: I go and try to articulate those histories which get lost, in order, again, to either refocus some of the current debates or to show that there are other ways to think about how we arrange these technologies. And we can talk about solutionism if you want, which is a somewhat separate project.

Please, and I do want to, but first, on this point of politics, which actually is a theme throughout the book, and I think one of your stronger arguments, besides looking to the history, which is very important and which we don't do enough: you say, you know, one of the problems with these solutionists and with internet-centrism is that they don't look at the messy sausage-making, as it were, of politics as a model. In fact, they see it as exactly the problem. But what I'm asking you, certainly in the context of liberal democracy, is this: that messy process requires fairly well-informed, thoughtful citizens who are willing to commit to the process itself. And so my question gets to this larger issue of convenience, in that your hope is that people will be more deliberative and more thoughtful in their use of technologies, and at the same time, you're criticizing the platform makers of these technologies. But how do we get to the point of deliberation, given a citizenry that likes its gadgets, likes its shortcuts, likes its conveniences, and doesn't read theory?

Yeah, but again, I start from a very different theoretical starting point, I guess. I don't see users as having needs and preferences that are completely autonomous and unshaped by the kinds of technologies they use. So I don't think that designers just build tools to satisfy the needs of users. I think designers in part shape how users think and interact, and how much space they have for deliberation. So the reason why you do not think about the energy use of the devices in your kitchen is partly because you're just a lazy person, and part of it is because you just don't really care, but part of it is because those devices and appliances have been designed in a way that prevents you from thinking about this, because they were built on the assumption of abundance. Designers thought that energy is everywhere; why should we care about making people aware of those little things? Because, again, the assumption was that this is something citizens should not be thinking about.
They should be able to boil their tea without necessarily going through the mental process of how much energy it is consuming, or whether they should perhaps not use their kettle right now because the national grid is overloaded. Designers never had to think about those questions, and as a result users never expect that such thinking should happen. But what I'm trying to do is to uncover a certain political layer to technology use, and basically say that an alternatively designed set of appliances, underwritten by a different philosophy of design, might well result in more deliberation, and who knows, some users might actually like it.

So how do we train the designers? I mean, I don't see users as just passive sheep who are simply fed these gadgets. So how do you get the designers to think about this at an earlier stage of the process? Because they obviously, in many cases, have a bottom line in mind.

Well, that's a tricky question. And I can't say I have a good answer to it in the book, because partly, to answer that question, you have to go into very thorny and difficult questions of how modern enterprises work; you have to get into questions of capitalism. And designers don't work on their own. They work as part of large enterprises where, whatever you think about deliberation, if your top leader doesn't want it, there's nothing to be done; there is no way an Apple designer would have out-argued Jobs, right? That said, I think, again, you do it through public pressure, and you start politicizing things that were previously deemed non-political or apolitical.

Through regulation, through public pressure? How would that look?

Yeah, I mean, I do it as a public intellectual, through interventions in public debate, and I think this is okay. As long as designers don't think of themselves as being in the political business, then, of course, they won't feel the pressure, and as long as companies don't think of themselves that way, they won't feel it either. But again, it doesn't have to be limited to the kitchen. We can think about your browser, right? Your browser is designed in a way that makes certain things visible and certain things invisible. There is a reason why you don't see, by default, how many websites are sucking in your data as you browse the web, right? But there are ways in which you could be alerted to that, if we insisted on them. So the question is, how do we make producers of technology aware that they might have this extra layer and this extra responsibility? And then the question is, how do you go beyond mere disclosure? Because mere disclosure, I don't think, would actually be that helpful, if it's just numerical disclosure. There is a larger argument in the book about the differences between what I call numerical imagination and narrative imagination, right? Very often what we want to foster in citizens is narrative imagination: the ability to think holistically, in terms of systems, in terms of narratives, and not just in terms of numbers whose origins they don't know. So just knowing that your data is shared with 500 websites may not tell you as much as knowing that your data is shared with 500 websites and that this is twice as many websites as were online in 1993, right?
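A toy sketch of that contrast between numerical and narrative disclosure, with invented figures: the tracker count of 500 comes from the example just given, but the 1993 baseline below is a made-up number chosen only to reproduce the "twice as many" framing, and the function names are hypothetical.

```python
# Numerical vs. narrative disclosure, per the distinction drawn above.
WEBSITES_ONLINE_1993 = 250  # invented baseline, picked to match "twice as many"

def numerical_disclosure(n_trackers):
    """A bare number: the kind of disclosure described as unhelpful."""
    return f"Your data was shared with {n_trackers} websites."

def narrative_disclosure(n_trackers):
    """Pair the number with a narrative anchor the user can actually grasp."""
    ratio = n_trackers / WEBSITES_ONLINE_1993
    return (f"Your data was shared with {n_trackers} websites -- "
            f"{ratio:.1f}x as many websites as existed in 1993.")

print(numerical_disclosure(500))
print(narrative_disclosure(500))  # ... 2.0x as many websites as existed in 1993.
```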
So it's not just perspective but also, and here I'm really entering somewhat crazy territory, introducing some weirdness into our experiences with technology. I want people to feel somewhat uncanny, because that's how you make them think about deeper questions of infrastructure that they otherwise wouldn't want to think about.

Well, this is a good segue. You have an excellent chapter on crime, which I think has gotten a lot of attention, in part because crime prevention and the criminal justice system's relationship with technology has changed radically in just ten years, and you see a lot of new uses of technology throughout the criminal justice system. I mean, we had debates about this in the context of DNA databases years ago, questions that still haven't been answered, the underlying ethical issues. Could you give us an overview of what problems you see with the use of technology in things like profiling and crime prevention, which are wildly popular when the public hears the watered-down version? What would be your critique of that, and why is it important for us to worry about what our police departments are doing?

Sure. So there are two main, somewhat separate arguments about crime in the crime chapter. One is about the rise of predictive policing. For those of you who don't know, it's a set of tools and practices that relies on historical statistical data about previous crimes: you feed that data into algorithms and software, and you end up with predictions as to where future crimes are likely to happen next, and the police department then decides where to dispatch its forces in order to preempt some of those crimes from happening.

A sort of quasi-Minority Report kind of policing.

Sure. This is already happening in many police departments, and it's modeled on the internet industry, actually. If you read some of the essays and papers written by people in this field, including police chiefs, they'll tell you that they're inspired by Amazon. They love how, you know, Walmart is able to predict that during hurricanes demand increases for certain, you know, cakes, right? So they're modeling themselves on the internet industry very explicitly. And my argument about predictive policing is mostly that right now, again, this is all done by the private sector. It's a bunch of academics who formed companies that build such software, which they then sell to the police departments, and, of course, they're not interested in disclosing the algorithms. So you have no idea whether there are any existing biases, whether racial or economic, built into those systems. We don't have a way to audit the algorithms, which is one proposal I make: that we need some kind of auditing system for algorithms, not just for predictive policing but more broadly, with some third party that can come in and examine the algorithms without necessarily making them public to the entire world. So this is one of the policy proposals I actually make. But the other point there, and this is a problem common to big data in many other fields, is that it comes with certain epistemological biases and costs. If you are using crime that has been reported in order to predict future crime, then, of course, crime that is not reported is not going to be reflected in your model.
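A minimal sketch of the hotspot logic being described, on invented data. No real predictive-policing product works from a four-cell grid, and the scoring here is deliberately simplistic; the point is where the reporting bias enters.

```python
# Rank areas by historical *reported* crime; send patrols to the top of the list.
reported_crimes = {"cell A1": 12, "cell A2": 3, "cell B1": 7, "cell B2": 0}

def predict_hotspots(reports, top_n=2):
    """Rank grid cells by reported incident counts; patrol the top N."""
    ranked = sorted(reports, key=reports.get, reverse=True)
    return ranked[:top_n]

print(predict_hotspots(reported_crimes))  # ['cell A1', 'cell B1']

# The epistemological bias described above: "cell B2" shows zero incidents,
# which may mean no crime, or crime that no one reported. And patrolling A1
# more heavily yields more *recorded* incidents there, which feeds the next
# round of training data: the model can reinforce its own picture of the city.
```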
So if you're trying to distribute your police force based on those predictions without thinking hard about what doesn't get reflected in that big data, you might make choices that will only reinforce the current problems. So this is one argument: the hidden costs of predictive policing as a problem-solving exercise. There is another, bigger argument in the chapter, which directly connects to my critique of solutionism, about which I'll say a little more now. It has to do with the fact that as sensors and all sorts of interactive screens and systems proliferate in our built environment, it becomes possible to make crime almost impossible, because you can just exclude people who pose a risk from a given physical space, right? You can now regulate space in new ways, because you can build sensors that are connected to facial recognition systems or to social networks. There are ways in which access to physical environments becomes probabilistic: you can rely on probabilities to decide who gets in and who stays out, and while that reduces the risk of crime happening, it might also bring in all sorts of other problems. And that also applies to smart objects, by the way: objects that won't let you do something because they know you're not the right person to do it, or because they know it's bad for you, right? So what I'm trying to do in the book is to figure out what some of the arguments are for actually allowing some crime in society. What would be the argument for having some crime, right? And if you go and read the literature in criminology, you'll discover that there are actually people who are saying that allowing crime to happen also allows for, for example, civil disobedience. If you cannot break the law because the environment has made breaking it impossible, then there is no way for you to engage in civil disobedience in the first place, because you can no longer do the act that has now been made impossible, where previously it was merely illegal. There are people who argue that committing crimes actually produces public debate: you bring cases to the courts, the media start covering them, public debate ensues, and in the middle of that public debate, you might actually revise the norms that had previously made the act illegal. So allowing crime to happen is a kind of vehicle for progress, for revising the norms and rules we already have. There are many other arguments in favor of allowing it; I mean, not just saying let's all go and commit crimes, but building infrastructure in a way that allows for some of this mischievous, deviant behavior to happen. So deviance is a vehicle of progress, so to say. And that relates to my critique of solutionism, because the way I define solutionism is as this intellectual tendency to view problems as problems simply because we have the technological means of solving them, right? So in this case, we have the ability, let's say, because of sensors, big data, algorithms, CCTV cameras, to make crime harder, to make it more or less impossible. But it doesn't necessarily mean that that's the right thing to do, right? To understand whether we should be doing it, you have to go and start asking political, social, and cultural questions about the roles and functions of deviance and criminality in society.
You cannot just make that case based on the affordances of the particular technology, which is, I think, how Silicon Valley and a lot of solutionists frame these debates. They just say: well, if we can now go and make governments more transparent, if we can just go and eliminate crime, if we have the tools, if we can automate fact-checking, let's do it. But we do not actually inquire into any of the practices or contexts that we're trying to reform.

Well, if you think of it in terms of a social problem: we've gone from a society that used to work a lot of this out; you used the word norms several times. We used to have these norms; they were debatable, they were often transformed by technology, but we also discussed whether we wanted them transformed. But now we're a society of nudges, right? We have algorithms, and we have behavioral economists who can nudge us away from obesity and towards healthy food. And we have an implicit faith in this as being somehow more objective. And so I'm wondering: this idea that human beings are machines that can be reprogrammed is not new. As you mention in the book, it's an Enlightenment-era idea that you can program people. But I'm wondering if the embrace of behavioral-economic solutions, certainly at the policy level, which we've seen, is something new. Is it something a little more disturbing, given the fact that it has this technological infrastructure behind it?

I can give you a long answer, which I'm not sure would be very complete. Clearly there are reasons why behavioral economics has become interesting and appealing to policymakers. Some of that has to do with the rise of neuro-everything; there is a general excitement about neuroscience and psychology as ways to solve problems. And some of it has to do with the methodology that behavioral economics smuggles in through the back door. That methodology is basically offloading problem solving onto citizens. So it becomes your responsibility to eat healthy food, and not necessarily the government's responsibility to regulate the food industry, right? And Margaret Thatcher would have been a big fan of nudging and behavioral economics back in the day, because again, it's all about making citizens feel that it's their job to exercise, and their job to take their health and everything else into their own hands. Which to some extent is something I believe in, but to a certain extent, I think there are clearly roles to play for ambitious structural reforms and institutions, and it's very easy to crowd those out as we get too excited about nudges. So in the book, not in the book actually, but subsequently, in a couple of essays, I discuss Google Now, which, for those of you who don't know, is a recent app that Google put together. It's available on Android phones, and I think it's coming to the iPhone as well. And it became possible only thanks to Google introducing its single privacy policy last year, whereby Google can now monitor everything you do across different Google services. So Calendar, Gmail, YouTube, soon probably Google Glass and self-driving cars: all of that is regulated under one privacy policy, and Google can look into your interactions with different services to make predictions that apply across all of them. So there's an example they like to give, and it's already available in Google Now.
It's an app that you install on your phone, and it works in the background. So let's say you have a reservation in your inbox for a flight you have to catch later tonight. It will automatically check you into your flight. It will check the weather at your destination and tell you that you need to bring an umbrella, and it will check the traffic conditions on the way to the airport to tell you that you need to leave earlier because the roads are really bad. And it all sounds nice, because it saves us the hassle of daily living, and that's the vision: you'll have more time to experiment with more apps.

Play Angry Birds.

That's the logic. But they also make one more intervention. Recently, a few months ago, they introduced a new reminder, a new card; it's all a series of cards, basically, telling you what's happening and what to do. At the end of each month, they generate a card, without you ever asking for it, showing you how many miles you've walked that month and how it compares to the month before, percentage-wise, right? They can tell you because, again, your smartphone has an accelerometer, it has sensors, so it's possible to tell how much you're walking. And that's a nudge, right? And it's a nudge that Google is happy to show you, and that probably a lot of policymakers would be very happy to promote as a way of fighting problems like obesity, right? But then the question becomes: to what extent, if you look five or ten years from now, would we end up in a world where, again, sensors monitor everything, and they provide us with feedback and ask us to optimize our own behavior, but we are no longer inquiring into the kind of more ambitious structural changes that need to happen to enable the behavior in question in the first place? It's a very particular type of reform where you are telling people to optimize their behavior and start walking more because they're not walking enough. It's a very different type of reform where you actually inquire as to why people are not walking more, and you realize that they're not walking more because, for some of them, there is nowhere to go except the mall and the highway, right? In that case, you probably want to go, as a policymaker, and engage in a very different type of problem solving and start investing in infrastructure. And it comes to the same argument you can make about Google Glass, which, I can assure you, will be tracking what you eat. And since it's tracking what you eat, it can tell you that you are leading a very unhealthy lifestyle, if you are consuming too much fat or too few vegetables. And then you have your second glass of wine and you get into your self-driving car, and it won't let you drive; you can tie it all together, and it solves that problem too. To me, it's solving problems, but it's solving them in a very particular way, right? And that's not necessarily the way in which I would want problems like obesity to be tackled. I would want a much more ambitious set of interventions: looking at the role of the junk food industry, how they advertise; looking at building access to farmers' markets. There are all sorts of things you could be doing. But this is the rise of nudging, and the rise of what I call privately run infrastructure for problem solving; that's what it is. And it's partly enabled by sensors, and partly enabled by the portability of our social networks.
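A rough sketch of that monthly miles-walked card: daily distances would come from the phone's accelerometer, and at month's end a card renders the percentage comparison. The numbers and wording here are invented; the arithmetic, not the product, is the point.

```python
# The month-over-month nudge card, Google Now-style (hypothetical rendering).
def monthly_miles_card(this_month, last_month):
    """Render a card comparing this month's walking to last month's."""
    change = (this_month - last_month) / last_month * 100
    direction = "more" if change >= 0 else "less"
    return (f"You walked {this_month:.0f} miles this month -- "
            f"{abs(change):.0f}% {direction} than last month.")

print(monthly_miles_card(31.0, 42.5))
# -> "You walked 31 miles this month -- 27% less than last month."
# The card optimizes the individual's behavior; it has no slot for the
# structural question raised above (what if there is nowhere to walk to?).
```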
And that's another of the examples I like to give: the fact that now, because of your smartphone and your presence on Facebook, you carry your friends with you everywhere you go. And since you carry them with you everywhere you go, your behaviors can be subjected to new types of peer pressure. You can be shamed in front of your friends, or you can compete with them, right? In the second case, that's the rise of this new type of solutionist project that a lot of people in Silicon Valley call gamification, right? It's this turn to basically make everything into a game and start rewarding people with points for engaging in behaviors they wouldn't engage in otherwise. So one of the proposals that one of the theorists of gamification, Gabe Zichermann, made last year was that one way to boost civic participation in America would be to have people check in with their mobile phones at the voting booth and reward them with points.

Like you reward them for showing up.

Yeah, but you get the basic idea, right? The infrastructure is there, and you can earn points, and your friends can see that you've earned points, and that results in new types of behavior: behavior that, again, might optimize efficiency, but that might deform you as a moral and political subject. And this is the deeper layer, I guess, that I'm getting into in the book: what happens to you as a citizen when problem solvers and policymakers start treating you essentially as a consumer who is out there to collect what are essentially frequent flyer miles for behavior that was previously regulated through morality and ethics?
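A minimal sketch of that check-in-for-points mechanic, with hypothetical point values, names, and feed; no real platform's API is being described.

```python
# Points for showing up, visible to friends: the gamified voting proposal above.
scores = {}   # participant -> accumulated points
feed = []     # what the participant's friends see

def check_in(person, venue, points=10):
    """Award points for a check-in and broadcast it to the person's friends."""
    scores[person] = scores.get(person, 0) + points
    feed.append(f"{person} checked in at {venue} (+{points} pts)")

check_in("alice", "Precinct 12 voting booth")
print(scores)   # {'alice': 10}
print(feed[0])  # alice checked in at Precinct 12 voting booth (+10 pts)

# The mechanic rewards the act of showing up and makes it visible to peers,
# recasting a civic duty as the "frequent flyer miles" worried about above.
```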
Well, so actually, I'm going to interrupt you to ask, because you brought up gamification, and there's a book called Reality Is Broken, by Jane McGonigal, where she argues we should gamify everything, including the kind of...

She never uses the word gamify; that's the problem with Jane, she's very careful.

But she does think that what she wants to do is give a sort of rewards-based ethics, right? So I'm wondering...

She doesn't use the word ethics either.

Well, this is exactly the problem, isn't it? In Silicon Valley, you don't often hear the word ethics. And I think, again, taking the example of bioethics brings us into an interesting discussion about these technologies. Because we waited until a lot of errors had been made with some of our bioethical questions, but now you have a structure of institutional review boards, you have a vast scholarly literature on this, some of which is good, some of which is questionable, but you do have a constant, ongoing questioning of the morality of certain techniques, certainly when you're talking about genetic engineering. Why haven't we done that with computer engineering yet? To some extent we have, but how can we get to that place?

I don't think we need to get to that place. I would hate it if we got to that place. And this is where our political differences might start showing. I'm not a big believer in bioethics, and I wouldn't be, because bioethicists hold certain essentialist assumptions about what life is and what it should be, but I don't want to open that can of worms. So when it comes to technology, I just don't think you can generate a set of rules that will tell you what the proper attitude towards technology is. That's why, as I write in the book, I don't believe...

But just for the civic technologies you could, couldn't you? Like the self-driving car, or Google, or augmented reality in general: some guidelines?

No, again, I would just find that suffocating. Because, again, there is no way; I mean, where would you generate them from? You need to generate them from empirical work. You need to understand those technologies as they're being used and practiced and as they're designed. You cannot just start from something broader and overarching; we're not talking religion here. You cannot start with a set of rules and then work out how those rules will apply to self-driving cars or how they will apply to drones, because the context there...

So you wait till there's a crash, and then you've...

No, it doesn't mean that we shouldn't be having debates about the ethical, moral, and political implications of those technologies. It's just the idea that you can come up with some kind of rule book, a set of moral precepts, that will tell you how you should think about drones.

I mean, some sort of guidelines, not specific rules. Because part of what I think is very powerful in your book is that you get this impression of Silicon Valley as being entirely unmoored from those deeply human philosophical questions. So to find a solution to that problem would mean some sort of vague... I mean, not specific rules, but something. Because we can't just say what they're doing now isn't working and then not offer some kind of guidance as to what a more ideal situation would be. You could very much make the argument that as we screw up with these technologies, we fumble through to solutions, but that'll be done through the legal system, which is very slow-moving. I mean, the Supreme Court has only just dealt with GPS, a technology that's been around for a while. So we could either wait and go through that slow process, or we could use policymaking. There are a lot of options. I'm just curious what you think it would be.

Yeah, look, I see my interventions as changing the public debate: how the public debate functions, what technology journalists get to say, and how early they look into those technologies. Right now, technology journalism has become reviews of gadgets; that's all they do 99% of the time, and it's not particularly interesting, and it doesn't really ask any questions that have to do with politics. It's all incessant coverage of the latest iPhone model. That I don't like. And once we're stuck with technologies like facial recognition, you have technology journalists standing in and saying: well, they're already here; there is nothing you can do; all you can do is just go and change your norms. That I don't like, and that's why I practice what I preach: I engage with many of those technologies at the prototype stage, and I actually think that's a feature, not a bug, because it's the only stage at which you can actually influence things. So when, in the book, I talk about a smart trash bin that relies on sensors and gamification to steer the behavior of its user, I would hate to see such a trash bin; that's why I write the book opposing it. And then people say: well, this is just a prototype, it's not gonna work. I'm not so sure it's not gonna work, because, again, if you ask Cass Sunstein, he would love such a trash bin, and he holds far more influence over US policy than technology journalists do. So even if the market doesn't react to it, policymakers may well make sure that, in one way or another, such technologies do appear.
So for me, a lot of it revolves around the ethics of technology reporting and technology criticism. And this is where I see I can make an intervention, and not just in the content, but also in the style. And that's why I love going after existing paradigms, the ways in which people think about the internet, because I think they are taken to be natural and unproblematic and the only way to talk about such technologies, and I'm not sure that that's the case. So I see where I can make a difference. Whether it will add up to a set of policy prescriptions, I don't know, but I'm aiming higher, I think.

So this is a good point. One of the problems, I think, if you do any technology criticism, is how quickly, once you start looking into some of the critics, you find many of them are actually being paid by the industry they critique. So you have people who work for Microsoft, but then go out and talk about their research, which is funded by Microsoft, which then reflects well on Microsoft.

You mean...? Yeah.

Well, I mean, he's one. But many of the people you see on the TED circuit and whatnot are people who rely financially on the industry. And we would never listen to someone tell us about an assault weapons ban if they were lobbying for the NRA or worked for one of the gun makers. And yet we accept this.

I thought that's how all debates worked here.

Well, that might be hard, partly. But what I'm saying is that this is an industry with which the public, I think, has been far too permissive in allowing people to tell us what is good and what is bad, without questioning their self-interest, which we do question in a lot of other contexts. Why do you think that is? Is it because we use this stuff and like it, and it's cool and fun? Or is it just that we don't have a vocabulary for doing that? I mean, you mentioned journalists are falling down on the job.

Part of it has to do with... I mean, there are very ambitious, long, abstract answers I can give, and much more specific and probably controversial ones. The abstract one: I think technology and information carry a certain Enlightenment aura with them. We think of a company that decides to go and organize all of the world's knowledge, all of the world's information, as being in a business that's much more noble than pumping oil. And because that's the initial assumption we start with, we tend not to interrogate the commercial logics and business models of such companies, companies that, in addition to the problem solving that comes with apps like Google Now, are also interested in making all of their services interconnected so they can learn more about us as consumers. In part, that's because we think they're in the business of problem solving. And that's one of the arguments I make in the book: you now increasingly hear CEOs of such companies actually say that we don't wake up to make money, as Mark Zuckerberg said in his IPO letter; that all we want to do is make the world a better place; that we primarily see ourselves as being in the business of solving the world's problems. They go on and on and on. And of course, one of the reasons they do it is that they know that the public and the regulators won't want to over-regulate an industry that can potentially solve the obesity problem or the climate change problem, right? So they're making a very deliberate rhetorical set of interventions. So this is one part of it.
The other part of it, again, I think has to do with the fact that we are beholden to this ideology of internet-centrism, which I discuss in the book. And it's, again, this tendency to see all those projects as essentially stemming from this giant thing called the internet. And if you go after and scrutinize every one of them, you are committing a sin against internet culture. If you go and start actually understanding how Wikipedia works and how open-source software communities work, it's like an assault on this giant bastion of digital culture that we've built over the last two decades. And if you start doing that, that will open up opportunities for the surveillance industry or the copyright zealots, and you shouldn't be doing that. That's the kind of pushback I get on a daily basis. And to me, it's just interesting as a rhetorical strategy, the way in which people do think that there is something almost sacred about the internet; and I'm trying to secularize it.

This is a good opportunity now, if you're game for some questions; we'd love to hear any questions from the audience. Got one right here. And identify yourself, if you dare. Otherwise, we'll just facial-recognize you.

And there will be drones.

Yes, exactly. We have a drone upstairs. We'll just bring it down. We do that anyway.

My name is Oliver Grimm, and I'm the US correspondent for an Austrian newspaper called Die Presse, here in DC. I have two short questions. The first one: maybe I'm a bit too slow, but I didn't understand something you said, or rather the point you were making, which I found a bit contradictory. You proposed these household appliances that, simply speaking, malfunction if people don't really care about their energy use and so forth, which to me looks a bit like nudging, in a sense: nudging people towards desirable behavior. Yet at the same time, you're very critical of other ways of nudging people, towards walking more and so forth. So if you could elaborate on some of that. And then secondly, there was a very scathing critique of your book in yesterday's Washington Post by Tim Wu, and I would like to know your reaction to the allegation he makes that you're sort of copying, without attribution, arguments that other people have already made about these issues, while at the same time criticizing those people. He even argued that you might be the Bill O'Reilly for intellectuals, which I find a bit flippant.

You know, I was hoping that the entire Bill O'Reilly audience would just go and start buying my book yesterday.

We tried to get Bill here today, but he was busy. He's not onto it yet.

So, with regards to the nudging question and the appliances: in the book, I have a long chapter on design where I get to talk about a particular approach in the philosophy of design called adversarial design. And that's something I find very appealing. It's the idea that you can actually design things that would provoke people to think, but wouldn't necessarily provoke them to think about something specific. So you can get people to contemplate and disagree and start questioning how things work, without necessarily leading them towards a certain set of behaviors or making them think about certain specific content. The way nudges work is that you are actually seeking a certain intervention: when you are building a cafeteria and you want to put the vegetables first, your assumption there is that you want people to eat vegetables.
A cafeteria built on the principle of adversarial design would probably hide all those vegetables, hide everything, and make you think about where the plates are, and have you searching for them. That might not be a very practical solution to ordering food in a cafeteria, but I think it nicely gets at the difference. When you apply this logic to particular devices, you can still have devices that offer you some functionality but that also prompt thinking; and what you're thinking about is not something a technocrat has decided in advance. So my kind of nudging doesn't presume that Cass Sunstein has done all the thinking for you and just wants you to think about the very same thing he has thought about. It's done in a much more open-ended way, to try to open up some of the social and political dimensions that are otherwise hidden.

With regards to Tim Wu's review, I was actually very happy to see it, because it shows that some people are very uncomfortable with historicizing debates around "the internet," quote unquote, and with showing that many of the terms we take for granted shouldn't be taken for granted at all. And with regards to the accusation he makes that I don't properly attribute Larry Lessig's work: Larry Lessig's work hasn't influenced me at all. I draw on a philosophy of technology and on science studies that existed before Larry Lessig even went to college, right? And what I'm trying to do with this book is to show that there is another way to talk about digital technology that completely circumvents the cyber-law debate and the cyber-law discourse. You can actually go there, and that literature is much richer, and it actually has a way to talk about things like culture, history, and morality, whereas all that cyber law can talk about is law and economics. It's like Richard Posner applied to cyberspace, right? I have no interest in that project. And of course, people who spent the last 15 years putting law and economics into every law school are very unhappy about it, because if my disruption of their work succeeds, we will have to talk about history. We will have to go back to the 1960s and 1970s. We'll have to trace the impact that cybernetics has had on how we think about digital culture. We'll have to talk about things that cyber-law professors have no way of talking about. So of course people like Tim Wu are not very comfortable with it, and they would rather stick to labels like "the internet" forever. But I took it as a very encouraging sign.

And it's a big tent, because Tim Wu is also a Schwartz fellow here at New America. We all get along. Other questions? There's one here.

Hi, my name's Paul Blake. I'm a computer science student at the University of Maryland. I was just going to ask: is this maybe part of a bigger American or Western cultural problem, where we seek a frictionless existence? As an analogy, take TV dinners: all the journalistic focus was on the convenience, without ever thinking about the sodium intake, the fat, the calories, and so on. Much like this technology: we're so focused on the convenience and how much easier it can make our lives, but not on the problems it creates.
Yeah, you can generalize that, and I think, in some sense, it is helpful to position the current debate we have about, for example, self-tracking in a much broader context, where we are forced to think about certain numerical indicators when it comes to health, and not to think about the broader models of disease or of health that influence the production of those numbers. But I think you're asking more about the mechanics of public debate on those issues, and other than saying that most journalists do not necessarily read history or philosophy, there is little else I can add. But I'll make a broader point: I'm not trying to make a set of abstract statements about the utility of certain technologies. Actually, I don't think you can make such statements outside of the particular contexts where they're used. So there is no way you can compare the use of self-tracking in, say, health with the use of self-tracking in how you consume information. I just don't think those are comparable, or that you can make the same kind of statements about both sets of self-tracking tools. To understand why self-tracking, the ability to track how much you eat or how much you exercise, to track your sleep patterns, to track your overall well-being through a bunch of biomarkers and indicators, is so appealing to so many people in the context of health: it's because it fits very well with what the current pharma industry is interested in doing, and that's having consumers, in this case patients, identify as many symptoms as they can, so that they can then go and ask for newer and newer drugs to cure those symptoms. There is no way that big pharma will sell fewer drugs as more of us take up self-tracking, right? There is no way, absolutely. We'll only be buying more and more. I don't think there is anyone who started self-tracking and discovered that nothing was wrong with them. But again, that happened only because, in the last 50 years, if you read in the sociology of health or the history of health, you'll discover that the way we think about health has changed radically, to the point where, if you think that nothing is wrong with you, something is wrong with you: you need to go and see a doctor. It's like if you're not Googled, right? That was not the case 50 or 60 years ago, and you cannot understand the appeal of self-tracking if you do not position it within this broader discourse about health and disease and this broader ideology. And then you suddenly start understanding why so many venture capitalists are excited about self-tracking, and it's the same venture capitalists who invest in pharmaceuticals, right? But that set of conclusions, you cannot just take it and then say, well, self-tracking is evil, and go and make the same point about education, and make the same point about information consumption or about something else. I just don't buy that, because in that case it would not be a very good ethical manual: you discover that something is bad in one context, and then you just go and transpose it onto other contexts. I just don't think that's gonna work.

So, there's a lady here.

Hi, Jill Moss. I'm a Democracy Fellow with USAID.
I just wanted to get your thinking with regards to adversarial design, specifically as it relates to the internet counterculture: individuals who are using technology, proxies, VPNs, to sort of cloak-and-dagger their presence on the internet, to make themselves a little more private and secure in what they're doing, certainly to avoid the filter bubble that we are finding more and more as it relates to our searching on the internet. The other thought is, as you're talking about privately run structures for problem solving, whether there is a way to leverage this internet counterculture such that we're avoiding, for want of a better term, Big Brother, to a degree.

Sorry, I missed the Big Brother question. Can you just reiterate it?

Well, more and more you see, even in developed countries, technology permeating through the country. I'll use the United States and internet usage here in this country as the example: as you have Silicon Valley developing technology for the greater good, you also have that technology being used for more surveillance of individuals, perhaps even for social control.

No, I get that. So with regards to counterculture and sort of subversive practices: it's a tough one, in part because I think one of the key buzzwords that has marked the last decade is the rise of hackability. Hackability is the new buzzword; everything needs to be made hackable and circumventable. And you can actually track this. I've started tracking it: before 2004, very few people used that term. If you go and look at the academic literature in design and in computer science, hackability was used mostly as a merely descriptive property of a system, whether the system was circumventable or not. Now it's used almost as a normative goal: you want to build systems that are hackable, that are circumventable. And in part, if you think about it really hard, hackability in some sense is the very opposite of bureaucracy as someone like Max Weber thought about it, right? Because, as part of modernity, we wanted to build systems that are not circumventable, precisely because we wanted to promote fairness and justice and ensure that everyone gets the same treatment from a given institution. With the rise of hackability, the logic is very different: whoever is more skilled, whoever has better resources, can hack the system in the way that benefits them the most. And the assumption is that if you're not hacking the system, then you're doing something wrong, right? So there is some kind of assumption that we now need to go and hack the education system and start watching MOOCs at home, and that we need to hack the health industry and start tracking our symptoms ourselves with our phones, bypassing the corrupt doctors and the insurers and whatnot. And that puzzles me quite a bit, because, again, there is some kind of anti-institutional rhetoric driving this, which I don't like. I just find it very anti-modern, if you will. And so when it comes to the use of tools to hide yourself online, part of me wants to say: sure, we should be building and funding such tools, and we should ensure that everyone uses them. But I don't want to end up in a society where we all have to go and defend ourselves with our own guns that we print on 3D printers, and with drones that defend us from attacks by our neighbors, and where we educate our kids at home because there are enough TED talks and MOOCs for them to watch.
I mean, yes, you can make a society that is perfectly hackable, where you as a citizen are responsible for everything, because you can order everything online through an app and you never have to leave your house. But that would not be a very pleasant world to live in. That was the whole point of modernity: we wanted to build institutions and delegate things so that you could actually enjoy life, go and do something, have some kind of self-fulfillment. So when it comes to tools to protect yourself: yes, great, let's build them, let's fund them, but I don't want all of you to be paying $15 a month so that you can protect your privacy. That's why I pay taxes today, right? I want my government to go and do something about my privacy; I don't want to just do it on my own. So that part I find a bit suspicious, and I think a lot of very well-meaning organizations, like the ACLU and many others, are just embracing this discourse very uncritically. They just think that as long as we can build those tools and let people use them, we'll solve the problem. And we're not gonna solve it. We have to solve it at a different level: at the structural level, at the level of reforms, of evolving institutions and evolving governments.

As to the question about surveillance: I'm with you. I didn't really talk much about the smart trash bin here, but all of the smart devices that record and store, that have sensors in them, that seek to analyze what it is you put in them and to induce a different kind of behavior from you: all of them record and store that data somewhere, and we end up in a situation where the FBI can go to Facebook, or whoever has built your smart trash bin, to find out what was in it three months ago, right? That was not a possibility that was on the table ten years ago, and suddenly it is. And I think that will be the case with Google Glass. There is no way that Google will invest all that money in building Google Glass only to destroy the data that it collects and analyzes. That data will be linked, through the single privacy policy, to the data in your inbox and to the data in your self-driving car, all in order to tell you that there are some good new deals at the restaurant next door. And it will analyze what was in your trash can the night before in order to predict what you are likely to purchase at the supermarket. So I don't think that data will disappear; it will be stored somewhere. The extent to which law enforcement agencies will get access to it is something that's still up for grabs, and I think this is where we can still make a difference, before those technologies have taken off completely. Google Glass is not even available at this point; you can pre-order it, but there is still space to go and regulate it. But instead, Eric Schmidt wants them to regulate it on their own.

Ben Shneiderman, University of Maryland, professor of computer science and human-computer interaction. I really appreciate your fresh approach, and your taking on this powerful role of critic that opens people's minds. And I'm very pleased that you've focused on design issues; I think that is a way where people can intervene. There is, though, a strong existing design movement that's aligned with this: the value-sensitive design group from Washington, B.J. Fogg's persuasive technologies, our own work, and others like that.
So I think one of the things I'd like to encourage is that you concentrate more on contributing something that would counter the criticism that Jimmy Wales made: you know, you're a critic, but you haven't contributed anything. And beyond the criticism, which I value, I think it would strengthen your position if you took on a design project and focused consistently, not just skipping from one topic to the other, on a topic where you were able to make a positive, measurable change, one that would generate a movement. That would trigger others. You're now the successful critic, but to create a movement, you need to engage others. And I'd like to invite you to do that. Come out and join us, or we'll help you. It's an interesting comment, and I think it shows you that there are, as I said, two dimensions to my overall intellectual project. One of them is to help us unlearn the internet as a concept. And that means getting very suspicious of the terms we use in the current discourse, looking at how they are invoked to justify certain policies. I still think that an idea like internet freedom is complete bunk. It has no analytical purchase. It has no depth. And whoever wants to go and promote it, whether in foreign policy or in domestic policy, is wasting time and resources. That's not the kind of policy intervention or intellectual intervention I would be able to make if I go and spend my time on design, because that's part of a separate project, which is entirely discursive in nature: trying to articulate a different paradigm for thinking about digital technology, one that can more or less leave the 1990s behind. To put it very bluntly. But there is another dimension to my work, and that is trying to articulate what a philosophy of technology might be that doesn't dabble in either technophobia or technophilia and that doesn't start with any prior assumptions as to whether technology is good or bad. And this is where the design project comes in. So I can tell you, I'm very interested in these issues. And the two books I'm working on now, which will appear who knows when, pursue those two different projects. One of them is a very ambitious history of digital culture going back to the 1950s that will help us unlearn the internet, hopefully. The other one is a book about the future of public space that will be all about urban planning and architecture and design, where I would engage with these matters. Because, and that's why I like the design literature and the architectural literature so much, the debates about democracy there are much more meaningful and richer than the debates we've been having in digital culture, where for the last 15 years, ever since Cass Sunstein wrote his Republic.com book, it has all been about whether you're exposed to like-minded people or to people who think differently, whether blogs are making us stupid. It's the same kind of debate that rehashes the same one thinker, Jürgen Habermas, to death. I'm just sick and tired of it, because it hasn't moved anywhere. It's the same argument made over and over again. Every five years, you have another book, you know, The Filter Bubble, Republic.com, and they all say the same thing. That debate moves nowhere.
If you look at architecture, design, urban planning, there you have all sorts of ideas about how you need to involve the audience, the users, in the production of artifacts. You have all sorts of rich questions about democracy, about delegation. I love that literature. So I'm not sure I want to build a project myself. There are people engaged in these projects who are doing it, and I endorse it. It would be wrong to say that I don't endorse it. The Morozov seal of approval. I mean, in the column that I wrote for the New York Times on this adversarial design stuff, I endorsed a project from Cornell that had just been released three weeks earlier. That's a very short time from, you know, a paper to a New York Times column. It took me just a week to endorse them, right? So I wouldn't say that I don't endorse them. I read all that literature. But no, I'm with you. I'm not sure I'll be building stuff, but I'm very happy to hang out with people who do. All right. There's one in the back, thank you. Sandy Askell, in the College of Law at Arizona State University. I was a little surprised when you recoiled at the suggestion that computer engineering might benefit from ethical thinking, suggesting somehow rules would limit innovation. I'm particularly surprised because earlier in your presentation, several times, you make these ought claims about, here's what we need: we need more analysts and scholars contributing to public debate, responding to the challenges. That's a normative claim. So it seems to me either there's an inconsistency or there's an unduly restrictive view of what ethical thinking is. Yeah, but you can contribute. I mean, you can have analysts and scholars and intellectuals asking questions without necessarily having answers before they ask the questions. And that, I think, is what a project that tried to articulate a coherent template of ethics would do: it would start with the answers before the questions have been raised. That project I don't want to have. I don't want intellectuals who just have a ready-made answer for everything. I mean, if you model it on bioethical thinking, that's how it would work: you start with a certain set of assumptions about what life is, and then you go from there. At least on a very conservative reading of the bioethical project, that's how it would work. If, on the other hand, you want to explore and ask questions and then come up with answers without having any a priori biases as to what it is you're talking about, I'm fine with that. But everybody has biases. It's just a question of which ones. Well, biases about technology. I mean, I don't want someone who is already biased against drones to pretend that they're doing some intellectual exercise in articulating the ethics of drones while all they're doing is trying to shoot them down. But then where do you find the grounding for that? I guess a couple of people have circled around this question. If it's not religion, if it's not, I mean, in some sense it's politics. Yeah, sources of normativity. I mean, that's the biggest question of modernity. What do you expect me to say about the sources of normativity? Well, that's why we invited you, isn't it? No, but you know, it's easy, you're right, to label where a person is coming from. In bioethics debates, I think you're correct, people become mired in a lot of these labels.
But there are approaches to our use of technology in the abstract, based on longstanding philosophical debates about what it means to be human, about embodiment, about physical reality. And so we need something to start the debate on. And I agree that you don't want to draw principles from everyone's preconceptions, but you've got to start somewhere. I mean, look, I have no problem with people coming from religion, or people coming from humanism, or people coming from post-humanism, engaging in the debate. The nerd rapture, right? No, no, no, not transhumanism. Post-humanism is very different. Someone like Sloterdijk is not the same thing at all. So I have no problem with those people coming together and having debates and articulating and writing essays and op-eds. What I have a problem with is trying to label one of those projects as the ethics of technology. So if we're going to create a council that would advise the president on technology ethics, and if it will be led by Ray Kurzweil, well, count me out. We'll all be cyborgs. Okay, that makes sense. There's one more in the back there, I think. Good morning. I'm Gadi Ben-Yehuda. I'm the Innovation and Social Media Director for the IBM Center for the Business of Government. So I have a few questions, and I'll just throw them out there and let you pick which one you'd like to answer. The first one is about when you said that anti-institutionalism is anti-modern. I really thought that the idea of the Enlightenment, which I'd always imagined America was based on, was: we can do this better. If we think about it, we can do it better. And I think that what we would call that ethic now really is the hacker ethic, the idea that we can look at something. What do you mean by we? We, you know, we look at human health and we say, you know what, I think there's a way to tweak this, which kind of segues into my next point. I am one of the people who monitors, you know, the way that I walk, how much I walk, rather. And I think it does kind of help me to change behavior, so that hopefully I'll be spending less on my healthcare. And I think that behind this, and really what gets to you, is this idea that... I'm trying to get at a phrase as well. I guess I've completely blanked on my last question. I'll leave it at that. So, again, I think there is a tendency to confuse this buzzword of hackability with many other buzzwords. There is nothing wrong with the revisability of our norms and institutions and practices. But you can have revisability that is mediated by institutions, or that has some kind of representative practices built into it. So when you talk about the hacker ethic or whatever, I mean, institutions can do the hacking as well. If by hacking you mean that they will be revising the rules of their operation and their norms and goals and purposes, you can do all of that institutionally, and you can do all of that in a manner that is compatible with democratic norms of representation. There is nothing anti-Enlightenment about that idea. The notion that somehow the Enlightenment had an anti-institutional bias, I just don't buy it historically.
I mean, look at the rise of the coffee house: you didn't drink coffee at home, right? You went and met with people, and it was an institution of sorts. You had the encyclopedia, which was to me an institution. It wasn't a bunch of people writing at home for their own sake. The rise of universities, the rise of scientific journals — all of those were institutions. So we can have that debate about whether institutions played that much of a role in the Enlightenment process, but I would say everything we know about the rise of the modern state — I mean, you can say that the modern state is no longer relevant, blah, blah, blah, but everything we know from sociology and from Weber about the functioning of modern institutions has to do with some kind of mediation by institutions and by bureaucratic structures. And I see nothing wrong with that, because it was a very good way to avoid clientelism and to avoid the kind of corruption and nepotism that existed before, when you didn't have mediating institutions that treated everyone equally. So we can have that debate about history, but I stand by my words. Now, I'm not sure that self-tracking question added up to what I think it added up to, but I have no problem with people who choose to self-track. Personally, I would rather have them put a little more thought into what they're tracking and how it connects to broader models of nutrition, for example; it would be silly to focus on calories alone. And it might be that our tracking devices now can only track calories and not other valuable indicators of nutrition. So there are all sorts of things we can discuss, but for me the really important aspect here is to understand that this creates a temptation for policymakers and problem solvers to rely on some of this infrastructure to offload some of their existing problem-solving efforts onto citizens. And this is where it gets really interesting to me, because if your phone monitors how much you are walking, it can potentially tell your government or your insurance company that you're not walking enough to qualify for a certain package or a certain rate. And that's something I don't like, but I also know that a lot of people would want to opt in to such a program, because it will show that they're walking more than the average person, and thus they get a discount. And this gets us into the ethics of self-tracking, because for many people who are healthier, who have better opportunities in life than the average person, self-tracking is excellent: you can show that you're better than the average person and thus get better treatment from insurance companies and other social institutions. Your mic. Technology, please. Okay, you know, there are experts, and I think we can say that walking 5,000 steps is better than walking zero steps in a day. And likewise, eating some vegetables is better than eating no vegetables. And it seems like what you want is to have people think about their total food consumption, everything, and we know that most don't. And we know that there are healthier ways to eat and less healthy ways to eat. So why not nudge people in that direction?
You live in some utopian paradise if you think that all of those ways are equally accessible. I mean, I lived in Palo Alto for two years. If I wanted to walk, there was nowhere to walk. I can assure you. So what? Okay, I can go track my steps, but knowing that I need to walk 5,000 steps rather than 2,000 steps is not going to help me a lot, right? And if, again, we are trying to solve that problem... What? It would help you 3,000 steps. Yeah, but the point is that it doesn't matter that I can tell you that you need to go and buy vegetables if there is no way for you to get to a farmer's market without getting into a car or a bus, right? But you're talking about arranging a cafeteria in such a way that, you know, we put the vegetables first, and I... Sunstein talks about that; I don't. The anti-vegetable. He's very anti-vegetable. Well, no, but also think about it in these terms. You might take 1,000 steps in one day, but even that number is not without all kinds of implications. If you're taking those steps pushing a garbage can on a night shift while you clean out an office building, then your approach to food when you're hungry is going to be very different than if you're strolling down Fifth Avenue in New York. So even the process of walking can be problematized, as it were. And I think what Evgeny is saying, if I hear you correctly, is that you can't have this one solution. You can't say that tracking alone is a solution to this problem, because those problems are so deeply complicated and so implicated in individual experience that the solutions policymakers are embracing are far too simplistic. I guess what I'm saying is that if you position my critique of solutionism within a broader critique of neoliberalism stretching back to the 1970s, you will see that there are specific reasons why delegating so much problem solving to citizens makes sense to governments. Again, it's much easier for governments now, particularly with all of the excitement around innovation, to say that we're not only solving the obesity problem, we are also engaging in innovation and all those other buzzwords. We are actually building apps. Look at Michelle Obama and her efforts to get everyone to solve the obesity problem. At the beginning it was all about let's get the food companies to the table, and now it's all about let's move more, right? Why? Well, because moving more doesn't require anything from anyone. You can just move however you want, while doing something about food companies requires a lot of political fights in Washington that no one wants to have. So if you add those political contexts to many of the things I'm debating, you'll see self-tracking in a very different light. But just because something doesn't solve everything, it doesn't necessarily follow that it's not a worthwhile activity. I mean, you're right. Tracking your stats doesn't mean... That it doesn't come with costs. That's what I'm saying: every solution comes with costs, and the way we now opt for some of those solutions has nothing to do with a cost-benefit analysis of them vis-à-vis other solutions. I'm not saying that this is counterproductive. All I'm saying is that there are political reasons why these solutions enjoy the kind of reception they enjoy now.
And we are getting blinded to alternative paths of action that would require far more political capital and far more ambitious structural reforms than we can currently afford. So of course, those efforts will accomplish something, but they will not accomplish the kind of reforms that I have in mind, and that I think we should be pursuing. Well, I mean, I'd like to see... Do you see the efforts to get us to move more opening anything up toward regulating the junk food industry? Well, then I'm reading different newspapers, because the vector points in a different direction. We started with efforts to get people to move more and to regulate the food industry. Five years later, we ended up with efforts to get people to move more and no regulation of the food industry. So, I mean... Karen Reilly from the Tor Project. Speaking as somebody from the technology world, technologies that hide your identity aren't necessarily the solution. They are a way to tell people to come back with a warrant. So there are already norms: the Fourth Amendment, the First Amendment in the United States. And I get the impression that some of the problems we have come from not treating these spaces as normal places to engage in political movements, movements that wouldn't be looked at askance if you were organizing on the town square. But if you're engaging in controversial behavior on Facebook, then all of a sudden there's more room for regulation. So how do we bridge the gap between using technologies to claw back some of the rights that we have in physical spaces and getting policymakers to treat the internet as just an extension of human life instead of this nebulous cloud where we don't necessarily have to apply the rule of law? Do we communicate better in order to do that? What's on our to-do list? Yeah. Well, I think you're pointing to some of the things I've been talking about in terms of this discursive project of understanding where terms like the cloud or cyberspace come from. Because, first of all, they don't just appear naturally; they also enable certain kinds of policy talk and disable other kinds of policy talk. If you think that cyberspace is a frontier that is always moving away from us, then of course you would think that regulation is futile. So that question ties into my project intellectually, but I think you're asking for something much more specific. And I must say, I'm with you: we should be thinking much harder about ways to apply existing laws and norms underpinning things like the Fourth Amendment to the use of digital technologies. The reason why we don't do that, I think, has mostly to do with... Well, people in Silicon Valley will tell you that it's just technologically impossible. Everything else is technologically possible. We can send people to Mars, but somehow tweaking something on Facebook is a very challenging undertaking. But I think it mostly has to do with ideology and the reluctance to go and tinker. I have two chapters in the book which are basically a close reading of one line from Larry Lessig. That line is, the network is not going away, right? It's a line he used in that New Republic essay he published three years ago on the downsides of transparency, right?
And the thinking behind that line is that Lessig said: look at how newspapers reacted to Craigslist, look at how the music industry reacted to Napster. That wasn't nice, right? So there is a certain set of responses that are appropriate and some that are not. Someone like Lessig would say that going after pirates and making peer-to-peer file sharing illegal is a bad idea; instead, we need to embrace streaming. And a streaming service like Spotify is a natural extension of the internet's culture of openness and sharing and whatnot, right? So there is an assumption that these tools and technologies are here to stay and that all we can do is figure out what the internet wants us to do next. So you should not be imposing paywalls — that was his argument with newspapers — you should be finding new models. And I just think this is ridiculous. You cannot go and tell a newspaper that it shouldn't be building paywalls because that undermines the culture of openness of the internet, or because that's not how the internet was meant to be. You have to decide whether you want to run a paywall based on your own business model and your own function in public debate. And if you think it's important to build a paywall, what it does to the network should be of no concern to you. But of course, if you start answering those questions from an ideological viewpoint, you start thinking that, hey, this is going to be a sin, and it's better to have newspapers go out of business than to have them sin against the network. I just think this is delusional. And I think some of that applies to how we think about privacy and Facebook. There is a reluctance to go and intervene in many of those platforms, in part because we think that somehow any tiny intervention will undermine the sanctity of the platform, of the network at large. If you abandon this belief that the network is somehow tied together and very fragile, you can have all sorts of experimentation that I don't think would undermine the network at all. Well, there's also the question of how you approach any regulation. I mean, do we see Facebook as a utility, or do we view it just as a business and regulate it that way? Those questions strike me as also needing to be parsed out before we can even talk about regulation. One more question. I'm Constantine, I'm a fellow here. I wanted to ask: part of your project is historicizing techno-utopianism and being able to point out, which I sympathize and agree with, that when the telephone was invented, when the phonograph was invented, when the radio was invented, you had a sort of boosterism very much akin to what you see today. What I wonder about is the argument that somebody today can make that this time it's different. Which is not an argument I sympathize with, but it is a simple and powerful one. And I was wondering how you address that argument. So for me, actually, the word historicizing would mean something very different. As for what you've described, I've become more and more reluctant to engage in what I call fishing expeditions, where you just go and look at the history of a particular technology, and, depending on how you look and which technologies you choose, you can make all sorts of arguments either for or against any current policy position. I mean, give me a technology and give me a current policy position.
I can assure you I'll go and find books and newspaper columns from the 19th century that will allow me to make my argument. I've been in this business; I've made those arguments; I know how easy it is to do. It's not a serious historical project. For me, historicizing in this context would mean that we actually take something like cyberspace or the internet as an idea and try to understand where it comes from. But if you are engaging in that project at the same time as you are making policy interventions, there is no way that you can be using the word internet seriously. That's why I put it in scare quotes. There is no way I can go and start saying that the internet is or the internet does without making my own job as someone who is historicizing much harder. That's why I'm trying to make my arguments without ever using those terms, because I think that until I have finished my historical inquiry, there is no way I can say whether this term is a good or a bad one, or whether we should abandon it and move on to something else. So I'm sticking to my approach; I think I'm coherent in it. How you address that exceptionalism is a very interesting question. Frankly, you know, I'm reading a lot on the idea of revolution and how people come to recognize that they are living in revolutionary times. And depending on whom you read — if you read Foucault, Foucault will tell you that every single generation thinks they're living in revolutionary times, and that that's just how modernity copes with itself. That's an answer I can probably also sympathize with, and you can go and find enough historical evidence for it. Of course, if your other claim is, look at the printing press and look at the internet, don't they look alike? — I'm skeptical of that kind of argumentation, because it's also very prevalent. You go and look at the printing press and you think that, again, just like the internet, it dropped from the sky with its three or four coherent features: you know, it makes texts portable, it gives them fixity. Elizabeth Eisenstein made that argument in 1979. I go after her in the book, and not by myself: there are a lot of other book historians, and many people don't buy that argument. They don't think that the printing press just dropped with certain features. They think that the reason it became what it is is certain social and political forces that resulted in the printing press having certain features, which Eisenstein takes for granted. That's the argument I'm making about the internet. You cannot presume that it just naturally promotes openness or that it just naturally has certain consequences. It might come to have those consequences, but how we get there is something you need to understand in its historical context. You need to understand the interests of the players involved and the ideologies. There is no way you'll understand how we got the idea of open government in 2013 without engaging in some kind of genealogical critique of the idea of open source, of how open source replaced free software. I've done that in a recent essay. That stuff needs to be done, because otherwise we just end up in a world filled with generalities and banalities and talking points that a lot of people rehash, that come directly from Silicon Valley. That's a short answer.
The short answer is, I'm writing a history of bullshit. That's the short answer. Well, thank you very much for joining us today. Thanks. And are there books here? Do we have books here? Yeah, I think I can sign books outside.