I'm Professor Sophia Roosth. I'm the Frederick S. Danziger Associate Professor of the History of Science at Harvard University. And it is my pleasure today to introduce Brett Frischmann, who is our guest, and we'll be talking about his book, Re-Engineering Humanity. If you haven't already read it, it is magisterial and also a very sobering read. And I don't want to say too much about that, because I think it's more important to get to talk to him than to find out who he is. You can always Google his website. But I will say just this much. He is the Charles Widger Endowed University Professor in Law, Business, and Economics at Villanova. He is also an affiliated scholar of the Center for Internet and Society at Stanford Law School and a trustee of the Nexa Center for Internet and Society at Politecnico di Torino, as well as, recently, the Microsoft Visiting Professor of Information Technology Policy at Princeton University's Center for Information Technology Policy. Before publishing Re-Engineering Humanity, he wrote a series of other groundbreaking books on the relationship between infrastructural resources, governance, commons, and spillovers, including Infrastructure: The Social Value of Shared Resources, which came out in 2012, Governing Knowledge Commons, which was co-authored with Michael Madison and Katherine Strandburg, and Governing Medical Knowledge Commons with the same co-authors, which was in 2017. And I just found out also that Professor Frischmann has an undergraduate degree in Astrophysics and a master's degree in Earth Systems Engineering. So he is an expert in pretty much everything. But today he's going to be talking to us about the future of humanity for about 20 minutes, 25 minutes, and then we'll move into a Q&A. So 20 minutes, future of humanity. Let's go.

Yeah. All right, let's do it. Welcome, everyone. It's a real pleasure to be back here in this room. Back in 2012, I came to talk about the infrastructure book. And so it's really great to be back here now. It's funny. In the infrastructure book, I emphasized the social value of an open internet and was very optimistic, almost utopian, in my discussion of the internet and open systems of a bunch of different types. I've spent the past six years working on Re-Engineering Humanity, a book that may be a little more pessimistic about the digital networked world we've built and are building. I decided to throw the Governing Knowledge Commons book up there on the slide as well, in part because now more than ever, we need trusted governance of knowledge, of intelligence-generating systems, of all sorts of different things. And so we need to better understand how commons governance can and does, often though not always, work.

But before I dig into Re-Engineering Humanity, I wanted to tell a personal story. A few years ago, my first grader came home after school, very, very excited. He says, dad, dad, I won. I've been picked. I get a new watch. And I said, you know, that's great. What happened? And then he rattled off something about being the kid in his class who got selected to wear a new watch for gym class. And so then a few days later, I received this. I know there's a lot of text, but this is the letter I received from the school. I want you to think about what your reaction to this would be. Mine was atypical, at least in my community. When I read the letter, I went ballistic. I started tearing out my hair, and now I don't have as much.
Initially, I wondered about various privacy issues, the who, what, where, when, and why with regard to the collection, sharing, use, and storage of data about kids. The letter didn't even vaguely suggest parents or their children could opt out, much less that consent was required. But of course, even if it did, it couldn't have been informed consent, because there were so many questions left unanswered. So I read the letter again, and this is what I really got stuck on: bath time and bedtime surveillance. The letter made me think of one of those Nigerian bank scam emails that goes straight into my spam folder. Such trickery, I thought. I remembered how my son had come home so excited; the smile on his face, the joy in his voice were unforgettable. It was worse than an email scam. They'd worked on him deeply. He was incredibly happy to have been selected to be part of this new fitness program, to be a leader. How could a parent not be equally excited? Most were. I wasn't.

After talking with him at length about it, I contacted someone at the PTA. I spoke with the supervisor of health. I wrote a letter to the school superintendent. I had meetings with the general counsel for the school district. What caught people's attention most, I think, was a line from a letter I had sent. It said something like this: I have serious concerns about this program and worry the school district hasn't fully considered the implications of its child surveillance program. Whoa, whoa, whoa. Child surveillance. No one had called it that. All of a sudden, the creepiness of bath time and bedtime surveillance sank in, and quite naturally it triggered the conventional set of privacy concerns. So using the word surveillance generated a visceral reaction. At least it was effective in getting people to stop and think. No one had really seemed to do so up to that point, for a number of pretty obvious reasons. The program, like so many being adopted across the country in various school districts, is well-intentioned. It's aimed at a real problem: obesity and a lack of fitness. It's financed in an age of incredibly limited and still shrinking budgets; the Department of Education had a PEP grant to support the program. It's elevated by the promise that accompanies all new technologies, right? People trust the school district and they love their new technologies. The program presents substantial upside with little or no apparent downside. It's an easy cost-benefit analysis for most people. It seems like a rare win-win-win scenario.

After my intervention, I have to admit, very little changed. Better disclosure and informed consent would apparently fix everything. In my view, this shows why conventional privacy concerns fall woefully short. The most pernicious aspect of the program was not the 24-7 data collection, nor was it the lack of informed consent. Both of those things matter. Don't misinterpret, those things are really important. But the deeper concern I have with the program is the unexamined techno-social engineering of children. It appears no one thought about how the program shapes the beliefs and preferences of a generation of children to accept without question 24-7 bodily surveillance that collects and reports data to others. We should expect creep, both within the schools and outside the schools. Surveillance creep can take many forms, right? It can be gradual expansion of surveillance from one context to another, from the schools to the home to the playground, right?
Or it can be the gradual expansion of the types of data collected in a particular context. Or it can be the gradual expansion in the use of the data, or in the third parties that gain access to the data. Or it could be all of those things. Usually surveillance creep is something that we think about happening on the side of those doing the surveillance. You think about the NSA, you think about Facebook, right? But it also happens on the other side, as those being surveilled become accustomed to it, as their beliefs and preferences about technology and surveillance more generally are shaped through their experience. The program normalizes an arrangement that occurs in non-educational contexts too, after all. Insurance companies want customers to self-track to set rates. And you all know that in the employment sector such programs are pervasive. But surveillance in the educational sector remains especially important, because schools are powerful sites of techno-social engineering. Schools shape us generation after generation. In addition to real-world examples, Re-Engineering Humanity uses thought experiments and hypotheticals to sort of imagine the path we're on. So we discuss extensions of this activity watch program, setting up a hypothetical where the school district extends the range of sensors used on the device: sensors that collect data about mental fitness, attentiveness, and activity, even mood or emotional states. It was hypothetical when we wrote the book, but no longer, right? So let's just hope that the Pavlok doesn't itself become normalized.

All right, so let's talk about the book, Re-Engineering Humanity. I love the cover. It's a sculpture from an artist in Copenhagen named The Zinker. And it has a science fiction feel to it, right? So many times science fiction stories pit humans against machines. Sometimes the machines are tools of powerful humans that oppress everyone else. Sometimes the machines become sentient oppressors. Often humans sow, unwittingly so, the seeds of their own destruction by madly rushing down a technological path, attracted by the siren call of efficiency, optimization, perfection, only to learn too late that along the way they've lost their humanity. So what if we were rushing down such a path? Would we know? Would we recognize what's happening? Would we be able to distinguish reality from science fiction?

Re-Engineering Humanity is not science fiction. If you're interested in science fiction, I do have a novel coming out soon that explores these issues; it's called Shephard's Drone. But back to Re-Engineering Humanity. Techno-social engineering of humans exists on an unprecedented scale and scope. It's only growing more pervasive as we embed networked sensors in our public and private spaces, our devices, our clothing, and ourselves. So here are the central themes of the book. When does technology diminish our humanity? When and how do humans become predictable and programmable? Can we detect when it's happening? Will we be able to evaluate it? What about being human matters? This is our table of contents. We cover a lot of ground, and today I'm only gonna highlight a few of the key ideas, but in the Q and A that follows, and I'm even around most of the afternoon, if people wanna talk about anything else, let's do that. But here's the plan. I'm gonna focus on a few familiar examples of techno-social engineering of humans.
I understand this is a bit of academic jargon, but we're gonna try to make it real with some examples. And then we'll talk about humanity's techno-social dilemma and the framework that we develop in the book that uses reverse Turing tests. All right, so let's start with contracts, something I think you're probably all pretty familiar with. But have you ever thought of electronic contracts as tools for engineering humans? Probably not. It's the fourth chapter of the book. Contract law shapes the transactional environments where people formulate legally respected and binding commitments and relationships. In general, contract law greatly enhances individual and group autonomy as well as our ability to relate to each other and to cooperate. These are contract law's core normative foundations. And yet contracting practices have changed dramatically over the past half century to accommodate changes in economic, technological, and political systems, and it may be more liberating for some than for others. For many, it's an oppressive form of social control. The current scale and scope of private ordering via written boilerplate contract is unprecedented.

Consider a few hypotheses about the number of written contracts the average person enters into during their lifetime. First, the number has exponentially increased over the past half century. Second, the rate of meaningful participation in negotiating terms has steadily decreased, and in many areas disappeared altogether. Third, the number of written contracts concerning completely mundane affairs has increased dramatically, and by mundane affairs I just mean ordinary everyday affairs for which a written contract would be cost prohibitive, inefficient, and downright silly in the absence of cheap digital boilerplate. How many written contracts have you entered into in your lifetime? Think about it. If I'd asked this question in the past, the answer would be an order, if not orders, of magnitude smaller. And yet the funny thing is, if I asked this question in the near future, people may find the question odd or just downright weird, because the idea of distinct, identifiable contracts may be at odds with the experience of completely seamless contractual governance. And of course this raises an interesting theoretical question. Freedom of contract requires the correlated freedom from contract. When contract becomes automatic, continuous, ubiquitous, both disappear. There's no freedom.

So have you ever clicked on an agreement like this and agreed to the terms without reading them? Of course you have. You all have, right? These contracts, and more importantly the human-computer interface through which they're presented, are designed so there's no point in reading the fine print, much less stopping and thinking about whether the legal relationships you're forming, or the third parties lurking in the background through networks of side agreements, are trustworthy. Now it's a rather simple user experience by design. See it? Click it. Perfectly rational, stimulus response. What's really interesting about the example is how click-to-contract creeps across contexts, from websites to apps to smart TVs and to most of the supposedly smart devices we'll soon see when the internet of things finally arrives. But the context, the legal relationships, the data exchange, the third parties involved are all dramatically different across those contexts, and yet the engineered stimulus response more or less remains the same.
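To make that "see it, click it" stimulus-response design concrete, here is a minimal sketch of the pattern, assuming a purely hypothetical consent gate; every name in it is invented for illustration, not drawn from any real platform's code.

```typescript
// A minimal, purely illustrative sketch of the "see it, click it" pattern.
// All names here (Context, ContractRecord, clickAgree) are hypothetical;
// the point is the shape of the interaction, not any real platform's API.

type Context = "website" | "app" | "smartTV" | "iotDevice";

interface ContractRecord {
  context: Context;
  termsUrl: string;       // the fine print is a link away; nothing invites reading it
  thirdParties: string[]; // the side agreements lurking behind the first party
  agreedAt: Date;
}

// The entire "negotiation" is one stimulus (a button) and one response (a click).
// No summary of terms, no choice points, no friction: perfectly rational to click,
// and perfectly predictable that nearly everyone will.
function clickAgree(context: Context, termsUrl: string, thirdParties: string[]): ContractRecord {
  return { context, termsUrl, thirdParties, agreedAt: new Date() };
}

// The same engineered stimulus-response, reused verbatim across very different contexts:
const contexts: Context[] = ["website", "app", "smartTV", "iotDevice"];
const contracts = contexts.map((c) =>
  clickAgree(c, "https://example.com/terms", ["analytics-co", "ad-network"]),
);
console.log(`${contracts.length} contracts formed with four identical clicks`);
```

Notice what does the work in this sketch is what's absent: the interface surfaces nothing that would interrupt the stimulus-response loop, no matter how different the underlying legal relationships are.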
Our online contracting regime is a surprising but nonetheless compelling example of how our legal rules, coupled with a specific technological architecture, can lead us to behave like simple stimulus-response machines. Perfectly rational, of course, but also perfectly predictable and perhaps even programmable. But it's far from the only example. Does this seem familiar to anyone in the room? Have you ever been in a social situation where you shouldn't check your phone but you just can't help yourself? There's nothing worse than when a family member, friend, or colleague sits down for a meal and they plop their smartphone on the table. Sherry Turkle's written about this extensively. When you think about social media, have you ever found yourself habitually using superficial expressions, like retweet or heart, instead of formulating thoughtful responses? We all have. Social media platforms are optimized to get users to communicate in a particular way. The platforms benefit, they profit, from a style of communication which in doublespeak is called engagement. The techno-social engineering on social media platforms isn't fully responsible for our fake news or our reality-jamming problems, but it plays a significant role. People aren't just lazy or stupid for believing fake news. That's a weird form of victim blaming, and it's a distraction. The platforms are designed to discourage critical thinking, deliberation, or just leaving the platform, all in favor of a cheap, shallow form of engagement.

Beyond how we think and how we relate to each other, some social media platforms have experimented with engineering human emotions. You may be familiar with this paper, which presented experimental evidence of massive-scale emotional contagion through social networks. We can talk about the details in Q and A, but suffice it to say a firestorm developed and then quickly died out. Most folks were concerned with process. Was there informed consent? Was there proper IRB oversight? Not me. I was concerned with the tool being tested. We dig into these and many other examples that reveal how powerful techno-social engineering is occurring everywhere. It reconfigures our lived-in environments, from the workplace to the home, right? And one extreme scenario that's worth considering is whether the smart programming of the future will require us to accept automatically the shots that algorithms call. It's just a simple extension of the click-to-contract script.

So back in March, the New York Times and The Guardian and other major newspapers broke the Cambridge Analytica story. I'm sure you're familiar with it, but just in case you're not: Cambridge Analytica is a political data firm hired by President Trump's 2016 election campaign and many others in other contexts. It gained access to private information from more than 50 million Facebook users. The firm offered tools that allegedly could identify the personalities of voters and influence their behavior, right? Many folks accused Cambridge Analytica of theft: they stole the data. That's not right. Cambridge Analytica collected data pursuant to contracts with Facebook, and Facebook essentially brokered the deals and enabled the data to flow, and then complained afterwards that data use restrictions weren't followed by the firm. It's not theft. There's a temptation to focus on the Cambridge Analytica debacle and react to it. Stop Cambridge Analytica, that's the bad actor.
Very typical, but incredibly short-sighted. After all, there are hundreds if not thousands of similarly situated companies leveraging the Facebook platform to collect data and use it to develop intelligence. Actionable intelligence. That might lead one to diagnose the problem as a Facebook problem, right? We should regulate Facebook, or we should somehow get Facebook to regulate itself better. And in fact, there's a strong push in this direction. And then others say, wait, we should just boycott Facebook, just delete Facebook. This movement may make sense for some, but not for most, at least not now, given that billions of people around the world use Facebook daily to socialize, maintain their connections, organize groups, and produce a whole lot of social value. We need better alternatives first. The possibility that Cambridge Analytica engineered beliefs and votes through its use of Facebook data may be scary, but bear in mind: Cambridge Analytica and Facebook are just symptomatic of our diseased techno-social environment. Facebook is just one of many big tech companies on the internet. We could talk about Google or Amazon or eBay or any number of other large companies, but of course we could also talk about thousands of smaller ones too, all pursuing the same basic objectives, marching us down the same path. Digital network technologies are re-engineering the planet, our social systems, and ourselves. We need to think differently, at a different scale. What we really should be talking about is the world we're building for ourselves, our children, and future generations, to examine the interconnected global, environmental, and intergenerational considerations, and at the same time relate those considerations to our everyday lives.

I think a better metaphor, though not a perfect one, is climate change. So here's one way to understand climate change. We want energy. Energy's an essential input into so many of our modern activities. We can build different supply systems for energy, but the one we've relied on for the past century is heavily dependent upon burning fossil fuels. It need not be. But fossil fuels have been relatively cheap, convenient, politically supported for past and current generations. Massive external costs from burning fossil fuels are not felt by past or current generations. They've led us to adopt certain lifestyles, to be conditioned to get used to things a certain way. The costs are largely pushed onto future generations. While blame can be placed on fossil fuel companies, we all bear some of the responsibility for climate change. But bear in mind, our heavy dependence on fossil fuel consumption has been economically rational. We all make countless individually and incrementally cost-benefit-justified decisions advantaged by cheap and convenient fossil fuel consumption. It's a massive, intergenerational, global tragedy of the commons. Dealing with climate change is politically and economically difficult. It's necessary, but it's difficult, because it requires significant structural changes and at the same time adjustments in how all of us live our lives.

The digital networked environment suffers from a similar tragedy. We want, among other things, to connect, communicate, interact, transact, and otherwise engage with each other nearly instantly and often without regard to geographic location. Digital network technology, like energy, is an essential input into many of our modern activities that generate incredible amounts of social value.
Every day, we each make various decisions about technology that seem, on their own terms, rational and unproblematic. We adopt technology and mindlessly bind ourselves to the terms and conditions offered. We follow scripts and paths set by platform designers. We carry, wear, and attach devices to ourselves and our children, maintaining a connection and increasing our dependence. We outsource all sorts of thinking because, heck, there's always an app for that. Each decision may be cost-benefit justified, and yet the net effect on who we are and the lives we're capable of leading may be unjustifiable. Nothing less than our humanity is at stake. We risk being engineered to behave like predictable and programmable people. But it's too easy to blame the companies that treat us like programmable objects through their hyper-personalized technologies attuned to our personal histories, present behaviors and feelings, and predicted futures. They absolutely bear some responsibility, but so do all of us. I think the activity watch story I opened with illustrates why. Climate change threatens our planet. Always-on techno-social engineering threatens our humanity.

But you're still probably wondering, what is this humanity he's speaking of? I have to admit, Evan and I struggled with this for years. Many people complain that technology is dehumanizing. It's difficult to know when a line's been crossed, when the techno-social engineering's gone too far, when something meaningful has been lost. Do we know when technology replaces or diminishes our humanity? Can we detect when it happens? To answer those questions, we'd have to know what humanity is, what makes us human in the first place. And that turns out to be an incredibly difficult question that's been debated for millennia without resolution. So it's not surprising that there's no reliable method for identifying and evaluating technological impacts on our humanity. As an attempt to move beyond the impasse, Evan and I developed an interdisciplinary method that draws from the humanities, social sciences, and computational sciences. And the idea is to examine and assess the relationships between humans and the technologies that we develop and use, building off of the Turing test.

So what we do is we try to radically repurpose the Turing test. Turing asked whether machines can think. His test examined the machine side of the line between humans and machines, and gave rise to the fields of artificial intelligence and machine learning. You might think about Watson as sort of an example of what's going on on that side of the line. We examine the human side of the line. We use machines as a baseline and ask when and how humans behave in a machine-like manner. So we're more interested in how the Watson environment affects humans within it than we are interested in Watson. Maybe tablets will replace waiters. That's an important thing, it matters. It's not what we care about in this book. We're more interested in how tablets shape human conversations and relationships at restaurants. So we begin with different types of intelligence. Again, some kinds of intelligence are important to being human, others are not. But all of these tests could plausibly be used to distinguish humans from machines, using machines as a baseline. But there's more to humanity than our intelligence. So we extend the method to different capabilities, the capacity to relate to each other as well as things like free will and autonomy.
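One toy way to picture the repurposed test: take a simple machine as the baseline, where the same stimulus always yields the same response, and score how close observed human behavior comes to that. The sketch below is just one plausible operationalization for illustration, not the book's formal method; the stimuli, responses, and numbers are all made up.

```typescript
// A toy "reverse Turing test": instead of asking whether a machine can pass as
// human, ask how machine-like a human's behavior has become. Machine-likeness
// here is operationalized as predictability: a simple machine is perfectly
// predictable, giving the same response to the same stimulus every time.

type Stimulus = string;
type Reaction = string;

// Observed human responses to repeated presentations of the same stimuli.
type Observations = Map<Stimulus, Reaction[]>;

// Score = average frequency of each stimulus's single most common response.
// 1.0 means fully deterministic, machine-like; lower means more varied behavior.
function machineLikeness(obs: Observations): number {
  let total = 0;
  for (const reactions of obs.values()) {
    const counts = new Map<Reaction, number>();
    for (const r of reactions) counts.set(r, (counts.get(r) ?? 0) + 1);
    const modal = Math.max(...Array.from(counts.values()));
    total += modal / reactions.length;
  }
  return total / obs.size;
}

// Hypothetical data: a click-to-contract prompt versus an open-ended question.
const obs: Observations = new Map([
  ["terms-of-service prompt", ["agree", "agree", "agree", "agree"]],      // 1.0
  ["what do you want for dinner?", ["pasta", "soup", "pasta", "tacos"]],  // 0.5
]);
console.log(machineLikeness(obs).toFixed(2)); // 0.75 across both stimuli
```

On this toy metric, the click-to-contract prompt scores as perfectly machine-like, which is exactly the kind of diminishment the tests are meant to surface.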
The tests are plausible tests to employ, but more importantly, the tests serve as conceptual tools that enable us to consider what makes us human and how our humanity is reflected in and by the technology we adopt and use. So we spend some time developing our idea about the difference between what it is to be human and what matters about being human. It's up on the slide right here. There's lots of things you could focus on for what it is to be human. What we focus on, what meaningfully distinguishes Homo sapiens, is our capability to imagine, conceptualize, and engineer our environment and ourselves. What matters about being human is how we exercise this power over generations to collectively produce, cultivate, and sustain shared normative conceptions of humanity. Our Turing approach leads us to focus on human capabilities that are essential to human flourishing. Not all of them; we don't know exactly how to prioritize all of them. We sort of defend a form of pluralism. But my point is that across generations and human civilizations, each generation inevitably answers these questions, right? By developing and sharing normative conceptions of what matters about being human. This is the humanity that we've suggested is potentially at risk of degradation and even destruction. And so a world where engineered determinism governs, right, is a world where fully predictable and programmable people perform rather than live their lives. Not everyone agrees with us on this, and there's a discussion of our differences with Peter Singer on this at the end of the book. But we think such a world would be tragic. If given the opportunity to build a world like that in a thought experiment, we would decline to build it. Peter Singer suggests that he would, so long as it delivered the happiness at the end of the day, right? To us, it's not that humans in this world would no longer be human, right? Human beings would still be Homo sapiens, but much of what matters about being human would be lost.

And so I think that's probably about my time. So we can turn to Q and A. Thank you very much. We do have a website; we're writing shorter essays to try to get some of the ideas out to a more accessible and broader audience. Please, spread the word, review the book, all that sort of thing. If my son were here, he would insist, and I didn't do this, but he insists that I should be playing Rage Against the Machine, the song called Wake Up, because he thinks that's how the book's message should be delivered. Instead of Re-Engineering Humanity it should be Wake Up, and I should be blasting Rage Against the Machine and we should be like punk rockers dancing. But I'll stop there.

I'd just also like to point out that we have books for sale, Brett's new book, in the back corner. Do we bring chairs down in front? Choose to tweet: you can be tweeting this discussion at @BKCHarvard, or you can opt out of that if you so choose. Thank you so much.

Yeah, Chatham House rules do not apply. Tweet, get the word out, however you want. Get the word out, yeah, that would be good.

Right, so we're gonna open this up to a Q and A, but first I'm gonna use my privilege as a discussant to ask a few questions. First, on the topic of Twitter, I couldn't help but notice that you have a Twitter account. So one of the things that I was curious about was, how has your use of many of these platforms changed since you began researching and writing the book?

That's a great question. So my Facebook use has pretty much disappeared.
I started on Facebook because I teach internet law and I felt I can't really teach about something if I don't have some exposure to it. So I joined Facebook, and initially, the first few years, like most people when they first join Facebook, I was more active. And then in the last six years, my activity has plummeted. So once a month, once every two months, I hop on, and largely it's about, I may be speaking here, I may be speaking there, but I don't otherwise use it for much. Twitter is very similar. I use it as a marketing tool, but don't really use it as a communication or engagement tool. I'm sort of more of an in-person conversationalist, or by email, those kinds of things. I still stick to email quite a bit.

I'll tell you another interesting story, if you want, about something that changed dramatically. So my kids got me to play this game. Another thing that sort of changed in the course of writing, and I hadn't really realized it until I saw it being done to myself, was there's a game called Dungeon Boss, a simple iPhone game that my kids had been playing, and they were like, dad, you'll love it. You'll love it. There's strategy, there's guys, you build up your characters, all this stuff. And so I downloaded it and was playing it with them. And then they stopped playing it, and I kept playing it and I kept playing it and I kept playing it. After a while, after maybe, I don't know, eight months of playing it, I decided to stop, because I heard my kids talking in the other room, and my wife had said something like, where's dad? And they're like, I don't know, he's probably playing Dungeon Boss. And I said, oh my gosh, I'm addicted. I basically was addicted to playing this simple, time-consuming game. Whenever there was a free moment, I'd just hop on and play. And so I had the realization, this was midstream writing the book too, where I was just like, oh my gosh, I'm doing exactly what I'm writing about. I gotta stop. And so.

So that actually leads to my second question, which is about what you refer to as the freedom to opt out, or to be off. And it seems to me, I understand why that solution is desirable or promising. But I also was thinking about who gets to have the freedom to be off. And are there cases in which that is in itself a privilege, because there are people whose livelihoods or well-being depend on using these technologies? So on the flip side of that, I was curious whether you could say a little bit about whether there are ways that some of these concerns could be curbed on the design side or from a regulatory standpoint. You talk a lot about contract law. But I'm curious, could you imagine, for example, requiring that Google become something that would function more like a national resource, not be privatized, not be part of the free market to the extent that you believe such a thing exists, and instead be treated as more of a public service?

Yeah, so I do think, when we're thinking about alternatives, right, the point about deleting Facebook is not really a viable solution until you have an alternative. At the end of the book, we talk a bit about thinking differently about institutions. And one of the examples: a number of years ago, I wrote a paper for the BBC about how the BBC, in many ways, was a form of infrastructure, pursuant to the theory in that first book.
And in the course of writing that paper, at the very end of it, I made a suggestion to them like, hey, have you ever thought about extending some of your existing resources? Because they're governed in a very particular way; they're highly trusted; 95% of the population in the UK tunes in regularly. And there's a very, very high degree of trust in both their content, but also in terms of the way they run their infrastructure. I said, hey, have you ever thought about creating an alternative social media platform, where you could plausibly say, we won't surveil you, we won't sell anything, we won't sell you to anyone else, because we're not in the business of making money that way and we have a license fee? So we talk about this at the end of the book, thinking about what kinds of alternative institutions you could create, that governments could create, that we could have more trust in. And so I think there needs to be lots of consideration of alternative models. It may not be the BBC, but it's something like the BBC, some other model that's governed in a way that's not reliant on either profit or on returning to the government for funding on a regular basis. So that's one idea.

There's a bunch of ideas at the end of the book, too, about how to engineer space where there can be freedom to be off, right? So think about net neutrality, the open internet. What net neutrality is about, I would say, and I have argued this for years, is really the governance of intelligence in infrastructure: how smart should the internet communications infrastructure be, right? There's a certain kind of intelligence when you know who's doing what with the packets you're transporting. And the question is, how can you use that intelligence, and in what particular ways? So if you understand net neutrality that way, and I think it makes a lot of sense to think about it that way, in the book we take the position that there's a variety of other infrastructural systems, whether it's smart electricity grids or smart transportation or smart cities, where we're gonna have to think about engineering governance into these systems that regulates or governs the use of intelligence. It's not just intelligence, it's intelligence-enabled control, right? Because you want freedom from that kind of control built into your systems. So that's one of the regulatory solutions that we talk about at length in the book.

The reason I gave you the humanity's-techno-social-dilemma-is-analogous-to-climate-change bit in this very short talk, and I had to choose what to say, is to make it clear that the problems we're facing are not one single problem. It's not just engineered addiction or surveillance capitalism or privacy; it's a whole host of connected micro-, meso-, and macro-level problems. It's a wicked problem, just like climate change is.
There's not one single solution. You need solutions, some of which are regulatory and operate at the macro level, and some of which are very micro: individuals making decisions for themselves and their families, and communities and schools needing to be more engaged, which is more bottom-up. So to actually deal with, and think about, the world we're building in the way that I'm suggesting, you have to see the kinds of solutions as being interconnected with one another. Yep.

I have a lot of other questions, including what you mean by both humanity and intelligence, but I will hold off on those so that people in the room can pose questions as well. When you ask a question, I ask that you keep it brief and make sure there's a question mark at the end of your question. Yes, would you like to begin?

Hi, thanks. In the beginning you talked about predicting mood, and as a note, there's no predicting mood; it's detecting, post hoc, what people self-report their mood to be. So I'm wondering, are you taking these claims at face value about how much we can engineer determinism, or do you allow for these claims to maybe be doing work without actually being true?

That's a good question, about the claims being made in the literature about what's being done. So it's like with Cambridge Analytica: the reason I was very careful to say they allegedly could identify personalities and use it to shape outcomes. Whether or not they were effective at it. Same thing with the emotional contagion experiment, right? Allegedly, it was massive-scale emotional contagion that was statistically significant, but then a whole bunch of debate ensued about whether they actually showed it, whether they were actually successful in doing it. The point we end up emphasizing, and I may not have been as careful as I should have been when I mentioned mood in particular, is to be as careful as we can: one thing is the tool being tested, what you're trying to accomplish, right? And it will increasingly get better with more intelligence. Maybe there are some things that we won't be able to use. There are certain things you won't be able to optimize for. And mood might be one kind of intelligence that's unattainable, no matter how hard we try. But there are a variety of other forms of similar kinds of intelligence and tools that we can use to shape people's behavior and make predictions about how they'll behave in response to certain stimuli. So mood-driven stimuli might not turn out to be the stimuli that are effective in getting the desired homogeneous responses that programmers, and techno-social engineers or nudgers or whatever you wanna call them, might be looking for. In writing the book, we tried to be very careful about, allegedly, this is what they're trying to do. But then that's the difficulty of being an honest futurist, because I don't consider myself a futurist at all, but we're trying to assess where we are and where we're going as honestly as we can. And so you're constantly dealing with exactly what you put your finger on, which is: here's the state of the art now, here's the direction people are heading or trying to head, and then how do we evaluate those steps? I hope that's responsive.
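To spell out the detection-versus-prediction distinction the questioner raised, here is a small hypothetical sketch; the fields, the stand-in model, and the evaluation are all invented for illustration, not any vendor's actual pipeline.

```typescript
// Detection vs. prediction of mood, sketched. "Detection" associates sensor
// features with moods people later self-report; a *prediction* claim is
// stronger and has to be scored against data from the future. Everything
// here is a hypothetical stand-in, not a real model.

interface MoodRecord {
  timestamp: number;
  sensorFeatures: number[];          // e.g., activity-like signals (hypothetical)
  selfReportedMood?: "good" | "bad"; // only known post hoc, if reported at all
}

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

// Fit a trivial threshold "detector" from labeled history. Assumes the history
// contains self-reports of both moods; the model is deliberately simplistic,
// because the point is the data dependency, not the modeling.
function fitDetector(history: MoodRecord[]): (features: number[]) => "good" | "bad" {
  const avgFor = (mood: "good" | "bad") =>
    mean(history.filter((r) => r.selfReportedMood === mood).map((r) => mean(r.sensorFeatures)));
  const goodAvg = avgFor("good");
  const badAvg = avgFor("bad");
  const threshold = (goodAvg + badAvg) / 2;
  return (features) =>
    (mean(features) >= threshold) === (goodAvg >= badAvg) ? "good" : "bad";
}

// The honest test of a *prediction* claim: accuracy on records gathered after
// fitting, scored against what people later self-reported.
function accuracyOnFuture(
  detector: (f: number[]) => "good" | "bad",
  future: MoodRecord[],
): number {
  const scored = future.filter((r) => r.selfReportedMood !== undefined);
  const hits = scored.filter((r) => detector(r.sensorFeatures) === r.selfReportedMood).length;
  return hits / scored.length;
}
```

Claims in this space typically report only the first step; the questioner's point is that without the second, "prediction" is just post hoc detection relabeled.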
Thank you so much for your talk. This is really interesting. I look forward to reading the book. I wanna press you a little bit more on your analogy with climate change, also given the fact that one of your prior books talks about the commons. I'm just thinking about what the environmental movement did with Eleanor Ostrom's work on the commons, and I'm wondering how helpful the environmental movement is to look at as a sort of analogous framework for tackling some of the problems that you've outlined today?

Another great question. I think it's very helpful. I mean, I think it's helpful in at least two different ways, right? Maybe three. So I use the analogy mainly to help you see that it's, as I said a second ago, a multi-leveled, wicked problem that has this sort of tragic component, where it seems like a lot of individually rational decisions end up contributing to a systemic failure or long-term tragedy. I think the connection to be made with the commons is twofold, right? One is the political movement piece, which you suggested. We do, at the very end of the book, also suggest thinking about environmental humanism, because we're constantly reconstructing both the built environment and ourselves. So thinking about who we are as human beings requires us to think about the environment we're building. Right, if you wanna talk about humanity, you don't just talk about me as an individual, or you as an individual; you have to talk about all of us as individuals in a built environment that affords us certain things and doesn't afford us other things. And so, if I'm gonna sneak in a response to the question you didn't ask: humanity, again, is what matters about being human. It's the commitments generations make through the built environment. And by built environment, I don't just mean technical things; I also mean social institutions, right? So our constitution, our human rights conventions are also imagined, built things we've collectively produced and sustained across generations, to sustain certain things we think matter, are fundamental, about being human, right? Those are part of the built environment. They're part of what we would think of as our humanity that we've collectively produced, and it's at risk as a shared resource that can be depleted or degraded over time.

The other one, and I'll just be quick because there are other questions, is that the Ostrom commons work, which we extend with Knowledge Commons as well, and I'm writing something on this now, provides a systematic approach for doing social science to study how institutional governance works in different contexts, so we can learn. So we should be studying and trying to understand how to build trusted governance systems and commons, in the environmental context, but we also have lots of knowledge commons, lots of commons governance and management of lots of shared resources. And now more than ever, we need to figure out how those work, which requires sustained, systematic effort by social scientists trying to figure out how they work, but also how they don't work, so that we don't just assume commons always work. It's certainly not a panacea.

Hi, thanks very much for your talk.
I was wondering about the contracts. Can you talk a little bit about enforceability? Because this whole world of contracts has radically changed, but I'm not sure the rubber's ever really hit the road in terms of looking at what this actually looks like. I'm wondering if courts have pushed back at all, and also, those contracts are so all over the place and so complicated, and include other contracts that are online and can change over time. If you actually try and enforce them, how does that really work? Have there been real tests of that?

Okay, yeah, so I'll just be quick on that. For the most part, with the click-to-contract interface, if you click an I-agree button, and as long as there's notice that there are terms, then courts have found them enforceable. At least a contract has been formed, and then whether it's enforceable, certain terms may be unenforceable because they're unconscionable, which is sort of a legal term for substantively unfair or something. But fighting it that far is not where you really wanna end up going, right? You don't necessarily wanna go all the way down the road where you're trying to enforce in court. For the most part, clicking on an agree button with notice of terms is sufficient, courts have held repeatedly, to form a contract.

But do people ever push back when you start actually enforcing them? I mean, what I'm wondering is, you have this regime of this sort of seamless contract world, and I'm wondering if they kind of all conflict with each other and don't overlap nicely, and how that would play out if you could really get down to that eventually, basically, or if there's-

You know, if you take a look at a bunch of the contracts, they don't conflict all that much, right? Many of the contracts have evolved to have very similar sets of terms across different service providers. So it's not like there's lots of conflict among the different contracts. The point is that we as users or customers don't often have a reason to sue for breach, and we often don't get sued on the basis of those contracts. We more or less comply with the terms, they more or less comply with the terms, and if you get upset about information flowing to third parties in the background, there are no third-party beneficiary rights on your part to sue on those other contracts, and so there's not a whole lot of purchase for customers to fight back with regard to those contracts. And there have been some recent courts that have questioned whether these contracts should be recognized as contracts, but it's not a whole lot. As long as there's notice that there are terms, you're more or less agreeing to be bound.

Thank you. Stephen Kaye, visiting from the other Cambridge. I'd like to home in on the question of humanity and what it means to be human. You brought up that it's both a normative and a descriptive notion, and of course, from a normative point of view, it's hugely contested. As you know, what it means to be human at various times has meant being a man, having property, being of a certain bloodline, and so forth, so it's hotly contested. At the same time, as a descriptive concept, it's of course enormously diverse.
Different people have very different ideas of what it means to be human, of what humanity is, and so forth. So taking this contestation and this diversity, I have a worry about your worry. Your worry is that these new technologies are threatening what it means to be human, but my worry is that by defining a concept of humanity that's under threat, you are homogenizing something that is hotly contested and very diverse. Now, I know you allude to having thought about this and the plurality of these views, so I'd be very interested to hear a bit more about how you wrestle with this.

Okay, good. On the descriptive bit, the first part, what it means to be human: there's a lot of diversity, and we don't make any claims about that question. We're mainly focused on saying that humanity, the thing that's at risk, the thing that we're grappling with throughout the book, that's being engineered as we're engineering the world around us, as we're building the built world, is the normative part, right? Our conception of what matters about being human, as reflected in, embedded in, our institutions, our infrastructures, the technologies we're building and using. What we talk about in terms of pluralism is that we're not committed to saying which of the various human capabilities essential to human flourishing you might rank more highly than others, right? Some cultures would emphasize free will and autonomy as the highest-ranking among human capabilities, like the most basic one we all need, and then you build out from there. Other cultures might emphasize collectivity, right? The ability to relate to each other, to socially cooperate and connect with each other. So sociality may be the capability that another culture would emphasize as being first and preeminent and most important. So you can rank them differently, or you could say we're not even gonna rank them, we're gonna weight them all equally. We don't end up making any grand claims about how to approach different capabilities. What we're trying to do is identify the capabilities that simple machines don't have, right? And so we use these Turing tests to say, okay, can we identify when there's a diminishment in a capability that matters? And then we focus on free will, and we focus on moral decision-making and certain kinds of thinking, like common sense. But we don't take a position on which of those matter more. We leave that to be politically and culturally debated and contested.

The one position I will say we do take is that we dispute the welfarist-hedonist position, that that is the uber value. We take a human-flourishing, capabilities orientation and suggest that, in terms of world building and thinking about techno-social engineering, it's in direct conflict with the welfarist-hedonist path that we're currently on. So our claim is in part that what's driving us down the techno-social path we're on right now is optimization for efficiency, to produce cheap bliss, right? Maximized happiness at low cost. What's the way to optimize, to get billions and billions of people really, really happy at lowest cost? Shape their preferences so they don't want very much, and barely surpass them.
And that may be an attractive world if you wanna maximize the amount of happiness you can engineer, but it doesn't look like a very good world to us from the perspective of human flourishing.

Hi, Cantadeal, also visiting from the other Cambridge. I'd like to ask you again about the analogy with climate change. The problem with climate change is that it is a global problem that is going to most affect those who had less of a hand in causing it. And I was wondering to what extent that also applies in this analogy, with technology shaping different parts of humanity differently, and what the role of that technology is in shaping humanity for those who do not and/or cannot use it.

Man, I love that question. We should write that up; that should be a follow-up. I don't have a good answer other than to say yes, I think you're right. There are distributional effects: the impact of techno-social engineering and the humanity's techno-social dilemma problem will be felt differently across groups, across different countries, based on wealth, based on income. And there's absolutely the prospect of those who are the least well off, or the worst off, being most affected by the dilemma. We don't spell that out fully in the book, but I think you're right: that exists both with climate change and when you try to map the analogy over. I think it does fit. I think you're right that there will be that impact. I don't know exactly what the impact is, so I can't say that I've thought it through, but I suspect it would be a fruitful thing to work out.

So how would you slot in this technological engineering of humanity against other things throughout history that have had dehumanizing effects, dehumanizing institutions? I think of something like the modern criminal justice system, which has massive effects on certain communities, but also institutions like slavery and prisons and sharecropping and all sorts of things that have similar, it seems to me, effects on humanity. How do you fit this into the same spectrum?

That's another good question. We make the analogy to slavery at one point in the book. Slavery itself involved a built environment, right? It wasn't just about individuals being slaves; it was about the entire human-constructed environment that perpetuated slavery. In a sense, it wasn't depriving human beings of free will, the capacity to reflect upon and determine their own beliefs and preferences, which is how we define and talk about free will in the book, but rather constraining the agency or autonomy to exercise their will in the world, right? So the slavery environment is a dramatic constraint on agency, rather than a shaping or restriction of free will.
So you can think of it as a slightly different form of techno-social engineering, one that operates on a particular capability: my ability in the world to act upon my intentions, to do what I want, to have my free will be externally reflected in how I get to act and what I get to do. But the part of the dilemma we're facing, that we're trying to spell out in the book, is that not only can your opportunities in life be restricted and shaped and scripted, but also how you even think, how you formulate your beliefs, what it is you believe, what it is you want, is also engineered, or at least subject to pressure to be engineered. And that is one of the underlying dilemmas that we focus a lot on. It's not that that has never existed either, right? Propaganda, and other means by which people's wills have been shaped; there's a whole long history there as well. And we connect, as best we can, to prior examples. We try to emphasize in the book that what's different in our modern instantiation of techno-social engineering is its hyper-personalization, right? It's powerful because it's hyper-personalized. The scale and scope in terms of how it operates on each individual in a very hyper-personalized way is a bit different than the institutions and built infrastructures that sustained slavery, or the ones that sustained the sort of thought-police idea, or other things. Yeah.

So I wanted to ask you about trust as a resource, and you gave the example of the BBC, that maybe that's a trustworthy institution and so has some resource to draw on to create trust. And I was thinking, I feel like we already have so much trust. I put my information in so many places all of the time and check boxes all of the time, and I am skeptical of the system, I think about this, but then in my kind of lived experience I'm extremely trusting of things. And there are sort of analogies: if I walked up to a box in the street that said, put your passwords in here, I wouldn't do that, but if it's on my phone and it's something I found and I'm interested, I sort of trust it. So I think something I'm curious about is, what is your relationship to trust as a resource, or mistrust as a new resource? And in terms of the behavior component of all of this, if we were to create the BBC communication platform, I might not even want to use that because none of my friends are using it, or whatever, and so the trust component might be much less important than some other things about how I engage with technology. So I'm curious about how you see trust.

That's a great question. So there's different ways to think about trust, and I'm almost tempted to, so in the Knowledge Commons series, we're working on something called Governing Privacy Commons, and there's a big piece of that that's focused on trust. I'm almost tempted to go down that route, but I'm gonna force myself not to. I think of trust, at least in the context of this techno-social engineering discussion, as related to sociality, our ability to relate to each other; relational capital is just one way to think about it.
Trust is relational: you trust someone else or something else, or you have a relationship where you find that you can't trust the other person, right? And if it's a technology or a tool, we should always remember, anytime someone talks about AI or a technology or a tool, it's always a tool or technology that's built, owned, and managed by other humans. There are always humans on the other side, even if you're not thinking about them, right? So if you think you trust the technology, it means you're trusting the humans on the other side of the technology, right? So trust is relational, part of our ability to relate to each other, which in our view is one of the core capabilities essential to being human that we sustain over time, right? That can be disrupted, that can be engineered, right? You can engineer fake trust. You can engineer trust in relationships that aren't actually worthy of trust, right? And much of what gets us to trust the technologies we interact with is this engineered sense that I'm engaging with this one website. It seems trustworthy, I trust the brand, right? Or it's worked for me so far, right? It seems so simple, right? And yet you don't really know about the complex network of other relationships that your relationship with that first website is creating, right? Because that website has a network of side agreements and relationships with other parties. But you don't know who those other parties are, right? Your trust in that first party has led you to have trust in those other parties. It's been manufactured and engineered. Whether it's trust that is worthy of existence, or whether it's actually a trustworthy relationship, is often debatable, right? And that's one of the dilemmas that comes up: our ability to relate to each other is being undermined by techno-social systems engineered to get us to behave in a particular way, for profit, to collect as much data as possible. There's a whole bunch of drivers: efficiency, convenience, quick satiation, right? And the systems built to engineer those behaviors have a side effect: they manufacture a lot of trust that's illusory, or not worthy of trust, right? There are relationships you have with lots of other entities that aren't trustworthy. You may not even know that they exist, but by your actions you are acting as though you trust them. As you just put it, right? I trust lots of things. I have lots of trust, right?

It's interesting: if I gave you an envelope and said, just sign the outside of the envelope, we're gonna enter a binding contract, and you said, well, what's inside the envelope? And I said, all the terms, including how much money you're gonna pay me. Here, sign the outside of the envelope. Just sign it. You wouldn't sign it. But you're willing to click a button that says, I agree to enter into an agreement. Why is clicking a button that says I agree to enter into an agreement any different from signing the outside of an envelope with a contract on the inside? Is there a substantive difference between those two things that leads you to believe one's more trustworthy than the other? Why is your behavior different? Why do we all click the button, but we wouldn't sign the outside of the envelope?
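The fan-out of that one click through networks of side agreements can be sketched as a small graph walk; the party names and the graph below are invented for illustration.

```typescript
// A sketch of manufactured, transitive trust: one visible click on one party
// quietly creates relationships with parties you never see. The names and the
// graph here are hypothetical.

interface Party {
  name: string;
  sideAgreements: Party[]; // each party's own network of background deals
}

// Collect every party you are now effectively "trusting" after one I-agree click.
function effectiveTrustSet(firstParty: Party): string[] {
  const seen = new Set<string>();
  const walk = (p: Party) => {
    if (seen.has(p.name)) return;
    seen.add(p.name);
    p.sideAgreements.forEach(walk);
  };
  walk(firstParty);
  return [...seen];
}

const adNetwork: Party = { name: "ad-network", sideAgreements: [] };
const dataBroker: Party = { name: "data-broker", sideAgreements: [adNetwork] };
const website: Party = { name: "the one website you see", sideAgreements: [dataBroker] };

console.log(effectiveTrustSet(website));
// ["the one website you see", "data-broker", "ad-network"]
// You clicked once, on one party; the trust you extended fans out silently.
```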
We're running up against the limits of our time. I think we have maybe one or two more questions we can fit in.

Hi. Since we have a couple of friends from Cambridge here, I'm particularly interested in your thoughts on European views of privacy, and also the new Magna Carta for the internet that's been developed, because it seems to me there are people out there looking at the red flags and saying, we need to step in, we need to intervene, we need to do something about this in a political way.

Yep. So I know some of the folks from the Nexa Center in Torino who worked on that charter. The last paragraph of the book talks about the GDPR as a potential, not game changer, but a potential wedge that can lead to change, because the GDPR not only empowers European citizens to implement their own vision of privacy and have their human dignity respected through actual informed consent, it also lets them withdraw consent. That's one of the interesting pieces people haven't quite figured out in practice: as a European citizen, I can consent to giving up some information and later say, you know what, I withdraw my consent to your further processing of that information. That gives me some leverage. It's very early to see how the GDPR will work out and what it will mean over here in the US. But it is a signal, as I think you were suggesting, that people are starting to think about human dignity, about basic human rights conceptions of what matters about being human, and about how those are at risk. The one concern I often have with it is that it depends on people being active and exercising their rights. That means you have to have the preferences, the beliefs, the will that lead you to act, and people are under a lot of pressure not to act, because it takes time and effort. You've got to slow down and think about when you want to withdraw your consent, or say, no, I don't consent to this. Just giving someone the right to withdraw doesn't mean they'll actually exercise it. So we'll see how that plays out. But it is important.

Are there any final questions?

Thank you. I'm interested in what your baseline is for assessing whether we are being dehumanized. We now have a generation of digital natives, but the generations before grew up on television, consuming six to eight hours per day, basically spending all of their free time sitting in front of a screen consuming messages that were highly normative about who the goodies and baddies are, what a family looks like, and so forth. So I'm interested not in whether you think this technology is necessarily bad or worse, but in whether you have a conception of a baseline from which to judge whether this technology is making us worse off than that generation, say. Or whether you're looking at different pieces of this technology and saying, these could be empowering and humanizing, those could be dehumanizing, and if we tweak it like this, we get a better outcome, and so forth.

Yeah, that's a good question.
It's hard to answer, in part because I don't have an ideal conception of what a human is, or of what matters about being human, Frischmann's version of the baseline, such that when we deviate from it, you're being dehumanized. I don't have that, and that's partly the pluralist take I mentioned before. But we do have an extensive discussion of the shift from mass media to the internet to digital networked technologies, what some of the differences are, and how they operate on basic human capabilities: our ability to engage in common sense, the way certain thinking gets outsourced. When we outsource how we relate to each other, certain cues get outsourced too, so that we're not thinking. You've got autofill text in an email, tailored to the different recipients who will receive it, expressing different moods on your behalf, so you don't have to think about how you relate to the person on the other side. Those things operate on capabilities that matter. Whether they dehumanize, whether they should be banned because they're dehumanizing, we don't quite go that far. My co-author has gone that far with facial recognition technology; he's been writing about that in particular because it's something he would put at one extreme. But a lot of the technology we're talking about can be humanizing or dehumanizing depending on how it's deployed, used, and designed, and how it works in aggregation with other technologies. Again, this is where I think climate change is a useful analogy. If you're combating climate change, burning coal contributes to the problem, but that doesn't mean you have to ban burning coal altogether to deal with it. So we don't take the position that we can evaluate technology by technology and say, this one is dehumanizing, ban it, that one is dehumanizing, ban it. We're saying these technologies operate on certain capabilities that matter, and we want a way to detect when our use of and dependence on this technology, and others like it, is leading us to lose something that matters. The Turing test framework is potentially a useful one for identifying that.

Oh, please join me in thanking Professor Frischmann. Thank you. Thanks.