Good evening, everyone. We're going to start our program. I'm Christine Rosen. I'm a Future Tense fellow here, and for those of you who don't know about Future Tense, we're a partnership between the New America Foundation, Arizona State University, and Slate magazine. The partnership explores emerging technologies and their transformative effects on society and on public policy. One of the things we do at Future Tense is events like this, where we invite authors of books, or we screen movies, and then open it up to discussion, both here and in our offices in New York City. You can follow the conversation tonight with the hashtag #AlgorithmsWant. You can also follow Future Tense on Twitter at @FutureTenseNow. A couple of housekeeping items, and about this I am absolutely, diabolically serious: silence your cell phones. I will call you out if I hear them ringing. During the question and answer, please wait until the microphone is given to you so that everyone watching on the live stream can follow your question. And please ask a question. I know we all have an inner soliloquy that's just waiting to come out, but my nickname at these events is "the hammer," because I will bring it down if you don't ask a question. I'll also ask you to keep your questions succinct so that Ed has time to answer them. Stick around afterwards for a reception. Ed is going to sign books, and you can purchase a copy of the book. We have some wine and beer and snacks, if my children haven't eaten them all yet. So please stick around afterwards. I'm very, very pleased to invite Ed Finn to the stage. He is the author of this excellent book, What Algorithms Want. He's also the academic director of Future Tense and the director of the Center for Science and the Imagination at Arizona State University. We're going to sit down and start the conversation, and we'll leave plenty of time for your questions afterwards, so enjoy.
Okay, I wanted to start with something that happened in my neighborhood recently. I was walking down the street with my dog, who's a puppy with no manners, and we see this little thing cruising along, and it's one of these new robot delivery drones. Have you seen these here in DC? You might have seen them. They look like a cooler on wheels with a little flag. There's usually an awkward human lurking a few paces back in case something goes terribly wrong. The dog wanted to attack it, sort of barking; it had this nervous reaction. And I thought, that's how I feel too when I see that. So when we think about the future and fear, it's often robots, right? In your book, however, you get us thinking much deeper. What makes those robots run? Who's programming those robots? At the heart of it, of course, are algorithms. One of the great things about Ed's book is his discussion of metaphor. So that's the first question I have for you. Talk about algorithms as a cathedral, as oracles, even as the Star Trek computer. Why do you think metaphors are important for helping us understand algorithms? And will algorithms' power eventually render the metaphors we have obsolete? Well, that's an easy question. So, yeah. Well, let me start with the idea of the metaphor and the different stories we tell about computers. Robots are something we talk about a lot because they're an easier metaphor, right? That's something we fall into. Think about Metropolis, Terminator, all of these myths, some of them old, some of them new, about what computers are like. And of course, we think they're like us, you know? We imagine they're like us, and it's most straightforward to imagine them as having bodies, because then we can tell so much more about them: we can read their bodies and interpret their body language.
And so, I don't know, I haven't seen one of these delivery bots, but I'm imagining they're like the little thing from Star Wars running around the Death Star. I hope that's what it's like. And even that, which is not a humanoid robot, is something that we've started to attach metaphorical thinking to. Because we think it's like a pet, right? And it makes little noises. We create some kind of empathetic bond with it. This is just what we do as humans. So what I talk about in the book is that there's this layer of physical metaphors that are the first-order things we talk about when we talk about computation. But actually, it's all of these deeper metaphors that we need to surface and think about more. And so language becomes really important. We use metaphors to deal with the unknown. When we're confronted with some strange new thing, we have to fall back on our words. We have to fall back on our analogies. We're always trying to extrapolate, or find some kind of relationship between this odd new thing and the things we already know about. So, yeah, I talk about the cathedral as one of these metaphors. And you actually see people talking a lot about software as a kind of cathedral. My favorite line about this is from a computer science conference in like 1989, I think, where somebody said: yes, software is like cathedrals. First we build it, and then we pray. And I like that a lot because it's actually quite deep. There's a superficial meaning to it, but what we do so often is we build these complex machines, and then, while we're building them sometimes, and certainly after we build them, we start to forget about them as messy combinations of parts, things that we have to make lots of choices about, that are trade-offs, that are imperfect adaptations to the world.
And we start to think about them as these perfect unitary wholes, these oracles or these magnificent specimens. And because we're so obsessed with metaphors, we start to think of them as having some kind of sentience or independent life. One thing I've noticed, and I know we're both parents, is how often parents, especially parents of young children, are more patient with their smartphones than they are with their toddlers, right? This tells you something about the relationship we have with algorithms. So the sort of language we use becomes really important. And the metaphors become deterministic, in a way. It was only when somebody came up with the term "black hole" that we started to come up with a whole set of ideas about what a black hole is and how it works in physics. There's a limit here. I suppose there are a few people on this earth who can really think in the pure language of mathematics and can explore these ideas as native speakers of that language. But for the rest of us, we're dealing with a set of translations. And I think we're in a similar space right now with computation and all of the things that are happening in the world of computation. We have a lot of translations back into human language, analogies and metaphors for what we think is going on. In a lot of ways, what my book is about is trying to call that out and recognize the places where that's diverging. You mentioned oracles. And I think there's also, at least I believe, a bit of a divergence between how algorithms are promoted and marketed by the tech companies that use them, or sell them to us, I should say. So you have this sort of, oh, it's the Oracle of Delphi, right? But then you have critics, and I would count myself among them, who think, actually, it's more like a Magic 8 Ball. You know, they go wrong.
I remember Microsoft's Tay, which is one of the examples in the book, which went berserk and started spouting racist, horrible things when its machine intelligence was supposed to learn from us, and it clearly learned from us. I think one of the things your book does really well is remind us, which we shouldn't need reminding of, but we do, that these algorithms are now measuring things that used to be immeasurable, right? We've always had a network of friends, but now it's quantified on Facebook in such a way that not only does it accrue massive profits to Facebook, and they own that information, but we send a different signal to our social network depending on whether we're on Facebook or not, and our value as friends is measured in this new way thanks to these algorithms. But as we know, humans are involved, right? Facebook's news feed used to be edited by human beings, and then there was this controversy about whether or not it was politically discriminating, largely against conservative news. So Facebook said, well, we've got the solution to this: we'll just put an algorithm on the problem, right? It's neutral. And what happened is that the algorithm couldn't distinguish between real and fake news. I thought that was an excellent example of the challenges, the human challenges, embedded in these things that we want to be neutral, but aren't. So that brings me to the humanities part of your argument. Why are the humanities still crucial in an algorithm-driven world? So there are so many things I want to say here, and I can't say them all. Yeah, that's right. So first, I want to start with Facebook. And I guess it all ties together.
So when you think about the role of the humanities, one observation I've had over the past decade is that if you think about how we define ourselves, in large part it's by communicating with other human beings and measuring ourselves, comparing ourselves to the world. That's by reading and writing. And the ways that we're doing all of those things, the ways that we are reading and writing, are fundamentally different now. We're using computers to do a lot of our thinking for us. So many of our questions about the world now start with a Google search, right? A huge percentage, if you think about it. Think about what kind of a history of your intellectual life could be traced just by looking at the things you've searched for online. It's there, and actually you can all go find it, because Google lets you look up your search history, including every little snippet of audio that Google has recorded of you, thinking that you were asking it some kind of question. So go explore that sometime, maybe with a glass of wine. What it means to be human is changing, and our systems are changing all these relationships. I'm fascinated by the way the term "friend" is changing now because of things like Facebook. I remember a time, as I think many of you do, when remembering your friend's birthday was a significant gesture. You had to actually remember it, you had to write it down, and then you had to remember on the day to say something. Now hundreds of your closest friends are going to contact you on your birthday, if you allow Facebook to know that information about you, because Facebook will remind everybody that it's your birthday. And so then you get into this weird sort of pantomime of celebrating. You get all of these messages, and what are you going to say? Happy birthday? Everybody's already said that.
And then the person whose birthday it is has to somehow respond to everybody. I'm not saying that's necessarily bad. It's nice to be reminded that this is happening. It's a nice milestone, but it's very different from what it used to be. Your friends, and the way that your friends are celebrating this occasion, all of that has changed. The topography of this cultural thing is very different now. So that's a little microcosm of the kinds of observations the humanities can bring to these questions. One of the arguments I make in the book is that we need to learn how to read the system. And I don't necessarily mean that you need to learn how to read code, or that you need to be able to pry open this black box and see how all of the gears are connected together. I mean that we need to start to see how these choices are being made for us. To see the menu of options that you're presented with and think, well, what's not on this menu? Or what is this menu making seem simple that might not actually be so simple? Because the metaphor is so enticing, of course: we all want a giant button we can push that says "make my life better," right? And when you think about it, that's often the central marketing message of a lot of algorithms: just download this app and you'll be like one of those beautiful people in the commercials. The Pinterest-perfect life, right? Well, that segues very nicely into one of the things I want to push you on about the book. You do a wonderful job talking about metaphors, the humanities, efficiency, why we find these things so appealing. But one thing that pops up in every chapter in some ways, where I thought you could be tougher, is this issue of power. People have called for something like an FDA for algorithms, you know, regulation of these companies that are creating these systems that are embedded in our lives.
And if you just look at the clear question of market domination by Google and Facebook, and you talk about Netflix, the lack of any coherent and consistent ethical safeguards here is an issue, I think, for a lot of people who study tech. So dig a little deeper for us and talk about this power question when it comes to not just the algorithms themselves, but the corporations who keep them in a black box, locked up under proprietary rule. I think this is going to be one of the defining questions of the next 10 years. It's a huge issue, and I agree that it's a real problem, and I think people are starting to come to grips with how serious it is in a lot of different arenas. To be in a charitable frame of mind, a lot of the people who created some of these incredibly powerful systems didn't realize how much power they were really accruing. I don't think they thought of it beyond technical success. Even a system like Netflix: nobody's out there protesting in the streets because of Netflix, but they did for Facebook, right? People called it the Facebook revolution in the Arab Spring. But even with Netflix, there is this tremendous power to inflect whole spheres of human activity. They're having this big pull on what it means to do TV, to do visual entertainment, to do storytelling. The business models are changing. So I think we're going to have to grapple with that more and more. And I think the reason we're here is the collision course between the way that technology works and the way that society works. In technology, there is this idea of the power law. In a space where networks are dominant, the first person to become a big hub and to effectively solve whatever the problem is will always have an advantage over anyone who comes along later.
And the system naturally gravitates towards a kind of monopoly, or a small set of big players and then a long tail of everybody else. When you have that kind of model, technically it's great, because Google knows where all of your friends live, and Google knows what you want to buy, and you can search your photos now with Google. I had a friend recently search her photo archive for "roller derby," and the algorithm was very effective at pulling up the roller derby pictures she was looking for. So there are remarkable benefits to having this kind of deep index of your life, but that brings a lot of responsibilities with it. And I don't think we've yet come to grips with that collectively. I think you're starting to see a lot of the tech companies wrestle with that now. The election was a really interesting watershed for that. You saw how Mark Zuckerberg and Facebook were trying to grapple with the accusation that fake news was actually their fault, right? And I'm intrigued to see how that starts to play out. I was reading recently about Google having a partnership with the National Health Service in the U.K., adopting the notion of the blockchain, which, for our purposes, is just a way of keeping track of stuff in a distributed fashion, keeping a history of objects in a sort of chain of custody, almost. The people working on Google's DeepMind artificial intelligence system are trying to create a way for the National Health Service to track what happens to every piece of data that they share with Google, because Google is going to do all this number crunching on the vast archive of tens of millions of British citizens' health data to try to do better healthcare.
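The chain-of-custody idea described here, a tamper-evident history of what happens to each piece of data, can be sketched as a minimal hash-chained audit log. This is a toy illustration under assumed names (AuditLog, append, verify), not Google's or the NHS's actual system, and it leaves out the distributed-consensus part of a real blockchain:

```python
import hashlib
import json
import time

def _hash(record: dict) -> str:
    # Deterministic hash of a record's canonical JSON form
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class AuditLog:
    """Append-only log: each entry commits to the previous entry's hash,
    so any later tampering with history breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, event: str, data_id: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"event": event, "data_id": data_id,
                  "prev_hash": prev, "time": time.time()}
        record["hash"] = _hash(record)
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        # Re-walk the chain; any edited entry or broken link fails
        prev = "0" * 64
        for r in self.entries:
            body = {k: v for k, v in r.items() if k != "hash"}
            if r["prev_hash"] != prev or r["hash"] != _hash(body):
                return False
            prev = r["hash"]
        return True
```

Because each entry's hash covers the previous entry's hash, silently rewriting one step of a data item's history invalidates every entry after it, which is the property an auditor needs.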
And so I thought that was interesting, because that's the first time I know of where a really big-league, cutting-edge technology initiative is, from the beginning, trying to build in a core transparency role: to say, well, we're not going to show you how our magical algorithm works, but we're going to let you follow all of your information around. We're not just going to sneak away with it, or take it away and do anything we want with it. You're going to actually be able to see everything that happens to each piece of data. I think that has to be part of the future. Well, that raises the issue of algorithmic auditors, people who are trained to actually go in when there is a lack of transparency and examine it. Consider the loan debacle, where people are being denied loans, or in some cases denied parole, because there are these algorithms that determine whether someone's a good candidate for parole or not. These are people's lives, right? You can't get a house; you can't get out of jail. An algorithm is central to that decision-making process, and I think those are the cases. Do you think it'll have to be lawsuits that spark that kind of ethical review? Because right now it's the tech companies that create these things and sell them to state and local governments, and they're like, it's just a device, it's a tool, and if you don't use it correctly, that's not our fault. Maybe it takes someone suing to see the algorithm or to have more transparency. But again, that gets us tangled into a different sort of web. Is there a better way? Should tech companies step up and be more transparent, not about the proprietary stuff which they profit from, but about the chain of command? Should they have an Asilomar-like meeting where they sit down and go, okay, ethically we owe this to the public, because so much of this stuff governs people's lives and choices?
I think they're going to have to, and I think there will be lawsuits, which is interesting. In a way this brings us to the question of the status of knowledge, because one of the things you have to do to make this work is create a new kind of court-acceptable evidence around algorithms and data. I'm not an expert in this, but I don't think that exists now, and so people are going to have to make the case that this isn't just about this individual person being denied in this particular way; there's a systemic problem, a moral and then legal problem, with the way a set of algorithms has been created. But people have been hiding behind their tools for a long time, and that's a classic move: well, it's not me, it's just the system. And so I think the literacy question is the first step, because people have to start to recognize when and how their lives are being constrained or changed by computational systems, right? And it's something as simple as understanding what the deal is. We've all made this bargain with Google. Well, maybe not all of us, but almost all of us. Actually, even if you've never used a Google service yourself, I would be willing to bet that your data interacts with Google servers multiple times a day. Understanding what that bargain is doesn't necessarily mean you wouldn't still make it, but it means having a clear sense of what you're trading and what you're getting, and coming up with a set of standards around that, so that people have a little more awareness of what the parameters of some of these arrangements are. Because we are deeply implicated in all of these different computational systems now. So, yes, I think there are technical conversations to have, and also legal conversations to have, about how to do this.
I always wonder when I read these booster articles about coding. No offense to coding, but, you know: let's get kids to coding camp and this, that, and the other. It always infuriates me. I'm like, that's fine, but they have to have an ethics class. If they sit down at coding camp and learn how to code, they should take a history of tech and an ethics class that coincides with that. And it's the younger generation of kids who take all of this for granted, right? They grow up with algorithms seamlessly woven in. It affects who they choose to date. It affects what real estate they'll buy. It affects everything. But it's invisible. And this is where I want to push you a little more: you mentioned BJ Fogg and the persuasive technology labs. These systems are intended to be invisible. That is their power. Remember "frictionless," "seamless," all these words that we hear from Facebook and other tech companies: you're meant not to see what they're doing. So it strikes me that that's part of the collision course, right? Companies that want your experience to be very seamless, and people who might start saying, wait a minute, what is this stuff and how do we get in? You have a wonderful discussion of how we have to get into it at the seams, and maybe that's lawsuits, maybe that's more flash crashes or other problems where algorithms run amok. But what do you see? Is there something more definitive besides just calling for general transparency, which, personally, I don't think the tech companies have any interest in providing? Are there regulatory or policy means that could effect even small change? I think there are. I think for a long time it was possible for technology companies to say, well, it's moving too quickly, you can't understand it, there's no way you could actually get involved.
And now there are people in 24-hour hackathons taking the AI tools that Google has open-sourced and using them to solve all sorts of problems, from the ridiculous to the really significant. Like the guy who coded up a thing that used a webcam and machine learning to visually identify when the meter maid in San Francisco was coming down his block, so he could move his car before she arrived. This was a significant problem to him, he was willing to spend 24 hours of his life programming a solution to it, and he won the hackathon he was in. So the tools are in a lot of ways becoming more democratically accessible, and I think that can be a pathway to policy change too, because it means there are more people who are going to be literate in some of these things. I've been playing with this metaphor, and I don't know if it's really the right one, but it seems appropriately disturbing: the fistulated cow. Do you guys know what a fistulated cow is? You're about to find out. This is for the kids. In veterinary science, and there are some ethics issues around this, people will take a cow and cut a hole in its side, and then you can reach your hand in and see what's going on inside the cow's stomach, as a way to learn about cow biology and health. So you'll see a cow with a socket, a hole. It's a gruesome but very practical farm approach to finding out what's going on inside this cow. So you could imagine, and I think what Google is doing with the National Health Service is a little bit like this, the notion of a fistulated algorithm. It's not going to be their tagline for it. No, I don't think so. I'm still working on it.
But maybe it's a good tagline for us to use: well, if this is this sort of secret living entity, where we can't mess it up and you can't tell us about all of the secret things you're doing, all of your cutting-edge technology, you can give us this limited access, this window into what's happening. And the ethics of that, the ethics of what remains yours and what knowledge you have access to, a lot of which ends up in the legal context, I think comes down to the notion of property. We build out all of our public thinking about these issues based on property rights. So if we have a shifting conception of who owns data, whether we own our own data or have certain rights to our data even after it is captured by somebody else, that, I think, is a pathway to changing some of these things. Let me shift a little to the existential questions, because you actually grapple with those quite well in the book: the issue of human agency and human dependence on some of these powerful algorithms. Pew recently did a study where they surveyed a lot of the leaders in the fields of artificial intelligence and machine learning, and I was actually kind of surprised by the results. You've probably read these; they called it an unscientific sample, but it's basically people who are deeply involved in all this work. They asked about the positive versus the negative impact of algorithms, both for individuals and for society. 38% predicted that the positive impacts of algorithms will outweigh the negatives for individuals and for society. 37% said the negatives will outweigh the positives. 25% said, eh, it's a draw. I was fascinated to see those results. It's what you were saying: there's more unease about these things than the new iPhone commercials might lead you to believe. So this gets to the question of human agency.
When we talk about machine intelligence, whose intelligence is it? When we talk about human dependence on machine learning, it's not just the little local news snippets of yet another person driving into someone's pasture because they followed their GPS blindly. Do you think we're actually adequately prepared, either from a humanities background or from an ethics background, to deal with this question of agency? Are we going to outsource our rational instincts too much before we get to the point where we're asking the right questions about the power of these algorithms? I think it's a real concern. But I also think that, realistically, there's no unringing the bell. We're going forward. This computational transformation is happening. And so we need to find a way to ride the wave rather than just falling into it. The thing I find really interesting, and again, this is really a humanities question, is: how is the horizon of our thought being changed by computation? We've all heard of the filter bubble, this notion that algorithms are pre-editing all the information that you might get before you even have a chance to decide whether you're going to pay attention to it yourself. And that's very powerful, and it's very real. So it's not so much the conscious decisions, when we decide to follow the GPS directions; it's the unwitting choices that are being made. Just take Google and Facebook, which many, many people use. They have tremendous power in shaping not just the things you're thinking about in the immediate context of your mind and your consciousness, but the periphery of everything else that you know exists. And remembering that there are things that Google doesn't know, right? This is especially important, I think, for young people; I talked about this with my students.
There are things that Google doesn't know. If you Google something and nothing comes up, that doesn't necessarily mean it never happened or isn't possible. And I think that needs to become part of some sort of core ethics curriculum, as a baseline: the status of knowledge does change when we're increasingly reliant on these systems. So you don't believe that absolute algorithmic power corrupts absolutely? Well, I'd like to be an optimist. Professionally, I would like to be an optimist. So here's what I'd like to think about: how we can become better collaborators with computation. Real collaboration involves a knowing exchange, right? It involves a conscious choice, or set of choices, to cede some kind of agency, and you get other things in return. Great partnerships are always like that. They involve trust, they involve empathy, these things that are not just about the transfer of information but about other aspects of our consciousness and our sense of identity. So I'm intrigued by the fact that so-called centaurs, human chess players who work with computers, can beat both the best human chess players and the best chess computers, right? The combination of the human and the machine together is more powerful than just the machine on its own or just the human. I think you see that in a lot of different art forms and practices today: the people who are successful are using different technologies to empower their work and to do more than they could do before. But we also have to think about how the work is changing. If you're a photographer, you've been through like five revolutions in the past 30 years, just from digital technologies transforming what it means to be a photographer.
And another thing I'm thinking about now is that this also raises the stakes for excellence and beauty and aesthetics. Everybody has a camera, a pretty nice camera, that they're carrying all the time. And I don't know if you've noticed this, but if you have a recent-model smartphone, not only does it take nice pictures, but as soon as you take the picture it silently corrects your photograph and makes it just a little bit better. So there's an Auto-Tune for every art in our future, and I'm interested in what that does when the baseline, the basic level of creativity, or let's call it artistic craft, is rising. So one answer is to find the seams and the edges. There's lots of interesting art about what happens when algorithmic systems break, when we do unexpected things. And another side of it is: can you come up with new art forms that were impossible without computation? And I would add, I think there's also an argument that the third thing is to continue to preserve the things which algorithms either shouldn't or can't alter, right? The in-person experience at the museum versus only seeing art on the Google Art Project, right? In some ways because those things can't be quantified, we're losing some of them, and we don't notice until they're missing. Although that's a kind of depressing note. I'm going to turn it over to questions, because I think I see a couple of people already shifting in their seats. So please wait until a microphone comes to you to ask a question, and remember my rule: I will cut you off. She's not kidding. So just raise your hand and we'll get a microphone to you if you would like to ask a question. I don't know. Oh, it's coming. Sorry. Well, maybe you should just shout it out. Yeah, we can repeat the question. Oh, here's one. A microphone. Hello, sorry if I repeat something in your book.
I haven't gotten to read it yet. As you've been talking, I keep thinking about the Turing test for artificial intelligence. Why do you think we humans have lost our skepticism, lost our rebellious streak, to the point that we just agree to trust what these algorithms are saying instead of revolting against it? Well, some of it is about familiarity and some of it is about intimacy. We used to keep computers locked up in rooms that only a few people were allowed into, and then it was a big deal when we put them on our desktops, and then when we let them in our laps, and now they're in our pants, so you can see where this is going. And so we sort of can't help but engage in these trusting relationships at a certain point. And also we tell these stories. When you look back at the marketing for Siri, if any of you remember this, they had Zooey Deschanel and Samuel L. Jackson, these actors, having these very lively, funny conversations with Siri. And so the selling point of Siri was not that it was an omniscient god. It was that Siri was human-esque, you know, something that you can have this rapport with, this repartee with. But I do talk a little bit about Turing in the book, and what I really like about the paper where he introduces the Turing test is, first of all, that the Turing test actually has a lot to do with gender in the way Turing lays it out, which, you know, anything about Turing's life is very thought-provoking. And also that he ends the paper by sort of saying, look, this is a ridiculous question, this question of whether we're going to recognize an intelligent machine. Because how do you know if anybody's intelligent? How do we recognize intelligence in one another? The best you can do is have a conversation and see if you can tell through conversation. But then he says, if we really were to have intelligent machines, what we need to be thinking about is parenting.
We need to think about child machines and nurturing relationships, because it's not going to be like one day, you know, Spock is here. It's going to be this thing that's like a toddler, and like a kid, asking questions. And I find that thread, which has come up in various people's thinking on this topic over the decades, very appealing. Because I think that's actually, you know, we do need to have that. If we're serious about one day creating some kind of artificial intelligence, we need to think of it that way. We need to think about a nurturing relationship and not like some kind of, you know, cold science experiment. Because, you know, we need to love our monsters. Hi. Thank you for a great talk. Do you think we tend to glorify what decisions looked like pre-algorithms, right? Like people making them with all the behavioral and various cognitive challenges that we have? And then secondly, do you think that, given those kinds of pros and cons of the data that we're sharing, given a fully transparent look at that, people would actually make different decisions than they're currently making? So, of course we glorify the past. You know, we do that all the time, and we are very good at selectively editing the past into the stories that we want to tell. You know, I think that's a big part of the machinery of what it is to be human: to actually edit out most of the information and come up with a compelling story that sounds right to us. And in a way that bears on the second part of your question too; we're always going to throw out a lot of the information that we don't like. You know, there's a long list of cognitive biases, and we do have to pay attention to them. I'd like to hope that we can learn to make better decisions.
I worked on this science fiction project a couple of years ago with somebody who imagined algorithms as a big collective decision-support framework to help people make better decisions. And his one-liner about it was, you know, we imagine all sorts of new technologies in the future, but we still think it's going to be Captain Kirk in his chair making the gut call, right? And clearly that's not the right way to do it. And so there must be ways to improve on our decision making. But I think it's going to require, getting back to the first question, a kind of trust, right? And also a kind of literacy, because you can present people with all the data in the world right now, but if they don't know how to interpret it, you know, it's useless. And I think that's the stage we're in right now: we're sort of drowning in data, and we don't have very many good tools for moving from data to analysis to a decision. Hi. So I'm interested to hear your thoughts on whether you think there are some domains of society, or of life, that will resist algorithms in some way, that will remain, you know, fully human endeavors. I'd just sort of like to hear your thoughts on that. I think the places where it's most possible, most compelling, to resist are what I think of as pushing back on the thickening layer of computation. You've heard the phrase ubiquitous computing. I feel like there's this layer of computation everywhere, you know, and screens and data are popping up as interfaces between us and the world in all sorts of places. So I think that live human interaction and live performance is one of the places where there will always be a place for unmediated live performance. There will always be a place for what we're doing right now in this room, you know.
And I don't know if you're familiar with the term the uncanny valley, which is sort of what happens when computers try to be human, and it's better to be R2-D2 than it is to be a sex doll, you know, in terms of empathy and believability. R2-D2 doesn't try to look like a human, but we still believe in it as a sort of lovable robot with a personality. And there are all these, you know, entities that are right on the edge of humanity that feel much less like humans. And so I think that live performance, live interaction, and the incredibly high-bandwidth level of information you have just when you're having a conversation with someone in person, and things like live music, live theater, there will always be a place for that as an unmediated space. I think in a lot of ways the arts are going to be those places, and as with all new technologies, almost nothing is going to go away. It's all just going to get layered on, just like we're all going to keep reading physical books for a long, long time. It's just that maybe not as many people will do it, or you'll have to make it a deliberate lifestyle choice rather than something that, you know, everybody does. Your imagination opened my eyes. Thank you, and I apologize if you had touched on this. I came in late, and it may be chapters in your book, which I haven't read. But I wonder if you would share with us your thinking on what difference it makes if females are not participating in the creation of algorithms on top of algorithms, on top of algorithms, in proportion to the population. You default a number of times to the role of the arts and the role of the humanities. But I'm wondering if you would discuss whether it really makes a difference, and whether we could envision a present or future in which females are participating in proportion to their numbers in society.
I think it's extremely important. It's clear that there's a tremendous problem in Silicon Valley today, and in a lot of STEM fields, with gender imbalance. On a basic level, it's obvious why this is a problem: because algorithms are made by people, and they reflect the conscious and unconscious biases of the people who make them in many different ways. And so if the group of people who are making most of this stuff is not representative of broader society, that can be a problem. And it's also a problem when it's just a small group of people who are making decisions that impact billions of people, which in some cases is happening every day. And so that notion of diversity, and I would include other forms of diversity as well as gender, but I think gender is really important. I think we're at a stage now where at least the sort of big-name technology companies have realized this is a problem. Some of them are doing a better job of trying to deal with it than others, but it's hugely important. And again, for me, I think everybody, every little girl and every little boy, should be learning something about how these systems work, not because they should necessarily be expected to go off and start programming algorithms themselves, but just to become citizen members of a civic society, because this is a new kind of literacy that is about ethics as much as it is about technology. Oh, well, I think that's a really good question. I think that we would probably see a different range of platforms and services. I think the kinds of startups would be different. I used the example of the guy who was programming his camera to spot the meter maid. Maybe, and this is just one data point, but what would a really talented woman have done? That guy was clearly on some trajectory, and he was very good at what he was doing.
If there were more women who were just like him, would they have chosen different problems to work on? And what would those problems be? So I think that it's still a very open space, and the technology is moving quickly enough that it really is still true that a small group of people can come up with an idea that changes the world. And so I think we need to make sure that those doors are open, so that all of the small groups of people are not 20-something guys who live in Silicon Valley in shipping containers. And yeah, I do think that's happening. I actually see a lot of focus from big technology companies, and from elsewhere, on bringing more diversity into the pipeline. But yeah, I wish I could say this is exactly what the world would be like, but I think the point is we don't know; we need to find out. And not just gender diversity, but economic and class diversity. A lot of the things that I think people are suddenly aware of post-election: oh wow, there's a whole other world out there of people who have different perspectives than we do. And I think the tech space is very much an elite space if you look at it from a socio-economic perspective. I haven't read the book yet, but I did a quick check of the index. That's the best way to do it. Under D, it goes straight from data mining to decision making, and it doesn't include dating sites. There is one reference to Tinder, but I'm just curious why you didn't spend more time looking at eHarmony and OkCupid and the algorithms that have actually changed people's lives. There have been a number of books on the algorithms behind those sites, how they're set up, and what we can learn from that. Yeah, no, it's a great topic. And in some ways I felt like it was something I could point to because people were already familiar with it, as another very intimate example where you may not really think about it, but really you are just entrusting this huge set of decisions to an algorithm.
And so, yeah, I felt like I didn't know what else I had to say about it, because other people have written about it; there's been a lot of great work on it. But it's an example that I use as a way to cue people to these sort of deeper questions about how we have to redefine who we are in the face of computation. We have to perform ourselves in a different way; pre-algorithm, we didn't have to do that, right? Yeah. Every guy is holding a puppy now, right? Every guy holds a dog. And this is actually, to be honest, because my dating years ended before these systems really became popular, I didn't feel like I knew enough about the experience. I feel like the dating game is just completely different from what it used to be. There were some research and observation problems there. One here and then one in the front. Oh, is there someone back there? There he is, this gentleman in the blue. Yeah. Probably a stupid question, but is there a way to trick an algorithm? Well, you do it all the time. Yeah, you do it, yeah. We do it all the time. We were just talking before, sorry, did you want to add something? No, I'm a novice compared to all of you, but it's just something that occurred to me based on what he was asking, and I'm just curious. It takes someone who's trained in such things to outmaneuver them, but that's part of being human anyway, if we're smart enough to. Yeah, so I think that we do it instinctively; another thing that defines us as humans is that we lie all the time. And so, of course, we lie to algorithms. We lie about how tall we are on the dating website. We were talking before the event about this phenomenon of delivery truck drivers. This is a little bit speculative.
I can't say that I've really reported this out, but it has certainly happened to me: I've been at home, nobody knocked on my door, and then there's a sticker on the door saying, oh, we tried to deliver your package, you weren't here, we'll try again next time. And I did some research online, and I know this has happened to lots of other people. And I suspect that it's a way for delivery truck drivers to catch up on their route, something that the algorithms tracking their efficiency and their progress maybe can't capture, and so it's sort of a shortcut. There's this French philosopher named Michel de Certeau who used to talk about the idea of poaching: the ways in which you could, you know, maybe steal a ream of paper from your office or take some shortcuts because it wasn't counted, and it was a way to deal with the problems of life and sort of get along. And so you can see how we do that all the time with algorithms now. You figure out what's counted, or what's being tracked, and then you find the ways around that, the shortcuts. We're always taking shortcuts. Sorry, I was still writing down my question, so I didn't quite have it, but I guess it returns to the idea of diversity, and the economic part of it brings up your point that being first has a great advantage to it. Because creating algorithms requires such a high level of education and competency in these fields, do we see algorithms as having the potential to be inclusive or democratizing technologies, or are they going to concentrate power in those who are creating them? And do we need to be confronting this dilemma of ownership around algorithms and their impacts? I think that there's a tremendous opportunity to do that.
There's no technical reason that so many of our systems need to be giant funnels accumulating wealth into a pyramid shape. You could just as easily create systems that empower people to take collective action or do other kinds of things, including collective financial stuff, right? And so there are obvious financial reasons why it has worked out the way it has, but technologically, you know, there's no reason that it has to be that way. And I hope, and maybe this is another answer to your question about how the world might be different, that there is potential for some of those alternative systems to come into play. One of the ways this can happen is as the sort of layer of computation continues to bring more people across the digital divide and into regular access to different types of networks. And those people are probably not the ones you're trying to sell a car to. They're people who have only just crossed that divide because they are more marginalized, less included. But if we can be effective at democratizing access to the tools to build and to create new algorithms, new platforms and systems, then maybe, you know, people will come up with ways to broaden, to flatten out, the power structure as well. I think this is something you see in different arenas, in journalism. There's been a lot of focus on creating tools for people to report on, you know, police brutality, to report on protests, to create encrypted and secure communication systems. The Open Technology Institute here works on some stuff like that. So I think there are prospects. But in a lot of ways it depends on all of us thinking of ourselves not so much as consumers, but as participants in this digital culture that we're all building together. Thank you very much. Very enlightening.
And I want to say I share your desire for, and indeed the existence of, optimism going forward. I think it's a big thing not to just juxtapose artificial intelligence, broadly speaking, versus human intelligence, or lack thereof, broadly speaking, sometimes. But too often I find us juxtaposing the two and arguing which one is better, using the human aspect as the metric. And what I'd like to ask is if you could share with us your thoughts on how we actually measure, objectively, how good an algorithm or a set of algorithms is. And one example that comes to mind that you perhaps could use as a proxy for this is self-driving cars, and the accident rate in those versus the accidents that inevitably happen with the traditional way of driving cars. Yeah. So the only way we can measure these kinds of intelligence is through specific, quantifiable, objective tests. So we can tell if a self-driving car is doing a good job as a self-driving car; we can measure the accident rate and things like that. But actually, I really like that example, because it's much harder to measure its effectiveness against the human driver compared to something like chess, right, where there is a set of clear measurements, where at a certain point computers were better at chess than humans were. Or Go, the game of Go; as recently as last year that happened. So what I find interesting is the notion that it's not really about intelligence, it's about this kind of performance, of performance and culture, and driving is like that too. And you can imagine a future where people are going to pay more for the self-driving car that's slightly better at cutting off the other self-driving cars so that they can get a little bit ahead in the line, right? Yeah. Or some people will want the Volvo of self-driving cars, the one that's more safe and secure, right?
Until all the cars are self-driving cars, you know, until you have something like a monoculture where everybody's following the same rules, there's going to be a lot of messiness. And even when that happens, even if every car on the road is an autonomous vehicle, they're still going to be interacting with pedestrians, and the rules for self-driving cars in New York might be different from the rules for self-driving cars in Rome, right? Or I guess it would be sort of a sad thing if they weren't, you know, if these cultural differences didn't somehow get reflected. So I think that there will always be this space of ambiguity between what we might think of as measures of objective performance and the way that performance takes place in the real world, and all of the different modifications and tweaks and course corrections that have to happen once you take one of these computational systems and actually drop it into real life. So I don't know if that quite answers your question, but I think we're increasingly starting to see autonomous systems that are performing in conjunction, or in collaboration, with humans, and that, I think, is where it gets interesting. We might one day think of autonomous vehicles not so much as human-level intellects; we might think of them like horses, you know, or some other kind of work animal, so it's another kind of intelligence, another kind of relationship. I think we have time for one more question. Gentleman here.
Hi, I just wanted to ask: so much of this discussion has been about consumer applications of algorithms, where a lot of the relationship is easy, you know, instant gratification. If you're using Instagram or a dating site, the algorithm is helping you, and then maybe down the line you worry about privacy. What do the public policy and social conversations look like around something like the industrial Internet of Things, or other sorts of factory automation, where the immediate effect is that you lose your job, with the hope that maybe down the line society benefits? So, you know, one thing that immediately pops into mind from the Internet of Things is the question of security and technology, intellectual property and intellectual property laws, and the protection of access to these devices, when sometimes the devices might be very easily hackable. And so all of a sudden, you know, somebody is taking over your nanny cam and communicating with your infant; this is a thing that has happened. And so the trade-offs, when you talk about that larger scale, I think end up being about a whole legal realm, you know, of industry and profitability questions in conjunction with these other sorts of ethics issues. I don't know. This is actually an area that I'm very interested in and working on right now, specifically the question of AI and automation, and I'm just starting to have some conversations about that, because I don't think there's a very clear playbook for how to grapple with those questions. The problem we have, and this is something that I see a lot in policy circles, is that these ideas get distilled into a very small set of vignettes, or stories, or exemplars, that people use and wave around and say, this is what's going to happen if we don't solve this problem. And so we lose a lot of the nuance and complexity of the broader situation.
And I think what we need is actually a set of better stories about what automation is going to look like and what people are going to do. What's that person going to do after he loses his factory job? What's that Uber driver going to do once her car is a robo-car? Thinking about those intermediate steps and exploring a broader possibility space is really important, because I think a lot of decisions right now are being made in a space of pretty bad information, and that's what I think needs to change. Well, I wish we had more time for questions, but please join me in thanking Ed for his time, and stick around, because he'll be signing books. Please buy a book and have some snacks.