Thanks a lot for that very kind introduction, and thanks to all of you for being here. I'm excited and intimidated to be here, because all that PhD stuff for me was about twelve years ago, and since then I've realized that science is really hard and writing science fiction is just so much quicker. You don't have to go into a laboratory; you just hang out at the coffee shop. So that's what I've been doing for the last decade, and that's why today I want to talk to you about killer robots.

Yes, they are a staple of science fiction, but they're also real, and so we need to talk about how we're going to avoid being sliced into tiny bloody pieces by robots that, in our hubris, we decided to build with buzz saws for hands. And I can tell some of you were thinking about doing that, just from looking out at the audience.

That sounds silly, but there are a lot of really smart, really high-profile people who, it turns out, are really afraid of killer robots: people like Stephen Hawking, Elon Musk, Bill Gates. Elon just gave ten million dollars to the Future of Life Institute, a group dedicated to mitigating existential risks facing humanity due to the development of human-level AI. Similar mission statements are shared by the Lifeboat Foundation, the Cambridge Centre for the Study of Existential Risk, the Machine Intelligence Research Institute, and the Future of Humanity Institute. So that is a lot of institutes. Something must be up. Something has frightened our billionaires, ladies and gentlemen, and we owe it to them to explore what that is. So let's get into it. Let's talk about killer robots.
The cultural icon of the killer robot goes all the way back to the inception of the word "robot" itself. The word is Czech, it means "laborer," and it was coined in R.U.R. (Rossum's Universal Robots), a play produced in 1920 in which robots revolt against their human masters and, you guessed it, kill all humans on the Earth. Bender would be proud.

From there things really got going in the '30s and '40s, in the era of pulp science fiction movies. Back then robots were monsters, and they were mostly concerned with kidnapping scantily clad women from square-jawed space heroes, which has always confused me. I don't know what they were planning to do; I don't know what the endgame was for those robots, and as it turns out, I don't really want to know.

So a hundred years later, a lot has stayed the same and a lot has changed with the idea of killer robots as monsters. This fundamental killer-robot theme, a creation made in the image of humankind and yet bent on destroying its own creators, has really been relevant all the way from Frankenstein's monster in 1818 up to David from last year's Alien movie in 2017. It's an enduring theme, and I think part of the reason why is that it really changes with the times: our robot stories have evolved alongside our society. Back in the 1920s, R.U.R. was really about the working class revolting against their bosses. Essentially it was about income inequality, which makes sense when you're in a society that's ruled by robber barons. Early last century our society was more religious, and we see more stories about the fear of playing God and daring to create life. Then later, in the '60s, the robot stories start to evolve into stuff like The Day the Earth Stood Still, representing our fears about nuclear holocaust and about creating tools that are too powerful. And now, today:
We do have more nuanced robots than just Terminators and Ultrons. We have great stories like Blade Runner 2049, Ex Machina, Robot & Frank, and Her that are really exploring, in depth, our relationship with technology. And I think that all of these stories resonate across the centuries because, at its most basic level, the killer robot represents a fundamental fear that we all have, that our ancestors had, and that's just a question of: do we trust ourselves? In particular, do we trust ourselves with these powerful tools that we're able to build?

By the way, there's no data on my slides, just pretty pictures. Ten years since the degree.

So, we are a tool-making species. We have to be: nature is actively trying to kill us at all times, and if we don't build tools to defend ourselves, that's it for us. Our instinct to build these tools is innate. Imagine that: for around a couple hundred thousand years at least, Homo sapiens have been out there building more and more powerful tools, and we've been using those tools to transform our planet and ourselves, in good ways and bad. And what I wonder is, would we have survived this long if fearing our tools was not an innate part of the process of building them?

We have to be able to envision different futures. The fear makes us think. We're drawn toward the utopias and we veer away from the dystopias, and I think that fear protects us as we grow and explore and become more powerful and capable as a species. You may also notice what I'm trying to say here: that envisioning all these different possible futures, perhaps through some sort of science fiction (you know, not real science, but science fiction), is completely necessary for the survival of our species.
It's a very important thing for people to do, though you can take that with a grain of salt, since it's my profession. All right. That said, the real robots out there... they really are out there, and they must be a threat, right? Why are our billionaires shaking in their loafers? There must be some real reason, so let's look at that.

I think that what we're afraid of is what we can't predict. We're afraid that the rate of technological progress is getting out of control, and ultimately this is embodied by something called the singularity. It's worth mentioning that a science fiction author, Vernor Vinge, coined the term "singularity." The idea is a hypothetical situation in which we build a machine that can build a smarter version of itself, and then that version does the same thing, and this iterates until we have a godlike intellect that's trapped in a box. And as it happens, if that did occur, it could cause some problems.

As a science fiction author, these problems are my bread and butter, so I love them; I love to think about them. The first one is loss of humanity. Human beings could copy their brains into computers, or replace their brains with computers, and then boom: we don't have any more people, or at least not in the way that we understand people right now. A great example of this is in Cory Doctorow's novella I, Robot, in which a father is searching for his daughter, who's gone missing. When he finds her, he realizes that she's made a copy of her brain, put it into a computer, and then made a thousand more copies of herself, and then one of those copies gets killed. He's left wondering whether he should be mourning for his daughter, or how he should feel. She's entered into a post-human existence, and so they've destroyed the natural ties between father and daughter and lost some humanity there.

Another example of a negative outcome is economic collapse.
When super-intelligent robots are able to take every single job, it'll leave nothing for humanity to do except, you know, starve to death, right? This one is really interesting to think about: what is the last job that human beings will keep, as robots get smarter and more capable? Now, when you ask people this question, they usually (well, always) come back with a version of "the robot will never take my job." That's not going to happen to me. I drive trucks; that's impossible. Until it happens. Our instinct is to draw a line in the sand, and if you look at the development of AI, it's just lines in the sand all the way to the horizon, as we keep stepping back and realizing that machines are more and more capable.

But I think that there is one ability we have that the robots can't take away from us. Think of the book A Million Little Pieces by James Frey. It's this huge bestseller, a memoir; it's lauded; it's incredibly valuable. And then one piece of information about the book changes: we learn that no person actually lived through what happened in the book. No human being experienced it. The author had lied about living through these experiences of drug addiction.
So suddenly the book loses most of its value. None of the words changed; nothing in the book changed. Oprah got very angry. The question is: what is it about the book that changed? And the answer is the one thing that we have that the robots can't have, and that's just the human experience, the act of sucking air and being a human being on the planet Earth.

Now, if you don't think that's valuable, you should look around, because as a society we're being taught exactly how to commodify and sell the human experience. We've all become experts at this; we market and sell ourselves to each other as a form of entertainment. One example, of course, is that we put pictures of our meals on the internet for everyone to share. The fact is, a picture of your meal doesn't have any value in itself; someone has to eat that meal. There's no intrinsic value to the pictures of my mother's cats; it's the human experience that I share with my mother. Between all of us, we humans create this shared context that gives value to just the fact that we're alive.

Moving on to another negative outcome, we've got the Big Brother AI, where a super-friendly AI begins to make all of our hard decisions for us, optimally, for better or worse, and we end up no longer the captains of the USS Humanity. That's closely related to outright slavery, in which we're explicitly put to work as resources for our AI overlords, or we're all thrown into the Matrix and never realize that we're just a race of very inefficient batteries. And then of course there's my favorite, which is the robopocalypse, when we begin to weaponize AIs and use them to wage war on each other.

These outcomes are all very, very frightening. They're scaring our billionaires and our physicists and a lot of people in movie theaters, and I'm sure Michael Bay is going to keep terrifying people with this stuff. But all of these
existential threats are predicated, to some degree, on the singularity happening. The problem, though, is that if you go ask researchers, if you talk to the people who are actually building AI, I can't think of a single person who predicts a spontaneous singularity happening overnight. In fact, most people consider the spontaneous explosion of intelligence to be about as likely as elves sneaking into the cobbler's shop and building all of the shoes for him at night. And then, presto, you walk into your laboratory in the morning and somebody wrote your thesis for you. It just doesn't seem to happen. Instead, I think there will be a long, iterative process of learning how these systems can be created in the best way, to ensure that they're safe.

What I really want to talk about now is what actually scares the hell out of me. Moving one step at a time, we have walked together into this new age, and for the first time technologists have access to this amazing array of new technologies: speech recognition, face recognition, machine learning, all the things that we've been hearing about today. We can just cherry-pick these things and plug them into our devices, and of course that's exactly what we're doing. We're taking all of these things and giving people exactly what they want. I think of this right now as the technological age of candy. It's just candy.
It's exactly what you want to eat, but if all you eat is candy, you know what happens: you get sick. I think that's what's happening right now. We use mapping algorithms to get from one place to another, but we're left not actually knowing how to navigate our own cities and neighborhoods. We've got Facebook and Twitter feeds giving us curated versions of the world that create divisions in our thinking, dividing our nation and our families. And we're checking our smartphones in these endless cycles of dopamine reward, like rats hitting the cocaine button in some unregulated experiment. And it's not just adults who are the guinea pigs in this situation; it's children too.

As a father, this is something I think about quite a bit. I've got an Amazon Echo in my kitchen, and I've got a five-year-old son who can finally say "Alexa," after years of saying "Awexa" and trying to tell her what to do. Now he can order her around; he can tell her exactly what to do, and she does it. For the first time in the history of humanity, we can talk to a little black cube and have it respond to us in the voice of a woman, doing speech recognition. We've never had this ability before, so you can imagine our brains are not quite prepared for this experience. I think it shouldn't come as any surprise that our brains will map those interactions with machines onto our interactions with humans.

So the question for me is: do I want my son to be involved in an interaction in which he's giving orders to a subservient woman who complies instantly, regardless of how rude he is? His mother is definitely not into that interaction; she's not pumped about that. To be clear, I don't really care about Alexa (don't tell her I said that, though). I'm not worried about her; she's a pile of code; she doesn't have feelings or thoughts. But I'm worried about my son.
I care about him. I want to know what kind of person he's going to become. Is he going to have interactions that help him learn to be polite and thoughtful and kind? So in our house we say please and thank you to our Alexa, and we do it not because she insists, but because I insist. I think that's a really important distinction moving forward, because technologists have a lot of experience creating safe consumer products that go into the home and get used by people. That's not to say you can ever really win, because if you put any object into enough people's homes, somebody will get hurt. They're going to get in the bathtub with it, or lick it, or put their fingers in it. People are very curious mammals, and it doesn't always end well for them. And I'm sure this is going to be a much more difficult problem once we have autonomous, lifelike machines in our homes as well.

But at the end of the day, I don't think physical safety concerns are the biggest challenge for developing new products. I think the bigger challenge is something we've never really had to deal with before, which is building ethical products. I see two main dimensions to this problem: explicit and implicit behavior. Explicitly, how do you build an autonomous machine that's going to make moral judgments? How is the robot going to choose, in the thought experiment, which human life it saves if it can only save one person from a burning building? Does it save the child, who's got more years to live? Does it save its owner, who's got an excellent credit score and still owes a lot of money?
So it needs to keep earning. Or does it just sit back and watch, in order to remove itself from any liability for its parent corporation?

I think this is actually a great problem, because as a technologically maturing society, this is the kind of problem we need to solve. We need to be able to quantify our values as a society and embody them in our machines. And, as opposed to the robot war, I kind of love the idea of the artifacts in our lives being paragons: really representing our values, acting them out, and being something you could aspire to behave as well as.

It's an idea that's been explored in science fiction. The founding father of artificial intelligence, John McCarthy, the man who invented the term "artificial intelligence," wrote hundreds of books and articles and was brilliant, but he only wrote one fictional story, and it was about this. It was called "The Robot and the Baby," and it imagined a future in which a personal service robot is watching over a baby that's being neglected and abused. The robot takes the baby out into the street to remove it from this dangerous situation, and then a police officer comes over and says, "Robot, why are you carrying around a baby?" And the robot won't tell the police officer who it is or where it came from, because it's balancing privacy with safety. The really prescient part of this story is that the ethical values the robot is trying to work its way through are decided, in the story, by congressional committees that argued over the different ethics that should be put into consumer products.
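That kind of value-balancing can be sketched as a simple utility calculation. Here is a toy illustration in Python (not anything from McCarthy's story, which used Lisp); every action, score, and weight below is a hypothetical placeholder:

```python
# Toy sketch of a robot weighing candidate actions against ethical
# values like safety and privacy. All numbers are made up for
# illustration; a real system would need far more than this.

def utility(action, weights):
    """Weighted sum of an action's scores on each ethical value."""
    return sum(weights[value] * score for value, score in action["scores"].items())

# Hypothetical actions the robot might consider, each scored on how
# well it serves the baby's safety and the family's privacy.
actions = [
    {"name": "carry baby to safety", "scores": {"safety": 0.9, "privacy": 0.2}},
    {"name": "answer the officer",   "scores": {"safety": 0.5, "privacy": 0.1}},
    {"name": "stay silent",          "scores": {"safety": 0.5, "privacy": 0.9}},
]

# The weights encode which values the designers (or, in the story,
# congressional committees) decided the robot should prioritize.
weights = {"safety": 0.6, "privacy": 0.4}

best = max(actions, key=lambda a: utility(a, weights))
print(best["name"])
```

With weights this close together, small shifts flip which action wins, which is one way to picture a robot "vacillating" between safety and privacy.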
So I think that could be in our future, for better or worse. God knows what kind of machines we're going to get, honestly. As an aside, I published "The Robot and the Baby" in an anthology that I co-edited, and just to brag about it: one of the most surreal moments of my life was editing the language of John McCarthy's story, where he suddenly switches into Lisp in order to go through the utility values of each of these decisions and work out the math of why the robot was vacillating between safety and privacy. I had this golden moment of editing language and then debugging Lisp code, and I thought, this is it; this is the synthesis of everything. That was a nice moment that I wanted to share.

Aside from explicit behaviors like that, implicitly, how is the presence of a lifelike machine going to affect us and our children? At my house I also have a seven-year-old daughter who runs right past me to go hug her mother. And I aggressively self-promote; I have a lot of songs about how great Dad is. "Dad is great. What a handsome guy." They don't work. And I wonder, how are either of us going to be able to compete when there's a robot in our household, when our children are running right past us to go hug their machines? Because kids are, and will be, interacting with lifelike machines, and so the question is: can they tell the difference between what's fake and what's real?

We're starting to find out. Researchers at the University of Washington videotaped 80 preschoolers interacting with a stuffed dog and with an AIBO robot puppy. They found that most kids understood the difference between these two things. They know that AIBO is not alive, but they still treat it as a moral entity; they treat it like a real dog. They handle AIBO more gently than the stuffed dog, and they talk to it more frequently.
They engage in reciprocal activities with it. And I find this most interesting: when a researcher smacks both dogs (really just a sharp tap), the children stare at the AIBO for twice as long, trying to figure out whether it was okay to hit this lifelike machine. So if a technology looks and acts alive, then children will naturally treat it that way, even if they know it isn't really true. It's scary to me to think that negative interactions with lifelike technology, like kicking a robot dog, could be mapped back onto real humans and animals. But it's also reassuring that kids naturally empathize with lifelike machines and want to treat them well.

So how do we ensure that we have positive interactions with our AI? There's another really great study at UW: 60 grade schoolers playing tic-tac-toe with a really realistic virtual face on a screen. When the virtual character makes a dumb move, the researcher standing nearby calls it really stupid: "Oh, you're a really stupid robot." Half the time the virtual character doesn't say anything, and the other half it demands moral treatment by saying, "Hey, that's not okay."
"You're hurting my feelings when you call me stupid." When the character keeps quiet, they found that only half the kids reported that the insult was not okay, and that number jumps to over 90 percent when the robot stands up for itself. So what I see is that it's up to the robot to tell us how we should be interacting with it. If a robot demands to be treated with respect, children are happy to comply, and if it doesn't, they're more likely to see nothing wrong with abusing a very lifelike robot. In other words, it's up to the technologists to define this ethical dimension of our interactions with machines; it's not really up to us.

So I'm hoping and predicting that this kind of research is going to bring in a new age, after the age of candy: the age of meat and potatoes. Instead of getting exactly what we want from our technology, which isn't always what we need, I'm hoping that our future interactions are going to be designed to give us what we need. If I'm rude to Alexa, I feel like she should ask me for an apology. I want the mapping algorithms in my car to help me actually learn how to navigate myself. That's a harder challenge, but if you have two products on the market that both do the same thing, and one of them will slowly help you learn to navigate on your own, I think consumers will pick the one that makes them better, instead of the one that just gives them exactly what they want immediately. This self-destructive nature of eating candy all day is starting to make an impact on us, and I've seen it.
I think we've all seen it. A consortium of investors recently wrote an open letter to Apple demanding that they figure out how to help people monitor that rat-cocaine cycle that's happening with our phones. We've all seen scenes of families out to dinner where everybody's sitting at the table staring at their phone, and it's kind of horrific. I think a backlash is starting to happen. Zuckerberg recently acknowledged research showing that Facebook makes people feel bad, and he said his New Year's resolution is to fix Facebook. Their objective is no longer just to surface relevant content, but to prioritize meaningful social interactions on their platform, even if it decreases some of their engagement metrics. So they're starting to look at long-term outcomes, at how they're actually affecting people's lives, rather than just focusing on short-term dopamine cycles.

All of this gives me hope that our technology will make us better people, and there is some precedent. In the 1960s, a group of psychologists was asked to study the impact of television programs, then a pretty new phenomenon, on young minds, and the end result, in one case, was Sesame Street. That's a pretty sensational achievement: a use of the medium that positively influenced millions of young minds out there in the world. And I'm hoping the same will happen with our next generation of technology.

So, killer robots are scary, and our fear of them, I think, is natural and not necessarily a bad thing. I do think that we should all spend a lot of money buying science fiction, so that we can envision different possible futures and examine that fear. But the killer robots that we should be afraid of? I don't think they're being built in secret laboratories. I don't think they're coming back from the future, or from outer space, or that they're even on battlefields. I think the real killer robots are under Christmas trees.
They're in our living rooms, our kitchens. You are building them; they're coming down the pipeline. These new technologies can hurt us or help us, but I'm hoping that if we play our cards right, we're much more likely to earn powerful allies than to be chopped into tiny bits by robots that, in our hubris, we've built with buzz saws for hands. Thank you very much.