Good afternoon, everybody, and welcome to the Berkman Center's Tuesday luncheon series. If you have not been here before: we have a special luncheon talk this week, because we've moved venues over to Wasserstein Hall due to the interest in this topic with Kate. Before we get started, a couple of quick logistical announcements. One is that we are being webcast, and we will record this talk and put it on the Berkman Center website afterwards. Please note that, and if you're not comfortable with it, you can watch the webcast outside. Additionally, if you are watching online, please use the hashtag #berkman; if you are in the room, you can also use that hashtag to talk about the talk on Twitter or to ask questions. The second is that we typically go around the room and introduce ourselves at these Tuesdays. We won't do that today because there's a ton of people here, but during the Q&A, if you just identify yourself before asking your question, that will be helpful.

And finally, we're here to welcome Kate Darling to talk to us about robot ethics. Kate is a Berkman Center fellow this year. She's also an IP research specialist at the MIT Media Lab, and she's getting her PhD in intellectual property and law and economics at ETH Zurich. You should follow her on Twitter; she's @grok_. Welcome, Kate.

Hello. Hello. Is the microphone working? All right. Yes, no, maybe. Yes. Okay. Oh my god, there are a gazillion people here. I assume you all came for the free lunch, like I did, but thank you for coming regardless. It's not every day that I get to float my wacky ideas in a room full of brilliant Cambridge people. So I do want to make this maybe less of a lunch talk and more of a lunch conversation if we can — I'll present for maybe twenty minutes, and then I really want to open it up to some discussion, hopefully.

I just want to start by saying, as Amar mentioned, I am an intellectual property scholar. I'm a couple of weeks away from defending a dissertation on copyright, and it has absolutely nothing whatsoever to do with robots or with ethics. So people have been asking me: why have you decided to work on this now?
And some of you might also be wondering why in the world I'm even qualified to be working on this, and I really think the answer to both of those questions is that currently nobody is. There is a handful of people — a couple of psychologists, a few roboticists, a handful of legal scholars — who are working on the intersection of law and robotics. But really, it's a little bit terrifying that the few of us are being hailed as experts in this non-existent field, when I think we still need to step up to the plate and become experts in it. So I've decided to devote some time in the near future to this.

Why I think this is relevant: we have robotic technology increasingly moving into a lot of different areas of our lives — into health care, the military, transportation, education, elderly care, children's toys, and our households. A lot of these areas raise ethical issues, and I'm not entirely sure that our current laws are equipped to deal with all the problems we're going to face — or at least, you could say they deserve reconsideration in light of this transformative technology coming into so many areas. And so, the same way that cyberlaw creates a space for exploration and expertise in an interdisciplinary field looking at the effects of a new technology, Ryan Calo from the University of Washington — who's probably going to go down in history as the pioneer of law and robotics — has been pushing this idea recently that we should create a similar, cyberlaw-like space for robotics, to study it more. I really just want to echo that here and say that I think there's not just space for it; there's really a need for us to be working together with roboticists and looking at this. I'm going to say a little bit more about that at the end, but first I want to give you a quick overview of robot ethics and the issues that we feel, off the top of our heads, could deserve consideration in the near future or now.

Those fit into three broad categories, and I'd be happy to talk more about them in the discussion if you want; I'm going to focus on one aspect in particular that I've been working on, but just to give you an overview. The first category is safety and liability. We have increasingly autonomous technology, and it's going to make the chain of causality for harm longer and more complex. We have legal rules that will assign responsibility, but we might want to rethink whether those responsibilities are still assigned in a way that's optimal — in a way that's going to minimize harm and set the right incentives. And it gets more complicated when you talk about programming ethical decisions into machines. This sounds very science-fictiony, but actually we are very close to — or maybe we're already at — needing to program autonomous vehicles, for instance, or maybe even autonomous drones, to interact with their surroundings and identify what they're interacting with. So we're going to have what are inherently ethical decisions in code — and who is responsible for that? Who is making those decisions?
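[To make that point concrete, here is a minimal, purely hypothetical sketch of how an "ethical decision" can end up baked into an autonomous vehicle's planning code. Every name, weight, and number below is invented for illustration; nothing here comes from the talk or any real system.

    # Hypothetical illustration only: a toy priority rule of the kind a developer
    # might hard-code into a vehicle's maneuver planner. The weights ARE the
    # ethical decision -- whoever sets them has decided whose safety counts for
    # how much, long before the car ever meets a real situation.
    HARM_WEIGHTS = {"pedestrian": 10.0, "cyclist": 8.0, "occupant": 5.0, "property": 1.0}

    def choose_maneuver(candidates):
        """Pick the candidate maneuver with the lowest weighted predicted harm."""
        def harm_score(m):
            return sum(HARM_WEIGHTS[k] * p for k, p in m["predicted_harms"].items())
        return min(candidates, key=harm_score)

    options = [
        {"name": "swerve_left", "predicted_harms": {"cyclist": 0.4}},
        {"name": "brake_hard", "predicted_harms": {"occupant": 0.2, "property": 0.7}},
    ]
    print(choose_maneuver(options)["name"])  # prints "brake_hard" under these invented weights

The point of the sketch is just that the value judgment lives in a few constants that someone had to choose.]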
The second category is not restricted to robotics: privacy. As you may or may not have heard if you've been at Berkman recently, privacy is a huge deal generally. Robotic technology does introduce new ways of collecting data, though, and interestingly, it introduces new ways that people tend to react to much more viscerally than to, say, having their email monitored by the NSA. So this might even offer an opportunity to push some issues that aren't getting enough consideration in other areas.

The third category, which I'm personally most interested in right now, is social issues: the ethics of our social interactions with robots, the ethics of elderly care and child care, moral value issues like sexual behavior, and so forth. What I'm specifically fascinated by right now is one aspect, which is our tendency to project lifelike qualities onto robotic objects. We have more and more robots entering our lives and our homes that are specifically designed to interact with us on a social level, and what studies are starting to show is that we perceive and treat these objects very differently than we do other types of objects. We project onto them: we give them personality, and we ascribe intent and states of mind and feelings to them. Psychologists — you may know of Sherry Turkle at MIT, for example — are starting to make a really powerful case that we bond with these objects surprisingly strongly.

Now you can say, okay, so what? We've always bonded with objects. People fall in love with all sorts of things — their cars, their phones, stuffed animals; people will bond with virtual objects in video games. But we believe that this effect is stronger for robots, and we believe it's stronger because of the interplay of three factors. The first two are physicality — we're very physical creatures, so we react differently to something in our physical space than to something on a screen — and perceived autonomous behavior. It doesn't need to be a lot: if something is moving around in a way that we can't entirely anticipate, that already lends itself to us projecting onto it. Take this simple robot vacuum cleaner, the Roomba. It's not designed to be your friend; it's not designed to distinguish between you and the chairs. And yet just the fact that it's moving around on its own will cause people to name it. They'll feel bad when it gets stuck in the curtain. Ridiculous.

A much more extreme example of this is military robots. There were some articles on this recently, but we've known for a while that robots in military teams often get named, and the soldiers will bond with them. They'll give them medals of honor. If they get damaged, the soldiers will insist that they get repaired and not replaced — they want the same one back. If they can't be repaired, they'll have a funeral for them. And you could say, well, maybe this is social dynamics, maybe this is particular circumstances in the military — serious circumstances — but we really think that a lot of this is just projection. One of my favorite stories in the military context is a robot that was developed to clear landmines. It was shaped like a stick insect: it had six legs, and it would walk around on a minefield, and every time it stepped on a mine, one of its legs would blow up, and it would just continue on its remaining legs.
The colonel who was in charge of testing it ended up calling off the exercise, because he said it was inhumane — he couldn't stand the sight of this thing dragging itself along on its remaining legs. The interesting thing here is that these robots aren't designed to provoke that, right? We project onto them anyway. And if you introduce the third factor — social robots that are specifically designed to target our emotional buttons — that really lends itself to this type of anthropomorphism. These robots are designed to use sound and movement and to mimic expressions that we automatically associate with states of mind and feelings.

These are Pleo dinosaurs; the one on the right is named after Yochai Benkler — they have met in person. We did a workshop at a conference, my friend Hannes and I, and we gave Pleos to groups of six. Each group got a Pleo; they named it, they played with it, they did stuff with it, and then we asked them to torture and kill it. They were horrified — they had trouble even striking the things. In the end we did get them to destroy one of them, but only because we started playing mind games: we said, okay, we're going to destroy all of the Pleos if you don't kill one of them. They did it. It was very dramatic. What we came away from this feeling — and what studies are increasingly showing — is that we respond to social cues from these lifelike machines, and we do so even if we know they're not real. These were adults. They knew these robots had been purchased specifically to be destroyed, but they didn't want to do it.

So why are we talking about this? There are people who say that this bonding effect is a bad thing, that we should discourage it or prevent it from happening. And while I understand some of their arguments, I'm not sure how I feel about this, because, first of all, good luck preventing it — we obviously really like doing it, and you're not going to stop toy companies from developing this type of engaging technology. But even more importantly, there are so many great uses for this stuff. We're already seeing amazing, fantastic results with autism patients and dementia patients, and you can imagine that for education the possibilities are endless if you have this type of engagement. So do we really want to give all of that opportunity up?

But of course, I recognize that there are some issues if we embrace this, if we encourage this type of human-robot bonding. In that workshop I did, people were very uncomfortable with seeing these objects treated in a way they didn't agree with. So one idea is: if we're going to encourage perceiving these objects as objects with a special status, at some point we might need to start treating them as objects with a special status. So — bear with me here — we protect animals from abuse, or from overly cruel behavior. Why do we do that? Is it really because the animals actually feel pain, or is it more because we relate to them? We're projecting onto them; they're giving off cues that we automatically associate with our own feelings. And I think one argument for the latter is our differential treatment of animals, which doesn't seem to have much to do with their inherent capabilities — in culture, or even in law.
In America, for instance, we don't like eating horses — and I'm European, so for me it's like, but horses and cows are both delicious, what's the difference? But even if you don't agree with that — if you say that's not why we protect animals, it's because they actually feel something — or even if you do agree with it but you say protection for social robots goes a little bit too far, because animals actually feel pain and we know that robots don't, I do just want to leave you with two additional thoughts on why we might want to discuss this as a possibility.

The first is that we will increasingly have parts of our society that don't completely understand, or have difficulty understanding, the difference between lifelike and alive. You have elderly people; you have small children. How are you going to teach a small child that it is okay to kick a robot — or that it's okay to watch a video on YouTube of people kicking a robot — but it's not okay to kick a cat?

But more importantly, we might want to discourage behavior that's harmful in other contexts generally. The Kantian argument for animal rights was never about the animals; it was about our societal values. Kant says we can judge the heart of a man by his treatment of animals, for he who is cruel to animals becomes hard also in his dealings with men. We know that these types of behavior tend to translate, and if torturing a thing that responds in a lifelike manner causes us discomfort and feels wrong, there could be reasons why it feels wrong to us that we don't want to get rid of in other contexts. Taking away that piece of empathy within us — it's not entirely clear whether that would translate, but there's some indication that it does. In other contexts we do see correlations in this behavior: for instance, in some states, an animal abuse case in a household will automatically trigger an investigation into child abuse if there's a child in the same household, because these behaviors correlate. So really, this isn't at all about protecting objects.
It's about thinking about societal values, and about encouraging the behavior that we want.

Before I open it up, just two final notes. That workshop I did with the Pleos wasn't very scientific, obviously — it was just a workshop — but I do plan to do some experiments replicating it in a controlled setting and looking at what's actually going on. I'm still in the brainstorming phase. I want to look at what role social dynamics play, the difference between physical things and virtual things, how much interaction you need in order to bond with something, et cetera. If anyone wants to talk about this, or knows anyone who's interested in this or working on it, please put me in touch. I know Peter Kahn and people at the University of Washington are doing some of this work, but I don't really know many other people.

And then I just want to say again that we need more people working on this — maybe not on anthropomorphism specifically, although I do think my experiments are important and interesting, because they deal with a murky area that very few people are even aware could be an issue. Drone warfare and autonomous vehicles and medical procedure robots are getting comparatively more attention right now because they have effects that are very dramatic and visible at the moment. But generally this whole field really needs more people, and we need to be talking to each other more. We need roboticists talking to legal scholars and the other way around; I think it benefits both sides. Even for the roboticists — I don't want to restrict anyone's research and development, I don't want to freak anyone out, but it's a fact that if you're thinking a little bit about the effects of what you're building, there are points in time where you can make certain design decisions that go one way or another. If you're thinking a little bit about data security, you might make a different decision, and that's important, because once standards get adopted they're very hard to change. And on the other side, we legal scholars are not exactly known for our technical expertise, so we really need to be talking to the people developing these emerging technologies; otherwise we'll miss out on very interesting and important legal questions. I think Cambridge is a great place to start doing interdisciplinary work on this, so that's my message: please support interdisciplinary work.

And with that, I hope we can get a good discussion going. I don't know if we're supposed to use the microphones because of the live stream. I would encourage everyone, though: if you disagree with something someone is saying, challenge it; if you have a better answer to a question, please speak up.

Hi, thanks for that talk. My name is Dassa, and I'm a postdoctoral research fellow with the Social Media Collective at Microsoft Research. I had two questions.
The first was — I might have just missed it, but I wasn't totally sure what definition of robot you're using, so I wanted to ask about that. And then I also wanted to ask about maybe a different part of ethics around robots. I used to be more of a labor activist, so I was wondering about the conversations that aren't starting about labor — the labor that goes into so much of the anthropomorphization. What kind of protections would there be for that? I can almost imagine it going the other way: sort of like you were saying we need to protect robots because it makes us better people, protecting robots could also be a way of signaling our care for other workers.

Okay, I'm going to ask you again about the second question, but the first question — thank you for asking; that's an excellent one. If we're talking about legal regulations for social robots, obviously we need a really good definition of what that is. My idea of the definition would be a physical object that has a certain kind of autonomy, as defined by robotics, and that's specifically designed to interact with us socially. I realize that's not a perfect definition and it would take some work, and of course any line you draw is going to be arbitrary in some way. But the law deals with that type of thing all the time, so that's no reason not to try, and I think we would come up with something. So that's the definition question. The second one, I'm not sure I followed.

So, there's been a discussion on the iDC listserv — I don't know how many of you are on that — about hyperemployment and underemployment, and I started thinking about servers and how much labor they're doing all the time. If you've ever worked somewhere where the servers went down, suddenly you realize all this labor is going on all the time. So I was thinking, from a labor standpoint, there's this question: we do a pretty good job of protecting the hours that people can work, but we don't do that for machines — until they break, and then we get rid of them. And so I was wondering — I'm not completely sure I buy your arguments, but if I were going to, I could extend them to say: yes, and we should also unionize the robots.

So, I believe in granting protections and rights according to everyone's actual needs, and I would be hard-pressed to come up with a reason to protect robots from working long hours, since I don't see any evidence that they mind doing that. My idea of protecting robots is closer to animal abuse protection, but of course it would have to find a line where there's an actual difference between robots and animals: you can work an animal to death, and working a robot to death is maybe less of an issue, whereas setting it on fire on YouTube might cause some social distress. I'm really not interested in protecting the robot for its own sake — at least not until we see the type of AI they talk about in science fiction. I don't know if that answers your question. Would you want to protect robots in the labor force?
We should be good to animals because it signals norms of care in a society. So if you're concerned, as I am, about issues of labor and protecting workers' rights, then maybe that would be a good way to signal it — I'm just trying to draw that parallel.

I like the idea — I've never thought of that. No, I really like it. It's fun.

Adrian Gropper, Patient Privacy Rights. My question is about the relationship between intellectual property, robots, and ethics, in the sense that animals and people are not intellectual property; the innovation and changes happen at the edge — in the family, on the farm, wherever. How important is something like open source software in the robots, so that from an ethical point of view we match this idea of local intervention, local rules, and not having centralized control built into the system?

Wow, this is a big one. There are two ways to answer that: I can answer what I think should happen, and I can answer what's probably going to happen. In robotics right now you have a hardware side and a software side. Probably, if intellectual property laws stay the way they are now and the landscape doesn't change much, you're going to see a development similar to computers and smartphones, where you have proprietary systems with some open source types of apps on top — although that raises other liability questions that people are thinking about. I guess, which answer do you want from me? I'm obviously very biased when it comes to IP; I'm skewed towards open systems and keeping things open so that everyone can innovate, but I'm not sure that's what will happen.

Is there a microphone? I think this one might be compatible. Hi, Ethan Zuckerman, Center for Civic Media. You referenced drones and ethics early on in this talk, and you referenced the really intriguing idea — which I would love for you to expand on a little — that we might at some point try to create machinery that has ethical judgment and is trying to figure out: we have a target here that we have been tasked to hit, but we also have visual evidence that perhaps there are children nearby who might be harmed by this. How do we have that robot make the ethical decision whether to strike or not strike? That's an intriguing issue, but there's another intriguing issue behind it, which is that we can easily imagine a defense contractor saying: we have the finest Harvard-trained ethicists working for us on our ethical algorithms, which are part of our trade secrets and can never be revealed; otherwise the Chinese will have robots more ethical than ours. If we hit a point where we have to do ethical review of algorithms, does this push us into a space where we really start demanding that this stuff be open source, so that we can actually review the ethical choices being made in our name, through our military, through our algorithms and our drones?

I think a lot of these questions have already been explored in the cyberlaw space, but my hope is that with actual physical drones out there killing people, people will drive this discussion to a new level and hopefully come up with some actual solutions, and not just this, you know, Google-type thing.

I'm going to jump in and sort of ask — my name is Madeline.
I'm a PhD student in anthropology, so forgive me if this is a given in the legal realm, but I have another definitional question, about ethics: what do you mean when we say "ethics"? In anthropology that's not a given, so forgive me if there's a clear-cut answer here.

I don't know — I'm inclined to just pull up my browser and look up the definition of ethics. I brought up Kant. I guess I'm not an ethicist, but what is your definition of ethics, and do you feel like what I've been talking about fits in that space or not?

Sure. I guess my question would be that how we define ethics — whether ethics are relational or categorical — would change how we're able to evaluate some of these questions, maybe about animals, about labor. It's more of a question about how you're framing your research.

Yeah, I guess I haven't done that, and should. Off the top of my head, I feel like systems of ethics are kind of a social contract: we all agree that this behavior is generally desirable and that behavior is generally not, and that shifts depending on culture, depending on what we all agree on. That's how I feel about ethical systems — not as absolutes, but as something we agree on together. I don't know if that answers your question.

That's fine. It's just how we treat each other. That's fine.

I don't — oh, sure. Hi, I'm Nathan, a Berkman fellow and a PhD student at the MIT Media Lab. What struck me about your talk, which is really interesting, is how, in some ways, while we have machines becoming more personal, we also have people becoming less and less like people — in some cases online, where you have systems like Mechanical Turk, where people are doing work that to the user looks like something a machine just did. And I wonder if what you're talking about is gesturing towards an ethics around personas, or identities, and how we treat identities. We might have one category for the ethics of how we treat people within systems, and another category for how we treat the decisions that algorithms are making. I wonder if you have any thoughts about that idea of persona and how it might fit in with the robots.

Yeah, that's a super interesting thought; I've never thought about that. Do you know anyone who has written anything on it?
I think some of the work on the Turing test and ELIZA might be one direction to look. Beyond that, there's work on pseudonyms and fake personas in online conversations, and there's some research on ethics in story situations — like the research on ethical behavior within interactive video games, where people are exposed to opportunities to harm avatars and characters that they know might not have a person behind them. For example, there was research that tried to reproduce the Milgram experiments in video games and took measurements of people's affective state, and tried to get a sense of whether people felt those situations were similar or not. Again, this is all constructed around these debates about the ethics of choices to harm, and how people construct those ideas, but I'm sure there's other work around a broader set of ethical questions as well.

Yeah — what fascinates me most about all that is whether there's a difference between online and the physical world, whether that changes the whole persona idea, and also the willingness to harm, et cetera. That's something I'm going to be exploring more, but I am also interested in this whole persona question. There was an experiment recently where people were given — I'm not sure if it was a robot or an object, I don't remember, maybe someone knows it — but one group was given the thing with a narrative, so it had a name and it came from somewhere, and the other group was just given the thing. The people with the narrative bonded much more strongly with it: they wanted to keep it, they liked it more, et cetera. So I wonder about that type of thing, and also whether it's different online and offline.

Okay, I wanted to go back to the question about free software and robots. Sorry — I'm Camille, and I'm a fellow at the Berkman Center. I think that, as in many debates we're having, focusing this on drones sort of tricks the debate — even though it's a really good point — because if we think too much about free software and drones, we get tricked into debates that we don't like. But you can also think about it in terms of cars, as we're moving into robot cars, and what that's going to mean for people. I think it's going to turn out to be not just a question of who makes the ethical decisions implied, but truly a question of autonomy. As we think through the fact that most of the cars we're driving on the roads already have antennas that put them on the network, and already have software in them that makes driving decisions — even if they're small ones — the question of autonomy becomes a question of: does the software allow us to see whether there are backdoors in it? And it's not a weird, crazy science fiction question, because we see through the legal work being debated that there is a willingness of governments and companies to put backdoors in the software. Ultimately, in the car situation, it becomes a question of who's driving your car — and, for these robots, who's truly giving them orders?
So I think that, for this question, thinking it through with frameworks that are not the drone framework or the military framework, but frameworks closer to the ones you offer — things that we're driving with, being with, things that are closer to us — helps us see the questions that we're going to face.

Yeah. Am I getting you correctly that you're talking about surveillance issues? Because that's a big one, but I think the whole liability question — the harm question — comes in there too. Actually, this ties into the intellectual property as well. What I see happening is that we're going to have platforms that maybe start out open, but then you have different types of software coming onto these platforms, programmed by all different kinds of people, and then physical harm occurs — and how are you going to assign responsibility there? If you assign responsibility to the platform, to the manufacturer of the platform, under current product liability laws, that will totally discourage open platforms and make everything proprietary. Or you make some exception, like we do for Facebook, or I guess for Android tablet platforms: if you make an Android tablet and someone downloads an app made by somebody else that ends up losing all of your data, then you can't sue the platform — or at least I think we've made an exception there. So what's going to happen when the damage happens in the physical world, when it's a car causing physical damage? That's an issue. And then the backdoor thing — like I said, there are not enough people thinking about this, there are no answers, and this technology already exists. So we need to be thinking about it more.

Hi, Rich Ferrante. I have a question around adaptability — the adaptation of the algorithms — especially around elder care; I'm thinking in terms of assisted living for people with cognitive disability. What counts as ethical, in a fairly straightforward sense — say, the standard we'd agree on in this room — is going to be different depending on the cognitive ability of the person you're caring for, and you're going to need to adapt. The algorithms are going to need to adapt as the person, probably, declines, and I think there are issues around thinking that through: are you going to adapt too quickly? Are you going to do too little? How are caregivers going to be able to change those adaptation rules as the patient progresses? Do you know anyone working on that kind of thing?

I don't know. I mean, we're already seeing social robots being used in elderly care. I don't know of any that adapt their behavior, but that's coming.
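[As a purely hypothetical illustration of the kind of caregiver-adjustable adaptation rule the question points at — all field names and thresholds below are invented for the example, not taken from the talk or any real product:

    # Hypothetical sketch: an eldercare robot's policy that tightens automatically
    # as a cognitive-assessment score declines, but always defers to explicit
    # caregiver overrides (the "who gets to change the rules" question).
    from dataclasses import dataclass, replace

    @dataclass
    class CarePolicy:
        wandering_alert: bool = True    # notify a human if the patient wanders
        physical_block: bool = False    # physically block the door (more intrusive)
        night_check_minutes: int = 60   # how often to check on the patient at night

    def adapt_policy(policy, cognitive_score, caregiver_overrides):
        if cognitive_score < 0.4:
            policy = replace(policy, physical_block=True, night_check_minutes=15)
        elif cognitive_score < 0.7:
            policy = replace(policy, night_check_minutes=30)
        # Caregiver settings always win over the automatic rule.
        return replace(policy, **caregiver_overrides)

    # e.g. the caregiver insists the door is never physically blocked (fire risk):
    print(adapt_policy(CarePolicy(), 0.35, {"physical_block": False}))

How fast the thresholds move, and who is allowed to edit the overrides, are exactly the design decisions being asked about.]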
Yeah — I'm not exactly sure what your question is. I'm pretty sure the companies selling these robots are going to be very interested in making algorithms that respond to patients' needs.

Right, but I think there's a question of what a good response is.

Yes, and there are potentially going to be liability issues around that, and certainly ethical issues. Again, think of the dementia patient who may start wandering: are you going to prevent them from wandering? Are you going to prevent them from wandering if there's a fire in the house? Those kinds of things are just going to come up.

Yeah. In general, what we should try to do with these robots in elderly care and childcare is have them supplement, not replace, actual care. If the robot is replacing a human in stopping the patient from wandering around, that's probably not optimal, because if there's a fire and there's no human around to take care of them, that will lead to harm. It's probably better than nothing, but yes, that's definitely an issue. It's like most of these questions — I'm just like, yeah, that's totally an issue, someone should be working on that.

Hi, my name is John Weaver. I'm an attorney and author, and I actually have a book coming out about this in the near future. I have a branch-out question from what we've talked about so far. I read your paper from the We Robot conference that talked about social robots and compared them to animal rights law, and a lot of it seemed to focus on reactive laws — laws reacting to how we interact with robots. The idea seemed to be that the robots come onto the market, we interact with them somehow, and then out of that interaction there's a demand for laws to protect robots in some way. Have you given any thought to what would be ideal, say, proactive laws in the face of the technology?

You mean not waiting for societal demand, and just saying what laws would be optimal?

Yeah, exactly. Is there a model that you think should be introduced?

Well, the idea that we might want to protect these robots to encourage good societal values is meant to be provocative, so I'm not saying yes, we absolutely need to do this — I want to have a conversation about it. But I think it might be worth a thought that, whether or not people start clamoring for rights, it could be a good idea to think about this, and about implementing it — why not? I don't think it's going to happen, though. But other than that, what do you think?

What I would like to see, ideally, is for us to get ahead of the technology. There's sort of a danger in legislating too soon, where what we think the industry or the market is going to do turns out to be wrong — we have regulations, and the reality looks nothing like them. That's a problem. But looking at some of the history of recent big technological changes, it seems like if we don't get ahead of it soon, we never really catch up, because it moves too fast.

Yeah. Sorry — I was just talking about this whole protecting-social-robots idea, but with regard to the other issues that we've touched on:
yes, we need people being proactive, coming up with solutions to these issues and proposing legislation. I really think that needs to happen.

Hey, Andy Sellars, Berkman fellow. I want to throw out a thought about robo-exceptionalism for a second. You mentioned the Pleo study, which is a fascinating thing that you did, and you drew the analogy from there to pets. For me, I can't make that leap yet — I don't love any piece of technology as much as I love my dog, and I see nothing on the frontier getting close to that. You could of course have gone another way with it: you could compare the Pleo to just a stuffed animal dinosaur. So what is it about the Pleo, the robotic dinosaur, that's different from the same test done with a stuffed animal, or a piñata? And I will cite the scientific study of Toy Story, where of course Sid, the evil kid next door, abuses his toys, and that's a signal Disney sends us that he's not quite right. So why is it that you think — or do you think — that a robotic dinosaur would be different from a stuffed dinosaur under the same test conditions, and why?

It's not just that I think so. Sherry Turkle has done some really fascinating work on this, on children and the gray area, and she finds that children do distinguish between stuffed animals and robotic objects. Even though they know the robot isn't alive, they're confused — there's this gray area where they're not sure whether it feels pain and so forth. I think a lot of that is movement: we're hardwired to respond to things that move in ways we subconsciously associate with something alive. So there's that. I do like the idea of filling robots with candy and seeing whether people are then willing to smash them.

There's also this great story: the first time I ever gave a talk on this, a woman asked a question. She said, well, I would have absolutely no problem destroying a robot — does that make me a bad person? And I wanted to say: yes; next question. There are studies on people who lack empathy, and you see that lack of empathy across all kinds of areas and interactions. But I'm sure your dog is awesome. I like my robot.

Hi, I'm Rowan Curran, a research associate at Forrester Research. I'm interested in the expanded definition of robots, including the autonomous software agents we're interacting with on various levels — the very light versions in Google Now and Siri and whatnot. I was wondering whether you thought there was a large overlap between the physical robots that are out in physical space, and us interacting with them, and the software robots; and whether there's also a chance for the robots to channel our behavior, by encouraging us to make certain decisions or suggesting certain things, rather than just us being encouraged not to hurt them and not, you know, blow them up.
So yeah, I think there are some differences between algorithms and physical robots. For one thing, this type of projection tends to happen more with physical things. For another, people generally react much more viscerally to physical things — in the case of privacy, if a robot is watching you, that's going to freak you out more than if the NSA is watching you, in general. With regard to whether robots can channel our behavior — not to undermine everything I've just said here, but I do wonder: say McDonald's gets its hands on a bunch of children's toys that are social robots interacting with the kids socially, and the toys are telling the kids to eat more McDonald's, and the kids are responding to that. That is something we also need to think and talk about when it starts to happen. This can be used for good and for evil.

I'm Ron Newman, a software developer. I wanted to raise two things. The minor thing is, I'm wondering if there's a better term we can use than anthropomorphism, because it seems like what you're talking about isn't so much people attributing human attributes to the robots as attributing animal attributes to them. The more substantive point: do you think the physical form of the robot has some effect on how people deal with it in this way? For instance, we don't have a very uniform idea of animal protection. It's really bad to torture dogs or cats or honeybees or butterflies; it's perfectly fine to do it to rats and houseflies and mosquitoes.

Yeah. With regard to anthropomorphism — from what I understand, the term also applies to our projections onto animals, because really it's just about identifying things we relate to. But I'm willing to rethink that term. With regard to the shape of the things, or the design of the things — I think that makes all the difference in the world, and I think it's one of the reasons this could become an issue. Animals can be cute or not cute, but if social robot designers want to make something that's incredibly endearing to you, there are certain attributes they can make use of. That might vary a little among cultures, but we do seem to be hardwired to respond to certain things, like big eyes, et cetera. So yes — and one idea for preventing this, I guess, is to mandate that all social robots look horrifying or something. It's not going to work. But yes, that makes all the difference.

Tim Davies, at the Berkman Center. I want to maybe look at that question of theories of ethics and of law, because it strikes me that a lot of what you're saying fits very much with an idea of virtue ethics, in which ethics is about the character we have as people, and whether harming the robot creates a negative character — yet law is often based much more on consequentialist and Kantian theories of ethics, of rules, very strict, and not so much about our character, I think. Exploring those theories of ethics will be really valuable here. But I also wanted to perhaps suggest another term besides anthropomorphism, which is simulation. The simulation of a harm is something that, in some places in law, we already recognize to be deeply problematic.
For example, child abuse images, even simulated rather than actual ones, are seen as deeply problematic legally. And I wonder whether that idea — rather than the character of the thing being the legally and ethically significant issue, the character of our act towards the thing, and what that simulates, what that relates to — might be a more key part of the ethical and legal question.

Yeah, I agree — those are great thoughts. I have thought a little bit about me saying "torturing robots," because can you really torture a robot? No, not really. But you can behave in a certain way, and our behavior can be categorized in a certain way. It's not about the thing; it's about our behavior. So thanks.

Hi, I'm Sal, a citizen journalist here in Cambridge. First, I'd love to have Ethan's ethical arms race — I think we need it. But I'd like to go back to the comment you made about robots being worked to death and how they don't really seem to mind it. That's a design decision — that they're working 24/7. And it's not really abstract either, because if you go through a factory that's partially roboticized, the work is being paced by the robots, and the factories are full of the normal cues that the robots are waiting for the humans rather than the other way around — whether it's parts, whatever, or a robot being shut down for repair. That also loops back to some of the use cases you started with, where robots are specifically designed to be worked to death — clearing landmines, or working in more dangerous areas. And the fact that even when the robot is specifically designed to absorb harm that we would rather inflict on a machine than a human, even hardened military people are saying, no, don't let that landmine blow off its leg — I'm not sure what that says, but I think there's something profound in there. There's the contradiction between "they don't seem to mind" and "we seem to mind them being worked to death," plus the social cues we're getting from factory robots just working endlessly. It seems to me to be the same as the McDonald's case of "eat more Big Macs" — you know, "work like a robot." There seems to be sort of an ethical worker case there.

I'm glad you brought this back up, because I dismissed the thought too easily before. I really think that, in terms of signaling to people, this could be something fun to explore — this whole robot worker ethics thing. I don't think that Walmart would need to have a food drive for its robots, necessarily, but I am going to think a little bit more about this.
I think that's a really fun thing to explore.

So actually, Sal's point was almost exactly what I wanted to say, just adding one more thing, which is the really sharp contradiction in this proposal. You're proposing a sort of universalizing way of thinking about robots, but you've also described robots that we use in two very different kinds of ways. One is to be as human-like as possible, to replace human functions because it's too expensive to get people — we can't hire one person for every elderly person, but maybe we could buy one robot for every elderly person, and that would solve some of those needs. We're trying to get a robot to do things as close to a human as possible, to serve those sorts of niches. And then Sal brought up this other set of circumstances, where we get robots to do all the things we don't want humans to do — which, in some ways, already has analogs. I think that creates problems for a universalizing set of ethics, because we have these really divergent purposes for what we want to do with the robots. The robots we're sending into nuclear fallout zones — we're sending them there to be destroyed because we don't want to be destroyed — and these other things we're using in very different ways. I think there are also parallels that go back to animals: the canaries in the coal mine were animals we used to die before humans had to die. So we already have ethical frameworks where, as you brought up before, we treat certain animals in certain ways: it is not appropriate to strangle a canary in a cage to death, but it is appropriate to take a canary down into a coal mine, where it dies before a human does. So some of these things are contextual too. That's probably more just repeating what Sal brought up.

No, but I really like it — and that's why I try to limit the definition of the robots I'm talking about here to robots that are specifically designed to interact with us socially. If you broaden it to all robots and say we need to treat all robots in a respectful manner, then you can't have these military robots, you can't have factory robots. It gets very complicated and messy, so there's kind of an arbitrary cutoff, for me at least. Although I do see your point — and on the question of the military robots that are specifically made to go and be destroyed: don't make them shaped like stick insects, you know? People need to be conscious of what they're designing and of the effects.

Yeah, hi — Boris Anthony, Nokia, in Berlin; sort of a random seed in this room, perhaps. This is a really complex topic, obviously, and we're trying to perceive something through the fog; we're trying to figure out the calibration for how we're supposed to measure these things, starting with first questions like: what do you mean by robot? And anthropomorphism — what is sentient, what is alive, and in what cases do we convey that anthropomorphic sense onto something that's moving? I sense there might be a job of structuring this conversation with attributes and levels of things — for example, what degree of autonomy do we perceive in the thing? What degree of responsibility do we attribute to it if it does something right or wrong?
What amount of decision-making power do we give it or not? These might be some of the factors involved in then saying: how do we react to this, how do we as a culture respond to it as ethics, and how do we code that into law, into policy, into code, right? I sense there's a calibration process sort of ongoing in this room, and we're going from very high conceptual levels down to use-case scenarios — almost production-level, like, "oh, that's a design decision," which is true. That's sort of a random ramble, as I often do — Ethan's shaking his head — I'm going to be that guy who doesn't really have a question. But I'll finish with this: there's probably a long list of Japanese anime I would love to share with you, which tackles this head-on, culturally, in prime-time television, and it's fascinating to see the Japanese approach to it. It's almost the same comment I used to make when I visited the Berkman Center earlier in the decade: you're talking about privacy, and there's not a Japanese person in the room — they have a completely different perception of what you're talking about. I think that's something to keep in mind as well; I just wanted to lob that in.

Yeah, the reaction to the technology — the cultural reaction — is very different depending on your culture; that's true. In certain parts of Asia they're much more accepting of robots as beings and as participants in society; they've already kind of adopted that approach, whereas in our Western society we're indoctrinated by these science fiction films that generate this fear of the robot revolution. And we also have this background of believing that living things have a soul and non-living things don't. So that creates all of these cultural differences with regard to how much autonomy a robot actually has and what its inner design is. I would note that there's a significant difference between what robots are capable of and what we perceive them to be capable of — so there's social perception versus inherent intelligence to think about. The robots are not smart; they're not reaching science fiction levels anytime soon. But I think the projection we have onto them is going to be an issue far sooner than their actual capabilities will match it.

Yeah. Agreed. So I'm really glad we're going in the direction of science fiction, because I was certain we weren't going to get through this talk without mentioning Philip K. Dick and Isaac Asimov. And I think that's entirely appropriate, not just because we can talk about robot-on-robot ethics, which you often get in science fiction stories, but because there is one thing that's really valuable about the story approach, which is that stories are very often ways of structuring, identifying, or sharpening points of ethical dilemma. And I wonder, in the context of your research — and this is the question — whether you have started to look back at this fifty-year history of American, Soviet, and Asian futuristic writing that is all about finding and declaring and exploring these ethical dilemmas; that's the point of these stories.

Yeah — probably one of the reasons I got into this is that I read way too much sci-fi as a kid and young adult, so I have read a bunch and I'm a huge fan. And yes, a lot of these questions have been explored.
But I find that a lot of the robot ethics questions in science fiction deal with the inherent qualities of robots — so, back to robots actually being capable of feeling things or being intelligent. I don't know if any of you have started watching that new show on Fox that premiered yesterday or the day before, called Almost Human. It's about what happens when robots can actually have emotions and experience things, and it's set in 2048, which is entirely unrealistic — we're not going to have that type of technology by then. And while it's interesting and fascinating to think about, and maybe we should be thinking philosophically about these questions, it's not a very practical discussion, because we don't know what type of world we're going to be living in when those questions actually come up. So I'm more interested in — and this is probably what brought me to it — projection becoming an issue far sooner. One of the differences between Blade Runner the movie and the book by Philip K. Dick, for instance, is that in the book he falls in love with the android, but she doesn't love him back, and he knows that, and he falls in love with her anyway — whereas in the movie it's a Hollywood romance. It's still a fantastic movie and everyone who hasn't seen it should. But yes, science fiction is important; it does answer some of these questions, and it raises some questions that we're not at yet.

Chris Peterson, Center for Civic Media. First of all, I want to go back to Andy's point, because I spent most of my teenage years blowing up Furbies in the woods, and I'm now wondering: was I a sociopath, or was I just a teenager in New Hampshire? Maybe those are the same thing.

Let me answer that: if it was Furbies, that's normal. Furbies are fucking annoying; I would destroy them as well.

I was going to say, isn't your picture in Civic one of you holding a skinned Furby? Yes. All right, just so we're on the same page. Now, the substantive question is this. We were talking about liability, especially for autonomous systems, and there's a really interesting alternate legal history from early modern and pre-modern Europe, where inanimate or animal actors were often put on trial. For example, it was not uncommon, if a vase fell off a ledge and killed somebody, for the vase to be put on trial and destroyed. In one really interesting case in a Brazilian colony in the 1700s, a bunch of termites ate out the foundation of a church, and the termite colony was put on trial and appointed a defense attorney, who argued that the church had taken all the wood from the surrounding area and was therefore partially responsible. The judgment was to set aside a pile of wood so the termites would have something else to feed on. We look at that somewhat condescendingly, right? But I do wonder whether there's an interesting space to think about emergent, autonomous algorithms, actors, robots that often have properties or behaviors not intended by — or not capable of being built by — their designers, where we might imagine things like: okay, this learning high-frequency trading algorithm developed something really bad, and it should be put to death.
Can we think about a criminal legal regime that is capable of distinguishing between the builders of systems and those systems themselves as emergent, semi-autonomous or fully autonomous actors, from a legal liability perspective?

I would say it depends on what you believe the purpose of a criminal law system is, right? Is it to reduce crime and harm, or is it to satisfy some societal notion of justice and prevent anarchy by giving people the sense that justice is being fulfilled somehow? If you subscribe to the latter, and people are very upset that a robot killed someone and they just want vengeance — they want the robot destroyed before their eyes — then sure, whatever: social contract, man, I'm on board, if that's what people want. But I kind of wonder whether we might want to think more about setting incentives for harm reduction in those cases. We don't put little children on trial if they can't — well, yes, we do sometimes, but only if we believe they had some responsibility for what they did and some inherent capability to understand what they were doing. But yes, good point — again, it depends on what purpose you're trying to serve with a justice system. Wasn't it called noxal law, the law you're talking about, where they would put the neighbor's cow to death if it trampled your corn and things like that? That has existed in human history, and if there's societal desire for it, then, you know.

It's like we're caught here between the human gift for imputing meanings where meanings haven't been imputed before — "this animal, this machine, is what I say it is" kind of thing — and the way we police that human inclination with what Austin would call felicity conditions. You can call your pet or a machine anything you want, but unless certain conditions are satisfied, it's not accepted as a proper constitution. So the question here is sort of an anthropological issue. You can get an acknowledgement of these felicity conditions in an ethnographic interview: you can say to somebody, look, you've turned this robot into a sibling or a child, and they'll say, well, it's really just me — they'll back off, they'll acknowledge the felicity condition, and they'll disengage from the imputation of meaning. Do you see any evidence that these felicity conditions are softening or changing, and what is the kind of public forum in which people will license themselves to move away from those felicity conditions, or to rewrite them?

Wow, I think that's a really interesting question, and obviously something that still needs to be explored in this space. So — go do it, please. Or help me; I'd be interested in exploring that as well, experimentally or by talking to people.
I think that's a super interesting question.

Nathan again. I wonder if it's actually ethically better to kick the Pleo. There's a long tradition of argumentation that says stories and role play are a really important part of human development — understanding different alternatives, exploring empathy, and having conversations about what's right and wrong. Alison Gopnik, who looks at childhood development, talks about the importance of the imagination, and of having kids role-play things that are wrong or factually incorrect as a way to think about what's right. I'm wondering what your thoughts are on whether it might actually be morally valuable for people to put themselves into situations where they could do very nasty things to robots, as a way to explore ethics.

I think that's a super interesting question, but my intuition would be that it really depends on the context. If you're having a conversation around it, then it could probably be something valuable. If you're not having a conversation around it, then it raises the same question of whether violence in video games translates to the real world or not — but it raises it on what I think is a different level, because it's in a physical space. There are different areas where this is going to become more of a question, and already is. I don't know if you saw — this is probably not something I should bring up on the live stream, but I'm going to do it anyway — recently there was this virtual little girl going around catching people who were watching child pornography. That raises a whole slew of questions, because this wasn't an actual girl: were they even breaking the law, and why is it unethical? And, you know, should we be developing thirteen-year-old boy robots to give to priests so that they can do whatever they want, and then they'll leave the real boys alone? That's this nature-versus-nurture question that's never really been answered, which I think with social robots could become an issue — or maybe even an area where we could study it more. Thank you, guys.