Beautiful day out there. Thank you for joining us here today. It gives me great pleasure to introduce you to two thought leaders who inform, inspire, and shape my own thinking about the relationship between technology and society on at least a weekly basis, and I'm not kidding. It's really fantastic to have Iyad Rahwan and Joi Ito here with us for an hour and a bit to talk about the big topic of AI and society. Iyad is an associate professor at the MIT Media Lab, where he leads the Scalable Cooperation group, among other things. He has done really amazing work over the past couple of years looking at the interplay between autonomous systems and society, and how these systems should interact with each other. He recently published a study in Science that got a lot of press coverage, addressing the question of whether we can program moral principles into autonomous vehicles, and maybe he will talk a bit more about that. And then of course Joi Ito, director of the MIT Media Lab and professor of the practice, a person who doesn't really need an intro, so I'll keep it extremely brief by highlighting two of the must-reads from recent months. One is a conversation he had with President Obama in Wired magazine on the future of the world, addressing AI issues among other topics; the other is his book Whiplash, which is something of a survival guide for the faster future that we're all struggling with. I highly recommend it; I greatly benefited from it. So these are not only two amazing thought leaders but also wonderful collaborators and colleagues, and I have the great privilege, together with the Berkman Klein team, to work with both of them as part of our recently launched joint venture, the AI Ethics and Governance Initiative. It's just wonderful to have you here to spend some time with all of us and share your thoughts. So thank you very much, and welcome.

Thank you, Urs.
First of all, some of you may be wondering: wait, this wasn't the talk I signed up for. So just to give you the provenance of this: originally there was a book talk I was going to do with Berkman, and then I said, well, why don't we bring somebody else interesting in. Joshua Cooper Ramo, who wrote The Seventh Sense, joined; we were going to have a dialogue about his book and my book, and then he had a family emergency and couldn't make it. I grabbed Iyad, and I also realized, just as Urs was saying, that we're doing a lot of work together with the Berkman Center on AI and society, and I thought this would be a sufficiently relevant topic to what we were going to talk about anyway, so it wouldn't be that much false advertising; it's also an idea that relates to my book. I can't remember who it was, but a well-known author told me: when you give book talks, don't explain your whole book, because then no one will have to buy it. This book actually started about four years ago, and we were just wrapping it up as we saw a lot of this AI-and-society controversy slash interest start. So the book ends where our exploration of AI and society begins; in a way it overlaps with what the book is about, but it's sufficiently different that you have to read the book to understand the whole story. But let me start with a few remarks, then we'll have Iyad present some of his work, and then we'll have a conversation with all of you; feel free to interrupt, ask questions, or disagree. So, I co-taught a class with Jonathan Zittrain in January, in the winter semester, his traditional course on Internet and Society, I think it's called The Politics and Technologies of Control. Brish Nair was there, others were there; it was a fun class.
One of the framing pieces for how we talked about this was the Lessigian picture that many of you may have seen in his book, where you have law, markets, norms, and technology, with you in the middle, and what you are able to do is determined by the relationship among them. These all affect each other: you can create technologies that affect the law, laws that affect norms, norms that affect technology. So some relationship between norms, markets, law, and technology is how we need to be thinking in order to design all of these systems so that they work well in the future. And I think one of the key reasons the collaboration between MIT and Harvard Law School, the Media Lab and Berkman, is so important is that you have to get all the pieces and all the people in the same room. The problem comes once everybody has a solution and is trying to convince each other of it; I call that people selling dollhouses rather than Legos. What you want is a whole pile of Legos, with lawyers and business people and technologists and policy makers playing with the Legos rather than trying to sell each other on their own dollhouses. That was what was fun with the class: I think a lot of the lawyers realized that, whether you're talking about Bitcoin or differential privacy or AI, we still have a lot of choices to make on the technology side, and those can be informed by policy and law. Conversely, a lot of the technologists thought that law was something like the laws of physics, which just are, when in fact laws are the result of lawyers and policy makers talking to technologists and imagining what society wants. And so we're sort of in the process right now of
struggling through how we think about this. Importantly, it's already happening, so it's not like we have that much time. I think it was Pedro Domingos, in his book The Master Algorithm, who says, and I'm paraphrasing, something like: I'm less afraid of a super-intelligence coming to take over the world and more worried about a stupid intelligence having taken over already. I think that's very close to where we are. If you look at Julia Angwin's article in ProPublica, I guess a little over a year ago: she happened to find a district where they're forced to disclose court records, and she was specifically going after the fact that machine learning and AI are now used by the judiciary to set bail, to decide parole, and even in sentencing. They have this thing called the risk score, which the machine pops out after assessing the person's history and looking at their interviews. She crunched all these numbers, because she works with data science, and showed that in many cases, for white people, the score is nearly random; it's a number, but it's still almost random; and for black people it's biased against them. What's interesting is that when I talked to a prosecutor the other day, he said, well, they love these numbers; he didn't say "I," but in general they love these numbers, because you get a risk score that says this person has a risk rating of eight, and then the court can say, we will give you this bail. The last thing they want is to set some bail, have the person go out and murder somebody, and have it be their fault; but if they've taken the risk score and can say, I just looked at the risk score, it absolves them of that responsibility. So there's this weird moral hazard: even though you have agency, you're able to push off responsibility to the machine and say, well, it was math. And the problem right now is that these algorithms are running on data sets and training systems that are closed. We see this happening in a variety of fields; we see it in the judiciary, which is a scary place for it to be happening, and so, as part of this initiative and the AI fund that we're doing, we're going to try to look at whether we can create more transparency and auditability. We're also seeing it in medicine. There's a study I heard of where, when the doctor overruled the machine in diagnostics, the doctor was wrong 70% of the time. So what does that mean? If you're a doctor and you know for a fact that you're 70% likely, on average, to be wrong, are you going to overrule the machine? And what about the 30% of cases where the doctors are right? It creates a very difficult situation. And imagine war. We talk about autonomous weapons, and there's this whole fight about it, but in fact a lot of the data driving intelligence, the way you get onto a termination list, onto the list as a target, involves statistical analysis of your activity, your emotions, your calls. There's this great interview, I think it was in The Independent or with The Intercept, with a man, I think he was in Pakistan, and I'm going to get this slightly wrong but it's close, who had been attacked a number of times, where the collateral damage was family members being killed. So he knew he was on the kill list, but he didn't know how to get off it. He goes to London to fight it: wait, look at me, talk to me, I'm on this kill list, but I'm not a bad guy; somehow you got the wrong person. But there's no interface through which he can lobby and petition to get off the kill list. So even though the person controlling the drone strike and pushing the button may be a human being, if all of the data, or a substantial amount of the data, feeding into the decision to put the person on the kill list is from a machine, I don't know how different that is from the machine actually being in charge. We talk about future autonomous systems and robots running around killing people as a scary thing, but if we are just pushing the button that the robot tells us to push: pick A, B, C, or D, but the robot says it's C, you're going to push C, right? Apparently that was how Kissinger controlled Richard Nixon: the answer was always C. So when we think about practice, we may already be in autonomous mode on many things. And then I'm going to tee up Iyad, because I think one of the first places where the rubber meets the road, literally, is with autonomous vehicles. A lot of the people I talk to say the real soul-searching around this is going to happen when the next big autonomous vehicle accident happens where it's clearly the machine's fault: how is that going to play out? So that may be one of the things. The last thing I'll say, and this is where the Media Lab is excited, is that I think it's partly an interface design problem. You may think you have the right to push the button, the right to overrule the computer, the right to launch the missile, but if you have no choice, morally or statistically, other than to push the button, you're not in charge anymore. So what we need to think about is how we bring society and humans into the decision-making process so that the answers we derive involve human beings, and what the right interface for that is. Because I think what we are going to end up with is collective decision-making with machines, and what we want not to be in is human
agency with no real decision-making ability. We can talk more about some of the ideas later, but I'll hand it over to you, Iyad.

Thank you. So I'll just give a short overview of the research we've been doing on autonomous vehicles. I'm not a driverless-car expert, I don't build driverless cars, but I'm interested in them as a social phenomenon, and the reason has to do with this dilemma that people keep discussing. What if an autonomous car is, for some reason, going to harm a bunch of pedestrians crossing the street, because the brakes are broken or because they jumped in front of it or whatever, but the car can swerve and kill one bystander on the other side in order to minimize harm, in order to save, let's say, five or ten people? Should the car do this, and who should decide? More interestingly, what if the car could swerve and hit a wall, harming and killing the passenger, in order to save those people? Should the car do this as well? Does the car have a duty to minimize harm, the utilitarian principle; a duty to protect the owner or the passenger in the car; or something else, some sort of negotiated outcome in between? And do we ignore this problem, do we just say, let the carmakers deal with it? It seems to be a very controversial topic, because there are lots of people who love this question and lots of people who hate it, and the people who hate it say, well, this is never going to happen, it's so statistically unlikely. I think that misses the point, because this is an in vitro exploration of a principle. You strip away all the things that don't matter in the real world so that you can isolate a factor: does drug X cause this particular reaction in a cell, for example. You don't do this in the forest, you do it in a petri dish, and this is the petri dish for studying human perception of machine ethics and the factors that people seem to be ticked off by.
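To make this "petri dish" framing concrete, here is a toy sketch in Python that is entirely my own illustration, not anything from the study: it compares a utilitarian policy with a passenger-protective one over randomly generated one-passenger scenarios. All function names and numbers are hypothetical.

```python
import random

# Toy model of the dilemma: the car either stays its course, harming the
# pedestrians ahead, or swerves, sacrificing its own passenger.
# Everything here is illustrative, not data from the research.

def utilitarian(n_pedestrians, n_passengers):
    """Minimize total harm: swerve whenever that saves more lives."""
    return "swerve" if n_pedestrians > n_passengers else "stay"

def self_protective(n_pedestrians, n_passengers):
    """Never sacrifice the passenger, regardless of the count."""
    return "stay"

def casualties(policy, n_pedestrians, n_passengers=1):
    """Lives lost in one scenario under the given policy."""
    action = policy(n_pedestrians, n_passengers)
    return n_passengers if action == "swerve" else n_pedestrians

random.seed(0)
scenarios = [random.randint(1, 10) for _ in range(10_000)]  # pedestrians at risk

for policy in (utilitarian, self_protective):
    total = sum(casualties(policy, n) for n in scenarios)
    print(policy.__name__, "total casualties:", total)
```

Aggregated over many scenarios, the utilitarian fleet loses far fewer lives in total; that aggregate safety gain is the public good at stake in the dilemma.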
When we started studying this, we used techniques from social psychology. We framed these problems to people and varied things: the number of people being sacrificed or not, whether it's an act of omission versus an act of commission, and so on, and we were interested in how people want to resolve this dilemma. What's fascinating is that there was something so obvious that we missed it initially: it's not really an ethical question, it's more of a social dilemma, a question about how you negotiate the interests of different people. This was our strongest finding: no one wants to be in a self-sacrificing car, but they want everyone else to drive one. It's really fascinating, and the effect is very strong. If you look, for example, at the morality of sacrifice, killing a pedestrian to save ten, killing a passenger to save ten, and so on, people say: I think it's moral and desirable, in both my car and other cars, to sacrifice other people for the greater good. So I'm happy to kill one pedestrian to save ten, that's great. But as soon as you ask, would you sacrifice yourself, would you sacrifice the passenger? Well, I think it's moral, I think it's great, but I would never want this in my car: fine in other cars, but definitely not in my car. And this is where you see these things split. Now, this is the tragedy of the commons, right? I want public safety to be maximized; I would like the world to be a safer place where cars make the decisions that minimize harm; but I don't want to contribute to this public good, I wouldn't want to pay the personal cost needed to do it. So we thought, maybe regulation; that's how public-goods problems are solved. Let's set a quota on the number of sheep that can graze, so we don't
overrun the pasture, or let's set a quota on the number of fish you can catch, so you don't overfish, kill all the fish, and leave everybody losing out. We asked people whether they would support this, and we found that people think it's moral but don't want it to be legally enforced, at least for now. This is the PR problem, and maybe we need to develop the law so that people feel comfortable with what this means.

Let me just ask a question, because we talk a lot about the evolution of cultural things. I assume, and I guess you don't know, but most of these are people who have never been in a self-driving car, right? One of the things we found, and this is not my work but some of our colleagues', is from a self-driving-car, Uber-like service where you get an app, run for the ordinary public: to your point, people's impressions of the safety of self-driving cars changed substantially after they had actually experienced one for a little while. Anecdotally: "I felt safer than with Dad." So I think that once you are in a self-driving car and see how much control it has, your view of its safety changes. And the other thing that happens, and this may happen more in Japan than in the US, is that in Japanese culture you often personify and identify with machines and tools, so people start to feel trust toward the machine, which, unless you experience it, you can't imagine.

I agree, I agree. So there are all sorts of things we're now interested in studying, for example agency perception: do people perceive these things to have minds, and if not, why not, what's the missing component? That becomes really interesting with drones, for example. The other thing is, when we ask people, well, again, people think
it's moral to sacrifice, but they don't want it to be regulated, and they say they'd be much less likely to purchase those cars if they were regulated. I think this is a really important question, because if people don't purchase those cars, you will not save lives. Scientists estimate that about 90% of accidents today are due to human error, so, assuming the technology gets there quickly, the sooner we have wide adoption, the sooner we save more lives. But if people are so worried about edge cases, or about their own safety not being paramount, they may not purchase the cars, and we may not have wide adoption.

And to map this onto the Lessigian quadrants, this is clearly one you can't just leave up to the market, if people aren't buying the thing that they believe serves the common good.

Right, exactly. And if you regulate it, there's a backfire effect: well, fine, that's a good social contract for other people, but I will continue to drive my own car, and probably be more likely to kill myself as a result. People are not rational in the way they assess risk; think of getting on a plane, or being eaten by a shark: people overestimate those risks, and there's a good chance that if we don't trust these systems, we will overestimate their risks too and prefer to drive ourselves. So we started from an ethical dilemma, then realized it's a social dilemma, and now we're realizing there's a meta-ethical dilemma: if you solve the social dilemma by using regulation, you may actually create a bigger dilemma, a bigger trolley problem, which is, do we continue to drive cars ourselves, or do we promote wide adoption of autonomous vehicles? So we want to collect more
data; we want to understand this issue in more nuanced terms, and we moved fast on this. These questions have now made it into transportation regulations, or guidelines, which is good. We've created a website called Moral Machine, in which we randomly generate scenarios. In this case it's not just one versus ten or one versus five: there's a dog in there, we vary the ages of people, sometimes they're children, sometimes pregnant women, sometimes people are crossing on a red light. Do they deserve the same level of protection? This is very interesting for this group here: what if they're children; are they expected to know not to cross on red? It gets really hairy really quickly. These are still cartoons, still very simplified scenarios, but I think they bring out lots of interesting questions. And we show people results; this one is a former postdoc of mine, who has a cat: he's happy to kill babies to save cats. We show people how much they care about different factors and how that compares with others. People love this, because it's a kind of mirror held up to their own morality: do I care about the law a lot, and how do I compare with other people on that; do I protect passengers more than other people do, or less, and so on. We also have a design mode where people can create their own scenarios and get a link to them, and a lot of people have been using these to teach ethics in high schools and universities. We look at all sorts of things: species preferences, whether social value should be taken into account, whether age should be taken into account, and so on. We also vary whether there's an omission/commission distinction: is the action that minimizes harm an omission or a commission? And there is definitely bias in the data we're now analyzing. So far we've translated this into
10 languages; 3 million users have completed more than 28 million decisions, binary choices, and we have 300,000 full surveys, and this is still growing fast. The full surveys allow us to tease out whether these people have cars themselves, which age bracket and income bracket they come from, and so on, and this is really interesting, because you can then start saying, well, people who already have cars may be more or less likely to support this particular ethical framework. We have a lot of global coverage, and so far we've been looking at cross-cultural differences. Because this is recorded, I don't want to talk about it yet, but basically we're observing some very interesting cross-cultural differences in the degree to which people are utilitarian, would prioritize the passengers, or are willing to take an action, so omission versus commission, and so on. I think it's really fascinating, and it would be a very important precondition to any public-relations efforts to make the cars more acceptable, but potentially also to differences in the legal frameworks. We're also beginning to look at partial autonomy, whether it's autonomous cars or drones or judges making bail decisions. Again, you can have a machine do everything or a human do everything. In a car, there is driver assistance, where the person is in control and the machine watches over them; Toyota has been promoting this model, as have other carmakers. There's also autopilot, where the machine does things and the human has to keep an eye on it, again whether it's a car or anything else. And then you have full autonomy. The question we're interested in here is comparing these models: we're investigating empirically whether people assign different degrees of blame and causal responsibility depending on the control architecture, as we can call it:
whether a person overriding a decision made by a machine is treated differently from a machine overriding a decision made by a human. This is now in submission, but it happens to really matter: it really matters who you think is ultimately responsible and who is liable, and I think this is a psychological input to the legislation that will eventually come out to deal with these scenarios. The broader picture, which I like and which I think Joi alluded to initially, is that there is a gap. On one side you have engineers, who think everything is an engineering problem and everything can be engineered away; on the other you have people from the humanities and social sciences, who study the nuances of human behavior, who know how rules can get abused, and who have a good knack for asking how you ensure a coherent system of ethics and values and checks and balances. These sides often don't talk to each other. There's a sizable community of people who are very good at identifying problems, violations of fairness and rights and so on, in technology, but who don't have the tools to express those objections in a way that a computer scientist can operationalize. Likewise, you have machine-learning and AI scientists who feel that something is problematic, who can see that it could cause problems or violate some people's rights, but who don't have the intellectual framework to raise these issues in a way that humans and society can evaluate. So what we're hoping to do, and this is part of the partnership between the Media Lab and the Berkman Center, which comes from that side and understands us, while we come from the technology side and work on interfaces, is that through this we will create a kind of
framework, and this is, I think, where many of the interesting questions are. So I think we're ready for maybe a discussion and taking some questions.

The one other part I would add, one other axis, going back to the judiciary, though we could have this with cars as well: on the one hand, I don't think anybody thinks that speeding tickets issued by speed cameras on the highway are an inappropriate use of machines. Some people may not like them, but it's really a matter of fact: there's a speed you're allowed to go, and a machine is probably more likely to accurately measure your speed than a human eyeballing it, and probably fairer as well. On the other hand, I don't think anybody believes that Supreme Court decisions, at least for now, should involve a substantial role for machines, at least in the deliberation. So there's a spectrum: from establishing a fact in the implementation of a law whose justice we're not even disputing, to the Supreme Court, which is supposed to reflect the norms of the day in making determinations about laws. There's a continuum in between, and somewhere in the middle you have this uncanny place where it feels like the machines have some influence. What's kind of interesting is that in just about all of these hypotheticals there's one extreme where you do want the machines in charge and another extreme where you do want humans in charge, and those are actually not that difficult; it's the space in between that is. I think that's also why it's an interface problem: it's very unclear how the human and the machine pieces, whether as a society or as an individual, get put together. So again, it's related to the autonomy question, but I think
it's a... and I guess, on your point, between technology and ethics or morality there's some sort of stack as well; maybe everything looks like a stack to an internet person, so maybe that's my problem.

And I think there's an interesting thought experiment here, which is that it's not just a legal question: new kinds of tools and new kinds of reliable data can make a big difference. Suppose we had invented cars, and they started going at high speeds, but we hadn't invented the radar that can accurately measure speed, so we relied on human guesstimation of your driving speed: a policeman standing there, eyeballing cars, saying, now that looks like 120, right? You could very well imagine that, under this scenario, if policemen were discriminating against one particular group, they could overestimate the speed of drivers from that ethnic group and underestimate the speed of others. But somehow the tool resolves this, because the reading is recorded and becomes objective; it becomes a fact, it's not disputable. So can we do something similar here?

But I think this is where it hits the slippery slope. If you're measuring the speed of a car, the machine is using a very small number of data points to estimate your speed. But the risk rating may seem very scientific to some people, especially if they don't understand math and statistics, so they may say, well, the machine said they have a risk rating of this. And actually, the forms never ask your race; it just turns out that when you collect the data and the questions, the result is biased by race. And so one of the questions about what's difficult is: if you don't
understand how these algorithms convert data into results, you have the black-box problem. There is progress on machines that can explain how they got to a decision, but a lot of the machines we currently use are unable to describe how they got the number; they just give you the number.

If I may pick up on that and ask a first question: this question of the normativity of autonomous systems, of where the source of the norm lies, seems to be a key question, and I'm wondering, picking up on your earlier description, whether we're on a particular trajectory. I think there are roughly three phases in what I've heard. One: we have these autonomous vehicles, and now it's a question for lawmakers and regulators how to apply existing norms to these new technologies, sometimes updating the regulations, which we see happening; you made a reference to that. But there seems to be a second phase, which your work informs: can we somehow program some of the laws and values and rules into the systems themselves, so that their behavior is closer to what we have normative consensus around as a society and as lawmakers and policy makers? And then there is a potential third phase, and I'm particularly interested in your views on whether that is indeed the trajectory in the area you study, or more broadly. You could envision a future where more and more data accumulates in systems like autonomous vehicles, based on the rules we programmed into them for how to behave and on how they learn whether these rules are obeyed or not, what the compliance rate is, and the like, where suddenly the norm itself becomes computer- or machine-generated. How do we feel about that? Because that may, perhaps inadvertently, get us closer to the other end of the spectrum you're describing, where the norms are no longer developed here and then somehow
programmed into the system, but at least the evolution of the norm happens within the automated system. Is that the trajectory?

I think you have to remember to tease apart the norms and the laws. One of my favorite examples is one of the scenarios that Iyad gave me; I did this in Japan, but I could do it here. Imagine you have a car, and ahead of you are two motorcyclists, one on the left and one on the right. The one on the left has no helmet, the one on the right is wearing a helmet, and there's a helmet law, so the guy on the left is clearly breaking the law, complete disrespect for the law. Somebody jumps in front of your car and you have to swerve: do you hit the guy without the helmet or the guy with the helmet? The guy with the helmet is more likely to survive, but he's following the law. So who hits the guy without the helmet? I did this at a Japanese car company, I won't say which one, and half the room raised their hands: of course you go after the guy who broke the law. But this is a very interesting normative question, and there are all these versions of it, like, do you run a red light if you're... So what's interesting, and what a lot of Iyad's work is trying to do, is figure out how you train a machine to reflect the norms of the particular community it's serving, and that's going to be diverse. I often think about my wife moving here from Japan: it took her a year to get used to the idea that you're fighting traffic. We always joke, you're not stuck in traffic, you are traffic; that's a very Japanese way of thinking, so you're always trying to let people in and make sure people aren't upset at you. Here it's different. And so the car that's trained in Boston won't be able to drive in Japan,
Right, and that has less to do with the law and more to do with this kind of normative intent; it's a personality thing, I guess.

There is another complication, or really two that I want to discuss. The first is that norms and the law are always changing to keep up with how the world is changing. A country becomes poorer and new norms come about; it is well known that when government is no longer effective, people turn more to religion, and there is something called compensatory control theory that tries to quantify this. So we are always changing our norms and our laws, and what is happening now is that the complexity of the problem is increasing significantly. At what point does it become combinatorially impossible to write a law to regulate something? We have billions of possibilities just from the Moral Machine, and that is a very simplified view: combinations of children and dogs, omission versus commission, obeying versus breaking the law. We already don't know what law would give you a function that says "this is okay and this is not okay" across those astronomical possibilities, let alone the norms as well.

And again, if we had only self-driving cars we could get rid of a whole bunch of the laws. There would be no reason to have stop lights, or complete stops at stop signs; all of these assume actors that cannot communicate with each other. The complexity also increases in another way: even if you keep a law like "full stop at a stop sign," you could design a car that stops for just a split second and keeps going, with a little damper so the passenger doesn't notice the stop. There are all these ways a machine could get around the laws. Laws are designed to be enforced against slow, somewhat limited human beings; a very fast, complicated machine is different. There was a great thread on the EPIC mailing list about the Volkswagen emissions case: if the cheating had been done by a learning AI, it probably would still have cheated the EPA emissions test, but it would simply have learned that when the hood is open and the tester is there it is supposed to run low emissions, and when the boss says "I need to get to the airport in a hurry" it doesn't. And you wouldn't be able to tell anymore, because there are no lines of code to inspect; it is learning from its environment. So it will be interesting to see how these machines deal with regulations and laws.

Excellent. Christof Graver, what are your reflections on the relationship between law and these kinds of norms in the system?

Actually, I have a question. If we look at the law over the last two hundred and fifty years, the dominant paradigm, even in common-law traditions, was that you have, so to say, abstract rules that are applied in concrete cases. My question is whether we are now entering a new paradigm in which you have concrete cases that are statistically aggregated and then used to develop abstract principles, which should then resolve typical cases.

That's very interesting. I think of it as a moral imperative created by a new technological capability.
This is how I think of it. You start from a situation where you can deal with the abstract rule: don't kill other people; if they are following the law, if there's a zebra crossing, don't run them over; and so on. There are all sorts of other scenarios that you can maybe approximate with the rules, but if a case doesn't fall under any of them, you just call it an accident, call it a day, and everybody goes home. That is because there was no way a human being could have known, or could have reacted fast enough in time, and there isn't even a way for us to find out what the human knew, and when exactly. Then all of a sudden we invent a tool that can, in theory, record all of this information, so we know what the car knew a number of milliseconds before an accident and what it could have done, and we can start constructing counterfactuals. So that is the top-down side.

There is pressure to make the rules more complex, because we can no longer just call it an accident. We have to specify: if the processor clock in the car runs at this speed, and the sensor has this resolution, and the car knew and could have done something else, is the manufacturer liable? And at what point is the manufacturer no longer liable, because the machine simply cannot reliably swerve in time? So there is more and more pressure to increase the complexity of the top-down rules. At the same time, there are things we cannot observe or anticipate. We might notice, for example, through the interaction of cars in the real world, that one car maker's particular design, the shape of the body or the positioning of the radar, is causing slightly more deaths among cyclists. Maybe there is a trade-off between how good you are at detecting pedestrians versus cyclists, and a parameter you tune that shifts the relative risk between those groups, statistically. You can only find that out through experimentation. So then you need an interface between society and these systems, and maybe scientists; it's a new form of science.

It's not completely new, though. I was looking at seat belts; Bruce was actually the one who inspired me, though I remembered something he doesn't remember saying, so I may have made it up. It was about the role of seat belts, and the fact that seat belts protected the driver but didn't really protect the environment around the car. So I looked up the number of pedestrian deaths after the introduction of seat belts, and it was not insignificant.
There was a noticeable uptick, on the order of thousands of people, because you can imagine: if you wear a seat belt, you care less. These days, with airbags, I see people taking far more risks than they would without them.

An economist in the UK calls this the risk thermostat: as our environment gets safer, we take more risks.

Yes, and then you get the knock-on effects on the environment that you mentioned. That is really interesting, because I don't think we ever had a conversation about how many additional pedestrian deaths we would be willing to accept in order to save more drivers. And there is a version of this here. To your point, you have to understand how AIs are trained. A bunch of engineers go in and try to optimize for a particular thing, a high-level first-principle value, and they feed in particular data. The data sets that turned out to be biased against certain races in the judicial context: clearly the engineers weren't thinking about that, because they didn't test against it; they were testing against another variable. So instead of making rules, you are creating algorithms designed by people who input an ordering of priorities, high-level principles and optimizations, and if you forget to optimize for something society cares about, you have created an algorithm that is inconsistent with the norms of society. The problem right now is that a lot of this is black box, which gets to the collaboration we are trying to do. A lot of engineers are building one-off, bespoke machines that they can't even hand over to the next engineer who takes over. So the first problem is that it is all black box and not accountable, and the first phase is transparency and auditability. The second phase is asking how we have the higher-level conversations, because we can now have them: what do we optimize for, how much do we care about motorcyclists who don't wear helmets, how much do we care about this or that? Maybe you can't even do it as a negotiation; maybe it is polling societies, or watching how people behave. It becomes a very strange question, and I think the training systems become, not a replacement for laws, but often a proxy for them, laws being the way we humans take first principles and create incentives to reinforce those principles.

Hi, I'm J.M. Porup, a journalist. If I could zoom out for a second: one thing I try to do is look for useful metaphors for discussing these ideas with non-technical people, and the metaphor this discussion provokes for me is sports. Take World Cup soccer. I can imagine a near future where you could, in theory, have real-time automated machine refereeing of a World Cup game and get the human out of the loop. But would you want to watch that game? Isn't gaming the system, the human fallibility of the referee, an essential and necessary part of the game? Do we want perfect enforcement of the law? Isn't the law, ideally, in a good society, not perfectly enforced, if you see what I mean? Is this a useful metaphor for discussing the automated refereeing of society, as it were?

I think it's an interesting metaphor, but I don't think it is necessarily the only one, in that there are a lot of sports where we have allowed machines to take over. Any sport that requires timing: machines now call the time, not necessarily a human being. And things like auto racing are pretty automated.
And I think people like that; part of it is an evolution of our norms. But I wonder about something else. Sometimes, when the machines are not well designed, we fall back on the human element that the machine is not really capturing. As machines get better and better, in a way we can no longer say that. Maybe we will say: if it is a person with a helmet versus a person without a helmet, we flip a coin. Machines can flip coins very fast; they can generate random numbers. Maybe that is what society would end up deciding. Actually, in one of our surveys we asked people whether the car should swerve, stay, or flip a coin, and about 25% said flip a coin. In a way they were more comfortable with fate deciding. That may well be what we decide to do, but the point is that now we have to choose, including the choice of choosing randomly.

Right, that's what I mean. And one example story, not to pull this back to blockchain again, but there was a distributed autonomous organization, The DAO, the Ethereum-based fund, which was interesting because they were selling coins that turned into investments in the fund, and they announced that the code of the fund was the entirety of the agreement. There was no legal agreement; the code would do what the code did. People bought the coins and invested, and then somebody found an exploit and started draining it. They had raised about 150 million dollars, and around 50 million was being drained, and people sat there watching, unable to do anything, because there were no courts to take it to, no jurisdiction, no agreement. What's interesting about that is that in every so many lines of code there is always a bug, and a bug can be exploited; and if that code is the entirety of the law, the law becomes fallible in a new way. In a human court you can go and say, "That's not what I meant," and if everyone agrees, you can change it. In machine land, at least in the current version of what we call smart contracts, you can't. So there is an argument that this is never going to work, and Bruce and I have talked about that. Having said that, I just talked to some of the probabilistic programming people at MIT who are working on ways to create systems where you tell the machine the general idea, the goal, and the AI tries to achieve it. So you could appeal to a machine and say there may have been a bug, but that's not what we meant when we started; and the machine would look back at the conversation and say, you're right, this code isn't operating as publicized, we'll roll it back. So there may be, as Iyad was saying, layers upon layers: we find a hole, but maybe there is a machine that fixes it. The role of law and code as they intermingle is going to be an interesting one.

My name is Yaso, I'm a fellow at the Berkman Center, and I was wondering about a similar story. In Brazil there were certain train stations with too many suicide attempts, and the regulation was to hit the person attempting suicide in order to save the lives of the people inside the train; the driver had to make a choice, and that was the rule. Then someone came up with the idea of surrounding these stations with resistant, transparent glass, and there were no more suicide attempts at all. Some designers would say that when you can't find the solution to a problem, you may be looking at the wrong problem.
So don't you think that, with this interaction, and this whole new environment where we are mixed together with these platforms of artificial intelligence, we are going to find solutions other than laws and regulations?

I agree; I hope we will. But it's not obvious that we will stumble on the solution immediately without a mechanism for looking for solutions. Even in the case of somebody suggesting we put up glass, with doors that align with the doors of the train, I would bet that even a simple idea like that took a while for somebody to think of. Now, what if the solution is some piece of code in the bowels of a deep-learning algorithm? That is not going to be easy to come by unless we have a way of interrogating and auditing those systems. If they remain black boxes, we may never know that there is a counterfactual world in which the harm was avoidable, or in which the trade-off could be resolved differently. Do you want to add to that?

Yes. I think about Donella Meadows, a system-dynamics person from yesteryear, who has this great list of twelve ways to intervene in complex self-adaptive systems, starting with the least effective, which is fiddling with the data or the parameters, then moving up through changing the rules, changing the goals, and changing the paradigm. What's interesting is that these are fairly complex adaptive systems, so there is a higher-level question: should I be traveling at all? Is "fun to drive" something we should aspire to as people? Is capitalism good or bad? There are these really interesting questions where, as society evolves, say we reach a point where, and I'm not necessarily supporting it, but one idea people are talking about is universal basic income, we meet a certain basic material need for everyone. Then what is the nature of work? How should we rethink who and what we are? One of the problems with a lot of these companies is that they go in and say: assuming cities stay the same, let's optimize them, let's make the traffic lights more efficient while everyone keeps doing the same thing. That is a very low-level intervention, when a lot of what we need to do is at a high level, and the high-level stuff is actually above the law. But it also gets a bit abstract.

Hi, my name is Paola Villarreal and I'm a fellow at the Berkman Center. I have two questions. First, what assumptions are you challenging now? For instance, whether we need to build a car that goes full speed, instead of a car that drives at a speed that allows it to brake in every scenario, or at least to reduce harm. Second, where are the accountability mechanisms embedded in the engineering process?

Well, to answer the first question, I think there are trade-offs. Even today, without autonomous cars, I'm sure we could eliminate the vast majority of accidents if everybody drove at, say, ten miles an hour everywhere, and I bet that would not be a popular solution. So society is, in a sense, comfortable with where we are.

That's precisely an assumption we are not challenging. Why don't we? Or we could ban cars completely; then there would be no car accidents.

I think the point is that, as a society, our thresholds change.
Whatever rules we have now are the result of some negotiated outcome among car companies, people who want to get to work on time, and so on. Economic efficiency would obviously suffer and safety would increase, and where the sweet spot lies is not obvious a priori. So what societies do is experiment: they experiment with regulations and rules, they look at what other countries are doing, and this happens to be the equilibrium now, but it is by no means the final word. Maybe people will discover that school zones need a different speed limit, and then somebody else will discover something else, and I think that kind of system is important. Now, how do we bring that kind of experimentation into systems that are much more opaque, in which the actors that are acting autonomously, learning, and adjusting are no longer people we can put in jail, punish, and interrogate, but machines that are opaque and for which we have no interfaces?

I do think some of that change is emergent, so it's hard to force change with an external intervention. There's a random story I heard about LA recently that I found interesting. Traffic has gotten so horrible, and housing is so expensive, that now that there is wireless everywhere, people have been building these enormous cars with a driver up front and just an office inside. They live really far away, it takes them two hours to get in, and it doesn't matter, because they are fully functional in the car. That is a strange adaptation to an environmental system, and I don't know how many people do it, but you can imagine it with self-driving cars. I have a Tesla, and I've realized I'm no longer stuck to the fast lane. I have less stress, and I would actually rather not drive too fast and just let the car move along. It's odd, because you get into partial-attention territory and there are other problems, but I'm not sitting there trying to optimize my speed anymore; I'm more relaxed, I don't care as much. Your behavior changes as the systems change, and with autonomous cars you may not mind driving slowly, because you might be holding meetings in the car. I think it will be interesting to see, and some of this is market-driven. With electric vehicles, for instance, the system was staunchly against them when the EV1 and others came out, but when the Japanese electric vehicles arrived, the market just said "oh, we like these," and the regulations all changed, because business wanted it. When California tried to mandate electric vehicles early on, the rules were simply overturned, because the dealers thought they wouldn't make any money. So, shifting to the market side, it's interesting to think about whether some of these solutions will simply happen if people feel they make sense.

Hi, thank you for your time. My name is Griff, I'm a Berkman Klein fellow, I work with Peer-to-Peer University, and I've been subscribing to Car and Driver on and off since I was three. The reaction there to autonomous vehicles is very lackluster, and it's obviously a very specific community, but when I reflect on the ways autonomous vehicles have entered my life in the last five or six years, there has been a lot of top-down "this is something you should want." I'm curious about your reflections, and maybe the work you've done with the Moral Machine will shed some light on this: what consumer groups do you see really pushing for this, in different societies or communities?
Do you see it being advocated for in different ways, say trading in my personal car for an autonomous vehicle versus thinking about ride-sharing as an opportunity? If you could share a little from the perspective of consumers, I'd love to hear about it.

I must say we haven't really run consumer surveys on alternative modes. We have been hearing a lot, especially from people who think the question is irrelevant: there are lots of groups that basically want to revamp the entire transportation system, so for them this will be moot, because we won't have streets where people are next to cars in the first place; it will all be self-driving dedicated roads, or elevated roads, or something like that. So there are people who would just revamp the whole thing, though the economics of that are very challenging. Then you have the petrolheads, who watch Top Gear and want to keep their cars, and the question is whether they will one day just drive around in circles in an arena, or still have a chance to drive in the real world. I don't think anyone knows whether the change will be gradual or not. It's not just a sociological question; it's an economic question and a technical question. Will the technology come fast enough? Will it be affordable? Will it be fleets versus consumer-owned? I'm not an expert on these questions, but given that a likely scenario is a partial transition, a mixed environment, since you can't just take people's cars away from them, I think we will have to deal with that problem for a while.

We have a lot of history here, though. I still remember tobacco ads, but you don't see them anymore and no one misses them. I think it's going to be the same with "fun to drive" ads; automobile magazines will be like guns-and-ammo magazines. Once you connect it to deaths, it changes. The problem for the tobacco industry was the fight against the "you're killing people" argument. Say a couple of communities in California go all self-driving and you see traffic deaths go to zero. It would be just like drunk driving: when there was indisputable evidence that drunk driving caused deaths, you simply couldn't argue anymore that it was okay to have a couple of martinis and hop in the car; it seemed like the wrong thing to do. I say this half-jokingly, but right now about 1.3 million people die in traffic accidents every year in the world. If you say, "But I have the right, like the Second Amendment, to drive my own car, and I love fun-to-drive," at the expense of a million people, well, we might as well have a drunk-driver magazine too. It becomes harder to make that argument. Right now it's theoretical, just as the tobacco and drunk-driving arguments once seemed theoretical. And, as in a course Leslie used to teach on this, you see norms and laws go from "of course women shouldn't vote," or "of course not everyone is equal," to "how could we ever have thought that?" There is this really interesting pivot, and I feel that as these systems get deployed, we will hit that point.

So, we have a question on Twitter.
It's from one of our affiliates, Malavika Jayaram. We've talked about ethics and morality in specific ways; her question is more along the lines of the morality, or ethics, of gamifying moral decision-making.

The morality of your work, right? I think it's a good question. The question is whether it's okay to have people choose, as a form of entertainment, who should live and who should die (we never called the Moral Machine a game), and whether people take the question seriously enough. Well, suppose it is entertainment for some people: if that happens to produce a superior outcome, then I'm all for entertainment. There are people who play games and, as a result, solve protein-folding problems; that doesn't diminish the science. So as long as we don't take it too seriously a priori, as long as we don't rely on online votes as something we put straight into code (there may be trolls playing just to kill more people in the Moral Machine for the heck of it), we're fine. This is why we need filters, and why this is just one part of a very broad conversation that involves legislators, car makers, and other stakeholders in society. For me it is one piece of the puzzle: it helps us understand people's perceptions, including who the jerks are, who doesn't take this seriously, who has skewed ethics. And the fact that it looks simple is actually an advantage, because it can engage a broad group of people. We can collect data from people who are not technical, and they still have a way to engage with the topic: to recognize that the rule I used in the first scenario, "save more people," is not so simple once the "more people" are breaking the law, or once there are children involved and you have to decide whether to take that into account. Simply having people engage with the topic, with the majority, I think, taking it seriously, and appreciating that this is not an easy problem and cannot be solved by car makers alone but requires a broader discussion, is for me an advantage.

Maybe one last question to Joi, if I may. You wrote the book, as I said in my introduction, that makes us think about the ways we can deal with this faster future. You also pointed out at the beginning of the talk that some of these technologies are already here and widely adopted, and, to your point, this co-evolutionary mechanism between society and technology is full of challenges at various layers. How much time do we have to figure out the things we discussed today? What's our timeline?

I think it's like the climate: it's already too late. With privacy, as with climate, I think we are going to sustain substantial damage even if we do everything we can. Even just looking at the judicial system: it's deployed, Julia writes her article, and nothing happens. You can sit there and describe exactly how it's broken, and nothing happens. So I'm very concerned that we are too late to prevent anything from happening; it's already going to be a train wreck. There are a couple of key things for me here, just as with climate. One is that you want to attack these things before they gain a tremendous amount of financial power and develop into a lobby.
For instance, the AI risk-scoring companies are currently funded by foundations that are trying to do the right thing, and it's possible that some version of what they do may be the right thing, but they are small enough now that they don't have a lobby. If they got as big as, say, guns and the NRA, it would be much harder to displace them. So what I think is important is to figure out where we have broken systems, go in, point them out, come up with solutions, and attack them, or at least try to fix them, before they become cancerous. Cancerous systems, to me, are systems that can no longer be held back by our self-regulatory immune system, which should be our legislatures and our judiciary. Those are dysfunctional at suppressing these things, partly because the new cancers work in patterns that legislatures and enforcement systems are not well equipped to identify. Part of bringing the engineers into the conversation is to help recognize those patterns. I think the biological metaphor fits: we are about to be infected with a whole new category of pathogen, one that can sometimes be turned in our favor, into favorable microbes, but we are going to sustain a tremendous amount of damage. Having said that, I don't think it's too late to prevent an extinction event; it is too late to prevent a whole lot of damage.

On this semi-pessimistic note, thanks so much for joining us today. Thank you.