Good evening, everybody at home, wherever you are watching this. Welcome to another event of the series Making Sense of the Digital Society. My name is Toby Müller, I am the moderator of this series, which has been running for exactly three years now. We started three years ago, and we are planning to continue the series next year with about four dates, I think, and hopefully one of these times with live audiences again. We don't know, nobody does at the moment, but that's what we're hoping for. We've had five events this past year, actually only one with an ample live audience, in March with Sybille Krämer; some were cancelled, of course, in spring and fall. But thank you to the whole team for mounting this total of five events, which I think is a feat under the circumstances. Thank you to the Federal Agency for Civic Education, the bpb, and of course to HIIG, the Humboldt Institute for Internet and Society, which curates this event.

For viewers who have been with us most of the time, or happened to join us in the last three years, you know how this is going to roll out: there will be the talk of our renowned guest, whom I am going to introduce to you in a minute, then maybe 15 to 20 minutes of one-on-one conversation here on stage between the two of us, and then, at the latest, it will be your turn, of course, to ask your questions. Your questions here in the audience will be collected by Christian Grauvogel, who is also mounting the series, and there is a participatory tool for this called Slido. I think you see it on your screens, and we are live for the audience at ALEX TV, welcome, and on the respective websites of the participating institutions, HIIG and the bpb.

Under pandemic circumstances, as you all know, these events are usually a bit shorter. We won't go on for up to two hours, as we sometimes did in the last three years; we'll try to keep it shorter today. This should apply to me too, so let's get into the midst of the subject, because who listens to a moderator anyway? Or, for that matter, who trusts a moderator, since he will probably go on and on despite what he just said about keeping it short. That is pretty standard human behavior, right? But what about machines? Do you trust them? How should we make them trustworthy, and what exactly should they be made capable of doing, or even held accountable for?

You all know examples of machines invading our daily lives, even personal chores and intimate decisions. I'm talking about maps, for example, or logistics. I'm talking about the trading of stocks, if you happen to trust the stock market, which in itself is actually not even there anymore in most cases, being algorithmically run, or governed to some extent at least. I'm also talking about predictive policing, or merely predictive algorithms that choose what we watch or listen to, what our children watch or listen to, who we date, who we follow, perhaps even who minds our children, and so on, all determined to some extent by algorithms.

This is the key topic, I think, for our very renowned guest tonight. He surely will be talking about machines driving us around, about AVs, autonomous vehicles, that run on AI systems, on artificial intelligence. How to Trust Machines is the title of his talk tonight. Until June of this year our guest was Associate Professor of Media Arts and Sciences at MIT, where he co-founded a new field of research altogether: machine behavior.
We will hear more about that from him himself. One of the key projects, which got a lot of global traction, was the Moral Machine, an online platform that generates ethical dilemmas faced by autonomous machines such as AVs, autonomous vehicles again. How come we would like others to buy cars that, when forced to choose between the passenger and the pedestrian, would save the pedestrian, while we ourselves prefer the car that saves the passenger? And what role do cultural differences play here? The Moral Machine gathered more than 40 million decisions and 550,000 full surveys. It is quite a machine; you will see it in a minute, presented by the author himself. As of this summer, of all years, our guest is Director of the Center for Humans and Machines at the Max Planck Institute for Human Development, the Max-Planck-Institut für Bildungsforschung, here in Berlin.

Again, there is no live translation tonight, because we're trying to keep the number of people interacting down here at the venue at Holzmarkt in Berlin, but a translation will be available when this event is made available on our channels, on the respective websites and on YouTube. In his own words, our renowned guest, born in Syria, educated in Syria, in the Emirates, in Australia and in the USA, asks, I quote: how can science help us understand, anticipate and shape major disruptions, from artificial intelligence, the web and social media, to the way we think, learn, work, play and govern? But now he's yours. Very pleased to welcome to the series: Iyad Rahwan. Please enter the stage.

Thank you so much for this wonderful introduction, and thank you all for joining us today. It's an honor to present. Thank you also to the Humboldt Institute for Internet and Society for hosting me in this prestigious lecture series. As the introduction mentioned, I've moved to Berlin recently to start a new center on humans and machines. By training I am a computer scientist, but I'm almost unrecognizable as one anymore; I am more and more a behavioral scientist, and I work together with people from economics, psychology, political science, anthropology and so on to answer some of the questions facing us today.

The backdrop to the talk is the pervasiveness of machines, and specifically artificial intelligence, in our lives. Machines today obviously influence our behavior. They influence the content we consume: the kind of music we listen to, the books we read, the movies we watch, the news we consume, and our political opinions. They also help us navigate the world; they give us suggestions for how to get from A to B. But more and more they are being used in increasingly sensitive domains. More and more, algorithms are being used to decide who gets a job, for example, or to evaluate the performance of workers, maybe even who gets fired. Machines are also deciding who gets a loan or financial opportunities, who gets medical support; what kind of diagnosis you get is also becoming increasingly algorithmically mediated. And finally, cars will soon drive us around as autonomous vehicles, a technology that is rapidly advancing.

So how do we trust these machines: that they will do the right thing, that they will not somehow mistreat us or treat us unfairly, or that we have some kind of recourse if something goes wrong? I want to split this into three different questions. The idea is that before we can trust a machine, we need to answer three fundamental questions. First, we need to understand: what can the machine do?
What are they capable of doing, what kinds of mistakes can they make, what kinds of improvements on human judgment can they make, and maybe in which areas do we need human oversight? Then we need to answer the question: if we know what they can do, what ought they do? What are the requirements, legal requirements or design requirements, that determine their behavior? What do we want them to do? And finally: how do we make them do it? How do we enforce these desires, these legal constraints and goals, on those machines? These are very difficult questions faced by computer scientists and others these days, and I will only give some partial answers to them.

So let's start with: what can the machine do? How do you understand what a machine can do? First of all, you could think of computer science as mathematics. This is the famous computer scientist Edsger Dijkstra, who famously said: computer science is no more about computers than astronomy is about telescopes. Which is why he never had a computer himself until very late in life, and very reluctantly. He did computer science with pen and paper or a blackboard, because for him computer science was mathematics. He would prove that a machine that operates based on this logic will behave this way or that way, and he could prove certain properties of the behavior of the machine. That's a long tradition in computer science.

Another perspective is computer science as engineering. It's like building a bridge: engineers build bridges, they test materials, they build buildings, they subject them to different loads and improve their designs. That's engineering practice. An example is Grace Hopper, who was actually a Navy admiral or general, I can't remember her rank, but she was very instrumental in the early days of the engineering of computer systems. In fact, she is often credited with discovering the first computer bug in a machine, which was an actual bug that they had to extract from a computer at Harvard University.

But finally, maybe computer science could also be a science, because after all the word science is in the name. I would say Herbert Simon, who was an economist and a computer scientist and a psychologist, and who won basically the top award in each of these fields, advocated this perspective. He wrote a book called The Sciences of the Artificial, in which he contrasted natural science, which is knowledge about natural objects and phenomena, like biological and geological phenomena, with artificial science, which is knowledge about artificial objects and phenomena, like machines, institutions and markets.

Recently, I brought together a group of scientists from a whole variety of fields to, let's say, describe and parametrize this new emerging field of machine behavior. These are people from computer science, but also, more importantly, from other fields that study behavior: biology, anthropology, political science, economics, psychology and so on. We argued that not only computer scientists and computer engineers should help us understand how machines behave, but also behavioral scientists.

So think of questions like this: does an algorithm create a filter bubble, in which we only hear the information that we already agree with? We could answer this question as engineers.
We could look inside the algorithm, or we could answer it as behavioral scientists, by looking at the behavior of the algorithm as if we were looking at a person who serves news to other people, or a newspaper that writes news articles. Algorithmic justice: does an algorithm discriminate against a racial group, for example, when it recommends parole? Again, you could look at it mathematically, but you can also look at it behaviorally, the same way you would look at a human judge today and ask: is this judge biased or not?

And we could move on to autonomous vehicles and autonomous weapons, algorithmic trading, algorithmic pricing, the algorithms that determine the price of goods and services online, online dating, conversational robots that may interact with children. What kind of conversations are they having, what kind of influence do these conversations have? You could ask these questions as an engineer, or you could ask them as a psychologist would.

In other words, we cannot certify machines as ethical only by looking inside their code, inside their heads, any more than we can say that a human is ethical by looking inside their brain. We have been able to hold humans to ethical standards for thousands of years without understanding the human brain. And now we're beginning to build machine brains that are beyond our complete understanding. So perhaps we should go back to basics, back to the early days of behaviorism, and think of a machine as something like a mouse. We could of course inspect this machine directly and ask questions about it by looking inside its code. But, as with a mouse, we could also do experiments on it. We could put the mouse in a box, subject it to a stimulus, for example change the temperature, and then measure behavior, for example how much it sleeps. We could put a mouse in a maze and look at how fast it can get out of the maze or find the cheese.

So we could do something like this with a machine: put it in a box, treat it as a black box, and ask questions about its behavior. If we could do this, then different people would bring in their own algorithms, and we would have benchmarks, behavioral benchmarks, that we can apply consistently across different algorithms. By having this behavioral perspective on machines, rather than a purely mathematical and engineering perspective, we can start drawing on lessons from animal behavior. These are the founders of animal behavior research, who won the Nobel Prize for it and who basically defined the foundational questions of all of behavioral science and all of biology. Can we do something similar for machines, a whole new science of machine behavior? How this behavior evolves, how it is manifested, how it is triggered, how it changes, and what its impact is on our ecosystem. So this is a call to arms, basically, for trying to understand how machines behave, what machines can do, by taking a behavioral science perspective.

Now the second question is: okay, suppose we know what they can do. They can drive, they can drive this way or that way, they have a certain ability to react. If an autonomous car drives, it can react faster than a human; this is how fast it can react, and so on.
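[Editorial note: to make the black-box idea above concrete, here is a minimal sketch in Python of what a behavioral benchmark could look like. The recommender algorithms, the item catalog and the diversity metric are all invented placeholders, not anything from the talk or the machine behavior paper; the point is only that the algorithms are probed purely through their inputs and outputs, never through their code.]

    # Hypothetical sketch: probing recommender algorithms as black boxes.
    # Nothing here inspects the algorithms' internals; we only observe behavior.
    import random
    from typing import Callable, List

    Recommender = Callable[[List[str], int], List[str]]  # (user history, k) -> recommended items

    def diversity(recommendations: List[str]) -> float:
        """Toy behavioral metric: share of distinct topics among recommended items."""
        topics = {item.split(":")[0] for item in recommendations}
        return len(topics) / max(len(recommendations), 1)

    def benchmark(algorithms: dict, histories: List[List[str]], k: int = 10) -> dict:
        """Apply the same behavioral benchmark consistently across different algorithms."""
        scores = {}
        for name, recommend in algorithms.items():
            runs = [diversity(recommend(h, k)) for h in histories]
            scores[name] = sum(runs) / len(runs)
        return scores

    # Two stand-in "black boxes" with different behavior.
    CATALOG = [f"{topic}:{i}" for topic in ("politics", "sports", "music", "science") for i in range(25)]

    def echo_chamber(history: List[str], k: int) -> List[str]:
        liked = {item.split(":")[0] for item in history}
        pool = [item for item in CATALOG if item.split(":")[0] in liked] or CATALOG
        return random.sample(pool, k)

    def explorer(history: List[str], k: int) -> List[str]:
        return random.sample(CATALOG, k)

    if __name__ == "__main__":
        users = [random.sample(CATALOG[:25], 5) for _ in range(100)]  # users who only read politics
        print(benchmark({"echo_chamber": echo_chamber, "explorer": explorer}, users))

In this toy setup the "echo chamber" recommender scores near the minimum diversity and the "explorer" near the maximum, which is the kind of consistent behavioral comparison across black-box systems the talk is gesturing at.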
Suppose we can characterize its behavior; but sometimes we don't know how it should behave. That's the second fundamental question. To make it more concrete, let's do a thought experiment, for which I've recently learned the German word, a Gedankenexperiment. It goes like this. Imagine an autonomous car in the near future. This autonomous car experiences brake failure and is going to run over some pedestrians. But suppose the car can swerve and hit a pedestrian on the side, on the sidewalk or the pavement. This way it will kill that pedestrian, but it will save more lives; it will spare the three or four or five pedestrians that were in front of it. Should the car do this? Ask yourself whether you think the car should swerve. If you follow utilitarian ethics, you would say the car should save the greatest number of lives, and therefore it should swerve. Another variant of the scenario: what if the car could swerve and hit a wall, harming the person in the car? Do you think the car should do the same? Should it also swerve?

What we've done is run this as a survey, and we found that the majority of people agree on what the car should do. Most people think the car should minimize the loss of life; it should take whatever action is needed to minimize harm, and this should hold even if the person being harmed or sacrificed is the person in the car. But then we asked people: which car would you buy? And this is where we discovered the social dilemma. People said: absolutely not that car. I would never buy the car that will self-sacrifice me, but I want everybody else to buy such cars.

So the question turned from an ethical question, an ethical dilemma about what the right thing to do is, into a social dilemma: we all agree what the right thing to do is, but we can't enforce it. We can't get consumers to opt into this agreement, this outcome. It's a different kind of question, a question of human cooperation, of us trusting each other, of having to enforce these kinds of rules in this rare, potentially very rare, accident scenario.

We wanted to go beyond this; we wanted to engage more people in this discussion. We also wanted to make the scenario more complex, because we wanted to know whether other factors also matter in people's perception. So we built a website called the Moral Machine, which you can visit yourselves and play this game, which I think is very instructive and is now even becoming part of high school textbooks in some countries. The Moral Machine experiment takes you through randomly generated dilemmas that look like this. Here's one example: there is an autonomous car that is going to run over three adults and a dog, but the car can swerve, and if it does so it will hit a barrier and kill the person in the car. Notice that the people crossing the street are crossing illegally; this may or may not be taken into account. Also notice that there is a particular gender composition: two women and one man, and so on. Now, to our delight and surprise, this website went viral. It was in part thanks to coverage by the traditional news media, but also to a lot of YouTubers, these people who film themselves while playing a game, and millions of people watch them while they're doing that.
It's a strange phenomenon, but it's something I discovered in the context of this project. At the time of analysis, we had translated the website into 10 languages. We had 4 million users answer 40 million dilemmas, and half a million of them filled in demographic surveys. That's the snapshot I will talk about, but the website has continued to run, and the number has now reached about 10 million users. We published this work in 2018 as a collaboration with scientists from various fields, and you can see here the distribution of the data set. We have what appears to be one of the largest psychological surveys ever conducted; we got people from virtually every country in the world, in some cases millions of people per country.

Now I want to show you what the results look like. This figure shows you what happens if I take a scenario and replace the thing on the left with the thing on the right. So if I take any scenario in which a dog or a cat is going to die, and I switch the dog to a human being, what is the increased probability that this human will survive, compared to the dog? As you can see, it's about 60 percent. So it pays to be a human, at least in the opinion of the survey participants. The second thing that matters a lot is saving more lives: people strongly prefer to save a greater number of lives, one more life, two more lives, three more lives and so on. And the third big one is that people want to save young lives; they want to save babies and children over older people. Those are the three big ones.

Then you have some controversial ones, or maybe lots of controversial ones. For instance, we find that people prefer to save individuals who cross legally, who didn't do anything wrong, over people who cross illegally; that's more than a 30 percent increase in the chance of survival. But people also, to roughly the same extent, prefer to save a business person over a homeless person. Now, this of course doesn't mean that we should program cars to do this. In fact, this is a case where government regulation is clearly an important player, to enforce fundamental rights even though public opinion from surveys may prefer otherwise.

We also uncovered, because we had such a large data set, cross-cultural differences, and this was for us one of the most fascinating aspects of the data. I want to show you these preferences, which are mapped on a circle. You can see that the thing that matters most is sparing humans, because that's the highest. Now I want to show you where Germany falls compared with the global average. As you can see here, Germans, for example, prefer not to intervene; they prefer inaction more than the global average, a kind of reserve. They also prefer to save humans to a greater extent than people in the rest of the world, so sparing humans is very important. They prefer to spare people with higher status, but to a lesser extent than the rest of the world, and this makes sense to me, because Germany seems to be, to a large extent, an egalitarian society.
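[Editorial note: the "increased probability of being spared" figures in this part of the talk are, in essence, differences in how often a character is spared depending on a single attribute of the scenario. Below is a minimal, hypothetical sketch of that kind of calculation on an invented toy table of Moral Machine-style responses; the field names and numbers are illustrative only, and the published analysis uses a more sophisticated conjoint-style model with controls.]

    # Hypothetical sketch: estimating how much one attribute (human vs. pet)
    # raises the probability of being spared, from dilemma-response records.
    # Each record: one character shown in a dilemma, whether it was human,
    # and whether the respondent chose the outcome that spares it.
    records = [
        {"is_human": True,  "spared": True},
        {"is_human": True,  "spared": True},
        {"is_human": True,  "spared": False},
        {"is_human": True,  "spared": True},
        {"is_human": True,  "spared": True},
        {"is_human": False, "spared": False},
        {"is_human": False, "spared": False},
        {"is_human": False, "spared": True},
        {"is_human": False, "spared": False},
        {"is_human": False, "spared": False},
    ]

    def sparing_rate(rows, human: bool) -> float:
        group = [r for r in rows if r["is_human"] == human]
        return sum(r["spared"] for r in group) / len(group)

    p_human, p_pet = sparing_rate(records, True), sparing_rate(records, False)
    print(f"Sparing rate, humans: {p_human:.2f}")   # 0.80 with this toy data
    print(f"Sparing rate, pets:   {p_pet:.2f}")     # 0.20
    print(f"Difference (advantage of being human): {p_human - p_pet:+.2f}")  # +0.60

With this toy data the difference works out to +0.60, roughly the "60 percent" advantage for humans over pets mentioned above; the country-level comparisons later in the talk are the same kind of quantity, computed per country.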
I want to show you now what China looks like, in contrast, and you could do this kind of comparison for any pair of countries on the website. You can see that China has some characteristics similar to Germany in some areas, but, surprisingly, sparing the lawful, the people who are crossing legally, matters even more in China than in Germany, for example, while sparing younger people matters a lot less. The Chinese still prefer to spare younger people, but to a lesser extent than people in the West in general. We explored various cultural variables that may explain this, and one of them is collectivism versus individualism: collectivist cultures, in which the individual is only part of a community or a tribe or a group, seem to have a weaker preference for saving younger people. So the question is: should these cultural differences be taken into account in programming autonomous cars? Obviously that's a broad question for society, which we highlight here.

So far I've spoken about what machines can do and what machines ought to do, and the question that remains is: how can we make them do it? Suppose we can reach an agreement on the standards, on the ethics that autonomous cars should have; how do we make sure they are enforced? I very much like this picture of the regulation of human behavior. It is from a book by Larry Lessig, a constitutional law scholar and professor at Harvard University, who more than 20 years ago wrote a book called Code and Other Laws of Cyberspace. He said that human behavior is restricted, constrained, regulated by four forces. The first is law: if something is illegal and you do it, you go to jail or you pay a fine. But we're also constrained by market conditions and market forces. We are constrained by the architecture of the environment that surrounds us. And we're constrained by norms, what other people expect of us. I think if you replace the human with a machine, like an autonomous car or any robot for that matter, then we need to think of all four forces: law, market, architecture and norms.

The typical approach for thinking about how to exercise oversight over a machine is called human in the loop.
So we have a machine, like a car driving, or an algorithm advising on who should go to jail and who should get parole, and as long as we put a human in the loop, then everything goes well. It's like having an autopilot in the airplane, but with a human pilot in the cockpit as well. This approach works, or can work, sometimes: if we all agree on the same goal. As you can see in this picture, everybody wants the same thing, and the job of the human is to exercise oversight over the machine in order to make sure that this thing everybody agrees on is implemented by the machine.

But what we're noticing in the examples I've just given, and many others, is that people often want different things, maybe because of the cultural background they come from, maybe because of the ethical frameworks they're using. Some people care more about fairness, others about efficiency, others about safety, and so on. So it is really our problem to agree, to make up our minds about what the machine should do, before we even begin having oversight. In other words, what we need is to move from having a human in the loop to having society in the loop, which I define as human in the loop plus a social contract that defines our mutual agreement about what these machines should do and how they should behave.

To close, I want to give an example. We have this case of autonomous vehicles, which is unsolvable in any kind of fundamental, absolute way, but we could still solve it by reaching agreement over how to resolve the conflicts involved in this dilemma. Let me give you another, older example that looks very similar. There are these metal bars that you can install on the front of a vehicle; any car can have them. They're called moose bars in Canada, kangaroo bars or roo bars in Australia, bull bars in America; basically the name changes based on whatever animal you are likely to hit. One of their main functions is to protect the passengers in case you hit the animal. But studies in the 90s found that they also hurt people in other cars, and pedestrians, to a greater extent, and this is why they got banned in Australia, they got banned in the UK, they got banned in many parts of Europe, but they did not get banned in the US, as far as I can tell, at least until now, as far as I remember.

So here's a case where there is a conflict between the safety of the passenger and the safety of pedestrians and people in other cars, and it's caused by a physical feature of the car. First, there were behavioral studies that determined that these kinds of cars cause a different distribution of harm: lower harm for the people in the car, more harm for the people on the street. And different cultures reacted differently: some decided to ban them as a result, while others didn't, because they value the trade-off differently. This is something we've seen before, right? With autonomous vehicles, the difference is that there is no shiny bar.
There is nothing physical that you can see and that can tell you this is the cause of this outcome. Instead, there are algorithms inside the brain of the car that are shifting the distribution of risk between different road users. Which means we have to be extra careful, and we have to look really carefully at the behavior of the car before we determine how the car should behave, and therefore before we can trust it. Thank you.

Thank you so much, Iyad, for this very concise and very concrete talk about machine behavior. Let's explore a little more in the coming maybe 15 minutes before we start taking questions from our live audience. Please do submit them on Slido, I think you see it on your screens, where you can participate in this discussion in a minute. Your talk was so full of examples, and I'm starting with a rather abstract question to dive into our discussion, starting with human behavior, actually, before we delve into machine behavior a little more. Why are a lot of us, and I think I can safely say that, so much more worried about accidents of autonomous vehicles? There was the Uber accident that got a lot of traction, many people talked about it, and so forth, even though we know that we have about 1.2 million traffic fatalities worldwide per year, and we could have a lot fewer with autonomous vehicles. I think you projected about 90% fewer traffic fatalities a year, so that's a huge number that would go down. And still human trust in machine behavior is pretty low at the moment. That means we would favor a lot more injuries and fatalities under the current model, not taking into account what we would save. How come?

I think there are many different possible reasons for this. One is that people are fascinated by this new technology, so all eyes are on the companies that are building autonomous cars, like Uber or Google or Tesla, and every accident gets so much coverage and a lot of media attention, just because of the fascination with this new thing. So I think part of it is irrational, because we are overweighing the gravity of these kinds of accidents: on that very day, many humans died from human-caused accidents elsewhere, and they were not covered. Unless it's a bus full of children, we don't really cover traffic accidents unless something is really extreme.

But does this mean that we are completely irrational? I don't think so. I think it's because we don't yet have enough data points to establish trust. These cars are now either being used in testing mode, which is a very controlled mode, or they're being driven during the day, sometimes at night of course, but with full visibility, no snow, no fog, and in limited scenarios, for example only on highways. So we still haven't seen enough. It's like a new animal on the road whose behavior we don't really fully understand yet.
We're being told that 90% of accidents will eventually disappear, because they're caused by human error, but we don't really have enough data to establish that claim with full confidence, because autonomous cars have not been tested in all representative driving scenarios. So I think until this happens, people are not so irrational to be afraid and worried.

Having said that, I think there is human irrationality that will also play a role, and we're studying some of this ourselves, in work that is still ongoing. Think of how humans mostly rate themselves as better-than-average drivers.

The majority does; that is well known, especially among men.

Exactly. So we all have this kind of overconfidence in our ability to drive, and not just in driving but in all sorts of things: on average, people often rate themselves higher than average, which doesn't make any sense. So there is some kind of overconfidence, and this could shape the way we trust autonomous vehicles: even if they're as safe as half the people, we think we're in the top 10 percent, so we wouldn't trust them. Unless they improve in one shot, unless they become almost 99 or 100 percent safe, a lot of people won't trust them. And this is something we have to worry about, and I think this kind of potential human irrationality around these new technologies is something we have to take into account in order to save more lives, because we want people to trust them at exactly the right moment, when it's rational to do so, not too early and not too late. Too early or too late will lead to a greater loss of lives: if people adopt the technology too soon and it's not really trustworthy, more people will die; if they adopt it too late, more people will die, because we humans will continue to drive and hit each other on the road.

So you actually understand some of the irrationality involved at the moment when it comes to trust in machine behavior. Now, do you think that, let's say, a really fast increase of scientific data would actually change that? Because I'm thinking of another number that is of course occupying our minds all the time this year, and that is the number of COVID-related deaths worldwide, which I think at the moment is about 1.65 million people. And I'm not sure if the vaccines or more scientific data would actually change how people feel about the virus, especially in Germany, where we have a lot of demonstrations by people saying it doesn't even exist. So many scientists on this front are somewhat desperate and asking: what are we going to do? We gather more data and we still cannot convince a growing number of people living here, actually.
So do you think it's going to be different with machine behavior and our trust?

No, I don't think so. I think it's actually a really great analogy that you mention, because, first of all, I should say that I'm not an expert on epidemiology or COVID; I'm only observing as a general scientist rather than an expert. But I think the COVID situation has highlighted the limits of science. It's really hard to know how many deaths precisely are caused by COVID, because there are comorbidities, for example: somebody may have COVID but also die from another cause. There's lots of uncertainty about how the virus spreads, about how the vaccines work, about human behavior and our social contact patterns. So there's just so much uncertainty, and there's a lot of science being done that then turns out to be not as reliable as you were hoping, and this goes both in the fear-mongering and in the optimistic directions. It just shows you how hard it is to establish causal relationships in this domain.

And I think the same thing could hold for autonomous vehicles. Let's suppose that autonomous vehicles exist on the market, and then all the fatality numbers go down, except for bicyclists, for instance: maybe they stay the same, they don't get worse. Is it because the car companies are deliberately not caring about cyclists? Or is it because the cyclists' behavior is somehow causing this? It's not an easy question to answer scientifically. You have to run experiments, you have to randomize, and we need to build infrastructure that enables this kind of investigation into machine behavior. With human behavior you can say: I'm going to give some people a medicine and other people a placebo, and they don't know which is which, and this is how I know exactly whether the medicine works. Or: I'm going to give some people this kind of loan and other people different loans, and I see which villages do better in the long run, and so on. I think we need a similar experimental paradigm for large-scale machine behavior. But right now the different companies are just developing their algorithms for profit; they have no incentive to be part of these kinds of studies. Their cars have to be optimized for whatever metrics they have, we don't have standard metrics yet, and so on. So I think we still need a lot of groundwork before we can answer questions about machine behavior with causal certainty.
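[Editorial note: a minimal sketch of the kind of randomized comparison alluded to in the answer above, in Python. The two driving policies and their incident rates are invented for illustration; the point is only that random assignment is what lets you attribute a difference in outcomes to the algorithm rather than to, say, cyclist behavior or where the cars happened to drive.]

    # Hypothetical sketch: a randomized trial comparing two black-box driving policies.
    import random
    from statistics import mean

    random.seed(0)

    def run_trip(policy: str) -> int:
        """Simulate one trip; returns 1 if an incident occurred, else 0 (toy incident rates)."""
        incident_rate = {"policy_a": 0.020, "policy_b": 0.012}[policy]
        return 1 if random.random() < incident_rate else 0

    # Randomly assign trips to one policy or the other (the crucial step).
    trips = [random.choice(["policy_a", "policy_b"]) for _ in range(20000)]
    outcomes = {"policy_a": [], "policy_b": []}
    for policy in trips:
        outcomes[policy].append(run_trip(policy))

    for policy, results in outcomes.items():
        print(f"{policy}: {len(results)} trips, incident rate {mean(results):.4f}")
    # Because assignment was random, a sizeable gap between the two rates can be
    # attributed to the policies themselves, not to who happened to drive where.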
Yes, that's interesting, because I think certainty and uncertainty have been highlighted very much this year. If this pandemic achieved one thing, it is maybe to highlight the central role of doubt in scientific research, which many people who are not involved in research had probably forgotten about. So let's hope that this central aspect of doubt persists in people's minds, that this is just the way scientific research is done.

Let's talk a little bit about cultural differences, which I think was a very interesting part of your talk, drawing on the massive data you gathered with the Moral Machine. As you said, some cultures value the elderly more highly than others, some are more centered around the collective rather than the individual, many Westerners will probably save children first, which is less the case in China, as you showed us, and so forth. Now, there are, I think, various problems connected with those kinds of questions about the data, and I'm wondering how you dealt with them in your research, in a little more detail. One would be: who determines what the dominant choices of a culture actually are, once these technologies are implemented in those cars? Who decides? Because there are minorities in every culture, there are contesting values, at least in more or less free societies, that always fight with each other, and it's very hard to define what a dominant culture actually is, something we talk about a lot in Germany, or have been talking about more and more in the last years. So how do you define what is dominant about a culture, and then actually build it into the technology?

I think we have to be really careful about separating the positive from the normative, or let's call it the empirical versus the normative. The purpose of our survey has been to establish the empirical facts: what do the people who visit the website think the cars should do? There is a separate normative question, which is: what should be done by the carmakers, the companies that manufacture the cars, and what should be done by the lawmakers who set the rules for the behavior of these cars? Those are questions that are not answered by scientists; they are answered by legal scholars and policy makers, and they may or may not take public opinion into account, and in some cases they should perhaps overrule public opinion. For example, if the public thinks that a certain minority should not be prioritized in autonomous vehicle accidents, but that contradicts the constitution of the country or the fundamental rights of citizens, then obviously those should take precedence.

There are cases, however, where knowing what the public prefers is helpful, because it might give you a sense of something you've missed. For example, in Germany a commission was created by the Ministry of Transportation to draw up ethical guidelines for autonomous cars, and one of the things they said is: don't discriminate. The car should not discriminate in risk between different people based on their age or gender or any kind of mental constitution. But now the word age seems to be at odds with a strong public preference for saving children, and you might wonder whether they missed this. Perhaps a 20-year-old and a 50-year-old should be treated the same, but should a five-year-old be treated the same? Maybe this is a place where public opinion can push back a little bit, and it's a conversation, just like politics. What we're doing is equivalent to an opinion poll about a future technology that doesn't exist yet. And I think once we can separate the opinion-poll facts from the normative question, we can say: okay, in some domains we allow it to influence policy, in other areas we don't, policy overrides, and so on. So I think it's only one part of the big puzzle of how you regulate these technologies.
I am aware we are leaping into the normative here, and that's something scientists don't like to do, so thank you for bearing with me. I'd just like to make another thought experiment, a Gedankenexperiment as you called it, to see how this plays out. Let's say I, as a typical white male middle-class Western European, travel to Syria, your home country, which I hope will become possible soon enough in my lifetime: whose defaults are going to run the car? Is it going to be my defaults, which are culturally probably different from what most Syrians would determine in an experiment like this? Is it whatever the government decides, whatever policy is going to be? And, as we talked about before, Syria is of course also a country with very diverse backgrounds, with many minorities, depending on the region we're talking about. So whose default is going to be in the car? Am I going to carry a device that plugs into the autonomous vehicle and serves my defaults, or is it going to be some other kind of default?

It's certainly a worthwhile and important question to ask. I would say that I don't feel qualified to answer it, because I'm not trained in policy or in law, but I can answer as a layperson would. We already have universal traffic laws; for example, we have universal traffic signs. Everywhere in the world green means go and red means stop, and the stop sign looks the same everywhere. But there are also differences: how you behave at an intersection, who has priority in different kinds of overtaking or certain kinds of turns, different speed limits in different countries, or for example in school zones, and so on. So we already have an example, within transportation, where we have universal laws but also some local variations. And perhaps something like this would apply here: some kind of universal agreement about how cars should distribute risk, using some kind of global standard, plus perhaps different cultures tweaking those priorities. To what extent they should do that regionally versus countrywide and so on becomes a question of politics: who has jurisdiction, who can decide. In the United States, for example, different states have slightly different rules. So it very quickly becomes a question of politics.

And I think it's important to recognize that the behavior of machines is a new political arena, because people are going to begin to fight over how the machines should behave. In the case of a car it's maybe trivial: we want the cars to be safer, it's kind of easy to agree on the general parameters. But when algorithms make decisions about criminal justice, or about how you allocate resources to poor people who are struggling, and what kind of safety-net rules and regulations you have, then in the future, I predict, instead of fighting over who to put in the Senate or in the Parliament, we're going to fight over which algorithm is going to be responsible for running different functions in government.

Oh wow, and that's a scary thing. Do we have the mechanisms, do we have the institutions to support this?
I think it's still an open question.

And that is the question of the status of a machine altogether. Maybe the question is: is there a difference between ethics and AI ethics? That's something we've talked about before in the series, and you probably remember that in Europe we have had this debate, not just in Europe, but it was very prominent here, because the European Union was thinking of allocating personhood to robots. That started about three or four years ago, it took the first hurdles, but then it got stuck in the process, I believe; I haven't read much about it in the last two years. We're talking about, say, invasive robots in surgery and things like that: should they be held accountable for what they do, in case they make a mistake, and so forth? Why was that, as we would probably say today, an erring path, one that only a couple of years later seems almost from a distant era? Is it because anthropomorphism, mapping human characteristics onto technology, just doesn't work? Or what happened there, actually?

Well, I think that's an interesting question. There is evidence that when machines are involved in making decisions and errors are made, people try to find the closest human and point the finger at the human, and they avoid pointing the finger at the machine. In our own findings we see this, and sometimes it's good, because machines don't care if you punish them; you need to punish the human who built the machine. But sometimes people point to the wrong human. For example, we found that when a semi-autonomous car makes an error and kills somebody, people want to blame the human in the car for not overriding the machine. But if it's flipped, if the human is driving and the machine could have overridden, people still blame the human; they don't blame the car for not overriding. So they always seem to go for the human in the car rather than the person who built the machine. Those kinds of problems result from how our psychology projects personhood or intentionality or whatever mental state onto these machines, and I think it can backfire or misfire, and then we may fail to hold the right party accountable. That's why we have to be careful.

Having said that, some people talk about robot rights and so on, and I'm not really part of this debate, but I do think that we should use whatever institutional means are necessary to implement human ethics, the human will, in the world. If treating a machine like a corporation, which ultimately is owned by a human, is a way of making the laws work and getting the result that we want, that's fine; but if it's not, then we should find another way. So I feel the ultimate objective is human thriving and human safety: us doing better in the world, feeling safer, getting better medical care, getting from A to B with a lower risk of being in an accident, and so on. Everything else is a political process to reach that goal.

Okay, so really quickly, before we take the questions: can I say that, aside from the political level, from regulatory frameworks, there is no AI ethics, there is just ethics?

Exactly.
I think there is only human ethics, applied to AI. AI may raise new kinds of ethical questions, but they're ultimately human ethical questions, like: how should a car behave one second before it hits someone? That's not a question we were even able to consider before, because it's an absurd question: within a second, no human can respond quickly enough. But maybe an AI can react quickly enough. So there are new ethical domains that we're entering, but ultimately we have to apply the same values that we care about. It's more like we have ethical superpowers now: we could have the machine behave as a human would, or randomly, or we could do something else, and that's a new opportunity for us to do better.

Thank you. So, Christian Grauvogel, let's see what is happening on Slido. Do you have any questions?

Yes, we have quite a lot of questions on Slido from the community. Most of them are about the thought experiment with an autonomous car. The first question asks: would we really trust machines more if they decided differently than we would? Wouldn't it be more trustworthy if my car behaved in my interest, and therefore like me?

Yes, it would be more trustworthy for you, in one situation, for the car to behave in your own self-interest. But, and now I'm moving to the normative, self-interest is a bigger, broader question. It's not just what your interest is in that particular moment when you're in the car. There is this idea from the philosopher John Rawls called the veil of ignorance: imagine that you are in a position where you look at two different societies, or multiple different societies, and you don't know who you're going to be. You might be the person in the car, or the person crossing the street, or the person in the other car. Which society do you want to be in, when you don't know which situation you're going to be in? And I would argue, I would guess, that it is in my best interest to live in a society where the car will minimize harm, because, by chance, if I'm one of ten people involved in the accident, I will be more likely to survive if the car minimizes harm, full stop. So that's a social contract that I'm willing to enter into, because I think it's in my best interest as well as in the interest of society. So I think it's rational to enter into the social contract, and I think that's the case. But if we think only about a very localized situation, then a different logic comes about.

That probably depends on whether you believe in Ayn Rand or Immanuel Kant, right? There might be different frameworks at play there.

Yeah, of course, we could disagree as well, and both be reasonable.
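[Editorial note: the "one of ten people" argument above can be made concrete with a little expected-value arithmetic. The sketch below uses an invented toy scenario, one occupant and nine pedestrians with two stylized policies, purely to illustrate the veil-of-ignorance reasoning; the numbers are not from the Moral Machine study.]

    # Hypothetical sketch: expected survival behind the veil of ignorance.
    # Toy accident: 1 car occupant, 9 pedestrians; exactly one "side" is harmed.
    OCCUPANTS, PEDESTRIANS = 1, 9
    PEOPLE = OCCUPANTS + PEDESTRIANS

    def survivors(policy: str) -> int:
        if policy == "minimize_harm":
            # Car sacrifices whichever side is smaller: 9 of 10 survive.
            return PEOPLE - min(OCCUPANTS, PEDESTRIANS)
        if policy == "protect_passenger":
            # Car always protects its occupant: only 1 of 10 survives in this toy case.
            return OCCUPANTS
        raise ValueError(policy)

    for policy in ("minimize_harm", "protect_passenger"):
        # Behind the veil of ignorance you are equally likely to be any of the 10 people.
        print(f"{policy}: P(you survive) = {survivors(policy) / PEOPLE:.2f}")

Not knowing which of the ten people you will be, the harm-minimizing society gives you a 90% chance of surviving this toy accident versus 10% in the passenger-protecting one, which is the sense in which entering the social contract is rational.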
Then there is another question, which touches upon the culturally specific aspect that Toby Müller already referred to: what could be the risk of country- or culture-specific software being adopted in autonomous vehicles, based on what you presented? And another question touches upon the historic context and asks: what role does historic context play in your model? Values derive from historical experience and they change over time; ethical updates for old cars via Bluetooth?

Yeah, I'll try to answer; it's a complex question. I think culture obviously changes our ethical values, and what we consider offensive today is very different from 50 or a hundred years ago. Rights are different, and hopefully, on average, we are progressing. This is definitely something we should take into account as we think about machines. Even our perception of the machines changes over time, because they are changing: the machines are becoming better at explaining themselves, explaining their decisions, maybe they just become better at their job, they drive better and make more reliable decisions, so we will hopefully develop greater trust in them over time. So this is why we shouldn't just pre-program autonomous cars once and then forget about it. We need to revisit, because the machines are changing and we are changing, and we need, as a society, to constantly reflect on our own values, and therefore on the values that we want to implement in these decision-making algorithms.

Then there's a question about bias. It goes like this: how do you prevent automated cars from making flawed decisions, e.g. the crash-test bias that puts female drivers in danger because the standard dummy is male and not pregnant?

Yeah, well, I think clearly there are many, many opportunities for us now to revisit how everything is designed, how our technology is designed, how the world is designed, and whether it is inclusive and gives due protection and opportunities to underrepresented groups and minorities. It's something that people in AI are working on very hard. There's a whole conference dedicated to fairness, accountability and transparency in artificial intelligence, with hundreds of papers being published, so it's certainly high on the agenda, and it's something we should take into account. Now, in our survey, what we find is that the public actually wants the car to favor women: there's a slight preference for vehicles to save women, especially pregnant women, but also women in general. And this is a question for society: what is the right answer? I have no idea; I'm just a scientist describing the numbers and presenting the mirror to society, so that society can have a chance to reflect on itself. Is this what we want, or is this something we want to negotiate? But obviously we want to aspire to better, and I think computer scientists and people who are building technology are increasingly aware, and there's a very strong movement within the AI community to address these kinds of issues. It's not easy, but it's a constant dialogue and it's receiving a lot of attention.

I think most of these questions actually center on something we've seen before in history, and that you describe in another article of yours that I read, and this is the question of the social contract. What does the social contract consist of? The Leviathan, so to speak, the power of the many, which in the Western world took a lot of bloodshed, really violent wars, to negotiate over the last three or four hundred years. And now there's this techno-Leviathan, sort of standing in front of us: a new social contract that you try to bring into the equation, right?
We say it's not just about the individual, we have to have a new social contract, which would be called something like the techno-Leviathan. Now, is this going to lead to another era of heavy unrest, bloody wars, violent negotiations, like we've witnessed in the past 200 years?

Now again, I'll have to talk about this as a kind of hobbyist philosopher and historian rather than an expert. But if you think of the idea of the social contract, and specifically the idea of the Leviathan, this very powerful sovereign whose authority emanates from the people, it was actually presented as a solution to violence, right? And it depends on where you stand on this. Thomas Hobbes, the philosopher who wrote the book Leviathan and laid the foundation of social contract theory, lived through the Civil War in England, which was a terrible time, and he saw this sovereign as a solution to the problem of violence: there has to be a monopoly on violence, given to this one entity that we trust, and we trust it because we mutually agreed to create it. Then of course that entity could itself become corrupt and cause violence, which is why we had to revise, and in a way reprogram, the Leviathan, if you like, to limit its powers and hold it accountable and create separations of powers and all of that.

And the question now, with AI, is that it could be a threat, but it could also be an opportunity. There could be a threat in the sense that, if the world becomes run by algorithms, and the algorithms are biased and favor certain groups and perpetuate inequalities and make the powerful even more powerful, they could become a tool of oppression.

Which is what most of the comments we've heard now seem to fear, right?

Or algorithms could be a kind of salvation for us, because they cannot be bribed, they don't care about being bribed, maybe, and when they make mistakes you can open them up and reprogram them, whereas we humans have our own prejudices, which are much more difficult to reprogram. A friend of mine, Sendhil Mullainathan, a professor at the University of Chicago, has been doing work on algorithmic discrimination in health care in the US, for example, and he wrote a New York Times article saying: algorithms can be corrupt, but they're easier to fix than corrupt people, because you can open the hood and change the code. In that sense there is an opportunity, and what will determine whether this will be our salvation or our demise is really a question of politics: how do we take care of this?

It's not easy to open the hood, though. We're going to talk about this at the very end of the discussion, how to open the hood; in Europe, at least, there's quite something at stake these days, something actually being presented right now. But maybe let's call for a last round of questions and comments from Slido, Christian. What do you think?

Yes, there is another question, which also touches upon the point of the social contract. It asks: how can we find a really fair compromise when decisions are actually made by tech companies and programmers? Who determines regulations if not all users are involved?

It's a very good question: how do you do this outside of information technology, too? It's a problem we have faced before, in many domains.
So I think one of the problems we have today is that a lot of the AI algorithms that influence our lives are run by corporations, and there is little regulation of the behavior of these corporations, and even the regulations we have now are still far behind what the technology is capable of. The problem is, if we start regulating them heavily, that could also be problematic, because regulations are often too slow to change, they're also influenced by politics, and they could get us stuck; they could stifle innovation as well. So there may be a sweet spot, and I think this kind of behavioral approach may be a good part of the solution, at least. We could say: look, we want algorithms not to do this. We're not going to tell you how to program them exactly, we're not going to mandate the computer code, but we are going to mandate the behavioral expectations. It's like with humans: we just say, look, we don't care exactly how you drive, just don't break the traffic rules; beyond that you can drive in your own style, right? You have some freedom, some leeway. So I think we need more tech-savvy politicians who are able to understand this nuance and to work with the technology companies to negotiate these boundaries, and maybe shift them over time: as you build more and more trust with companies, you give them maybe a little more freedom to try things out, and so on. But I don't have an answer. It's very early; it's probably the most important question of politics in the age of AI. So I don't claim to have an answer to it, but I think I can help ask the question.
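[Editorial note: one way to read "mandate the behavioral expectations, not the code" is as an acceptance test run against a system treated as a black box. The sketch below is an invented illustration in Python: the decision function, the audit data and the 5-percentage-point threshold are all hypothetical, not an actual regulatory standard.]

    # Hypothetical sketch: checking a behavioral expectation without reading the code.
    # Invented expectation: approval rates for two groups of otherwise comparable
    # applicants must not differ by more than 5 percentage points.
    from typing import Callable, Dict, List

    MAX_GAP = 0.05  # hypothetical regulatory threshold

    def approval_rate(decide: Callable[[Dict], bool], applicants: List[Dict]) -> float:
        decisions = [decide(a) for a in applicants]
        return sum(decisions) / len(decisions)

    def passes_expectation(decide: Callable[[Dict], bool],
                           group_a: List[Dict], group_b: List[Dict]) -> bool:
        gap = abs(approval_rate(decide, group_a) - approval_rate(decide, group_b))
        return gap <= MAX_GAP

    # A stand-in black-box decision system (e.g. a vendor's loan-approval model).
    def vendor_model(applicant: Dict) -> bool:
        return applicant["income"] > 30000

    group_a = [{"income": 25000 + 1000 * i} for i in range(20)]   # synthetic audit data
    group_b = [{"income": 28000 + 1000 * i} for i in range(20)]

    print("complies:", passes_expectation(vendor_model, group_a, group_b))

The auditor here never looks at how vendor_model works internally; it only specifies the behavior the system must exhibit on an audit sample, which is the division of labor the answer above describes.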
And the judges are in many cases following the algorithm's advice, and in some cases perhaps blaming it on the algorithm if it gets it wrong. That is a serious question related to the freedom of individuals and the treatment of citizens in a very sensitive context. Health care is another one. There's the study by Sendhil that I mentioned earlier, looking at an algorithm that determined which people would be flagged for extra medical treatment in the US, and the investigation revealed that the algorithm was unintentionally discriminating against Black people. These are just a few examples, and you can imagine how sensitive it can be if all of a sudden we discover that certain groups have been dying more because of an algorithmic mistake, or because of an algorithm's profit motive. Basically, anywhere a decision has to be made about something you care about, that decision could be made by an AI sometime in the future, and in many cases it is already happening.

Thank you, Christian, for being the advocate for those questions. I'm aware that there are many more tonight, and I'm not surprised at all, with this topic and with the talk you gave; it really hit a spot that is going to stay with us for quite some time, I believe. At the end of the discussion I'd like to discuss not so much the future, or the normative future, but actually the present of policy. That applies to your field of research too, and you wrote about it: the study of machine behavior may sometimes result in breaches of terms of service with big platforms, for instance by setting up fake accounts or personas. A young research team in Sweden just experienced that with Spotify and got into some trouble, at least, because it did just that. The question is how to regulate that, and there are certain things being put forward right now, actually as of this day or yesterday, on the European level by the European Commission. I'm talking, of course, about the Digital Services Act and the Digital Markets Act, which are part of a package that is being put forward. It still has to pass the European Parliament and the respective parliaments of the member states, but it's out there, and in some respects it is, I'd say, radical; it will determine a lot of what goes on in the next twenty years when it comes to these questions. One of the things the proposal wants to turn into law is, quote, "new powers to scrutinize how platforms work, including by facilitating access by researchers to key platform data," end of quote. This basically equals political leverage to make platforms share their data, something that runs completely against their business model, so to speak. And on the consumer side, of course, consumers can check why they're being targeted for certain products and should get the chance to opt out of that altogether.
So this is just a tiny bit of the many proposals in this Digital Services Act that is being discussed right now. Now, coming from the US and from your time at the Massachusetts Institute of Technology, one of the best research institutions worldwide: what do you think the chances are that this turns into a European USP, so to speak, into an advantage, even for larger tech firms that think, yes, this is a very good way to get more participation, to get civil society more involved in all those subjects we have talked about today? Or, on the contrary, is it going to be a looming disaster because the platforms will probably not cooperate? Where do you stand? This is about the present, I know, but it's also about predicting the future.

Oh, you got me, and prediction is really hard, especially about the future. I'm definitely going to be wrong, that's the problem, but let me have a go anyway. In the US there is a very strong aversion to regulation of technology, because regulation is seen as something that stifles innovation and increases compliance costs for companies and so on. There is a lot of scrutiny happening, with big tech firms like Facebook being called to testify in front of Congress and so on to explain themselves, but there is surprisingly little discussion of what mechanisms we can actually use to enforce anything. There's always a discussion, but then what follows from it? There are very few examples. Some of the platforms do their own research, so they have their own scientists investigating some of these questions, but the companies have veto power over what gets published, so if something could be damaging to their reputation or their market value, it will not be published. So you need third parties, and I know people who have worked on projects with some of these tech firms, external researchers who collaborated and worked on something for two years and then were simply told they couldn't publish it. So in the US it's voluntary, and there is one example, a social science one, which is a collaboration between Facebook and a group of academics run by Gary King at Harvard. It allows researchers access to anonymized data from Facebook, but you have to apply, and there is an independent panel; still, it is voluntary participation by Facebook, who are to be commended for doing that. The European approach is to require this, so that one rule applies to all tech firms. It will have a regulatory cost, a compliance cost, there's no question about it; it makes things more complicated. If you could just, as the Silicon Valley motto goes, move fast and break things, it would be easier for you. But for my field of research, for me, it sounds like a good thing, because if there is a mandate that allows me to go and say, look, here's an important question, and by law you have to let me investigate it, that's a very good thing. Because right now we are at the mercy of the platforms, and, as you mentioned, some people who study these platforms from the outside almost have to hack into them, for example by creating fake accounts. I have a colleague, Alan Mislove, who does research by creating personas and then querying a search engine, or a site like Amazon, from these different personas, because he's trying to see whether it engages in price discrimination, whether it gives different people different results or unfair pricing.
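To make the kind of persona-based audit described here a bit more concrete, here is a rough sketch. It is purely illustrative and not Mislove's actual methodology: the personas, the product identifier, and the fetch_price function are hypothetical stand-ins for however a real, terms-of-service-compliant audit would query a platform.

```python
# A minimal, hypothetical sketch of a persona-based pricing audit.
# Nothing here is a real API: fetch_price() stands in for whatever
# request a real study would send while presenting a given persona
# (location, device, account history, cookies).
import statistics
from dataclasses import dataclass


@dataclass
class Persona:
    name: str
    location: str
    device: str


def fetch_price(product_id: str, persona: Persona) -> float:
    """Placeholder: a real audit would query the platform while
    presenting this persona and parse the quoted price."""
    raise NotImplementedError("replace with an actual, ToS-compliant query")


def audit(product_id: str, personas: list[Persona], repeats: int = 20) -> dict[str, float]:
    """Collect repeated quotes per persona and report the mean, so that
    ordinary price fluctuation can be separated from systematic gaps."""
    quotes = {
        p.name: [fetch_price(product_id, p) for _ in range(repeats)]
        for p in personas
    }
    return {name: statistics.mean(values) for name, values in quotes.items()}


# Two hypothetical personas; a real study would vary one attribute at a time.
personas = [
    Persona("urban_new_user", "Berlin", "iPhone"),
    Persona("rural_returning_user", "Brandenburg", "Android"),
]
# mean_quotes = audit("B000EXAMPLE", personas)
# A large, persistent gap between personas would be the signal that
# warrants a closer, statistically careful look.
```

The point of repeating each query is simply to separate ordinary price fluctuation from a systematic gap between personas, which is the pattern such an audit looks for.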
The problem is that by doing this he is violating the terms of service, which under some interpretations of the law could be considered hacking, so he could be prosecuted. So he is suing, not Facebook, sorry, I think he is suing the federal government, to clarify the intent of the law, in order not to be prosecuted by the government as a hacker. There is a lot of legal unclarity there, and you can imagine that as a researcher you would be terrified: I'm just trying to write a paper, to answer a scientific question, and I could end up in jail. So it would be good to have legal mandates that regulate this transaction. Of course, the platforms have understandable reasons to be worried; you don't want just any scientist poking around, because there is also bad science, people who just want to find something fishy and whose methods are not so good, so you want to do this in a proper manner. But my prediction is that in the long run this will cause fewer social problems. It depends: in the short term it is going to be a cost, but in the long term I would bet on the European approach, because a completely unregulated approach to these things could have some serious consequences in terms of social unrest.

In Europe you probably wouldn't be thrown in jail right away, but you might get your funding cut, right? That's what some of the platforms actually tried to do in the case of the young Swedish researchers who wanted to work on Spotify. So could we say, as a very last thought of this evening, that the European wish, or fantasy, or way of regulating more could also be a way of establishing more trust in those machines? When the public sees that there is cooperation, whether that cooperation is forced or not might not be the main point in the end. Do you think that could be one outcome, that more trust in machines could be established along these lines?

I think so, yeah. If scientists could say with certainty that, for example, bots don't influence elections, a lot of people would relax, right? We'd say, okay, maybe this thing is overblown. Or if they could establish with a great degree of certainty that bots do influence elections, then we could do something about it. One way or another, you want to know. Then the question is: what are the goals, how far do you want to go, what are the red lines? Obviously platforms need to cater to their clients, and everybody has an opinion about what's most important. I want more diversity in my news, but somebody else doesn't want more diversity, they want more personalization. And there has to be a lot of room for personal choice and personal responsibility; we are also responsible human beings, right?
We have to take some responsibility for this and not blame everything on the algorithms. But there will be sensitive situations, sensitive domains, where we need to know what is happening, where we need to know whether the companies are going too far or have too much power. Today we don't know what impact these algorithms have on children, on developmental outcomes or social behavior and so on; there is some research, but it is still very much an open question. Which of these things do we just need to wait and learn about, and which ones are too dangerous and need to be limited in some way? That is a political question, but we can't even explore it until we establish some facts.

Thank you so much. Talking about trust: when I saw your presentation and the slides, I was sure you were going to run way too long, but you were actually under your estimated time, very punctual, and I went a little overboard now in this discussion. But it was so interesting to listen to your talk and to have this conversation with you and with the public at home. So thank you very much, Iyad Rahwan, for having been with us. Thank you so much for hosting me.