My name is Edmond Awad. I'm a postdoctoral associate at the MIT Media Lab, in the Scalable Cooperation research group. Last year I co-developed a website called Moral Machine, which basically tries to understand the human perspective on moral decisions made by machines.

The Moral Machine is a website that generates random moral dilemmas faced by a driverless car, and it asks you what you think the car should do in each dilemma. So, for example, we might show you a car heading down the street when suddenly a few pedestrians jump in front of it and the brakes fail. The car has to either continue into those pedestrians or swerve into a barrier, killing the passengers, who have different characteristics, and it asks you to choose one of the two options.

There are two purposes. The first is to collect data about what kind of moral principles people think a machine should employ when it faces a moral trade-off. The second is to popularize the discussion about moral decisions made by machines.

I think the most interesting part is that there are cultural differences: people in different countries have different preferences, and this of course has its own implications for policy making across countries.

One of the questions we always receive from people is whether we are going to use the data we collected to program self-driving cars, which is also kind of scary, because it would mean that anybody on the internet could have had a say in these decisions, with all the human bias that entails. Of course, the goal is not to say that the majority should decide what kind of moral decisions the machine makes. The idea is to provide one input for policy makers and regulators, who are always interested to know what people's reactions will be.
We simply aim to report the public's view of, and reaction to, these kinds of decisions.