Can we make a moral machine? Can we build a robot capable of deciding or moderating its actions on the basis of ethical rules? Three years ago I thought the idea was impossible, but I've changed my mind. So what brought about this U-turn? First was thinking about simple ethical behaviours. Imagine someone absorbed in their smartphone, not looking where they are going, about to walk into a hole in the ground. If you see them, you will almost certainly rush over and pull them out of harm's way. To do that, you notice the danger, you predict what will happen if the person keeps walking, and you decide to intervene. In other words, even this simple ethical behaviour means predicting the consequences of someone else's actions and acting on that prediction. And it is reminiscent of Asimov's famous Laws of Robotics, in particular the First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. So how could we build a robot that follows Asimov's First Law? The robot would need to be able to predict the consequences both of its own actions and of the actions of the people around it, as on the previous slide. In fact, the technology that we need to do this exists, and it's called the robot simulator. So, we roboticists use robot simulators all the time to model and test our robot code in a virtual world before running that code on the real robot. But the idea of putting a robot simulator inside a robot, well, it's not a new idea, but it's tricky and very few people have pulled it off. In fact, it takes a bit of getting your head round.
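The idea of a simulator running inside the robot can be sketched in a few lines of Python. This is a minimal illustration only: the class and function names, and the toy constant-velocity world model, are assumptions for the sake of the sketch, not code from any real robot.

```python
import copy

class WorldModel:
    """Toy internal model of the world: each agent's position
    simply advances by its velocity on every simulated step."""

    def __init__(self, positions, velocities):
        self.positions = dict(positions)    # name -> (x, y)
        self.velocities = dict(velocities)  # name -> (vx, vy)

    def step(self, dt=1.0):
        for name, (x, y) in self.positions.items():
            vx, vy = self.velocities[name]
            self.positions[name] = (x + vx * dt, y + vy * dt)

def predict(model, steps):
    """Roll a copy of the internal simulation forward, so the robot can
    look ahead without touching its real state or the real world."""
    sim = copy.deepcopy(model)
    for _ in range(steps):
        sim.step()
    return sim.positions

# The robot simulates itself and a nearby human three steps into the future.
model = WorldModel({"self": (0, 0), "human": (5, 0)},
                   {"self": (1, 0), "human": (0, 1)})
print(predict(model, 3))  # predicted positions, not real ones
```

The key point the sketch captures is the separation between the real state and the simulated copy: prediction happens on the copy, in real time, before any real action is taken.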
The robot needs to have inside itself a simulation of itself, its environment, and the others in its environment, all running in real time. So, over the past two years, we've actually tested these ideas with real robots. In fact, these are the robots. We don't have a hole in the ground; we have a danger zone, and we use robots instead of humans. We use robots as proxy humans. So, let me show you some of our latest experimental results. Here, the blue robot, the ethical robot, is heading towards a destination. This is its goal. But it notices right here that the red robot, the human, is heading toward danger. So the blue robot chooses to divert from its path and gently collide with the human, to prevent it from coming to harm. This is exactly the same thing, but as a short movie clip. You can see, again, the blue robot is the ethical robot, and our red robot is the proxy human. Cute robots, aren't they? We also tested the same setup with an ethical dilemma. Here, our ethical robot is faced with two humans heading toward danger. It dithers, rather hesitantly, but of course it cannot save them both. There isn't time. Ethical dilemmas are a problem really for ethicists, not roboticists. So, how ethical is our ethical robot? Our robot implements a form of consequentialist ethics; in fact, we call the internal model a consequence engine. The robot behaves ethically not because it chooses to, but because it's programmed to do so. We call it an ethical zombie. Our approach has a huge advantage, which is that the internal process of making ethical decisions is completely transparent. So, if something goes wrong, then we can replay what the robot was thinking. I believe that this is going to be really important in the future: autonomous robots will need the equivalent of the flight data recorder in aircraft, an ethical black box. So, what have we learned?
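The consequence engine just described can be sketched as a toy program: before acting, the robot runs its internal simulation over each candidate action, predicts the outcome for the proxy human, and picks the least harmful result. Everything here is a hedged illustration, not the actual implementation from the experiments: the 1-D world, the two action names, the speeds, and the safety-first scoring rule are all assumptions.

```python
# Illustrative constants (assumptions, not experimental values).
DANGER = 5.0   # 1-D position of the "danger zone"
GOAL = 10.0    # the ethical robot's own destination
SPEED = 2.0    # the robot moves faster than the human (1.0 per step)

def simulate(action, robot, human, steps=5):
    """Internal simulation: predict final positions if `action` is taken."""
    for _ in range(steps):
        if action == "go_to_goal":
            robot += min(SPEED, GOAL - robot)
        elif action == "block_human":
            target = DANGER - 1.0  # intercept just short of the danger zone
            robot += max(-SPEED, min(SPEED, target - robot))
        # the human keeps walking toward danger unless the robot blocks them
        if abs(robot - human) >= 0.5 and human < DANGER:
            human += 1.0
    return robot, human

def choose_action(robot, human):
    """Consequentialist choice: predicted harm outweighs progress to goal."""
    def score(action):
        r, h = simulate(action, robot, human)
        harm = 1 if h >= DANGER else 0
        return (harm, abs(GOAL - r))  # lexicographic: safety first
    return min(("go_to_goal", "block_human"), key=score)

print(choose_action(robot=0.0, human=2.0))  # the robot diverts to intercept
```

Because every candidate action is scored through an explicit simulation, the engine is transparent in exactly the sense above: logging each `(action, predicted outcome, score)` triple would let you replay what the robot was "thinking" when something goes wrong.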
Well, the biggest lesson, in fact, the thing that caused my U-turn, is this: we do not need to make sentient robots to make ethical robots. In other words, we don't need a major breakthrough in AI to build at least a minimally ethical robot. We don't need to build Data from Star Trek. I'd like to leave you with a question about the ethics of ethical robots. If we can build even minimally ethical robots, are we morally compelled to do so? Well, with driverless cars just around the corner, I think it's a question that we're going to have to face really quite soon. So, thank you very much indeed for listening. Thank you.