Welcome back to the ITU headquarters here in Geneva. It's day two of the AI for Good Global Summit, and I'm really pleased to have Professor Stuart Russell, Professor of Computer Science at Berkeley in California. Thanks for being here. Now, I remember in 2001 how HAL tried to take over the shuttle, or whatever it was, in some sort of space odyssey. Can that really happen now with AI?

So, what happens with HAL is that HAL has a mission to carry out, and it's different from what the humans on board the spaceship think it is, and so they end up in a conflict. And this is the main source of concern that we have about the future of AI: that by giving objectives to machines that turn out not to be quite the right ones, we will end up having a conflict. You could call this the King Midas problem. King Midas gave an instruction that he wanted everything he touched to turn to gold, and he got exactly what he said, and then he regretted it because his food and drink and relatives all turned to gold. And so that's the nature of the issue: when we have machines that are more intelligent than us, and they have objectives that are not quite the right ones, then we have this problem. We're playing a chess match against the machine, except the stakes are the whole world, and we don't want to be in that situation.

I haven't heard this yet at this conference, so you're basically raising what is almost a taboo issue. Are you saying machines will actually be able to run our lives and be more intelligent than us?

I think in many real senses, yes, they will be more able to make decisions than we are. For example, as soon as they're able to read, which is already starting to happen and will probably happen within the next decade, they'll be reading text in a real sense, understanding the content and being able to extract information. As soon as that happens, they'll be able to read everything the human race has ever written. And we already know that, for example, in Go or chess, they can look further ahead into the future than we can. So now you have something that knows more than we do and that looks further into the future than we can. It's hard to see how you'd be able to outplay a machine like that. So the only alternative is to make sure that it's on your side, that you're not actually competing with it because it has some other objective, to make sure that it really understands what your objectives are and only wants to help you with those objectives. That's how we approach the problem.

Don't take this personally, because I'm not referring to you, but what happens if there's a mad professor out there who decides he wants to destroy the world by creating an intelligent robot?

That's a great question. There are a lot of Star Trek episodes and science fiction movies with that plot. And I think we have to think about this problem in the same way that we think about nuclear weapons, for example. A mad professor could destroy the world with nuclear weapons; this happens in all of the James Bond movies. But we put in place layers and layers of security. We have tens of thousands of people who spend every day of their lives preventing that from happening. So I think in the future we may see regulations about the actual structure of the software that people are allowed to build and deploy, that it has to conform to certain standards that guarantee safety and, let's say, good behavior.
And if someone tried to build something in secret, we would have to constantly, I think, be checking what software was out there, running on the web, and make sure that it was conforming.

So what would you like to come out of this three-day summit? Is it for people to take away that message, really?

I think I'd like people to understand that it's not something that is going to result in immediate laws and regulations and government policy, but it is a research question that we have to address. In the nearer term, there are very important things that the UN can do. Preventing the development of autonomous weapons, I think, is the most immediate need. Many countries are already trying to put AI into weapons so that the weapons themselves can decide where to go and who to kill. The problem with that is that it sounds nice in principle, because then your soldiers and pilots don't have to get killed, but what it means is that you can then have an arbitrarily large number of weapons deployed by an arbitrarily small number of people. So it becomes a weapon of mass destruction, where you can launch 10 million or 100 million autonomous devices to attack a large city. You have the same effect as a hydrogen bomb at a fraction of the cost, with much lower technology. So you're creating weapons of mass destruction that would proliferate much more quickly, and that seems like a very bad idea for human security. The other things have to do with, I think, development and education: there's a huge opportunity to use AI to bring personalized, high-quality education at extremely low cost.