Welcome to the AI for Good Global Summit here at the ITU headquarters in Geneva. This is the second edition, of course. And who's come back for a second time? Professor Russell, of course, a leading expert in this field at the University of California, Berkeley. Professor, thanks for coming back. If you've come back, it's because you enjoyed the first time round. What's changed in the last year?

So I think, as everyone knows, AI is in the news daily, almost hourly, with advances in many areas. Some of the visible ones we see, like defeating the world Go champion, achieving human levels of capability in dictation and machine translation in some languages, and a huge proliferation of applications of AI in all areas of life. The acceptance of AI assistants like Alexa and Cortana and the various other home assistants that have become part of people's lives.

We're also seeing a lot of discussion about the future: whether AI is going to disrupt employment on a massive scale, whether AI is going to take over the world and threaten the human race itself. In the near term, a lot of developments in the area of autonomous weapons, where the United Nations has been leading negotiations among the nations on possibly banning autonomous weapons. And that's moved forward. I think we're starting to see a kind of hardening of the battle lines. There are those who are very strongly opposed to autonomous weapons, which includes essentially the entire field of AI and robotics; all of the researchers, I think, are convinced this is a terrible idea. And a number of nations have come out in favour of a treaty. But other nations, the US, the UK, Russia, seem to be opposed to further negotiations. So that's quite disturbing.

On the question of whether AI is going to take over the world, we've had very public disputes between Elon Musk and Mark Zuckerberg, for example. And I think there's a danger that, as happened with nuclear power and genetically modified foods, people harden their positions, refuse to listen to the other side, and caricature the arguments made by the others. I think you can just ask two simple questions. If you ask the AI community, are you guaranteed to fail in your long-term goal of creating human-level AI? They would largely say, no, of course we're not guaranteed to fail. If you ask, do you know how to control something that's more intelligent and more powerful than you are? No, we don't. OK, well then let's work on it. It's not a debate; it's a simple engineering question. If we are pushing in this direction towards human-level AI, we have to solve the problem of how to control it.

One of your colleagues earlier talked about how this is the 50th anniversary of 2001: A Space Odyssey, where HAL, of course, takes over the spacecraft. So, do you think that vision is a reality or not?

Yes, I think in that respect the movie is actually quite realistic, because HAL has a mission to carry out and he is opposed to attempts by the humans to interfere with the mission. That's how he's been programmed, so it makes perfect sense.

So that's terrifying.

Yeah, I think so. But we shouldn't think of this as something that's likely to happen tomorrow or next week. AI systems still have a long way to go before they understand enough about the world to pose a real threat to us.

On the other hand, do you think AI overall, on balance, is a force for good?

So I think that's the wrong way to think about it.
Is nuclear technology a force for good? Well, if we choose to use nuclear technology for making cheap electricity without pollution, then yes; if we choose to use it to make weapons and kill each other, then no. Like many powerful technologies, AI offers us a choice. And the real question is, are we good? Not, is the technology good? At the moment, with conferences like this, I think we are seeing a real awakening of the AI field to this question, to the challenge of using AI for something other than just making money or killing people. And I'm very, very optimistic that once it has awakened and realises the social responsibility that we have, our research community is going to step up.

Now, your research community is based in the hub of it all, of course: California. Are there dividing lines there as well?

I think there probably is a dividing line on this question of the long-term future. There are people who have been doing technology all their lives and they feel very strongly: I am pro-technology, pro-progress, I am anti-Luddite. And therefore anyone talking about a long-term threat from AI is a threat to that way of thinking, a threat to progress. I think that's a mistake. And I like to draw the analogy to nuclear power. It's not a perfect analogy, but someone who says nuclear power stations could explode if we don't take extreme precautionary measures is not anti-technology; that's just common sense. And the same is true for AI. You're making AI systems that are more capable than human beings, and you're giving them objectives without thinking through all the ways the system might achieve those objectives, which, of course, is impossible, because if you could think through everything an AI system might do, you'd have to be more intelligent than you are. So it's simply common sense that if you're building something with that kind of capability, you have to take precautions and be aware of the risks.

And denying the risks is precisely why the nuclear industry essentially destroyed itself: they didn't take enough precautions, and they had at least two major catastrophes. Nuclear power plant construction has essentially ground to a halt since Chernobyl, and just recently four major European countries have renounced nuclear power altogether. So the nuclear industry, by denying risks, has destroyed nuclear power and all the benefits that it could bring. It's not risks versus benefits; you can't have the benefits unless you address the risks.

Which means that at summits like this or elsewhere, we have to put in the guidelines to make it safe now.

Yes, I think that's exactly right. People have got to start thinking about the ways it can fail and how to prevent that. We've seen failures with bias in systems making important decisions, financial decisions or even parole decisions. We've seen image recognition algorithms describing people as gorillas. We've seen all kinds of very public failures. We've seen cars killing their drivers or pedestrians. And I hope that those events will actually cause a change in the culture, away from this idea that we're in favour of progress, that we're necessarily good and anything we do is good, towards actually taking ourselves seriously and saying we have responsibilities.

Professor Russell, it was a pleasure. Thank you very much for your time.

Thank you.
So that's Professor Russell from the University of California, talking to us about HAL and 2001, and the future guidelines which need to be put in place to make sure that AI stays good.