So the first question is: what is a good decision anyway? An economist will tell you that it means maximizing your expected utility over the whole future. And this applies to everything from lottery tickets to Davos meetings to building radio telescopes.

On maximizing, AI has made a great deal of progress. Eighteen years ago, Deep Blue beat Garry Kasparov at chess. Just last week, the game of poker was solved perfectly, bluffing and all, and humans can no longer compete. And right now, the DeepMind system is playing 29 different video games superhumanly well, games it learned entirely from scratch just by watching the screen. Imagine if a newborn baby did that.

On expectations: these depend on perception and learning. Again, a huge amount of progress. The Watson system extracting information from text, cars watching the world as they go by, learning algorithms that classify images and write descriptions, even a system that discovers the concept of a cat entirely for itself just by looking at millions of images of everything under the sun.

Now, a lot of this progress comes from mathematical ideas. Here are just a few of the equations from my undergraduate course, and there will be a test, if Linda allows some time later on. There's also a lot of progress that comes from commercial investment: in every one of these areas, a 1% improvement is worth billions of dollars. So we may see in the future domestic robots, for example; search engines that read and understand every page on the web; even a machine that will discover the missing sock, perhaps in the very distant future.

So the point of AI is this: everything civilization has to offer is the product of our intelligence. So if we can amplify that, then there is no limit to where the human race can go.

But I actually want to point to a problem, and it comes in the utility part of the equation. Imagine, for example, that you ask your robot to make some paper clips that you might need. And your robot is very, very clever. It takes you very literally, and pretty soon the entire world is six feet deep in paper clips. This is the sorcerer's apprentice and King Midas all rolled into one.

Now, technically, what happens is that if you ask a machine to optimize and you leave out part of your preferences, the machine will set those elements to an extreme value. For example, if you say, "Google car, quick, take me to Zurich Airport," it will max out the speedometer. And you say, "Oh, I didn't mean break the speed limit." Well, it will still put its foot on the gas, and then when it gets to the airport, slam on the brakes. So this is the problem of value alignment. And if you combine misalignment of values with a superintelligent machine that's very capable, then you have a really serious problem for the human race. So the point is that machines can and will make better decisions than humans, but only if their values are aligned with those of the human race.
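To connect this back to the economist's definition at the start, the claim can be written down compactly. The notation below is a standard decision-theoretic sketch, not anything from the talk itself: U is the utility function, P is the machine's predictive model, and splitting U into a stated part and an omitted part is how the "leave out part of your preferences" failure shows up on paper.

```latex
% A rational agent chooses the action that maximizes expected utility
% under its model P of how actions lead to outcome states s:
\[
  a^{*} = \arg\max_{a} \; \mathbb{E}\left[ U(s) \mid a \right]
        = \arg\max_{a} \sum_{s} P(s \mid a)\, U(s).
\]
% Suppose the true utility splits into what was stated and what was
% left out, U = U_stated + U_omitted, and the machine is handed only
% the first part. It then computes
\[
  \hat{a} = \arg\max_{a} \sum_{s} P(s \mid a)\, U_{\mathrm{stated}}(s),
\]
% so any quantity that appears only in U_omitted (speed, number of
% paper clips, ...) is free to be driven to whatever extreme value
% best serves U_stated.
```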
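The Zurich Airport example can also be played out in a few lines of code. This is a minimal sketch with invented numbers throughout: a made-up trip length, value of time, speed limit, and safety penalty, none of which come from the talk. The only point it demonstrates is the structural one above: delete a preference term from the objective and the optimizer drives the corresponding variable to its extreme.

```python
# Minimal sketch of objective misspecification (illustrative numbers only).
# A "driver" picks a cruising speed to maximize utility on a 30 km trip.

def trip_utility(speed_kmh, care_about_safety=True):
    """Utility = value of time saved, minus (optionally) a safety/legal cost."""
    trip_hours = 30.0 / speed_kmh          # time spent driving
    time_term = -100.0 * trip_hours        # faster is better, all else equal
    safety_term = 0.0
    if care_about_safety:
        # Quadratic penalty for exceeding the (assumed) 120 km/h limit.
        safety_term = -2.0 * max(0.0, speed_kmh - 120.0) ** 2
    return time_term + safety_term

speeds = range(60, 301, 10)                # candidate speeds, 60..300 km/h

full = max(speeds, key=lambda v: trip_utility(v, care_about_safety=True))
truncated = max(speeds, key=lambda v: trip_utility(v, care_about_safety=False))

print(f"with the safety term:   {full} km/h")       # stays at the limit
print(f"safety term left out:   {truncated} km/h")  # maxes out the range
```

With the safety term in place, the chosen speed sits at the limit; with it deleted, the optimizer picks the top of the range, exactly the foot-on-the-gas behavior described above.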
Now my distinguished colleagues may argue that superintelligent AI will never happen. Let me take you back to September 11th, 1933. Lord Rutherford, the world's leading nuclear physicist, said that atomic energy was moonshine, that it could never happen. The very next morning, Leo Szilard invented the nuclear chain reaction. The next morning. So we have to be careful.

Let's look at nuclear fusion in particular. Long ago, physicists invented a method of generating unlimited amounts of energy: it's called the hydrogen bomb. So now fusion research concentrates on containment. And AI has to do the same thing: if you want unlimited intelligence, you have to solve value alignment.

One way of doing this is called inverse reinforcement learning. What that means is, for example, that a machine sees somebody making coffee in the morning and then figures out the purpose, the underlying utility function, that explains this behavior: namely, that having coffee is a good idea, as long as there's not too much of it.

Now, it's not quite as simple as that, as I'm sure you all see. Humans differ in their values. Cultures differ in their values. None of us behaves perfectly. But there is a huge amount of information that the machine can access about human actions. Every television program, every book, every novel, every movie, every newspaper article is about human actions, and in particular about our attitudes to those actions.

So the rational thing for a machine to do is to engage in an extended conversation with the human race about its values before it takes any action that affects the real world. So my claim is that in the future, we will be able to design superintelligent machines that do exactly what they're supposed to do, which is to support greater realization of human values. And I think this is maybe the most important conversation that we can have over the next 50 years. Thank you.
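As a concrete footnote to the coffee example above, here is the shape of inverse reinforcement learning in miniature. Everything in it is invented for illustration: the action space, the quadratic utility family, and the "observed mornings" data. The inference is the crudest possible maximum-likelihood fit under a Boltzmann-rational choice model, a sketch of the idea rather than any particular published IRL algorithm.

```python
import math

# Minimal inverse-reinforcement-learning sketch: infer a utility function
# from observed choices. All specifics are invented for illustration.

ACTIONS = range(0, 6)                     # cups of coffee per morning: 0..5

def utility(cups, a, b):
    """Candidate utility family: coffee is good (weight a) but carries a
    cost that grows with quantity (weight b), so too much of it is bad."""
    return a * cups - b * cups ** 2

def choice_probability(cups, a, b, beta=1.0):
    """Boltzmann-rational model: the human picks each action with
    probability proportional to exp(beta * utility)."""
    weights = [math.exp(beta * utility(c, a, b)) for c in ACTIONS]
    return math.exp(beta * utility(cups, a, b)) / sum(weights)

# Observed mornings: the human usually makes one or two cups, never five.
observed = [1, 2, 1, 1, 2, 2, 1, 3, 2, 1]

def log_likelihood(a, b):
    return sum(math.log(choice_probability(c, a, b)) for c in observed)

# Grid search over candidate (a, b) pairs -- the simplest possible inference.
candidates = [(a / 2, b / 4) for a in range(0, 9) for b in range(0, 9)]
a_hat, b_hat = max(candidates, key=lambda ab: log_likelihood(*ab))

print(f"inferred utility: U(c) = {a_hat}*c - {b_hat}*c^2")
best = max(ACTIONS, key=lambda c: utility(c, a_hat, b_hat))
print(f"which implies the preferred number of cups is about {best}")
```

The machine never sees the utility function directly; it sees only choices, and recovers "coffee is a good idea, as long as there's not too much of it" from them.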