Welcome to the second tutorial of the first week of the deep learning course. So what happened last time? The first thing, and hopefully the most important, is that we got to know one another. Welcome to the deep learning course. We got to think about the problem of game playing at an intuitive level, because intuition is always what deep learning is built on. We thought about how game playing can be expressed, and we saw how one can use a deep learning system to estimate a value. And then, lastly, we built simple game-playing agents. All of these are important pieces, they're necessary pieces, but they aren't enough if you want to build deep learning systems that are really good at playing games.

And I should offer a slight apology here. There's going to be a long development towards one goal, which is AlphaZero, in a way that is atypical for this course. But I thought it was important to have that in the first lecture, because it really gives you the overall view of things.

So what will we do today? We will start with the ideas from last time, and we'll push them forward. We'll focus on ways of planning into the future: not just the value right now on the board, but thinking about what comes next. Now, there are always many futures, and in fact the future has the structure of a tree. That's where we'll introduce Monte Carlo Tree Search. And finally, we will arrive at AlphaZero. The reason this is so cool is that it's, in a way, one of the coolest deep learning systems that exists in the world. And yet you will see that it's rather simple: a relatively small number of ideas that, when combined in just the right way, produce very strong gameplay. And we will have arrived there within a week, with the hope that you will see the overall structure of what we are doing before thinking about all the details.

But first, before we dive in for today, let us recap what we did last time and make sure that we remember what we were talking about.
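The claim that "the future has the structure of a tree" can be made concrete with a small sketch. This is not code from the lecture; it uses a made-up toy game (a counter that each move decreases by 1 or 2, ending at or below 0) just to show that each position branches into children, and complete games are the leaves of the resulting tree.

```python
def children(state):
    """All positions reachable in one move from `state`.

    Toy game (an illustrative assumption, not from the lecture):
    a move subtracts 1 or 2 from a counter; the game is over at <= 0.
    """
    if state <= 0:  # terminal position: no moves left
        return []
    return [state - 1, state - 2]

def count_futures(state):
    """Count distinct complete games, i.e. the leaves of the game tree."""
    kids = children(state)
    if not kids:
        return 1
    return sum(count_futures(k) for k in kids)

# Starting at 3, there are five complete games:
# 3-2-1-0, 3-2-1-(-1), 3-2-0, 3-1-0, 3-1-(-1)
print(count_futures(3))  # → 5
```

Even in this tiny game the number of futures grows quickly with depth, which is why exhaustive enumeration gives way to search methods like Monte Carlo Tree Search.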