Good morning, everybody. It's time for the AI forecast. I'm not a weatherman, but I have one advantage on my side: I'm one of those guys who is making the future. So this forecast is a little bit biased towards what we are doing.

I'm sure all of you have heard about deep learning, and we have come to expect ever new results in machine vision, speech recognition and game playing, for instance. Other things have been happening too: the generation of all kinds of images and speech has developed enormously. So some people might think that AI is all done, that there is not much left, since we already have human-level performance in many things. Other people say that we are far away, that there is a huge gap between what current systems can do and what humans can do. What I'm going to argue today is that there is one very special mechanism which the brain has and AI doesn't have this year, but will have next year. Let's see how it will happen, or whether it will come true.

How many of you have read Daniel Kahneman's book Thinking, Fast and Slow? Okay, so pretty many. You remember System 1 thinking, which is fast, and System 2 thinking, which is slow and deliberate. What we have nowadays, this deep learning technology, is a lot about System 1. It's about distilling a lot of data and a lot of expert knowledge into neural networks, learning to make rapid decisions intuitively: recognition, actions, all kinds of things. The crucial thing is that you need a lot of data. You can act very quickly once you've learned, but learning takes a lot of data and time. We humans have this kind of System 1 thinking too; that's what experts develop when they study their field for at least 10,000 hours or so.

System 2 is slow, deliberate thinking. That's what is needed when new situations arise, something you've never seen before. The first time you started to ride a bike, for instance, or the first time you started to learn a foreign language, you were using your System 2. Every time you encounter a new situation where you don't have existing rules and habits, you rely on your System 2. And this is something which current-day AI doesn't have that much, but it will be there.

Let me give you an example of a system which is getting there a little bit. System 2 works by relying on internal models, planning, and then selection among all kinds of possible future outcomes. It's slow because you have to simulate the world and then select the best outcomes, and so on. You may have heard about AlphaGo, which managed to beat the world champion Ke Jie in the game of Go this year, and Lee Sedol, one of the strongest players in the world, last year. AlphaGo is an example of an AI system that relies on planning, so it has a kind of System 2. It's not general artificial intelligence, because the model it's using is hand-coded: it knows the rules of the game of Go, and they have been handcrafted into it. But with this ability to plan forward, it's able to learn the game of Go from scratch. It doesn't know how to make good moves, and it doesn't know how good a particular board position is when it starts, but it does know the rules of the game. So it starts playing against itself in its head. It's thinking, thinking, thinking: millions and millions of games. It's thinking about the game, and it's developing this expertise because it's able to plan, because it has an internal model of the game and is able to simulate it.
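To illustrate what "playing against itself in its head" amounts to, here is a minimal sketch of planning with a hand-coded model: simulate many possible futures, then pick the move whose futures look best. The toy game and the rollout counts are my own illustrative assumptions; this is far simpler than AlphaGo's actual search.

```python
# A minimal sketch of "System 2" planning over a hand-coded model, in the
# spirit of (but far simpler than) AlphaGo's search. The toy game and the
# rollout counts are illustrative assumptions, not AlphaGo's actual method.
import random

# Toy game: players alternately remove 1-3 stones; taking the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def rollout(stones, our_move):
    """Simulate one random game to the end; return 1 if we win, else 0."""
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return 1 if our_move else 0
        our_move = not our_move
    return 0

def plan(stones, n_rollouts=500):
    """Deliberate: imagine many futures for each move, pick the best-looking one."""
    best_move, best_value = None, -1.0
    for move in legal_moves(stones):
        remaining = stones - move
        if remaining == 0:
            value = 1.0  # this move wins the game outright
        else:
            # After our move, it is the opponent's turn in the imagined future.
            wins = sum(rollout(remaining, our_move=False) for _ in range(n_rollouts))
            value = wins / n_rollouts
        if value > best_value:
            best_move, best_value = move, value
    return best_move

print(plan(10))  # usually settles on 2, the game-theoretic optimum here
```

AlphaGo replaces the random rollouts with learned policy and value networks and a much smarter tree search, but the shape of the deliberation is the same: simulate, evaluate, choose.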
It is able to develop this knowledge by itself. That's the kind of thinking, this slow, deliberate planning, that AI systems need to have.

Okay, so AlphaGo does it. Why don't we have it everywhere? The reason is that if you take any other situation, a new game for instance, or if you invent a new game for it, AlphaGo is able to come up with all kinds of clever, creative solutions inside its game, but it will not be able to learn a new game. And we humans encounter new situations every day. In our childhood, everything was new; we had to learn everything from scratch, and learning is something that neural networks in principle are very good at. Learning to predict is something neural networks do every day, but the models they learn this way are not compatible with planning; nobody has managed to combine the two so far. That's why we have AlphaGo, which doesn't need to learn the rules of the game from scratch, and that's why we have neural networks that can learn to predict all kinds of things, from electricity consumption to text. But they don't mix together.

And this is what I'm arguing is going to change next year: we are going to be able to combine these. We are going to have AI systems that can learn internal models and then think with them. When they encounter a new situation, they will be able to plan and think just like us, slowly and deliberately, and come up with a solution to the new situation. That's what's going to make them really flexible. Nowadays AI systems rely on humans for all of this, so they need a lot of data to cover all the possible situations of the future. Next year, possibly, some systems will already be up there. That's my prediction: AI will become more autonomous. When it encounters new situations, it will be able to rely on its internal models and planning and come up with clever, creative solutions. This will mean that AI will no longer rely so much on humans. If you take an AI project in practice today, it needs a lot of human work; the AI systems of today are not just plug and play. You have to have human experts training the system, choosing the data, selecting the right kinds of networks and so on. In the future, the AI is going to be able to do this itself.

That's not the only thing which is missing, so I'm not saying that next year we will have human-level AI, or the singularity for that matter. There are other things that neural networks can't do as well as humans. One of them is thinking in terms of symbols, about objects and their relations. Again, this works perfectly fine in handcrafted systems, but neural networks are not able to learn all these objects and their interactions from pixel data; if you just feed in a lot of videos, the neural network will not learn that. That's probably not going to happen next year. Maybe before 2020, who knows, any year now. Once we have that, we will have a template for human-level AI, but we will not yet have human-level AI, because any mammal has these kinds of abilities. So it's not that we are going to have the singularity next year or the year after, but we are going to have something which is approaching a mammal brain. And building on top of that, I think, will be very interesting indeed. It's going to be big, a major change, but the singularity is maybe not here next year. Thank you.
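To make concrete what "learning an internal model and then planning with it" might look like, here is a minimal sketch of model-based planning on a toy control problem: fit a dynamics model from interaction data, then plan by simulating action sequences inside that learned model. Everything here, the toy environment, the least-squares model standing in for a neural network, the random-shooting planner, is my own illustrative assumption, not a specific system the talk describes.

```python
# A minimal sketch of the combination the talk predicts: a model *learned*
# from data (here, ordinary least squares standing in for a neural network)
# used inside a planner. Environment, horizon and sample counts are toy choices.
import numpy as np

rng = np.random.default_rng(0)

def true_step(state, action):
    """The real environment (hidden from the agent): a point mass with drag."""
    pos, vel = state
    vel = 0.9 * vel + 0.1 * action
    return np.array([pos + vel, vel])

# 1) Collect experience with random actions, as a learning agent would.
states, actions, next_states = [], [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1, 1)
    s2 = true_step(s, a)
    states.append(s); actions.append([a]); next_states.append(s2)
    s = s2 if abs(s2[0]) < 5 else np.zeros(2)  # occasional reset keeps data bounded

# 2) Learn an internal model: next_state ~ [state, action] @ W.
X = np.hstack([np.array(states), np.array(actions)])
W, *_ = np.linalg.lstsq(X, np.array(next_states), rcond=None)

def learned_step(state, action):
    return np.concatenate([state, [action]]) @ W

# 3) Plan by simulating futures in the *learned* model (random shooting):
#    sample action sequences, imagine their outcomes, keep the best first action.
def plan(state, goal=3.0, horizon=10, n_candidates=200):
    best_a, best_cost = 0.0, np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1, 1, horizon)
        s = state
        for a in seq:
            s = learned_step(s, a)  # "thinking": a rollout in the internal model
        cost = abs(s[0] - goal)
        if cost < best_cost:
            best_a, best_cost = seq[0], cost
    return best_a

s = np.zeros(2)
for t in range(30):
    s = true_step(s, plan(s))  # act in the real world, replanning every step
print("final position:", s[0])  # approaches the goal of 3.0
```

Here the "model" is trivially linear, so least squares learns it exactly; the talk's prediction is that the same loop will work when the internal model is a neural network learned from rich data, which is exactly the combination that systems like AlphaGo sidestep by hard-coding the model.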