In 2012, for the first time, my team, led by Dan Ciresan, was able to win a competition in medical diagnosis. The goal was to take human tissue, slices from a female breast, look at the images under the microscope, and discover the harmful cells which are in a pre-cancer stage. So this was about cancer detection. Normally you need a human doctor, a trained histologist, who looks at all these images and says: that's a harmful cell, that's a dangerous one, that's a harmless one, harmless, harmless, bad, good, and so on. And he takes a long time to analyze these images. Now we can train our artificial neural networks to imitate the doctor. Today, lots of startups, but also huge companies such as IBM and Google, are focusing on that: improving medical imaging, radiology, cancer detection, plaque detection in heart artery scans, all kinds of applications like this. And this is going to transform all of healthcare, because very soon, in most domains, these artificial doctors are going to be superhuman. At some point, lawmakers will say this has to be mandatory, and only under exceptional circumstances will humans be allowed to do medical diagnosis like that. The market by itself is probably something around 1,000 billion euros or dollars per year. So that makes clear why there are so many startups and big companies moving into that field. Deep learning with deep neural networks that analyze images is transforming all of healthcare, and it makes me happy to realize that this research is making human lives healthier and longer. So how do we build a conscious little robot? Our robots have vision sensors and microphones, and they have pain sensors, and they have a recurrent neural network which takes in all this data coming from the sensors and tries to translate it into actions that move the robot around, trying to achieve goals.
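The imitation idea described above can be sketched in a few lines. This is purely illustrative, not the competition system (which used deep convolutional networks on real microscopy images): made-up 4-pixel "patches", a hypothetical labelling rule standing in for the histologist's judgement, and a single artificial neuron trained to imitate those labels.

```python
import random

random.seed(0)

def labelled_patch():
    # Synthetic 4-pixel "tissue patch"; the threshold rule is a
    # stand-in for the doctor's harmful/harmless label.
    patch = [random.random() for _ in range(4)]
    label = 1 if sum(patch) > 2.0 else 0
    return patch, label

train = [labelled_patch() for _ in range(2000)]

# Perceptron training: nudge the weights whenever the network
# disagrees with the doctor's label.
w, b, lr = [0.0] * 4, 0.0, 0.1
for _ in range(20):                          # a few passes over the data
    for x, y in train:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        err = y - pred
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

# After training, the network imitates the labelling rule on new patches.
test_set = [labelled_patch() for _ in range(500)]
acc = sum((1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
          for x, y in test_set) / len(test_set)
```

The real systems replace the 4 pixels with full image patches and the single neuron with many convolutional layers, but the training principle, learning from a human expert's labels, is the same.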
Now its main goal in life is to maximize the sum of all pleasure signals until the end of its lifetime, and to minimize the sum of all pain signals until the end of its lifetime. So it's feeling pain, negative numbers coming in, whenever it's bumping against an obstacle. It's feeling negative numbers coming from the hunger sensors whenever the battery is low. It doesn't like that. So it's trying to come up with internal connections which make it achieve goals such as: whenever the hunger sensors get active, run to the charging station and reload the battery, but without bumping into obstacles on the way there. Now, to achieve that, these little robots have a separate module on board, which is a prediction machine, a compression machine. This predictor just learns to look at all the data and predict what's going to happen next if I do that and that. And so over time it learns to become a better prediction machine. The better you can predict, the better you can compress the data: with fewer computational resources you can encode the same data, because whenever you can predict, for example, how apples fall down from the tree, then you don't have to store separately the bits that you could predict anyway, which means you can greatly compress the sequence of observations. Now, in the interest of data compression, our little prediction machine, our compression machine, our encoder of the data, is always trying to find patterns that it didn't know yet, where it can still learn a new regularity, a better compressibility than before. So this means that all the time it is trying to invent internal symbols and internal sub-networks which represent stuff that frequently occurs in the real world. For example, if there are lots of faces in the real world, then it's really efficient to set aside a few sub-networks in your artificial brain which encode prototype faces.
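The reward-maximization idea above can be sketched with tabular Q-learning on a toy world. This is purely illustrative, not the lab's actual robots: a 1-D corridor of 6 cells, a charging station at one end, and a painful obstacle cell the robot can scrape past. The reward signal combines pleasure (+10 for reloading the battery) and pain (-5 for hitting the obstacle, -1 per step for the hunger sensor).

```python
import random

random.seed(0)
N, CHARGER, OBSTACLE = 6, 5, 2

def step(state, action):            # action: -1 = left, +1 = right
    nxt = min(max(state + action, 0), N - 1)
    if nxt == CHARGER:
        return nxt, 10              # battery reloaded: pleasure signal
    if nxt == OBSTACLE:
        return nxt, -5              # bumped the obstacle: pain signal
    return nxt, -1                  # hunger signal while battery drains

# Tabular Q-learning: learn action values that maximize the summed signal.
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(500):
    s = 0
    for _ in range(30):
        if random.random() < eps:                      # explore
            a = random.choice((-1, 1))
        else:                                          # exploit
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2
        if s == CHARGER:                               # episode ends at charger
            break

# The greedy policy it learns: from every cell, head right toward the
# charger, accepting one brief pain signal at the obstacle on the way.
policy = [max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(N)]
```

The robot is never told "go to the charger"; that behavior emerges because it maximizes the lifetime sum of pleasure minus pain, exactly the objective described above.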
And then a new face comes along, and all you have to encode are the deviations from the prototype. So everything that frequently occurs in the environment gets a little prototype sub-symbol, basically, and this happens automatically, just as a side effect of compression. Now there is one thing that is always present when the agent is walking through the world and actively interacting with the world, which is the agent itself. And so, just for data compression reasons, it's really efficient to create internally a little sub-network that represents the agent itself: a self-symbol which stands for the things that this agent can do and how it can interact with the world, and so on. This allows it to greatly compress the data. It is motivated to create that self-symbol just because it's useful for encoding the data better. Now, during problem solving, this self-symbol wakes up whenever the control mechanism, which is generating the actions and therefore shaping the data stream, uses the model to plan ahead and think about which action sequences might be more rewarding than others. Whenever this planning mechanism is waking up the self-symbol, you can say this little agent is now thinking of itself. And to us, all the issues connected to consciousness and self-symbols and so on, they aren't really issues. We already have those in our stupid little tiny self-learning robots, just as a natural by-product of data compression during problem solving. We have built little artificial conscious systems like that for decades. They are just not yet as impressive as human conscious beings, because they have brains that are much smaller than our brains. For example, a large LSTM network of today, as it is used by Google for translating languages, has maybe on the order of one billion connections. Your cortex has a hundred thousand billion connections, more than that.
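The "predict, then store only the deviations" principle can be demonstrated concretely. This toy sketch (illustrative only, not the robots' actual compressor) uses a slowly drifting sensor stream: stored raw, it compresses poorly, but if a trivial predictor guesses "the next reading equals the current one", only the small prediction errors remain, and those come from a tiny alphabet.

```python
import random
import zlib

random.seed(0)
# Toy sensor stream: a slow random walk, like a drifting battery
# voltage or pixel intensity, clipped to byte range.
signal = [128]
for _ in range(4999):
    nxt = signal[-1] + random.choice((-1, 0, 1))
    signal.append(min(max(nxt, 0), 255))

raw = bytes(signal)

# Predictor: "next reading equals the current one". Store only the
# residuals (prediction errors), offset into byte range; they take
# values in {-1, 0, +1} only.
resid = bytes([signal[0]] + [(signal[i] - signal[i - 1]) % 256
                             for i in range(1, len(signal))])

print(len(zlib.compress(raw)), len(zlib.compress(resid)))
```

The residual stream compresses to far fewer bytes than the raw stream, because the predictor has already absorbed the regularity; the same logic makes a face prototype plus per-face deviations cheaper than storing every face from scratch.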
So in terms of the number of connections, we are still one hundred thousand times more impressive. However, every five years we are gaining a factor of ten. Every five years computing is getting ten times cheaper, which is a trend that has held at least since 1941, when Konrad Zuse built the first program-controlled computer that really worked. He could do maybe one operation per second, while today we can do a million billion operations per second for the same price. And in 25 years we will be able to gain another factor of one hundred thousand. That means that within 25 years we will have, for the first time, an artificial LSTM network which contains as many connections as the human brain, for the same price that our small networks cost today. So I think we can be really optimistic that within the next few years and decades we are going to see amazing feats, in many ways superhuman feats, of learning machines that not only slavishly do what humans told them to do, but that learn by generating their own goals and their own experiments, to figure out how the world works, what their place in the world is, how they relate to the other objects in the world, and how they can learn to use this model of the world to become more and more general problem solvers. This is something that I've been working on for many decades, and it's very pleasing to see how this is now becoming reality. So how can we make sure that artificial intelligences are going to become valuable members of society? We have to educate them, like we educate our kids. Whenever they do something wrong, we punish them. Whenever they do the right thing, we reward them. That's what we do with our kids. That's what we are going to do with our artificial intelligences, and we are already doing that, for example, in my lab. They try to maximize rewards and minimize pain, and that gives us a great handle on shaping their evolution.
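The scaling arithmetic in the passage above checks out, assuming the stated trend of a tenfold cost drop every five years:

```python
# 10x cheaper computing every 5 years, compounded over 25 years:
factor_per_5_years = 10
years = 25
gain = factor_per_5_years ** (years // 5)     # 10^5 = 100,000

# That factor exactly closes the stated gap between today's
# ~10^9-connection LSTM networks and the cortex's ~10^14 connections.
lstm_connections = 10 ** 9
cortex_connections = 10 ** 14
assert gain == cortex_connections // lstm_connections
```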
Of course it is true that in the long run you cannot predict what your kids are going to do, and we also won't be able to predict exactly what our artificial creations are going to do, because they are going to set their own goals, using concepts such as our artificial curiosity, which are necessary to make them intelligent. Only if you give an intelligent being the freedom to set its own goals will it be able to learn how the world works and what you can do in it, without slavishly obeying human commands. The only way of building a smart system is to give it some freedom to set its own goals. At some point, when this scales up, it will transcend humankind. This also seems obvious. We can shape the initial stages such that these artificial beings will be valuable members of our society, but in the long run they will not be very human-centric. Instead they will realize what we have realized a long time ago, namely that most of the resources are not in our thin film of biosphere. No, they are out there in space: less than one billionth of the sunlight hits the earth, and the rest is wasted at the moment. It's not going to stay like this. The AI civilization is going to expand to where the resources are, and there will be trillions of self-replicating robot factories in the asteroid belt, and the robots will shape their own destiny by setting themselves their own goals, and of course they are going to expand in a way where humans cannot follow. Space is hostile to humans, but it's really friendly to appropriately designed robots.
What we are going to see in the next few decades and centuries is the beginning of a huge new development, where a new form of life is going to spread from the biosphere into space. Within a few hundred thousand years they are going to cover the entire galaxy with senders and receivers, such that AIs can travel the way they are supposed to travel, the way they are already traveling in my lab, namely by radio, at the speed of light, from senders to receivers. And they're going to use more and more of the energy of the stars and other sources of energy. So the universe now wants to make a new step towards higher complexity, and we humans, we are important for that, but we are not the final step. We are helping the universe along to achieve this new level of complexity. What is now happening is something that is comparable to what happened about 3.5 billion years ago, when life was invented: a new form of life is going to make the universe itself intelligent, and it's a great privilege to live at a time when we can witness the beginnings of this incredible new step towards higher complexity. In the next five to ten years we will see lots of benefits of artificial intelligence. Deep learning neural networks, like the ones that we used to win cancer detection competitions already five years ago, are going to transform healthcare. They will be superhuman in many, many domains, and humans are going to live longer and healthier through these better artificial doctors. All the big companies are investing in AI, if only to sell products to customers. So Google and other search engines, and Apple and other cell phone makers, they want to create little smart companions that understand what you're saying, that always have the camera on such that they can perceive your environment, and that can help you in your daily life, talk back to you, and assist you in all kinds of situations where you might need a little assistant. So this whole goal of building smart assistants is a very important one, where the big companies hope they will be able to make lots of money. People are going to be happier and live longer and healthier, and they will be more addicted to their smartphones. That's what we are going to see in the next ten years.