We're going to have to go back to our roots to solve one of the biggest challenges we face today, which is the volume of data we are producing. And I'm going to show you how we'll have to go back to our roots to solve the next challenge as well, which is the speed at which we process that data. Every 10 to 20 minutes today, we produce the same amount of data we produced over the past 100 years. In the next 10 years, we'll produce that amount in 5 seconds. What is absolutely clear to almost every technologist out there is that we as humans can no longer read and digest this information. We need help. We need serious help. And if you really dig into it, the technologies that are taking the leading edge are the ones that are getting that help. The essential help comes in the form of algorithms. In the past, we could write algorithms quite tractably: you could take a very good mathematician or computer scientist, a theoretician, and develop your algorithm. An algorithm is really a series of steps, with rules at each step. Calculate this, take this data, merge it with that data, come to this decision; it's a decision tree, and you come out with a digested form, an interpretation. It's basically taking your zeroes and ones and transforming them into something you find useful. Now, stated very simplistically, there are basically three kinds of algorithms today that go beyond the kind we used to use in the past. We need very sophisticated algorithms, and we actually need machines to help us build them. The first, which is very popular today, is deep learning. Many of you have probably heard of it. This is what Google is going into, along with Microsoft and Facebook; most of the big players are using these deep learning strategies.
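The hand-written, step-by-step kind of algorithm described above can be sketched as a tiny decision tree. The fields and thresholds here are entirely hypothetical, just to illustrate the idea of fixed rules at each step:

```python
# A hand-written algorithm: a fixed series of steps with a rule at each step.
# The field names and thresholds are made up purely for illustration.

def classify_reading(temperature, humidity):
    """Merge two pieces of data and walk a small decision tree."""
    if temperature > 30:
        if humidity > 0.7:
            return "storm likely"
        return "hot and dry"
    if humidity > 0.9:
        return "fog likely"
    return "normal"

print(classify_reading(35, 0.8))  # walks the hot/humid branch
```

The point is that a human author fixes every rule in advance; the algorithm never changes unless someone rewrites it.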
What deep learning really is is a series of neurons, or nodes, arranged in successive layers. Information comes in, and each layer combines the information from the layer below in a certain way. In the end, you can train one of these nodes to recognize all the different features and conditions of a face, so it becomes a face-detecting node. If you show it enough images of faces, or pictures with faces in them, you can train and develop the algorithm. In fact, the algorithm becomes so complex that you cannot actually understand it, but the machinery can run it. So this is becoming a very powerful tool, and it will be a very powerful tool that lives in the cloud, which you access. When you want to recognize something, you won't necessarily realize that it ran through a deep learning algorithm to decide what you were looking at, or to identify a certain pattern of information. It's going to become more and more important, because the trend is that everything is becoming digital. Our selves are becoming digital. Our health is becoming digital. Being able to recognize patterns so that we can make decisions on them is going to become increasingly important. The second kind is what you can think of as brain-inspired design. Deep learning is partly brain-inspired, but it's a very structured network; brain-inspired design is more of a massive set of interconnections. It's a concept of what the brain could be doing, and we try to mimic that concept. IBM's Watson, for example, is probably a very good example of this kind of cognitive computing: we look at the brain and see that it has sensory areas, reasoning areas, decision-making areas and reward areas, and we mimic those mathematically and try to get the machinery to make these decisions.
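The idea of successive layers combining information can be sketched with a minimal forward pass. The weights below are arbitrary stand-ins; in real deep learning they are learned by showing the network many labelled examples, such as images of faces:

```python
import math

# A minimal sketch of "successive layers combining information":
# each node weighs the outputs of the layer below and applies a
# nonlinearity. The weights here are arbitrary; in practice they
# are learned from many training examples.

def layer(inputs, weights, biases):
    """One layer: every node combines all inputs from the layer below."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

pixels = [0.2, 0.9, 0.4]                      # stand-in for image features
hidden = layer(pixels, [[1.0, -0.5, 0.3], [0.2, 0.8, -1.0]], [0.0, 0.1])
output = layer(hidden, [[1.5, -2.0]], [0.0])   # one "face-detecting" node
print(round(output[0], 3))
```

Stacking many such layers, with millions of learned weights, is what makes the resulting algorithm too complex for a human to read, even though a machine runs it easily.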
So Watson can take all the millions of pages of Wikipedia, for example, run them through this kind of conceptual model of the brain, and make decisions on them, and it's actually incredibly powerful and very useful. The third direction is the emerging one, and it depends on much more concrete information about the brain; you could think of it as brain-derived design. Mimic the brain as accurately as possible. After all, it is the product of four billion years of evolution, and it has this intricate connectivity in its circuits. To get to brain-derived design, you need to understand a lot more about the brain: how it's put together, how the neurons are structured. The essence is really that you have neurons and you have a lot of cables. You have enough cables in your brain to wrap around the moon a couple of times. So there's a lot of cabling connecting and forming this intricate network, and what it's really doing is carrying out an algorithm through these different networks. Just to give you an idea, you also have the synapses that connect these neurons: in a piece of brain the size of a pinhead, there are 40 million synapses connecting about 30,000 neurons, in just the smallest piece you could look at. Synapses are the messengers between cells, and by controlling these messengers you can control the algorithm. But as you can obviously realize, nobody can program this. You have to let it learn, and that's why the brain is so powerful: learning involves adjusting the algorithm at all of these different synapses. We are beginning to be able to piece together how these neurons fit together, how they connect and how they function. What you see here is a real simulation on a supercomputer, and the colors are voltage fluctuations.
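The idea that learning means adjusting the algorithm at the synapses can be sketched with the simplest possible rule, a Hebbian update: a synapse strengthens when the two neurons it connects are active together. The learning rate and values below are arbitrary; real synaptic learning rules are far richer:

```python
# Hebbian sketch: "cells that fire together wire together".
# Rate and weights are arbitrary illustrations, not measured values.

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen the synapse when pre- and postsynaptic neurons co-fire."""
    if pre_active and post_active:
        return weight + rate
    return weight

w = 0.5
# Repeated co-activation gradually changes the circuit's "algorithm".
for _ in range(5):
    w = hebbian_update(w, pre_active=True, post_active=True)
print(w)
```

Nobody programs the final weight; it emerges from experience, which is exactly why such circuits must learn rather than be written by hand.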
What you're really seeing is the brain carrying out algorithms, but very, very sophisticated ones: as you learn, you adjust the communication between the different nodes so that you can execute the algorithm better and better, faster and faster. This is just an example of the kind of circuitry you get in such a tiny piece, again about the size of a pinhead. You have about 7 million connections, with 40 million synapses connecting them together, and we're starting to get a good idea of the blueprint of these circuits. These circuits could be printed into silicon chips and run, closely mimicking the algorithm that has come out of evolution. You can of course take that further. This is a simulation of a region of the brain with about 5 million cells, and they're interacting together, executing the algorithm. Now, you can see that this is very different from what you would see if you looked at a computer processor transmitting information. Here, patterns are forming, and these patterns reflect the kind of algorithm being executed in order for the animal to make a decision about what it is seeing, or to be motivated to change or achieve a goal. But you can take this further still, towards a whole-brain simulation, and this is the beginning of a whole-brain simulation at a still very simple neuron level. One can synthesize these branches, because the branches are really the result of the brain evolving to execute more and more sophisticated algorithms, but also to be able to learn. So actually, one of the reasons the brain is so complex is that it had to advance the algorithms while also allowing them to remain adaptable: you could throw the brain into a different environment, and the algorithms would change.
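The voltage fluctuations shown in such simulations can be illustrated with the simplest spiking model, a leaky integrate-and-fire neuron: the voltage leaks toward rest, input pushes it up, and crossing a threshold produces a spike. All constants below are illustrative, not biological measurements:

```python
# Leaky integrate-and-fire neuron: the voltage leaks toward rest,
# input current pushes it up, and crossing threshold emits a spike.
# Constants are illustrative only.

def simulate(currents, leak=0.9, threshold=1.0, v_rest=0.0):
    v, spikes = v_rest, []
    for i in currents:
        v = v_rest + leak * (v - v_rest) + i   # leak + integrate input
        if v >= threshold:                     # threshold crossing
            spikes.append(True)
            v = v_rest                         # reset after the spike
        else:
            spikes.append(False)
    return spikes

print(simulate([0.3] * 10))  # steady input drives periodic spiking
```

Large brain simulations run millions of units like this, coupled through synapses, which is where the flickering voltage patterns in the visualization come from.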
So what we are doing, to allow these algorithms to change, is to embed them into virtual structures: virtual mice, virtual animals, or a car, as I'll show you in a second. To do that, we are still at a stage where you have to run the brain simulation on a massive supercomputer. Going towards the human brain, this will be a billion-euro supercomputer, or perhaps half a billion by then; you still need supercomputers to create this virtual environment, with virtual robots controlled by the simulated brain. Here is an example of what you can already couple today: in one case it's just a plate that has to balance a ball; in another, a very small brain circuit is already able to self-drive a car. As these become more and more sophisticated, you will have a single chip that can be plugged into your car and allow it to drive itself as we go further into the future. This is an example of a robot in the Human Brain Project. One of the things we're doing in the European Human Brain Project builds on many years of work by researchers at the Technical University of Munich, who have been evolving physical robots. This is one of their most advanced, latest robots, called Roboy. He's actually a showcase: he goes around to stages and schools, and people can interact with him, talk to him and question him. A lot of intelligence has gone into this, but now we will try to see how we can add intelligence by using these brain-derived circuits that closely mimic how the brain functions. So why would you want to go from the fantastic chips we use today, which are fast and highly reliable, to a brain-like circuit which is in some sense much slower (it communicates in the range of hertz, not gigahertz) and appears very messy?
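The closed loop just described, a simulated circuit controlling a virtual body such as a plate balancing a ball, can be sketched in a few lines. The "brain" here is a deliberately trivial proportional controller standing in for a brain circuit, and the physics is equally toy:

```python
# Closed-loop sketch: a controller ("brain") reads the state of a virtual
# body (a ball on a plate) and sends back a motor command each step.
# The proportional controller is a trivial stand-in for a brain circuit.

def controller(position, velocity):
    """Stand-in 'brain': tilt the plate against displacement and motion."""
    return -0.3 * position - 0.6 * velocity

def step(position, velocity, tilt):
    """Toy virtual physics: the tilt accelerates the ball."""
    velocity += tilt
    position += velocity
    return position, velocity

pos, vel = 1.0, 0.0            # ball starts off-centre
for _ in range(50):
    pos, vel = step(pos, vel, controller(pos, vel))
print(round(pos, 6))           # the loop keeps the ball near the centre
```

In the real projects, the controller is a simulated neural circuit rather than a hand-tuned rule, and the virtual body is a detailed robot or vehicle model, but the sense-act loop has this same shape.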
Well, there are many different reasons, which you can see here. It's adaptive, it's iterative, it's self-learning: you can throw it into an environment and the algorithm will find its way until it learns. It's contextual, which means you can switch off the light and it will adapt so that the algorithm can still operate. And it can become very personalized, adapted to you. The most important technological reason, really, is power and cost efficiency: there are ways to get these circuits onto computer chips so that they take about 10,000 to 100,000 times less energy to run than conventional chips. This is an example from a project run by Karlheinz Meier at the University of Heidelberg, where for many years they have developed the technology to print and program these circuits with neurons, put them onto a chip, build the chips onto a wafer, and then build those into a big, brain-like computer. So this is not just an idea; it has already happened, and I'll show you many examples. There's a small board you can get today, a USB stick, with which you can start doing brain-like programming. The state of the art is the BrainScaleS chip, developed in Heidelberg, where you can now have 4 million neurons, some of the most advanced neurons you can put onto silicon, with a billion plastic synapses, so you have a very large network. It would be incredibly expensive, energy-wise, to run simulations of that, and this chip can do it 10,000 times faster than you can simulate it on a computer, and about 10,000 times more cost-efficiently. Most importantly, it becomes highly user-configurable, so even a non-expert can start programming these circuits. The second generation is developed by Steve Furber at Manchester University; he was the architect of the ARM processor that is in all your cell phones. Here he has taken a different route; all of this
is called neuromorphic computing. He takes commodity hardware, the ARM processor, and configures it to act as neurons and connections, and they can embed this; they've reached up to about 50,000 chips, or a million cores. In principle, this can already capture a trillion neurons, which is more than the human brain, but only with one input to each neuron; so in practice they could go to about a billion neurons with close to 100 billion synapses. As we go further over the next 10 years, we will have the hardware to capture the scale of the numbers that are in the brain. Those two projects are both run within the Human Brain Project, but there are other projects. At Stanford, Kwabena Boahen's group has built an amazing neuromorphic chip where they basically have only neurons sitting on the chip; they can build this into a board with a million neurons, and it's about 100,000 times more energy-efficient than any other chip out there. The connectivity, which you can configure to match the brain's connectivity, sits outside on a conventional system, which makes it energy-expensive again and not easy to scale; but nevertheless, this is one of the other exciting developments of the past few years. IBM has developed a very exciting chip as well, which basically allows you to put about a million neurons together. The more neurons you have, the more sophisticated the algorithm; the more sophisticated the connectivity, the more sophisticated the algorithm; and the more plasticity you put in, the more you can adapt the algorithm. These are one-bit synapses, so it's really just a binary system, but you can still scale it to very, very large systems today, and they've built a very nice software layer on top where you can now program these to start building the algorithms you need for whatever big-data challenge you have. This can run in real time on very, very large
data sets. Qualcomm has also produced a chip; it's a bit of a secret project, and we don't know exactly what is inside. It's called a neural processing unit, and they say it will be added to your cell phone alongside the normal units: you have the central processing unit, a graphics processing unit and some other processors, and now a neural processing unit as well, which allows you to do this kind of human-like analysis on data at very low cost, on very high volumes of data, very fast. So there are already these five major initiatives, going in different directions, some of them complementary, some with certain strengths and other weaknesses; but they exist today, and they are being evolved at an incredibly high speed to make it possible for us to digest big data and make decisions on it, petabytes of data today, or exabytes in the future, as fast as possible. So there's clearly hope that not only can we deal with the volume, with DNA storage, but we're also going to be able to deal with the speed of making decisions on such massive volumes of data. There are many applications. In airports, you will probably find in the future a neuromorphic chip that senses and analyzes odors and decides whether there is a threat. The brain-like capability of an owl to detect the location of a sound is being implemented in neuromorphic systems, so that you could easily have a device able to track the position of anything happening around it in terms of sound. With something like Watson, where a massive amount of factual knowledge and data comes together, to make decisions on that in the palm of your hand, on your phone, you would still need a neuromorphic processor to decide what it is you want to know, not just what it is possible to get out of the system. Digital holography is another possible technology that could be supported by neuromorphic computing. A big development is
that in the brain itself, the whole area of intervening by implanting circuits to adjust brain circuits, neuroprosthetics for Parkinson's disease, epilepsy or other diseases, would have neuromorphic processors to help decide when and where to stimulate. There's a whole range of other applications. I think the most exciting thing, what we're really going to feel more concretely than anything else, is that we're just not going to need to go through all these settings where you have to put in your preferences. You know, entering your preferences in an iPhone, or in whatever application you go into, is going to get more and more complicated, and you get very irritated: I fill in my preferences, I don't want this, I don't want this, I don't want this. In the future, you will have these accompanying chips that adapt to the world not because you tell them to, but because they observe what you do. So I think that's going to be something exciting about having this new kind of solution to how we're going to deal with the massive speed of big data. Thank you.