The fact that I'm a physicist already tells you that the EU Human Brain Project is probably slightly different from the two presentations we had before. As you will see, the HBP has a very strong component in computing, and there are two aspects to computing. One is that there are, of course, computers, which have improved enormously according to power laws like Moore's law, so they are available; we can use them today to do science, and we all do that. But we also know that the brain is in a way a computational device itself, and as you will see in my presentation, one of the goals of the HBP is to improve computing as such, by actually building new computer architectures and using them to learn more about the brain.

Before I really start, I also wish to thank the organizers for giving us the opportunity to present the project here. It's great that these two or three projects are in a single session. Just last week I received a mail from Japan, where they are now planning a Japanese brain project, and they asked me various questions; so my advice is that at the next conference you reserve three hours or so for the world's brain projects.

So let's start with the HBP. The HBP is in fact a FET Flagship. FET is an abbreviation, of course; it is European jargon for Future and Emerging Technologies. That already tells you that the project is a little different from what we have seen before: the funding for this particular project does not come from neuroscience. It comes from funding that has previously been spent on new technologies, in particular computer-based technologies. FET has been an increasingly important program in the past, and we are glad that it has now moved very much in our direction. So, to answer the funding question that was asked before: I think here we have a project which actually moves additional money into the field. It is partly being taken from other activities, conventional computing for example, but I personally believe this is a very good development.

The idea of having these flagships is actually an old one; it goes back to 2009. I remember we had early meetings in Brussels in those days, and what I have on this slide are original quotes from the European Commission. So this is their language, not mine, but it was the basis for the work we did over the last three-plus years. What the European Commission asked for was large-scale and, the second quote is very important here, science-driven: visionary research projects with a single unifying science goal. They always referred to the landing on the moon, which of course was not done by Europeans, but they wanted projects that are in a way equivalent; whether they achieve that remains to be seen. That was the original motivation. And of course, like with the moon landing, they wanted effects on society, technology and innovation, economic exploitation, and benefits for people. The numbers were big from the beginning. The duration, and this is a real revolution for European funding, was planned to be 10 years, and the budget was suggested to be 100 million euros a year, so a billion in total. But right from the beginning, this being Europe after all, they said it has to be a federated effort.
It's not only the European Commission that can shoulder this kind of activity; there have to be contributions from national and regional funding agencies as well. And that, as with all similar European activities, makes an activity like this a real challenge, because you really have to work within a very large structure, and Europe always means very different political and scientific systems.

Now, the Human Brain Project was selected as one of two projects; the other one is on graphene, the two-dimensional carbon structure. As I said, the Human Brain Project started its work as a flagship candidate back in 2009. So it is not, as some people believe, just an idea which we wrote up in this nice blue report, published, and then we got a billion euros. It is really based on very, very hard work. We just counted the number of meetings, discussions and planning sessions we had during the preparation phase, and that number is well above 100; there were more than a hundred meetings to prepare this project. And the proposal, which is not this report (the report is the public document, you can download it from the website), the scientific proposal with all the very concrete planning, actually has about 700 pages. So there was really a big effort in preparing this.

So what is the Human Brain Project about? It is a coordinated effort to understand, improve and exploit the brain, and you will see these three aspects of the project throughout my talk. By understand, I mean the basic neuroscience aspect; by improve, I mean the brain disease aspect of the project; and by exploit, I mean the computing aspect, which means learning from the brain how computing can be done in a more energy-efficient way, possibly in a self-organized way, and certainly in a fault-tolerant way. Like one of the previous speakers, I will of course put some emphasis on my own work, because I'm not a neuroscientist, so I would have a hard time presenting research on brain diseases or neuroscience to you. I will put some emphasis on the computing aspect, which I also think is interesting for this conference, since this is a conference about informatics, but I will cover the other areas as well.

There is a website where you can find information. As I said, the project selection was in January, a couple of months ago now; those were happy days for us. In the meantime, we have been fighting very hard with the European Commission to get the contract in place, and we now have approval for a 30-month ramp-up phase. So that's not 10 years; it is really a 30-month ramp-up phase, which will start very soon now, on the first of October, in a couple of weeks. The project size is indeed huge, and the funding is not so huge if you compare it to the number of groups that are involved. The initial EU contribution is 54 million euros for the 30 months, so that is a relatively small number at the beginning, but I think it is reasonable for the ramp-up phase and we will make good use of this money. There is another aspect to it, which Henry Markram, who is the head of this project, as you all know, has said from the beginning: a project like this will be a signal to the world, but also, of course, to the European funding agencies. And we have seen in many European countries a large interest in contributing to this project.
So there will be, and there already is, additional funding showing up on top of this 54 million. If we add up the promises we have received so far, it is already much more than the funding we received directly from the European Commission. So it is really exactly what we wanted: a program that has started and that will certainly have a big effect on how this field evolves in the future.

Now, are we starting from scratch? Definitely not. It would be silly to just throw together 80 groups and hope that they produce science within 30 months; that would never happen. It is absolutely essential to build the project on groups that have already worked successfully together in the past and have demonstrated that they can do the interdisciplinary collaboration between neuroscientists and electrical engineers, physicists, mathematicians and many other areas of science. There are four projects which, I think it is fair to say, really form the basis of the HBP, the Human Brain Project, and it is interesting that all four of them started around 2005. So they are by now eight years old, and they have all had very good successes, I would say. There is, of course, the Blue Brain Project at EPFL, run by Henry Markram. There are two European integrated projects, which were in a way mini, I should almost say nano, versions of the Human Brain Project; they had the same kind of idea of using neuroscience data, building models, building theories and then coming up with new computing architectures. Those were the FACETS and BrainScaleS projects. And then there is the SpiNNaker project, run by a very famous computer engineer, Steve Furber, the one who designed the processors you have in your cell phones; it is a national project in the UK. It is those four projects together which make us confident that we can achieve the goals we have set ourselves for the first 30 months.

If there is one thing you need to remember about the HBP, it is the red item at the top there: it is a collaborative project on brain research, of course, about understanding the brain, understanding brain diseases and creating brain-like computing technologies. But the very important thing is that it is ICT-based; you will see that everywhere. The basic idea of the HBP is that we have these computing technologies which are very powerful in terms of hardware, in terms of software tools, and in terms of available data storage concepts. Those things exist independent of neuroscience; they have developed independently of neuroscience, but they are available to be used by science. And we have seen in other science areas, like particle physics, which is my original field, that a lab like CERN has enormously influenced ICT: as you know, they invented the concept of web-based access to the internet, and they pioneered grid computing and many concepts of data storage. So science can actually influence technology; it can do excellent basic science and still have a huge impact on the technology. That is what we also want to do: we want to use what is available, but we also want to drive the technology. So the basic idea is to aggregate, synthesize and understand, and what I mean by that is to aggregate the data.
So this project is not so much about generating its own data. We know that there are other projects out there, which you have just heard about, and of course we absolutely want to collaborate with those projects. The idea of the HBP is mostly to use what is available: to aggregate it, to use it for synthesis, to use it as a basis for simulation, and then, by analyzing and understanding the simulations, to eventually gain some understanding of how information is stored and processed in the brain. This is essentially the infrastructure aspect of the project. We really want to build something we call a lasting infrastructure, and the CERN model is a little bit behind this. CERN was founded back in the 1950s with the goal of understanding what the universe is made of. Have we understood what the universe is made of today? No, definitely not. But many more open questions came up, we have learned a lot, and we have built what is called the standard model of elementary particle physics. In a way, the same should hopefully happen with the infrastructure that we build in the HBP.

Let me go through the three areas, neuroscience, brain diseases and computing, and a few of the things people plan to do in the HBP. In ICT-based neuroscience, the big goal is to build and simulate unifying human brain models; that is really the ultimate goal of the project. In order to do that, you first of all have to use data, and we use a lot the notion of fragmented data, because the data at the moment is not really available in a form that can be easily transferred into models and computer simulations. So we have to prepare the data and put it into databases or atlases, as it says in point six, providing publicly accessible brain atlases based on data that is available in the world and partly also gathered by the project. Now, of course, there are knowledge gaps in the data, and there are various ways of filling those gaps. A very important one is to use ICT, computer-based concepts themselves, to fill those gaps with predictive tools, as the Blue Brain Project has done in the past, I think very successfully. Then, even with the existing data and the predictive tools, there will still be gaps, and those will be filled by strategically selected experiments. So we also hope that the project will prioritize future biological experiments. And all of this should then be integrated into brain models which are simulated on computers.

In ICT-based neuromedicine, the goal is to provide support for ICT-based diagnosis and treatment of brain diseases. There are four points here, which are in a way very similar to the neuroscience aspect. Again, we start by collecting and organizing existing data, existing fragments of data; then we derive biologically grounded signatures characterizing brain diseases, to understand them better, to put them into categories, and to understand similarities and differences between brain diseases; and then we provide tools for pharmaceutical and nutrition companies to prevent, diagnose and treat brain diseases.

The third aspect of the project is the computing aspect, the one closest to my heart. I will explain these pictures later, when I go through the computing in some detail.
The big thing in the project is brain-inspired future computing technologies. We will start by improving or building dedicated supercomputers, supercomputers based on the concepts of von Neumann and Turing, as they have been very successful in the past, and we will scale them up ("just scale them up" is an easy phrase) to reach the exascale. Exaflops means 10 to the 18 floating-point operations per second, but we prefer the word exascale to exaflops because, as you will see, operations per second is only one aspect of computing. There is another aspect which is at least as important if you go to the exascale, and that is memory, because you have to store in your computer all of the data from which you build your brain models, which in fact is a major problem.

We also want to change the way people use supercomputers. In the past, when running simulations of any kind, not only in neuroscience but also in materials science and many other fields, people ran simulations, stored the data in large databases, and then spent ages analyzing the data. It is as difficult to analyze simulated data as it is to analyze real data recorded from nature. So what we want to do is make supercomputing interactive, so that if you have a brain simulation running, you can directly interact with the code: rewiring your circuit, exposing the circuit to different data, changing the way the circuit interacts with a simulated environment, and then seeing how the system changes its behavior and whether that behavior is perhaps biologically relevant. So interactive supercomputing is another important aspect.

Then, using the knowledge we derive, we want to build new computing architectures from the insights into brain function. These should have properties similar to the brain, and we want to employ them not only for neuroscience but also for what I call new generic computing and communication devices, for example for temporal and spatial pattern recognition. And then there is an aspect which I personally think is very important in the project: we do have a theory division, and we want to work with the theorists to develop an experimentally and theoretically founded way of describing the computation going on in the brain.

The project is technically structured into sub-projects; you don't want to read all of this. There are the neuroscience sub-projects at the top, there is an ethics and society sub-project, and there is a theory sub-project, as you can see. In the middle there are six platforms, as we call them. Those platforms are important, we think, because they are in a way our telescopes; they are what we feel should be sustainable in the project even after 10 years or more. So we want to build these six ICT platforms to accelerate the understanding of the brain and its diseases and to develop the computing technologies, always these three aspects. And the important thing is that the platforms will be developed by those who are in the project, by us, but they will be made available; they will be open to researchers globally. Obvious partners are the other brain projects, but also, of course, individuals from all parts of the world. The six platforms are the following.
There is a neuroinformatics platform, a medical informatics platform, a brain simulation platform, a high-performance computing platform, a neuromorphic computing platform, a robotics platform, and, not a platform but in the same spirit, a European Institute for Theoretical Neuroscience.

I will now spend the remaining time discussing the computing aspect of the project. If you look at the computing, there are three things we want to do. We want to work on high-performance computing, we want to work on neuromorphic computing, and, based on those computing devices, we want to build what we call synthetic cognitive systems which interact with them. We will have virtual robots which interact in a two-way, closed-loop fashion with a simulated environment, so there is an action-perception loop, and we really want to expose the systems that we synthesize or simulate to data. And of course we want to do applications in neurobiology, but also outside neurobiology, like data mining and prediction making.

Let me go shortly through the computing concept of the HBP. This is a two-dimensional space where you see the simulation speed on the horizontal axis and the size of the system on the vertical axis. There is a one-to-one line in the middle, meaning a system running at biological real time, like biology. On the right side are what we call accelerated systems, running faster than biology. On the left side are slowed-down systems, which is what you typically get if you run brain simulations these days. The size goes from a single cortical column, 10 to the 4 neurons, up to the human brain with 10 to the 11 neurons.

If you have a given computer, things typically look like this. You have a certain processor speed, and independent of the size of the problem you reach a certain performance in terms of computing time, almost one-to-one in this case. This is just given by the speed of the processor and by the complexity of the problem. People call that strong scaling: the performance you can reach for a fixed system size, running the thing as fast as you can. As long as you have enough processors, if you follow the line upwards, you can keep the speed as it is and make the system bigger and bigger by employing more and more processors. This is what people call weak scaling. At some point you run out of processors, but what you can then do is use your memory to put many neurons, for example, on one processor. Then you can still increase the size of the system, but at the cost of reducing the speed. At some point you run out of memory, and that is the end of the world for your simulation.

Now, there are different computers. For example, initially in the HBP we will have these two systems here, and there is another system, the one running in Lugano in Switzerland, where you see it has fewer processors but more memory. It is the same type of processor, so the same strong scaling; it has fewer processors, so it starts to use memory earlier, but it has more memory, so in the end you reach the same kind of complexity with that system. That is the processor-memory-speed trade-off.
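As a minimal illustration of this scaling picture, a toy model in Python might look like the sketch below. All the numbers in it are invented purely for illustration; they are not HBP figures, and the linear slowdown is only a caricature of the real behavior.

```python
# Toy model of the scaling behaviour described above. All numbers are
# invented for illustration; they are not HBP figures.

def simulation_slowdown(n_neurons, n_processors,
                        full_speed_neurons_per_processor=1_000,
                        max_neurons_per_processor=100_000):
    """Slowdown factor relative to biological real time for a toy machine.

    Strong scaling: even a small network cannot run faster than one
    processor allows (here 1,000 neurons per processor at real time).
    Weak scaling: with enough processors, the speed stays constant while
    the network grows.
    Memory limit: once processors are exhausted, extra neurons are packed
    into each processor's memory and the simulation slows down, until the
    memory itself runs out.
    """
    neurons_per_processor = max(n_neurons / n_processors,
                                full_speed_neurons_per_processor)
    if neurons_per_processor > max_neurons_per_processor:
        raise MemoryError("out of memory: system too large for this machine")
    return neurons_per_processor / full_speed_neurons_per_processor


if __name__ == "__main__":
    # From a cortical column towards whole-brain scales, on 10,000 processors.
    for size in (1e4, 1e7, 1e9, 1e10):
        try:
            s = simulation_slowdown(size, n_processors=10_000)
            print(f"{size:.0e} neurons: {s:.0f}x slower than biology")
        except MemoryError as err:
            print(f"{size:.0e} neurons: {err}")
```

Running this reproduces the three regimes: a constant-speed plateau for small and medium networks, a growing slowdown once neurons are packed into memory, and a hard stop when memory runs out.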
Now, in the HBP we want to build an exascale system which has a performance like that. Again, you see an improved strong scaling limit, you see the use of the memory, and you might reach the human brain. Of course, the question is what the precision and the level of detail of the simulation are, and I will discuss this in a minute. These are neurons simulated at the cellular level with quite some detail. You can of course go to simpler neurons, to point neurons, and thereby shift all the curves upwards: you make the simulations faster, so you push the strong scaling limit, and you can go to bigger systems, but at the cost of simpler simulations. There are two other computing aspects in the HBP, and those are these vertical bars here. Those are the neuromorphic systems that we plan to build. A neuromorphic system is a system where a neuron is one entity on the substrate, so independent of the size of the system you will always have the same execution speed; that is the extreme of exploiting strong scaling. And as you see here, we have two different neuromorphic systems, one running at biological real time and the other running in an accelerated mode at a factor of 10,000.

The computing systems in the HBP, like all the other platforms, by the way, will be distributed; they will not be located in one place. This is an example of the HBP simulation and data platforms, where the large exascale computer that we plan to construct, of course only after 10 years, will be located in Jülich in Germany, and there are four other locations, in Lausanne and Lugano in Switzerland, in Spain and in Italy, doing different aspects of the simulations.

Now, brain simulation, or simulation of any kind, is always a question of detail: how detailed are the simulations you are going to do? Biological systems operate at very different scales: there is the scale of individual molecules, there is the scale of cells, and there is the scale at which you integrate over larger brain areas. Depending on the scale you look at, the computational performance you need, the speed and the memory usage, are very different. So there are different tools that will all be employed in the Human Brain Project, like the NEURON simulator, able to simulate very detailed neurons, and the NEST simulator for point neurons. There are mean-field and Bayesian approaches where you do not go down to individual cells, and down here there are reaction-diffusion approaches and even simulations at the level of individual molecules. Of course, if you want to simulate a neuron, the storage you need is dramatically different depending on the detail of your model. A cellular-level neuron model typically takes a megabyte per neuron; a point neuron is much less, maybe 100 kilobytes or so; whereas if you go to the molecular level, a single neuron may take as much as 100 terabytes. And it is clear that you will not be able to run a full brain model with neurons that need 100 terabytes just to store their state. The solution, of course, is to do multi-scale simulations, which is on the menu of the HBP. If you look at the cellular-level simulations and plot the roadmap of the HBP in a frame where you have the computational performance, starting from gigaflops here, over teraflops, to petaflops, which is where we are now, and on to exaflops, and on the vertical axis you plot the memory, you see that these things more or less line up.
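As a quick back-of-the-envelope check of these memory figures, the per-neuron numbers quoted above can be multiplied out for a whole human brain of roughly 10^11 neurons; the little script below is purely illustrative arithmetic, not an HBP tool.

```python
# Back-of-the-envelope memory requirements for a whole human brain
# (~10^11 neurons) at the levels of detail quoted above.

NEURONS_HUMAN_BRAIN = 1e11

BYTES_PER_NEURON = {
    "point-neuron model":    1e5,    # ~100 kilobytes per neuron
    "cellular-level model":  1e6,    # ~1 megabyte per neuron
    "molecular-level model": 1e14,   # ~100 terabytes per neuron
}

def human_readable(n_bytes):
    """Format a byte count with a decimal (power-of-1000) unit prefix."""
    for unit in ("bytes", "KB", "MB", "GB", "TB", "PB", "EB", "ZB"):
        if n_bytes < 1000:
            return f"{n_bytes:.0f} {unit}"
        n_bytes /= 1000
    return f"{n_bytes:.0f} YB"

for model, per_neuron in BYTES_PER_NEURON.items():
    print(f"human brain, {model}: {human_readable(per_neuron * NEURONS_HUMAN_BRAIN)}")

# The cellular-level model lands at roughly 100 PB, the storage figure
# mentioned shortly afterwards; the molecular level is clearly out of reach
# for a whole brain, which is why multi-scale simulation is needed.
```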
Okay, so we are currently at the level of petaflops, so we can have a cellular-level simulation of a smaller mammalian brain, but there is a long way to go to the human brain, which means that development of computing architectures will have to be done. And there is an interesting observation: we all know that the computational performance will grow, there are plans to implement exascale computers, but memory actually scales more slowly than logic, and therefore than computation. So it will be a real challenge just to store the information about the brain, the hundred petabytes you need to store a brain on a computer. You will not be able to do this with DRAM alone; we will have to come up with really creative, mixed memory technologies.

Going to the right side: would there be a use for improving the strong-scaling situation, for making individual processors faster, for example? Why would you do that? Well, there are things which are not part of the simulation now, like the glia, for example, so you could make the models even more complex. But there is also another aspect, and that is the dynamics of the circuits. They develop as a function of time through plasticity, learning and development, and if you want to address that question, and I think it is a very important one, you have to work on the strong-scaling front. So we are also going to do this in the HBP, and that is the aspect of neuromorphic computing I will now shortly go through.

The idea of neuromorphic computing is to go to an extreme computing architecture where a single cell corresponds to one entity on the silicon substrate; you can literally point to the neuron on the silicon. There is no global synchronization, there is no clocking anymore, and time is a continuous variable in neuromorphic systems, like it is in the real biological brain. The arguments for neuromorphic computing are, I think, well known: potentially very low power, fault tolerance, plasticity, learning and development, so there is no algorithm that you have to write. Speed is very important here, as a fundamentally new and different approach to beat the strong-scaling limit, and, in principle, scalability. Why did people not do neuromorphic computing in the past? Well, there is of course the lack of neuroscience knowledge, which calls for flexibility and configurability, and there are other aspects of a more technical nature which I will not discuss here.

In the HBP we are planning to build two neuromorphic computers, which will be, I think, quite unique worldwide, and these are complementary approaches. There will be two sites, two geographical sites, with two different strategies for neuromorphic computing. There is what we call a many-core system, where we use many individually clocked digital processors, ARM processors, which communicate via an address-based, small-packet, low-density communication protocol and effectively run at biological real time. These systems are able to emulate robotics experiments, for example. Then we have what we call a physical model, where we have analog electronic elements which have physical time constants, so the speed does not come from a local clock but from the physics of the time constants.
They have a high-density communication fabric, and the interesting thing is that the time constants are tuned such that they run at 10,000 times biological real time.

We are a little bit in the situation of these two gentlemen here, I think. This is John von Neumann and Robert Oppenheimer at the beginning of the 1950s, when they built, at the Institute for Advanced Study in Princeton, what many people think was the first programmable computer. I still show this picture because it is a very nice one. These round units at the bottom are actually the memory tubes, holding roughly 40 kilobits of memory. And of course they used it to do nuclear simulations. This was definitely one of the first large-scale machines. And why was it so important? It was the first time that non-experts, people who were not computing or hardware experts, were able to run their own programs. In a way, we are trying to do the same thing for neuromorphic computing in the HBP. Now, the real story I have to tell you is that the first ever stored-program computer was actually on the other side of the Atlantic, in Manchester. It was the so-called Small-Scale Experimental Machine, built by Williams, Kilburn and Tootill, and it ran its first program back in 1948. This is what the thing looked like; it was called the Manchester Baby. The amazing thing is that in the HBP, one of our neuromorphic systems will actually be in Manchester, in the same lab, built by Steve Furber. They will build this rack system with a million ARM processors, and they will make this system operational in only 18 months, which sounds completely unbelievable. Of course, they do not start from scratch. They have designed the SpiNNaker chip, with 18 cores on a chip, relatively simple integer-only cores running at moderate clock speeds, with on-chip memory and, most importantly, a very clever communication system, and they build boards with 48 of these chips. This will be our purely digital, many-core neuromorphic system running at real time.

There is another concept, and that will be built in Heidelberg in Germany: a mixed-signal analog system based on silicon wafers. We will build 20 wafers containing 4 million neurons and 1 billion synapses, again communicating with a conventional computer to run closed-loop experiments to study the dynamics of such a system. Also that system will be ready after 18 months, we hope, and it too is based on previous work; the foundation here is the EU BrainScaleS project. This is just a computer drawing, but it exists in real life. Down on these wafers are the neurons and the synapses, and the thing on top is a digital control system with which you can do neuroscience-type experiments: you can record spikes, you can record action potentials, and then analyze the data you gather.

Are the two promises fulfilled? Can we improve the energy efficiency of systems like that, and can we run them faster? Well, energy efficiency is an interesting aspect. If you go to the human brain and calculate how much you spend on an elementary operation like a synaptic transmission, in the human brain you spend about 10 femtojoules, 10 to the minus 14 joules. If you look at the very detailed cellular-level simulations, like the ones being done in the Blue Brain Project, you spend a joule. So there are 14 orders of magnitude in between.
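To put these orders of magnitude side by side, the energy figures quoted here (together with the simplified-model and neuromorphic figures mentioned in the discussion that follows) can be compared with a few lines of arithmetic; the 10,000x acceleration factor is the one stated above for the analog system. This is only an illustrative side calculation.

```python
import math

# Approximate energy per synaptic transmission, in joules, as quoted in the
# talk (the simplified-model and neuromorphic values appear in the
# discussion that follows).
ENERGY_PER_SYNAPTIC_EVENT = {
    "human brain":                        1e-14,   # ~10 femtojoules
    "analog neuromorphic system":         1e-10,
    "digital neuromorphic system":        1e-8,
    "simplified point-neuron simulation": 1e-4,
    "detailed cellular-level simulation": 1.0,
}

brain = ENERGY_PER_SYNAPTIC_EVENT["human brain"]
for system, joules in ENERGY_PER_SYNAPTIC_EVENT.items():
    gap = math.log10(joules / brain)
    print(f"{system}: {joules:.0e} J/event, {gap:.0f} orders of magnitude above the brain")

# The accelerated analog system runs at 10,000x biological real time, so one
# biological day compresses to roughly ten seconds of wall-clock time:
print(f"one biological day at 10,000x: {24 * 3600 / 10_000:.1f} s of wall-clock time")
```

The output shows the detailed simulation a full 14 orders of magnitude above the brain, the neuromorphic systems sitting roughly in the middle of that gap, and a biological day compressing to about 8.6 seconds on the accelerated hardware.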
Of course you can go to simpler cell models, and then you reduce this to 10 to the minus 4 joules, but the huge gap remains. And this is a logarithmic gap, not just a linear one, so it is a big thing. Right in the middle of this gap sit the existing neuromorphic systems, the digital system and the mixed-signal system, at 10 to the minus 8 and 10 to the minus 10 joules, and we really think that neuromorphic computing in this project will be used to do real science.

Then there is the timescale aspect. There are characteristic timescales in nature: on a real-time system, STDP, the detection of causality, works at the level of milliseconds or sub-milliseconds; plasticity, learning and development reach into days, months and years; evolution takes even longer. In an accelerated system you can compress these timescales, from one day to about 10 seconds, and a system like that, on a large scale and usable by non-expert users, should be very useful for studying dynamic processes like learning, plasticity and development. So these are our goals for neuromorphic computing.

What is a typical workflow in the HBP, using all the features we have in the project, all the platforms? This is just one example, the reducing-complexity example. We start from the data, from the data aggregation, from the integrated data sitting on the neuroinformatics platform; this is the box on the left. Then, in the brain simulation platform, we do circuit building, we run simulation experiments and we visualize the results. Of course it makes no sense just to simulate an isolated circuit; it has to interact with a robotic environment, which is also provided in the project by the neurorobotics platform, and through that you can already study some behavior, maybe get an idea of how cognition works, and then derive theories and computing paradigms. What you can also do is take Henry Markram's very complex reconstructed, reverse-engineered cells and take out complexity: go from many compartments to fewer compartments, maybe even to point neurons with a certain set of parameters. So we reduce the complexity to export the circuit to the neuromorphic platform, which typically runs simpler neurons and simpler plasticity mechanisms. You execute your robotic environment with the neuromorphic system (NCS is the neuromorphic computing system), and then you start to exploit the configurability: these systems are reconfigurable and they can react very rapidly because of the accelerated operation. Then you can tune your circuit, search large parameter spaces and, hopefully, really learn something about the operation. So that is the plan.

Let me come back to more administrative aspects at the end of my presentation. We are always being asked: can I still participate in the project? And the answer is yes, definitely. There are what are called the open or competitive calls. The HBP has defined eight competitive call themes and has provided a budget of eight and a half million euros to be given to new groups that will then join us. These are the subjects, starting from neuroscience and cognitive neuroscience, methods for rule-based clustering of medical data, using neuromorphic computing systems, providing the software for the robotic environments, agents, sensory and motor systems, down to the theory of multi-scale circuits. The dates are very soon now.
You can already read the call texts on the website. The call formally opens on the 1st of October, proposals are expected by November 6th, and the successful groups will be able to start early next year, around April, and then work with us for the next two years; there will be two years left by then. And of course we hope that the groups that join us will also stay with us in the phase that comes after the ramp-up phase.

Are we alone? No, definitely not. We have seen two other activities already in this session, and there are other things going on in the world. There is the INCF, for example; there are projects on computing, like PRACE, DEEP and Mont-Blanc, and many other supercomputing projects; there are projects on neuromorphic computing in the US, such as the DARPA SyNAPSE project, and there are many others. So we are starting to establish, and partly have already established, connections to all these projects which work on themes similar to the HBP.

I will stop by showing a picture of the moon. We were expected to land on the moon, and we were approved on January 20th. A few weeks later, on February 18th, the New Yorker published this funny picture, and we are very glad that this time there will be at least two people on the moon at some point. So my conclusion is that this is certainly an interesting project, and I'm really looking forward to it. It is a large-scale, coordinated effort to understand, improve and exploit the brain; those are the three things we hope to do. I think it is an exciting challenge. Thank you.

I'm interested in a more general question: why are all these projects starting now? Is it scientific drive, or is it politics, or why is this happening now?

Yeah, why is it? That's a very interesting question, actually. I would say the fact that the HBP is starting now has a technical reason: we feel that the tools we need to do what we want to do are only available now, in particular the computing tools and the ability to build new computing architectures. I must say that what we do, my own work, we could not have done 10 years ago. So it is the technology that is there, and I think we are almost forced to do this. There is a lot of data out there, and there will be more data generated, and we have to use this data for the synthesis process. We can only do this now; it would certainly not have been possible 10 years ago. So that is one of the reasons; maybe there are others.

Yeah, that's a great question for all three of us. More than 10 years ago, when the Allen Institute was a gleam in the founder's eye, there were working groups asking what it could do in the area of brain science. Very presciently, they put together a plan of well more than 10 years, a 30-year plan, starting with things that could be done a decade ago. And a decade ago they very wisely chose the gene expression maps of the brain, because the technology was there. They really have a roadmap going through neural computation and large-scale physiology, all the way through behavior and cognition, and to the distant future of cognition. I think the reason we're embarking on this now is, again, that the tools are there, and it is at least threefold. One is Moore's law: the kind of large-scale physiology and anatomy that we're doing requires Moore's law and exactly where we are now, tera- to peta-scale science.
But then we have advances in recording technologies, from two-photon microscopy to large-scale recording technologies. And then, finally, the genetic revolution, the genomics and transgenics revolution. For us, all three of those coming together, with cell types, computation and large-scale physiology, it really is only in the past few years that most of these techniques have become possible.

And it's the ability to handle big data. You made this comparison with astrophysics, and astrophysics, I think, is going through a similar development. Why do we suddenly know so much about the large-scale structure of the universe? It's because we can gather and analyze the data. This is really a new ability, and it is a very exciting phase, I think.

Electron microscopy: every electron microscope on Earth can in principle send about a gigabyte per second of data onto the floor, and only now can we grab a couple of hundred megabytes a second and create petabyte-scale data sets. It's the fast data acquisition and storage, and the computation to manage it.

Thank you for your great talk. These are indeed very exciting and challenging times for a project like this. My question is about the slide with SpiNNaker. Basically, how open are you going to be to technologies like holographic memories and silicon photonics, moving away, as you said, from von Neumann and Turing-machine automata towards completely novel architectures, which in my opinion is going to be the key for breaking the petascale?

I'm not sure I exactly understood your question, but is it whether we are going to use solid-state circuit technologies that go beyond what people use now? That is a very interesting question, and we had long, long discussions in the HBP about whether we should go for very advanced memory technologies, for example memristive memory, resistive memory in general, or magnetic memories, which have very high integration density; on paper it would be very easy on a single silicon wafer to increase the integration density by a factor of 100 or so. The problem is that those technologies are at a very early stage, and if we embarked on this kind of research we would end up producing little demonstrators for the next 10 years, which would be very nice, and we would probably help companies like IBM or TSMC to advance their technologies, but we would not end up with a system that can be used. So we made a very conscious decision, which is criticized by some people, and that is to use mature, even ancient, technologies. Both systems, the analog mixed-signal system and the ARM system, are currently based on a solid-state circuit technology that is 10 years old: 180-nanometer transistors. The reason we do this is that these things work extremely well at the component level, they are extremely well understood, and they are cheap. For a wafer mask set we pay $100,000, which is really cheap. If you go to 65 nanometers, which is a four-year-old technology, you already pay a million dollars just for a single mask set. Don't ask what you would pay for a 20-nanometer mask set; they don't even sell it to you. So the point is that this allows us to make a real-life system that is useful.
It is useful for neuroscience, for example to study plasticity and development in such a circuit in an accelerated way, and it is also useful for deriving new computing architectures. So we really want to work on the architectures and not on the components; that is our decision.

I am also hoping for the success of this project, so I am asking a question. The Human Brain Project may be oriented towards creating a computational model of the human brain. Our institute is creating a very detailed data set from rodents, but not from humans. Do you have any plan to cover this rather big issue?

Can you maybe briefly summarize? I didn't quite get it. You referred to the human brain and the rodent; what is your question? How are you planning to collect detailed data on the human brain? Ah, how are we planning to collect data? Well, we will mostly use the data that is available, but we also have a program, of course, for collecting data. In particular, we need a lot of data to plan the cognition experiments that we want to do. For example, if we have a circuit that operates in a robotic environment in a closed-loop experiment, then we have to define and design experiments that tell us whether this synthetic system is doing something interesting: does it show cognitive behavior? For that, we have a very strong group in one of the sub-projects delivering experimental protocols that are based on human behavior, for example, which we can then match against the experiments we do on the synthetic system. That is a very important aspect where I think data is really missing: what kind of behavior would you expect in a synthetic system so that you can say, well, this is an interesting cognitive behavior? What kind of experiments, what kind of robotics experiments, should one do? That is where the HBP will collect data by itself. Otherwise, we will mostly use the data that is available.

Yeah, as I said, MindScope is some fraction, slightly more than a third, of the Allen Institute. There is a program called Cell Networks, but, very importantly, there is also a program called Human Cell Types, which is closely analogous to the mouse cell types component of MindScope. The human cell types program obviously won't have the in vivo physiology or the neural coding component, and it won't have the MAT (modeling, analysis and theory) component, but it will have cellular biophysics, synaptic physiology, gene expression profiling, and anatomy of human cell types.

A related question: I'm trying to understand how much of the data will come from external brain measurements on humans versus animals. But also, is there going to be a representative human brain, like there was for the Human Genome Project, or something of that type? How are people thinking about this?

Well, it is called the Human Brain Project because the goal of the project is to provide a simulation that is equivalent in performance to the performance of the human brain. It is different from the Human Genome Project because it is not a screening project; it is not a project to gather data. It is a project to use the data that is available, to use it to build models, and to use predictive tools as much as we can to build those models.

Just following up on that: the genome projects and the man-on-the-moon project have very specific measures of success, so one knows when they have succeeded. You say that you want to emulate the performance of the human brain.
So do we have measures, quantitative measures, by which we can tell how it is going? Of course; if you work with the European Commission, you have to define very clearly what your measures of success are; this is what they keep asking us all the time. And we say very clearly that our measure of success is the delivery of the six platforms with the performance we have promised in the proposal. You can read in every detail what software tools will be available, what computing facilities will be available, and what the neuromorphic systems will look like. That is our promise. In a way, again using the comparison with astrophysics, we promise to deliver the telescope to do the research; what we do not promise is the research results. So the only firm promise is that the six platforms will be delivered with the performance described in the proposal.