It's a pleasure to be here, and I hope to share with you a little bit of what we're doing. I will not give you a general talk about the Human Brain Project or the Blue Brain Project; you have either heard those before, or you'll hear them in another place. What I want to talk about is a method that I think is common to these two projects, one we have learned to adopt and which we think complements our approaches to understanding the brain: first and foremost experimentation, then theory, and then in silico neuroscience, where simulation adds another tool to the toolbox for tackling this problem. Now I have to get something off my chest right from the start: you don't have to be a bird to fly. You can very well build something very different that mimics certain properties of flight. What do these pictures have to do with a talk about the brain, in a neuroinformatics context? The brain is an interesting system. It is a physical system, but it is also an information processing system, and it does things. It has features and exhibits functions that we often think of when we want to understand the brain: learning, object recognition, adaptability, taking smart decisions. All of these are properties the brain, as a physical system, exhibits. But we have learned, from the game of chess, from Jeopardy, from other examples, that there are meanwhile engineering solutions, a lot of mathematics, statistical correlations, higher-order mathematics, that can mimic some of these functions, like playing chess, in a very different way.
So we can mimic the function of flight with something very different. Sometimes this might share the same principles, in the sense of why something is lifting and hovering at all, but you might also find the same function realized on different physical principles. The bottom line is that this is perfectly legitimate, and I think it is important to make the distinction: if you want to mimic a function of the brain, you don't necessarily need to look at the brain to do it; whatever brings you there may be legitimate. If, on the other hand, you want to understand how exactly that bird is flying, you have to study that bird. And that brings us back to the brain: if you want to understand how a particular brain works, how it may fail to work, and how it all comes about, you have to study that very system, in the detail in which that system presents itself. Now, a naive way of modeling a system goes like this: you start with a certain type of experiment, you have a hypothesis about what the system is presenting in this experimental condition, you make assumptions about how to capture the experimental outcome, you choose a formalism, a set of mathematical equations, and you parameterize it. Then you check whether the equations describing your experimental system come up with correct predictions about that very system, you try to validate and close the loop, and more often than not you have to tune your parameters, adjust your mathematical formulation, and include another set of mechanisms. That is the typical, naive view of how hypothesis-driven modeling might proceed.
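The loop just described, choose a formalism, parameterize, validate, tune, can be written down in a few lines. This is a minimal sketch, not any model from the talk: the exponential-decay formalism, the data points, and the grid of candidate parameters are all hypothetical, chosen only to make the cycle concrete.

```python
import math

def model(t, tau):
    """Chosen formalism: exponential decay with one free parameter tau."""
    return math.exp(-t / tau)

# "Experimental" observations (hypothetical numbers, roughly following exp(-t/2)).
data = [(0.0, 1.00), (1.0, 0.61), (2.0, 0.37), (3.0, 0.22)]

def validation_error(tau):
    """Validation step: compare model predictions against the experiment (sum of squared errors)."""
    return sum((model(t, tau) - v) ** 2 for t, v in data)

# Tuning step: scan candidate parameters and keep the best-validated one.
candidates = [0.5 + 0.1 * i for i in range(40)]
best_tau = min(candidates, key=validation_error)
```

In a real modeling cycle the "tuning step" is where the loop closes: if no parameter setting validates, you go back and change the formalism or add mechanisms.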
Now what we've seen is that we end up with a lot of these different models and explanations, because people pick a certain experiment, ask how to model that type of observation, and come up with a different set of mathematical equations and assumptions. And as a physicist I can say that physics has been very successful with this approach: this is how we managed to simplify, to extract the basic features of certain parts of our world, and to build an entire system of theory that gives a consistent view of most of the physical phenomena we find. This "keep it simple, stupid" principle, really trying to minimize the description of a system, has been very successful in physics. However, there are some systems, even in physics, where it is not so clear whether a simple hypothesis, a simple explanation, is achievable. Think of the climate, for example. If you really want to know what the water levels in these beautiful canals of Leiden will look like 35 years from today, there is no easy back-of-the-envelope calculation. It will depend a lot on the currents, on the slope of the coastline, and obviously on what happens in the rest of the world. For a complex system like the climate we have to put in a lot of detail. Coming up with individual, small, independent explanations for certain features, how water evaporates if solar input increases, or how the ice will melt, is not enough; we have to put it all together. The point is that some physical systems have so much layered detail, one level on top of another, that we really have to look at the intricate details of the system. And physics has found another way of going about this: there is a term called the ab initio model.
Especially for these complex systems, where you cannot expect a simple explanation of the phenomenon, you start from first principles: you take the fundamental physical elements, put them together with all the little details, and see what emergent phenomena come out of solving these equations. Examples come especially from solid state physics. If you want to know whether a certain material is a high-temperature superconductor, it is very difficult to say a priori whether the material has this property. So people model every single type of atom and every layer, look at how the different quantum mechanical wave functions overlap, calculate the emergent property, and see whether such a system could be a superconductor. It is this type of ab initio modeling that we started to adopt for the brain. Now, when I say ab initio, I really have to say ab initio-like, because we didn't start at the quantum mechanical description of every single atom in our system. We chose a certain level, but very much with this idea of capturing what is there in the brain without any pre-assumed hypothesis about function, starting from a description of the individual pieces. And that is what I want to share with you. We don't call it an ab initio-like model, we call it a unifying model. The idea is that instead of building one model per experiment, you invest a lot in the infrastructure that extracts certain parameters out of the data, you build a single model from individually modeled pieces, and then you validate this model; I'll go into more detail.
So you model the channels, you model the cells, you model the interactions, and then you expose that model to a lot of different experimental situations and refine it in a cycle. Instead of building different models, you try to improve this one model with all the details you find experimentally. Now, if you look at the scales relevant to the brain, it is scary in a way: anything from nanometers to decimeters, which from a physical point of view is nine orders of magnitude of spatial scale. And biological systems in addition exhibit very different time scales, from the lifespan of an organism down to chemical time scales that are maybe on the nanosecond scale. So if you put it all together and want all the scales relevant for the brain, you are looking at roughly nine orders of magnitude in space and eighteen orders of magnitude in time. This is a huge, complex physical system we are looking at. From a computational point of view, as a computer scientist, it is interesting. The spatial axis is what we would call weak scaling: that is something you can get at with a bigger computer. The temporal axis, by contrast, means you have to accelerate time, to get to your solution faster. And that has quite interesting implications, because the spatial side may be the easy part, while bridging time scales of years within a laboratory time frame, so that you can come back within half a day and look at your results, is difficult. That is exactly what climate research, for example, is challenged by: it easily takes two months to run one of their models, so you get one output and can come back to your results two months later. It is very hard to iterate that.
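The weak-versus-strong scaling distinction above can be made precise with the two standard textbook formulas: Amdahl's law for a fixed problem size (the "accelerate time" case) and Gustafson's law for a problem that grows with the machine (the "bigger computer" case). The parallel fraction used below is an assumed example value, not a measured one.

```python
def strong_scaling_speedup(p, n):
    """Amdahl's law: fixed problem size on n processors (bridging *time* scales)."""
    return 1.0 / ((1.0 - p) + p / n)

def weak_scaling_speedup(p, n):
    """Gustafson's law: problem size grows with n (bridging *spatial* scales)."""
    return (1.0 - p) + p * n

p = 0.95  # assumed parallelizable fraction of the workload
# With 1024 processors, strong scaling saturates near 1/(1-p) = 20x,
# while weak scaling keeps growing with the machine size.
s_strong = strong_scaling_speedup(p, 1024)
s_weak = weak_scaling_speedup(p, 1024)
```

This is why the temporal axis is the hard one: no matter how large the machine, strong scaling is capped by the serial fraction of the simulation.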
Computationally speaking, as a computer scientist, this is a great challenge, because it tells us that for years to come we have a lot of work to do. Now, if you look again at these spatial scales, it is not only the scales: there are different abstractions, different types of physics, you might have to consider. There are abstract ways of describing the whole function of the brain. There is an entire neuron-based abstraction, where a lot of tools, including from people present in this room, describe the physics at these spatial scales in terms of neurons. Then you get into what is more chemically relevant, which you can describe with reaction-diffusion mathematics, or you go all the way down to describing how atoms interact with each other. These tools are out there, and people attack the problem at every one of these levels. There is no pre-assumption as to what the right level of scale is; as a matter of fact, all of them may be relevant at the same time. But of course you have to start somewhere. Even if you go to what in computational neuroscience terms is the most commonly used representation, a neuron-based abstraction of the physics happening in this part of the spectrum, there are very different types of representations. You can describe a neuron with a single set of equations, so that you solve two equations to describe its spiking behavior. You can go to a more physically based representation, using Hodgkin-Huxley in a single compartment to describe the membrane. Or you can really model the physics of the different branches of the neuron in multiple compartments, and you can go one step further and model diffusion processes. Once again, there is no right or wrong as to what is the right abstraction.
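The "two equations" point-neuron abstraction mentioned above can be illustrated with the Izhikevich model, which reproduces spiking behavior with just two coupled differential equations plus a reset rule. This is a generic sketch, not a model used in the project; the parameters are the published regular-spiking values, and the input current and simulation length are assumed example numbers.

```python
def izhikevich_spikes(I=10.0, t_max=1000.0, dt=0.1,
                      a=0.02, b=0.2, c=-65.0, d=8.0):
    """Count spikes of an Izhikevich point neuron driven by constant current I.

    Two equations: dv/dt = 0.04 v^2 + 5v + 140 - u + I  and  du/dt = a (b v - u),
    with reset v -> c, u -> u + d whenever v crosses +30 mV.
    """
    v, u = c, b * c          # membrane potential (mV) and recovery variable
    spikes = 0
    t = 0.0
    while t < t_max:
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
        du = a * (b * v - u)
        v += dt * dv         # forward-Euler integration step
        u += dt * du
        if v >= 30.0:        # spike detected: register it and reset
            spikes += 1
            v, u = c, u + d
        t += dt
    return spikes
```

With no input current the model sits at rest and never fires; with a constant drive it spikes tonically. That such rich behavior falls out of two equations is exactly why this end of the representation spectrum is so cheap compared to multi-compartment models.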
But in the end, if you consider the brain as a physical system, you will have to choose this representation according to the biology and physics you actually find. You will certainly start where you have the most data to constrain the model, but there is no final decision that this or that is the right assumption. So think of the mathematics you're using as something volatile in time: you will have to adapt it according to the biology you're describing. For the sake of the argument, though, I have circled this one, the multi-compartment Hodgkin-Huxley description, which I think is commonly accepted as an interesting description, because it links closely to the physical observations you can make: it describes individual types of ion channels, and it describes the spatial properties of voltage and current flow in the different cells. So it allows you to relate what you're modeling directly to the biology you're measuring, for example in a neurophysiological laboratory. Now if you do that and look at the computational cost, to model a single neuron this way you might need a megabyte of RAM and maybe a gigaflop of computational power. If you simply multiply that out by the number of neurons in the human brain, you end up with a computational complexity on the order of an exaflop, which is 10 to the power of 18 computations per second, and 100 petabytes of memory. I am not talking about storage, but about the main memory your computer has to have. And this is just for the Hodgkin-Huxley multi-compartment level; if you go into subcellular detail and add reaction-diffusion, these numbers look even worse.
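The arithmetic behind that estimate is worth writing out. The 10^11 neuron count is my assumed order-of-magnitude figure for the human brain, and the per-neuron numbers are the loose ones quoted above, so everything here is order-of-magnitude only.

```python
# Back-of-envelope whole-brain resource estimate (all numbers order-of-magnitude).
NEURONS = 1e11                 # assumed human-brain neuron count
MEM_PER_NEURON = 1e6           # bytes: ~1 MB per multi-compartment neuron

mem_bytes = NEURONS * MEM_PER_NEURON
mem_petabytes = mem_bytes / 1e15        # 1e17 bytes = 100 PB, as quoted

EXAFLOP = 1e18                 # computations per second, as quoted
# Spread over the neuron count, an exaflop machine gives each neuron
# roughly this compute budget per second:
flops_per_neuron = EXAFLOP / NEURONS
```

The memory figure reproduces the 100 petabytes directly; on the compute side, note that an exaflop divided across 10^11 neurons leaves about 10^7 operations per second per neuron, so the "maybe a gigaflop" per-neuron figure is to be read as a generous order-of-magnitude bound rather than a tight requirement.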
Again, from a computer science perspective this is fantastic news: it simply means I have a computational problem here which will keep us busy for generations to come. The interesting thing is that, coming from very different reasons why people build large computers, there is something called the TOP500 list of computer sites. For the last 30 years or so, people have been tracking what was the fastest computer in the world, in the scientifically accessible public world, not the military world, at any given point in time. There is this red line, and you see the names of certain computers that mark this number. The axis, by the way, is logarithmic, every tick is a factor of 10, and these are the years. What you see is that these computers have shown a steady exponential growth over all these years. It is called the TOP500 because they track not only the fastest computer in the world but also the next 499. So here you see the fastest computer, here you see the 500th fastest, and there is about a factor of 100 difference in speed between them. Interestingly enough, here you see a typical notebook at the same point in time: that is about another factor of a thousand less powerful than the 500th fastest computer in the world. So if you sum that up: at any given moment, 500 organizations have a computer which is a thousand times more powerful than your laptop, and at least one organization has a computer another 100 times more powerful than that. The blue line is the sum of all of these systems.
Now this is all historical data, and if you draw a line through it you can make predictions, which is very interesting. You can do many things: you can see how long it takes for the number one system to become a number 500 system, or how long it takes for your laptop to be as fast as a supercomputer from 10 years ago, for a lot less money of course. What is happening is that this development has reached certain landmarks and hit barriers, and the exaflop, which in the previous slide I showed you is the type of requirement for a cellular-level, detailed, human-brain-scale model, is coming into reach. Now this is not happening because we want it to happen. It is happening because countries like the United States no longer do above-ground nuclear weapons testing; they decided they could stop because, thanks to simulations, they can predict whether their weapons are still working. So now the computers that calculate whether their weapons are still working have become part of the national security agenda, and that drives the development of these machines. We think we can do a lot better things with these computers, so we are very happy that these investments are being made, but we also think this is a very interesting opportunity for biology, and neuroscience in particular, to leverage this kind of computing power that is becoming available in the next couple of years and simply wasn't there 20 years ago. We could not have had this talk, or this level of detail in modeling, 20 years ago, because computing power was very different.
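The "draw a line through it" extrapolation is just arithmetic on an exponential. Under exponential growth, the time for one system to catch up with another is the size of the performance gap, in doublings, times the doubling time. The doubling time below (about 1.1 years, i.e. roughly a thousandfold per decade, which is what the TOP500 history suggests) is an assumed round number, so treat the results as rough.

```python
import math

DOUBLING_TIME_YEARS = 1.1   # assumed: performance doubles every ~1.1 years

def years_to_close_gap(factor):
    """Years until exponential growth closes a performance gap of `factor`."""
    return math.log2(factor) * DOUBLING_TIME_YEARS

# Gaps quoted in the talk: laptop is ~1000x below the #500 system,
# and #500 is ~100x below the #1 system.
gap_laptop_to_no500 = 1_000
gap_no500_to_no1 = 100

# Years for today's laptop to reach today's #1 supercomputer performance:
t_laptop_to_no1 = years_to_close_gap(gap_laptop_to_no500 * gap_no500_to_no1)
```

With these numbers, a laptop catches a decade-old supercomputer in roughly ten years, and today's number one machine in a bit under twenty, which matches the kind of landmark predictions described above.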
We really think of this as an opportunity we can leverage, and that is at the basis of the Blue Brain Project and also the Human Brain Project. Our university indeed acquired supercomputers for us, as well as for other computational science disciplines, and on this map, where I showed you computational power and memory, you see systems installed over the last nine years that now really have the capability for ab initio-like modeling of certain types of brain complexity. This is a system installed in Jülich, which is the largest computing centre in Europe, and for reference, this is the system which is currently the fastest in the world, the Chinese machine; the growth is climbing steadily. That is what we have been using as the basis. Thanks to this development, and thanks to a data set Henry Markram had from about 20 years of neurophysiology in his lab, we decided to attempt a proof of concept of this type of modeling for a certain part of this physical space. Again, nothing says we will never go further down, but this was simply the starting point where we thought we had the most data and the most leverage. So this really goes from channels to circuit physiology, and in the spirit of modeling what is there, not modeling a function but modeling the different pieces, we started out to describe, map out experimentally, and then mathematically model different types and families of channels: potassium channels, and not just one channel but channels with different time constants, calcium channels, chloride channels and so on.
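To make concrete what one entry in such a family of channel models looks like, here is a sketch of a Hodgkin-Huxley style potassium conductance: a gating variable n relaxes toward a voltage-dependent steady state, and the conductance goes as n to the fourth power. The rate functions below are the classic squid-axon forms, used here as a representative stand-in; the project's actual channel models are fit to the recorded data for each channel family.

```python
import math

def alpha_n(v):
    """Opening rate (1/ms) of the potassium gating variable; v in mV."""
    return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))

def beta_n(v):
    """Closing rate (1/ms) of the potassium gating variable."""
    return 0.125 * math.exp(-(v + 65.0) / 80.0)

def step_n(v, n, dt=0.01):
    """Advance the gating variable one Euler step: dn/dt = a(v)(1-n) - b(v)n."""
    return n + dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)

def k_conductance(n, g_bar=36.0):
    """Potassium conductance (mS/cm^2): g = g_bar * n^4."""
    return g_bar * n ** 4
```

Different channel types in a family differ exactly in these rate functions and their time constants, which is why the mapping effort described above catalogs many variants rather than a single "potassium channel".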
There is an interesting resource, Channelpedia (channelpedia.epfl.ch), where we put this and future work, and where all the information, the literature, and these channel models are accessible. Similar to how you map the different channels, you start to look at the different types of cells. This is maybe a somewhat uncommon representation of a cell: on the right-hand side you see a histogram, in x and y, of how the dendrite of this cell type is distributed, and in blue you see the histogram of the axon. So you have pyramidal cells and you have all types of interneurons. And here, I cannot mention it often enough: normally you ask the question, what is the usefulness of putting a bipolar cell or a neurogliaform cell into your model? The honest answer is, I don't know; I put it in because it is actually there. I put these pieces together in an ab initio-like fashion, and then I ask what the emergent property is, for example if I knock that cell out. So what you are seeing here is the result of the experimental mapping, classification, and clustering of the different types of cells, and these are the shapes of cells for the different layers which we put into our model.
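The histogram representation of a morphology described above can be sketched in miniature: collapse a reconstructed arbor into a 1-D density profile by binning segment positions along one axis, here cortical depth. The bin size, range, and the segment coordinates for the two example arbors are all hypothetical illustration data, not reconstructions.

```python
def depth_profile(segment_depths_um, bin_um=100.0, max_um=600.0):
    """Count arbor segments per depth bin (soma taken as depth 0)."""
    counts = [0] * int(max_um / bin_um)
    for d in segment_depths_um:
        if 0.0 <= d < max_um:
            counts[int(d / bin_um)] += 1
    return counts

# Hypothetical example: an axon projecting away from the soma (distal-heavy)...
projecting_axon = [420, 450, 480, 510, 540, 570]
# ...versus an axon that stays dense around the soma (proximal-heavy).
local_axon = [20, 40, 60, 80, 120, 150]
```

Profiles like these, one for the dendrite and one for the axon of each reconstructed cell, are what make the clustering into morphological classes a quantitative exercise rather than a visual one.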
The same logic applies to the electrophysiology, where you see that some cells show adaptation of their firing frequency, others are non-accommodating, others are stuttering. In red you see experimental findings, and in blue the models we created to reproduce these types of firing behavior, and not just the somatic firing: we also put in how the dendritic interplay, for example calcium hot spots in the dendrites of pyramidal cells, can be modeled. That again is possible thanks to the multi-compartment modeling, where we simulate every single compartment, solving about 20,000 differential equations per neuron. You put these different firing types together, and then you get to the question of where to put them. You use stains for where these cells are, you use different types of markers to find where the excitatory and inhibitory cells are, and you use electrophysiological data as well as other markers to determine the proportions of the different cell types. You assemble all the pieces to ultimately arrive at the circuit physiology, which tells you, for each position, which proportion should be which e-type, that is, electrical firing type, and what types of synapses they should have. Putting this all together, first of all it is understood that this is a draft: you will find another type of cell, you will revisit your experimental classification, you will come up with better descriptions of channel models. None of that invalidates what we have been doing, because the principle is that we have invested in the framework to put this all together and to validate, or evaluate, the model. So what I'm showing you here is the status of the first integration
that we've been doing for this piece of somatosensory cortex of a young rat; it is not the barrel cortex but the hindlimb cortex. You see this coming together: how many different morpho-electrical classes there are, the number of intrinsic synapses, the density of neurons. You get all these properties of your circuit, which is the best integrative view of that piece of tissue you can have. But again, it is a working draft: a year from now there will be glial cells, a year from now there will be vasculature, and nothing says we will not put these things in; it is a matter of the right time and the amount of resources. Now, what can you do with that? You can ask all sorts of questions, because this integration forces you to really look at data sets and see whether they fit together. If you have a certain cell density, that will yield a certain amount of tissue, because the cells you bring in have a certain amount of dendritic and axonal arbor, and through the overlap this will form boutons and synapses. Then you can ask questions like: what is the inter-bouton interval? What emergent properties does this tissue have? You can look at the model after you have integrated it and ask what this kind of data integration amounts to. One example we have published is that certain types of interneurons, when they project onto pyramidal cells, show distinct innervation patterns. Every block here is where this type of pyramidal cell receives synaptic contacts, from a small basket cell, for example, or from Martinotti cells, and you can distinguish them, because Martinotti cells innervate the distal parts of the dendrite. If you do that, you can really data mine your tissue model
more carefully. How does this look experimentally? In this histogram you have the post-synaptic view, where you see where on the dendrite this cell type receives contacts from the presynaptic type, and here the presynaptic view, showing where on the axon the contacts are made. You can compare how this looks in experimental pairs and how it comes out of your model, and this is really an emergent property, because we didn't force the model to have this innervation pattern; it comes out of the way you integrate the data. For pyramidal-to-pyramidal connections, you can see that the way we reconstituted the data in the model gives you cell-type-specific innervation patterns; this was published a few years ago. Why is this interesting? Because you can now do this not just for certain types of cells but for any given type of cell. You can ask the question: from which parts of the tissue does a cell actually receive its inputs? You get a micro-connectome of how the cell is connected with the rest of the circuit, so you can really make predictions about what types of pathways exist in this part of the brain model. You can do that for individual cells and then for the entirety, and see what types of pathways you have in this model. And again it is an emergent property, because we didn't force these cells to come together; we used certain principles to decide when they connect, and then you can observe these different outcomes as a prediction. Now another view of this is to think of the data in terms of classical databasing. If you have a big Excel sheet, which in a way is the sheet for
the species and the developmental age of the animal you're looking at, with certain parameters, it doesn't really matter which for the sake of the argument: entities like synapses, connections, cell types. You can measure some of these data with different experimental protocols, but there are some parameters which you will not be able to measure, because it is simply not possible; for example, the exact distribution of an ion channel type across the dendrite of a cell is not easily experimentally accessible. So some of these entries will be blank. What we have been doing with the model is this: we took certain parameters we could observe, individual parameters or principles, and then you realize that if I know the volume and the number of cells, something like the density is not something I have to measure separately. So you can predict certain emergent properties of the model; synapse density, for example, you may or may not have measured, but you can have a prediction. Obviously it is a prediction: that doesn't mean it is right, but it is something you can possibly test experimentally, or you can override it, saying I really think that's wrong, I should put something else in here. So you create hypotheses. In the end you can really think of it as a huge Excel sheet: some things you can measure, some things you can't, and we can use principles to predict certain gaps in this data, expose it, and make it measurable for validation. We think this is useful, and ultimately really necessary, because one of the things this model predicts is that there are more than 2,000 viable types of pathways in this roughly pinhead-sized part of the neocortex of a rodent, and out of these 2,000
pathways, about 22 have been experimentally characterized. It takes about a year to really do that, with patch-clamp experiments, so do you really think we will measure these other 2,181? There is no incentive: no one will get a PhD for this, no one is getting promoted for measuring the remaining pathways. But they are still there; it is data waiting to be measured. In the absence of us measuring them, let's make an educated guess of what these things are, and then override our educated guess once we find that a particular one really is not there and we think it is worthwhile to chase it. In that sense, this type of data integration exposes what data we have and what data we are missing, and it can give you predictions about data we haven't yet measured. Hopefully, with all the data coming online from the BRAIN Initiative, the Allen Institute and others, we will be able to fill more and more of these predictions with real data; the framework, this type of integration technology, is ready to absorb it. So we think this gives us a novel tool. This visualization of the voltage activity of this piece of brain tissue is not the answer to how the brain works; it simply shows you that it is a tool which you can now expose to experimental conditions. You can knock out this cell, you can stimulate that cell, you can block a certain type of ion channel, and you can actually test the emergent properties of this integration of data for this brain tissue. And we have been doing that, and it is quite nice: we can expose this tissue to changes in the bath, for example increase the calcium concentration
and what you see is that such a tissue exhibits very different synchronous and asynchronous regimes, from a completely asynchronous to a completely synchronous type of response. Of course there are very simple coupled networks in which you can recreate the same thing, but here it is an emergent property of this experiment. So you can now look and ask what role a certain ion channel plays in this dynamical state, or what a cell type contributes, and what happens if you knock it out, because it is an emergent property that you can link back to the most basic ingredients you put in, and then make forward, causal chains of predictions of how things change. This is how other disciplines work too; again in engineering, I mentioned the nuclear weapons, where they do the same thing and ask how an aging weapon still works. In this case you are trying to make an educated guess, a prediction through a model, of how a certain type of cell affects the dynamical regime. One of the things we can do with this is make predictions about what the local field potential of this circuit looks like depending on the activity; this is work we have done in collaboration with Christof Koch and Costas Anastassiou. Instead of looking just at the voltage across the membrane, you add up the currents as current sources in the extracellular field, you sum the field in three dimensions from all the cells, and you can then see and predict how the local field potential looks depending on the activity state. Again it is an emergent property, and it allows you, for example, to assess the contributions of the synaptic currents versus the active dendritic currents of a neuron to the local field potential. Now this
really is possible because we built a system of software tools, combining tools that on the one hand were already there in the community (for example, we're using NEURON as the simulator for the current level of detail we're working at), but we also did a lot of work optimizing the way we build cells in a data-driven way. We built systems and software that allow us to put these neuron morphologies together and build circuits, and then put it all together in a framework that lets us run through these cycles, in a way at the press of a button, and re-run everything once we have a new data type. Over the last couple of years we have published a real number of technologies that make this all possible. Now, when I say push of a button, it is not that simple; you still need quite a lot of people to help you work with it, and that is something we would like to get better at, where we think there is a chance to take the next step, and that is in part what the Human Brain Project is about. Now, the Human Brain Project was possible because the EU introduced a new funding instrument for scientific projects: instead of funding something for just four or five years, the idea was that you could apply for funding for up to ten years. This instrument was introduced in the ICT branch of the EU, information and communication technology; so it's not from the medical or biological branch, it is from the technology branch. We applied with a consortium of about 80 partners at the time, and by now we are 112 partners. As a matter of fact, in early 2013 the Human Brain Project was awarded one of the two flagship grants; the other one is in the area of graphene, which has nothing to
do with neuroscience; it's really about bringing single-layer carbon structures to bear. But those two were selected, and we were granted a ramp-up phase of two and a half years, which is still under FP7. By now there has been an open call where additional partners could join; we are now 112 partner institutions across 24 countries of Europe, and we're right now in the phase of coming up with a legal framework for the next seven and a half years, which will happen under Horizon 2020 and which again will have partnering projects. Now, the main point I want to make, and again I will not do justice to describing the Human Brain Project, is to link it to the idea I raised earlier in this talk. In a way, one of the fabrics of the Human Brain Project really is the unifying model. The idea is that, in contrast to what we've been doing so far on the rat, we will focus on the mouse, and we will do the same kind of data-driven, in silico modeling for the human. Obviously the main difference is that for the mouse we will have a lot of data; for the human we won't have the same amount of electrophysiology, and we heard a number of times how precious the anatomy and some of the functional data are. So we will have to learn how to get some of the principles right in the mouse, how they allow us to make predictions about the structure of the human, but also how the two differ. The idea clearly is to use both data sets to come to a first draft of what these brains look like at a certain level of organization, and again, this is not saying we have picked the right level of description. So this really is about integrating all the types of data we can get our hands on; it is a data integration project using these two specimens as the use case. Now there are more parts to it:
obviously you want to understand how all these brains behave in a closed behavioral loop. Because it is a technology project, and because brains are information-processing devices, you want to ask whether there is something to learn about how these cells and systems process information. And of course we're not just interested in the healthy brain: we want to use medical information to understand whether, if we introduce some of the objective biomarkers we've been talking about as variability into our models, some of these models might exhibit certain disease states as an emergent property, which would possibly allow you to simulate some of these disease states. The way the Human Brain Project will do this is by building six ICT platforms, which will make these capabilities available to researchers within the project as well as outside of it. Today I really focused on what's happening in brain simulation and high-performance computing, but there is also the neuroinformatics platform, which is obviously very relevant and where INCF is involved in building things up; we have people here from SpiNNaker in the UK and from Heidelberg on the neuromorphic side; we have people closing the loop in neurorobotics; and then a big part is really about reaching out to hospitals. Now, in terms of timing: this was before the HBP and this is the ramp-up phase. What we can promise is that we can build these platforms and make them publicly available by month 30 of the project. One of our deliverables is to make them available internally at month 18, which is going to be spring next year, and then one year later they will be released for other researchers to actually use, to build brain models, first for the mouse as
I for example showed: to do that on this platform, connected to data sources, and to do analysis and research on the medical informatics platform. This is possible because we are not starting from zero; it really starts from existing initiatives in Europe. The Blue Brain Project I've been talking about is just one of them; there are many others, for example FACETS, BrainScaleS and SpiNNaker in the neuromorphic realm; there is a lot of prior work at CHUV that has gone into the medical informatics work; and INCF has really pioneered the neuroinformatics part. So this flagship, very much in the spirit of how it came about, is meant to coordinate and synergize these initiatives toward a common goal. What will happen after the ramp-up phase, in the phase under Horizon 2020, is that there will be successive improvements to these platforms after they have been made public, and there are certain commitments that we will have first draft models of the two specimens I've been talking about: the mouse, and eventually the human. These will obviously not be the final models of how the brain works, but they will reflect the amount of data we will have integrated by then and the formalisms we can parameterize at that time, which we think will first be cellular and then more and more multi-scale in the end. What I think is exciting about all this is that these platforms, and there is funding foreseen for this, can be used by partnering projects, so that outside scientists can receive funding to do research and leverage the technology we have been developing in these unifying models and the other platforms. So there will be different calls and different possibilities for scientists to join along this journey. Now what I've described is
just a glimpse of how the Human Brain Project works and some of the fabric that keeps it together, but I want to end on what I think is going to be the one real challenge, and that is the unique multidisciplinary effort we're trying to pull off. CERN has shown that scientists can work together in groups on the order of thousands and pull off amazing things. Neuroscience hasn't yet come anywhere close to that number, and it's not only that we don't have the experience; it's also the different types of disciplines, the different colors here. While at CERN you might have mostly physicists and maybe some engineers, here we're really talking about experimentalists, from mouse to human, theoreticians, engineers and computer scientists, and people working in neurorobotics, and the amount of training needed in each other's languages is, I think, really the challenge ahead of us. But I think the brain needs this type of interdisciplinarity, people working together to make a major push forward. We're excited to try it, but it will ask every single one of us to come up with the willingness to go this extra step and collaborate with your different peers in this project. If you want to learn more, there are websites for the Blue Brain Project and the Human Brain Project; there are several people in this audience who work on the Human Brain Project, and we also have Jeff Miller, where's Jeff, back there, he's here; he is working hard on making these platforms consumable by researchers from the outside. So if you want a sneak preview and some discussion of how that might work, he's around for the next days to talk to, and we are here to answer any more questions. Thank you very much.

So what does the validation framework look like when you're actually testing the model? Is it like a
suite of data-driven unit tests that you run every Saturday night, and then you get the scores back and have a meeting and talk about them? And do you then take versions of that, split it up into different models, and try to move up the leaderboard in terms of performance in that suite, or is it more manual than that?

It is very much like a unit test, where you have reference values stored somewhere, and you track all the provenance: first, where the parameters used to build the model came from, and then how this model performs against the validation data, which also has to have a reference of origin, where it is coming from. We do that across different levels: we do it for the ion channels, for the individual cells, and then for pairs, so we actually run thousands of dual patch-clamp experiments in silico to see whether, for example, the statistics and properties of those connections work out, and then you go to the network level, and so on. So yes, that is very systematic, and it is really an investment you can reuse: if you change the model, or you have the next version of the draft model, you can then see how the newer version compares in all these metrics to an older version of the model you built.

So it's hierarchical too: you take the best version of the cell model, and then that goes into being tested at the circuit level, and so on?

Well, the interesting thing is that it's not a pure ranking of what the best model is, because as a matter of fact you have certain data you use to build the model, and you validate at each level, but it could be that suddenly at the network level you no longer have the emergent property. That wouldn't cause us to now do a search
and try to tune the model; it would simply be an outcome that says: okay, your model no longer has this emergent property, and that triggers a scientific question as to why that is. Why did you lose it? It is actually a very interesting question, because it tells you: wait a second, how did I miss something, did I introduce a certain mistake? So it is really part of the method not to automatically choose the best model, but to see how the model behaves and then trigger the next scientific curation action in that process.

In your presentation you took a very strong physicist's point of view on matter. However, the brain is not only a physical system but also a living system, and each cell in the brain has at least the complexity of an enormous production unit, including intrinsic organization, logistics and homeostasis, and a continuous gene expression profile. So if you want to do ab initio modeling, you should start with the modeling of a single cell in all its living attributes and properties. My question is: does your digital model capture this living character or not, and if not, what are the impacts of this kind of limitation on the internal structure and the properties of that cell?

Right, I think it's a good and fair question. First of all, absolutely, I would still consider that even a living thing is a physical thing, a physical system, in the sense that by its organization it has living properties. But I would at least take the standpoint, as a physicist, that I can describe the rules and the dynamics that determine the development of the system, or its actual reaction at a given moment; the living character, I think, still works through physical properties. Whether you capture the gene expression profiles, the result of an evolutionary process, I think that's a secondary question. There are many properties which I presumably will never be able to capture: they might be gene expression
profiles; they might be certain types of microstructure in the human brain which I will never really be able to find, because we will not have the imaging mechanisms to do that. So I acknowledge that there are certain parts of the physical mechanisms of the system which I will not be able to describe ab initio. Whenever I describe it, I mean ab initio in the sense that I can at some level describe the physical processes governing a certain type of plasticity or homeostasis, and that I can model these physical processes to some degree. I will have to be pragmatic about it and choose a level where I can find some data with which to specify them and choose the parameters, and hopefully over time I'll learn more and be able to put more physics in. To me, the question of how to model, for example, homeostasis or plasticity follows the same idea as how I model cells: I have to find which data from experiments we have, and what type of parameters I can extract from them. So to me the dynamics of the system is not really different from the structural part of the system. But I acknowledge that, for example, many developmental processes today are completely under-constrained with respect to our understanding of all the physical processes at work, which is why we chose to have a snapshot model at a certain point in time, and we will have to test, with all the validation suites I described before, whether the emergent properties are good enough. If they're not good enough, we will see that we're missing something and we'll have to dig deeper. So none of this is magic in the sense that I'll somehow come up with a fast bypass around that physics; if a homeostatic principle is the crucial principle for the functioning of the brain, well, we'll have to find it, because then this
way of approaching the problem really simply exposes to you that if you don't model that part of the physics, the system will not behave properly. So that will not be a failure for us, because it will simply show us where we have to put more effort into finding and measuring certain physical properties.

Beautiful presentation. Felix, please look at the gallery; okay, we are up here. Since the HBP is an IT project, I have a sort of computer science question. You made it very clear that there has been a beautiful development in technology, essentially Moore's law and everything growing exponentially, but conceptually nothing has changed: the underlying computational model is the Turing machine. So my question is, what is the underlying model for the HBP? Do you believe that what we are trying to do is build a different Turing machine, or is there actually a different model, and then what is it? In particular, what is the memory model?

I would be very curious about that, actually; I think it is one of the reasons why I am personally interested in this project. I think that for computer science there is an exciting road ahead. I would have to think a bit more to give you a full answer, but obviously the Turing machine, and the classical digital computers that implement a sort of Turing machine and follow the trend I showed you, are very good at what they are doing. They are extremely good at solving differential equations; human brains, actually, are not very good at that. As a matter of fact, I am very glad that we will continue to build classical Turing machines, because it allows me to solve systems of differential equations of ever-growing size. But I do believe that there are other ways of doing computation, which in a way the brain shows us, and then there are quantum Turing machines, so there are other computational paradigms that you can see are
happening. So I would answer that yes, I think there will be, especially, computing paradigms where memory and the operations are not as separated as in digital computers; but I think digital computers are extremely helpful for describing a physical system that has this other computing paradigm. To me, the one is the means to find the computing paradigms of another physical system. Then you can go one step further and build artificial systems that use the way the brain works to possibly do computations similar to the brain, which doesn't mean they are very good at solving differential equations. So if I look at, I don't know, 10-15 years from now, I think we will have hybrid information-processing systems: ones which are very good at solving differential equations, and others which are better suited to coming up with approximate solutions to very real-world, complex problems. Depending on what it is you're trying to solve, your mobile phone will have different types of processors: one doing, say, compression with classical digital computing, and the other trying to interpret your spoken words with a more neuro-inspired processing model.

I don't know if this is on. I have a comment from way at the other end. The brain is a physical system, but at the level of neurons and sub-cellular processes it is also a chemical system, for sure, and I think it's important not to leave that out. Physical chemistry is the way one would model what goes on, but it is chemistry: the memory processes involve chemistry and physical chemistry. So I think it will be important for you to realize that going forward. I squirm when I hear physicists wanting to take over the understanding of the brain, because there's always this tension between chemistry and physics. So I just wanted to say that chemistry and physical
chemistry are incredibly important for understanding, for example, memory processes in the brain.

I actually squirm too when physicists take over and want to explain the brain. I would say that, in my hand-waving, since chemistry at its base is governed by the laws of physics, I would include it; but I fully grant you that the language chemistry has developed is extremely useful for describing what is going on, and I think that is the right way of putting it. So I should revise my language there and talk about the biochemical and the biophysical.

It's very impressive, in terms of computational power, how you can project when you will be able to run these models. But there is another limitation, and that is this immense search space, the parameter space you have to search, for example if you really want to get to function. The question is: is there any hope of having enough observations that you can actually constrain, say, the strengths of the synapses, in particular if there is higher-order statistical structure in them? It's nice how you can show that if you put in certain connectivity primitives of the cells, you can predict other properties of the connectivity, like the density of synapses, and you can test that; but I see another principal stumbling block here in terms of getting to function, which has to do with the limited number of observations we can have about, for example, the strengths of synapses.

I think you didn't imply it, but just to make sure: we don't search. It is not that we are doing an optimization based on a fitness criterion; that loop is at the level of scientific decisions. But I think you're right that there is the possibility that this will never exhibit certain types of function, because maybe in order to do
that, you would have to simulate the entire development of a neural system, with all its epigenetic stimuli as well as the environmental stimuli. It is a possibility that that's how it comes out, and then we would acknowledge it. On the other hand, there are quite a lot of findings on how synaptic strength is related to, and predicted by, other things. Maybe, for example, you can infer it from some of the EM data coming online from connectome projects, to have a pre-configuration of a certain type of brain connectivity; or, on the other hand, we know from our own work how common-neighbor connectivity has a very large influence on the actual weights. And then of course we can put these systems through learning as well: we can simulate the physics of the plasticity and explore how far we get with it. But I do grant you that there could be limitations; I think it is part of the scientific process to see how much of the history of a biological system you actually have to model.

Hello, I have a small question. You mentioned that the first idea is to simulate the entire rat brain and then move to the human brain. Don't you think that's a very big jump to make immediately? Because if I move from Windows to Macintosh there are a lot of challenges I face, but going from rat to human brain immediately, how do you see that? Is it possible, or is that step in the pipeline just too big?

Let me put this right. I think it is a possibility that you would have to model additional biological systems along an evolutionary path to really come closer, and which ones those are is anybody's pick: some say it's the marmoset, others say it's the macaque; you choose whatever you want.
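The common-neighbor effect mentioned in the previous answer can be illustrated with a small toy computation. This is a minimal sketch, not the project's actual pipeline: it builds a random directed adjacency matrix, counts common neighbors for every cell pair, and bins connection probability by that count. In a purely random graph the statistic comes out roughly flat; the reported experimental finding is that in cortical microcircuits connection probability rises with the number of common neighbors.

```python
# Toy sketch (hypothetical numbers): common-neighbor statistic on a
# random directed connectivity matrix.
import itertools
import random

random.seed(0)
N = 60        # number of cells (made up)
P_BASE = 0.1  # uniform connection probability (made up)

# adj[i][j] == True means cell i projects to cell j.
adj = [[i != j and random.random() < P_BASE for j in range(N)]
       for i in range(N)]

def common_neighbors(i, j):
    """Number of cells adjacent (either direction) to both i and j."""
    def neighbors(v):
        return {k for k in range(N) if k != v and (adj[v][k] or adj[k][v])}
    return len(neighbors(i) & neighbors(j))

# Bin unordered pairs by common-neighbor count; record whether each
# pair is connected in either direction.
by_cn = {}
for i, j in itertools.combinations(range(N), 2):
    by_cn.setdefault(common_neighbors(i, j), []).append(adj[i][j] or adj[j][i])

for cn in sorted(by_cn):
    pairs = by_cn[cn]
    print(f"common neighbors={cn:2d}  pairs={len(pairs):4d}  "
          f"p(connected)={sum(pairs) / len(pairs):.3f}")
```

Running the same binning on a measured connectivity matrix, instead of a random one, is what exposes whether the common-neighbor bias is present.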
The main point I want to make is that the interesting thing about the mouse is, first of all, that it is a mammalian brain. Second, a lot of the genetic tools we have available will give us a real amount of insight into principles of how structure, cell types, cell functions, synapses, ion channels, interactions and kinetic rates come about, in a way that I think will not be easily accessible in a higher mammal; I don't think we will do the same type of experiments in a marmoset, for example. So the mouse really, first and foremost, is a model that is accessible and will allow us to learn a lot of principles, and then, through single-cell transcriptomics, the idea would be that you can go from there to a slightly different genome and to how that would express and unfold into certain functions. If the jump to the human is too high, we will find out, but the human is the goal. What we really care about is that research around the world is happening on the human, and we want to position this tool as an additional tool to help us with the research we're doing on the human brain. In principle there is no conceptual problem that would prevent us from applying this type of data-driven model generation to other species; we would have to organize the data sets with the exact same rigor, but we would be able to produce models of those brains as well. So I think it is part of the scientific agenda. I cannot give you any certainty, but there are reasons why we really want to get the mouse going, and there are reasons why we really want the human data, because that's what we care for.
That's what we need to deliver on as a community, in terms of results, and I think the methodology we are applying and developing is really not specific to these two species but could be applied to other specimens as well. Thank you.

So, Felix... Rodney. Very clear talk. The part I find less clear is your use of this word "emergent", which you use very frequently. Can you first of all explain what it is that you mean by that and what makes it special? And then perhaps you could explain what the Blue Brain Project, now in its seven years of existence, brings me towards designing a neuromorphic system that has a specific task. I'm trying to understand what we've learned about brain function at a level where I can use it for design, for the solution of problems. I'm absolutely not dismissing the beautiful collection of data; I want to know what I'm learning in terms of understanding.

Okay, so first, for your question on emergence: to me, emergence is when you model the constituents of a system as well as their interactions, and you get non-trivial behavior that you would not have predicted from the description of the constituents and their interactions alone. I think that's the physically accepted definition of emergence, and that is very much how I'm using it. Now, as to what I have learned from the Blue Brain Project for building a neuromorphic system, an emergent phenomenon that relates to processing... I've been informed that I must stop; there is this reception that we're having. So if we can take that offline, I would greatly appreciate it. Thank you.
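The unit-test-style validation loop described in the first question of the Q&A, reference values with a recorded origin, compared against model metrics across levels from ion channel to network, can be sketched very schematically as follows. All names, levels, tolerances and numbers here are invented for illustration; they are not the project's actual metrics.

```python
# Schematic sketch of a provenance-tracked validation suite
# (hypothetical names and values throughout).
from dataclasses import dataclass

@dataclass
class Reference:
    name: str          # quantity being validated
    level: str         # "channel" | "cell" | "pair" | "network"
    value: float       # experimental reference value
    tolerance: float   # accepted relative deviation
    origin: str        # provenance of the reference data

def validate(references, measured):
    """Compare model outputs against references; return per-metric results."""
    report = []
    for ref in references:
        rel_err = abs(measured[ref.name] - ref.value) / abs(ref.value)
        report.append((ref.level, ref.name, rel_err, rel_err <= ref.tolerance))
    return report

# Hypothetical references and model outputs.
refs = [
    Reference("nav_act_v_half_mV", "channel", -32.0, 0.10, "published-dataset"),
    Reference("cell_input_res_MOhm", "cell", 150.0, 0.15, "lab:exp-0042"),
    Reference("pair_epsp_mean_mV", "pair", 1.3, 0.20, "lab:exp-0107"),
]
measured = {"nav_act_v_half_mV": -30.5,
            "cell_input_res_MOhm": 171.0,
            "pair_epsp_mean_mV": 1.9}

for level, name, err, ok in validate(refs, measured):
    print(f"[{level}] {name}: rel. error {err:.2f} -> {'PASS' if ok else 'FAIL'}")
```

A failing metric, as discussed in the answer, is not fed back into an automatic parameter search; it is surfaced as a result that triggers the next scientific question, and the same suite can be re-run against each new draft of the model to compare versions.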