Yeah, so my background. I want to highlight that first, because I have a little bit of history coming up. I studied physics in Bochum in Germany, which is actually not too far from here. Yes, you see, it's a shockingly long time ago. I did a PhD in an institute for neuroinformatics, long before the field of neuroinformatics was born, and I will comment on that a little bit. Then I spent 13 years in industry at Honda, which is a car or motorcycle company, depending on your preferences. In 2011 I moved to the Blue Brain Project, and currently I'm working mostly on the neurorobotics part of the Human Brain Project. Along the way, basically since my diploma work, I've been working on a neural simulator called NEST, which some of you might know, and I will refer to it a couple of times, because practicalities mean you do something with it.

Yes, I guess I understood your question. You're asking what I was doing at Honda. We were trying to build a visual system, like so many other people in the world, and at one point the management decided: when so many other people are doing this, why are we doing it, and what is our chance of being any better than all the others who have been doing it for the last 50 years? And I said, fair point, I'll go. It wasn't quite like that, but in a way, if there's one unofficial piece of advice I can give to a young modeler, it is: keep away from the visual system. The impact you're going to make is epsilon. There are about 10,000 researchers doing this, and your probability of being visible among these 10,000 is, well, unless you're a genius, probably very low. So that's why I'm not doing any visual system anymore. And you have to hope that others recognize your genius.

Okay, so NEST is a simulator for large networks, and "large" always changes with the next computer generation, so I replaced the number here by "whatever fits into your computer's memory". People like to measure the efficacy or efficiency of simulators by the number of neurons, but that's really an irrelevant number, because what really counts is the number of synapses. Typically each neuron has about a thousand up to ten thousand, some say even a hundred thousand, synapses, and if each synapse eats up a few bytes and a few differential equations, then the synapses actually outnumber the neurons by orders of magnitude. So you don't really worry about the number of neurons that you have.

The specialty of NEST is that it runs on a variety of different platforms: single processor, multi-core, multi-processor, and supercomputers, which nowadays are clusters. On top of that sits a simulation language, which for most purposes nowadays is Python, and I will go a little bit into that. NEST also has a long history, and along the way...

Sorry, does it also exploit graphics cards and similar architectures, or just clusters?

No, it doesn't do any GPU processing. The reason is that you always have to trade off development time against usage time: how long does it take to develop a piece of code versus how long do you actually use it?
And GPUs currently have the disadvantage of being replaced very quickly; the programming model also changes very quickly. Also, developing code for GPUs is, I would say, unstable, in the sense that you make a minor modification and a tenfold speed-up turns into a tenfold slowdown. So it is not something I would like to spend my time on until it has reached a state where you not only have a general-purpose GPU, but also a general-purpose programming model on top of the GPU that doesn't force you to buy into one of the few vendors that exist. There are a few projects where we try to use GPUs, but generally it tends not to be worth the effort currently.

So, as you see, NEST has a long history, and along the way a lot of things happened which I'm not listing here. But one thing, and this is also something for the practicalities: if you want to keep a tool, a piece of software, alive for a certain number of years, you have to make certain decisions wisely. And "wisely" of course is a show-off term for "you happen to be correct in retrospect"; looking forward, you never know. For example, when we started, things like Qt and GTK, which for the Linux geeks among you means the libraries you use to draw windows onto the screen, didn't really exist. Every Unix variant had its own library for putting things onto the screen, and we decided: okay, we are just not doing it, because it's too short-lived, not worth the effort, like with the GPUs nowadays. And that was a good decision, because it meant we could focus on the things that are actually important, and leave the fancy displays to people who like to play with fancy displays. It has stayed like this: now the graphics is done by Python, and they do a much better job than we could. The only thing we added at one point was parallelism and distributed computing, which from the perspective of the user is usually hidden; you don't have to worry about it. You just say: I want to have so-and-so many neurons and so many connections.

The largest benchmark was last year, when we simulated a network corresponding to 1% of the human brain on the K supercomputer in Kobe in Japan. That just means it was a purely random network that matched 1% of the human brain in terms of the number of neurons and number of connections, nothing more. It wasn't doing anything; it wasn't really a brain in that sense. And NEST is freely available, but most of you will know that.

I will now do a jump back in time to catch up on this term "neuroinformatics", and I start with the origin of the word "informatics", which was actually coined by Karl Steinbuch. He was a German computer scientist, but he meant it in very neural terms, because he was the inventor of what is called the Lernmatrix, one of the first associative memory models, in the late 50s. He is kind of the founder of what Carver Mead later called neuromorphic engineering. And this here is the publication, from 1961, in a journal called Kybernetik. Does anybody know what this journal is called nowadays?
It's Biological Cybernetics, and it was very new at that time, just replacing the Biophysical Journal as the prime publication spot for computational work. Anyway, this is the memory matrix here; basically you had little iron rings across wires to store information. And anybody who has followed the press will know there was a recent announcement by IBM: they published their TrueNorth architecture, and in effect they are actually implementing this idea here, also correctly citing it, with modern hardware methods.

So that's the term informatics. Basically, in Germany and many neighboring countries, "Informatik" simply means computer science, nothing more, nothing less. In the UK I found this definition here, from the University of Edinburgh: the study of the structure and behavior of, and the interactions between, natural and artificial computational systems. In the US you get this somewhat vague definition: the application of information technology to the arts, sciences and professions. I wouldn't really know what to expect from such an informatics course. Or: the interdisciplinary study of the design, application, use and impact of information technology. That sounds even more esoteric.

Okay. Then, in the late 80s, many European countries, I can't talk about the Americas, used the term "neuroinformatics" to denote the field that nowadays would be called machine learning: artificial neural networks, self-organizing maps, backpropagation, all these things that used to be fancy at that time. And still there are many universities today that use this name; in Germany there's a number that I know, and some outside of Germany, Zurich for example. And then there is the usage that we are newly adopting in addition to that, which is what the US Human Brain Project adopted, namely the term in analogy to bioinformatics, which is coarsely described as "databasing the brain". So neuroinformatics as such is, I would say, an umbrella term for many things, and we should expect to have to deliver an explanation whenever we use it.

Okay, and now I'm fast-forwarding again, and I want to elaborate on two concepts, two words, we've heard a lot today. The first one is "model" and the second one is "simulation", and what the relation between the two could be. I start with the model. If you Google for pictures of models, you find various different things. And if somebody tells me you can't build a brain model, I would always say that's wrong: you can, and it's actually commercially very successful, you can make a lot of money with these things. They won't recognize your images, but they are useful in some sense. So it's again: all models are wrong, some may be useful.

Okay, to be a bit more serious. There are certain classes of model, and if you restrict yourself to the scientific realm and not so much to fashion, then we can distinguish these categories, which are not exclusive.
You have what I would call phenomenological models, which are mathematical descriptions of phenomena or systems without reference to their constituent parts. I would actually say that all fundamental laws of science are phenomenological models. Take the equations of motion, for example: F = ma. Nobody tells you why this is so; there is no mechanistic explanation in there for why you should have this. This is what particle physicists are struggling with: they are always looking further, for a mechanistic explanation or a principle that is so trivial that you don't need one anymore. But in a way, this is your ideal model, provided it doesn't have free parameters.

Then you have mechanistic models, which are a level up. Basically, if you have a system, you can break it down into mechanistic models, down to the point where you are at the phenomenological level and you don't know any further. And then there are the models where you actually can't look inside; these are the black-box models that Jonathan has been talking about this morning. They are often statistical models, where you basically use random variables and their distributions to describe the behavior of the system. But everybody is well aware that this behavior comes about by some mechanism, so you can't really hope to have discovered a fundamental law of nature there. And often you have a combination of these: you can have a mechanistic model where certain parts are just probabilities or statistical descriptions.

The question, again, is how useful a model is. What are the criteria? I'm saying this a couple of times: I think the most important one is that it describes the behavior of the system as you know it. In a more precise form: it should describe the existing data, the data you put into deducing your model, and it should also describe how the system behaves in new scenarios. Otherwise you would just have a fancy way of saying what you already know. This is usually called generalization, and I will cover these two points in a bit more detail. You would also very often like to gain an understanding, and particularly if you have a very complex system, you're really not happy with a model which describes your system in every situation but is as complex as the real thing, because then you don't really learn anything. Also important: you want to have as few free parameters as possible. A free parameter is a parameter which you have to find by tuning in order to make the model match; I will give an example. And, optionally, the model should provide a mechanistic understanding of the system. If you have found a law of nature, you get away without that.

To have an example: you know the gas equation that Gaute already talked about. You have this little relation here, pV = nRT: pressure, volume, temperature, the amount of material you have there, and the gas constant. In a way, this is a good model.
It describes the behavior of a gas under varying pressures and varying temperatures, but it doesn't really tell you why this is so, and it took Boltzmann a considerable effort to do that. And even there, the solution is so difficult that it doesn't really help you. It has one free parameter, namely this one here, R, but it turns out that the same number shows up in other equations, so it's not really a free parameter anymore, but rather a fundamental constant, like the speed of light. And there's an observation here: this model is certainly wrong, because there is no ideal gas around, but it's useful, because it's a very handy equation that you can use. You also see that there is really no mechanistic understanding; I said that already. So this is just to gauge a little bit the expectations one can have about a model.

So now we go to a pragmatic definition of what a model is. For my purposes it's always basically this: you have some independent variables, which you call the stimulus if you're an experimenter; you have a system; you have some parameters that determine how you change the system; and you have a response, your dependent variables. A model is a function, or a set of functions, which describes your dependent variables as a function of the independent variables and the parameters that you put in. That's the most basic definition of a model one can give, and it doesn't really distinguish between mechanistic and phenomenological; that would all be in the equations you put in here. You can use a framework like the ones Jonathan described this morning, or something else.

An example. The independent variables are the things you can actually vary. You can choose the recording site if you take electrodes and poke them into a brain; you can modify the amplitude of a current that you inject into a neuron; or, in visual experiments, the orientation of a moving light bar, the type of the visual object, its position, its direction of motion, you name it. The dependent variables are the ones you can measure, like the membrane potential at your recording site, the spike rate of a visually responsive neuron or of a whole area, the absorption properties of voltage-sensitive dyes at the cortical surface if you do optical imaging, or the time of a spike relative to the theta phase if you're somebody in the hippocampus looking at rats on a treadmill.

For these you can then build models, and I will go back about 110 years, to this paper here: the original integrate-and-fire model, 1907, Louis Lapicque. He observed the change of the membrane potential in dependence on a current that he applied to a piece of membrane, and then he looked at different models that could describe this charging. Of course, in the end what you want to get out is that you basically have a capacitor and a resistor in parallel, which gives you this exponential law here, rather than the one people had before. These are the observed values and these are the predicted values. Initially they're all similar, but if you look at long charging times, then you have the better agreement with this formula here.
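To make this stimulus, parameters, response picture concrete, here is a minimal sketch of the fitting step: it generates synthetic charging-curve data (hypothetical parameter values of my choosing, not Lapicque's numbers) and recovers the free parameters by least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def charging(t, v_inf, tau):
    """Passive membrane response to a current step: v_inf * (1 - exp(-t/tau))."""
    return v_inf * (1.0 - np.exp(-t / tau))

# Synthetic "recording": assumed true parameters plus measurement noise.
rng = np.random.default_rng(42)
t = np.linspace(0.0, 100.0, 200)                       # ms
v = charging(t, v_inf=15.0, tau=10.0) + rng.normal(0.0, 0.5, t.size)

# Fitting = finding the free parameters that minimize the squared error.
(v_inf_hat, tau_hat), _ = curve_fit(charging, t, v, p0=(10.0, 5.0))
print(f"estimated v_inf = {v_inf_hat:.2f} mV, tau = {tau_hat:.2f} ms")
```

Model selection would be the separate step of deciding that `charging` is the right family of curves in the first place; the fit above only finds the best member of that family.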
If you go to more complex scenarios, you might look at things like this. For example, you inject a current with a noisy shape into a cell, and you observe the response, which is the somatic membrane potential, and you ask yourself: can you explain the response of the neuron based on the noise current that was supplied? You can do modeling on this, and there is a paper where this is all summarized. It was actually a competition, so these data sets were put out there, and the interesting thing is that the neuron model that won this competition was a generalized linear model, something similar, again, to what Jonathan talked about this morning. Also interesting was that no compartmental model actually took part in the competition, which was a bit disappointing, because the original idea was to see, if you really set this type of benchmark, which model would give the best generalization. So the idea was: you were given a certain number of these response traces, and a certain number was held back for testing, so you could actually see how well your model generalized to the unseen data.

This brings us to the steps you need to take if you want to do modeling, because there are two problems to solve. The one is model selection: which set of equations do you think is best to describe the response of your neuron, in this case? The other is the fitting of the free parameters: you have to find the best parameter set β for the model that you have. This is basically the same problem that people have to solve in machine learning, because machine learning basically means that you use a neural network as a model for the input-output relations that should be learned. And they formalized it nicely. They said: okay, your data is always something like this, pairs of x and y, where x is your stimulus and y is your response, your independent and your dependent variables. For model selection and model fitting, you first have to select your equations, and then you minimize the difference between your model output and the actual data with respect to your parameters: you find the set of parameters that minimizes this least-squares expression here. You can do some analysis on this, and there's a beautiful paper by Geman, Bienenstock and Doursat, from 1992 if I'm not totally mistaken, on machine learning and the bias-variance dilemma. The bias-variance dilemma comes about in various shapes, and I will come back to it later, because I'm jumping ahead a little bit here.

But first I want to illustrate this process with another old paper, namely Gerstein and Mandelbrot, and that's the same Mandelbrot who did the little fractal figures that were so popular in the 90s. It's a random-walk model for the spike activity of a single neuron. The idea was: you observe spiking in an in vivo situation, it's very variable, we've seen that, and the question is, can you explain this variability? The model they proposed was the following. This is what we actually observe: this is time, and each of these dashes here is a spike. In statistics this is called a point process. Failures of light bulbs, mining accidents, radioactive decay: all these are point processes, and there is a whole mathematical framework, so let's see if we can use it. So let me introduce you a little bit to this framework. Basically, the idea is that a point process is a set of random points; these t_i here are the various spike times.
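For reference, here is my reconstruction of the two pieces of notation he is referring to: the least-squares fit of the free parameters, and a spike train written as a sum of delta functions with its cumulative count.

```latex
% Data as stimulus-response pairs; fitting picks the parameters beta
% that minimize the squared error between model output and data:
D = \{(x_i, y_i)\}_{i=1}^{N}, \qquad
\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{N} \bigl(y_i - f(x_i; \beta)\bigr)^2

% A spike train as a point process: delta peaks at the random spike
% times t_k, and the cumulative spike count N(t) that integrates them:
s(t) = \sum_{k} \delta(t - t_k), \qquad
N(t) = \int_{0}^{t} s(\tau)\, d\tau
```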
You only know that these are random variables, so from one realization to the next the spikes might be somewhere else, but the firing rate, or the intervals between the spikes, follow certain common rules. This is the common mathematical description of a spike train, by so-called delta functions. A delta function is a function which just has a peak at the prescribed point in time. The alternative way is to look at the cumulative function, where you integrate over this guy here. That looks like this: whenever there is a spike, you increase by a fixed amount, and so on.

And now the idea is that you want a process that describes how you get there. This is the membrane potential equation that people have known since Lapicque: this is the change of the potential as a function of time, this is a decay with a membrane time constant, and these are the inputs coming in. The idea is that you have a certain number of excitatory inputs and a certain number of inhibitory inputs; the effect of each is captured by the J's you have here, and these N's are the cumulative numbers of spikes that the presynaptic neurons produced, so their derivatives again give you these delta functions.

The idea of the random walk is the following. You say: okay, let's ignore the decay here; let's assume this time constant is very large compared to the timescale of the signals coming in. Then what we get is something like a random walk, and a random walk is basically this game here: you take a coin and start tossing it, and you're making bets. Whenever it's heads you earn a dollar, and whenever it's tails you lose a dollar, or a gulden or a euro. And then you ask yourself: after n tosses of the coin, what are your cumulative winnings? This process is a random walk, and people have looked at it immensely, because you can make money with it if you understand it correctly and go to the casino.

Well, basically this is a mathematical definition, but random walks have the interesting property that they are amazingly irregular, and that is what inspired Gerstein and Mandelbrot to take them as a source of variability in the neuron. What you see here is the number of tosses, and each of these curves is one realization. So let's say you have a hundred players playing this game; then this would be player one, player two, player three, player four, player five, and this is the average. The first thing you notice is that none of the players actually spends any time near the average: you're either completely losing or completely winning. What you also notice is that the number of times you actually cross or reach the average decreases with time; it becomes less and less likely that you will break even again. So never play this game. This is illustrated here in this red curve, which is the standard deviation across the population, and that is a monotonically increasing function; it goes with the square root of the number of coin tosses. This is called the property of long leads: you don't revisit the average, and it's a very striking effect which is quite counterintuitive.
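Both effects he describes, the square-root growth of the spread and the first-passage intervals, are easy to check numerically. A small sketch of my own (not from the talk); note that I give the walk a slight excitatory bias (p_up = 0.6) in the second part so that the mean interval is finite, whereas the symmetric walk has heavy-tailed first-passage times.

```python
import numpy as np

rng = np.random.default_rng(0)

# The coin-tossing game: 100 players, +1 for heads, -1 for tails.
n_players, n_tosses = 100, 10_000
wealth = rng.choice([-1, 1], size=(n_players, n_tosses)).cumsum(axis=1)
print("std across players after n tosses:", wealth[:, -1].std(),
      "~ sqrt(n) =", np.sqrt(n_tosses))

# First passage to a threshold 32 steps up (the absorbing barrier),
# as in the Gerstein-Mandelbrot simulation, with a slight upward drift.
threshold, p_up, n_runs, max_steps = 32, 0.6, 2_000, 2_000
walk = np.where(rng.random((n_runs, max_steps)) < p_up, 1, -1).cumsum(axis=1)
hit = walk >= threshold
intervals = hit.argmax(axis=1)[hit.any(axis=1)] + 1  # first crossing per run
print("mean 'interspike interval':", intervals.mean(), "steps")
```

A histogram of `intervals` gives the skewed interval distribution he shows on the slide.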
If you read Feller, which is a very nice introductory statistics book, he writes: suppose that a great many coin-tossing games are conducted simultaneously, at a rate of one per second, day and night, for a whole year. That's quite fast. On average, in one out of ten games the last equalization will occur before nine days have passed. So basically, after nine days you've seen the last equalization, and then for the rest of the year you will be completely in the plus or completely in the minus. In one out of twenty cases the last equalization takes place within 2¼ days, and in one out of a hundred cases within the first two hours and ten minutes. It's amazing how counterintuitive that is.

This model makes predictions about the intervals, because in a way, every time you cross the line here, in the language of a neuron that would be a threshold crossing, to a first approximation; I will come to the catch. You can then work out the interval distribution, which turns out to be like this. This was one of the first computer simulations, because at that time even this you couldn't really do by hand. The model of course was a bit more complex, because the threshold wasn't at zero; they had a finite threshold, which was here 32 steps above the start. You will see these are all numbers which are powers of two, to make it feasible on a computer. They had 10,000 runs, which was a huge number at that time. What you're doing here is basically a first-passage-time problem: you watch this particle until it hits a wall, which is called an absorbing barrier, then you go back to the start and renew, and then you count the interval statistics. So this is the number of steps you have to wait until you reach the threshold, and this is the distribution you get out, which is a gamma distribution, and you can actually fit it very nicely to the experimental data they had at hand. So the conclusion at that time was: yes, the random-walk model is a nice description of the variability of neural firing.

This is actually really important right now in behavioral neuroscience and systems neuroscience, because people are talking about how decision variables in a binary decision process are or are not fit by this model, and there must be 20 papers in the past year or two by substantially good algebraists dealing with this. So if you want to look at something that's completely modern, go back and read all this old literature, because it's all about first passage times, the time to get to the barrier, and whether animals really obey this or not. And I think the original hardcore analysis of this was by Einstein and Schrödinger, on Brownian motion.

Ah, yes, yes. And the reason Einstein did it was to prove that molecules exist; this was 1905, and people still didn't believe there were molecules.

Okay. There is of course the one catch: we neglected the time constant, or made it infinitely large. What turns out is that the standard deviation gets bounded as soon as you have a finite time constant, so that limits the variability to a certain degree. And what we also don't learn here is why the inputs to the neuron are themselves random processes, as we have implicitly assumed. That is more for the balanced model that Gaute talked about.

Okay, so the next step, a similar case, is compartmental neurons, and I would like to use compartmental neurons to come back to this bias-variance dilemma.
That's a very fancy term, but it has very practical and also very counterintuitive effects. Compartmental models, just to repeat: this is a reconstruction of a neuron here, and for anybody who wants to see the synapses, they're actually sitting here, these little things, and the neurons are really covered with them from top to bottom. Typically the workflow is: you take a neuron, fill it with dye, and then under the microscope you can reconstruct it. Then you try to find segments which have roughly the same diameter, and these are translated into this equivalent cable model, where each of these sections might be turned into one of these cables. That works under certain assumptions which typically fit very well; there's a nice body of literature showing that you can very nicely predict and describe what's happening in individual dendrites. Then you turn it into this type of compartmental model, and you put your Hodgkin-Huxley equations on top, to do the action potential.

But the Hodgkin-Huxley formalism did much more: it provided a mathematical framework for expressing the ion channels that you can put in. And the interesting effect of ion channels is that every ion channel you add to your membrane equation increases the complexity of your model description of the neuron. The more ion channels you have, the richer the dynamics of the equations are. That has interesting implications, which I will come to.

So, first of course you need the morphology of the neuron. Then the electrical parameters attached to each compartment, which in some cases you can measure and in many cases you can't. That's why the old, very old, meaning 80s, compartmental papers that you read suffer from what I call the Noah's Ark problem: basically you have two parameters from each species. The biggest unknown nowadays is the number and the distribution of the ion channels across the cell membrane; this is in a way a free parameter. It's very hard to measure, in particular if you have many cells. You can do it maybe for one, if you're very tedious; it will take your lifetime, but you certainly can't do it for an entire brain. And the model selection process is, in a way, finding the correct configuration of ion channels in each segment, each compartment, because that determines the number of equations that you have, and the number of equations, I told you, determines the richness of the dynamics.

To give a very simple example: if you strip away all the ion channels, you just have the passive membrane left. You add a few ion channels, they give you a leak. You add a few ion channels, you get a membrane potential and an action potential. You add a few ion channels, you might get bursting; you add another few channels, and you might get h-currents or whatever. There's a whole literature about the various effects that ion channels have. So that increases the capacity, the richness, of your model, and this is in a way a bit of a problem.
Because if you happen to have a model that is too complex for the data that you have, then you're not necessarily better off, quite the opposite. In mathematical terms: you have different models m1, m2, up to mn, and the diameter of the circle here illustrates the richness of the dynamics you get; the more you have, the richer it gets. And you wonder: when is it time to stop? There's this very old guy, from Oxford I think, who said "entia non sunt multiplicanda praeter necessitatem", which means you should not multiply things unnecessarily. That was Ockham, William of Ockham. Then there's this smart guy here who said things should be made as simple as possible, with the catch: but not any simpler.

As a theoretician you would like a more formal description of that, so we jump to a very simple example of the problem we are facing. Let's say these dots here are data points that you've measured from some neuron. You can make one model, this straight line. It will basically miss most of the points, but if you add a new point, it probably won't miss the new point much worse than the other ones. So you're a little bit bad on everything, but not too bad. And you can have another model, which is very rich; basically it's a high-order polynomial. It will hit each data point exactly, zero error, but you can be sure that the next data point will be missed greatly. That tells us that the true solution is probably somewhere in the middle, but the question is where, and whether there is a way to actually determine where it is. This is the origin, the core, of the bias-variance dilemma. In the one case you're putting a lot of bias into your model, a lot of knowledge, which gives you this one here, and in the other case you have some variance left. You have to trade off between the two; you can't have it all. The one is called underfitting: your model has too few degrees of freedom to describe the available data points, so you get a large error on the data that you're given. The other is overfitting, which means you have a beautiful reconstruction of the data you put in to make your model, but you will be appalling on anything new; you won't generalize at all. It's actually not a good model.

You can show mathematically that there is an optimum somewhere. You have two errors. One is called the empirical error; that's the error your model makes on the data you put in to create the model. And then there is your actual error; that's the error your model makes in the field, when it is exposed to new situations it hasn't seen before. And this here is the model capacity, the dynamical richness of your model. It turns out that both errors initially go down, and the empirical error, as we've seen, eventually goes to zero. But the funny point is that the actual error eventually goes up again, and it can become arbitrarily large. What's happening is that effectively you're modeling the noise that you have in the data or in your system. Or rather, you're not modeling the noise, you're just being nice to the equations, in a way, like in the polynomial case.
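This straight-line versus high-order-polynomial trade-off takes five minutes to reproduce. A sketch with made-up data, which also already uses the hold-out idea that comes next: half the points are locked away for testing, and the empirical and actual errors are compared for three model capacities.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of an unknown smooth function (the stand-in for our neuron data).
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.3, x.size)

# Put half the data "into the safe"; build the model on the other half.
train, test = np.arange(0, 20, 2), np.arange(1, 20, 2)

for degree in (1, 3, 9):   # underfit, reasonable, exactly interpolating
    coeffs = np.polyfit(x[train], y[train], degree)
    emp = np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2)  # empirical error
    act = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)    # "actual" error
    print(f"degree {degree}: empirical {emp:.4f}, actual {act:.4f}")
```

With ten training points, the degree-9 polynomial hits them exactly, so its empirical error is numerically zero while its actual error blows up; the straight line is mediocre on both; the middle capacity wins on the held-out half.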
In machine learning this was realized some 10 to 15 years ago, and there is a solution, nowadays called support vector machines; they are built around finding this optimum. That is of course not possible if you don't really know the equations, so you can't come up with an optimal equation system for our case. We have to find other methods, and these are typically called cross-validation; if you look into the statistical literature, you will find a lot about this. Basically, you take your data set, which is sparse to begin with, and cut it into two halves. The first half you put away into a safe and never look at; the second half you use to make your model. Once you're happy with your model, you take the first set of data out of your safe and you see whether your model is really as good as you think it is. Basically, you're evaluating again these least-squares errors here; a good model should minimize both the empirical and the actual error. You're probably missing the optimum, but at least you can be sure that you're not making anything worse by having too many parameters in your model.

Okay, so, a summary of this first part. A theoretical model is a formal entity, usually expressed in words and equations. A good model helps our understanding. Many models represent an ideal that cannot actually be reached by the physical system; that's a point that cannot be overstressed, and it makes such a model useful in some cases but useless in others. For example: who knows the Carnot process? Who knows what that is? Okay: it's a theoretical model for heat pumps, refrigerators, air-conditioning systems. It's a beautiful model that tells us certain limits of these machines, but if your refrigerator is broken, this model doesn't help you at all. And, jumping ahead a little: if you talk about the Human Brain Project and trying to simulate an entire brain, it makes a difference whether you have a generic understanding of how visual processing works, or whether you know exactly why this particular animal was blind. That's the difference between an abstract ideal model and a physical instance; I'm coming back to that. But these ideal models are nonetheless useful, because they show the limits of the non-ideal physical system. Every physical instance will be worse, so they give an upper limit, in this case on the capacity of a heat machine, on computing, on whatever you can do with a gas. And all models have a limited range of validity and must be checked against the physics of the original physical system. Or: all models are wrong, but some are useful.

Okay. Now, after talking a lot about models, I'm coming to the question: what is a simulation? You can hardly see it, but this here is actually the column from the Blue Brain Project, simulated. We are using, I am using, the words model and simulation almost interchangeably all the time. But anyway, I took the trouble of looking up the word "simulation" in the Oxford English Dictionary, the definitive record of the English language.
So if they can't tell us... Here: the action or practice of simulating, with intent to deceive; false pretence, deceitful profession. Or: the technique of imitating the behaviour of some situation or process by means of a suitably analogous situation or apparatus, for the purpose of study or personnel training. This second one, I think, is our meaning here, and probably not the first.

Okay. So this is a model of flight, in this case of lift, and this is a simulation of it: you put it into a computer, you make it alive, you evaluate the equations. You can also do it by hand; it takes a lot of time, but that is also a simulation. I'm not sure whether we can switch off the lights here to make this a bit more visible. David, can you maybe switch off the lights? Perfect. So you see here, this is actually a reproduction of the cockpit of a B747, and what you see out of the windows is the flight simulator. This is of course also a model, but it's a very different one, and both are useful. This one certainly doesn't tell you why the plane flies, but it flies at least convincingly enough that a pilot gets scared if he has to exercise in it. And this one here will give you some understanding of why it flies, even though whatever you have learned about why a plane goes up is probably the wrong explanation.

So, simulation versus model. I stole this from the Wikipedia entry on computer models, but I liked it. You have a real system and you make a model, and then you have a model system, and the two kind of stand side by side. Now you can perform simulations, get some results, and compare them, to improve the model. You can also perform experiments on the real system, get results, and that can go back into improving the model. In parallel, you can go ahead and make theoretical predictions; actually, that could also be done on this side here, so you can use real data and simulated data to make your theoretical predictions. If you think carefully about it, a lot of the debate about the usefulness or uselessness of large-scale models largely turns on the question of whether they are useful for making theoretical predictions, and that is of course a question that nobody can really answer yet.

But just on the terminology here: a computer model refers to the algorithms and equations, so we are already a step behind the model, I will come to that, and the simulation is actually executing this with a piece of software. The workflow is something like this. You have your abstract model: you write down your equations. Then you make a computer model, which means you have to translate your equations into algorithms and data structures. That is a non-trivial process, and it is almost completely independent of the programming language you're using, the compiler you're using, or the system you're using; it's really plain old computer science. Then you have to implement the algorithms in your favorite programming language, and that's another step.
And then you have to run the program, check the results, and probably start all over. What I want to highlight a little bit is where in this process errors can come in, because it is very easy to mistake whatever you get out here for a result of this one here, without ever looking back to check whether you actually implemented this model or something completely different.

To give an example, we start with two neurons. This is the equation here for an integrate-and-fire neuron; we've heard about it. This is our membrane potential, this time called u, for fun, no real reason, and these are various currents which go in. This here is the synaptic current, and that relates to the question we had earlier: you subsume everything that happens at the synapse into a little function here, the postsynaptic current, which you might write as a simple delta function, in which case the postsynaptic potential will look like this, or you can have more elaborate things; there's a lot around. And if this is your presynaptic membrane potential: you inject a current, you charge the membrane, and it discharges once you switch the current off; we've seen this already. If you reach the threshold value here, you produce an action potential, a spike. This dash, the spike, is actually just cosmetic: I painted it on, it isn't in the equations, the equations don't do this. Then you get two spikes, until you switch off the current, don't make it anymore, and decay back. And these two spikes will look like this in the second neuron. So let's assume this is our example brain, and we want to simulate it. You have these equations, and now your task is to turn them into a computer program.

If everything works out, I should be able to show the little program here. Where's my mouse? So, this is a piece of C++ code that actually does this. You have a lot of numbers here: these are our membrane potentials, the synaptic currents are, stupidly, also called u, and there's an external current called I. And then down here is the main function.
These are all the parameters of my model, and then, if I go a little bit down, which I can't... if I go a little bit down, I have my main loop here. Basically, this is doing the whole logic: this thing is evaluating the equations, and this thing is doing the transport of the spikes, and everything. But it's very difficult to see, of course, because a lot of thinking has gone into it. Basically the equations are solved here. The first step was discovering that this type of equation can be solved analytically, so you don't have to do numerical solving of differential equations; you just have to multiply by a constant factor, and that does the job for you. Here you compare the membrane potential of your neuron against the threshold, and if you hit the threshold, you reset the membrane potential to zero and you tell the other neuron that there was a spike. And there must be a buffer that keeps the spike while it is traveling along the axon, because there is a finite delay, and you somehow have to take care of this delay. So this thing is a few lines of code, but the disadvantage is: if I now change my little example a little bit, basically all of this will change as well. That's why specialized tools have come around, which are called simulators, and which do this for you. So this was our ad hoc version.

The computer model, as I said already, is implemented in C++; we've seen this. And what we have to realize is that generally a computer model is a lossy representation of your original model, because computers can't really compute; this is one of their big faults. Just try to add up 0.1 sufficiently often, and you will see that it very quickly diverges from the correct answer. Also, any numerical solution of a differential equation has errors. This here is an easy case, because it's analytically solvable, but in the case of the Hodgkin-Huxley equations, for example, you will always have an error in your numerical solution. And sometimes it's not only the error: whether a system of equations is stable or unstable can also depend on the type of numerical solver that you are using. The best example, which became kind of famous, was in climate research, where the equation system is mathematically stable, but as soon as you discretize it to simulate it on a computer it becomes unstable, which means that the solutions you get, the numbers, go to infinity, positive or negative, which is something we typically don't observe in the real climate. In the 70s somebody developed a little fix for that, but it took twenty years to figure out why the fix worked. Which is also interesting.

Does NEST use numerical integration? It depends on the type of neuron model you're simulating. NEST has many different neuron models, and every neuron model that can be solved exactly is solved exactly. The one problem that always remains, and this holds for every simulator, NEURON, GENESIS or MOOSE as well: whenever it comes to sending a spike from one neuron to the next, you have to detect a threshold crossing, and this detection of the threshold crossing is very prone to errors, because you can't really afford to do a bisection and find it precisely; you rather get a little interval.
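To make the three ingredients he just named concrete, here is a stripped-down sketch of mine (not the code on the slide; all parameter values are arbitrary): the exact propagator, so the equations are advanced by multiplying with a constant factor; the threshold test with reset; and a ring buffer that holds a spike while it travels along the axon.

```python
import numpy as np

dt, tau, R = 0.1, 10.0, 10.0               # step (ms), membrane tau (ms), resistance
threshold, delay_steps, w = 20.0, 15, 5.0  # threshold (mV), delay in steps, weight
prop = np.exp(-dt / tau)                   # exact propagator: no numerical ODE solving
u = np.zeros(2)                            # membrane potentials of the two neurons
ring = np.zeros((delay_steps, 2))          # buffer for spikes travelling on the axon
spike_times = []

for step in range(1000):
    i_ext = np.array([2.5, 0.0]) if step < 700 else np.zeros(2)  # current into #0
    slot = step % delay_steps
    u = u * prop + ring[slot] + R * i_ext * (1.0 - prop)  # advance the equations
    ring[slot] = 0.0
    fired = u >= threshold              # threshold test: only resolved to within dt
    u[fired] = 0.0                      # reset after the spike
    if fired[0]:
        ring[slot, 1] += w              # arrives at neuron 1 delay_steps later
        spike_times.append(step * dt)

print("spike times of neuron 0 (ms):", spike_times)
```

Note the comment on the threshold test: with a fixed step, the spike time is only known to within dt, which is exactly the detection problem being discussed here. And if you change the example even slightly, all of this machinery changes with it, which is his argument for simulators.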
And sometimes this interval can be very large: if you imagine a trajectory that very smoothly approaches your detection threshold, then tiny errors will move you a long distance on the time axis. This large error in the spike time is then of course propagated: a recurrent network typically has chaotic dynamics, and that means one spike moved by a millisecond will change the network trajectory considerably.

Is there an actual fix for that, and is it implemented?

Well, you can do certain fixes, and they will improve the accuracy, but it's a fundamental mathematical problem of root finding, and you have to trade off between accuracy and speed. If you have a million neurons to care about and every spike takes you a millisecond to determine precisely... well, we're getting there.

But do you also think that in the brain small fluctuations could trigger the threshold, trigger the spike?

In the brain it certainly does. But there is one thing: you very often hear the argument that you don't have to be precise with your simulation or your model because the brain is also imprecise, and this is a very bad argument, because noise in the brain is very often unbiased. Unfortunately, mathematical errors don't do you this favor: they are not unbiased, they give you a bias in a certain direction. For example, a spike might always come a little bit too late, and never also a little bit too early. And very often your error comes in fixed multiples of your integration time step, and then you will quickly see these types of biases.

Wouldn't adding an unbiased source of noise, like white noise, maybe fix part of the problem?

Maybe, but you would probably have a long way to go to prove that this is indeed the case.

But if we have a model, can't we just try to validate it, and not, at least not in the first step, actually fight this?

What you can do is make theoretical models that make certain assumptions of this sort, and then you can use them for validation. But typically the rule is: if you have an equation, try to solve it as well as possible. That's the safe bet. Otherwise you would really have to show that whatever you're doing is not introducing any artifacts.

If you still remember the code, you will see that the model, these two neurons and the synapse, was only implicitly represented in the code; most of what you saw actually did something else, like initializing parameters and running a loop. So, the corresponding NEST version is this one here; I don't have to switch. Basically, you create two neurons; this here is the name of the neuron model, there's a naming convention which I'm not going into. There is a tool that helps me measure the membrane potential, so I can nicely visualize it. Then I have neuron one, which gets an external current of a thousand picoamperes; I have a voltmeter, and I tell it how often to record the membrane potential. I connect the two neurons, I connect my voltmeter to my neuron, and I let the time run for a hundred milliseconds. So here you still have a few lines, but you see what's happening, and then you get the nice tool. The advantage here is also that with basically one line you can change the neuron model you're looking at, whether you want to have the Izhikevich model or a Hodgkin-Huxley model or whatever.
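Reconstructed from his description, the script looks roughly like this. The call names follow the public PyNEST API (`iaf_psc_alpha` is NEST's standard integrate-and-fire model; the synaptic weight is my guess, and exact signatures vary between NEST releases):

```python
import nest
import nest.voltage_trace

# Create two leaky integrate-and-fire neurons.
neuron1 = nest.Create("iaf_psc_alpha")
neuron2 = nest.Create("iaf_psc_alpha")

# Neuron 1 gets an external current of a thousand picoamperes.
nest.SetStatus(neuron1, {"I_e": 1000.0})

# A voltmeter records the membrane potential at the given interval (ms).
voltmeter = nest.Create("voltmeter", params={"interval": 0.1})

nest.Connect(neuron1, neuron2, syn_spec={"weight": 20.0})
nest.Connect(voltmeter, neuron2)

nest.Simulate(100.0)                     # let the time run for 100 ms
nest.voltage_trace.from_device(voltmeter)
```

Swapping `"iaf_psc_alpha"` for, say, `"izhikevich"` or `"hh_psc_alpha"` is the one-line model change he mentions.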
It's still the same surrounding code; it's just the one or two lines that give you the model.

I want to make a stronger case by looking at a more complex example, namely this one here, or a model that describes it. It's a model for what is called delay-period activity. These are recordings from a monkey, not from a model. These are three seconds: the monkey is given a cue, then some signal, then there's a delay period here, and then it has to respond. This neuron here increases its firing rate at the cue, you can also see it in the individual traces, and then the firing rate stays elevated for roughly two seconds. There is no stimulus present whatsoever during this period; the neuron just decides to stay up, and once the monkey has responded, it goes down again. The question is: how does a network actually do this? How can you switch a network on and off? There's a number of papers that have investigated this. The idea is that you have a recurrent network, and this recurrent network can change its firing rates. The standard model that has been proposed is this one: you have two populations, an excitatory population and an inhibitory population, where population means the neurons are basically doing the same thing, and there's some background activity, so our populations are sitting somewhere in the larger brain and are bombarded by spikes. That's a model put forward by Amit and Brunel roughly 10 to 15 years ago. If you look at this system and play a little bit with the ratio between excitation and inhibition, which is subsumed in this factor g here, you see that the network goes through various states, and one of them is what is called the asynchronous irregular regime. There are other regimes which are mainly oscillatory or very sparse, depending on this. So that was very interesting, and it has become something like a standard model for network dynamics, based on this original question.

The question is: can you reproduce this? The model is 10,000 excitatory neurons, two and a half thousand inhibitory neurons, round about 15 million connections. The network description is a few sentences; it's very compact in English. And I can come up with an ad hoc version in Python or in C++; I think I have the Python version here. It is this one. Again, you have a lot of variables which I initialize, and this is our main loop here, and it doesn't really look too much different from the other one. Only, it takes a huge amount of memory, it takes a huge amount of time, and it doesn't even produce the right result. I spent the morning trying to figure out what the error is, but I couldn't, because this type of code is not really debuggable. And this is coming from me, right? Anyway, the corresponding NEST version, which of course I will not take away from you... Well, okay, so this is my statement here again: the ad hoc version runs actually very slowly, I was surprised how slow it is. It also took a lot of memory some time ago; it doesn't nowadays. But the conclusion is that if you're inexperienced, you're likely to choose the wrong algorithms and the wrong data structures, and that is also the reason why the ad hoc version was so bad.
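Before the walkthrough of the actual script, here is roughly what such a PyNEST version of the network looks like. This is my own sketch, using NEST 2.x-era call names; the background rate and delay are placeholders, the calibrated numbers are in the Brunel example that ships with NEST:

```python
import nest

nest.SetKernelStatus({"resolution": 0.1})          # simulation step in ms

n_exc, n_inh = 10_000, 2_500
J_exc, J_inh = 0.1, -0.5        # PSP amplitudes in mV, as in the talk
delay = 1.5                     # ms, placeholder value

exc = nest.Create("iaf_psc_delta", n_exc)
inh = nest.Create("iaf_psc_delta", n_inh)
noise = nest.Create("poisson_generator", params={"rate": 20_000.0})  # placeholder
spikes = nest.Create("spike_detector")

# Each neuron gets input from 10% of the excitatory and inhibitory populations.
nest.Connect(exc, exc + inh, {"rule": "fixed_indegree", "indegree": n_exc // 10},
             {"weight": J_exc, "delay": delay})
nest.Connect(inh, exc + inh, {"rule": "fixed_indegree", "indegree": n_inh // 10},
             {"weight": J_inh, "delay": delay})
nest.Connect(noise, exc + inh, syn_spec={"weight": J_exc, "delay": delay})
nest.Connect(exc[:100], spikes)  # record a subset of the excitatory neurons

nest.Simulate(1000.0)
```

The point of his demo survives the sketch: Brunel's sentence "each neuron gets input from ten percent of the excitatory and inhibitory neurons" maps onto a one-line connection rule.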
So this is the NEST version here. It's roughly the same number of lines of code, but again you can actually read it, and see for example here that you're creating 10,000 of these neurons, not one like in the previous case, and these are our inhibitory neurons. You have a Poisson generator, which creates the background spikes; you have a spike detector, which reads the spikes; and these are our connections here, which have different strengths. The excitatory connections have 0.1 millivolts and these ones here minus 0.5 millivolts; that is part of the model. And then you connect them, and the textual description is actually roughly the same. Now you can do this with the new release; otherwise you would have to do it in Python. Here I just use a fixed weight, because that is what the model calls for. Again, I say Simulate, and then I get this output here.

Just to give you an idea how long that runs: this is here my script, and I should be able to just... this is the ad hoc one, let's take the other one. I can just call it here, and so now NEST is loaded, and it connects, and then it simulates. These are 15 million connections, and now it plots the results, and they show up on this screen here. In principle, you can then also inspect the data that you get out of this. And just for the fun of it: if I start my ad hoc version here, it never comes to an end; it takes 10 minutes or something like that before it goes anywhere. Of course, you can make a C++ version that would be faster.

Okay, so how long did it take you to write that and debug it in NEST?

That's difficult to say, because this is an example we've kept around for 15 years now. I remember writing it; actually, I reverse-engineered it from a C version that somebody gave me, and I wrote it on the train from Frankfurt to Göttingen, so it took me like three hours. But that involved reverse-engineering it from the C code, which was a much larger piece, because it was distributed code, and then I cross-checked it with the original reference. But it's relatively straightforward, because the connections are the difficult part, and basically Brunel says each neuron gets input from ten percent of the excitatory and of the inhibitory neurons. That's a sentence, and it's basically exactly what we've done here. The syntax was a bit different at that time, but basically it's a one-liner.

Okay. So again, the NEST version is more explicit, because it uses domain-specific commands; you can relatively easily change the different parts, and the useful thing is that the common infrastructure, like buffers and numerical solvers, is basically hidden from you.

There is a question back there. Could you please run a second simulation? It's still running... you mean run two simulations in parallel and compare the results? I'm not sure what you actually mean. I mean, let's do two different simulations and compare the results. No, it will always be the same result, because what NEST does is take control of all the random number generators. And NEST has a batch of tests that run through and check that the results are identical between runs, actually to machine precision. I will come to that a little bit.
I will come back to that point. But, as I told you, the C++ version has a bug; we can look at the result, but the firing rate that comes out is not correct. And I don't know why; I looked at it for a few hours until I got fed up.

Okay, so: simulations are lossy representations of your models, and you should always remember this. If you have a fancy equation and you put it into a computer system, it turns into something rather nasty, and there are several sources of errors. You can have semantic errors in the computer model, of various types. For example, you simply could have chosen the wrong numerical algorithm for your equation, and it turns out to be unstable under certain configurations; if you look through the NEURON code, for example, there is a huge number of specific tests that check that the solution is not running away. There could of course be programming errors in the implementation, which you probably never make. There could be numerical errors, simply due to machine precision, and these accumulate in a chaotic system. And then there could be errors in the system behavior, which result from the first three points. My experience is that we as humans, and I'm not excluding myself, have the tendency to invent stories about why beautiful pictures are right, before we question the correctness of our results. I've seen this over and over again, and I'm still seeing it: a student or a co-worker comes with a nice picture, we say, yeah, this is so because..., only to figure out later that we just had an error in the code.

So: cross-checking, unit testing. Unit testing here really means that you find little control cases where you know exactly what is supposed to come out. They might be as trivial as this: for example, if you have a neuron model and you inject a finite current which is subthreshold, then for simple neuron models your membrane potential should saturate at some value, and that value you can typically compute. That would be one test. These are all just little punctual tests, but in accumulation they help you get better. NEST actually has a battery of, I think, now almost 300 tests that run through and test the most important models and features, and these tests are often written so that they work for a range of different simulation step sizes, because a wrong sampling is also one of the critical points. NEST was then accordingly also praised here: for many cases NEST was equivalent to the reference up to rounding errors. This was shown in this comparison here of different simulation tools.

And I'm putting up this warning simply because errors in software can have severe effects. This is a paper from Science, reporting that five Science and Nature papers had to be retracted because of errors in self-written simulation code. It was actually very bad, because the papers that had the errors prevented the publication of other papers, which were probably correct. So one has to be very careful here.

So this is the kind of final picture on this part. You have your real system, your experiment, and you construct a model around it, and then you construct an implementation. There are two steps here which are crucial: the one is the model building step, and the other is the computer model building step, the simulation building step. They are separate things, and many people like to put them into one bowl.
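His saturation example from a moment ago, written out as a unit test. This is a plain-Python sketch of the idea, not a test from the actual NEST suite: a subthreshold current must drive the membrane to the analytically known value R*I, for a whole range of step sizes.

```python
import numpy as np

def simulate_lif(i_ext, tau, R, dt, t_max):
    """Exact-integration leaky integrator (no threshold); returns final potential."""
    prop = np.exp(-dt / tau)
    u = 0.0
    for _ in range(int(t_max / dt)):
        u = u * prop + R * i_ext * (1.0 - prop)
    return u

def test_subthreshold_saturation():
    tau, R, i_ext = 10.0, 80.0, 0.1      # chosen so R*i_ext stays subthreshold
    for dt in (1.0, 0.1, 0.01):          # the test must hold for all step sizes
        u_final = simulate_lif(i_ext, tau, R, dt, t_max=200.0)
        assert abs(u_final - R * i_ext) < 1e-6, f"dt={dt}: {u_final}"

test_subthreshold_saturation()
print("subthreshold saturation test passed")
```

One such punctual test proves little; three hundred of them, run at every change, are what keeps a simulator honest.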
Yeah, so when people talk about model sharing in the INCF context, they usually mean simulation models of some sort, and this is another issue. Simulations are computer implementations of models, and this translation process is always lossy, because computers have finite precision whereas mathematical objects typically have infinite precision. So in that sense a simulation is a model of a model, and you have to validate it against the original model. That is difficult.

Okay, that brings me to another point, which is some remarks on reproducibility, which in the context of the INCF is very important. There are many people, even in some publications, who mistake reproducibility for rerunnability. That is not the same thing. Scientific progress comes about by Barry giving me a result, and by me redoing the experiment myself and getting the same result. It does not mean that I go to his laboratory, use his tools and his equipment, and get the same result. And model sharing very often has this implicit idea of: I give you my software, and then you can just start over. But this always carries the danger of simply propagating the errors that others have already made. So if you rerun simulation code, this is not reproducible research. It just means that your code is running; it does not mean that the model implemented there is actually correct. If you really want to reproduce a simulation result, you have to go through the painful step of taking the theoretical model and doing your own implementation, or you have to use very well validated tools that do this for you. So simulation code may be useful, but only if it is used to reimplement rather than rerun the model. You can use it as a guideline, and maybe as a sanity check, but not as the real starting point.

This also relates to code sharing when it comes to simulation. I mean, how often do you read your own code, and how good are you actually at reading your own code? Most code, as we know, is not shared for the simple reason that the authors are embarrassed by it: you have written your Matlab, Excel, IDL, or Python script, and you find that it works, but you find it too appalling to show to anybody else. It is a little bit like the little cabinet in your bathroom where all the cleaning materials are: you never let anybody look in there. It is probably the same here.

Okay, so that concludes my talk. I am almost on time. I would have five slides on the Human Brain Project, but only if there is time and interest; otherwise we can keep them for the forum tomorrow. Thank you.

Let me re-emphasize what Marc-Oliver just said. It may be even worse in experimental labs than in computational labs.

I am not glad to hear that.

No, no, it is just a horrible problem: people doing experiments that are too complicated or too difficult to think about reproducing. And I work at a place where I have to justify even small things like this. One of the statements I make all the time, because the funding agencies are terrified of duplication, is to put in a justification that says I am doing this so that I stay up with the state of the field and that we are not going to unnecessarily duplicate previously done experiments. I am very careful to put in the word 'unnecessarily'.
I was just about to say: it is so critical to reproduce, and frequently when we do an experiment, we build in a reproduction of some previous experiment, either our own or someone else's, because you have very little to compare to if you do not do that. And it is a disaster how many times it does not work. This is unfortunately a very hot issue. I think politically there is going to be a lot of pressure to come up with standards that make it more likely that we can all reproduce, not just resimulate, our results. But the pressure is always going to be there not to duplicate what has been done already, and those two things are going to end up in a dynamic tug of war all the time.

What are the differences between NEST and GENESIS and NEURON?

Okay, I will start with NEURON and GENESIS. NEURON and GENESIS are simulators for compartmental neuron models, and as the name NEURON indicates, it was actually written to simulate one neuron. NEST sits a level above, and the idea was always to simulate large networks, where the effects of the individual neuron morphologies are largely washed out. Now, in the context of the Human Brain Project, these levels are going to be connected. But the thing is, at the network level you can do different optimizations if you restrict yourself to single-compartment models, and that is why NEST is faster for networks, even at the same level of abstraction. On the other hand, if you wanted to do compartmental models with NEST, that would be extremely tedious, even if it is possible. So they all have their realm, but they sit at different graining levels of the brain.

Okay, should I run through these few HBP slides, or is that something for later? Okay. In fact, there is one slide which, in my view, says it all, but it is always difficult to explain, and in retrospect I am not happy that the word 'data' appears here. The idea is basically: from data you make a model, then you validate the model, and then you look for new data. But 'data' is a very abstract term, and in particular with big data and data mining it has acquired a funny flavor: it suggests a bunch of numbers, and then you have some magic tools which turn this bunch of numbers into a goldmine of some sort. That is not the data that I mean here. The data I mean is actual data that describes a system; in fact, this box should be labeled 'system description'. For example, if I give you this room and I want to fill it with furniture, you will take a tape measure and start measuring the lengths of the walls, tell me where the windows are, where the electrical lines and the plumbing are. That is the type of system description I am talking about: numbers you can obtain about the brain that describe it along all the measurable dimensions. I am not talking about data which I cannot yet interpret; that is a different set of data.
So for example, think of the first talk from today, where we had these nice image slices that tell me where the synapses and the neurons are. That is the type of data I am talking about: data that quantitatively describes what is where in the brain. There are many data sets like this out there, but they are not really used, and one of the basic questions you can ask is: if you take all these types of quantitative data, can you build a computer model of the brain from them, and how far do you actually get with what you have? Because one of the biggest problems we currently have in neuroscience is that we do not really know what we know and what we do not know. The known unknowns and the unknown unknowns, as a famous American defense secretary used to say, are completely unclear.

So in a way it is a data integration project. The typical question is what a good metaphor for it would be, and many people in my lab say: you want to build a microscope for the brain, so you can look deeper into it, or a telescope, or whatever. I think that is actually a bad metaphor, because it expects an outcome only once you are done. Whereas if you are in the cycle, you get an immediate outcome, namely: you learn what you do not know, in the sense that you can always see where your model fails. And in that sense, a bad model is always better than no model.

The first attempt where this was actually done was in 1496... 93, I am lying here, 1493. This is Behaim's Erdapfel, the first globe ever produced. I have an actual photo of it. It took together all the different maps that people had drawn and tried to arrange them on an empty sphere. It is called 'earth apple' because the word 'globe' did not exist yet; it was just about to be invented. And this is the modern version that we have, Google Earth. There are some 500 years in between, which is also something to get our expectations right.

So, Martin Behaim did this: he integrated basically all available knowledge. This is the real thing; if we switch off the light, we can maybe see a bit more at the top. It stands about this high. One of the biggest tasks was actually taking these maps and aligning and registering them to a common coordinate system. That is the same thing you have to do with neuroscience data nowadays. And if you look at it, there are other pictures on the web if you are interested, it is filled with encyclopedic information about what you would actually find in each place. Most of it was wrong, because nobody had ever been there, but it was still quite useful. And if you look at this one here, you will see: here is Ireland, England, this is Spain, Africa, and this here is Cathay, an old name for China. And somewhere over here
we should expect Japan, and there is a big void in between. At that time it was actually known that there is something in between, namely the Americas, but it was not known whether it was just an island, a few islands, or a whole continent, so it is simply not put in here. But nonetheless it was a useful exercise, because it really delivered a consistent view of the knowledge of the earth. And what we can expect from the HBP is a consistent view of the knowledge of the brain, where you basically fit each piece of information into a coherent global picture. The first version of course cannot be perfect. In the case of the globe it took 500 years to arrive at something like Google Earth, and we are still not done. But at least we know how far the knowledge stretches.

So the strategy is to build a tool chain that uses descriptive data to reconstruct the brain in silico. You could see it as a living brain atlas that stretches across various levels, where you can make the different parts alive in terms of a simulation. That also means that on the structural level you have neurons, and maybe you have connections, depending on how much data you get. But the interesting point is that the type of models you use to populate, or to animate, the individual cells and synapses is of course not fixed. You can change them, and that is a way of hypothesis testing that otherwise is not possible. And since you start with a relatively low complexity, you are relatively sure that you are not overfitting, and any further data can improve the model.

The artistic rendering you get from this is this one here; this is actually a video. This is what we got out of the Allen Brain data, and I use it here to illustrate what we actually want to do. You take data at various levels from various sources, and then you build a voxel-based database where at each position you know exactly what is inside. Why do you want it voxel-based? Because you can turn it into a simulation model. Then you can additionally label it with semantic information, which sits on top: for example, how many cells you have per unit of space, how many of those cells are neurons and how many are glia, how many fibers you have, and so on. And all these data sets are, at least when it comes to space, mutually exclusive: if there is a neuron, there cannot be a fiber in the same place, because they would have to take up the same space; if it is a neuron, it cannot be a glia cell, and so on. So whatever data sets you combine, they will mutually constrain each other, and you can relatively quickly detect when you have arrived at a nonsensical configuration. So you can use cell densities to create, or to populate, a brain model. This is down-sampled here, and the color code is the annotation of which brain region we are talking about.
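To make the voxel idea concrete, here is a minimal sketch in Python of what such a mutually constrained voxel record might look like. All field names, units, and the consistency rule are illustrative assumptions on my part, not the actual HBP data model.

```python
# A hedged sketch of a voxel-based brain database entry. Everything here
# (field names, units, the plausibility bound) is an illustrative
# assumption, not the real HBP schema.
from dataclasses import dataclass

@dataclass
class Voxel:
    region: str             # semantic annotation: which brain region
    neuron_density: float   # neurons per mm^3
    glia_density: float     # glia cells per mm^3
    fiber_fraction: float   # fraction of voxel volume occupied by fibers

def check(v: Voxel, max_cells_per_mm3: float = 2.0e5) -> None:
    """Combined data sets constrain each other: cell bodies and fibers
    compete for the same space, so implausible combinations are flagged."""
    if v.neuron_density < 0 or v.glia_density < 0:
        raise ValueError('densities must be non-negative')
    if not 0.0 <= v.fiber_fraction <= 1.0:
        raise ValueError('fiber fraction must lie in [0, 1]')
    if v.neuron_density + v.glia_density > max_cells_per_mm3:
        raise ValueError('more cells than physically fit into the voxel')

check(Voxel(region='somatosensory cortex', neuron_density=9.2e4,
            glia_density=4.1e4, fiber_fraction=0.3))
```

The point of the exercise is exactly the mutual-exclusion argument above: once several independent data sets write into the same voxel, a nonsensical combination fails loudly instead of silently propagating into the simulation.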
Then, at least for some cells, where it is relevant, there are relatively good models to synthesize the morphologies; for others, you can actually take reconstructed morphologies from experiments. There are various studies that tell you how at least certain regions are connected, from DTI or tracer injections, and then you can have a very coarse simulation of this, at least at the level of point neurons; I will show a small sketch of this in a moment, after the questions. So this can be done relatively quickly. And then the next step, which is also interesting, is to actually observe the brain in its natural habitat, that is, to put it into an animal or a virtual animal. This is what the neurorobotics part is about: to have the brain also attached to a body, with somatosensory input, whisker input, optical input, and so on. So this is just a rough outline. In a way, it is a tool to integrate data and to see where new data is actually needed. That is the shortest summary I can come up with. Thank you.

Maybe just real quick: what is NEST's role in the Human Brain Project?

NEST is the tool that will be used for the point-neuron-level simulations.

So is it a single tool, or will there be others?

No, the strategy is to be multi-scale. At the lowest level, I think, are MCell and STEPS, then comes NEURON, then comes NEST, and I am not sure if there is something above; I cannot recall.

Most of the data comes from either rat or mouse. But if you look at the long term, comparing with the human brain, we may realize that what we saw is not happening in the human brain.

Yeah, that is a very common comment. From the perspective of this tool chain, the HBP is completely agnostic with respect to the species you are looking at, because all you have is space that you populate, and then you create a model. Initially the mouse is convenient because, (a), there is a lot of data, and, (b), it is small. Even if we wanted to, we could not do a human brain at that level right now. It is simply not possible, because of the amount of data you would need and the amount of computation time to simulate these types of models. The idea is that you sharpen your tools using the mouse, and then you redo it with human data. But it will not be straightforward.

So how will that go? Can you predict something, like a pipeline?

From my perspective it will probably be much easier by then, because the imaging techniques will have improved a great deal, but exactly how, I am not able to say right now. If you consider the genome project as an analogy: the first few sequences took forever, and then suddenly the tools improved so much that you could sequence within a few days what had taken months before. And if you look at the advances in imaging that we have, what is happening there is tremendous. For example, we already have the BigBrain data set from Katrin Amunts in Jülich, which is very high resolution, but currently it does not really make sense to use it, because it would simply outsize any of the equipment we have and would artificially drag us down. So, at least with the methods we are using now, it makes no difference whether you use a mouse, a monkey, or any other mammal, as long as you are not talking about insects.
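As promised above, here is a minimal sketch of how region-level connectivity data, for example probabilities derived from DTI or tracer injections, might be turned into a coarse point-neuron network in the NEST 2.x Python API. The region names, population sizes, and probabilities are invented for illustration and are not from any actual study.

```python
# A hedged sketch: coarse point-neuron populations wired up from a
# hypothetical tracer-derived connectivity table (NEST 2.x API). All
# numbers and region labels are invented for illustration.
import nest

nest.ResetKernel()

sizes = {'VISp': 1000, 'MOp': 800}   # neurons per region (assumed)
# source -> target connection probabilities, as a tracer study might give:
p_conn = {('VISp', 'MOp'): 0.05, ('MOp', 'VISp'): 0.02}

pops = {region: nest.Create('iaf_psc_alpha', n) for region, n in sizes.items()}

for (src, tgt), p in p_conn.items():
    nest.Connect(pops[src], pops[tgt],
                 {'rule': 'pairwise_bernoulli', 'p': p},  # each pair connects with prob. p
                 {'weight': 0.1, 'delay': 1.5})           # weight in pA, delay in ms (assumed)

nest.Simulate(100.0)
```

The design point is only that the connectivity data enters as a plain table: swapping in a different study, species, or parcellation changes the table, not the tool chain, which is the species-agnosticism argued for above.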
In this last picture that you showed, with the eventual simulation of the mouse by the robotics department: what you are developing right now is for simulating the brain, the CNS, and I wonder whether any steps are being taken to also model the peripheral nervous system, and then the integration with the mechanical part, the environment.

So, maybe I was not clear. This tool chain is completely agnostic with respect to which part of the brain is done; it is the entire brain. The only problem is that in this particular data set from the Allen Brain, the eyes are chopped off, so we have to artificially reattach the eyes, and there is also no spinal cord. This is actually a big problem, because, to put it the other way around: suppose you have a complete brain model, but it is completely deafferented. What are you going to expect from it? It is very difficult. But it is an interesting question, and it also highlights a bit of cortex chauvinism: we know a lot about the cortex, but we know very little about most of the other structures. If I talk, for example, to the neuroprosthetics people or the spinal cord people who want to do spinal cord injury recovery, they complain that not enough people are actually looking at these structures, because everybody wants to understand cognition, when we do not even know how to reinstantiate walking.

And considering that, is the project not already too ambitious?

How can a project be too ambitious?

What I mean is: right now, tackling the problem of the mouse seems like a wiser decision than going straight for the human brain, because of the dimensions. And considering that we still need to take into account the peripheral system and even the environment, would it not make more sense to go to even lower-order organisms, like C. elegans?

Yeah, but these animals are completely different. The nice thing about the mouse is all the genetic manipulations that you can do. Also, if you talk about the peripheral nervous system and so on, it is a model that is used for spinal cord injury and neuroprosthetics, so in that sense it is actually a very nice model to work with. And the connections, for example spinal cord models, I am really confident that you can validate them relatively nicely, because there is a lot you can measure along the way: the muscles, the spinal cord, the neuromuscular junction, and so on. So it is difficult, but again: every model you get will be wrong. The question is: do you learn something from the model? That is the real point, and one has to emphasize what you actually learn. I think the biggest benefit at the current stage of our field is that these types of models should guide us to where the next experiments should go, because this is one of the biggest failures of current computational neuroscience: it does not guide experiments. It is in a way very selfish, because it is solely focused on getting an abstract idea of visual processing, or whatever processing. And now Barry has to jump in.

All I was going to say is that I agree with you. There is not enough communication. And on the other side of the coin, the experimentalists, as a rule, are not very keen to spend their time answering a question just because a simulation came out a particular way, or because a model says something.
I mean, the experimentalists have their own ideas. There is a real cultural separation here that somehow needs to be overcome. There is a lot of effort, and there are a lot of places where you see things changing, but when you examine them carefully, they are not changing as much or as quickly as I would have thought. It is a real problem. I think Jonathan wants to wash my head.

It is a bit of an open-ended question, and maybe some of it is better left for dinner. But in the first part of your talk you made some very important points about overfitting and underfitting, and about stability, for example when you simulate a differential equation in finite time steps: that it is really important to be in control of these things, and that one has very good theoretical tools, still not perfect, but theoretical tools, for knowing when you get into trouble. So I think a lot of people are worried: when one does a large-scale simulation with large numbers of parameters, including unknown parameters, how do you keep control over overfitting versus underfitting, network size effects, robustness to getting the model exactly right versus, to use the technical term, structural stability?

I think this is very important, but one has to learn how to deal with it, because one of the biggest problems we are currently facing is the following: the brain is an integrated system. It solves many, many tasks, sometimes simultaneously, sometimes one after the other. Any given model of whatever brain function you think about is an isolated model, and we have as yet no idea how to make all these models co-exist in a single system. Even if we had perfect models of vision, audition, touch, you name it, we would not be able to reconcile them in an actual brain, because Occam's razor tells us to leave out anything that is not part of the task we are looking at. So this question of how you populate a model that is bound to overfit, in a way that does not overfit, somehow the brain solves that task, and my only answer is: I do not know how to do it. We have to be aware that this problem exists. And in this project we must get away from the idea that the results we are getting represent scientific news, in the sense that we have discovered a phenomenon, rather than that we have probably discovered another mismatch with reality. Experiment, of course, is the ultimate judge of what the model is doing. So it is not that a particular simulation result will, in a positive sense, guide an experiment, but rather that a particular negative simulation result will make us aware of data that is urgently needed. That is my current prediction; of course, if you talk to me a year from now, it might be completely different.

Well, I think these are really important issues, and there is also an important sociological aspect. Leiden is also known, at least that is what I have been told,
as the first university in Europe, or anywhere, to get a dedicated professorship in theoretical physics; before that, there were only professors of physics. In physics there has been this division of labor between theory and experiment for four hundred years, because people realized it is just too difficult for one person to do both. So in a typical physics department you now have maybe fifty-fifty: people doing what we call theory, which is not the same as modeling, and people doing experiments. They interact very tightly and have similar educations, but they nevertheless specialize in different things, because the techniques are different. In neuroscience, I would guess maybe 1% of the scientists could be called theoreticians, and I personally think that this really has to change.

But it is also a sociological experiment how this will play out. Probably some of you have heard about this discussion around the Human Brain Project, which is interesting for many reasons. But I think it is also interesting that the first time I ever heard of somebody starting a petition against another project, it was against this minority project, against these new techniques.

There was a very similar petition against the genome project, which was actually much larger, because it was a mass letter to the representatives.

But anyway, the point is that not only is this a scientifically hard problem, it is also sociologically a very hard problem. There is this story about the people who started quantum physics: they were asked, well, how did you convince all these people trained in classical physics of quantum physics? And they said, we did not; they just died out. It is very difficult for people to change their opinions, and I think it is going to be the same in neuroscience with these new techniques. The people dominating the field and deciding what should be funded generally do not know about these opportunities, do not know informatics. As Obama put it, this is what change looks like; that is sort of what it looks like in neuroscience, too. That is why I think it is very important for the new people coming up, like you, to be more open and to have the background to think about these things.

You said it is a minority project, and that is actually an interesting point, because the Human Brain Project gets one billion over ten years, while the neuroscience funding in Europe is nine billion per year. Now, the petition said we should distribute the money from the HBP to traditional research funding, which would hardly make a difference if you do the math: one billion over ten years is roughly 100 million per year, so you would basically be adding 50 to 100 million to nine billion, about a one-percent change. The US
has the same problem. The NIH alone gives between five and a half and six billion a year to neuroscience, and they were talking about a hundred million dollars; that is not even two percent.

Another interesting thing: a quote that I heard from a famous cognitive neuroscientist is that we have to do new experiments because we cannot trust the old data. But if you really digest this argument, it means: why should you do experiments at all? It is a time-invariant argument; you could put it at any point on the time axis, which means that next week you will also say that the data you produced last week is not trustworthy. And if that is so, then something is fundamentally wrong in the way we either produce data, or treat data, or design our experiments.

Sorry, I want to ask a question that is not related to the cost. Are you connected to the human brain mapping project?

Human brain mapping? No. That is the American version, right? It is not the European one. I am working at the Blue Brain Project and in the HBP; my main working field is the Human Brain Project.

Yes, because there was criticism of the Human Brain Project from some neuroscientists. I would like to know your position.

Well, part of my position we just discussed. The other, maybe quicker, answer is the following. The EU wanted to have a high-risk, visionary project. There were 26 original contestants, then six contestants were left after a first round of two years or so, then after another two years four were left, and finally two were selected. So now the EU has two high-risk, visionary projects, and after six years of going through a public competition, why complain now about the project being high-risk?