Thanks for inviting me to what I think is a very interesting and potentially important workshop. I think we all agree on what a final theory for the brain should look like, even though we are quite far away from it: we need to integrate and bridge different levels of detail, all the way from molecules up to behavior. I think the people in this room also agree that this knowledge has to be stored in mathematical models somehow, and that such a model cannot just be a large collection of molecules put into a big simulator; you have to do more than that. If you ask what the final mathematical theory for the brain should look like, it must involve some kind of multi-scale approach, where you describe the system with a set of interconnected models which together span the relevant temporal and spatial scales. We have worked on this in our group, with our main focus on early sensory processing in the visual system and the somatosensory system, and I have identified a few key challenges we have met, which I think are quite general for this multi-scale approach in biology. First, we need to develop this interconnected set of models at different levels of biological detail, what we call multi-granular modeling, bridging spatial and temporal scales. Second, modeling what you can measure: you need to make connections from what you model to the things you can actually measure. This is something I think has been under-emphasized in computational neuroscience altogether; people have been modeling spikes and not much else, but spikes are difficult to measure, at least for many neurons at once. Third, a key headache: modeling when you don't know all the numbers, meaning we don't know the parameters.
I used to do semiconductor physics, and there we had four parameters, something like the electron mass, Planck's constant and a couple more, and they were known to ten digits. Here we really don't know much about the numbers at all. This is what I call our inherited sin: we don't talk about it as much as we should, because we don't really know how to deal with it, so we prefer to leave it alone. But it is something we have to address. Then of course, fourth, if you have these different candidate models, eventually you would like to use them to find out which model best explains this brain part, and for that you need to compare models with experiments. How do you do that? It's not easy, particularly when you don't know all the numbers, and it's also not always easy to make the connection between the models and what you measure. The fifth point is what I call modeling tools and hygiene, meaning actually doing reproducible computational science. I'll go through these elements, using work from our own group as the starting point. There are two main model systems we have focused on. One is the whisker system in the rat barrel cortex. As some of you know, the rat has this fantastic whisker system where each whisker has its own principal column in the cortex, which makes it a conceptually and technically less complicated system to study. Marcel is going to talk a lot about this later. We have also looked at the early visual system in mammals, in particular the LGN, the processing in thalamus and the thalamocortical loop. But these systems are not really the focus of this talk; they are mainly where the examples come from.
This talk, and our work, goes from the neuron level up. All the multi-scale things going on below the neuron level are not something I'm going to address, even though other talks in this meeting will. So first, multi-granular modeling, bridging spatial and temporal scales. The first thing we need is different representations of a neuron. If you look at the modeling approaches people have taken for neurons, one way to organize them is: at level one, detailed multi-compartmental neuron models; at level two, simplified spiking neurons; and at level three, firing-rate models. And of course these different levels of modeling must be connected. Those of you who studied physics may remember that if you want to describe a gas of molecules, you can either do it at the level of individual molecules, with Newton's laws, velocities and positions, or at the level of thermodynamics, with pressure, volume and so on. These are perfectly valid descriptions of the system at different levels, but they must also be interconnected; the relation cannot be arbitrary. Boltzmann and Maxwell showed 150 years ago how they were connected, and that is how statistical physics came about. It must be the same with neurons: these different levels of description must be systematically connected. In our group we have done some work on the connection between simplified spiking models, like integrate-and-fire type models, and firing-rate models.
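To make the level-2 description concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python; all parameter values are illustrative round numbers, not fitted to any data.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a "level 2" simplified
# spiking model. Parameter values are illustrative, not fitted to data.

def simulate_lif(i_input, dt=0.1, tau_m=10.0, r_m=10.0,
                 v_rest=-70.0, v_thresh=-55.0, v_reset=-70.0):
    """Forward-Euler integration of dV/dt = (v_rest - V + R*I) / tau_m.
    i_input: input current (nA) per time step; returns spike times (ms)."""
    v = v_rest
    spikes = []
    for step, i in enumerate(i_input):
        v += dt * (v_rest - v + r_m * i) / tau_m
        if v >= v_thresh:                 # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset                   # reset after the spike
    return spikes

# Constant suprathreshold input drives regular firing.
spike_times = simulate_lif([2.0] * 1000)  # 100 ms of 2 nA input
```

Connecting such a model upward to a level-3 rate description then amounts to asking how its output rate varies with the statistics of its input.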
We have done this for different situations: one where you have really strong synapses, as for the retinogeniculate connection, but also for recurrent networks, in work that is just coming out. Lots of people have worked on connecting level 2 and level 3; there has been much less work going from level 1 to level 2. One reason is that there are very few ground-truth, or gold-standard, multi-compartmental models you can trust. If you want to make an approximation of something, you would like a solid starting point, something worth approximating, and there are really not that many multi-compartmental models out there that you trust enough to go out of your way to approximate them into simpler models. At the level of the barrel column, this three-level approach means having different levels of description of the same cortical circuit, which has something like 10,000 to 20,000 neurons. The Blue Brain project works at the really detailed level, and you do need detailed neuron models, but you also need to represent the same system at coarser levels, and these levels must be interconnected. Then there is another thing we have worked on, which Eric also addressed. Most of computational neuroscience has been about synaptic integration on a millisecond, or ten to a hundred millisecond, time scale: how does a single neuron receive input and produce new spikes? There the salient variable is the membrane potential, and maybe you model the calcium concentration because of its signaling role, but the main players, sodium, potassium and chloride, are not modeled at all.
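One concrete reason tracked concentrations matter: each ion's reversal potential follows directly from its concentrations via the Nernst equation, so if extracellular potassium accumulates, E_K shifts. A small sketch, using typical textbook concentration values chosen purely for illustration:

```python
import math

def nernst(c_out, c_in, z=1, temp_k=310.0):
    """Nernst reversal potential in mV: E = (R*T)/(z*F) * ln(c_out/c_in)."""
    R, F = 8.314, 96485.0  # gas constant J/(mol*K), Faraday constant C/mol
    return 1000.0 * R * temp_k / (z * F) * math.log(c_out / c_in)

# Illustrative mammalian K+ concentrations: ~4 mM outside, ~140 mM inside.
e_k_rest = nernst(4.0, 140.0)   # close to -95 mV
# If extracellular K+ accumulates to 10 mM during sustained firing,
# the reversal potential depolarizes substantially:
e_k_high = nernst(10.0, 140.0)  # close to -70 mV
```

A model that freezes concentrations implicitly freezes these reversal potentials, which is exactly the "invisible janitor" assumption discussed next.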
You assume an invisible janitor takes care of all this. That is sometimes okay, but if you want to model things at longer time scales, for example seconds, diffusion starts getting important and you have to keep track of all these ions. There are processes where potassium is funneled out, like spatial buffering; if you want to include that, you need to track the concentrations, and ion pumps and so on must be included as well. So to model these processes at longer time scales, we actually need new schemes that take this into account. We have done some modeling of the interaction between neurons and astrocytes, where these time scales come into play, and when we tried to add a spatial component, meaning that some parts of the astrocyte are closer to high firing than others, we ran into trouble with the existing schemes. We had to develop a new scheme to make sure that things stay electroneutral and don't become unstable, which we are presenting in a poster at the upcoming Neuroinformatics meeting in Munich. So the basic modeling schemes have really not been developed; it's not only the tools that are missing, we don't really know the mesoscopic physics or the efficient numerical schemes for it. That was the first part. The second part is modeling what you can measure: from a candidate model, we should be able to make predictions for all available measurement modalities, not only electrical but also optical ones. The reason we got into this is a long-term collaboration with Anna Devor and Anders Dale, now at UC San Diego, who did laminar electrode recordings from the rat barrel cortex.
If you flick a whisker and measure the extracellular potential as a function of depth, you see something in the low frequencies and something in the high frequencies. The low frequencies presumably reflect mainly the processing of synaptic inputs, and the high frequencies the firing. But the interpretation of this by hand, just by looking at it, is not trivial. That is one reason people have focused on single-neuron properties: it's limited, but at least you know what you measure. The good thing here is that we know the basic measurement physics; we know the forward solution. Say the only thing happening in your piece of brain is one neuron receiving an excitatory synaptic input, up at the red triangle there. When the excitatory synaptic input impinges, you get a current sink there, and a current source through which the return current leaves. It really leaves from all over the neuron, but assume for now that it all leaves through the soma. From the cable equation it follows that this current source must have the same magnitude but opposite sign as the current sink. And if you know the current sources and sinks and their positions, you can calculate the extracellular potential measured at the electrode. This is quite well established, based on volume conduction theory. The problem, of course, is that when you measure something in the brain, you measure from many sources.
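The forward solution just described can be sketched with the standard point-source approximation for an infinite homogeneous medium, V = I/(4πσr). All numbers here, conductivity, geometry and current amplitude, are illustrative assumptions, not values from any of the studies mentioned:

```python
import math

SIGMA = 0.3  # extracellular conductivity, S/m (a commonly assumed value)

def point_source_potential(i_amp, src, elec):
    """Potential of a point current source in an infinite homogeneous
    medium: V = I / (4*pi*sigma*r)."""
    return i_amp / (4.0 * math.pi * SIGMA * math.dist(src, elec))

# A synaptic current sink on the apical dendrite and the balancing return
# source at the soma: equal magnitude, opposite sign (current conservation).
sink   = (-1e-9, (0.0, 0.0, 500e-6))  # -1 nA, 500 um above the soma
source = (+1e-9, (0.0, 0.0, 0.0))     # +1 nA at the soma

electrode = (50e-6, 0.0, 450e-6)      # electrode placed near the sink
v = sum(point_source_potential(i, pos, electrode) for i, pos in (sink, source))
# Near the sink the summed potential is negative (a few microvolts here).
```

Because this forward model is linear, a multi-compartmental neuron, or a whole population, is handled by simply summing such contributions over all compartments and time steps.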
And so it's not so easy to disentangle. The good thing, though, is that this is linear, so it applies to multi-compartmental models; you just have to keep track of all the transmembrane currents in all compartments. It also applies to neuronal populations. We have done quite a bit of work using this kind of forward-modeling scheme to make the connection between neural activity, spiking or synaptic input, and what you measure on the outside. Here is an example, not a calculated local field potential, but a calculated extracellular signature of a spike at various positions. If you put your electrode where the red electrode points are, you measure this kind of sharp negativity followed by a slow positivity, which is typically what you measure with an electrode close to a neuron. So this is a quite well-established scheme, although it still needs more validation. One thing we use it for, and this is a side comment on what these multi-scale models can be used for, is not only making predictions from a network model to compare with experiments, but also testing widely used data-analysis methods. One key method that is widely used and very problematic is spike sorting. If you put an electrode in the brain, at high frequencies you pick up spikes from neighboring neurons, but what you would like to know is which neuron fired when, the individual spikes of individual neurons, because that often contains important information about, for example, how correlated the populations are.
So spike sorting is an important analysis technique, and it is time-consuming and unreliable: the result depends on who does the analysis and which lab is involved. So everybody is interested in automated spike-sorting algorithms. But how do we test them? If you want to test things reliably, you would like ground-truth data, data where you know the true result, because then you really have control over your algorithm. This is something we have been generating with the forward-modeling scheme. This is an example of a tetrode, a multi-electrode with four contacts, shown in red, blue, green and yellow, and the triangles are pyramidal neurons which are spiking. We can impose whatever spiking we want in our model world, calculate the extracellular potential recorded at the tetrode, mimicking a virtual experiment, and then give these tetrode recordings to a spike-sorting algorithm, saying: now you tell us the spikes. Afterwards we can check the result. And this is an open-ended approach: real recordings have all kinds of complications, correlations, bursting, spike shapes that vary from spike to spike, and we can add as much of this as we want and build a whole set of test data. There is a collaborative effort on the development and validation of these automatic spike-sorting algorithms, and the German INCF node is now hosting a website where people with algorithms can meet people with data; hopefully it will be up and running in June.
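With ground-truth spike times in hand, scoring a sorter's output reduces to matching detected spikes to true spikes within a small tolerance window. A hypothetical minimal scorer, where the tolerance value and the greedy matching rule are my own illustrative choices, not part of the validation effort described above:

```python
def match_spikes(truth, detected, tol=0.5):
    """Greedily match detected spike times (ms) to ground-truth times within
    +/- tol ms. Returns (hits, misses, false_positives)."""
    remaining = sorted(detected)
    hits = 0
    for t in sorted(truth):
        for j, s in enumerate(remaining):
            if abs(s - t) <= tol:
                hits += 1
                del remaining[j]   # each detected spike matches at most once
                break
    return hits, len(truth) - hits, len(remaining)

# Three true spikes; this sorter found two of them, missed one, and
# reported two spurious events.
scores = match_spikes([10.0, 25.0, 40.0], [10.2, 25.3, 41.0, 60.0])
# scores == (2, 1, 2)
```

Real benchmarks add per-neuron accounting and overlap handling, but the principle is the same: the virtual experiment supplies `truth`, the algorithm under test supplies `detected`.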
But this problem of validating analysis methods is not only about spike sorting; it applies to all kinds of other analysis methods, and at the very least we should try to validate or test them as much as we can. We are going to have an INCF workshop, similar in format to this one, in three weeks' time here in Stockholm, where this will be the focus: not only spike sorting, but also analysis of LFP, and spike estimation from two-photon calcium imaging, that is, how to go from calcium bumps measured in neurons back to spikes. We have also done some work on the local field potential, investigating how it varies with synaptic position and so on. For example, this is what a true neuronal dipole looks like. People doing EEG or MEG analysis try to estimate mesoscopic dipoles, and this is what those neuronal dipoles really look like. One thing we see is that a hundred-hertz dipole is not just the same thing a hundred times faster; it is completely different. This has been neglected in typical analysis of LFP and EEG data so far, and some people have made wrong interpretations simply because they haven't taken the measurement physics properly into account. In another study we investigated how local the local field potential really is. The outcome was that if you put an electrode in cortical tissue and the neurons are uncorrelated, you typically pick up signals from neurons 0.2 mm or less away; if they are correlated, the reach increases. One interesting thing about this multi-level scheme and modeling what you can measure is that the connection from the models to what you can measure typically happens at the level of reconstructed neurons.
A point neuron doesn't have an LFP. So in this multi-level scheme you typically need to make the connection between your models and what you can measure at level one. That doesn't mean you should necessarily get the dynamics right at level one; maybe it's better to do that with integrate-and-fire neurons, but then you need some kind of hybrid scheme to make these measurement predictions, and that is something we are working on. We are also going to have a workshop on exactly this, modeling what you can measure; I think Christof Koch and Jason Kerr are coming, as one of the workshops at the Neuroinformatics Congress in Munich in September. I think that can be nice. Okay, that was two points. The third point is modeling when you don't know all the numbers. This is an example from our own lab, where we made a multi-compartmental model of an LGN interneuron. Interneurons in the LGN are quite mysterious creatures, because they not only have axonal output, they also have dendro-dendritic interactions. We wanted to understand the circuit behavior, so we needed a good interneuron model. And we did what I think is quite common when making multi-compartmental neuron models. We didn't do the experiments ourselves, but Heggelund's lab in Oslo did patch-clamp recordings from interneurons, injected currents in the soma and got out voltage traces, so we had two example interneurons that we then tried to model. Geir Halnes, a postdoc in the group, found the right conductances, essentially, in the end, a set of conductance densities and other parameters, that gave quite a good fit to the data.
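To show the shape of such a fitting procedure in miniature, and this is a toy sketch, not the actual method used for the interneuron model: a single-compartment passive model whose leak conductance is recovered from a "recorded" trace by grid search over candidate values. All units and numbers are illustrative.

```python
def simulate_passive(g_leak, i_inj=0.1, c_m=1.0, e_leak=-70.0, dt=0.1, n=200):
    """Voltage response of a passive single compartment to a current step,
    integrated with forward Euler. Units are illustrative (nS, nA, nF, mV, ms)."""
    v, trace = e_leak, []
    for _ in range(n):
        v += dt * (g_leak * (e_leak - v) + i_inj) / c_m
        trace.append(v)
    return trace

def sse(a, b):
    """Sum of squared errors between two voltage traces."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Stand-in for an experimental recording: generated with a known g_leak
# of 0.05 so we can check that the fit recovers it.
target = simulate_passive(0.05)

candidates = [0.01 * k for k in range(1, 11)]        # grid of g_leak values
best_g = min(candidates, key=lambda g: sse(simulate_passive(g), target))
```

Real fits replace the grid with an optimizer and the toy model with the full multi-compartmental one, which is exactly where the worry about non-unique parameter sets comes in.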
So now we hope that these models represent those two interneurons, and that we can use them for other purposes, in circuits and so on. But this is really unsatisfactory, right? It is what everybody does, and you can do a bit of sensitivity analysis to see how much you can change things, but the conductance values may be taken from different species, certainly different preparations, so it's really unclear what we have learned. We are not worse than others; this is the standard kind of neuron modeling. But it is a rather dramatic situation: multi-compartmental modeling of single neurons is the most commonly done type of modeling, and you are on very shaky ground. We get it published, so in some sense it solves the sociological issues, but there is still something deeply worrying about it, and it is something we have to address. It's even more problematic, of course, for cortical network models. Here is a paper that just came out from Simon Schultz's group at Imperial. It's a nice paper on cortical network models, and the first five tables are the parameters. Of course, that's how it has to be, and it's a good paper in the sense that they really describe what they've done, so we can build on it, but it's really unclear what it all means. Then of course there is the question: if you have these different candidate models, how do you find out which is closest to reality? That's what you typically want to do with the models, use them to make predictions and find out which is closest to reality. So if you have these candidate models, you calculate all the things you can calculate: spike rates, multi-unit activity, LFP, maybe voltage-sensitive dye imaging or calcium imaging if you have that, and then you want to find the most probable model given all available data. How do you do that? It's not easy at all. You can say Bayes, but as far as I have seen it's not easy to make a practical Bayesian scheme for these multi-modal situations. You can do something about models with fewer parameters being more convincing than models with many parameters, but I don't know. We have an example where we wanted to extract a firing-rate model for the processing of whisker input to a column. We used the same kind of data we saw earlier, but also with a sharp electrode in the thalamus, in the part projecting to the principal column recorded by the laminar electrode. From this data we could extract population firing rates; that was the first step, using the measurement physics to get out the population firing rates for the populations in the different cortical layers. So far it is only data analysis: we have population firing rates from the thalamus and the different layers. Then we wanted to make a general model of the path from thalamus to layer 4, with feed-forward terms, recurrent terms and so on. It turns out that the data is not rich enough: with too many parameters in the model you cannot find the minimum. So then you start looking
at reduced versions of the model: one where the slow-time-scale recurrent interaction is the key thing, and one where the model assumes it is a feed-forward system. You fit these different models to the data, and you see that the recurrent model actually fits slightly better in terms of numerical error. But then it's really difficult to make a statement about which of the two models really is best. There are certain things you can do, but this question, which of the candidate models is best, or how much better, or what is the best we can say about the world based on this modeling exercise, is something we also need to address systematically. The last thing: modeling tools and hygiene. We obviously need to develop effective, reliable and easy-to-use simulation and analysis tools. We have made a few: LFPy, for simulation of extracellular potentials; Hans Ekkehard Plesser's group is involved in NEST; and we have also developed iCSD tools with Klas Pettersen, and also Simon, who is sitting here. What you would like, at the end of the day, is to be able to take a candidate model, maybe formulated as a spiking network or whatever you have, and quite easily get out predictions for the different measurement modalities on your computer. There are many initiatives like this, and many of you in the room are involved in them; you should get the logo up! I was trying to find MOOSE, but there was no MOOSE logo and no website I could find. Anyway, logos are important if you want to show off. It resonates well — what? — it resonates well with Sweden, okay. So you should have an open-access logo, then. But there is also this other thing, what I call modeling hygiene: how to make computational neuroscience reproducible. Eilen Nordlie and Hans Ekkehard Plesser in our group, together with Marc-Oliver Gewaltig, who of course is now in Lausanne, had a paper about how to communicate models and how to make them reproducible, for example by making standardized declarations of what a model contains. These are also things that need to be addressed. So finally, to summarize these key challenges: taking up the challenge from Eric, when it comes to the points of modeling what you can measure, modeling when you don't know all the numbers, and selecting the best model when comparing with experiments, it is not really a question of tools; it is more a question of understanding what to do. One thing we can do in our multi-scale program is to have workshops on this. I have talked to several people about it, and everybody who is a serious modeler worries about these things, so I think such workshops would interest many people in the community. Okay, and then these are the acknowledgments, the people who have contributed to this work, at the Norwegian University of Life Sciences, 30 kilometers south of Oslo, and at these other places. Thank you.