It's very nice to be here. It's my first real visit to Quebec and to the city, and I've really enjoyed it. So I'll tell you a little bit about our work on multi-scale modeling and how we try to fold in different kinds of data, with "neural recordings" used in the very broad sense of any kind of data that pertains to neuronal function. Let me just start with a glimpse of how I think about models. We have models which can be word models. Here's a very influential word model that, coincidentally, originated around here: Donald Hebb's model, which has driven a lot of investigation into plasticity. But this is a word model. It tells you what will happen, but it doesn't do so in a mathematical sense, and it doesn't do so in a mechanistic sense, though it gives some indications about where to look for these things. Here's a different kind of model, and actually quite a powerful one. This model makes very accurate predictions about things, but clearly it has nothing at all in common with the mechanisms that the real system obeys. Here's a kind of model that we've seen at different points in this meeting: different levels of abstraction of neuronal function. So here's a very simple rate model for how neuronal computation might take place, and this is commonly used in large neural network simulations. It's used a lot in the AI and machine learning domain, but we're not going to go there. And then we're starting to come a little bit closer to home. There are different kinds of models which use various abstractions of the biophysics. So here's an integrate-and-fire model. Now, the way I like to broach the topic with my students is to say that there's a relatively small set of basic biophysical and biochemical principles with which you can account for a huge swathe of neuronal function. This relatively small set of equations, ranging from chemical kinetics and diffusion to the Nernst potential, the cable equation, Hodgkin-Huxley channel kinetics, and synaptic conductances (a representative sample is written out below), is an excellent foundation for a huge range of models, and is in fact the basis for some of the work I'll be telling you about. Many of the kinds of models you've heard about fall within the ambit of this set of equations. Now, this is the recurring question that modelers have to address: how much detail do you put in, and why does it matter? And since I'm going to be delving somewhat deeper into the cellular detail of what goes on, maybe I'll spend a couple of moments giving you my perspective on why the detail matters and why I think that worrying about things as fine-grained as molecular function is important. Let's start with just the computational side of things. If you want to think about how a neuron performs its computation, one aspect of it is something that Yota already brought up, which is that the neuron is not simply a ball that gets inputs from different places and sums them up. It's doing a huge amount of computation along the way. The dendrites are in fact a very effective parallel computation mechanism, and you can abstract that as a multilayer neural network, as she has done, or you can abstract it, as I'll be telling you about, as different domains doing quite sophisticated computations ranging from information storage to pattern selectivity. So that's the P here, the parallelism.
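For reference, here is a representative sample of that foundational set of equations, written in their generic textbook form; these are not the specific parameterizations of any model in this talk, just the standard versions of the pieces named above.

```latex
% Diffusion of a chemical species C along a dendrite (Fick's second law)
\frac{\partial C}{\partial t} = D \, \frac{\partial^2 C}{\partial x^2}

% Nernst potential for an ion of valence z
E_{\mathrm{ion}} = \frac{RT}{zF} \ln \frac{[\mathrm{ion}]_{\mathrm{out}}}{[\mathrm{ion}]_{\mathrm{in}}}

% Cable equation for membrane potential V in a cylinder of radius a,
% with Hodgkin-Huxley-style channel terms and a synaptic conductance
C_m \frac{\partial V}{\partial t} =
  \frac{a}{2 R_a} \frac{\partial^2 V}{\partial x^2}
  - \sum_k \bar{g}_k \, m_k^{p_k} h_k^{q_k} \,(V - E_k)
  - g_{\mathrm{syn}}(t)\,(V - E_{\mathrm{syn}})

% Hodgkin-Huxley gating kinetics for each gate variable m
\frac{dm}{dt} = \alpha_m(V)\,(1 - m) - \beta_m(V)\, m

% A simple synaptic conductance time course (alpha function)
g_{\mathrm{syn}}(t) = \bar{g}_{\mathrm{syn}} \, \frac{t}{\tau} \, e^{\,1 - t/\tau}
```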
The next point, which to me is one of the most exciting, is emergence: when you start to look more closely at what's going on in a detailed neuron, in a detailed neuronal model, you get to see phenomena whose properties are very, very different from just the sum of the parts. These are the interesting nonlinearities that come out, properties like multistability and so on. They are emergent properties, if you like; it's a much abused term, but these are properties which make the neuron do much more interesting things than simply add up a whole lot of inputs. Another thing which I've grown to appreciate as I've done these models over the years is that, remarkably often, when you look at the details that underlie neuronal function, when you look at the various channels that are there, when you look at the signaling pathways, it seems that nature has put in a lot of redundancy, a lot of ways of ensuring that the system works even if the conditions are not quite right. And this just happens when you put in more detail. To put it in other terms, you can make abstract models that will do what you want, perhaps, but will only do so in a very narrow range of parameters. I've found, more often than not, that going to a very detailed model means that the system behaves correctly, in other words it exhibits the properties that you're interested in, over a much wider range of conditions. And this, I think, is interesting and perhaps a natural outcome of the fact that neurons, and biological systems in general, have to work over a very wide range. So when you go into the biological details, you're likely to see some of those properties emerge. Another thing which is important is that very frequently, when you look at an abstract model, or any model, you do so in a very specific context. In other words, it will work for one set of conditions, but you change the conditions, you change the problem a little bit, and it might not do what you hoped it would do. To put it in other terms, a real cell, a real neuron, has to worry about not just integrating, say, synaptic input; it has to worry about neuromodulators. It has to worry about the fact that it may be affected by pathogens, it might undergo some kind of damage, and yet it has to continue to perform a function which may be somewhat modified by these additional inputs, but it still has to give a reasonable output. So real neurons are operating in a more complicated context than simple abstract models. And finally, there's the much neglected and, I think, much underappreciated aspect of homeostasis or housekeeping, which is just keeping the shop going: keeping the neuron alive, keeping the ATP supply going, keeping the channels in place, maintaining the ion gradients. This is non-trivial in the extreme, and I think there's a lot of very interesting, essential stuff going on at this level of computation. So this set of things, PERCH, that is parallelism, emergence, robustness, context, and housekeeping, these five points are purely from the computational viewpoint; and if one wants to look at other domains such as disease, damage to the brain, or development, then looking at these additional details, especially the molecular-level ones, becomes very, very important. So this is why I'm so interested in what's going on in great detail, as I'll be telling you about. Okay, so this is my flame-bait slide.
This is my back-of-the-envelope calculation, which says that what most people observe through electrical recordings is actually a very small error term on what the neuron is really doing by way of computation. The argument goes like this: electrical computations in the brain happen fast, they operate on the time scale of, say, one millisecond, but electricity propagates quite a long distance; the length constant is quite long, say about half a millimeter. Chemical computations, on the other hand, are relatively slow, though of course bear in mind that synaptic transmission is primarily chemical. But even leaving that outside our present discussion, chemical stuff is slow, but the length scale is very short, because it is limited by diffusion, so you compensate for the slow chemical calculations by having a lot of calculations going on in parallel. You could basically say that every spine is performing its own computations in parallel. And in addition to all of this, or rather to multiply all of this, there's the fact that when it comes to electricity there's one signal, the potential or the current, which are flip sides of the same coin, whereas in chemical signaling you have hundreds of different pathways. So you put all of these things together, and in principle the amount of computation you can do chemically is far more than you can do electrically (a toy version of this arithmetic is spelled out below). Anyway, this is a bit of flame bait and we can have some fun discussions about it later on, but it motivates some of the analyses and approaches that I'll be telling you about. Okay, so what I'll do is start out talking about the modeling framework that we use, then discuss some of the data framework, and Sharon gave a very nice overview of the importance of such frameworks, and then I'll give you some glimpses of what we're planning to do with these capabilities. So our modeling framework is MOOSE, which stands for Multiscale Object-Oriented Simulation Environment, for those of you who are wondering; I thought I'd forgotten to put the acronym on the slide, but I've actually got it up there. And it's open source, GPL, all the rest of it. You can download it, you can play with it, you can make your computer overheat very nicely with all of this. The key point of MOOSE is that it was designed from the outset to be able to do multi-scale computations, that is, calculations ranging from literally single-molecule-level events all the way up to large networks. There are two particular domains that we've been playing with in recent years. One is the multi-scale domain, ranging from a few molecules doing chemical computations embedded in their natural setting, that is, in a single neuron. These calculations, which I'll tell you about a bit more, use a framework called rdesigneur, and this is basically aimed at describing what single neurons can do computationally, including all of the chemical events. And then we also have a bunch of network computations that we can do, and you've seen a lot of those kinds of multi-scale computations; for example, Ivan gave this marvelous talk about using really, really detailed models, even at the single-cell level, and then embedding them in a large network. So these are all things that one can do with MOOSE.
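Coming back for a moment to the flame-bait slide, here is the kind of toy arithmetic behind it. The electrical numbers (about a millisecond, about half a millimeter) are the ones quoted above; the chemical timescale, diffusive length scale, and pathway count are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope comparison of electrical vs. chemical "computational elements"
# along a 500-micron stretch of dendrite. The chemical numbers are assumptions
# chosen only to illustrate the shape of the argument.

dendrite_length_um = 500.0

# Electrical signalling: ~1 ms timescale, ~0.5 mm length constant, one signal (Vm).
elec_timescale_s = 1e-3
elec_length_constant_um = 500.0
elec_signals = 1

# Chemical signalling: slow (assume ~1 s), confined to ~1 um by diffusion (assumed),
# and carried by on the order of a hundred distinct pathways (assumed).
chem_timescale_s = 1.0
chem_length_scale_um = 1.0
chem_pathways = 100

# Independent "units" along the dendrite, and crude operations per second.
elec_units = (dendrite_length_um / elec_length_constant_um) * elec_signals
chem_units = (dendrite_length_um / chem_length_scale_um) * chem_pathways
elec_ops = elec_units / elec_timescale_s
chem_ops = chem_units / chem_timescale_s

print(f"Electrical: {elec_units:.0f} unit(s), ~{elec_ops:.0f} ops/s")
print(f"Chemical:   {chem_units:.0f} units,  ~{chem_ops:.0f} ops/s")
print(f"Chemical/electrical ratio: ~{chem_ops / elec_ops:.0f}x")
```

Even with these deliberately crude numbers the chemical side comes out well ahead, which is the point of the slide.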
So just to remind you of the kinds of calculations one does: you can use your standard electrical ingredients, which are cell morphology and a bunch of ion channels. You can define these things using NeuroML, you can define them using some legacy formats like the GENESIS .p format, and you can define the morphology using the SWC format from NeuroMorpho. These are all things the system can take in. It builds up your standard cable-equation representation, which is implemented as a bunch of compartments, and this is solved using one of our numerical engines, which we call HSolve, in honor of Mike Hines, H for Hines. And this can of course partition the cell at a very wide range of levels of detail. The chemical side is actually very reminiscent of the electrical side of things. We can import a bunch of definition formats for chemical systems, and what we define is a reaction-diffusion system. Much like compartmental modeling in the electrical domain, the way you do these calculations is that you subdivide the cell. You have to subdivide it more finely, because the length constant for diffusion, as I said, is quite small, so you may have to do far more subdivisions of the cell in order to accurately describe the chemical reactions and the diffusion going back and forth. And then you have some set of reactions which are solved using differential equations. You can partition this in various ways, and we have two numerical engines for doing this, which work in tandem: one is for the chemical kinetics, that's Ksolve, and the other is for the diffusion, that's called Dsolve. These can do fairly large reaction-diffusion systems, subdividing the cell into 30,000 or so little pieces. And each one of those pieces, just so that you're aware of what it takes, may have a lot of reactions in it. This is a small subset of the kinds of reactions that one worries about in synaptic functioning, for example; this is just a bunch of synaptic reactions that we did many, many years ago. And of course things have gotten worse, if you like, since then: there are a lot more reactions that are now known, and recognized to be important, for synaptic function. So we put a little bit of extra effort into the spines, for a number of reasons. This is something for which, to my knowledge, there is no standard to define what spines do and how they should compute, but we have a specification within the rdesigneur framework, which I'll tell you about in a moment. Spines do a lot of very interesting things. For one, they house a lot of reactions, of course, and they house the synapses, they house the NMDA receptors, and so on. But spines are not static structures; nor, for that matter, are neuronal morphologies in general. Spines are really very dynamic, and when you have spines changing their size and shape, that does a lot of strange things to the possible physiology, to the possible computations. For example, say a spine gets bigger. Assuming that the receptor density stays the same, the total receptor conductance is going to become larger. Assuming that the number of molecules in there stays the same, those are going to get diluted out. Assuming that the cell membrane properties are the same, the passive properties are going to change all over the place. And something else happens which is also quite a complication, which is that when the spine geometry changes, the diffusive access that the spine has to the dendrite changes dramatically.
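Here is a small worked version of those scaling arguments, for a roughly spherical spine head whose diameter doubles. The spherical-head geometry and the assumption that the neck diameter scales with the head are simplifications for illustration only.

```python
# Toy illustration of why spine geometry matters: double the head diameter and
# see what happens to total receptor conductance (fixed receptor density),
# to concentration (fixed molecule number), and to diffusive coupling through
# the neck (flux taken as proportional to neck cross-section, with the neck
# diameter assumed to scale with the head). All numbers are illustrative.
import math

def head_area(d):       # membrane area of a spherical head of diameter d
    return math.pi * d ** 2

def head_volume(d):     # volume of a spherical head of diameter d
    return math.pi * d ** 3 / 6.0

d_before, d_after = 0.5e-6, 1.0e-6   # head diameter in meters, before and after

area_ratio = head_area(d_after) / head_area(d_before)        # conductance scales with area
conc_ratio = head_volume(d_before) / head_volume(d_after)    # concentration dilutes as 1/volume
neck_coupling_ratio = (d_after / d_before) ** 2              # diffusive flux ~ neck cross-section

print(f"Total receptor conductance (fixed density): x{area_ratio:.1f}")
print(f"Concentration of a fixed pool of molecules: x{conc_ratio:.3f}")
print(f"Diffusive coupling to the dendrite:         x{neck_coupling_ratio:.1f}")
```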
And all of these changes, the receptor conductance, the dilution of the molecules, the passive properties, the diffusive access to the dendrite, are themselves the outcome of some chemical event that caused the spine to change in the first place. So you have all these interesting loops across scales, from the chemical to the electrical to the morphological, and you have to fold these all together, especially when modeling spines. There are some interesting side notes here. For example, it may be a bit surprising, it was certainly non-intuitive to me, that it's actually faster to do the stochastic calculations for chemistry, when you're working in the volume of a spine, than it is to do standard deterministic calculations. So your ODEs take longer to run than your stochastic, Gillespie-algorithm kind of calculations, which is kind of nice, because when you're talking about the volume of a spine you need to worry about chemical noise, and that's what the stochastic methods give you. And then again, you have to deal with tens of thousands of spines in some very detailed models. Okay, so this is the kind of stuff that lives in every single one of those spines and that we need to crunch through. So let me change track a little bit to how we set these things up. We've devised something called rdesigneur, and I should really have broken up that acronym on the slide: it stands for Reaction-Diffusion and Electrical Signaling in Neurons. That's what rdesigneur is. What it can do is listed over here and sort of schematized over here. Basically, you can define the somatic excitability, you can define molecular transport along the dendrites, dendritic excitability, you can insert spines in whatever positioning and spacing you like, you can address matters of spine structural change, you can define protein synthesis and turnover, synaptic input of course, and deal with diffusion and chemical signaling all over the cell. So let me give you a simple example of what you can do with rdesigneur. Here's a little system where you have a cell which is excitable. It has a couple of spines on it which have some reactions in them, and they have synaptic input coming in to glutamate and NMDA receptors. And then there's some reaction-diffusion signaling happening in the dendrite. Here's the chemical system: some bits of it are in the dendrite, that's just the calcium diffusing, some bits of it are in the spine head, and some bits of it are in the postsynaptic density. So this is a neat little system, and what you can do with this is actually kind of cool. This turns out to be bistable. It's a made-up system, but it's bistable, which is an interesting property, by which I mean the following. If you give it a regular ticking of synaptic input at one hertz, the system will settle down to a certain low value of membrane depolarization; every time you give a pulse, there's a small uptick in the membrane potential. At 10 seconds, what we're doing here is giving it a burst of synaptic input. What that does is open the NMDA channels, calcium floods in, and it kicks off the reaction system you see over here. That works its way through to CaMKII, and what CaMKII does is translocate to the postsynaptic density, where it phosphorylates, and therefore increases the conductance of, your glutamate receptor. That means that every synaptic input causes a greater depolarization, which means that more calcium will come in even for the low-rate synaptic input. And so there's the phosphorylation of the receptor happening.
And basically, to cut a long story short, this flips it into a state where the baseline synaptic input is enough to keep the thing in a state of high responsiveness. Okay, so it's a toy model in some sense, because it's obviously somewhat artificial, but I think it illustrates the levels of complexity that come up when you are dealing with multi-scale events that include the chemical events and the electrical events, and it takes one line to change this to also deal with, for example, morphological change. And just to indicate what rdesigneur can do for you, this is the entire definition file for this model. There are a few lines here for defining the simulation, specifying the time steps for display and computation. There's a list of prototypes, which includes the channel prototypes; basically the channel kinetics are defined in a NeuroML file somewhere. There's something that defines what the cell geometry is, something that defines what kind of spine we're using, what its dimensions are, what its channels are. Then we have a small section which tells us where the spines are on the neuron and where the channels are on the neuron. And you can put in fairly complicated distributions, I haven't done it here, depending on position from the soma and so on. So that's the distributions part. And then, in just these two lines, we're doing the multi-scale bridging. We're going from the chemical to the electrical, that is, from channel phosphorylation to a change in conductance, and we're going from the electrical to the chemical, that is, from the calcium ion influx to the calcium concentration. So that just takes a couple of lines. And then we deliver the stimulus, which takes all of one line, and we have a bunch of lines for displaying it and running it. So this is the entire definition for this rather interesting loop. And that's not all, but let me just spend a couple of moments describing what these adaptor things are, because these are the core of mapping between different domains of function. What you have to do when you're mapping from chemical to electrical, for example, is this: as I said, the spatial discretization of the chemical system is much finer than it is for the electrical system. So if you want to take, say, a concentration term and use that to modulate a channel conductance term, what you need to do is say, I'm going to take all of the concentration terms in all of the voxels that map onto the one segment that was used for the electrical calculation. So it has to do an averaging over a certain length scale; that's a space averaging. But interestingly, you have to go the other way when doing the averaging in the time domain: you might have some complicated but very fast electrical event, let's say the ion influx, that is happening at a much finer time scale than the chemical computations are. So you need to do the averaging, the summing, over the finer time scale and pass that on to the chemical system. All of this happens behind the scenes, and then there are various scaling factors that one can apply, for example, to map the concentration of the channel to a conductance change. And this is all it takes, really, to map from one domain of computation to the other. And that's how we do the multi-scaling.
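For those who want to see what such a definition file looks like, here is a sketch of an rdesigneur script with the same overall structure as the one just described: prototypes, distributions, the two adaptors that bridge chemical and electrical scales, a stimulus, and plots. The field layouts follow the published rdesigneur examples from memory and may differ in detail from the current release, and the morphology and chemistry files named here are placeholders, so treat this as a structural sketch rather than a reproduction of the model in the talk.

```python
# Sketch of a multi-scale rdesigneur model definition (field details approximate).
import moose
import rdesigneur as rd

rdes = rd.rdesigneur(
    elecDt = 50e-6,            # electrical time step
    chemDt = 0.002,            # chemical time step
    diffusionLength = 1e-6,    # chemical voxel size along the dendrite

    # Prototypes: cell geometry, channel kinetics, a spine, a chemical system.
    cellProto  = [['./cells/myCell.swc', 'elec']],          # placeholder morphology
    chanProto  = [['make_HH_Na()', 'Na'], ['make_HH_K()', 'K']],
    spineProto = [['makeActiveSpine()', 'spine']],
    chemProto  = [['./chem/psd_camkii.g', 'chem']],          # placeholder chemistry

    # Distributions: where the channels, spines, and chemistry go on the cell.
    chanDistrib  = [['Na', 'soma', 'Gbar', '300'],
                    ['K',  'soma', 'Gbar', '250']],
    spineDistrib = [['spine', '#dend#', '5e-6', '1e-6']],
    chemDistrib  = [['chem', '#', 'install', '1']],

    # The two multi-scale bridges: chemistry -> receptor conductance,
    # and calcium influx -> chemical calcium concentration.
    adaptorList = [
        ['psd/chan_p', 'n', 'glu', 'modulation', 0.1, 1.0],
        ['Ca_conc', 'Ca', 'spine/Ca', 'conc', 1e-4, 8.0],
    ],

    # Stimulus and plots.
    stimList = [['head#', '0.5', 'glu', 'periodicsyn', '1 + 40*(t>10 && t<11)']],
    plotList = [['soma', '1', '.', 'Vm', 'Soma membrane potential'],
                ['head#', '1', 'psd/chan_p', 'conc', 'Phosphorylated receptor']],
)

rdes.buildModel()
moose.reinit()
moose.start(25.0)   # simulated seconds
rdes.display()
```

Note the design point made above: the multi-scale bridging is the two adaptorList lines, and swapping in a different morphology or chemical system is a one-line change each.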
So, just to give you an idea of the kinds of complexity we look at, here's the simple model that you just saw. It has six electrical segments, four ion channels, two kinds of receptors, some 60 chemical voxels, and eight or so reactions. The more complicated version over here, a significantly more complicated version, has a full neuronal morphology, suitably subdivided, with lots and lots of spines, lots of ion channels, and a much more complicated reaction scheme. And the key point is that to go from one to the other, you need to change three lines. You need to change one line which says, this is the morphology. You need to change one more line which says, this is the chemical system. And you need to change a line which says, this is how you're going to deliver the stimulus. And that means that this entire program, not a snippet, the entire program that defines this multi-scale reaction-diffusion system, can define this simple model here as well as this complicated model here. And that, I think, is a very nice way of using the modularity of standards: we take some standard definition for morphology, we take some standard definition for chemical systems, and it's then up to the software to package all of these and make them work together. I'm sure this is of interest to some of you: the speed question, how long does it take to run these things? Well, the simple model runs significantly faster than real time, which is nice because chemical-type experiments, such as LTP experiments, often take half an hour to run. Unfortunately, unless you do parallelization, the big thing runs a lot slower than real time. And it's kind of interesting that the rate-limiting step is actually the electrical computations, because you have to run electrical models at a much finer time step than chemical ones. So adding the chemistry doesn't actually slow it down that much; you can have a really, really complicated chemistry in a model of this size. Okay, so that's just an indication of what you have there. So that was the modeling framework, and now I'll tell you a little bit about how we parameterize and get the numbers into such a system. You've already heard about various data frameworks. The Blue Brain project had a very systematic way of building up their parameters. You also heard about the Allen Institute's framework for defining specific kinds of experiments and having them in their database. We've also developed a database, and it actually bears some resemblance to something that Sharon talked about: how do you define a set of experiments and map them onto a set of simulations? The basic idea is that we have our model; for example, it could be defined in SBML, or it could be defined using rdesigneur. We have a stimulus that was used in an experiment, so they put it in a slice or in a dish and they zapped it in some way, or they poured some chemicals on it. And then you have the expected outcome, what actually came out of the experiment. We define this in what we call the FindSim format, and at this point it's simply a table format which allows us to unambiguously specify all of the inputs that go into this calculation. That feeds right into MOOSE, which does everything that needs to be done, and you'll see a little bit more about that. That gives you a readout, and then you can compare that with what the experiment actually got. And that gives you a score, and you can do all sorts of things after that: you can optimize against it, you can decide whether or not you believe this experiment, and so on.
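The scoring step at the end is conceptually simple; here is a minimal sketch of the kind of comparison involved. This is an illustration of the idea, not the actual FindSim scoring code, and the normalized-difference metric is just one reasonable choice.

```python
import numpy as np

def experiment_score(sim_readout, expt_mean, expt_sem=None):
    """Compare a simulated readout against experimental data points.

    sim_readout, expt_mean: arrays sampled at the same stimulus values or times.
    expt_sem: optional per-point uncertainty used to weight the differences.
    Returns a score where 0 is a perfect match and larger is worse.
    """
    sim = np.asarray(sim_readout, dtype=float)
    expt = np.asarray(expt_mean, dtype=float)
    if expt_sem is None:
        # Normalize by the dynamic range of the experimental data.
        scale = np.ptp(expt) if np.ptp(expt) > 0 else 1.0
        return float(np.sqrt(np.mean(((sim - expt) / scale) ** 2)))
    sem = np.asarray(expt_sem, dtype=float)
    return float(np.sqrt(np.mean(((sim - expt) / sem) ** 2)))

# Example: a dose-response readout at four stimulus levels (made-up numbers).
print(experiment_score([0.1, 0.4, 0.8, 1.0], [0.12, 0.35, 0.85, 1.0]))
```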
So this is the pipeline that we employ for this system. One important design decision we had to make was: how do you define the model on which you will base everything else? One way of doing it might have been to have a model for each experiment and then try to find a way, subsequently, to put them together. We felt that instead we would have a reference model which has all the pieces in it, because the interactions between the pieces are actually also fairly involved. So here's our reference model up here in panel A. And for a given experiment, which might only involve a small subset of pathways, it might be in a test tube and only involve a few molecules, what we do is trim away the excess and just focus on the parts we are interested in. So this is the rough pathway that we're interested in for this experiment, and just zooming in over here, we trim away all of the other bits, all the bits there and there, and focus on that. Even within that there are some bits that we drop out, and then the system cleans up all the loose ends and runs the simulation just for that bit, just for that experiment. And then you can specify all the necessary aspects of the experiment; it turns out that there's a common subset of experiment types that seems to account for many, many things. You define what kinds of readouts you want, these would be the results, so you could have a dose-response curve, some kind of bar chart, an action potential coming out of a current clamp experiment, and so on. Then you run the simulation and get a score out of it; you can do different kinds of fitting, and you can use this for optimization. In the end, though, it comes down to scientific judgment to decide how much you trust this experiment and how much weight you want to give it. Over the past few months we have built up a database of experiments, and this is something we keep expanding; at this point there are some 70 different experiments, each of which can constrain the model in different ways, looking at different pathways. The model at this point, version 0.9 something, has got roughly 300 molecules and 200 reactions in it, and it draws on data from many, many sources. So that's the framework in which we are working to get decent models. Let me just end, then, with what we're hoping to do with all of this. Let me get back to my PERCH acronym here. So P: we've been analyzing parallel computation over the cell. Here's our sort-of monster model, which has got all sorts of horrible things in it: a morphology from NeuroMorpho, some 5,000 spines, a huge reaction set in each spine, background synaptic input, the works. And this is just a picture of it. All the little blinky lights, which you may or may not be able to see, are synaptic input coming in randomly at background, except it's not totally random: somewhere embedded in all these blinky lights are certain sequences of input. And the parallelism comes in because the cell, this is our interpretation, is able to monitor in parallel whether or not there is a sequence of input, a sequence of activation, coming in on any particular dendrite. You could also argue that the cell is, in parallel at every single one of those synapses, assessing whether the pattern of inputs is such that it should engage in synaptic plasticity. So that's parallelism. Here's an example of emergence.
This is a very old study that we looked at, where the emergent property was that of bistability. You have a feedback loop, a chemical feedback loop in this case, and that is interesting as a switching function. Here's another kind of emergence. We were looking at sequence selectivity, and basically what we find is that, with the right chemical system in place, if synaptic input comes in at successive locations along a dendrite, you get a strong response, but if it comes in a scrambled order, as in this case, so it sort of jumps from there to there, the net result is very small. So here are examples of emergent behavior that is happening at the biochemical and electrical interface and is able to do an interesting computation. We're also using this for some disease models. We're making reference models, to the extent that we can get the data, for healthy neurons, and we're comparing them with experiments done in fragile X models, mouse models of the mutation. This is a consortium effort and we're actually very keen to bring in other people who are interested in such efforts. So these are models which look very closely at the chemical signaling events that underlie these diseases. Okay, so to summarize: there are very many important problems in neuroscience which require models that pay a lot of attention to biological detail. These typically span many scales, and you can get data for them, electrical and optical, from the interesting new recording techniques that you heard about in the morning; you can look at chemical reporters and you can look at morphological change. These are all inputs that you can give to these analyses. We've developed tools to run, parameterize, and analyze these models, and we're hoping to build up collaborations to work on them. So there we are: this was the framework for modeling, for the data handling and analysis, and what we're planning to do with it. And these are some of the crew who have worked on the project. Thank you. Thank you, Upi. Any questions? Please use the microphones. Maybe a stupid question, but is there something that NEURON cannot do, of the things that you described? Like incorporating temporal and spatial scales, I think that's in there, and chemistry as well. So why would I use MOOSE instead of NEURON? NEURON can, in principle, do some of the chemical stuff, but it would be painful in the extreme. And I don't think it at present can do the stochastic calculations, though that may have changed recently. I think it can. Yeah, but it would not be fun. Very nice. Yeah, so you kept emphasizing that the chemical aspect was very slow, but obviously there are some very fast chemical reactions. So I was wondering, for things like calcium binding to synaptotagmin, for example, whether you can run your chemical reactions at different rates to take into account the spectrum of kinetics of the chemical reactions. So that happens sort of automatically if you're using the Gillespie algorithm. If this is happening in the spine, then the different rates are taken care of without you having to do any further work. So are you using mostly the stochastic approach? In the spines, that's what we're doing. For the dendrites, the volume is large and so the Gillespie method really gets bogged down. So there you have to use a deterministic method, and there one needs to deal with stiffness of the kind that you're describing.
We are currently using a Runge-Kutta method, to get very technical, but there are some nice implicit methods that would address the point that you bring up, where the fast things can be handled cleanly even though most of what's happening is slow. Okay, just one related question: if you have a low copy number in the spines of the proteins that are interacting, does that cause any problems with the adaptor approach? Because doesn't that depend on sort of averaging and being well-mixed? We can get you the technical details, but what we do is this: when we're dealing with any interface between the spine and the dendrite where you're doing stochastic versus deterministic calculations, you need to make a probability-based judgment about whether the real value should give you plus one or minus one molecule on the stochastic side, and vice versa. So we have worried about that and we've taken it into account. So this may be a slightly naive question, but can your modeling framework deal with gene-transcription-level things? So, it's easy to implement gene transcription in our molecular framework. It's not very efficient; there are other, more efficient ways of doing it, which we haven't yet done, partly because we've not really had the use case to do so. But of course, there's an enormous amount of transcriptomic data coming out now, which might be very relevant. So I think, for now, it's easily possible to do it in the chemical framework; it's just not the optimal way of doing it. So I have a question. The title of the session is standardization in multi-scale modeling. I understand that rdesigneur runs on MOOSE as the backend, but it looks like, in principle, it could also be available on NEURON, for example. How much work would it be to implement rdesigneur for NEURON? I suspect it would be a lot. It was a lot of work for MOOSE, and MOOSE was designed to do this sort of thing. In NEURON, the chemical stuff has been added on by Tom Morse and others as a subsequent effort. But it could be done; it's a technical exercise. I think that what we got from the rdesigneur exercise was an appreciation of how simple it could be to define a really complicated model. That is, we've been able to do it in just a few lines, and there's no real reason why you should need more information to do so. And you could probably also define this set of things in NEURON, too; it does have a fair degree of flexibility. But in all these things, my experience is that you first have to have an implementation that shows it can be done, and then you can figure out how to do it in a clean way. This is Python, right? So it's perhaps not the optimal way of defining a model, but it shows you what is needed. Any more questions? I just have a question about the spine granularity. We know that many molecules in spines are, for example, either synaptically localized in the membrane or perisynaptically localized, like mGluRs or lipases. Yeah. And that organization is really critical for function. Yeah. So can you model that, or do you model that, or do you right now take the spine as one entity? We can model that; we don't do it yet. Currently our subdivision is that we have the postsynaptic density, we have the spine bulk, and then we have the dendrite, and of course so many other things and so on. But it's not a big step to add additional relevant compartments as the need comes up.
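As a brief aside on the stochastic-deterministic hand-off mentioned a couple of questions back: the probability-based judgment described there can be illustrated in a few lines. This is just a sketch of the general idea, with a hypothetical helper name; it is not MOOSE's actual interface code.

```python
import math
import random

def to_integer_molecules(flux):
    """Convert a real-valued molecule flux (e.g. from a deterministic dendrite
    compartment into a stochastic spine) into an integer count, preserving the
    mean by rounding up with probability equal to the fractional part."""
    whole = math.floor(flux)
    frac = flux - whole
    return whole + (1 if random.random() < frac else 0)

# On average, a flux of 0.3 molecules per step delivers 0.3 molecules per step,
# even though each individual step delivers either 0 or 1.
samples = [to_integer_molecules(0.3) for _ in range(100000)]
print(sum(samples) / len(samples))   # approximately 0.3
```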
Okay, could we have the next speaker, please? Jason Sheffey, is he here?