My name is Dan Keller. My background is in reaction-diffusion modeling, specifically working with MCell alongside Tom Bartol and Terry Sejnowski before I came to the Blue Brain. In this talk I'll tell you a little about the column simulations and how those are set up, and also about the building process that goes into creating the mesh models we use for our reaction-diffusion simulations: how we populate these mesh models with molecules, how we annotate the regions, and how we register the simulation back with the larger macro-scale simulation. The important thing about multiscale modeling in the Blue Brain is that everything at a larger scale has a counterpart at the most reduced scale. Lastly, I'll tell you a little about the simulations we've been performing. Okay, so in the laboratory we have to characterize all the cells that go into creating a cortical column. We patch the cells, label them with dye, characterize them electrophysiologically, and perform histology on them, then reconstruct their sections and segments with Neurolucida. When all is said and done we have a morphological model of all the sections and segments; generally there are about 300 segments per neuron in the models we use. Yes, we do track the axon, although axons are usually not explicitly simulated within the larger electrical simulation, for computational reasons. Now, there's a tremendous amount of morphological variation among the cells in the column. Here's a representative sampling of the different cell types, and when we don't have good enough reconstructions we go back to the lab and tell people to focus on a particular cell type, for example to get more axonal reconstructions of it. So it's an iterative process towards obtaining a representative sampling of all the cell types within the column. Now, what do we do with these morphologies once we've obtained them?
Well, as James mentioned, we use a similar process: we position these neurons in space, and they're rotated and placed at their correct layers within the column. Once we have the dendrites and axons we do touch detection. That is to say, we detect proximity between a dendrite and an axon and place a synapse at the point of intersection, and when all is said and done we obtain a microcircuit with the complete connectivity of the circuit already in there. We simulate these in NEURON on a supercomputer. Now that we have the circuit, we can build the subcellular features that correspond to all of these, for example the synapses. We have to have the spines, including the postsynaptic density or PSD, the boutons with the active zone, the synaptic cleft, all the subcellular organelles such as the endoplasmic reticulum and mitochondria, and even the soma. So I'll tell you a little about the process that goes into this. I've already talked a little about the light microscopy, and there's also a parallel EM track towards obtaining the finer features of these subcellular structures. These all go into a process we call the ultrastructure builder, which on completion generates all the subcellular meshes corresponding to everything in the circuit. These go into another builder, which maps molecular concentrations, constraints, and reactions taken from the literature onto the meshes and builds models. These are reaction-diffusion models for MCell or STEPS, but we've attempted to keep the model as generic and simulator-agnostic as possible, meaning that we could potentially support other simulators as well, for example Smoldyn or what have you. The data from the morphologies goes into building the surface meshes on the neurons.
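The proximity test behind touch detection can be sketched very simply. This is a minimal illustration under my own assumptions (points instead of capsule-shaped segments, a hypothetical `touch_dist` threshold in microns); the actual Blue Brain pipeline is far more elaborate:

```python
import numpy as np

def touch_detect(axon_pts, dend_pts, touch_dist=1.0):
    """Return (axon_idx, dend_idx) index pairs whose points lie within
    `touch_dist` of each other; a synapse would be placed at each touch."""
    touches = []
    for i, a in enumerate(axon_pts):
        # distance from this axon point to every dendrite point
        d = np.linalg.norm(dend_pts - a, axis=1)
        for j in np.where(d < touch_dist)[0]:
            touches.append((i, int(j)))
    return touches

# toy data: one axon point passing close to one dendrite point
axon = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
dend = np.array([[0.3, 0.4, 0.0], [10.0, 10.0, 10.0]])
print(touch_detect(axon, dend))  # [(0, 0)]
```

In practice the detected touch points become the synapse locations carried forward into the microcircuit connectivity.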
We take statistical representations of all these finer ultrastructural features to build the boutons and the spines, and then we have a process called the extracellular optimizer, which moves things into place, because what you get out of touch detection doesn't necessarily have the correct distribution of spine distances, and sometimes, because we've just laid these morphologies on top of each other, you can have processes that intersect each other. So we need to move those apart with a repulsive force field. Finally, we put in the mitochondria and the endoplasmic reticulum and build the soma. I'll start with a few different aspects. For the soma we can place different organelles in there: the Golgi apparatus, the ER, and the nucleus. When we generate additional subcellular features such as the endoplasmic reticulum and mitochondria, we generally save these in the same files as the neuron meshes, and all of these are annotated so we know their identity. We don't necessarily maintain the registration with the original segments and sections, but when, in the final process, we go to export a region of subcellular space, we just use mesh boolean operations to pull out these structures. When we generate the ER (I should say the ER is a net-like structure that permeates all of the processes), we know the probabilities that two different branches will intersect, diverge, or converge, and we can draw from this distribution to generate the ER. Same thing for the mitochondria: we know their numbers per unit length of axon and the lengths they generally have. So here's a representative picture of these meshes, the red being a mitochondrion and the white being an endoplasmic reticulum segment.
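Drawing organelles from literature statistics might look like the following sketch. The specific distributions (Poisson counts per unit length, exponential lengths) and the parameter values are my assumptions for illustration, not the pipeline's actual choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mitochondria(axon_length_um, per_um=0.1, mean_len_um=2.0):
    """Draw a mitochondria count from a Poisson with a given linear density,
    then draw a length for each and a uniform position along the axon."""
    n = rng.poisson(per_um * axon_length_um)
    lengths = rng.exponential(mean_len_um, size=n)
    positions = rng.uniform(0.0, axon_length_um, size=n)
    return list(zip(positions, lengths))

# a 100 um axon at 0.1 mitochondria per um -> ~10 mitochondria expected
mitos = sample_mitochondria(100.0)
print(len(mitos))
```

The same pattern, with intersect/diverge/converge probabilities instead of a linear density, would drive the net-like ER generation.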
Now, in excitatory neurons there's generally just one mitochondrion that pervades the whole of the structure, whereas in inhibitory neurons there are actually many different mitochondria. We also put in the boutons, and it's a similar process: we draw from statistical distributions so that we know the radius of the swelling that the boutons undergo, and we also remap and remesh the active zone in order to place receptors in these structures later on. And of course everything is parameterizable. When we go to create the spines, recall that out of touch detection there's just the presence of the synapse that's noted, and then you generally have a gap to fill with the spine. We have specialized tools, though originally we were using mostly the literature to create spines, and the parameters we wanted to match were the neck radius, the spine volume, the head volume, the curvature, and the postsynaptic density area and volume, which of course is important for the strength of the synapse. We put these into the generator. Now we're moving towards using specialized tools, such as one built for us by the Cajal Institute, to actually sample spine morphologies from EM data sets, in order to create a library of different spines that we can draw from to populate our neurons. At least at this stage, we generally just extract the radius as a function of the length of the spine in order to generate them. Then we have to go back to our original morphologies and insert these new spines. When we originally did the Neurolucida reconstructions we did keep the spine information, but it's no longer valid once we've done touch detection, because the synapse positions have changed, so we have to regenerate the spines. Now, even though we've added these new segments to the morphologies, that's not what's running in the macro-scale circuit simulation; what's running is actually a NEURON model without these spines in it.
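Generating a spine from a radius-vs-length profile amounts to building a surface of revolution. Here's a minimal sketch under my own assumptions (ring-based vertex layout, hypothetical segment length and profile values); the real generator also matches volume, curvature, and PSD area:

```python
import numpy as np

def spine_mesh(radii, seg_len=0.1, n_theta=12):
    """Build rings of vertices along the spine axis from a radius-vs-length
    profile; consecutive rings would be stitched into triangles by a mesher."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rings = []
    for k, r in enumerate(radii):
        z = k * seg_len  # distance along the spine axis (um)
        ring = np.stack([r * np.cos(theta), r * np.sin(theta),
                         np.full(n_theta, z)], axis=1)
        rings.append(ring)
    return np.concatenate(rings)  # (len(radii) * n_theta, 3) vertex array

# a thin neck widening into a head: radius as a function of length
profile = [0.05, 0.05, 0.05, 0.25, 0.3, 0.25]
verts = spine_mesh(profile)
print(verts.shape)  # (72, 3)
```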
So we have to maintain the registration between our new spine segments and the original morphologies. That said, we could conceivably simulate these as chemical compartments if we wanted to, because they're now generated in the morphologies. Okay, so what happens during this process? Well, I can specify any coordinate within the column and pull out a cube of space; generally of interest are the synapses. When we export a synapse we have to enforce the cleft spacing, so I have a refinement step that enforces this 20-nanometer cleft gap, because of course that's relevant when you release glutamate into the synapse. For now I've assumed that the null space in our cube, besides being occupied by axons and dendrites, will be glial cells, and we can make a mesh corresponding to those glial cells. In this diagram what you see is a spine; the red is the PSD, and it's not just a surface region that we've tracked but also the internal volume elements, seen at the very right, corresponding to the PSD. We maintain knowledge of all these structures so that later on we can map the chemical reactions into them. I'd like to reiterate Upi's call for a new set of standards for defining chemical reactions within the context of neurons, because, and I'll give you some examples later on, we've actually run up against some deficiencies in the current set of standards. Okay, so we can produce neurons with a complete set of spines. The interesting thing is that the spines are actually melded with the mesh, so the whole of the morphology, the whole of the mesh, is now a watertight compartment, and that's very useful when conducting reaction-diffusion simulations within the meshes. So how do you get them to be watertight? Well, it was actually quite an ordeal to generate these meshes. So is this automated or is it by hand? No, it is fully automated.
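The cleft-spacing refinement can be illustrated with a toy version: for apposed pre- and postsynaptic vertices closer than 20 nm, push both apart symmetrically along their separation vector. This is my own simplified sketch; the real refinement operates on mesh faces, not isolated vertex pairs:

```python
import numpy as np

CLEFT_NM = 20.0  # target synaptic cleft width

def enforce_cleft(pre_verts, post_verts):
    """Push each paired pre/post vertex apart until the cleft gap holds."""
    pre, post = pre_verts.copy(), post_verts.copy()
    for i in range(len(pre)):
        v = post[i] - pre[i]
        d = np.linalg.norm(v)
        if 0.0 < d < CLEFT_NM:
            push = (CLEFT_NM - d) / 2.0  # split correction between both sides
            u = v / d
            pre[i] -= push * u
            post[i] += push * u
    return pre, post

pre = np.array([[0.0, 0.0, 0.0]])
post = np.array([[10.0, 0.0, 0.0]])  # only 10 nm apart: too close
p, q = enforce_cleft(pre, post)
print(q[0, 0] - p[0, 0])  # 20.0
```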
So it's a massive piece of code that goes and does that. Yeah, there were many special cases within the data, and you have to just go through and do it the hard way, waiting to encounter a special case and then going back and solving that problem. But I think now we can mesh 30,000 neurons, the typical number in a column, without any errors being encountered. So, just to be sure: what you take as input is Neurolucida or EM reconstruction data, you just give it the crude topology of the neuron, and then it makes it watertight? Yes, that's correct. We're also working, as the next step, to synthesize glial cells. I told you how we take the null space to create a crude mesh onto which we can map surface reactions, but it would be nice if we actually had the glial cells fully represented within the context of this model. So, in collaboration with Graham Knott, we have these filled glial cells on which we've done serial EM, and we're pulling out the statistics of their branching, their area, things like that, in order to actually grow and synthesize new glial cells. These will be laid down after everything else is placed, in such a way that they don't intersect any pre-existing neurons. Of course, we will always still have the additional morphological refinement step in which we massage the meshes to have the correct extracellular gap spacing between them; that's actually what we call the extracellular optimizer. In order to get the synapses correct, we attach springs between the presynaptic and postsynaptic segments to move them into close apposition with each other. And during this stage, what we get for free by attaching the right set of constraints is that we can move apart segments that might otherwise overlap, so they don't overlap anymore. Okay.
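The spring constraints between pre- and postsynaptic segments can be pictured as a simple relaxation. This is a one-spring toy under my own assumptions (gradient descent, a hypothetical 20 nm rest length); the optimizer solves many such constraints plus repulsion simultaneously:

```python
import numpy as np

def relax(pre, post, rest=0.02, k=1.0, steps=200, dt=0.1):
    """Relax a single spring between a presynaptic and a postsynaptic point
    toward the rest separation (um), moving both endpoints symmetrically."""
    pre, post = pre.astype(float), post.astype(float)
    for _ in range(steps):
        v = post - pre
        d = np.linalg.norm(v)
        if d == 0.0:
            break
        f = k * (d - rest) * (v / d)  # spring force on `pre`, toward `post`
        pre += dt * f
        post -= dt * f
    return pre, post

# two segments 1 um apart are pulled into 20 nm apposition
p, q = relax(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
print(round(float(np.linalg.norm(q - p)), 3))  # 0.02
```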
So the goal is to create neuropil that's completely filled with mesh elements, as real tissue is. Here's a demonstration of an earlier version of the model in which we were just putting in spines. Of course, the spines are now actually grown as part of the meshing process; in this visualization I'm just sprinkling them on. Here's the soma, and then we fly along the dendrites to look at a synapse of interest. And yes, of course, there are endoplasmic reticulum and mitochondria in this one. There are still cases of close apposition of the endoplasmic reticulum, mitochondria, and spine apparatus that I have to refine, to make the spine apparatus stick through into the interior of the spines. Then we zoom in on a spine and synapse, now with the correct spacing. So I'll just proceed. Okay. As I told you, we can now do random access and cut out our tissue. These annotated meshes are what go into the molecular simulators, along with the original electrical simulation: we can map the voltages of the electrical simulation onto the channels in our mesh models, in order to get the proper amount of current flow that drives events in those membranes. Currently it's a one-way street; we haven't closed the link back from the fine-level reaction-diffusion simulations to the electrical simulation, so it's basically just the voltages driving the reaction-diffusion simulations. There are a number of computational constraints involved in doing that, and you should be aware that the wall-clock time for these detailed simulations is much longer than the time needed to simulate the same stretch of biological time with just an electrical simulation. There's another gap which I think will be difficult to surmount.
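The one-way voltage coupling can be sketched as mapping each segment's voltage to a channel open fraction that then drives influx in the reaction-diffusion model. The Boltzmann gating curve and its parameters here are my own illustrative assumptions, not the model's actual channel kinetics:

```python
import math

def open_fraction(v_mV, v_half=-20.0, slope=6.0):
    """Boltzmann open probability for a voltage-gated channel; the membrane
    voltage is taken from the electrical (NEURON) simulation, one-way."""
    return 1.0 / (1.0 + math.exp(-(v_mV - v_half) / slope))

def calcium_flux(v_mV, n_channels, single_channel_flux=1.0):
    """Expected Ca2+ influx driven by the mapped voltage (arbitrary units)."""
    return n_channels * open_fraction(v_mV) * single_channel_flux

print(round(open_fraction(-20.0), 2))                      # 0.5 at v_half
print(calcium_flux(20.0, 100) > calcium_flux(-60.0, 100))  # True
```

Closing the loop would require the reaction-diffusion state to modify conductances back in NEURON, which is exactly the link that hasn't been closed yet.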
So it'll never be possible to run an entire brain, or an entire column, at full molecular scale. If you choose to run them concurrently, you'll always have to choose a very small amount of tissue to run at this level of resolution in conjunction with the full column. Yes. And one of the other simulators that we can export to is STEPS; I don't think we should be tied to a particular simulator. Well, actually, in a more general sense, you seem to have picked a very highly geometrically detailed version of the chemistry, and for many purposes you might even consider coarsening your resolution and saying, okay, I'm going to treat this as a single compartment, say for the PSD. Sure. I think that's actually a good point, because much of what I've shown you is in some sense computational overkill. Do we really need a finely meshed endoplasmic reticulum at this level of detail? I suspect not. I suspect you could get by with just having the correct surface area and some kind of volume element, in a much simpler simulation. But how do you show this? I think the correct way is to actually have both scales of simulation and show that, for this particular feature, space does not matter so much and we don't need this much detail. There will be instances in which space matters enormously, and I think this approach of having multiple representations at different levels of coarseness will allow us to properly assess just when we need a particular level of detail. Okay. So now I'll talk about populating these meshes with molecules. We currently have a database of concentrations and reactions for all the molecules in the spine. We don't tie these to compartments; some model languages do, but it's actually a little cleaner if you don't. Users should also be able to register different models corresponding to the same molecule.
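The "correct surface area, some kind of volume element" idea, a well-mixed approximation of the meshed geometry, can be sketched as a one-compartment ODE where only the ER surface area enters the uptake flux. All parameter values here are arbitrary placeholders of mine:

```python
def well_mixed_uptake(ca0, er_area_um2, pump_rate=0.5, vol_um3=0.1,
                      dt=1e-3, steps=1000):
    """Single-compartment Ca2+ clearance: uptake scales with ER surface
    area, standing in for the fully meshed ER geometry."""
    ca = ca0
    for _ in range(steps):
        flux = pump_rate * er_area_um2 * ca / vol_um3  # uptake per second
        ca -= dt * flux  # forward-Euler step
    return ca

# doubling the ER surface area clears calcium faster; geometry is
# otherwise ignored, which is exactly the approximation to be validated
print(well_mixed_uptake(1.0, 0.2) < well_mixed_uptake(1.0, 0.1))  # True
```

Comparing this kind of reduced model against the full meshed simulation is the proposed test of when spatial detail actually matters.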
For example, paper A has one model of a molecule, and paper B models the same molecule in a slightly different way. What we're working on is a way to check constraints so that these models are all interoperable. Did you have a question? What do you mean, that reactions are not tied to compartments? Okay. In some languages you have to have a molecule within a certain compartment undergoing a reaction with another molecule within that same compartment, and the compartment itself is specified as part of the reaction. That's what I mean. Whereas you could imagine just taking the molecules and not having them linked in any way to the compartment in which they're undergoing the reaction, and I think that makes it a little easier to port molecular models across compartments. Yes. Are you referring to SBML? Well, okay, yes, that would be an example. Because that's a relatively easy mapping. Yeah, it's not insurmountable. It's really just: for this model, compartment X happens to represent the dendrite; for that model, compartment Y happens to represent the dendrite, and so on. Yeah, it's not too hard, but it makes it a little nicer if you don't have them tied like that. Well, yeah, except that in some models, in many models I would argue, the compartmentalization is a key part of the chemistry, so you do want to say that certain reactions happen in this context and other reactions happen outside. For sure, but that linking should happen at the point at which you map the molecules onto the compartments. The model has already done that for you: the modeler has already said that these reactions happen in the dendrite and those reactions happen in the spine. That was the compartmentalization. Yeah, and clearly the information is there, and they can cross-talk with each other. Yeah, so it seems a shame not to be using that. Yeah, it's certainly not a huge issue.
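The compartment-free design being argued for can be sketched as a data model where a reaction carries no compartment and is bound to one only at mapping time. The class names and the example species are mine, purely illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Reaction:
    """A reaction defined with no compartment attached; species are bare
    names, rate in per-second units (hypothetical)."""
    reactants: tuple
    products: tuple
    rate: float

# defined once, compartment-free
ca_cam_binding = Reaction(("Ca", "CaM"), ("CaCaM",), 10.0)

def bind(reaction, compartment):
    """Attach the reaction to a compartment only when mapping onto a mesh."""
    return {"compartment": compartment, "reaction": reaction}

# the same model reused in two regions without rewriting the chemistry
spine_rxn = bind(ca_cam_binding, "spine_head")
dend_rxn = bind(ca_cam_binding, "dendrite")
print(spine_rxn["reaction"] is dend_rxn["reaction"])  # True
```

The compartmentalization the questioner defends is preserved: it just lives in the binding step rather than inside the reaction definition.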
So I'm curious: you've taken models, say from BioModels or DOQCS or somewhere, and you've already massaged them into this database form, is that correct? Right. Is there any particular reason why you don't want to just take the original models? Well, the thing about, let's say, BioModels is that you can't just pluck one model of a molecule and expect it to talk to another one that you've also taken from BioModels. So I think there is a set of standards that could enforce interoperability between those models. Yeah, those are also possible to use as well; I thought we were talking about just the BioModels database. Okay. Another thing is that we'd like to have a good naming scheme, but sorry, I just mentioned that point. Another issue we've come across in the project is that an ion channel to an electrophysiologist doesn't necessarily correspond to the same structure that a molecular biologist would see, because you have different subunit compositions. So I think we also need rules to map these entities onto one another, to make them cross-talk with each other. Okay. So I've made a list of the regions in our simulations that we can currently annotate, though of course there are others. I think what we need are consistent region names that actually have a correspondence at another level of simulation. For example, the PSD here might correspond to the PSD at a higher level, because that's where all the conductances are. So, I've told you that it's a one-way street from the electrical simulations to the molecular simulations; we'd like to one day connect these back together.
We'll never be able to run these at the same level of detail for the entire column, but it might be possible to construct lookup tables for a given set of stereotyped input stimuli, and then rapidly reference these lookup tables from the electrical simulation to understand what a given synapse does, for example. What you see here is an active zone that I've remeshed onto the surface of an axon. The interior is a simulation of calcium, the white dots surrounding one of the vesicles in there, so we can get an idea of the probability of release of that vesicle in response to calcium influx. What we'd like to be able to do, in any standardized representation, is track and specify the constellation of channels, which you see as dots in the left figure, together with the positions of the vesicles, seen as circles in this diagram. You can imagine that their distribution might be written as a set of constraint equations, which should also, I think, be tracked in any specification of the chemistry. On the right you see a simulation of the calcium influx that occurs when these channels are activated. So, I've talked about the subcellular simulations; in conjunction with those are the extracellular simulations. What we're working to do is superimpose a grid of voxels over the column. The subcellular simulations are on the order of microns, but these coarser-grained voxels mapped over the column are on the order of tens of microns. We can model each voxel as a single compartment into which we inject or remove ions, depending on the activity of the neuronal segments within it, and then track the diffusion between these voxel elements. This is desirable for a number of reasons, as we've touched upon in earlier talks, most particularly for linking in the glial cell contribution to the extracellular environment that the neurons experience.
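The voxel scheme just described, single compartments with activity-dependent sources and inter-voxel diffusion, can be sketched in one dimension with an explicit finite-difference step. Grid spacing, diffusion coefficient, and the toy release are my own placeholder values:

```python
import numpy as np

def diffuse_step(conc, D=1.0, dx=10.0, dt=1.0, source=None):
    """One explicit step of 1-D diffusion between coarse extracellular
    voxels (dx in um); `source` injects or removes ions per voxel
    according to neuronal activity. Boundary voxels are held fixed."""
    lap = np.zeros_like(conc)
    lap[1:-1] = conc[:-2] - 2.0 * conc[1:-1] + conc[2:]
    new = conc + dt * D * lap / dx**2
    if source is not None:
        new += dt * source
    return new

c = np.zeros(5)
c[2] = 100.0  # ions released by activity in the middle voxel
for _ in range(100):
    c = diffuse_step(c)
print(bool(c[1] == c[3]))  # True: the spread stays symmetric
```

The real grid is 3-D and couples to glial uptake, but the update rule per voxel pair is the same idea.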
Then we can go ahead, once we know the changed extracellular ion concentrations, and remap them to the reversal potential experienced by each of the segments in the neuron simulations. Okay, so here's a diagram showing that process: you have your subcellular domain with the spine, some glia, and an axon; the extracellular simulation, this grid of voxels, maps to that and provides the information about extracellular ionic concentrations; meanwhile, both are exchanging information with the column simulation. Alright. So just to summarize: we can now export any cubic region of neuropil to these molecular simulations. I think we do need a set of common standards in order to exchange information about these neural modules, including a set of standardized subcellular regions, molecule and multimeric complex names, and models corresponding to each of those molecules. Currently we drive these simulations with the electrical activity of the network. They do run at a different time scale than the main cortical column, because for molecular simulations you often need a much finer time step than you could get by with for the electrical simulations. And I think there will be cases in which we'd want to simulate both of these levels, the electrical simulation and the subcellular simulations, simultaneously; that would actually help validate the approach. So, thanks for your attention. Questions? I should also acknowledge that we exist within a larger ecosystem of people mostly focused on the circuit simulations, and thanks in particular to the INCF, Sean Hill, Eric, and Matthew A. Burns for having me.
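As a closing note on the concentration-to-reversal-potential remapping mentioned above: that mapping is just the Nernst equation, evaluated per voxel and per ion. A minimal sketch, with illustrative potassium concentrations of my choosing:

```python
import math

R, F = 8.314, 96485.0  # gas constant (J/(mol*K)), Faraday constant (C/mol)

def nernst_mV(conc_out_mM, conc_in_mM, z=1, temp_K=310.0):
    """Nernst reversal potential (mV) from a voxel's extracellular
    concentration and a segment's intracellular concentration; this is
    what would be fed back to the NEURON segment."""
    return 1000.0 * (R * temp_K) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# activity raising extracellular K+ from 3 to 6 mM depolarizes E_K
print(round(nernst_mV(3.0, 140.0), 1))  # about -102.7
print(round(nernst_mV(6.0, 140.0), 1))  # about -84.1
```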