Thank you to the organizers for inviting me to this workshop. We at IBM have been working on this infrastructure, the Neural Tissue Simulator, for about ten years, and I'm going to share with you what we've accomplished in that time. In particular, I'll tell you why we consider this an ultra-scalable solution to the problem of neural tissue simulation, and I'm going to talk about how it allows you to specify and solve arbitrary model graphs over a complex neural tissue topology.

So what do I mean by model graphs? I would argue that almost all models we create in computational neuroscience can be represented as directed acyclic graphs of models communicating with one another. This is an old way of viewing parallel computation in general. Except for the simplest models, which can be viewed as single nodes, I think more complex network models can ultimately be decomposed in this way. "Acyclic" may be confusing, because networks themselves are highly recurrent; but in order to compute a network, you need to decompose it into an acyclic graph, because dependencies in a computation can only go in one direction: you have a left-hand side and a right-hand side. That is what I mean by acyclic.

In the course of this presentation, I'll address three major questions, which I think are relevant to the objectives of this two-day workshop. First, how do we specify these graphs arbitrarily? We have one solution to that, which I'll share with you. Second, how do we simulate neural tissue itself as a model graph? That goes to the question of how a neural tissue can be an acyclic graph, and I'll show you how we did that with the Neural Tissue Simulator. Finally, and I'll actually start with an answer to this question, how do we best compute a neural tissue model graph? When I say best, I'm implying that there is a particular machine architecture we're targeting. At IBM we conducted an experiment: the use of the model graph simulator to simulate neural tissue with a very specific decomposition, which I'll share with you. That decomposition then dictates how to map the computation onto a parallel architecture such as Blue Gene.

Much like Eric did in his introductory comments, I'm going to describe what we mean by neural tissue simulation. We know from a long history dating back to the middle of the last century that we all started with Hodgkin and Huxley and single-compartment models, which were themselves multi-scale in that they were aggregates of channels simulated within a single compartment model. We quite quickly moved on to fiber models, at IBM and elsewhere (Cooley and Dodge, for example), which connected these single-compartment models together through fixed axial resistances or conductances. These can then be solved as branched structures, following the work of Hines and others, to give rise to full-neuron compartmental models. We're also familiar with the concept of neural networks; this is a B-type network from the work of Turing. What is typically done to approximate neuronal networks is to incorporate models of synapses, and perhaps to embed compartmental models of neurons, which are then coupled across those synapses. Neural tissue simulation augments the constraints we inherit from this long history of modeling.
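To make the phase idea concrete before going further, here is a minimal sketch; this is my own illustration, not the simulator's API, and the model and phase names are invented. Splitting each model into phases turns a recurrent network into a DAG that can be executed in topological order once per time step.

```python
# Hypothetical sketch (not the simulator's actual API): a recurrent
# network is computed by splitting each model into phases and running
# the phases as a directed acyclic graph, one topological pass per step.
from graphlib import TopologicalSorter

# Each key is a (model, phase) pair; values are the phases it depends on.
# Recurrence is broken because "read previous state" and "integrate new
# state" are separate phases: the cycle never closes within one step.
phase_deps = {
    ("compartment", "integrate"): {("channel", "currents"), ("synapse", "currents")},
    ("channel", "currents"): {("compartment", "read_state")},
    ("synapse", "currents"): {("compartment", "read_state")},
    ("compartment", "read_state"): set(),
}

def step(models):
    # every dependency of a phase has already run by the time we reach it
    for model, phase in TopologicalSorter(phase_deps).static_order():
        models[model].run(phase)  # hypothetical per-model phase dispatch
```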
In our definition of neural tissue simulation, the previous constraints are all included. That is, we use multi-compartment Hodgkin-Huxley models of neurons, derived in this case from anatomically reconstructed neurons. We try to replicate a diversity of neuron and synapse types, really an arbitrary diversity, because that's what we observe in real neural tissue. We couple these neurons together through synapses that attempt to match synaptic distributions from real tissue. None of this is new; the additional constraints I'll share with you now have also been used in many other simulators, simulation infrastructures, and approaches.

What we think distinguishes neural tissue simulation is that every model in the simulation is embedded within some three-dimensional coordinate system of the tissue. You have to have that in order to really begin to approximate the tissue; I think Eric implied as much in his introductory comments. The coordinates for these models are then available during initialization and during simulation itself, so that you can compute over coordinates that go beyond the single neuron or the single elements of the network and instead refer to the coordinate system of the tissue. In this way, model dependencies, the communication between models, and the calculations themselves can depend upon these coordinates. This, I think, is very useful in a variety of modeling approaches. One, used in the Blue Brain Project and one I contributed to in the middle of the last decade, was detecting contacts between neurons in this three-dimensional coordinate system in order to provide a basis for synapse creation at points of touch between neurons. This is one way in which model dependencies can be based on these coordinates: the geometry of the neuron dictates where synapses are created in the tissue.

There are advantages we can derive from these additional constraints. We can constrain synapses based on tissue geometry, as I mentioned. We can facilitate models of interactions other than synaptic ones, as Eric addressed, for example ephaptic interactions between neurons. We can facilitate models of extracellular phenomena such as drug and neuromodulator diffusion, or injury effects such as spreading depression. We can also, using this coordinate system, facilitate forward models of larger-scale phenomena such as EEG, MEG, and BOLD. We've identified emerging opportunities using this approach, and the ones shown in yellow are ones we've been actively addressing in our lab. One is widespread gap-junctional coupling, which in a sense is just a network phenomenon, but because of the way gap junctions are computed, I think it lends itself more to a tissue simulation, and I'll share with you why in the specific example of the inferior olive. Tissue and circuit development, something Eric mentioned, is something we're actively looking at in order to insert neurons into a tissue in a way that's reasonable, allowing the neuron to accommodate to the tissue environment in which it's growing or from which it's been extracted.
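Touch detection of this kind can be sketched compactly. The following is a minimal illustration under my own assumptions, not the Blue Brain algorithm itself; `midpoint`, `neuron_id`, and `touch_distance` are assumed names, and a real pipeline would test segment-to-segment distances rather than midpoints.

```python
# A minimal sketch: detect "touches" between neuron segments in the
# tissue coordinate system by binning segment midpoints into a uniform
# grid and testing only nearby pairs against a touch distance.
from collections import defaultdict
from itertools import product
from math import dist

def detect_touches(segments, touch_distance):
    grid = defaultdict(list)                      # grid cell -> segments in it
    cell = lambda p: tuple(int(c // touch_distance) for c in p)
    for seg in segments:
        grid[cell(seg.midpoint)].append(seg)
    touches = []
    for (cx, cy, cz), local in grid.items():
        # compare against the 27 neighboring cells so no close pair is missed
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for other in grid.get((cx + dx, cy + dy, cz + dz), []):
                for seg in local:
                    if (seg.neuron_id != other.neuron_id and id(seg) < id(other)
                            and dist(seg.midpoint, other.midpoint) < touch_distance):
                        touches.append((seg, other))   # candidate synapse site
    return touches
```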
And ultimately, deep brain stimulation is another target area for application. I would refer you to the work of McIntyre for a very nice piece of work that looks at fiber tracts in real patients, as measured through DTI, and approximates with Hodgkin-Huxley models the effects of the extracellular fields generated by a deep brain stimulator. This is one way in which multi-scale modeling has been used in a clinical setting, and we're looking at that application with the Neural Tissue Simulator.

The questions I raised in my introduction, that is, how to specify arbitrary model graphs, how to specify neural tissue as a model graph, and how to decompose and compute it, are all interrelated, so my talk is going to address them sometimes together, sometimes out of order. Here I'm talking about the question of axons, and really about neural tissue and how to represent it in a simulator that uses the model graph abstraction. It is also intimately related to computation, and I think you'll understand why in a minute.

Many simulations don't include axons, and this is reasonable in the sense that axons are all-or-nothing in their conduction of action potentials, at least as we typically think about and model them, so messages can simply be passed on a parallel machine to represent them. This is described in this review of the Blue Brain Project, where processors on the Blue Gene used to simulate the cortical column act like neurons. The processors are shown in red, the coupling between the processors, the nodes, in yellow, and a rack of Blue Gene in the lower left quadrant. Connections between the processors then act like axons. This is a very natural way to think about decomposing a neural tissue onto a parallel machine, and it implies that in some way the brain is a supercomputer, or at least that there's an analogy in which the neurons are like processors and the connections, the axons, are like the machine's network. I would argue that this is not necessarily an obvious analogy. It doesn't have to be that the most natural decomposition is the right one, or that it's the only one; in fact, there are many decompositions one could imagine for mapping brain tissue onto a parallel machine. Some are at higher levels, in terms of multi-scale column-level or microcircuit-level decompositions.
At lower levels, we can think about compartments themselves as entities that can be mapped onto processors. And at the very lowest level we typically think of in neuroscience, the atomic level so to speak, there is the EM, where neurons themselves don't really appear; we see instead this gemisch of compartments and different pieces of the tissue, all the way down to mitochondria. This level is actually the one I'm going to focus on.

What we've done with the Neural Tissue Simulator is take on this question of decomposing the calculation of neural tissue, which I think is intimately related to multi-scale modeling in general, because ultimately you want to calculate what you're modeling in an efficient way. Partly, this is why IBM is interested in this type of experiment: to ask what the best decomposition is for the machines IBM is building, a common architecture being that of a massively parallel machine such as Blue Gene. The decomposition we chose is best exemplified by the EM level: a tissue volume, now comprising fibers which pass through some fixed rectangular prism and which are cut at the boundaries of that prism, with all fibers within that prism mapped onto a node of Blue Gene. I'm not saying we're doing a field approximation of the tissue, or that we're somehow dispensing with neurons; we're not. The neurons are there; they're just cut, and they span multiple processors. What this does is impose the coordinate system of the tissue onto the machine, so now the machine is basically a domain decomposition of the tissue, and not a set of neurons coupled across the machine's communication architecture.

This was the goal, this was the vision, and we thought this might be a reasonable approach for something very close to the hearts of many in parallel computing: scaling. Can you scale your system and your simulation up? In terms of scaling, there are different types. One is known as strong scaling: you increase the number of nodes on the machine while keeping the size of the calculation constant, and you see whether or not you get a speedup. This is useful if your goal is to speed up your calculation. In the case of the Neural Tissue Simulator, we saw a very good speedup on a Blue Gene/P with a 16,000-neuron simulation, with a thousand compartments per neuron, and in this case on the order of 10,000 conductance-based AMPA and GABA synapses per neuron. This compares reasonably well with work done with parallel NEURON by Hines and colleagues. There's an argument to be made that our simulator is on the same order of speed given these numbers; we use gap junctions in our simulation, so it might be best mapped to the second curve in Hines's plot, but of course there are differences, so I don't really want to dwell on comparing the two. I just wanted to make the case that our strong scaling results are comparable.

What I instead want to focus on, in terms of scaling, is the other benefits of this volume decomposition: how mapping the tissue coordinates of the models which comprise our model graph onto the processors of Blue Gene, volume by volume, affects two other types of scaling. But in order to describe that, I first want to drive home the point that, in this case, the compartments of the neurons shown on the left are divided among four processors.
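To make the volume mapping concrete, here is a minimal sketch under assumptions of my own: a uniform grid of rectangular prisms (the actual simulator equalizes a cost histogram, as described later), with invented function and parameter names.

```python
# A minimal sketch: map each model's tissue coordinate onto a node of a
# machine partitioned as a uniform 3D grid of rectangular prisms.
import numpy as np

def node_for_coordinate(p, bbox_min, bbox_max, grid_shape):
    """p, bbox_min, bbox_max: 3-vectors; grid_shape: e.g. (36, 36, 36)."""
    p, lo, hi = map(np.asarray, (p, bbox_min, bbox_max))
    shape = np.asarray(grid_shape)
    # fractional position inside the tissue bounding box, then voxel index
    idx = np.minimum((shape * (p - lo) / (hi - lo)).astype(int), shape - 1)
    # flatten the 3D voxel index to a node rank
    return int(np.ravel_multi_index(tuple(idx), grid_shape))

# every compartment of a cut fiber lands on whichever node owns its volume
rank = node_for_coordinate((1.2, 0.3, 0.9), (0, 0, 0), (4, 2, 2), (36, 36, 36))
```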
They are represented by this 2D slicing scheme, and the different colors represent where the compartments are mapped. You can see at the volume boundaries, which would be a blow-up of the intersection of these two cuts, where the different models representing the compartments or the branches need to communicate, shown in the lower right, in order to carry out the Gaussian elimination required to solve the dendritic voltage in, for example, Hines's approach.

Keeping this in mind, I also want to point out that this very small green structure in the upper left quadrant of the lower right figure is a synapse. So where's the communication? It's not between synapses; it's not within synapses. That synapse is entirely within a single node of the machine. Synapses are simply ordinary differential equations that couple compartments, with a presynaptic and a postsynaptic component, in order to perturb the voltage on the postsynaptic side, a GABA synapse in this case, for example. No communication occurs, in terms of MPI or the machine itself, for synapse solutions. Instead, the communication cost is fixed, because we know from EM the number of branches, that is, the number of cut fibers, per unit cross-sectional area. That has been known since EM was invented, and it's pretty much constant across all animals. So we essentially already know the communication requirements for this type of simulation, even if we don't have the connectome. This is one argument for the volume decomposition.

[Audience] So then you're explicitly computing action potential propagation?

That goes back to the slide about axons: we do simulate the axons, and I failed to drive home that point, you're right. The axons then become part of the Hodgkin-Huxley solution, and we have no thresholds; there's no logical operator testing whether something exceeds a threshold. Instead, we're continually stepping forward a set of ODEs coupled in this way in our model graph, though the axons could be logical spike-passing entities; we just haven't implemented them that way. Was there another question?
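Since the point is that synapses here are just ODEs local to a node, a minimal sketch of such a conductance-based synapse may help. The kinetics and parameter names below are illustrative assumptions, not the simulator's model.

```python
# A minimal conductance-based synapse: an ODE local to one node that
# couples a presynaptic voltage to a postsynaptic current, with no
# thresholded spike passing. All constants are illustrative only.
import numpy as np

def synapse_step(g, v_pre, v_post, dt, tau=5.0, g_max=1e-3,
                 v_half=-20.0, k=5.0, e_rev=-70.0):
    """Advance synaptic conductance g by one step; return (g, current)."""
    # presynaptic activation is a smooth sigmoid of voltage, not a threshold
    s_inf = 1.0 / (1.0 + np.exp(-(v_pre - v_half) / k))
    g += dt * (g_max * s_inf - g) / tau          # first-order kinetics
    i_syn = g * (v_post - e_rev)                 # GABA-like reversal at -70 mV
    return g, i_syn
```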
[Audience] I don't see how all the synapses, or most of them, can be in the bulk of the volume, because the surface-to-volume ratio of your domains must be quite large; lots of synapses should occur on the surface and therefore span domains.

In a tissue that's 4 millimeters by 2 millimeters, we map that onto, say, a 36 by 36 by 36 partitioning of the system, so the surface-to-volume ratio isn't so bad. The system automatically deals with the synapses that do span boundaries, and in a second I'd like to point out why even that doesn't worry us. But first let me point out what we call synapse scaling, something not really in the computer science domain, but which we think is relevant to computational neuroscience. In this plot we show that as we grow the number of synapses per neuron in these simulations, and this simulation now has 250,000 neurons on 1,024 nodes of Blue Gene/P, from zero synapses per neuron to 12,000, spanning a four-order-of-magnitude increase in synapses, the speedup, I'm sorry, the slowdown in our computation is less than a factor of ten; it's really close to a doubling. I think it's important to note that this demonstrates how synapses aren't really about communication in our simulator; they're about computation, and it's pretty light. These are full conductance-based AMPA and GABA synapses as well, so they are updated on every integration time step of the dendrite; they are not spaced out according to some assumed synaptic delay, as is often the case with logical spike passing.

[Audience] What kind of circuit do you use?

OK, I'm going out of order; I'm trying something new, because I want to get to the way the graph is specified as the crux of this talk, since this workshop is about specifying models and multi-scale models. In short, it's a cortical-like structure, which I will describe for you in a minute. But to drive home the point about synapses that span boundaries, or these cut fibers: what's the communication cost? Remember, this is nearest-neighbor communication on the torus, or on any network; it could be a Beowulf cluster or whatever. Nearest-neighbor implies that there are fewer hops, in fact one hop on a Blue Gene, which means that the link bandwidth is going to be much greater than for long-distance communication between non-adjacent nodes. And this really is where we get, I think, the most important form of scaling in this experiment, this computer science experiment, which is our weak scaling result. Weak scaling is when you grow the size of the calculation at the same time as you grow the size of the machine. What it asks is the question: if you have a big problem, or you're starting small because you only have a small machine available, will you get there?
Can you just grow the machine and still compute at the same rate? Often, depending upon your decomposition, the answer might be no. In this case we grew the size of the simulation from 16,000 neurons, with 1,000 compartments each, on 64 nodes of Blue Gene, to over a million neurons, over a billion compartments, and 10^10 synapses on the full 4,000-node machine at the Watson lab in Yorktown Heights, and you see the compute time, if anything, is improving, which is kind of weird, but it's at least flat. What this suggested to us was that we could project out what kind of machine would be required to simulate, for example, a liter of tissue, a relevant volume to us because that's about how much neural tissue we have in our heads. With that type of weak scaling result, we're fairly confident that this solution will continue to scale as machines grow larger over the next decade. I'll leave it at that, but that's our weak scaling result.

[Audience] I don't know anything about Blue Gene's internal architecture, but you're saying it's a 3D torus?

Yeah; it's actually 5D now on Q, and I'm trying to figure out what to do with the other two dimensions.

[Audience] But does that actually scale with the physical interconnects between the processors? Is it possible to maintain that 3D architecture as you fill a room with additional racks?

Yes. That's what the whole other side of the building is all about: figuring out how to wire that up and maintain those delays. It's a very natural wiring problem, I think, because again it's nearest-neighbor, though you do have long-distance connections when you wrap around; somehow they maintain it. It's a packaging problem, and at least with Blue Gene/Q it's a solved problem. It has actually moved from Ethernet-type copper cabling to fiber optics, partly for that reason, to make it more reliable.

[Audience] Is the volume decomposition adaptive to the load?

It is. We create a histogram of the load in three dimensions, by mapping the computational cost of all of our models to the coordinates they live at, and then equalizing the histogram in three dimensions; a sketch of this idea follows below. The volumes are rectangular prisms; there can be some imbalance, but that's our approach. I wouldn't say each node gets the same number of compartments, because synapses carry a load, and channels might be distributed non-uniformly; I'll allude to that in a minute. It's really the computational cost of every model that gets summed, at what you might call a compartment, though we try to be careful to distinguish compartment variables, which are models, from segments, which are more like topological skeletal elements that the models get targeted to. I'm going to emphasize that point in a minute, but basically this topological skeleton is what you see in the light microscope: fibers and touches. You can't see the synapses in the light microscope, so think of what you see in the light microscope as the topological skeleton. The models are then targeted arbitrarily to the skeleton: we can target arbitrary synapse models to touches, we can target arbitrary compartment variables to branch segments, and all of that gets summed in terms of computational cost to balance the load.
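Here is a one-dimensional sketch of that histogram-equalization idea, under my own assumptions; the simulator does this in three dimensions to produce rectangular prisms, and the helper names are invented.

```python
# A minimal 1D sketch of load balancing by histogram equalization: sum
# each model's computational cost into a spatial histogram, then place
# cuts so every slab carries an equal share of the total load.
import numpy as np

def equalize_cuts(coords, costs, n_slabs, n_bins=1000):
    """coords: model positions along one axis; costs: per-model cost."""
    hist, edges = np.histogram(coords, bins=n_bins, weights=costs)
    cdf = np.cumsum(hist) / hist.sum()            # cumulative load profile
    # cut wherever the cumulative load crosses k/n_slabs, k = 1..n_slabs-1
    cut_idx = np.searchsorted(cdf, np.arange(1, n_slabs) / n_slabs)
    return edges[cut_idx + 1]                     # cut coordinates

# e.g. 4 slabs along x, balanced by cost rather than by compartment count
cuts = equalize_cuts(np.random.rand(10000) ** 2, np.ones(10000), 4)
```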
And this really gets us to what the model graph for a neural tissue is.

[Audience] How about diffusion?

Right; I will talk about calcium. We are modeling the intracellular diffusion of calcium along the fiber, and I'll tell you why, but we handle that.

[Audience] I was more thinking about extracellular.

We've thought about that, another form of extracellular interaction. Where I'm headed with this description of the model graph of a neural tissue is that we have models we know how to connect, and there's a functor, if you will, in our system that goes through a dance to connect channels to compartments, synapses to compartments, and compartments to synapses. There's a set of connections that has to be made even if you have arbitrary model types. But the simulator this is built on top of allows for arbitrary models with arbitrary interfaces. So even though we've imposed this stereotyped connection routine in order to lay down the basic neural tissue we're familiar with, channels, synapses, and compartment variables, we can also impose another mesh, if you will, on top of it, which could for example be an extracellular mesh. It could be a set of models representing finite elements, and these could then have standard, or I should say non-standard, interfaces into the tissue that allow them to, for example, sum currents across channels, or provide an extracellular field potential which would then induce a current within the fiber if you were simulating something such as deep brain stimulation. It could also handle reaction-diffusion in the extracellular space. This hasn't been implemented; it's just supported by the underlying infrastructure of the model graph simulator.

The model graph simulator is what the Neural Tissue Simulator is built on top of. It provides a language for expressing model state and the phases of computation, which is how we decompose a model of a recurrent system into a directed acyclic graph: we break it into phases. A single model might have multiple phases of computation, which ultimately means that the graph is really a graph between model phases, as opposed to between models; that's basically the answer to the earlier question. There's a language for composing these models into graphs, and under the covers, and I think this is one of the advantages of this approach, we automatically partition the computation for multi-threaded architectures and automatically generate the communication required, using MPI in this case, to solve the graph over a parallel architecture such as Blue Gene. So the user doesn't have to think about parallelization.

Again: the model definition language allows you to specify models, their state, their interfaces, and their phases, and the graph view allows you to compose those models into a graph. Model types themselves are laid out in memory in an efficient way and computed multi-threaded for multi-core architectures. The communication is then between models and their proxies on other processors; you can imagine the models within a node as existing there, and the proxies as the fuzzy boundary of the graph that the particular node sees. Those fuzzy boundaries, in the case of the volume decomposition, are the fibers that happen to cross the cuts. Then we have a parameterization of the models, which is standard, and this allows us to set up state and compute.

An example of MDL, the model definition language, is shown here. We declare state, in this case for a sodium channel; these should be familiar variables. We declare connections: what the model expects on the pre-node side, in this case a compartment, and it expects a voltage producer. In the case of sodium channels it also expects, potentially from the same connection, a sodium-concentration producer, because it needs to know the intracellular sodium concentration in order to compute the Nernst potential and thereby know its reversal potential.
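For reference, the relation the channel needs here is a standard fact not spelled out in the talk: the Nernst equation,

$$E_{\mathrm{Na}} = \frac{RT}{zF}\,\ln\frac{[\mathrm{Na}^+]_{\mathrm{out}}}{[\mathrm{Na}^+]_{\mathrm{in}}},$$

where $R$ is the gas constant, $T$ the absolute temperature, $z$ the ionic valence (+1 for sodium), and $F$ Faraday's constant; the reversal potential shifts as the intracellular concentration changes, which is why the channel subscribes to a concentration producer.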
This is the way we think about it: what does a model need to know, what does it expect when it makes certain types of connections, and what does it produce (not shown here) for others to use? It's then the job of something else, in this case some functor: a channel meets a compartment, they exchange their interfaces, and the system discovers the appropriate state it needs to compute over. Obviously, the designer of these models has to be aware of that introduction step.

[Audience] A technical question: when you say a compartment, does the volume decomposition affect the shape of the compartments? If you did a different volume decomposition, would there be different slicing and therefore different compartments, or not?

It does, and I'll emphasize that in the next slide. In short, we introduce what are called junctions at branch points. These are separate models which can be solved in one of two ways, implicitly or explicitly. Those junctions can also be introduced at cut points when we decompose the tissue, so junctions pop up when you decompose the tissue in different ways, and you will get slightly different results, especially when an explicit junction is in a different place than it was previously. These are numerical-methods issues, issues of accuracy, and we can get into that perhaps afterwards.

Hines's fully implicit method for solving a neuron is shown on the left. Since that seminal work on the neuron in the '80s, he has published a branch-based decomposition in which individual branches of a neuron can be solved implicitly. What we've drawn upon is the work of Rempe and Chopp from Northwestern, in particular David Chopp, who showed that you can introduce junctions with explicit predictor-corrector numerics associated with them. It's good to know that numerics are a subject of investigation in other fields as well; we want to find new types of numerics, and this is what Chopp is doing. What we did was impose an implicit solution which broke the problem up into branch orders. We then introduced explicit junctions at particular locations in those branch-order implicit solutions, where every branch is a simple tridiagonal solve, and ultimately, to give rise to this volume decomposition, we allowed junctions to be determined, as I mentioned, by the cut points. The stride at which explicit junctions are introduced is fixed; this limits us to a certain number of phases of computation, but where specifically the explicit junctions sit depends upon where you cut.

OK, I don't want to dwell too much on this, but this is how we made the solution work in the volume decomposition, such that the model specification for something like a branch includes a certain number of forward-eliminate phases and back-substitute phases, which is just Gaussian elimination, now turned inside out in our own model definition language. This is what phases look like in our graph specification language, and it allows you, as a user of the tool, to weave other computations into the existing phase structure of a simulation. For example, the finite-element solution of extracellular diffusion might need to be performed in phases that have a very specific order relative to phases already implemented in the Neural Tissue Simulator, and that's supported.
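To make the branch solve concrete, below is a generic Thomas-algorithm sketch, not the simulator's code, of the tridiagonal Gaussian elimination each branch performs, split into the two passes that become separate phases in the graph.

```python
# Generic Thomas algorithm: solve a tridiagonal system
# a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i], as arises from implicit
# cable equations on one branch. The two loops below correspond to the
# "forward eliminate" and "back substitute" phases named in the talk.
import numpy as np

def solve_branch(a, b, c, d):
    n = len(d)
    b, d = np.asarray(b, float).copy(), np.asarray(d, float).copy()
    for i in range(1, n):                 # forward-eliminate phase
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):        # back-substitute phase
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x
```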
From our graph view of what a neural tissue looks like, you have a node which is, for example, a Hodgkin-Huxley voltage branch. It has interfaces that it produces, for example for a sodium channel, and interfaces that it expects of a sodium channel. I'm not going to read all of these; they should be familiar to you. Similarly, an AMPA receptor produces certain interfaces, and a connexon, which is one half of a gap junction, produces certain interfaces. In addition, we support an extracellular-medium node, which could then become a mesh; this produces concentrations of various ion species, temperature, and so on, so it's a very natural way to extend. None of these connections is made by hand by the user: the tissue functor simply knows how to connect channel types and synapse types, but you can augment them with new interfaces, so a different ion species doesn't require a major change in the underlying architecture; it's just something the models are aware of. The synapse obviously has a presynaptic side, which you can see here, communicating ultimately with this other voltage branch. As I mentioned, these are all targeted to the scaffolding of the topological skeleton, which is what you see in the light microscope; that structural data is basically what gets loaded in first, and the models themselves are associated with structural elements.

I should also note that in this decomposition, what we've come to understand is that a synapse is a model that takes you from one topological entity's compartment variables to another topological entity's compartment variables, whereas a channel model takes you from one topological entity's compartment variables back to that same entity's compartment variables. Though those sets of compartment variables might not overlap, the topological entities are, I think, uniquely specified in this way.

You may be familiar with the paper we published last year on this simulator; it included the cortical simulation I will describe in a moment. But before I get to that, I'm compelled to tell you about our next modeling objective, which is more scientific, not simply for demonstration purposes: a model of the inferior olive. The reason I'm compelled to tell you about it is that it posed a problem for us, and it demonstrates what we mean by an extensible modeling infrastructure. The problem is that the inferior olive has calcium channels and calcium-dependent channels, and we had only modeled voltage. So we had to extend our infrastructure to accommodate multiple compartment variables, and now we're at a stage where it can accommodate arbitrary compartment variables. What I mean by that is that we now have voltage and calcium: originally we had Hodgkin-Huxley voltage branches, and now we also have calcium-concentration branches, all targeted to the same topological entity known as a branch. The solution I described to you earlier involves Rempe and Chopp's explicit junctions; this is our solver, and these are the interfaces for how we solve across branches and subsequent junctions. To address your question, Eric: just as this set of models solves a reaction-diffusion problem in the sense that current is the diffusing quantity, reacting through voltage-dependent channels that pass currents, we are now able to simulate the diffusion of calcium, and it's the same solver. We're reusing the same solver, and we anticipate this will probably be the easiest way to go for any reaction-diffusion you might want to simulate over the topological structure of the tissue.
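As an illustration of that reuse, here is my own sketch with assumed constants: implicit one-dimensional axial diffusion of calcium along a branch assembles exactly the same kind of tridiagonal system as the voltage solve, so the `solve_branch` routine sketched earlier can be reused unchanged.

```python
# Implicit (backward Euler) axial diffusion of calcium along one branch:
# (I - dt*D*L) ca_new = ca_old, where L is the 1D Laplacian. This is a
# tridiagonal system, so the same forward-eliminate / back-substitute
# phases used for voltage also solve it. Constants are illustrative only.
import numpy as np

def diffuse_calcium(ca, dt=0.025, dx=1.0, D=0.2):
    n = len(ca)
    r = D * dt / dx**2
    a = np.full(n, -r); a[0] = 0.0            # sub-diagonal
    c = np.full(n, -r); c[-1] = 0.0           # super-diagonal
    b = np.full(n, 1.0 + 2.0 * r)
    b[0] = b[-1] = 1.0 + r                    # sealed (no-flux) branch ends
    return solve_branch(a, b, c, ca)          # reuse the tridiagonal solver
```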
What I'm trying to describe is that we have a graph of models coupled together in a specific way to implement Gaussian elimination, and in the case of voltage, that is how we solve for the voltage in every compartment. What's really diffusing, in a sense, is current, so it is a reaction-diffusion problem; we don't tend to think of it that way, but numerically it's equivalent. What we've done is reuse the same model graph architecture, the forward elimination and back substitution of Gaussian elimination, for solving any compartment variable. The reason we implemented this arbitrary-compartment-variable capability is that we wanted to solve for calcium. Most simulations just model calcium as something that diffuses across the membrane and gets taken up by buffers, and that's probably fine because buffering is so intense, but in our infrastructure we implemented it as a full solution, including axial diffusion, because it was easier.

[Audience] What if someone says: OK, but I really want to model shell diffusion as well?

That's a great question. If you go back to here, you see that we added calcium, and I've represented it as an arbitrary number of compartment variables that can be targeted to the same topological entity. A shell then just becomes another instance of a calcium-concentration compartment variable, which is coupled to the adjacent one to represent the diffusive coupling towards the center.

With calcium, we now have the capability to model all of our channels for the inferior olive, as well as other things we anticipate, such as NMDA receptors. And while this looks wickedly complex, again, none of these connections is specified by the user. You just have to know the interfaces that a channel expects and produces, and the tissue functor steps you through this choreographed set of introductions between all the models. What you end up with is something you can explore and say: oh yes, NMDA receptors depend upon voltage, but they produce both a voltage and a calcium concentration, because they pass calcium. It all makes sense, but you don't have to specify it at this level; I'm just illustrating how complex the graph gets, very rapidly, as you add more and more model types to these categories of compartment variables, channels, and receptors.

So now I'm going to address your question regarding what we were simulating with that million-neuron simulation for last year's paper in Frontiers in Neuroinformatics. What we did was exercise the system. The workflow involves taking structural models, for example SWC files from NeuroMorpho.Org, and arranging them in some three-dimensional coordinate system of the tissue, in this case to create a minicolumn comprising 20 neurons from Dr. Markram's lab.
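For those unfamiliar with that input, here is a minimal sketch of reading an SWC morphology; the format itself is standard (one sample per line: index, type, x, y, z, radius, parent), while the function name and the segment representation are my own choices.

```python
# Minimal SWC reader: each non-comment line is
# "index type x y z radius parent" (parent == -1 at the root).
# Returns segments as (child_sample, parent_sample, radius) triples,
# i.e. the topological skeleton that models get targeted to.
def read_swc(path):
    samples, segments = {}, []
    with open(path) as f:
        for line in f:
            if line.startswith('#') or not line.strip():
                continue
            idx, typ, x, y, z, r, parent = line.split()[:7]
            samples[int(idx)] = (float(x), float(y), float(z), float(r))
            if int(parent) != -1:
                segments.append((samples[int(idx)],
                                 samples[int(parent)], float(r)))
    return segments
```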
The 20 neurons are shown here, positioned according to the layer in which you would expect to find them, at the proportions you would expect between excitatory and inhibitory neurons. It is by no means a Blue Brain-type simulation; it's very crude compared to Blue Brain, but we did attempt to preserve some biological fidelity. Our target was a view such as this, a very beautiful image from Mitch Eva's work, where we would include branches and, again, GABA and AMPA synapses at the conductance level. We took these minicolumns and randomly rotated every neuron, creating unique minicolumns in that sense, and arranged them in 20-by-20 arrays of minicolumns, which would comprise a column. Then we did something which is a travesty in neuroanatomy: we stacked these columns in three dimensions instead of two. The reason we did that is that the Neural Tissue Simulator is really about a volume decomposition in three dimensions, and we wanted to exercise it in that regime in order to measure scaling. What we ended up with, as I mentioned, was a simulation of a million neurons, roughly a billion compartments, and 10 billion synapses.

In addition, we have the capability, as I mentioned, to study neural development, and we're using this in the creation of glomeruli in the inferior olive. We didn't do this in the cortical simulation of a million neurons, but we did do it for a single column of 8,000 neurons: we were able to grow the axons, in this case parameterized to be attracted to cell bodies, and you can see that the result is a lamination of the axonal plexus, which was somewhat surprising. That was strictly for demonstration purposes, but again, we can target force fields to individual elements of the tissue, perhaps to allow us to take these neurons, which are derived from different animals, and insert them into a single tissue in a way that lets them actually influence each other's structure. Perhaps the best solution, though, is to take them all from the same tissue, as Eric was describing.

Contact detection is the next step after we compose the tissue in this way: we detect all of the touches between neurons. This was an architecture we came up with for the Blue Brain Project, again in the last decade, where we first hit upon the volume decomposition, with contacts detected within a volume. This sped up the problem of circuit building tremendously. We simply reapplied that volume decomposition, and what is now third-party code, to the problem of decomposing the tissue, the computation of the physiology of the tissue, and contact detection. We've since optimized it to run multi-threaded on Blue Gene/P and its four-core compute nodes, so that we can now compute roughly 10 billion contacts per hour of machine time.

Another element that is important in the initialization of these simulations is that every topological element, a branch segment for example, has a key associated with it which the user can play with. You get, in this case, 64 bits, and you can divide them up into different fields which mean different things; this allows you to target models to the topological skeleton arbitrarily. We borrowed some of the terminology from Blue Brain, just for demonstration, to show that you can have a layer field, an m-type field, an e-type field, branch type, branch order, all the way down to segment, branch, and neuron index. This key then gets efficiently masked depending upon user specification.
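Here is a minimal sketch of that 64-bit targeting key; the field widths and layout below are invented for illustration, not the simulator's actual encoding.

```python
# A minimal sketch of the 64-bit targeting key: pack fields such as
# layer, m-type, and branch type into one integer, then select
# topological elements with a mask over only the fields you care about.
FIELDS = {             # name: (bit offset, width) -- illustrative layout
    "layer":        (60, 4),
    "m_type":       (52, 8),
    "e_type":       (44, 8),
    "branch_type":  (40, 4),
    "branch_order": (32, 8),
    "neuron":       (12, 20),
    "branch":       (4,  8),
    "segment":      (0,  4),
}

def make_key(**values):
    key = 0
    for name, v in values.items():
        off, width = FIELDS[name]
        key |= (v & ((1 << width) - 1)) << off
    return key

def matches(key, **criteria):
    """True if key agrees with every given field, ignoring the rest."""
    return all((key >> FIELDS[n][0]) & ((1 << FIELDS[n][1]) - 1) == v
               for n, v in criteria.items())

# e.g. select only dendritic segments (assume branch_type 1 = dendrite)
key = make_key(branch_type=1, neuron=7, branch=3, segment=2)
assert matches(key, branch_type=1) and not matches(key, branch_type=3)
```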
To target compartment variables, for example, you might say: I'm going to target compartment variables based on branch type, and so your mask becomes branch type, shown in blue. The branch types are 0 through 3, and you can see that the dendrites are all targeted with voltage and calcium, but the axon is targeted only with voltage, because you don't want to compute calcium in the axon; it's irrelevant, at least in certain simulations. This is also where the costs are specified to give rise to the load histogram. In channel targeting, we can similarly say we want to target channels according to branch type, so for an inferior olive model we might target the soma with sodium, targeted to voltage, whereas high-threshold calcium and calcium-dependent potassium are targeted to both voltage and calcium, for both the dependency and the production sides of the connection. In addition, we allow parameters to be targeted based on this mask, in this case branch type, so that you can have a different sodium conductance in the soma. These are things you're used to from NEURON; I'm just showing that here it's not done on a model-by-model basis within the graph, but rather at this very high-level specification, applied uniformly throughout the tissue, even though you can mask down to the individual neuron with a file such as this.

These are the results we got from our cortical simulation, and they show that we're able to get overshooting action potentials in the axons. There was no noise-generating input; this was all based on synaptic input. We had a high sodium conductance so that we had a lot of spontaneous activity, and so all of the activity you see is generated purely by the synapses, together with the endogenous sodium firing. These were simulations of a million neurons; on the bottom are 16 million neurons, where we varied the number of synapses, and you can see the differences in the recordings across the dendrites, soma, and axons.

I'd like to finish up by talking again about the next stage in our work, which is trying to apply the simulator to something more like a scientific question. I bring up this approach, which we call brain systems computation, to address how we think about the problem of multi-scale modeling, at least in terms of growing your simulation to larger and larger tissue sizes. We view brain systems computation as the problem of starting with a whole brain structure, such as the inferior olive, that you can compute given current resources, such as a rack of Blue Gene/P, and then adding brain-system components as other whole brain structures, so that you avoid boundary effects within each structure and you add them in a very specific way. Ultimately, you scale the system up as compute resources grow, and we know roughly where they'll grow to over the next decade, so you can plan ahead. But in these intervening steps, I think it's important to note that the phenomena you validate against, in this stepwise addition of larger and larger structural components, should be phenomena that emerge prior to some pre-statable input. What I mean by that is that the inputs are basically the state space over which your model is solved, and if you don't know the inputs, it's very difficult to validate your model. So in the case of the inferior olive, we're starting with something that has intrinsic dynamics we can validate against.
Those dynamics are modulated by, for example, inhibitory inputs from the DCN, but we don't want to validate the model based on those inputs, because we don't know what they are. This, we think, is a really fundamental problem in large-scale tissue simulation in general: often we don't know what the inputs are, and without them you can't really state the space in which you're solving your model. So, if it's possible to iterate outwards in this way, we would like to pursue it: continually add larger and larger pieces of tissue that in and of themselves exhibit some intrinsic dynamics you can validate against, where the inputs are modulatory and unknown, but the intrinsic phenomena are sufficient for validation purposes. Again, the inferior olive is a very nice structure: it has about 24,000 neurons, which we think we can fit on a Blue Gene/P, and it has these intrinsic, emergent properties.

I'm going to finish with one more slide on where we currently are. We've implemented the channel models from Schweighofer, Doya, and Kawato, and these models are now running in the Neural Tissue Simulator to give results such as this, which involve sodium spikes in the axon, shown in blue along the bottom, a large plateau potential in the dendrites, and calcium concentrations that, at least for a first simulation, are reasonable. We're currently scaling this out and validating it. The questions we aim to address: what are the effects of electrotonic coupling between IO neurons, how is this coupling modulated, and what role do the phenomena observed in the IO ultimately play in the overarching cerebellar system?

In closing, going back to the three questions I started with, I hope I've left the workshop with some candidate answers. How do we specify arbitrary model graphs? In our case, we do this by model definition followed by graph specification, and we have a language for each. How do we simulate neural tissue as a model graph? In this case, a topological tissue skeleton becomes the target for models connected through standard interfaces, but we also support accessing these models through arbitrary, non-standard interfaces, which can serve, for example, models of the extracellular space or other phenomena in the tissue. And finally, how do we best compute, or decompose, a neural tissue model graph? In our case, we've observed very good weak scaling using a volume decomposition, and there may be additional benefits in terms of modeling: if you have all your models from a particular region of tissue on a single node, it may be easier to collect data and easier to add models, so we would advocate for that.

I want to acknowledge my collaborator John Wagner, the mathematician on the project, who implemented the novel numerical solution that made the volume decomposition possible, and Charles Peck, who was the chief architect of the model graph simulator and is my manager. Finally, I'll leave you with this statement: the Neural Tissue Simulator software is experimental, and IBM would like to create an active user community. I therefore encourage you to contact the authors if you're interested in using the tool, and we will do our best to resolve all the licensing issues with third-party code and ultimately make it public. Thank you.