 Okay, before I actually get into that I wanted to tie up a loose end from last time and that is the question came up, how can you make a set point for homeostasis basically? How can you encode what target value some property should be? And I don't think I answered that very well so I went back and looked around and in most cases we, in most homeostatic processes we don't really know the molecular details of how a set point is defined but here is how it could work in principle. And so before I get into this pretty abstract schematic let me just talk about the molecular reality of how this could actually happen. So let's think about, remember I talked about synaptic scaling where you increase or decrease the strength of excitatory synapses and that happens a lot by manipulating the number of available receptor channels in the postsynaptic membrane. And so there are, these are just two of probably many different ways in which you can change or regulate the number of available channels in the postsynaptic membrane. So one scenario is that you have a channel molecule inserted in the membrane but it's not functional, it's closed at this point and it only becomes functional, a functional receptor channel if it gets phosphorylated. And then there would be a kinase that does that phosphorylation and a phosphatase that dephosphorylates and you arrive at some equilibrium between the two. And in the schematic that we'll look at in a moment this would be the plus factor and this would be the minus factor. And together they establish a balance of how many functional receptor channels do we have available. 
Another way to look at it, and this is actually how it works in the actual synaptic scaling that Gina Turrigiano studies, where we know that insertion and removal of postsynaptic receptor channels plays a big role: that would be another way to play with that number of channels, you have a factor that inserts new channels into the membrane and another factor that removes them, and again the insertion and removal pathways will arrive at some kind of equilibrium. And so now the idea is that, let me get the pointer now, the idea is that you can set up a set point based on such pairs of factors. So let's assume you have a firing rate, and that firing rate inside the cell could for example be encoded by the intracellular calcium concentration, like I was explaining on Wednesday. And if these factors here, factor plus and factor minus, which could be the kinase-phosphatase pair or the insertion and removal pathways, if they are now dependent on the calcium concentration or the firing rate, you can actually set up a set point. So let's look at what this curve is suggesting, and again this is kind of just a basic idea of how you can use the existing molecular machinery to do this, but in many cases we don't know how it actually happens. So if your plus factor, your kinase or your insertion pathway, if its activity is dependent on the calcium concentration in this direction, and if the other one is dependent in this direction, then you get a unique calcium concentration or firing rate at which the two are in equilibrium, and so that would basically establish a set point. So if your firing rate for some reason becomes higher, your calcium concentration becomes higher and this removal or dephosphorylation pathway will become stronger, and that will bring you back to the set point, and vice versa if you're over here. Doesn't this just push the problem back a level? Because now the question is how you get set points in factor plus and factor minus, because if they change, the set point will change.
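This balance-of-two-factors idea is easy to sketch numerically. Here is a minimal sketch, assuming made-up monotonic curves for the two factors; the functional forms and the bisection search are illustrative choices, not the actual molecular rates:

```python
# Hypothetical calcium dependence of the two antagonistic factors:
# the "plus" factor (e.g. a kinase or insertion pathway) falls with calcium,
# the "minus" factor (phosphatase or removal pathway) rises with it.
# These shapes are illustrative assumptions, not measured curves.
def rate_plus(ca):
    return 1.0 / (1.0 + ca)      # monotonically decreasing in [Ca2+]

def rate_minus(ca):
    return ca / (1.0 + ca)       # monotonically increasing in [Ca2+]

def find_set_point(lo=1e-6, hi=100.0, tol=1e-9):
    """Bisect for the calcium level where the two rates balance."""
    f = lambda ca: rate_plus(ca) - rate_minus(ca)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

ca_set = find_set_point()   # the unique crossing point of the two curves
```

Scaling one of the curves down, say by lowering the concentration of the kinase, shifts the crossing point, which is exactly the set-point movement discussed above.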
Right, so the way the figure in the paper presents it, it's actually not seen as pushing back a problem but as opening a door for adjusting a set point. But I kind of agree, you still then have to understand what now regulates how much kinase there is and how much phosphatase there is. So in a sense I agree it's pushing back the problem a little bit, but you know, at some point maybe you arrive at a point where it's genetically encoded, maybe it's kind of set how much of that kinase you have and how much of that phosphatase you have, and that establishes a set point for your cell. But as you already pointed out, the set point can now be moved: this kinase here is itself a molecule and has a concentration in the cell, and so if something now comes in and decreases that concentration, you basically just scale down that curve and you can move your set point around. But I think it has some explanatory value in that it kind of demystifies a little bit what that magic set point is that in the math we just put in the equation, but I agree, ultimately you're one level further back and then you have to figure out, well, what determines the balance of these two factors. Yes. So one question about set points is that they are often kept astonishingly precise over a long period of time. And the question is, can you from the slope of this curve maybe make some kind of statement on the... How stable that is. The precision. The precision, yeah. There are slopes that do not really increase the precision, but other choices of the parameters can increase the precision as well. I completely agree that those slopes will determine over what range of calcium concentrations that set point can vary, but we also have to...
I just want to emphasize again this is just a very abstract schematic right even the shape of those curves doesn't necessarily have to look like that but as long as you have a monotonically decreasing and a monotonically increasing dependence like that of two antagonistic factors basically you'll get some kind of equilibrium point and then yeah I think anything further than that precision and things like that will then depend on exactly what is the implementation. I also when you started your question I thought oh he's going to go... He's going to go... Uppi on me. He's going to talk about small numbers of molecules and so this is showing a nice smooth curve here but if Uppi is right and your calcium concentration is really pretty low so then all of this could become much more noisy too. So it's just... I just wanted to bring this up to kind of show that in principle with the molecular machinery that exists in the cell you can relatively easily create a set point like that. Okay so now let's move into the parameter variability and ensemble modeling part so again I just wanted to put this up as a reminder a lot of what I'll be talking about is in the pyloric system of the crab or lobster and you have this three node pacemaker circuit pattern generating circuit with a pacemaker kernel that oscillates rhythmically and two types of follower neurons that also oscillate in response to inhibition from the pacemaker so this will just serve as a reminder of what circuit we're talking about. 
So we saw a little bit last time, there was even a question about that, that there is variability in the properties of these neurons. Even though we're talking about identified neurons, so basically you can find the same neuron in every animal, it generates the same rhythmic activity, it innervates the same muscles, and it has the same synapses with partners, the properties of these neurons can vary pretty widely. And so here's an example of that: here is a pacemaker neuron and another neuron in the pyloric circuit, and we're looking at the maximal conductance, so in a sense you can think about it as how many ion channels of a given type are in that neuron's membrane, for three different types of potassium currents. So these are all potassium currents: this is the A-type potassium current, which is an inactivating potassium current, this is a calcium-dependent potassium current, and this is the delayed rectifier that repolarizes your action potential. And each data point here is from one animal, in the same pacemaker neuron or in the same other neuron, and you see that from animal to animal you don't always find the same value, but you get ranges of values, and they can typically range two-, three-, or four-fold, or even five-fold, between different animals, even though those cells are generating the exact same electrical activity. So there is parameter variability, and it's not just maximal conductances of ion channels, and it's not just in the stomatogastric ganglion, so I kind of put together a little collection of examples of parameter variability. So here again we're looking at the pyloric circuit, but now we're looking at the conductances of synapses, so how strong are these inhibitory synapses in the circuit. Here is the synapse from the pacemaker to the LP neuron (I said earlier the pacemaker actually consists of two different cell types, AB and PD), so you can break this down into the AB component and the PD component onto LP, and you see
that both the total synapse strength of both components together and each component individually is also variable, and again you get, for example this one here goes from let's say 25 to 150, so you get a several-fold range of that same synapse strength between different animals. Now, when we're talking about synapses, that gets us to something that is kind of a hot topic recently, and that is the connectome, and the claim by some people that basically everything a neural network does, and everything that you do, is basically determined by your synapse strengths. And this to me already puts a little bit of a perspective on that. And one example that always comes up is that people say, well, you have C. elegans, where we have the complete connectome and we know exactly how everything is connected. But if you actually go into the literature and you look it up, here is data from a comparison of two individual worms, two specimens of C. elegans, and it turns out that, not even in terms of strength but in terms of the existence of a given synapse, only 75% of the synapses that exist in one animal actually also exist in the other animal. So we're definitely not talking about a kind of cookie-cutter connectome that's exactly the same in every worm; there's variability even in the existence of particular synapses, not just in their strength. This is just another figure, again showing that conductance variability in stomatogastric neurons, and here are a couple of other examples. This is from guinea pig cochlear neurons, so neurons in the guinea pig ear. Here we're looking again at a potassium current: the threshold of activation of that current varies, and its slope conductance, which is kind of a measure for the amplitude of that current, also varies several-fold. Here, from a colleague of mine at Emory, Ron Calabrese, we're looking at the strength of inhibitory synapses between leech heart interneurons,
they vary several-fold, even though these are also identified neurons. And this is from mouse Purkinje neurons, that you've heard about several times now, the cerebellum neurons with that flat dendritic arbor, and you see there that Purkinje neurons that generate very similar electrical activity can have pretty different sodium current and calcium current amplitudes. So I would say that this parameter variability, despite very similar output, is kind of a ubiquitous thing. And so what does that mean, and how do we deal with that when we try to build models? But before I get into those questions I want to do a little bit of terminology, and that is because I want you to be able to follow what I'm trying to say, and also because some people are really murky about how they use different terms, and I think that when we talk about these things we should try to be clear and consistent about what we call things. And so I would argue that it's useful to distinguish between three different types of descriptors of a dynamical system, for example a neuron or a neuronal network. One thing is parameters, so those are kind of fixed properties of that system, I'm talking about something like morphology or the maximal conductance for a sodium current or membrane capacitance. These things can change over longer time scales, but for the purposes of what we're looking at at the moment they are kind of fixed entities, they are parameters in the sense that you put them into your model, or that the biological system establishes them during development, and then that's a fixed value. That's what I call a parameter. In contrast to that, you have variables, or dynamic variables: things like the membrane potential, the calcium concentration, the gating variables of different ion channels, basically anything that you describe with a differential equation. That's what I call a variable, and that's different from a parameter; those are the things that change over time and that show the interesting
temporal dynamics. And then there are things that also change over time, I'm thinking about for example the firing rate of a neuron, what else could this be, yeah, firing rate is the best example that comes to mind right now. So these are basically descriptors of what the neuron does, and they're also changing over time, so it's easy to maybe confuse them with dynamic variables, but there's not a differential equation that describes the firing rate; it's an outcome of several dynamic variables taken together. I'm going to try to be really consistent and distinguish these things. The most abused term, I think, is parameters, like a lot of people use parameters for basically anything, and I just think it's useful to distinguish these things, so try to catch me using a term wrong, and maybe that'll make you pay attention; it can happen. So again, we now have this parameter variability, and these I call parameters because for the purposes of what we're looking at today they are constant numbers, and how do you deal with that? So let's say we want to build a model neuron now to describe these stomatogastric neurons. The model neuron that I'm going to be using in the rest of today is a single-compartment conductance-based model neuron. So the single compartment is this blob here, it has intracellularly a model of a calcium buffer, and it has eight different membrane conductances that are all based on measurements in biological neurons. So there's your regular fast sodium current that makes your action potential upstroke, then there's two different calcium currents, different potassium currents, these three we already saw before, there is also the H current that we heard about a couple days ago, so the hyperpolarization-activated current, and a simple leak current. And for all of these, based on voltage-clamp experiments in stomatogastric neurons, we basically have a parameterization, that is, a description of their dynamics and how they depend on voltage,
and we won't get too much into equations like this, but just for those of you who do this kind of modeling, a reminder, and for those of you who don't, kind of also a reminder, I guess, of how complex even a simple single model neuron like that can be. So we basically have a master equation here that describes how the membrane potential changes as a function of all the membrane currents in the neuron; then each of these membrane currents, that's why it says times 8 here, is described by these Hodgkin-Huxley-type dynamics, with activation and inactivation variables that have half-activation thresholds and that have time constants; and then there's also a simple kind of intracellular calcium buffer. And so you see that there's lots of these, these are now parameters, so these are fixed numbers that we plug into these equations, the half-activation voltage here or the reversal potential here. So these are now parameters that, if we want to model this neuron, we have to supply a number for each of these. And I already told you, for a lot of those we have numbers based on voltage-clamp experiments, so we can say the reversal potential for the sodium current is 50 millivolts plus or minus a few millivolts or something; most of these are not all that variable, so most of these are relatively narrowly constrained. But what is not narrowly constrained, and we just saw that before, what's not narrowly constrained is the maximal conductance of these particular currents, so how many ion channels of that type are in the membrane. So I'm charging you with building this model, I'm giving you the differential equations with all those parameters that are relatively well known and constrained, but now you need to pick the maximal conductances. So based on this experimental information, what are you going to do? So let's say for this A current in this pacemaker neuron, what value are you going to pick for your model neuron? I'm not trying to entrap you here, just what would be
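To make the structure of that master equation concrete, here is a heavily stripped-down sketch: one leak current plus a single Hodgkin-Huxley-style potassium current, integrated with forward Euler. All constants and curve shapes are illustrative placeholders, not the actual stomatogastric parameterization:

```python
import math

# C dV/dt = -sum(I_ion) + I_ext, with two currents instead of eight.
C = 1.0                       # membrane capacitance (uF/cm^2), placeholder
g_leak, E_leak = 0.1, -65.0   # leak conductance and reversal (placeholder)
g_K, E_K = 10.0, -80.0        # K conductance and reversal (placeholder)

def n_inf(V):
    # steady-state activation: a sigmoid with a half-activation voltage
    return 1.0 / (1.0 + math.exp(-(V + 30.0) / 10.0))

def tau_n(V):
    # activation time constant (ms); constant here for simplicity
    return 2.0

def simulate(I_ext, t_max=100.0, dt=0.01):
    """Forward-Euler integration of V and the gating variable n."""
    V, n = -65.0, n_inf(-65.0)
    for _ in range(int(t_max / dt)):
        I_ion = g_leak * (V - E_leak) + g_K * n**4 * (V - E_K)
        V += dt * (-I_ion + I_ext) / C
        n += dt * (n_inf(V) - n) / tau_n(V)
    return V
```

The full model adds six more currents, an inactivation variable per current where needed, and the calcium buffer equation, but every piece has this same shape: a maximal conductance, gating variables with voltage-dependent steady states and time constants, and a reversal potential.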
your first, first-order approach. I'm asking you for a number, are you going to make it this or this, what number are you going to pick as your first shot? It's an easy question. The median or average? Yeah, somewhere in there, right, that's the obvious first thing you always do: you have a spread of something, you want to condense it down into one number, you take the average or the median or something, and people do that over and over, and sometimes it works. But often they find, if they plug those numbers into their model and they simulate the model and they look at what it does, it doesn't do anything, or it sits there silent when it should be bursting. So usually that doesn't work, and I want to spend just one slide on what it is about these complex systems that can lead to this so-called failure of averaging. So let me show you: this is from a really interesting paper, I don't have the reference here, but it came out of the lab of Larry Abbott, and what they did is they basically had a simple model neuron, it was actually a simplified version of the stomatogastric model neuron, and they varied these maximal conductances of different ionic currents, so here's the sodium and the delayed rectifier potassium current, but they had I think five or six different currents, not the eight that I'm going to be working with, and they varied all of them. And we're looking here just at everything projected onto this two-dimensional plane, right, really it's a six- or five-dimensional thing, but they projected it all onto this plane, and every color dot here is one version of the model with this particular combination of these two conductances and then some values for the other conductances. And so for each of these parameter choices, which they picked randomly by just randomly varying these parameters, for each of them they looked at what the output is, so they simulated the model with those parameters and they asked what it does, and the color code here
just stands for basically the output of the model neuron, and what's color coded is the number of spikes per burst. So a lot of these will generate these kind of rapid bursts of action potentials, zoomed in here in time, and the color tells you how many spikes are in a burst. Zero I guess means that it's a silent neuron, although I don't see that many here, one means that it makes this kind of activity with like one spike and then a shoulder, and then two and three and four and five is additional spikes. And the point they're making here is, imagine you had an experimental population of neurons that all generate this kind of activity, this spike with a shoulder, like for example these three guys here, one, two, three. Their activity is very similar, both in the shape of the shoulder and also in the overall spike rate, and in this particular example, where are these three located? Number one is here, number two is here, and number three is here. So imagine you had an experimental population like this, and you measure the delayed rectifier, you measure the sodium, and now you're trying to build a model to replicate this kind of thing. If you now take the average delayed rectifier conductance, and all these blue guys are also in this category, if you take their average delayed rectifier G max, maximal conductance, you end up with this value here; if you take their average sodium, you end up with this value. And so you're going to place your model, if you use those two averages, at this point here, and you're going to fall outside of the distribution that actually produces this activity. So this is actually the activity you will get if you use those two averages, so it's different from what you were trying to achieve. And you see why this is happening, right, it's because this distribution of these spike-plus-shoulder guys is kind of concave here and is hugging the axes, and so your average actually falls outside the distribution itself. And that's actually something that we're seeing a
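The averaging trap can be reproduced with a toy example. Suppose, purely hypothetically, that every "working" model satisfies g_Na times g_Kd equal to 100, a hyperbola that hugs the axes much like the concave region in the figure; then the mean of a population lying on that set falls off the set, by the AM-GM inequality:

```python
# Invented population: every individual sits exactly on the (made-up)
# "working" set g_na * g_kd == 100, a non-convex curve hugging the axes.
pop = [(g_na, 100.0 / g_na) for g_na in (20.0, 50.0, 100.0, 200.0)]

mean_na = sum(g for g, _ in pop) / len(pop)
mean_kd = sum(g for _, g in pop) / len(pop)

# Each individual satisfies the constraint...
on_set = all(abs(g_na * g_kd - 100.0) < 1e-9 for g_na, g_kd in pop)

# ...but the "average animal" does not: the product of the means is
# well above 100, so the averaged model lies off the working set.
product_of_means = mean_na * mean_kd
```

The same thing happens for any non-convex working region: the mean of points in the set need not be a point of the set.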
lot, that distributions of parameter combinations that produce similar activity can often have non-trivial shape in the parameter space, and often a non-convex shape, so it's not a nice blob where the average is smack in the middle of the ranges, but it's some complicated shape where averaging will not get you anywhere. So for this and many other reasons, and mainly because we were interested in what this variability of parameters despite similar output means, we said we want to take this really naive approach and just systematically and with brute force explore a parameter space of this model neuron, and basically just ask: if we simulate this model neuron, again here's the reminder of what it is, if we simulate it for many many different combinations of these maximal conductances, what kinds of activity do we get, and how does that activity depend on the particular combination of parameters that we chose? So here's the very naive approach: we basically say okay, we have these eight parameters here, these are the maximal conductances for the eight different currents, and we're going to vary each of them from zero to some physiologically reasonable value that we know based on experiments, in fixed increments. And so we're dealing with an eight-dimensional parameter space here, we're showing three because that's all I can do in two dimensions, and basically the idea is we just cover this eight-dimensional space with a regular grid of simulation points, and for each of them we ask what it does. And again I want to emphasize that that's varying eight parameters, but there's a lot that we're keeping constant between those different versions: we're not changing anything about the voltage dependence of these activation and inactivation gates or about their time constants, all of that is constant, and we can talk later about why that is. I already said that these things are in biology much less variable from animal to animal, so
that's a good reason for varying these but not these, but there are also computational reasons. You can imagine, or it's immediately obvious, that every additional parameter that you add, every additional dimension, just makes this number of combinations here explode, right, and so eight, at the time when we did this, almost ten years ago now, was kind of what we were computationally willing to wait for on a computer cluster. That obviously becomes more and more feasible for larger numbers of parameters, but there is a limit, right: if we varied all parameters of this model, which I think is like 50 or 60 parameters, you know, you wait a couple hundred years on your computer cluster for that to finish up. Okay, so in this particular case we have eight parameters that we vary through six different values each, which is a 1.7 million version data set, and we basically for each of them simulate what it does, have an automatic classification scheme, and save all of that information in a database, and that's what we call the model neuron database. Okay, so what do you get when you do that? So here are a couple of examples. On the right, these bars indicate for the different maximal conductances how much of each was in that particular version of the model neuron, and on the left is the voltage trace produced by that parameter combination, and you see, just mixing the relative concentrations of these ion channels, you can get lots of different types of activity. So silence, nothing happens; you can get a spiker, but here we know that this is not a regular normal spiker but a calcium spiker, because it actually has no sodium current; this one is one that also spikes, but it does it mostly with its sodium; you get these guys like we just saw with a single spike and then a broad shoulder; you get what we're basically looking for when we try to model a stomatogastric neuron, these nice kind of burst patterns; you can get non-periodic stuff, even though
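The brute-force grid itself is trivial to set up; here is a sketch, with placeholder conductance ceilings standing in for the physiological maxima (the real per-current values are not given in this part of the lecture):

```python
from itertools import product

# 8 maximal conductances, each swept through 6 evenly spaced values
# from 0 up to a per-current ceiling. The ceilings are placeholders.
n_values = 6
ceilings = [400.0, 5.0, 10.0, 50.0, 100.0, 100.0, 0.5, 0.05]

grids = [[c * i / (n_values - 1) for i in range(n_values)] for c in ceilings]
all_models = product(*grids)          # lazy iterator over all combinations

n_models = n_values ** len(ceilings)  # 6^8 = 1,679,616, the ~1.7 million
```

Using a lazy iterator matters here: the full grid never has to sit in memory, each parameter combination is generated, simulated, classified, and written to the database in turn.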
we have no noise included in this model, you still can get non-periodic behaviors, and we get these weird guys, which at first I thought were not logical, but later I found out these are called hyperbolic bursters, and there are actually some exotic cell types that show this kind of behavior. So then, like I said, we have an automatic classification scheme that online, actually as it's simulating, is grouping them into four broad categories: silent, tonically spiking, bursting, and then non-periodic. And it's kind of a technical aside, but one of the tricky things in setting up an automated database like that is that most of my time I spent programming to make sure that each individual version of the model was simulated for the minimum amount of time that would allow me to accurately classify it. If you have something like this, you're probably going to have to run it for quite a while until you have convinced yourself there's no periodicity in there, but if you have something like this, after five or ten spikes it's pretty clear this thing is spiking like clockwork, so you don't have to simulate a really long time. And so that way you can have a balance between the amount of time you need for each individual version and still have an accurate classification, and that really minimizes the overall simulation time, as opposed to just running everything for 60 seconds or something. Okay, so now in this particular case we were looking for parameter combinations that would produce something that looks like a stomatogastric neuron, in particular a stomatogastric pacemaker, and so our first thing was to look at this database overall and look at how it breaks down, and luckily, or maybe not luckily, because we chose the parameters in physiological ranges, what we ended up with was about two-thirds bursting neurons, so that was useful, and then a bunch of spiking and silent and a few non-periodic ones, and the bursters are further broken down here by
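That early-stopping idea can be sketched as a function that looks at the spike times accumulated so far and either commits to a classification or asks for more simulation. The decision rule and thresholds here are my own illustrative choices, not the scheme actually used:

```python
# Declare a trace "tonic spiking" as soon as the last n_needed inter-spike
# intervals are nearly constant; otherwise signal that simulation should
# continue (up to some hard cap elsewhere). Thresholds are illustrative.
def classify_early(spike_times, rel_tol=0.01, n_needed=10):
    if len(spike_times) < n_needed + 1:
        return None                      # not enough data yet: keep simulating
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    recent = isis[-n_needed:]
    mean_isi = sum(recent) / len(recent)
    if all(abs(i - mean_isi) <= rel_tol * mean_isi for i in recent):
        return "tonic spiking"           # clockwork ISIs: safe to stop early
    return None                          # irregular so far: keep simulating
```

A regular spiker gets classified after a dozen spikes, while an irregular or slowly bursting trace keeps returning None and therefore keeps running, which is exactly the trade-off described above.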
number of spikes per burst. So a first thing you can then do with a database like this is just use it to find parameter sets that do a particular thing, right. Before, I told you if you build your model from scratch and put in your average parameters from your experiments, you will usually end up with something that doesn't work, and then the traditional thing, and that is done a lot and it's totally legitimate, is that you start hand tuning, using your common sense and adjusting parameters to get it to the point where it does what you want it to do. Here you can be more stupid and you don't need any common sense: you just go in the database and say, give me everybody who does this or that. And so now we're going to look in this database for models that generate this kind of bursting pacemaker activity, just as one example of what you can look for in a database like that. So we start out with the entire database, 1.7 million. We then say we're only looking at the bursters; we want the period to be in kind of a physiological range for these stomatogastric neurons, which brings us down to 200,000 versions; then we want the burst duration to be reasonable; we want the duty cycle to be reasonable, duty cycle being the fraction of the period that's taken up by the burst; and now we're down to a pretty small number, 80 versions out of these 1.7 million. And then we applied a couple additional constraints: there's a thing called the phase response curve, which describes how an oscillator responds to inputs at different times in its cycle, that narrowed it down, and then an important parameter is the amplitude of the slow voltage oscillation that underlies the spiking, and now we're down to just 9 neurons out of 1.7 million. And yes? Have any of those 9 previously been kind of manually designed before, or are they completely new?
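The successive database queries can be pictured as a chain of filters over the stored measurements. A sketch over plain dictionaries, with invented cutoffs standing in for the physiological ranges quoted above:

```python
# Each model "version" is a dict of outputs saved during classification.
# The cutoff values are invented stand-ins, not the real criteria.
def find_pacemakers(db):
    hits = [m for m in db if m["class"] == "burster"]
    hits = [m for m in hits if 0.5 <= m["period"] <= 2.0]          # seconds
    hits = [m for m in hits if 0.1 <= m["burst_duration"] <= 1.0]  # seconds
    hits = [m for m in hits if 0.3 <= m["duty_cycle"] <= 0.6]
    return hits

# Tiny synthetic database: only the first entry passes all filters.
db = [
    {"class": "burster", "period": 1.2, "burst_duration": 0.5, "duty_cycle": 0.42},
    {"class": "burster", "period": 5.0, "burst_duration": 0.5, "duty_cycle": 0.10},
    {"class": "spiker",  "period": 0.1, "burst_duration": 0.0, "duty_cycle": 0.0},
]
pacemakers = find_pacemakers(db)
```

The real pipeline is the same funnel at scale: 1.7 million versions in, then each successive criterion (burster, period, burst duration, duty cycle, phase response curve, slow-wave amplitude) cuts the set down, ending at 9.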
That is a good question. You know, I never went and compared those 9 to a hand-tuned version, I only compared them to each other. That's a good point, I should look at that. I would suspect that they are probably in the range of a hand-tuned version, and that is because when I compared them to each other it turned out, and we'll see much more of that later, that they were not all over the place but were kind of in a neighborhood of the parameter space, and so I would assume that any other combination that does similar activity would also be in that neighborhood. So here's just one example: here's again our biological activity, voltage here, and this is one of those 9 pacemakers, the voltage trace, and here's the phase response curve, I won't get into what this means. One thing as an aside, I haven't talked about this: there's obviously a striking difference between the biological and the model neuron here, right? The spikes are tiny here, they're just like 10 millivolts, whereas these are your usual nice overshooting action potentials, and that's unavoidable with this kind of model. It comes from the fact that we have modeled these as a single compartment, so all the conductances, including the ones that make the action potential, are in that single compartment, whereas in the biological neuron you have the cell body and the spikes are actually initiated pretty far away in the dendrite, and so what you measure in the biological neurons at the cell body with your electrode is really just an attenuated echo of the action potential. And there are more refined models, with 2 or 4 or even more compartments, of these same stomatogastric neurons, where you then separate the spiking currents, the sodium and the delayed rectifier, and put them into a remote compartment, and then you get this kind of activity at the cell body: you get your spike initiation zone with the nice overshooting spikes and your
attenuated spikes at the cell body. So this is not something we can ever achieve with the single-compartment ones, but, and that again gets us to the discussion from last night about how you should model things, for the purposes of this database and the things we wanted to achieve with it, this didn't matter to us, the fact that we had lumped everything spatial into a single compartment; for other questions you do need that spatial resolution, right. Okay, so what are those 9 models now? So here are their individual conductances. I'm not showing all 8 of them because some were the same through all 9, and you see that they can vary, right? So here's the sodium: the smallest is 100 millisiemens per square centimeter, which is a conductance density, and the largest is 400, so you have a 4-fold range, and similarly for some of these others. And so another way to express that is to look at the coefficient of variation, that's the standard deviation over the mean, and you typically get like 0.3 or 0.4, that range, and just to compare that back to what I showed you before, the experimentally observed ranges were in the same ballpark in terms of coefficients of variation. So that to me was reassuring, because it basically says that we find similar electrical activity on the basis of variable parameters both in the biological neurons and in the model neurons. So this was all at the single-cell level so far, and now the question was, does this message of similar behavior from different parameters also apply at the network level? So now we're basically doing the same exercise, constructing a model network database, again by brute-force exploration, but we're doing it at the network level. So this is your network again, and what we're doing now is we're going to explore a 10-dimensional space: we have 3 cells, and in a moment I'm going to show you cell models that we can plug into this position or this position or this position, and then we have 7 synapses here in the circuit,
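The coefficient of variation comparison is a one-liner; a sketch with made-up conductance values in the spirit of the 4-fold sodium range just mentioned (using the population standard deviation here, which is one of two common conventions):

```python
import statistics

def cv(values):
    """Coefficient of variation: standard deviation over the mean."""
    return statistics.pstdev(values) / statistics.fmean(values)

# Hypothetical sodium conductance densities (mS/cm^2) spanning a 4-fold range
g_na = [100.0, 180.0, 250.0, 400.0]
print(round(cv(g_na), 2))   # prints 0.47, in the 0.3-0.4-ish ballpark
```

Computing the same quantity for the experimental and the model populations puts both spreads on a common, dimensionless footing, which is what makes the comparison above meaningful.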
All of them are inhibitory, and for each of them we vary the strength of the synapse from 0 — no synapse present — all the way to saturation, a really strong synapse. What I mean by saturation: the synaptic conductance is a measure of the strength of the synapse, and if you make it very, very strong, then whenever the presynaptic neuron is active, the postsynaptic neuron will basically be clamped to the synaptic reversal potential; the synapse totally sucks the membrane potential to the synaptic reversal potential. You can make the synapse 10 or 100 times stronger than that, but that's all it can do — it can never pull beyond the synaptic reversal potential — so that is functional saturation. That's a huge range, and we thought, well, we roughly know how strong these synapses are; I showed you data before showing that they're variable, but within some range. So should we really go from 0 to really strong? But then we thought, let's just see what happens, and there are some interesting outcomes of that.

Okay, so what are the neurons that we plug into the individual positions? These are the models for the pacemaker position. There are five of them, picked from the nine I showed you before, and I picked them to be as diverse as possible in their conductances: they're all in a neighborhood, but I picked them at the edges of that neighborhood to cover some range. Then, from the same single-neuron database, we can select models for the follower neurons — the lateral pyloric (LP) and pyloric (PY) neurons — based on what we know about their isolated behavior in the biological system. We know that if you isolate them from the rhythmic inhibition they get from the pacemaker, they will be either silent or tonically active; they will not burst by themselves, only if you repeatedly inhibit them. So we picked silent and spiking ones, gave them one burst of inhibition from the pacemaker, and required that they made a nice rebound burst with a certain delay — a couple of different criteria to pick follower neurons that behave like the biological ones. (When did I start — 11:10 or something? I probably have more slides than I can cover again, but the story towards the end is modular, so we can easily drop some parts.)

Okay, so here's our database now. Putting all of these together makes 20 million models, and again everything is automated: we automatically detect whether each network is rhythmically active at all, and if it is, what the period, the burst durations, and the phase relationships are, and all of that gets dumped into a database that we can then query. Here are 10 examples, just to give you a sense of what can happen. In the top row you have networks that use the same model neurons with different synaptic strengths — indicated by the size and color of these blobs — and here you have the same synaptic connectivity but different models in the different positions. You can see that a lot can go wrong. We're trying to get the triphasic rhythm that the biological system generates, and a lot can go wrong: cases where some neurons are silent and some are tonically active, with no bursting at all; cases where everybody is bursty but some are more irregular than we would like; cases where one neuron skips every other cycle; and cases where everybody bursts but the order is wrong — you want the same order as in the biological circuit, which is LP, PY, PD.
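That kind of burst-order check is easy to automate. Here is a minimal sketch — the neuron names follow the talk, but the onset times are made up, and a real pipeline would first have to detect bursts in the voltage traces:

```python
# Sketch of an automated burst-order check, assuming burst onset times
# (in seconds) have already been extracted from a simulation.
def burst_order_ok(onsets, wanted=("LP", "PY", "PD")):
    """True if, sorted by time, the burst onsets cycle through the
    wanted order (LP, PY, PD, LP, PY, PD, ...)."""
    events = sorted(
        (t, name) for name, times in onsets.items() for t in times
    )
    labels = [name for _, name in events]
    # Rotate so the sequence starts with the first wanted label.
    start = labels.index(wanted[0])
    labels = labels[start:]
    return all(
        label == wanted[i % len(wanted)] for i, label in enumerate(labels)
    )

good = {"LP": [0.1, 1.1], "PY": [0.4, 1.4], "PD": [0.7, 1.7]}
bad  = {"LP": [0.1, 1.1], "PD": [0.4, 1.4], "PY": [0.7, 1.7]}
print(burst_order_ok(good), burst_order_ok(bad))  # → True False
```

Run over all 20 million entries, a check like this is what separates the "pyloric-like" networks from the failures.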
In this case you have LP, PD, PY — the wrong triphasic order. But on the right here are two examples where everything is triphasic and in the right order, so these we called pyloric-like: rhythmic, and in the right order. That's a start, and it turned out — I was surprised — that 20% of the entire database was pyloric-like. Why was I surprised? Because we varied each individual synapse over this huge range, when the actual biological range is probably much smaller, so just combinatorially you would expect many, many cases where one or several synapses are way out of range and the whole thing falls apart. But still we got 20%, so that was encouraging.

This had originally been my plan — to go this far and then analyze — but then we said, maybe we can get more refined and really demanding, and not just require rhythmicity and the right triphasic order, but a much better match with biology. To do that we established a set of 15 criteria to describe the output characteristics of these networks — notice that I'm not calling these parameters, I'm calling them output characteristics. So what are they? The most straightforward one is the period; then burst durations for each of the neurons; then delays, the time from this burst onset to that one; then gaps, the time between the end of one burst and the start of the next; then duty cycles — I already said that's the ratio of burst duration to period; and phase relationships, the ratio of a delay to the period. So, different ways of describing this rhythm.

Dirk Bucher — who was a postdoc at the same time as me in Eve Marder's lab and now has his own lab in Florida — did the painstaking thing of going back into lots of old recordings from the lab. When people take out a stomatogastric nervous system and place it in the dish, the first thing they do is record the rhythm before any manipulation, so there's lots of baseline data. He plowed through a hundred of these old recordings and extracted the biological range for each of those 15 criteria — period, burst duration, and so on — and that created an experimental database that we then published. Here are some examples of the biological ranges. Here's the cycle period — each dot is an animal — and you see it's pretty variable; it goes from a little over one to a little over two seconds. That gets to the question somebody asked earlier: how variable are these things? We saw before that some parameters, like half-activation thresholds and time constants, are a little variable, while others, like the maximal conductances, are very variable. Now we see the same thing in the output features: some are very variable — the cycle period, even under the same conditions, same temperature and everything — but other things are much more tightly constrained.

What we're looking at here is the phase of different events in the cycle. What are those events? We define phase 0 as the beginning of the PD (pyloric dilator) burst, always shown at the top. The PD-off phase is then the fraction of the period at which that burst ends, which is about 0.4; and the same for when the LP burst starts and ends, and when the PY burst starts. You see that even though the cycle period varies pretty widely, these burst onset and offset phases are much more narrowly constrained. People talk about this a lot, and I think it tells us a lot about what the circuit really cares about. It doesn't seem to matter so much how fast — and again, as a reminder, this is moving
filters in the stomach: it activates a couple of antagonistic muscles that move the stomach wall, and that moves a filtering system that passes small food particles on for further digestion and throws back big food particles for further chewing. It doesn't seem to matter how quickly you do that, but you want the right phase relationships — you don't want your antagonistic muscles active at the same time, so that you cramp up or something. So that tells you something about what the desired function of this circuit probably is; we'll see a lot of that later on. Okay, I'm going really slowly here, but I'd rather go in depth and have you follow this stuff, even if we lose some of the finer details at the end, than go way over your heads.

So we have this experimental database to constrain those 15 criteria that characterize a rhythm: for each criterion we now have a mean plus or minus a standard deviation from the biology. If we now go into those 20% pyloric-like networks and filter out only the ones that fall within the biological range for all 15 criteria, we're down to 2.2% of the database. That's a small fraction — but again, we varied the synapses very widely, and 2.2% (hard to say for a German) of 20 million is still 450,000 different versions of this network that generate a physiologically reasonable rhythm. Here are two examples, model networks 1 and 2, with their voltage traces, and here are some cellular and synaptic properties from these two particular networks. Just as with the single cells — maybe even more so — we see variability in these cellular and synaptic parameters. I'll pick two examples: this network has a big sodium current in the PY neuron, while here it's pretty small; and this one has a strong synapse from the pacemaker to the lateral pyloric neuron which, in the other network, is very weak. So you again have basically this message of similar, highly biologically constrained output on the basis of different parameters.
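The filtering step just described can be sketched in a few lines. The feature names and ranges below are made up for illustration — the real numbers come from the experimental database — but the logic is the same: a network survives only if every output characteristic falls inside the biological window.

```python
# Hypothetical biological (mean, standard deviation) pairs for a few of the
# 15 output criteria; the real values come from the experimental database.
BIO_RANGES = {
    "period":        (1.5, 0.35),   # seconds
    "pd_duty_cycle": (0.40, 0.05),  # burst duration / period
    "lp_on_phase":   (0.50, 0.04),  # delay / period
}

def passes_all_criteria(features, ranges=BIO_RANGES, n_sd=1.0):
    """Keep a model network only if every feature lies within
    mean +/- n_sd standard deviations of the biological data."""
    return all(
        abs(features[name] - mean) <= n_sd * sd
        for name, (mean, sd) in ranges.items()
    )

network_a = {"period": 1.4, "pd_duty_cycle": 0.42, "lp_on_phase": 0.51}
network_b = {"period": 1.4, "pd_duty_cycle": 0.60, "lp_on_phase": 0.51}
print(passes_all_criteria(network_a), passes_all_criteria(network_b))  # → True False
```

Requiring all criteria at once is what makes the cut so sharp: each criterion alone passes many networks, but the conjunction takes the 20% pyloric-like set down to 2.2%.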
I was particularly interested in the synapses, because we had varied them so widely and I had expected that to produce many more failures. So here are these pyloric networks — those 450,000 narrowly constrained ones — as a histogram over each of the synapses: over the strength of this synapse, this synapse, and so on, so you see how widely each synaptic strength was varied. I didn't say this before: in the single-cell database we used a regularly spaced grid of conductance values, but for the synapses, after some pilot fiddling, we varied them on a roughly semi-logarithmic scale, because if a synapse is small, a small change is more interesting than if it's already big. Often with an exploration like this it makes sense to play around first and figure out how you want to vary the different things. With the exception of this one synapse, which apparently needs to be pretty weak for a network to be successful, all the other synapses cover the entire range from zero to the maximum saturating strength. To me that was really surprising.

So we asked: we have a huge range for almost all of these synapses — what if we narrow down not just to the biological range of those 15 features, but carve out subsets of that? One thing we tried was picking only the fastest few percent — the fastest couple of hundred networks — or a narrow range in the middle in terms of period, or only the slowest ones, and seeing how it breaks down. You see that still most of the synapses cover the full range. There's a lot of interesting information in here that we analyzed but that I won't get into; for example, here you see that there is apparently some relationship between the strength of this pacemaker-to-LP synapse and the period you can achieve: if it's weak you tend to be faster, and you go slower if it's stronger.

Interestingly — again in the context of that whole connectome discussion, where the connectivity supposedly determines everything — and maybe I'm being too glib here, I do think it is very important — but at least one person who is very prominently pushing that idea (Sebastian, I think) is kind of neglecting the fact that there's also a lot of processing going on in the cellular properties. It's not just the synapses; it also matters what the cells are doing. In any case, what we noticed is that out of those 2.2%, those 450,000 highly functional networks, if we look at which synapses they actually have, less than half of them have all connections present. Some are missing one connection — here this one is missing, here that one — some are missing two, and there's even a very small fraction that get by with just the inhibition from the pacemaker to the follower neurons, with no talking between the followers and no feedback. That's the bare minimum: if you don't have that, the follower neurons won't be rhythmic at all, they'll just be tonic or silent. But even with only that, if everything else is right, you can generate the rhythm.

Okay, so this wraps up the core that I absolutely wanted to get across, and then we can go into some related points. What I absolutely wanted to get across is that there is parameter non-uniqueness, both in biology and — we can mimic this — in the models. Parameter non-uniqueness means there is not just one point in parameter space that the system
has to sit in to function properly; instead, a whole range of different parameter sets can do the job. That gets us to this concept of a solution space: you have this 8- or 10- or however-many-dimensional parameter space, and the solutions — the functional parameter combinations — are somewhere in that space, and we call that subset of the entire space the solution space: the parameter sets that allow the system to function properly.

It also gets me to the idea I put in the title, ensemble modeling. Instead of a single version of a model that we study — where we can switch one thing off and see what mechanisms determine the behavior — we now have a whole ensemble of models. I think of this as the model equivalent of a biological population. In mammals you have hundreds of thousands of Purkinje neurons that share overall commonalities but each differ slightly; in the invertebrates you have one identified cell type that you can look at across a population, and you see variability. I would argue that working on an individual model neuron and studying it in detail has a lot of value and should absolutely continue, but there may be questions where we're better off working with a whole ensemble. If you ask, what's the role of the sodium current — what happens if I switch it off? — and you do that in one model neuron, you might arrive at a conclusion that is idiosyncratic to that particular version. If you do it across an ensemble that mimics the biological variability, and you see that in 95% of the cases it has this effect, with some outliers, you get a much more robust sense of what's going on.

About that term: when we first started thinking about it this way, I figured it couldn't be unique to neuroscience — any complex system, biological or otherwise, must have the same property that different parameter combinations can do the same thing. So I looked around a little to see whether people in other fields were actively exploiting that, and it turns out that people who model metabolic pathways in the body and in cells often know the starting substrates and the end product, but there are several different chemical pathways that could lead from X to Y. As long as they don't have complete knowledge of which pathway is actually in place in the cell, they examine all possible pathways, and they call that an ensemble — that's where the term comes from.

I already alluded to the biological implications: if there were a unique solution — if you had to sit in exactly this parameter spot to generate a pyloric rhythm — that would make the biological system very brittle; any small variation could kick you out of proper function. But if you have an entire solution space, that makes you much more robust. There may be directions in which you can still easily fall out of your solution space, but there may be other directions in which you can vary a parameter a lot and it doesn't really matter — you still produce a functional rhythm. And that's the whole connection with the homeostasis I talked about on Wednesday: we think these systems have evolved to promote robustness in that they are non-unique in their parameters; they can do the same job in many different ways. (Did you want to ask something? — No, just scratching.)

Before I move on, I just wanted to mention that this is one way of exploring a parameter space and how the behavior of a system depends on the parameters. There
are other ways — other methods people use to find parameter sets that produce correct activity. One of them is gradient descent. You take your model and define some kind of fitness function — for example, fitness is highest when the period hits a target value and falls off as you move away from that target — you define that fitness function over the parameter space, and you do some form of gradient descent to find a peak of fitness, or equivalently a minimum of error. People often take a voltage trace recorded from a biological neuron and try to use gradient descent to get the model neuron to exactly match that trace. In some cases it succeeds — there's a whole literature on this — but I think one of the reasons it's hard is that neurons have these spiky, unique events, so it's very difficult to define a fitness function that doesn't produce a lot of jagged features in the fitness surface; and with jagged features and many local minima, gradient descent is very difficult.

Another method that people use a lot, and which I think is more successful than gradient descent and also very interesting, is the evolutionary or genetic algorithm. Who knows what a genetic algorithm is? Quite a few — so I don't have to go into much detail. The idea is that you start with some initial population, where each individual is now a parameter set: a bunch of random parameter sets of your model. You again define some fitness — how well does it match the biological behavior? — you pick your champions, you breed them, you apply some mutation, and you go through a number of cycles that mimic biological evolution, narrowing down on parameter sets that match the biological behavior well. That is done a lot and often matches pretty well.

Coming back to the issue I raised at the beginning: we looked at 8 or 10 parameters, and in my lab and others we're now up to 20 or 30 parameters whose space we systematically explore, but there's a limit — the more you add, at some point it becomes computationally infeasible. So what I think is a very interesting approach, which the lab of Erik De Schutter has pioneered, is a hybrid. They have a really interesting paper that starts with an evolutionary algorithm on a very complex multi-compartment Purkinje cell model; they run the evolutionary algorithm and arrive at, I think, the 20 best parameter sets, and then they use those as anchor points for a systematic exploration, more like the one I showed you. Out of those 20 they pick three, which anchors a hyperplane in parameter space, and they systematically explore on that hyperplane. Hybrid approaches like that give you the best of both worlds: you don't have an explosive amount of computation, because you start from a few already highly evolved individuals, but you then get the fine-grained exploration that a grid gives you.

Okay, I could stop here — there's lots of other stuff, but we need a lunch break. If anybody's interested in really pursuing this kind of work, I'm happy to email or talk while I'm still here. (Inaudible exchange with the audience about the schedule — lunch, project time, the lab session.)
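Going back to the genetic-algorithm idea for a moment, here is a minimal, made-up sketch of the cycle just described — random parameter sets, fitness as distance to a target period, champions, crossover, mutation. The "model" is a stand-in function of two parameters, not a real neuron model:

```python
import random

random.seed(1)

# Stand-in "model": pretend the burst period is some smooth function of two
# parameters (think: two conductances). A real model would simulate a neuron.
def simulated_period(g1, g2):
    return 0.5 + 2.0 * g1 / (g1 + g2 + 1e-9)

TARGET_PERIOD = 1.5

def fitness(individual):
    """Higher is better: negative distance to the target period."""
    return -abs(simulated_period(*individual) - TARGET_PERIOD)

def evolve(pop_size=40, generations=30, mutation=0.05):
    # Initial population: random parameter sets.
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        champions = pop[: pop_size // 4]           # selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(champions, 2)     # pick two parents
            child = tuple(                          # per-gene crossover + mutation
                max(0.0, random.choice(pair) + random.gauss(0, mutation))
                for pair in zip(a, b)
            )
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(round(simulated_period(*best), 2))
```

Because selection only compares fitness values and never needs a gradient, a scheme like this tolerates the jagged, local-minimum-riddled fitness surfaces that defeat gradient descent on spiking models.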
Well, there's lots, but maybe I'll do a five-minute version, because what people always ask is: so what is that solution space? What is its shape? Is it little islands scattered all over the parameter space — I think the pointer died again — or is it one continuous blob? Is it a few small areas? Is it nicely convex, or can it have concave features? I'll give it away: they often do have concave features. But how do we look at that? These are high-dimensional parameter spaces, so how do you visualize and analyze them?

Here's one method we came up with to visualize this, and it really benefits from the regular grid structure — it's doable but harder for a more scattered population of parameter sets. It's called dimensional stacking; let me take you through what it is, and then I think I can wrap up. Say you had a two-dimensional parameter space — sodium and delayed-rectifier conductance — and you cover it with a grid of, say, a hundred, or in this case thirty-six simulations, six by six. You can easily visualize that: for example, make all the spiking versions red and all the bursting ones blue and look at how they're distributed in this two-dimensional plot; that's the thing at the top. Now if you have four dimensions, the idea of dimensional stacking is that you take these little two-dimensional plots and stack them inside the next two dimensions. In this little red box, your first two parameters — in this case calcium-dependent potassium and sodium — vary within these little squares, and then you repeat that for the next higher value of the delayed rectifier, and so on: you stack the first two dimensions inside the next two. For the eight-dimensional data set that the single-neuron database contained, you have to do four levels of that, and you arrive at a two-dimensional plot where every pixel stands for one entry in the database — one set of parameters. They're all present, just arranged in this particular way based on those parameters.

So what does that look like? Here's your first dimensional stack, and when you first look at it you don't see anything, but let me take you through it. These bars here indicate which parameter sits at which level of organization. The calcium-dependent potassium conductance is at the highest level: here it's at level zero, here one, two, and so on; and the same vertically — you can't read it here, but I think it says sodium — and then this is the next level of organization: zero, one, two, three, four, five, and so on. Color-coded is the breakdown I showed before: silent in gray, tonic in blue, and bursting from green through red by the number of spikes per burst. Now you can start drawing conclusions. First, you notice that if your calcium-dependent potassium and your sodium are both at their lowest value — at zero — you're almost certain to be silent; it's very hard to do anything other than silence if you don't have a sodium current. Then you see that inside these blocks, as you go from here to here, you get higher and higher numbers of spikes per burst. What does that mean? At the second level of organization you go from high transient calcium and low delayed rectifier to low transient calcium and high delayed rectifier — so if you vary those two conductances in that direction, you increase the number of spikes per burst. You see how you can learn something about the structure of these spaces that way. Some of you have probably realized that what you see in these plots depends a lot on which parameter you put at which level of organization, but I'm not going to talk about that.
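The stacking operation itself is just an axis reordering. Here is a sketch for one level of stacking — a 4-D grid folded into a 2-D image — using a tiny made-up grid; for the eight-dimensional database you would apply the same pairing-up idea recursively, four levels deep:

```python
import numpy as np

def dimensional_stack(data):
    """Flatten a 4-D parameter grid into a 2-D image.

    data[a, b, c, d] is some scalar outcome (e.g. spikes per burst) on a
    regular grid over four parameters. Parameters (a, b) form the outer
    (coarse) level of organization and (c, d) the inner level: each (a, b)
    block of the image contains a full (c, d) sub-image.
    """
    na, nb, nc, nd = data.shape
    # Reorder axes to (a, c, b, d), then merge pairs:
    # row index = a * nc + c, column index = b * nd + d.
    return data.transpose(0, 2, 1, 3).reshape(na * nc, nb * nd)

# Tiny made-up example: a 2x2x2x2 grid becomes a 4x4 image in which
# every pixel is one parameter set.
grid = np.arange(16).reshape(2, 2, 2, 2)
image = dimensional_stack(grid)
print(image.shape)  # → (4, 4)
```

This is why the method wants a regular grid: the reshape only works if every combination of parameter values is present exactly once.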
So the final thing, then — well, I won't even get to that; I think this is probably enough. Okay — actually, I should finish this last thought. What this was going to show is that we did a connectivity analysis: we asked, if I take the subset that generates a certain behavior, like bursting with a certain period, does it in fact sit in separate islands, or is it connected? It turns out that unless you constrain the behavior really narrowly, these sets tend to all be connected in one continuous region of parameter space — a continuous solution space. That doesn't mean a nice blob; it can have a complex shape, but it tends to be connected.

Then we have a paper with a former postdoc of mine — Ray Oliver is his name — where we asked exactly that question. We were looking at a very simple model with just two dimensions; what he was mapping out were two conductances, I don't even remember exactly which. He found a solution space shaped roughly like this: in here the neuron could generate the kind of oscillation we were looking for; over here it was silent, stuck hyperpolarized; and here it was in depolarization block, something like that. And then — you were asking how a homeostatic mechanism knows where to go — we implemented a very simple version of a homeostatic mechanism that moves the system along linear trajectories through parameter space until a certain calcium concentration is reached, a concentration that was high in this region and low in the others. It turned out that with this kind of regulatory mechanism, which I think was relatively realistic in its overall geometry, you can set up homeostasis such that no matter where you are over here, if you get perturbed out of the solution space it will always take you back somewhere inside it — but if you get perturbed over to here, it will never be able to get you back into the solution space.

This is a very abstract model, but some epilepsy people were interested in it and said, well, maybe this could explain how some kinds of insults to the brain can be overcome by homeostasis, while others are so massive that you can never get back into your solution space. But all of this is partially very abstract — we just don't have the data yet — and maybe a little speculative at this point.

Question: if you take animals, is the diversity between animals just random — wherever the system happens to end up — or does it reflect some kind of history? We have not systematically explored that, but I think he had a good point: there are animal models that are highly inbred lab strains, and then there's our kind of model animal, which we get from the fishermen — they catch them in the ocean, sometimes in this place, sometimes in that, and who knows what life history they had in terms of what they were exposed to. Anecdotally I think he's right that lab strains, which all come from the same background with similar genetics, tend to be more similar than animals like ours. So there may be a point to that: the extent of the variability may depend on the previous history of the animal. And if it's true that these homeostatic mechanisms are constantly operating, constantly moving the system around in parameter space, then that would make total sense — where exactly you sit in the solution space may depend a lot on where you came from. But there's no real hard data to make a strong link. There is also practically no data on this, simply for technical reasons: a longitudinal study of a neuron's parameters over a homeostatic time scale — days and weeks — is very difficult, either
with electrophysiology or with molecular biology.
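To close the loop on that two-conductance homeostasis model: the scheme described above — follow a fixed direction in parameter space until a calcium-like readout hits its target — can be sketched in a toy form. Everything here (the readout function, the target, the trajectory direction, the numbers) is invented for illustration; the point is only that the same rule rescues some perturbations and not others:

```python
# Toy version of the regulatory scheme: the "cell" follows a fixed linear
# direction in a 2-D conductance space until an activity readout (standing
# in for intracellular calcium) reaches its target.
def calcium_readout(g1, g2):
    # Pretend activity (and hence calcium) rises with g1 and falls with g2.
    return g1 - 0.5 * g2

TARGET = 1.0              # desired calcium level inside the solution space
DIRECTION = (0.1, -0.05)  # fixed trajectory direction in parameter space

def regulate(g1, g2, max_steps=500, tol=0.1):
    """Step along DIRECTION until the readout is within tol of TARGET.
    Returns the final point, or None if the target is never reached."""
    for _ in range(max_steps):
        if abs(calcium_readout(g1, g2) - TARGET) < tol:
            return (g1, g2)
        g1 += DIRECTION[0]
        g2 += DIRECTION[1]
        if g1 < 0 or g2 < 0:  # left the physical (non-negative) range
            return None
    return None

print(regulate(0.2, 1.0) is not None)  # → True: this perturbation is rescued
print(regulate(0.2, 0.0) is not None)  # → False: this one cannot be
```

Even in this toy, the geometry does the work: from some starting points the fixed rule carries the system back to the target region, while from others it runs off the admissible range first — which is the qualitative picture the epilepsy analogy was drawing on.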