…dendritic computation and integration in memory for a good while, and she'll talk to us about "Dendritic contributions to complex functions: insights from computational models." Over to you.

It's a great pleasure to be here. I'm looking forward to a lot of exchanges with people from the experimental and the machine learning fields, and what I want to talk to you about today is our work on how dendrites contribute to information processing and memory formation in different brain areas. We are mostly interested in understanding dendrites, these thin processes that extend from the cell bodies of neurons, and we want to understand, using computational models, how these structures may contribute to different learning and memory functions: spatial learning, contextual learning, fear learning, working memory, and so on. We've recently also started looking into the role of dendrites in visual processing and the role of interneuron dendrites in various kinds of computations. What we're trying to find out is whether there is a single function or attribute of dendrites that is common across different brain areas and different cell types. So how do we do that? We do that using purely computational approaches, and I want to give you a few examples of the models we have been developing in our lab over the past ten or so years. We have single-neuron computational models, which are very detailed biophysical models that incorporate realistic morphologies of neurons; what you see here is a layer 5 pyramidal neuron of the prefrontal cortex. We also have similar models for the CA1 area of the hippocampus; this is a model that I developed when I had just started my own lab in Crete, in collaboration with Bartlett Mel.
We have models of V1, layer 2/3 pyramidal neurons; we also have amygdala single-neuron models; and more recently we started looking into interneurons, and what you see here are single-neuron models of fast-spiking basket cells. All of these models have very large complexity in terms of the biophysical properties they incorporate, which are all these various ion channels, each described by its own set of differential equations, so there are a lot of parameters to fit. This is pretty much how biophysical modeling works: you have a detailed morphology of a neuron, which is then described by a series of electrical circuits, and each of these electrical circuits can be approximated with typical cable-theory equations, where you have the active conductances, the batteries, the capacitance representing the membrane, and so on. (You have to stay closer to the mic, because the people in the back can't hear you.) Really? Okay, sorry, closer to the mic it is. Okay, so we have all this complicated mathematics in the background of these beautiful cell models, and as a result there are a lot of parameters one needs to tune in order to get such models. And as Ivan mentioned just a second ago, there are new neuroinformatics tools coming out right now for doing multiple types of optimization to find these parameters. I have to admit that in my lab we don't really use those tools, and I'll tell you why: I think it is really important, for a student especially, to understand, when they play with this or that parameter, what that parameter will do to the network. As a graduate student I experienced this myself; doing this over and over again lets you gain a very deep understanding of how each of the ionic mechanisms you simulate contributes to the activity of the pyramidal cell.
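As a rough illustration of the cable-theory picture just described, each compartment of such a circuit obeys an equation of the form C_m dV/dt = -g_leak (V - E_leak) + I. Here is a minimal sketch of a single passive compartment; the parameter values are illustrative, not those of any of the lab's models:

```python
# Minimal sketch of one passive RC "compartment" from cable-style modeling:
#   C_m * dV/dt = -g_leak * (V - E_leak) + I_inject
# Real biophysical models add many active conductances (the "batteries")
# per compartment and couple thousands of compartments along the morphology.

def simulate_compartment(i_inject_nA=0.1, t_stop_ms=100.0, dt_ms=0.025):
    C_m = 0.2       # nF, membrane capacitance of the compartment
    g_leak = 0.01   # uS, passive leak conductance
    E_leak = -70.0  # mV, leak reversal potential
    v = E_leak
    trace = []
    t = 0.0
    while t < t_stop_ms:
        # forward-Euler integration of the membrane equation
        dv = (-g_leak * (v - E_leak) + i_inject_nA) / C_m
        v += dv * dt_ms
        trace.append(v)
        t += dt_ms
    return trace

trace = simulate_compartment()
# The voltage relaxes toward E_leak + I/g_leak = -70 + 10 = -60 mV with
# time constant C_m/g_leak = 20 ms.
```

Fitting a full model means tuning dozens of such conductances per compartment, which is where the optimization tools mentioned above come in.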
So, for purely training purposes, I ask my students to manually tune their models, and once they've done that they can go into any online tool to end up with a final set of parameters. These are examples of single-cell models, and we have a lot of microcircuit models in the lab as well. This is a microcircuit model of a few detailed biophysical neurons connected together in the prefrontal cortex, and this is how it looks when you simulate it in NEURON. So you have the depolarization (gosh, sorry about this, it's advancing on its own). Okay, so when the color changes here it means that the cell goes from hyperpolarized, in purple, to depolarized, in yellow, and it fires a spike when you see a yellow color. We also have microcircuit models where we eliminate the complexity of the morphology and make the cells slightly simpler, and in these cases you have even more parameters to fit, because now instead of looking at just one cell you have multiple. Still nothing close to what Ivan presented, like a real-size CA1 network; this is 10 or 20 neurons in this microcircuit. Other examples of microcircuit models include the dentate gyrus network model that we recently developed in collaboration with Attila Losonczy, to see the contribution of mossy cells to spatial learning, and a CA1 network model, a very simplified one again, to assess how spatial learning is influenced by VIP interneurons. So these are some of our simplified microcircuit models, and they're simplified in the sense that they can incorporate, instead of detailed mathematical descriptions, simpler integrate-and-fire descriptions of neurons, which you can also extend by adding dendrites, where the dendrites themselves are described by integrate-and-fire type equations.
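To make the integrate-and-fire-with-dendrites idea concrete, here is a minimal sketch of a leaky integrate-and-fire soma driven through a single "dendritic" compartment. The parameters, the single-dendrite layout, and the coupling scheme are all hypothetical simplifications for illustration, not the published microcircuit models:

```python
# Sketch of an integrate-and-fire neuron extended with one "dendritic"
# compartment: the dendrite low-pass filters the input, the soma integrates
# the dendritic drive and applies a spike-and-reset nonlinearity.
# All parameters are illustrative (arbitrary voltage units).

def run_lif_with_dendrite(input_current=2.0, t_stop=200.0, dt=0.1):
    tau_d, tau_s = 10.0, 20.0   # ms, dendritic / somatic time constants
    g_c = 0.8                   # dendro-somatic coupling strength
    v_th, v_reset = 1.0, 0.0    # spike threshold and reset value
    v_d = v_s = 0.0
    spikes = 0
    t = 0.0
    while t < t_stop:
        # dendrite integrates the input, soma integrates the dendritic drive
        v_d += dt * (-v_d + input_current) / tau_d
        v_s += dt * (-v_s + g_c * v_d) / tau_s
        if v_s >= v_th:         # somatic spike-and-reset nonlinearity
            spikes += 1
            v_s = v_reset
        t += dt
    return spikes

n = run_lif_with_dendrite()     # spike count for the default input
```

Stacking many such dendritic subunits, each with its own nonlinearity, is what turns this into the simplified-microcircuit building block described above.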
And finally we have larger-scale network models, on the order of 500 cells or so, where we also incorporate a large number of plasticity mechanisms; I will talk about this network model in a while. The idea here is to come up with something that is capable of learning different types of memories, as opposed to just looking at information processing. So this is an overview of the types of modeling tools we have been developing in the lab (there are many more in addition to those), and I just want to highlight a set of recent studies where we used these kinds of models to infer the functional properties of the networks we're interested in. I'm going to focus on what we have learned over the years from these kinds of models, and I want to start with the very old work that I did in collaboration with Bartlett, which was pioneering at the time, in which we used a detailed CA1 pyramidal cell model to study how dendrites contribute to information processing. In this early study, what we found, which was very interesting at the time, was that the dendrites of these neurons integrate information in a supralinear manner: if inputs arrive within the same branch, they summate supralinearly, which is the red line shown here, whereas if the same inputs are distributed across two branches, they add up linearly, which is shown by the green line here. This prediction was, let's say, revolutionary at the time, and it stirred the interest of the community; it was later verified experimentally both in cortical neurons (this is a neocortical layer 5 pyramidal neuron, work done by Jackie Schiller's lab, where they found the same supralinearity in basal dendrites) and in CA1 pyramidal neuron oblique dendrites, same as the model, done by the lab of Jeff Magee.
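The within-branch versus across-branch result can be captured with a toy sigmoidal branch nonlinearity. All numbers below are made up for illustration; this is a sketch of the principle, not the published CA1 model:

```python
# Toy demonstration: inputs on the SAME branch cross the branch's sigmoidal
# nonlinearity together and summate supralinearly; the same inputs SPLIT
# across two branches summate linearly (relative to single-input responses).
import math

def dendritic_subunit(x):
    # sigmoidal branch nonlinearity (illustrative slope and threshold)
    return 1.0 / (1.0 + math.exp(-4.0 * (x - 1.0)))

def cell_response(branch_a, branch_b):
    # each branch processes its own drive; the soma sums the branch outputs
    baseline = 2 * dendritic_subunit(0.0)
    return dendritic_subunit(branch_a) + dendritic_subunit(branch_b) - baseline

w = 0.7                                   # drive of a single input
single = cell_response(w, 0.0)            # one input alone
same_branch = cell_response(2 * w, 0.0)   # both inputs on one branch
split = cell_response(w, w)               # inputs on different branches
linear_prediction = 2 * single            # "expected" linear sum

# same_branch exceeds linear_prediction (supralinear, the "red line");
# split matches linear_prediction (linear, the "green line").
```

This expected-versus-observed comparison is exactly the protocol the experimental verifications used, with glutamate uncaging in place of the simulated synapses.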
So the interesting thing that came out of this model is that we suggested something new, the experimentalists got interested and looked into it, and they came up with a verification, thankfully, of our prediction. Based on this early work we suggested that pyramidal neurons integrate as multi-stage non-linear integrators, meaning that they have one level of processing that takes place in the dendrites, in this case the sigmoidal subunit non-linearity present in pyramidal neurons, and as a result of this finding we suggested that these cells act as two-stage artificial neural networks. You have one level of processing in the dendrites and another level of processing at the cell body, both of them highly non-linear. And what we believe about pyramidal neurons now (it's been a while since then) is that they act as multi-stage integrators: they have non-linearities in their dendrites, and they can have non-linearities also in their tuft and basal trees, but the take-home message is that these cells are doing much more than just summating inputs and processing them. So we are now happy with that interpretation of what pyramidal neurons do.
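A two-stage ANN of the kind just described can be sketched as follows: each hidden unit is one dendritic subunit applying a sigmoidal transfer function to its own group of synapses, and the output unit is the somatic nonlinearity. The weights and transfer functions here are illustrative placeholders, not fitted values, and the point-neuron comparison is included only to show what the extra stage buys:

```python
# Sketch of "pyramidal neuron as a two-stage ANN" versus a point neuron.
# Stage 1: per-branch sigmoidal subunits; stage 2: somatic nonlinearity.
import math

def sigmoid(x, gain=4.0, thresh=1.0):
    return 1.0 / (1.0 + math.exp(-gain * (x - thresh)))

def two_stage_neuron(synapses_per_branch, branch_weights):
    # stage 1: each branch sums its own synapses through its nonlinearity
    branch_out = [sigmoid(sum(s)) for s in synapses_per_branch]
    # stage 2: weighted sum of branch outputs through a softer somatic sigmoid
    drive = sum(w * b for w, b in zip(branch_weights, branch_out))
    return sigmoid(drive, gain=2.0, thresh=0.5)

def point_neuron(synapses_per_branch):
    # the point-neuron alternative: one global sum, one nonlinearity
    total = sum(sum(s) for s in synapses_per_branch)
    return sigmoid(total, gain=2.0, thresh=0.5)

clustered = [[0.7, 0.7], []]   # both inputs on branch 1
dispersed = [[0.7], [0.7]]     # one input per branch
w = [1.0, 1.0]

# The point neuron gives identical output for the two arrangements (it only
# sees the total drive); the two-stage model distinguishes them.
```

The hidden layer is what gives the cell sensitivity to *where* synapses land, not just how many are active, which becomes central later in the talk.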
The question I wanted to tackle in this talk is: what happens in interneurons? There are recent data coming from several labs suggesting that the dendrites of interneurons also exhibit non-linearities similar to those of pyramidal cells. For example, in stratum radiatum interneurons in CA1 (we don't know the exact type), people have reported this supralinear jump in the response to increasing input strength, an increasing number of synapses activated within a dendrite, which you can see here and here, and this non-linear jump is dependent on the NMDA receptor, because if you block it, it disappears. So that is CA1 stratum radiatum interneurons. Also, in cerebellar stellate cells, the lab of David DiGregorio showed that you can have sublinear dendritic summation of EPSPs, so the voltage summates sublinearly within the dendrites of these cells, but at the same time, if you look at the calcium concentration, the response is highly supralinear in the same dendrites. So we have some interesting dynamics taking place in the dendrites of interneurons. And fast-spiking basket cells, one of the most dominant classes of interneurons in the cortex and the hippocampus, have also been suggested to have supralinear dendrites, which you can see here during sharp-wave ripples, in fact, in CA3; this is work from the lab of Balázs Rózsa. So there are all these data, not that many admittedly, but enough to suggest that the dendrites of interneurons may also be doing some interesting non-linear computations. Yet the underlying dogma about how these neurons process information, to most of us at least, is still the point-neuron dogma, which means that we think of interneurons as having dendrites but not actually utilizing those dendrites: they're there to collect the input, and then we have a spiking non-linearity at the cell body, which is what a point neuron represents. So Alexandra Tzilivaki, a brilliant undergraduate and then master's student in my lab, wanted to
ask the question of how interneuron dendrites, and particularly those of fast-spiking basket cells, integrate inputs. To do that she used eight different morphologies of these neurons, five from the hippocampus and three from the prefrontal cortex, and built detailed biophysical models of these cells, which were validated against a large number of experimental data that I don't have time to show; this work is on bioRxiv, so trust me, it's a good model, and we can talk about it afterwards. Once she had the models, she asked how the dendrites of these cells integrate information, and to her surprise she found that the dendrites of these cells have two distinct modes of integration. There are some dendrites that integrate their inputs in a sublinear manner, shown in purple here, which is what we would expect based on the studies of others, but there are also some dendrites that exhibit a dendritic non-linearity, the generation of sodium spikes in this case, and therefore show supralinear integration, and you can see them here in two of the representative morphologies. Right, so we have two modes of dendritic integration, which is cool. So what is the reason, what is the main biophysical mechanism underlying this type of integration? Well, the obvious one: the sodium spike. If she blocks the sodium channels in these supralinear dendrites, their integration properties become purely sublinear, much like the other type of dendrites in these model cells. You may say that's obvious, but why then have two types of dendritic non-linearity? Why aren't all of them supralinear, or all of them sublinear? Is there anything special apart from the conductance of the sodium channels? And I should also mention that the conductance of the sodium channels is very low in these dendrites, about 10 times lower than that of the cell body, in accordance with experimental data. So it's not that we have a lot of
sodium in these cells; we have a realistic amount of sodium. So what else, in addition to sodium, determines the difference between these two integration modes? We turned to morphology, and what we found was that if you look at the length of the supralinear versus the sublinear dendrites in the hippocampus, there is some statistical difference; if you look at the diameter, the difference is much bigger. If you look at the length and the diameter in the prefrontal cortex, you don't see the same thing: here the diameter is about the same, but the length is more significantly different. So there seem to be different morphological rules underlying the difference between the two integration modes in the two areas we looked at. In both of them, however, if you look at the volume, which is essentially a combination of the length and diameter, this feature is statistically different between the two classes, and as a result the input resistance of the two branch types is statistically different. So, in conclusion, we have some kind of morphological determinant of these two integration modes, which is essentially the volume, the combination of length and diameter. But is this difference causal? To answer this question, since we have models, we can causally manipulate the morphology. What we did is essentially to look at the distribution of the two types of dendrites in our control condition and then go ahead and change their length and diameter so as to make them all look like supralinear dendrites, or all look like sublinear dendrites. So what do we get? This is the distribution in control conditions; you can see that you have about half supralinear and half sublinear dendrites. If you turn them all into supralinear-looking dendrites, you get a substantial change in this distribution: more or less all of them become supralinear. If you do the opposite, you again have a
change, so you have more sublinear dendrites, but not all of them. What we can conclude from these manipulation experiments is that volume, or the length and the diameter of the dendrites, has a causal effect on the integration mode, but it's not the only factor, because if it were the only one, these two distributions would be one hundred percent changed. So morphology plays an important role, and we have sodium conductances and morphology as the key determinants of the two integration modes. Okay, so we have two types of dendrites in fast-spiking interneurons; what does this mean for the way they process information? Can we learn anything new beyond what we knew before? What we did next was to assess how the cells respond to synaptic input depending on how this input is arranged onto their dendrites. Since we have two types of non-linearities, we expect to see differences in the responses of the cells. So we did an experiment in which we arranged synapses either in a distributed manner, covering a lot of their dendrites, or clustered within a few dendrites, for both the hippocampus and the prefrontal cortex, and looked at the mean firing rate of the cells. This is just one of the representative morphologies, but across all eight morphologies we simulated, we found that these cells preferred the dispersed arrangement: they had a higher firing rate when synapses were randomly located onto their branches than when they were clustered in a few dendrites. This is the opposite of what we know about pyramidal cells; pyramidal cells prefer to get clustered inputs, because that makes it easier for them to fire dendritic spikes and increase the firing at the cell body. Okay, so this is unintuitive, let's say, given the fact that we have supralinear dendrites in these cells. So why do these interneurons prefer a dispersed arrangement? To answer this question, which was indeed puzzling to us and the reviewers, we had to go into the literature, and we did a lot of reading,
and we found some interesting information along the following lines. First, these cells do not have a lot of NMDA receptors; their non-linearities come from sodium spikes, so these non-linearities are really fast and don't capitalize on the temporal integration properties of NMDA receptors. Second, the dendrites of these cells are primarily thin: the diameter is very small, and coupled to these thin diameters the input resistance is high, which means that a fast depolarizing EPSP, one that doesn't summate temporally, is very likely to activate hyperpolarizing conductances. Which brings me to the third fact: the branches of these interneurons have a lot of A-type potassium channels, so a strong, fast-rising depolarization will also activate the potassium channels very strongly. To see whether this hypothesis, which had been proposed theoretically in the literature, is true, we went into our model and essentially closed all the A-type potassium channels and increased the diameter of the dendrites to 2 microns to counter the effect of that change, and what we found is that these manipulations made the responses to the two types of arrangements identical: there was no longer a preference for the dispersed arrangement. So the take-home message is that yes, it is highly likely that the morphological and biophysical properties of these cells make them prefer a dispersed arrangement, opposite to pyramidal neurons. Now that we have this information, the question that comes naturally next is: what is a good reduction of a fast-spiking interneuron, a cell that looks like a pyramidal neuron in the sense that it has supralinear and sublinear dendrites, but unlike a pyramidal neuron doesn't like clustered inputs? How do I describe this cell? To answer this question we took all of the eight biophysical models and tried to approximate their firing rates using abstractions described as artificial neural networks. This is
the point-neuron hypothesis that we've known so far: summing the inputs and passing them through some kind of non-linearity at the cell body. And this is a modular artificial neural network, where we separate a layer of dendrites with supralinear integration properties from a layer of dendrites with sublinear integration properties; these are essentially hidden-layer neurons with these features, and once we have these two types of hidden-layer neurons, we pass them through the somatic non-linearity. We tried a lot of different configurations and transfer functions for these cells in order to come up with the ones that best match the data. What we found was that this two-layer modular ANN explains the firing rate of these cells, for a very large set of different inputs, much better than the linear ANN, which would be the point neuron. This is the two-layer modular ANN for the hippocampus and the prefrontal cortex; in blue is the linear ANN, which, as you can see, is not as good, but it's not terrible: the correlation coefficient here is 0.75 and 0.85, compared to 0.96 and 0.95 for the modular ANN in the hippocampus and the prefrontal cortex respectively. So the linear point-neuron assumption is not terrible. Why is that the case, given that we know the dendrites of these cells are not linear? Oh sorry, I forgot one slide; I'll come back to this question. The first thing we wanted to find out is whether both types of non-linearities are necessary, or whether just one would be sufficient, because they essentially increase the flexibility of the network. So we looked at similar ANN models with only sublinear hidden neurons, or only supralinear hidden neurons, and the answer to this question was that yes, both are necessary: this is the supralinear ANN, this is the sublinear ANN, and this is the modular one that has both, and in all of the morphologies we tested, the modular one did a slightly better job. Okay, so I'm coming
back to the question of what it is that these other models explain well, while the modular ANN is ideal. We have two types of variability in the datasets we create. There is variability generated by the number of synapses that are activated, and this is a kind of variability that is easy to capture with very simple models: if I stimulate the cell with 5 synapses or with 50 synapses, I expect the firing rate to be much higher for 50 than for 5, and a simple linear model would be good at detecting that. And then we have a type of variability that is very challenging for a linear model, where I change only the arrangement of the synapses but not their number, so the power, the strength of the input, is the same, but where the synapses are placed changes. This is a kind of non-linearity a linear model should not be able to detect, because the location could change from a supralinear to a sublinear dendrite, and a linear model does not have that information. To understand whether this is what the modular network explains better, we looked at how well we can explain sets of equal power: we take just 20 synapses and change their location, 40 synapses and change their location, 60 synapses and change their location, and we look at how well these models do on these different datasets. And here we can see that the two-hidden-layer modular network model is doing a superior job at this particular task; this is its performance compared to everything else, and look at how poorly the linear model is doing in this case. Okay, so now we have a detailed biophysical model, and a reduction of it into a two-stage ANN that is a better description of the processing of the cell in terms of mean firing rates. But then the important question to ask is: what is the functional consequence of having this type of interneuron? Do they contribute something to information
processing, to learning and memory, at the circuit level? To answer this question we built a computational network model of, let's say, the hippocampus; it's a generic model, so the exact area doesn't matter, a model that is capable of forming associative memories, and most of the data used are based on the hippocampus. This model was published in 2016. We took this microcircuit model and extended it by incorporating fast-spiking interneurons with the two types of dendrites, sigmoidal dendrites and sublinear dendrites, and also dendrite-targeting interneurons, that is, interneurons with dendrites but without such fancy properties. The pyramidal neurons in this model are all equipped with supralinear dendrites, based on our initial work. So we have this microcircuit model, and then we train it to learn multiple memories and ask what the contribution of interneurons to this process is. How do we learn different memories? Essentially by incorporating various plasticity rules. This model has four types of plasticity rules: the classical LTP/LTD that depends on protein synthesis, where the protein synthesis can be somatic or dendritic or both (this was published in 2016); homeostatic plasticity, namely synaptic scaling across the entire network; plasticity of neuronal excitability, which is essentially a change in the somatic excitability after learning; as well as axonal rewiring, so synapses are allowed to contact and retract from the dendrites of the cells until they find suitable partners. We trained this network model to form one memory by repeatedly presenting the memory, as a random Poisson spike train lasting about one second, and letting the plasticity rules operate, and the next day, after 24 hours, we present to the network a small part of the memory and look at the activity of the neurons, and if the same neurons light up that were activated during learning, or a big proportion of them,
then we consider that the memory has been learned, and we end up with a microcircuit, a small portion of the network, that encodes each one of the memories we learn. Okay, so what do these two-stage interneurons do for memory encoding in the network model? To answer this question we look at the properties of a given memory. One of these properties is the size of the engram: typically the size of an engram is between 20 and 30 percent of the neurons in the network, as reported by Alcino Silva. We look at the size of the engram when we have linear dendrites or non-linear dendrites (non-linear meaning both supra- and sublinear in this case), and we find that the size of the engram is reduced when we have non-linear dendrites. So we have smaller engrams; we have engrams with lower firing rates (this is the non-linear and this the linear model); and we have engrams that are sparser, which means that the activity of the population in the network is lower: higher sparsity, lower activity. What all of these findings essentially tell us is that two-stage interneurons bring resource savings. We don't see a major change in the way the memory is encoded; we just see that the memory is encoded using fewer resources: fewer neurons, lower firing rates, sparser networks. So this is the advantage that an interneuron with non-linear dendrites brings to the circuit. What about more complex functions, like the interaction of memories? A few years ago the lab of Alcino Silva presented a very cool study where they showed that if you learn memories in a sequence, separated by a few hours, these memories interact with one another, and the interaction arises because they are stored in the same population of neurons. If you learn memories separated by five hours, the overlap between the two memories is on the order of about 20 percent, whereas if you separate them by 24 hours or seven days, there is no overlap in the populations that encode the two memories. So the suggestion was that the cellular
principle by which we link memories over time is by directing them into the same population of neurons. So we wanted to ask whether non-linear interneurons contribute something to memory linking. We did the same experiment in our model: we presented two memories separated by different time intervals and looked at the overlap of the neurons that encoded these memories. The experimental data report around 15 to 20 percent overlap, and what we find is that if you separate the two memories by one hour, the non-linear model has a similar overlap, nearly 20 percent, whereas the linear model has a much higher overlap; and if you separate the memories by 24 hours, there is virtually no overlap in either case. So the take-home message here is that with non-linear interneurons, the interaction between memories is achieved with a more physiological, lower amount of overlap, which is also a way of avoiding confusing the two memories with one another. To conclude, what I presented to you today is a list of predictions that came out of multiple models of fast-spiking interneurons. The first prediction is that two major modes of dendritic integration coexist in these cells; so far no experiment has demonstrated this, and we're waiting for someone to do it. We find that morphological features and sodium spikes gate the supralinearity in the dendrites of these cells. We find, or rather our model tells us, these are all predictions, that, contrary to pyramidal neurons, interneurons prefer a scattered arrangement of inputs rather than a clustered one. As a result of these features, these cells are better represented by two-stage ANNs with the two sub-populations of dendrites incorporated in the models. This two-stage integration provides resource savings with respect to the encoding of a single memory and the binding of at least two memories over time. And this work comes with a set of
tools that include detailed biophysical models, reduced single-cell models, and artificial neural network models, as well as a large-scale circuit model, all of which are freely available to the community, and we hope that people will find them interesting and use them. With that I'd like to thank all of the people who did the work (I just highlighted the work of George and Alexandra in this presentation), our collaborators, and all of our funders for their generous support, and of course you for your attention. Thank you; I hope I didn't go over time.

Can I request you to use the mic sitting in front of you, as the questions and answers will be recorded. So, no questions? Yeah, go ahead.

So, I think one of the things I find interesting about some of these models, and there was a similar paper out from Máté Lengyel and Tiago Branco about this as well, is that although there are obviously some components that you need the dendritic non-linearities to explain, at the end of the day there's a surprising amount of variance that you're explaining with a purely linear model as well. So to what extent do you think the non-linearities are effectively adding details to what's going on, as opposed to being the core computations occurring in the circuit? And to what extent can you maybe capture a lot of the core computations even in the absence of those non-linearities? You know, like, say with the engrams, you might not have a realistically sparse level of engrams, but you're still getting a similar kind of principle of engram formation, for example. So I'm kind of curious about your thoughts on that.

That's a very good question; I also get that all the time for pyramidal neuron non-linearities. I think that in the brain, the reason for having these kinds of non-linearities is essentially to optimize, let's say, storage, because of size or whatever constraints, energy,
and so on and so forth. So from what I've shown today, there's nothing that you actually miss in terms of the critical computations by thinking of these cells as point neurons. But having said that, maybe we haven't looked into all the kinds of computations that are required, or that our brain is actually doing, in order to find this missing link. So I would not like to say that linear is okay; it's okay for some aspects of what we're interested in and some of the questions we're asking, but maybe it's not okay for others.

Great talk, very interesting stuff. My question is really just to take it a little bit further. As you know, there are now really clear data showing that there is heterogeneity in, for example, the principal cells. If you look at CA1 (and CA3 is clearly heterogeneous too), the pyramidal cells, and critically the CA1 pyramidal cells, for example, contact the basket cells very differentially depending on their subtype. For example, if you are a pyramidal cell in CA1 and you project to the medial prefrontal cortex, you are much more likely to innervate the fast-spiking cells than if you are a neighboring pyramidal cell that projects to the amygdala. We only know, though, about the differences in the numbers of these connections from the excitatory cells to the interneurons, but I wonder: could you take what we know about the behavior of these pathways and, using your modeling approaches, make a prediction about whether, in addition to the differences in the connectivity numbers, there is also a differential distribution, for example, of how the pyramidal cells innervate the fast-spiking baskets?

Yes, that's a great question. I wish there were data out there to help us, but there are none to my knowledge. (But you could make predictions.) Yes, you could certainly make predictions, and actually, I haven't shown anything about this, but we have... I mean, depending on whether the inputs
land on sublinear, let's say, versus supralinear dendrites, you will have a different response at the soma, and you may be able to play with these two modes of integration depending on, you know, the population of inputs that comes in, and this would essentially allow the cell to operate under different regimes. If most of the sublinear dendrites are engaged, with distributed inputs, you more easily go into the gamma range of firing that these cells are optimized for, whereas if most of the supralinear dendrites are activated, with clustered inputs, you have lower firing rates and more targeted outputs. So it allows the cells to exhibit two different, let's say, operating modes depending on the inputs. That's a very good point; we should probably incorporate this in the study as well.

I just wanted to add, I guess, a comment: multiplexed computation is perhaps another way of saying that, and one of the things we showed is that you can actually do a sort of linear computation at the same time as doing computation with scattered versus clustered synaptic locations. (Yeah, that's true, yes.)

Excellent. I just wanted a bit more clarity on the last part, the overlap of patterns: what was changing in the model to get less overlap as a function of time? Nothing; we just changed the dendrites of the two-stage interneurons, in one case they were... (So it's just a comparison of those two? Sorry, I missed that.) Yes, so this is when the dendrites are purely linear, and this is when the dendrites are non-linear, meaning the two modes, and if you have non-linear dendrites you have a smaller overlap than if you have linear dendrites. Okay? Yeah, that's it; we haven't changed anything else.
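The overlap measure discussed in this last exchange can be made concrete with a small sketch: treat each engram as the set of neuron IDs active above criterion during recall and compute the shared fraction. The set-based normalization and the neuron IDs below are illustrative assumptions, not the paper's exact analysis:

```python
# Sketch of an engram-overlap measure for memory linking: each engram is the
# set of neurons recruited by one memory; overlap is the shared fraction,
# normalized by the smaller engram (an illustrative convention).

def engram_overlap(engram_a, engram_b):
    a, b = set(engram_a), set(engram_b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Hypothetical neuron IDs: two 10-neuron engrams sharing 2 neurons.
memory_1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
memory_2 = [9, 10, 11, 12, 13, 14, 15, 16, 17, 18]
overlap = engram_overlap(memory_1, memory_2)   # 2 shared of 10 -> 0.2
```

With a measure like this, "linked" memories (short intervals) show overlap near the ~0.15-0.2 reported experimentally, while memories separated by a day or more show overlap near chance.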