So this is the third lecture of today, and now we actually talk about science in a stricter sense. We talk about cortical variability, in particular cortical variability dynamics. I will explain what we mean by this. Some of you will know about it; it has become a rather famous topic, and by chance it was the topic of my PhD thesis, which was a while ago. Recently it has attracted a lot of interest, so I will give you an overview of the experimental observations, the phenomenology, but also of the recent models that people have come up with to explain it. And you will measure variability in the data today, time-resolved and static, two types of variability, and this can be very helpful for the challenge you are going to face next week. You don't know about that challenge yet, but it is going to be an interesting one, and this will be quite useful. Okay, so this is the outline of this lecture. Let me start with some motivation; this is three slides, first slide. If you look at neocortex, with the only exception, let's say, of primary auditory cortex, maybe, you usually expect permanent activity. That is, there is always spontaneous activity. This is an intracellular recording from prefrontal cortex in an anesthetized rat. You have seen this type of experiment before in my first lecture. What you see here is membrane potential over time, five seconds, one intracellular recording; that one, by the way, is from your supervisor. Now, why is this membrane potential at minus 50 and not at the textbook minus 70, or minus 55 for that matter? Yes, somebody else maybe also. So why is this not minus 70? Yes, the answer is that there is constant input impinging on this neuron. The minus 70 is the neuron in vitro, the neuron in culture, the neuron in isolation. That is what the equations tell us, due to reversal potentials, driving force, et cetera. But in the real brain we have lots of inputs, and this here is a zoom-in; this is now 100 milliseconds.
This is a spike; I just cut it off here. What you see are lots of fast upstrokes, which are EPSPs, excitatory postsynaptic potentials. You can also see IPSPs, inhibitory postsynaptic potentials, here. On top there is certainly also some measurement noise; it is not all fluctuation from input. Anyhow, you can nicely see how the potential builds up before the neuron actually generates an output spike. So we have permanent activity, and what people argued, or the textbook knowledge, is that in neocortex spiking is irregular. By irregular we mean that it is not tick, tick, tick like a clock, but with uneven intervals. If you do recordings, you know this sort of noise from just connecting a speaker to your recording setup. That is an issue we are going to come back to later. So here again are the spikes, with irregular intervals. This is a repetition of the same experiment, now with data from a monkey. The time axis and trial number are not really important here; the point is just that you see this irregularity. But the striking feature that was reported in the 90s, and that people talked a lot about in the late 90s and 2000s, is: where does this large trial-by-trial variability come from? And this is a classic paper, really a classic, by Amos Arieli and co-workers, with Ad Aertsen, the last author, the retired professor in Freiburg. So what do you see here? You see recordings from area 17 of a cat, visual area 17, which is approximately V1. The cat is anesthetized, its eyes are open, and it watches a video on a, at that time, classical screen. It sees the same video over and over again. And while the cat sees this time-dependent visual input, they record from a 2 by 2 millimeter, I hope, patch of the surface of cat area 17 with voltage-sensitive dye imaging.
Voltage-sensitive dye imaging has gone somewhat out of use these days, but it is a dye that binds to the membrane and reports the transmembrane voltage by changing its fluorescence, I believe, or maybe its reflectance, I am not 100% sure. So what you see here is basically a membrane signal over many, many neurons. This is time zero, when the short movie stimulus starts, and this is 112 milliseconds later. A is the first trial, B is the second trial. And what you should see is that they do not look much alike. This you record in one trial, and this in the second trial. It is an identical stimulus, the cat cannot move, it is not awake, there is not much active sensing going on. At the same time they had an electrode sticking in at this position, and they recorded local field potentials and single-unit activity from this one electrode. And this here is just the voltage-sensitive dye signal at this position, at this pixel. And basically it is not obvious that these two trials have anything in common. The spikes are completely different, the LFP looks different, and so on. I think this is a very nice paper, and they also have an interesting interpretation, but here it mainly motivates the problem: where does this variability come from? And another question is, if this is meant to be noise, how can the brain actually work under such noisy conditions? What do we do? In experiments, what has classically been done, or what we do as a first, let's say, naive approximation, is to say: we just repeat the trial 100 times, we have 100 trials, and we average. Then we see some average response. But obviously, for the brain, life is single trial.
If you have to cross a busy street in Berlin, for example, you cannot say: I do it 10 times and then I average, and I survived with probability 0.8. That does not work; you are either dead or alive. So the point is that the brain somehow has to work in single trials, and that is why understanding the nature of variability is so important. Then there is another classical paper, by Shadlen and Newsome in 1998. They looked at visual cortex, if I remember correctly, maybe it was area MT; I have not read it for 20 years. Anyhow, what they reported for the visual areas they measured is this: they counted the number of spikes in a little window like this one, across trials, and they measured the variance and the mean of this spike count. And they found that the variance is about 1 to 1.5 times larger than the mean. The variance of a count divided by its mean is what we call the Fano factor, named after the physicist Ugo Fano. So they said: the Fano factor is larger than 1, the variance is larger than the mean. And this is a bit striking, because consider the Poisson process; you have probably heard of it. The Poisson process is a stochastic point process that is highly random in a sense; it has maximum entropy under a certain description. And it has a Fano factor of exactly 1: for the Poisson process, the variance equals the mean, and that is already really, really variable. So why should the cortex be even more variable than that?
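This count-based definition is easy to verify numerically. A minimal sketch in Python; the rate, window length and number of trials are arbitrary choices of mine, not values from any of the experiments discussed here:

```python
import numpy as np

def fano_factor(counts):
    """Fano factor: variance of the spike count across trials divided by its mean."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(0)
rate, window, n_trials = 20.0, 2.0, 10_000    # spikes/s, seconds, repeated trials
# spike counts of a homogeneous Poisson process in the observation window
counts = rng.poisson(rate * window, size=n_trials)
print(fano_factor(counts))   # close to 1: for Poisson counts, variance = mean
```

The same estimator can be applied directly to per-trial spike counts from recorded data.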
One conclusion that some people actually drew from this paper and others is that single-neuron spiking is more variable than a Poisson process, and that we therefore need processes that are even more irregular, more noisy, than Poisson. This has led a lot of modelers, for example in cortical network models, to think they need CVs and Fano factors larger than 1. I think this is simply a misunderstanding, but that is another issue. As a matter of fact, it is not true that single-neuron spiking as such is more variable than a Poisson process, and that is what I will try to convince you of now. Unfortunately, I have to leave part of this out. So this is still motivation: what are the sources of trial-to-trial variability in the brain? Is this noise, a nuisance for the system? Where does it come from? And, as a more methodological or technical question: how can we measure it reliably? That is important for you. I borrowed this sketch from an older paper by DeWeese and Zador. They depicted a neuron and said: we have output spike trains, we have presynaptic input spike trains, we have synapses, which have channels, and so on. Usually we look at output variability, that is, how variable the action potential trains, the spike trains, of individual neurons are, and this depends on intrinsic sources, as I would call them; they call it private noise. For example, synaptic transmission is actually quite variable, something that is often, actually mostly, ignored in network simulations. We and others have done experiments on cortico-cortical transmission, and you see a large variation in amplitudes, you see failures, and so on, from one transmission event to the next. But there are of course also extrinsic sources, namely the input: is the input already variable?
And of course the output of one neuron is input to another neuron. These are the questions one can address. I took one part out because it would have taken too long today: neuron-intrinsic sources of variability. There is a body of literature where people looked at how reliable dendritic integration is, how reliable synaptic transmission is, how reliably action potentials invade the axon, what the intrinsic channel noise is, and so on. And for cortex my conclusion from what I am not showing would be: of course there is variability, but it does not at all account for the very high variability we see in vivo. It cannot explain the trial-by-trial variability; it makes a contribution on a much lower scale. In the references I will give a list of papers on this point that you can have a look at. Yes, a question? Okay: what is the alternative to variability being noise to the system? That is a very good question. The answer is that we might be looking at it from the wrong perspective, and in that sense variability might be epiphenomenal. The system creates this variability; it is certainly not thermal noise or something like that. So the question is whether it follows from the function of the network. In the end I will try to conclude that our current models suggest that the way the network functions leads to this high variability, but that the variability itself has nothing to do with the processing. I will try to arrive there. Okay. So: neuron-intrinsic sources are too small to explain it, let's put it this way; that is at least my opinion. The next thing, before we look into more data analysis, is how we measure interval and count variability; we will look at both, and that will be important. I can point you to this nice book that Sonja Grün and Stefan Rotter edited already nine years ago; there is one chapter on exactly this issue.
So we have already established what a spike train is. If you have a spike train, from a statistical point of view there are exactly two random variables that we can estimate, that we can work with. We have the intervals between spikes, which is a continuous random variable, and we have the number of spikes, which is discrete, because if you count you can only count seven or eight, but not seven point three. This is the only data we have access to, so to speak. Then, of course, we come up with models and try to fit them to explain the data, but for the time being we only have intervals and numbers of spikes; the number of spikes you can translate into a firing rate if you wish. Now, if we go from statistics to stochastic theory, to the mathematics, there is a whole field called point process theory, where you define mathematically processes that generate discrete events in time. And if a process is properly defined, then the two variables are actually closely related.
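This link between interval and count statistics can be illustrated with a short simulation. A sketch under the assumption of a gamma renewal process (a standard choice for non-Poisson renewal spiking; shape, window length and trial number are my own illustrative values): the squared CV estimated from intervals and the Fano factor estimated from counts in long windows come out close to the same value.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = 4.0            # gamma-distributed ISIs with mean 1; true CV^2 = 1/shape
n_trials, window = 500, 200.0

# interval statistic: squared coefficient of variation of the ISIs
isis = rng.gamma(shape, 1.0 / shape, size=100_000)
cv_sq = isis.var(ddof=1) / isis.mean() ** 2

# count statistic: Fano factor of spike counts in a long window, across trials
counts = []
for _ in range(n_trials):
    spikes = np.cumsum(rng.gamma(shape, 1.0 / shape, size=int(2 * window)))
    counts.append(np.searchsorted(spikes, window))   # spikes before the window end
counts = np.asarray(counts, dtype=float)
ff = counts.var(ddof=1) / counts.mean()

print(cv_sq, ff)   # both near 0.25: interval and count variability are linked
```

For a renewal process the two measures agree in the long-window limit; the formal relation, and what happens when intervals are serially correlated, is exactly what the lecture turns to next.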
They are closely related, and maybe this abstract formula explains it quite well. τk is the sum over the first k intervals; say one, two, three, four, this here would be τ4. And one can say that the probability of τk being smaller than or equal to t, P(τk ≤ t), is the same as the probability of observing at least k spikes up to time t, P(N(t) ≥ k). So, trying to be intuitive: the probability of having seen the k-th spike by time t is the same as the probability that the sum of the first k intervals is at most t. The point I want to make is that the two measurements are related, and related in an interesting manner. Now we can estimate statistical quantities from our data and relate them, to infer what type of process we are dealing with; we could, for example, exclude some processes. What is often used is the Fano factor, which we talked about: across repeated trials, in a fixed observation window, I count the number of spikes, five, five, five, four here, I compute the variance of the count across trials and divide by the mean. And we also use the coefficient of variation of the inter-spike intervals, usually in squared form: the variance of the inter-spike intervals divided by the squared mean. So we take many intervals, like all of these for example, compute the variance and divide by the mean squared; the CV itself would be the standard deviation divided by the mean. It is dimensionless, which is why it is called a coefficient. The interesting thing now is that these two are nicely related for most processes we look at. For example, for stationary processes, by which I mean that the intensity, the firing rate, does not change in time, which is not very realistic, but we have to deal with it for the time being, the following formula holds,
something you can dig out of a very cool book from the sixties. What it says is that the Fano factor equals the CV squared times one plus two times the sum of the serial interval correlations: FF = CV² · (1 + 2 Σk ck). So, if intervals are correlated: what does that mean? A positive correlation means, for example, that intervals are getting larger and larger and larger; we will look at this in an example. You serially correlate intervals: you compare the fifth interval with the fourth, the fourth with the third, the third with the second, and so on, and you get a statistic. It is like cross-correlating two variables, correlating across time with a lag. Then you ask for the higher lags as well and sum them all up; so this means FF = CV² + 2 · CV² · Σ ck, basically. Even more interesting: if we talk about renewal processes, then the two measures are equal, and the Poisson process, for example, is a renewal process. I will explain in a second what a renewal process means. Important is that this is of course a limit result; obviously we do not have an infinite amount of time to record from a monkey, so we have to come to some practical terms, and that is what this part of the lecture is about. It is a bit boring, but it is methods. Okay, so what is a renewal process? I give you a very famous example: can anybody recognize what this could be? This runs from the middle ages up to eight years ago. Do we have any Catholic people here? It is popes, yes, exactly, popes. The sequence of popes is a renewal process, because whenever one pope dies, you replace him. It means that the lifetime of the fifth pope is independent of the lifetime of the fourth pope and the third pope, and of any pope in the future. And they are old;
they start old in any case. So the idea of renewal: this whole theory comes from industrialization, how can we make workers work all night, how many light bulbs do I need, how often do I have to replace them; that is where it comes from. Basically, renewal means you renew: you install a new pope, and he has no relation to the old one. We also quietly assume an orderly process, meaning that you never have two spikes at the same time. That is true for neurons; with popes it is a bit different, we had anti-popes in earlier times. And I actually measured the CV for the popes; it is extremely high, and if you look into it more closely, there are a few popes whose reigns were very short because they were assassinated, and that makes for high interval variability. But I think it is a nice illustration of what a renewal process is. The question is: are neurons renewal processes or not? Renewal processes are very often used in simulations, and you can also make them with changing firing rates. A renewal process by definition has no change in firing rate, but you can, how shall I say, enforce this; there are ways of combining renewal theory with changes of intensity. Okay, now: how can we measure CV and Fano factor? Here is a simulation. You have spike trains, repeatedly, and this axis I call operational time. Operational time means time measured in units of the average interval; if the rate doubles, the same stretch is just half the size in real time. So here we observe only a very short period, two intervals on average. That is of course really short, and one should avoid it, but it actually happens: you might look at a window of 400 milliseconds before and after a stimulus, and before the stimulus the rate may be so low that you only have one, two or three spikes in your observation window. Typically, what
happens is the following: with a finite observation window, this would be the true underlying interval distribution, the distribution of intervals I used for simulating this, and this is the observed one, because by definition I cannot observe any interval longer than two. So it is truncated here, and then renormalized, and what you get is narrower. In other words, the CV you measure, the interval variability, is always smaller than the true one, because you do not have infinite time to observe. The black lines here show this as a function of the observation window, in operational time, for different processes. For this process, for example, the true CV squared is one, but if you measure it in small windows, you underestimate it. That is all you need to know; it is not a bad thing, but you have to know it. Then we come up with a rule of thumb for practical purposes and say: try to have not less than five spikes on average in your window. For the Fano factor it is a bit more complicated, but the Fano factor always tends to one for very short windows; its bias comes in rather late. You can think of it like this: if the window is almost infinitely small, then you either have a spike or no spike, no matter what the process is, and the variance becomes equal to the mean, because you approximate a Bernoulli process. If you have this in mind, it just means: use larger windows if you want unbiased quantities; use small ones, knowing the bias, to look for example at dynamics. That is fine; as long as you know what you are doing, you can do almost everything. So conclusion one here is: point process theory links interval and count statistics. That is interesting, because we can now test what our real data look
like. And the Fano factor and the CV have this estimation bias; okay. Now let's look into cortical variability, and I do not start with the behaving monkey but with experiments that we, and others, did to understand this a bit better. We started by injecting noise currents, in vitro, into pyramidal neurons of rat somatosensory cortex, in an acute slice preparation. This is the glass pipette that you see here in the infrared image, the scale is 20 micrometers, this is the pyramidal cell, and up here is the pial surface. What we inject through the pipette into the neuron is a noisy current, and the noisy current is constructed from point processes: we assume excitatory presynaptic neurons and inhibitory presynaptic neurons, I convolve their spike trains with excitatory and inhibitory synaptic currents, and if I sum them up I get a fluctuating current. This is the voltage recording; the action potentials are cut off here so that you can actually see what it looks like, and it looks pretty similar to an in vivo recording. We fool the cell by injecting current and observe the cell's output. And we can do that in different ways: we can change the input statistics to be more or less variable. For example, here we have only excitatory inputs, Poisson-distributed in time, these red ones, and this is what the noise looks like; a low-pass filtered Poisson process like this is called shot noise. We inject this, and this is how the neuron reacts; this is the voltage trace, and it is always the same neuron in all four conditions. On the right side we balanced excitation with inhibition, not fully, but at two to one, so with a positive mean current, and something changes: you can see it by eye, the output becomes more irregular, and there are more spikes, because more of the drive now comes from fluctuations, which drive the
neuron. In the lower left we again used pure excitation, but now not a Poisson process but a cluster process: the inputs come in bursts, let's say, so the drive becomes even noisier. And if we combine this again with inhibition, here, it becomes really, really noisy. So for these four conditions we measured the Fano factor and the squared CV. As I said, on the identity line here the two would be equal, as for renewal processes. And if we do this with these neurons, we find: these are the different noise conditions; with the strongest noise we get the highest Fano factor and CV², these triangles here, and with the lowest noise condition they are all down here; but basically they all fall nicely on this identity line. The shading in the back is a confidence region that we obtained by simulations, because of course there is estimation variance: we only record for several seconds, we have a limited amount of data, so we always have estimation variance as well. So this looks pretty nice, and it comes out at rather small values. You will see later what happens in vivo, and I said initially that Shadlen and Newsome and others reported Fano factors of 1 or 1.5 or so; with this in vitro approach we are at the order of 0.5 or smaller, and Fano factor and CV² are roughly equal. Other people have looked at one or the other of these two measures, and their results fit with ours. But we cannot easily explain anything in vivo with this, so let's do in vivo recordings. This is again an intracellular recording in vivo from an anesthetized rat, but now we used periods of low anesthesia, and we basically cut out stretches of the recording where there were no down states. Then it looks like this, pretty similar to the in vitro injection case. Now we have only 8 cells, but we again measured the Fano factor and CV squared, and what we
find is that again they are really small; the scale goes from 0.1 to 0.6, so they are around 0.5 or smaller, only half the value of a Poisson process. And, interestingly, the orange lines, which are the real recordings, all lie on average to the left of the identity line, and that means that the Fano factor is actually smaller than the CV squared. What could be the reason for this, if the two are not the same? I showed you the equation before; do you have an idea? Yes: we can now test whether serial correlations are the reason, because the data here are rather stationary; we took pieces where the rate does not change much. If you shuffle the intervals, if you just randomize their order and do the same analysis, the CV is by definition the same; it does not change when you shuffle intervals. But the Fano factor does change, and the shuffled data, the grey ones, suddenly lie more or less on the identity line. So shuffling intervals destroys this smaller-than relation, and this might be due to serial correlations. What type of serial correlations do I need for the Fano factor to be smaller than CV squared? From the formula you can basically read it off: they have to be negative; if the sum of correlations is negative, then this becomes smaller than that. And if we now estimate serial correlations from this data, at lags of order 1, 2, 3 and so on, then indeed the first-order serial correlations, between neighboring intervals, are negative for 7 out of 8 neurons, and the 8th one is the one that did not change much anyway. So there are serial correlations in this data. This here is a simulation, not so important, but there are papers out there that look at serial correlations and find them all over the place, in different systems. There are not so many reports for neocortex, because it is more difficult to do there, but I can tell you that one possibility, one cellular mechanism that induces negative serial
correlations is adaptation, spike-frequency adaptation. And that is ubiquitous across the animal kingdom; a lot of spiking neurons have the machinery to produce adaptation currents, including cortical neurons. There has been a nice paper on this, I think from the group of Adrienne Fairhall, if I am correct. So, right, we have to deal with these issues: real neurons do not seem to be renewal processes, they have negative serial correlations, and if you go back to the in vitro data and look very carefully, you find it there too. Now, there is a nice paper from the lab of Shigeru Shinomoto, a theoretical neuroscientist, who asked, I don't know, a hundred or so experimenters for their data and talked to them for so long that they got tired and gave him the data. He then analyzed different mammalian species and different recording areas: hippocampus, prefrontal cortex, visual cortex, motor cortex. For the regularity he did not use the CV or CV squared or any of these related measures; he used a dispersion measure, which is nevertheless similar; it quantifies regularity, so the higher it is, the more regular the spiking, a bit the opposite of the CV. And it seems that recordings in motor areas are more regular, and more irregular in sensory areas. This is also my experience; we mostly work on motor cortical data, and it is a bit more regular, so the CVs are lower; on the other hand the Fano factors are highest there, in the motor system. Yes, please? The question is: if you fit an adaptation model with some adaptation parameter, would you be able to retrieve this adaptation factor from the spike trains? So: by fitting a model that has adaptation currents to spike trains, can we infer, let's say, the spike-frequency adaptation parameters of that model? The answer is yes, and a group that has been very active in this field is the group of Wulfram Gerstner in Lausanne. They have over the years been improving single-neuron
models to predict spike trains of recordings, and yes, basically they had to incorporate some sort of adaptation to properly predict cortical spiking, which is not so surprising. There are different variants of such models around that you can fit; some are more phenomenological, some are more mechanistic, and then you estimate the parameters as well as you can. But again, it is not as easy as it is typically described in papers, I would say. Yes, please? Do you estimate the interval variability across all of the intervals, or is there a drift in the intervals from beginning to end? Overall; but it is a good question, because we have to talk once more about how to estimate these quantities, since there are different ways of estimating this variability. Basically you have to estimate it locally and then average, because if the firing rate changes, then of course over a long time the intervals will be extremely variable: they are short when the rate is high and long when the rate is low, and that is not what we want to look at. We want to look at, in a sense, the imagined stationary variability of intervals, the irregularity. This will become clearer in the next examples, and also when you work on this data, because we are going to help you and you will find out. Okay, the conclusion from this smaller part: under stationary input conditions, in vitro and also in vivo, neurons are less variable than a Poisson process; both the irregularity and the Fano factor are low, and we have this Fano-smaller-than-CV² relation, most likely due to negative serial correlations. Adaptation and negative serial correlations are actually a sink of variability: they reduce variability in the system, and that is biologically interesting and relevant. And I want to make one point here: the Poisson
process is a convenient mathematical model, but it is not a good model for recorded neurons, or for any spiking neuron, actually. Still, we like to use it in the math, and that makes sense; sometimes you want to do approximations. But don't be fooled into making network simulations and trying to be Poissonian; that is like trying to be unbiological. Good. Now to the interesting part, and this is again the data that you will use; you have seen this experiment before. The monkey sits here in the monkey chair in front of this vertical panel. At trial start he has his hand on the center here. He will be asked to move his hand to one out of six targets. At time zero he is shown the target; this is the one-target condition, I will show the other conditions later. Then he has to wait for one second, this cue will turn from green to red, and then he is allowed to move. Reaction times will be fast, because he can anticipate; he knows exactly which movement he will have to execute and can prepare it. So we have the preparation period and the execution period, separated. These are now three neurons recorded in parallel, I think from primary motor cortex in this case; this is monkey one in your data. And we look at interval variability, at inter-spike intervals, which is of course variability on a short time scale, on the order of one inter-spike interval; we measure the variance and the mean. But we also look at the Fano factor, and we should not forget that these here are grouped trials in which the monkey moved in the same direction, but in between any two of these trials maybe 30 seconds pass, because the directions are randomized, and this whole record extends over many minutes. So you actually have a much longer time scale there; from trial to trial a lot of time can pass. We do this, and I still have not
talked about how we can estimate this. So what I ask you to use in the exercise is the so-called CV2. It's a local measure, it's explained there: we only use two neighbouring intervals, basically we take their absolute difference divided by their mean, and then we go on to the next pair, and the next, and the next. That gives a very local measure that is, to a large extent, independent of firing-rate changes slower than one or two intervals. Yes, there are other methods, and actually in this rather old figure, which is from my PhD thesis, I used a different method that I won't explain in detail now.

What you see here is the Fano factor for exactly this data, against the squared CV on a logarithmic scale, and we get huge Fano factors, up to 10 or 12; that's gigantic. Try to simulate this with point processes: you can't, it's basically impossible, except if you introduce non-stationarities, and I will come to this. The CV squared is on the order of 0.1 up to 2 or 2.5, but this is also biased, because the variability is so high across trials, and in this case I pooled intervals from different trials, and then you are already in deep shit, because the intervals differ so much, I mean the rate changes so much from trial to trial. So you have to work around this in other ways, but I had a good reason for using it here. And then you see that the Fano factor is about two to three times higher than the CV squared, sometimes even four times; on average the numbers are between two and three, and the Fano factor over the CV squared is about 2.4.

Now, this is not consistent with what I showed you before in vitro. Something different is going on from here to here. These are the same type of neurons, okay, this is a rat and this is a monkey, but that's not the point, you find the same in the rat.
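Since you will compute both quantities in the exercise, here is a minimal numpy sketch of the two estimators. Everything about the surrogate data (rates, trial counts, the uniform rate drift) is invented for illustration; only the CV2 formula (Holt et al., 1996) and the variance-over-mean count statistic are the measures discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def cv2(isis):
    """Local irregularity (Holt et al., 1996): the absolute difference of two
    neighbouring intervals, normalized by their mean, averaged over pairs."""
    isis = np.asarray(isis, dtype=float)
    return np.mean(2.0 * np.abs(np.diff(isis)) / (isis[1:] + isis[:-1]))

# surrogate experiment: Poisson spiking within each trial, but the rate
# drifts from trial to trial (the across-trial non-stationarity at issue)
counts, local_cv2 = [], []
for _ in range(100):
    rate = rng.uniform(10.0, 50.0)            # slow trial-to-trial rate drift
    isis = rng.exponential(1.0 / rate, 200)   # Poisson intervals
    counts.append(np.sum(np.cumsum(isis) < 1.0))  # spike count in a 1 s window
    local_cv2.append(cv2(isis))

fano = np.var(counts) / np.mean(counts)       # across-trial count statistic
mean_cv2 = np.mean(local_cv2)
```

With these made-up numbers the mean CV2 stays near 1, as it must for locally Poisson spiking, while the Fano factor lands far above the squared CV2: the same dissociation as in the monkey data, produced purely by the rate drift across trials.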
So basically there is something here that is not intrinsic to the neurons; this is what I would like to hypothesize already at this point: there must be some extrinsic sources, let's say. And if you now look at the Fano factor over time in this experiment: this is before the monkey gets the cue, this is the 900 milliseconds of the waiting period, and this is again an equal-length period around movement onset, and these are the distributions of the Fano factors. They go from large numbers, around 2 in the spontaneous state, let's say, down to small numbers, 1.1 to 1.3, during movement; they are reduced a lot.

And this is now an important finding that has been much discussed. This is the same data, now time-resolved, and you will do this with this data: you just take a window and move it across the spike trains, and at each position you measure the spike count for each trial and then divide the variance of the counts by their mean. This is what you get out of it, and you can see that, in relation to the task triggers, the variability goes down, then comes up again, and goes down again during movement; I'll show you this later. So the variability is actually modulated in a task-relevant way.

This has been very prominently published by Mark Churchland and his group. They first showed it in motor cortex, like we did in our first paper, and then in this paper they collected data from different sensory areas and motor areas, and you find the Fano factor going down, down, down, rather sharply, and this is always at stimulus onset. Here it's a grating; here it's these drifting gratings that give you a motion impression, that's area MT; visual area V4 here, direction-sensitive, again motion-sensitive; and here premotor dorsal as a motor area: when the monkey starts to move it goes down, more slowly. That's also what we see, but it is partly due to the fact that we don't have an internal trigger to align the movement
data, as we can do with sensory stimuli.

So: trial-by-trial variability is large; count variability is much larger than interval variability, which tells us there is some inconsistency, this is certainly not the stationary process that we usually think of; and the Fano factor is reduced through the task. This Churchland paper has been very influential, and people have started to think about where all this comes from, and that is my final part now: I will describe some models.

I start with an older model that we thought of, and that is maybe partly true, we don't really know. We thought: these are different trials over time, the monkey is performing identical trials, but maybe the overall network state is changing homeostatically, for whatever reasons, neuromodulation, attention, and so on, and maybe this alone is sufficient to at least explain mechanistically why we see this high variability. This is the firing rate estimated over a long period of time, averaged of course over many neurons, from the monkey brain, and indeed the average network activity changes quite a bit, if within a small range, and these are the real trials that were done in this experiment.

So we went back to in vitro experiments, and what we did is we used noise currents but artificially introduced trials; each trial was 5 seconds long, and then we changed the intensity of the excitatory input, but only by 2, 5, or 10%. This is how it looks: here, for example, there is a little jump, but you can hardly see it, it's almost invisible for us. What does the neuron do with this type of change? The black symbols are controls, and the open symbols show you, for 2, 5 and 10%, what happens to the Fano factor in relation to the CV2. And indeed, if you have for example 5% variation of the input, only 5% more or less excitation from trial to trial, then we already get very large numbers here, where the Fano factor is about 2
times, 3 times, 4 times higher than the CV2. So it tells you that if there is a non-stationarity in the input across trials, then we get this high output variability, and this is something that will happen; the question is only how it happens. I think this is the only good mechanistic explanation right now, and I think nobody has a different idea: the input to one particular neuron is quite different from trial to trial, whatever the reason is, and this translates into a sort of drastic variability in the output, and on this scale you can get numbers similar to what we see in the monkey. Okay.

Now there is a different model, 2a I call it, and there will also be a 2b. This is rather recent, okay, 2012, seven years ago, but it's a good model, and Vahid, sitting here in the back, is also working on this type of model. So they argued: okay, balanced random networks, like the cortex models we had, don't do much, so we have to think about function, how can a cortical model do something functional, and attractor networks are attractive, interesting models for this. First of all, attractor networks don't have to be cortical or spiking, so let's talk abstractly about them: in certain parameter regimes they are multistable, so attractors can be assumed, but under spontaneous conditions the network jumps out of one attractor into a new one, while under slight stimulation, or changes of the input, you can sort of drive the model into assuming certain attractors. It is also interesting that attractors can be stronger or weaker, and this could be learned, for example, and then we could maybe use this for decision making, and certainly for working memory; there are nice models out there that can explain working memory tasks with attractor networks.

Now, if we model this in the cortex, we need, let's say, several thousand excitatory and inhibitory neurons, and we have to somehow generate attractors, and we can do that by architecture. So what they said, basically at the same time, here
Gustavo Deco and here the Doiron lab, is: okay, if we have a balanced network, we now pick subgroups of these neurons and strengthen their interconnections, so they have strong excitatory interconnections (we are talking about excitatory neurons here). So we make clusters. They also have connections to all the others, but those are a bit weaker. It's like a social group: you're strongly connected within your group and weakly connected to friends in other strongly connected groups. So we have these excitatory clusters and one inhibitory pool, and by doing this you suddenly get what we call multistability: we have attractors that can switch on and off.

So this here is a spontaneous simulation, spontaneous activity; the gray ones are the inhibitory neurons, and again you see a lot of spikes, these are many neurons. And you can actually see the clusters here: this is one cluster, and suddenly it becomes strongly active, but then it falls off, and this one switches on, then that one comes on again, and so on; the network falls into these attractors and comes out of them again. If you now cut this spontaneous activity into pieces, assume these are trials, and measure Fano factors, then of course this is only one single trial and the next trial will look very different, so you already have high variability, mainly because within a cluster you can be active or non-active; it's almost a bimodal thing for this type of model.

But the interesting thing is now the Fano factor under stimulation. So this is now a stimulation: here we stimulate one cluster. [Which one is stimulated here, actually?] Sorry, these are just the ones that are stimulated here, so five clusters are stimulated, and then they will be highly active and the others will be suppressed, because there is still this global
inhibition. And now, if we assume this to be a trial in an experiment and slide a window along to measure a Fano factor, we find: whoops, they are really high, like in the real data, and they go down and come up again, just as in the real data. If you don't have these clusters, it's just flat like that; that is just the balanced random network. So that has been, if you wish, a breakthrough; it has been a good idea, still is a good idea, and it can explain this variability.

And at this point one could say, and this comes back to your question: if this attractor-network type of processing is what happens, and we look at an individual neuron across trials in a recording, we wouldn't see this structure, right? If you randomize the order of trials, you don't see it anymore. We see one neuron, we repeat the trial, and if you have ten trials of this, there will be high variability, because sometimes this neuron is on and sometimes it's off. For us it looks variable, but for the brain that's not the point, because it just computes in different orders, or with different time courses, each time.

Now, we remodelled this, and I mean we had a bit of difficulty reconstructing these models, you have to be very careful with the parameters and the parameter range, but what we found is: the firing rates are always pretty high inside such an active cluster, and the CV2 goes down to rather small values; outside, the CV2 is normal, and don't be confused, that still means irregularity, but inside a cluster it goes down quite a bit, because it is, I wouldn't call it burst firing, but let's say regular strong firing. And this is not necessarily what we see in the data. So we have, let's say, issues with this model that we wanted to improve, and that's what I'm going to show you in 2b. So: firing rates are really high, you can only have high or low rates, and this more or less regular spiking during the active clusters is not really what we
we see in the data and the model is sensitive to simulation parameters I will show you this in a second so what we thought now is that we could do the following each of these excitatory clusters in the network also gets a private inhibitory cluster that is strongly interconnected and loosely with others each cluster has also its private inhibitor let's say more or less if we do this so this is monkey data again I showed you before this sort of factor decrease in the monkey data and this is how the CV looks like or CV2 in this case it's actually flat it does not change as I showed you before the EE model and we have this the CV is really going down and coming up again this is not really what we want that's basically referring to your question and we have another issue if we do strong stimulation we get nicely reduced the final factor is nicely reduced so this is stimulus amplitude but if we do this weak gray stimulation here then we often have an increase in jump and sometimes you don't it's sometimes just not enough to trigger this jump if we do this what we call EI clustering now we can solve this issue a little bit we have the CV now more constant and final factor is decaying so we are now talking about models and you will analyze the data but I think it's interesting to understand a bit of this interpretation that is currently around this is sort of what people including us currently think is good models to understand it yes new models are calculation models ah no ok so the new models are good that you ask the new models that we use and they were used in all these slides before is leaky integrated fire neurons full stop it's current based modeling we also tried it conductance based but we try to stay in line with the previous publications let's say ok what is the reason maybe for having this more regular spiking here but not here if we now look in this pure excitatory clustering this goes spontaneously up in this upstate here and then we see that the spiking 
becomes more regular. And if we look at the current that such a neuron receives, one of these neurons here, then it is nicely balanced before the jump, but then suddenly excitation goes up while inhibition doesn't. So basically, within the cluster the network is driven out of balance; it's not a balanced network anymore. On the right side we try to keep that balance: with the excitatory-inhibitory clustering we stay much closer to the balanced state, and that is why the spiking remains more irregular. And again, this here is the purely excitatory clustered network.

What we also find: if we want the regime where the Fano factors are high in the beginning, and this is what I meant with parameter space, so this axis is the excitatory synaptic strength within a cluster, there is only a small range where you actually find this jumping multistability. But if we do the EI clustering, then it works nicely over a larger range, and this is shown here for the number of clusters and the strength of the clusters: we get high Fano factors, as one indicator of this multistability, robustly over a larger range of parameters. So this is just a bit of an advancement over the previous models, if you wish.

Now, coming back to the monkey data, and that's the very last part: we had these three conditions. The monkey either gets a single target with a preparatory signal, has a delay period of one second, and then, with the GO signal, has to move to this one target; but he can also get two or three targets, and one of those two, or of those three, randomly, will be the final target. So he has full information here, but only incomplete information here; that's what we saw before already. Can we model this, that is one question. First of all, if you look at the Fano factor, the interesting thing is that it is context-dependent: the black is the one-target condition, it goes down here and then stays pretty low, whereas in the other two
conditions it also has this dip, let's say, but then it comes back to higher levels. And somehow this makes sense: the variability is lowest in the case where the monkey knows what he is going to do, where he has full information, and higher when he does not yet know what to do; in an attractor network, for example, it might not assume one attractor but has to juggle two or three attractors. And this is what we tried to model. If we have a network with six embedded clusters that can be attractors, and we stimulate either one, two or three of them, then we find more or less the same thing in the model: a decrease when the cue comes on, and then a significant difference between the three different contexts.

What you also saw before is this decoding. This is a directional task, so if you decode the movement direction, in the one-target condition we have essentially 100% decoding probability, but with two alternative targets it is only 50%, one half, and one third for three targets. Yes, please? [Question about the details of the decoding.] Okay, that's because we use a firing-rate estimate that is centered, so we use a non-causal kernel; you can make this causal if you want to estimate exactly, for example, the time when it jumps up, it's a matter of choice, let's say, and you will also deal with this problem.

If we do it in the model, we get the same thing, and that's quite nice: if we stimulate three clusters, the activity jumps between these three clusters and the variance is higher. And for the final target, two of those inputs are switched off, the network has to fall into the final cluster, and then we can decode it with 100% probability. So with this rather simple model, let's say with these clustered attractor networks, we can capture this sort of movement-selection paradigm.

Okay, and the very last thing is adaptation. I talked about adaptation, and I just
want to tell you: if you have neurons with adaptive currents, then we also get interesting effects. Again, in a cortical network with adaptive neurons we have this very strong reduction of the Fano factor, but only for a short time, and this is the time scale of adaptation. If neurons in a balanced network are adaptive and they are strongly excited, then they drive themselves out of balance, so to speak, can I say it like that, and this unbalancing reduces the Fano factor. We believe that this is responsible for the rather early reduction that we see in all the published data. So basically we think that a clustered network plus adaptation is what can explain a lot about this phenomenon.

And not only we: there was recently a very interesting paper where they fitted models to various data, recordings from mice, and they could only fit the data properly once they introduced adaptation. I think this is something that modellers should not forget: cellular mechanisms are out there that might be relevant. And this is what I mean with this sharp drop here; we think that this is maybe due to adaptation, and that is ongoing work of one of your tutors, Vahid Rostami.

Final conclusions. The Poisson model, and I repeat this for the third time, is not a good model for neuronal spiking, but if you are a mathematician you are allowed to use it, or a theoretical physicist, maybe. Then, sources of variability: I didn't show you this in detail, but you can try to read it up; my hypothesis is that the factors intrinsic to neurons are negligible compared to what we observe in vivo, and with the balance in a balanced network you can only explain Fano factors up to 0.5 or 0.6 or so. This is, if you wish, the neuron in a dish, or better, the neuron under stationary conditions. The excess variability, that strong variability and its modulation, can be explained if it reflects non-stationarity across trials. I think that is also
rather clear, at least for me; that is what I believe. And we can either think of slow modulations in the background or of this attractor dynamics, and possibly both are there; we also have homeostasis in the brain. And then there are sinks of variability: how is variability reduced? Spike-frequency adaptation is a strong mechanism to transiently reduce the variability of the output, and maybe this is really used for coding, quite a bit. And of course I explained the attractor networks, which could capture a lot of these variability dynamics.

So finally, I just want to thank the people that have been involved, over several years now, in all the data and analyses that you saw; I think it is fair to show them. You know Sonja already, she is co-organizer of the course, and Alexa will be here as a keynote speaker, she will arrive sometime soon this week. Vahid is here. Okay, so thank you very much for your attention.
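As a wrap-up for the exercise, the "sink of variability" from the conclusions can also be put into a few lines. This is a sketch with invented numbers and a toy interval model, not a neuron model: intervals carry a negative serial correlation of the kind spike-frequency adaptation produces, so a long interval tends to be followed by a short one, and under these stationary conditions the count Fano factor falls well below the squared CV, the sub-Poisson regime described for the in vitro case.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy interval model: a long interval tends to be followed by a short one,
# mimicking the negative serial correlations of spike-frequency adaptation
n = 200_000
u = rng.uniform(0.0, 1.0, n + 1)
isi = 0.5 + u[1:] - 0.5 * u[:-1]              # positive intervals by design

lag1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]   # negative by construction
cv_sq = isi.var() / isi.mean() ** 2           # squared CV of the intervals

# count spikes in long windows and take variance over mean
spikes = np.cumsum(isi)
edges = np.arange(0.0, spikes[-1], 50.0)
window_counts, _ = np.histogram(spikes, bins=edges)
fano = window_counts.var() / window_counts.mean()
```

For a renewal process the Fano factor of large counts approaches the squared CV; the negative lag-1 correlation pushes it below that (by roughly a factor of five under these toy numbers), which is the signature you can look for in the in vitro data.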