Okay, so I would just like to start the session with Shaul Druckmann, who was at Janelia and has recently set up a lab at Stanford University, and he's going to be talking about his work on cortical dynamics. Good afternoon everyone, can you hear me well enough? Okay, let me just start my timer so that I know that I'm on time. Okay, so I'm going to start a bit broadly, given that I assume not everyone here is a full-time neuroscientist. This might seem so broad as to be a bit naive, but I think it's actually worth thinking about. Right, so animals are pretty incredible and the things they do are amazing. You can go to an extreme example like a bee finding food and communicating it to its friends by a dance, or you can go to, you know, a mouse doing whatever it is mice do when they think they're not observed, just going about their daily business in their natural habitat, meeting predators like ducks and very important things like that, and then, yeah, he makes it in the end. The reason I bring this up is that I find it pretty incredible: there are these animals, they have these little things inside their heads, inside their bodies, and somehow that is the thing that we're trying to study. So sometimes, when you get lost in the business of dendrites and neurons and this and that, this kind of neuromodulator and that kind of neuromodulator, it helps to actually think about how amazing this phenomenon is and how challenging it might be to understand.
Right, so a slightly more adult version of that is that we're interested in understanding the neural computations that underlie behavior, and what's been very interesting in the past few years is that our ability to record neural activity from animals that are actually behaving, that are performing some kind of task, is increasing remarkably. Ken showed you a very nice example of this this morning; I'll show you another one. This is the spatiotemporal pattern of activity in a few hundred neurons recorded in cortex by GCaMP imaging in Karel Svoboda's lab. We talked a bit about calcium imaging this morning, but just to make it clear: each dot here is a neuron, and the level of its activity is color coded so that warmer or pinker colors indicate high activity. That's just a bunch of neurons that you can record, and I'm playing the activity over time while an animal is doing an extremely simple two-alternative forced-choice task with a small delay. What I want you to take away from this is not the pretty colors but rather the fact that there are all kinds of weird spatiotemporal patterns of activity here that have no obvious relationship to the task. All the mouse needs to do is figure out whether a pole came up here or here and then report it a second and a half later, and yet you find all this weird stuff popping up. So to me that immediately raises the question of what we are going to do with that. There's one option, which is to say we're going to do absolutely nothing with it: technically, in order for the brain to work and solve this task, all it needs is that the spatiotemporal pattern of activity visited by stimulus A, whatever it is, will be different from the spatiotemporal pattern of activity visited by stimulus B. I only show you the pattern for stimulus A, but you can take the pattern for stimulus B, the decoding analysis will take you five minutes, and they're very different.
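To make that "five minutes" claim concrete, here is a minimal sketch of such a decoding analysis. Everything in it, the condition-averaged templates, the noise level, and the nearest-template decoder, is a hypothetical stand-in for the real imaging data, not the analysis actually used in the lab.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_time = 200, 50

# Hypothetical condition-averaged spatiotemporal patterns for stimuli A and B
template_a = rng.normal(size=(n_neurons, n_time))
template_b = rng.normal(size=(n_neurons, n_time))

def decode(trial, ta, tb):
    """Label a trial by which template it correlates with more strongly."""
    ca = np.corrcoef(trial.ravel(), ta.ravel())[0, 1]
    cb = np.corrcoef(trial.ravel(), tb.ravel())[0, 1]
    return "A" if ca > cb else "B"

# Noisy single trials drawn around each template
trial_a = template_a + 0.5 * rng.normal(size=(n_neurons, n_time))
trial_b = template_b + 0.5 * rng.normal(size=(n_neurons, n_time))

print(decode(trial_a, template_a, template_b))  # "A"
print(decode(trial_b, template_a, template_b))  # "B"
```

If the two condition-averaged patterns are genuinely different, even this crude correlation decoder separates the trials easily, which is the speaker's point: mere decodability sets a very low bar.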
This is all the brain needs, and maybe we're done. So I think that's one attitude, but I think there's a more interesting attitude that tries to take the details of this data seriously, and tries to be bothered by the fact that this is perhaps not how I would expect cortex to look. Which brings you to the other immediate question: what exactly did I expect cortex to look like? Can I actually do this? Can I look at the spatiotemporal pattern of activity of a few hundred neurons and say with confidence, this is totally consistent with the animal doing computation X but it's really not consistent with the animal doing computation Y, or conversely, this is consistent with theory of cortical function A but not theory of cortical function B? I think what the ability to record this data highlights is that we're very far away from that, and this is something that we need to deal with. In my personal opinion, the reason we're so far away from being able to do that is that we just have terrible intuition for parallel distributed computation. Or maybe I should speak just for myself: I have terrible intuition for parallel distributed computation. If you take five hundred thousand neurons, or take a billion, as many as you want, each one connects to a thousand others, they're all constantly transmitting information between themselves, computing something with their dynamics, and then you ask yourself what this thing should look like while it's doing that, and why it looks one particular way or another, you quickly realize that you have no intuition for this mode of computation, so it becomes very difficult to check it against data. So the central notion behind this kind of thinking is that the brain does this, the principles of how it does it are embedded in the details, and we need to learn how to understand and deal with these details if we want to learn the unique computational style that the brain has.
You don't have to, of course; you can just do machine learning in general and understand intelligence. But if you are seriously interested in understanding how the brain does its own version, this is something we have to deal with: every time we see something surprising, I think we should think about how it fits into our conceptual model, and if it doesn't, try to rethink things a bit. Does that make sense? So, related more to the talk before: we're very used to structure-to-function questions in biology when it's physical structure, synapses and wings and things like that, and how they relate to actual function. But now that we're able to record dynamics in detail, we see that the dynamics have their own rich structure, and that structure is somehow related to function, to computation, and we need to sort that out; then we can sort out the structure-to-dynamics part. Just as the previous speaker showed us all this beautiful structure, it would be really weird to ignore that and say, oh, what the hell, there's a different way for the brain to work, it doesn't need all that, so we're not going to bother with these details. I think it's just the same thing here: we now have the ability to record these dynamics, they're much more complex than we would have naively expected, so I think it really makes sense to try and look at this. Okay, that was perhaps an overlong conceptual introduction, because of course the question is how you do this, and this is the main thing that my lab is interested in. When you talk about computation, one can talk about it in general, and I don't even know exactly what that means, so you need to pick something specific. The thing that I like is short-term memory, which is the ability of animals to generate persistent and selective representations from transient stimuli. Okay, so the classical example is a delayed match-to-sample task: the monkey sits in a chair and fixates on a screen.
You show it an image, there it was, and then there's going to be a delay period, from a few seconds to a few tens of seconds, and then you show it a second image, and the monkey's job is to say whether they're the same or different. So the first image was the Janelia Research Campus building, the second was the Eiffel Tower; if the monkey is smart he'll say they're different, and if he's really smart he'll say that they're apparently the same length when laid end to end, and equally popular as tourist destinations. So this might seem like a pretty naive computation, and it might be very simple, but there are some key properties of it that I really like. The first is that every time you introduce a stimulus to a brain, you can't control the computations that are happening: you give a sensory stimulus and all kinds of things happen, pattern discrimination, pattern completion, bandwidth equalization, many, many things that have been described in the literature over the years, and you just can't control those. But having this delay period in the middle, in which you're no longer presenting sensory stimuli and in which, hopefully, the main computation going on is just the maintenance of this one particular thing that the mouse or monkey needs, is a very unique opportunity. The other thing is that I think this is the simplest computation of building a model of the world, which is, I think, what brains are for. For instance, if I turn my back to you, which would be extremely rude but I'll do it anyhow to make the point, then the image of you totally disappears from my retina, but when I turn back I'm not surprised that you're here. That's because I did it for a second; if I did it for 20 minutes, then I'd be pretty surprised if you were still here. I mean, I know this is Canada, so people are extremely polite, so it's possible I would have to do it longer. But if I do it for two hours, I won't be surprised that the table is still here.
Right, so it might be too fancy to call this building a model of the world, and I don't want to say too much about it, but just to say that this business of generating persistent representations of the world is, to me, extremely interesting, and indicative of a more general class of even more interesting things that animals need to do. Okay, so what's the simplest way this could work? Imagine you have a single neuron; this is time on the x-axis and firing rate on the y-axis. I'm not supposed to use the laser pointer, but I'm not sure I'll deal with that, so we'll see how it goes. There are two interesting times, the time of the first image presentation and the time of the second, and a very sensible way to do this would be to have a neuron that codes for this image: since it codes for the image, when you show the image the firing rate goes up, and then the firing rate remains stable and elevated through the entire delay period. This would be a very reasonable way to build this computation, and when these experiments were done in the early 90s and neurons like that were discovered, it was extremely exciting, because neurons have biophysical time constants of tens of milliseconds and it's not obvious how you generate something that suddenly has a time constant of many seconds. So this was extremely exciting and interesting from a biophysical perspective, but from the conceptual perspective it's not that exciting: you need to generate a persistent representation, and you just have a neuron that codes for it and generates persistent activity. But what was then observed is that these neurons are actually extremely rare, about three to five percent of the population, and if you look at most of the others, they just look different. Ignore the different colors, they're different stimulus conditions and that doesn't matter for now; just look at the temporal profiles. The first one starts up and then goes down.
This one in the bottom left starts low and then goes up, this one on the top right only remembers at the end that it needs to respond, and there are all kinds of weird things, but whatever all this is, it's not a stable and persistent representation. This immediately leads us to something weird: we all believe that neurons code for things with their firing rate, we know all the monkey needs to do is remember this one thing, we know he's able to do that because he does the task, and yet what we find is that the activity of almost every single neuron is constantly changing during this delay period. So in cases like this, when our intuition is tricky, it's worth trying to think a bit more abstractly, so bear with me while I do this. Imagine you want to represent points on a plane, not images: that's a stimulus of dimensionality two. One very reasonable way to do this is with a linear population code, where each neuron has a preferred direction in space: neuron one likes this top-left direction, neuron two likes this top-right direction. Imagine I want to encode this particular point in space. There's one very easy way to do that: say the activity of neuron one is one half, so I'm scaling its vector down by half, and since it's a linear population code I'm going to sum across all the neurons. Neuron two had an activity of one, so I don't scale it down; I add the vectors the way you add vectors, tip to tail, and I find myself encoding the right point in space. Okay, but now imagine that neuron one is ramping up and neuron two is ramping down. Now the vectors scale differently, they'll add up differently, and I'll find myself encoding a different point in space, which again is just our intuition that if neurons code for things with their firing rates, then when the firing rate changes, so should the representation. However, this intuition is extremely misleading every time you have more neurons than the dimensions of the stimulus you're trying to represent.
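The two-neuron picture just described can be written out in a few lines. This is a toy sketch; the preferred directions are arbitrary choices for illustration, not anything fit to data.

```python
import numpy as np

# Toy linear population code for points on a plane
preferred = np.array([[0.0, 1.0],    # neuron 1 "likes" this direction
                      [1.0, 0.0]])   # neuron 2 "likes" this one

def decode(rates):
    # decoded point = sum over neurons of rate_i * preferred_direction_i
    return rates @ preferred

print(decode(np.array([0.5, 1.0])))  # [1.0, 0.5]

# If neuron 1 ramps up while neuron 2 ramps down, the decoded point moves,
# matching the intuition that a changing rate changes the representation:
print(decode(np.array([1.0, 0.5])))  # [0.5, 1.0]
```

With exactly as many neurons as stimulus dimensions, every change in rates changes the decoded point; the next step of the argument is what happens when there are more neurons than dimensions.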
This can happen sort of by force when you have a divergent architecture, like the classical thalamocortical projection, where there are 50 or 100 times more neurons in cortex than in thalamus, but often we're just interested in situations in which we have, you know, a hundred million neurons in a cortical area and we're encoding basically one simple thing. Okay, so how does that change the picture? Well, it changes it dramatically, and you don't need to go to a hundred million dimensions; it's enough to look at just three. So now you have three vectors to arrange in a plane. If you choose this arrangement, it's called the Mercedes-Benz arrangement in the applied math literature, believe it or not; it could have been called the peace-sign arrangement, but I don't know, there was DARPA funding or something. And now the picture is totally different. If you want to encode this point in space, you can do it just fine by having the activity of neuron one at one, but you could also do it equally well with the activity of neuron one at one half and neurons two and three at minus one half: they add up to exactly the same thing. And you don't have to have negative activity; that's only an artifact of using three neurons, which is done here just for simplicity. Of course it's not that you have just two such patterns of activity; you have an infinite number of them, shown in the same way, the level of activity in the bar plot above and how the vectors add up below. Since it doesn't matter which of these patterns you choose, there's no change in the coding, and we, and later others, chose to call these non-coding or null patterns. But of course it's not that the entire network can only represent one point in space, that would be really weird; there are other patterns of activity that do cause a change of representation. Here activity is changing, and the thing that's being coded for, the position in space, is changing too.
And since these patterns do cause a change of representation, we call them coding patterns. Does that make sense? So an efficient way to think about this is again in the space of neural activity: you can represent the activity of a network at a given point in time by a dot in a space whose dimension equals the number of neurons. Here this dot sits at one on the neuron-one axis, so neuron one has some activity, and at zero on neurons two and three, so there's no activity in neurons two and three, and the dynamics of the network are just how this dot evolves in time. The realization is that every time you have more neurons than dimensions you're trying to represent, there's going to be some degree of freedom, a null space (we call it a null space just because of the relation to linear algebra) or non-coding space, such that if you change the activity along that direction, nothing changes in the representation. The animation in the bottom right was just me pushing the activity up and down that direction. Okay, so of course when we observe a bunch of neurons, we don't observe the full dimensionality; we typically observe a very small fraction of them, and if we observe only one, as in some of these classical experiments, then I can make that neuron look like any arbitrary PSTH that exists, just by the correct dynamics along this degree of freedom, which won't cause any change of representation. So what this somewhat abstract thinking has taught us is that even a computation as simple as keeping track of a perfectly persistent stimulus can come with more complicated dynamics than you might easily expect. Okay, but the real question, which a slightly less polite audience member might ask, is: is this just fun linear algebra from a bored physicist, or does it actually have anything to do with the brain?
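The three-neuron Mercedes-Benz picture can be checked directly. This is a sketch under the arrangement described above: the decoder matrix is just the three preferred directions, 120 degrees apart, and because they sum to zero, adding the same amount to all three neurons is a null (non-coding) pattern.

```python
import numpy as np

# Three preferred directions 120 degrees apart (the "Mercedes-Benz" frame)
angles = np.deg2rad([90.0, 210.0, 330.0])
D = np.stack([np.cos(angles), np.sin(angles)])   # 2 x 3 linear decoder

x = np.array([1.0, 0.0, 0.0])      # neuron 1 at one
y = np.array([0.5, -0.5, -0.5])    # neuron 1 at a half, two and three at minus a half

print(D @ x)                        # both patterns decode to
print(D @ y)                        # exactly the same point
print(np.allclose(D @ x, D @ y))    # True: their difference is a null pattern

# A coding pattern, by contrast, moves the decoded point:
z = x + 0.5 * np.array([0.0, 1.0, -1.0])
print(np.allclose(D @ x, D @ z))    # False
```

The same logic scales up: with N neurons and a k-dimensional stimulus, the null space of the k x N decoder has N - k dimensions, which is where the "arbitrary PSTH" freedom lives.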
There's going to be one way to show that, in fact several ways, but before I get into them I want to make two comments. The first is that this is fundamentally a theory of population coding: as I mentioned before, if I observe one neuron at a time, I have no way of knowing whether it's changing because it's taking part in a meaningful pattern or a non-meaningful pattern. But there's something even more important that this work tries to address, which is: what is the unit that we choose to ascribe meaning to when we do a neuroscience experiment? It used to be, when you could only record one neuron at a time, that everyone would try to interpret the brain through the picture of single-neuron activity. That's slightly in the past now, and people have switched to ensembles, but usually they mean kind of the same thing: for instance, you would have one selective ensemble with simple dynamics and another non-selective ensemble with whatever dynamics you want, but it's still this picture of one ensemble that's important and another ensemble that's not. This is a very different picture. There are no two ensembles here; there are just three neurons, one ensemble, and the unit we're going to ascribe meaning to is a specific pattern across this population. One pattern is going to be extremely meaningful and another pattern is going to be extremely non-meaningful, and if we choose to believe that, it has fundamental consequences for the kind of experiments we want to do and the kind of perturbation techniques we feel we should develop. So it's an important thing, and I hope to convince you that it's worth your time thinking about. Okay, so what's the extreme version of how you could tell that this is not just linear algebra for bored physicists? Imagine I can go into a circuit: I record a bunch of neurons, I map out what I think is the non-coding space and what I think is the coding space.
And now I can do an experiment that specifically pushes activity only along the non-coding direction, where you would expect no behavioral effect, and I can do a different set of trials in which I push the activity specifically along the coding direction, where you might expect that I would be able to generate a behavioral effect. Does that make sense? That's not quite what we did; the technology to do that is very difficult. We're pursuing it now, but that data, though very interesting, is a bit too raw to talk about. What we did is a zeroth-order version of that experiment: we basically just bonked the network with a large perturbation, and then it was up to us, by analysis, to figure out whether in certain cases the perturbations go more along one direction or another, to sort everything out by analysis instead of by doing the proper experiment. But we're now also trying to do the proper experiments. Okay, so on to the data. As I mentioned before, this is a two-alternative forced-choice task with a delay: a pole goes up in one of two positions, and the mouse is trained so that if it goes up in one position he licks left, and if it goes up in the other position he licks right. This is what it looks like while he's doing it: pole goes up, there's a delay, and then he licks; pole goes up, there's a delay, and then he licks. And they look awfully cute while they're doing it. The second thing you want to do is record dynamics from a relevant area, and this is the point to say that my lab at this time doesn't do experiments, so all of this work was done in collaboration with the Svoboda lab. The first part was done by Nuo Li, an extremely talented postdoc in Karel's lab who now has his own group at Baylor, and if you're doing experiments and looking for a postdoc position, I would definitely say go to Nuo; he's fantastic, and he's looking for postdocs.
Later Kayvon, a postdoc joint between Karel and myself, joined the project, and he's now doing the more sophisticated perturbations; hopefully we'll soon have enough data to talk about them in detail. None of this could have happened without this collaboration and without Karel's generosity in testing some of the, you know, more out-there theoretical ideas with his lab's time and resources. Okay, so previous work in the Svoboda lab told us where one should record. The experiment, which I'll just go over briefly, is basically: you take cortex, you inhibit different parts of it on a grid, and you look for an area with a specific pattern of deficit, namely that when you shut it down during the sample period there's no behavioral deficit, since this is a memory area, but when you shut it down during the delay period there's a very strong deficit in the animal's ability to do the task. You find that it's this area in the top left, and you call it something, in this case ALM, for anterior lateral motor cortex; that's where it is. Then Nuo goes in with electrodes. I know that in the beginning I showed you some calcium data; all the rest is going to be electrophysiology, since it's much easier to talk about fine-scale temporal dynamics with electrophysiology. And then you look at the neurons. First of all, you find neurons that have persistent activity during the delay period and neurons that have selective activity during the delay period: this neuron has this particular dynamic, this neuron has weirder dynamics, ramping up for one trial condition and kind of ramping down for the other, this neuron has dynamics but no selectivity, and you can find pretty much whatever you want in this zoo of neurons, very much like Romo's classical data. Okay, and now the experiment.
In this case the perturbation is going to be photoinhibition, in which you try to delete spikes by activating inhibitory neurons. This was calibrated in separate experiments to destroy basically all of the activity in about a one-millimeter area. So again, this is not the precise, activity-space-specific perturbation that we want, but it has a very interesting flavor: we're going to observe the dynamics, perturb them, then observe them as they recover and observe behavior as it recovers, and try to relate one to the other. Okay, does that make sense? Good, so let's get into the data. This is a sample neuron: the raster plot above shows spikes as little ticks, and the PSTH below shows the trial-averaged firing rate for these two different conditions. This neuron has activity during the delay period, it ramps up very strongly, there's much more activity for the right trials than the left trials, and when you do the perturbation on a subset of trials, then, as advertised, during the perturbation the firing rate goes down to essentially zero. But what was interesting is what happens afterwards. If you stop this perturbation early, what you find is that the firing rates recover, which is not obvious, since basically all models of these networks work by having recurrent dynamics that are meant to supply the memory. What was even more interesting is that they recover with the correct selectivity: blue was above red before, and blue is still above red after. And given that they recovered the correct selectivity, it wasn't a surprise that the animals are actually totally fine with this perturbation, even though you're taking the activity in the relevant brain area and wiping it out for a few hundred milliseconds. But what was even more curious is that, if you have a sharp eye, you might notice that the activity recovers awfully close to where it should have been had you not done the perturbation.
Right, so you can see that the light blue trace is the non-perturbed version of the dark blue trace, and they come up awfully close to each other. It's not some artifact of post-inhibitory rebound: you can look at neurons that are ramping down and see something very similar, and you can do the statistics carefully to show that this is a real effect in many neurons. Importantly, it's many neurons, but not all of them: there are neurons like this one, in which the original activity in light blue looks nothing like the recovery from perturbation in dark blue, and this is something we'll return to in a few slides. Okay, so why was this surprising to us? Circuit models of memory are very popular, there are dozens of them, and basically none of them make this prediction. The short version of the reason is that if you want to build a memory network, you're taking something that is a pulse of information and turning it into a step of representation, so what you're building is essentially an integrator, and an integrator makes a very different prediction. In order to generate this ramp, you actually have an integrator feeding into an integrator, and if you stop this integration during the delay period, maybe even shut the activity down to zero, you would expect a persistent gap, whose size depends on how long you perturbed, and we just don't see that in our data, even though we looked in multiple ways. This is why we found it quite surprising. But there's a very trivial explanation that could have been true: imagine this brain area is just a readout of where the interesting stuff happens. Then this is kind of what you'd expect to see: when you shut it down you can't see anything, but since you haven't affected where the real stuff happens, you just see that it recovers when you stop perturbing it.
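The contrast between the integrator prediction and what was observed can be made concrete with a cartoon simulation. This is not any of the published circuit models; the constants (a 3-second delay, a 0.5-second silencing, a 100 ms recovery time constant, a ramp target) are arbitrary choices for illustration only.

```python
import numpy as np

dt, T = 0.001, 3.0
t = np.arange(0.0, T, dt)
silenced = (t > 1.0) & (t < 1.5)     # the photoinhibition epoch
target = 2.0 * t / T                  # the unperturbed ramp

def run(attractor):
    r = np.zeros_like(t)
    for i in range(1, len(t)):
        if silenced[i]:
            r[i] = 0.0                            # activity wiped out
        elif attractor:
            # dynamics pulled back toward the unperturbed trajectory
            r[i] = r[i-1] + dt / 0.1 * (target[i-1] - r[i-1])
        else:
            # pure integrator: just keeps integrating a constant drive
            r[i] = r[i-1] + dt * (2.0 / T)
    return r

gap_integrator = abs(run(False)[-1] - target[-1])
gap_attractor = abs(run(True)[-1] - target[-1])
print(gap_integrator)  # large: everything integrated before the
                       # silencing is lost for good, a persistent gap
print(gap_attractor)   # small: comes back close to where it should have been
```

The integrator ends the delay with a gap equal to everything it had accumulated before the silencing, exactly the signature the data did not show; dynamics that are attracted back to the unperturbed trajectory end the delay nearly on target.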
This is an argument that's very hard to absolutely rule out, because you can never say that something didn't happen anywhere else in the brain, but let me give you a couple of lines of evidence for why we don't think that's the case. The first has to do with playing with the amount of inhibition. What I told you up until now was for inhibiting an area that's roughly a fourth of this brain region; you can then max out the inhibition and cover the entire region, and there's almost no difference in the behavioral effect. But as soon as you put some inhibition into both hemispheres, you find massive deficits in behavior, and indeed, if you totally shut down the two hemispheres, the mice perform at chance. What this data strongly suggests is that what's preserving this memory is some kind of interaction between the two hemispheres, as if the other hemisphere somehow has a copy of the relevant dynamics and is feeding it back into the hemisphere you just perturbed. The second line of evidence is that you can actually cut the corpus callosum, which I thought there would be no way the mouse would still work after, and yet Nuo did it, and the prediction that comes out is the right prediction: the mouse is still able to do the task, but now it's extremely sensitive to unilateral perturbation. Once we saw something like that, it led to an immediate question: what exactly do we mean when we say that activity needs to recover? What do we mean when we say there's a copy of the dynamics on the other side? One wouldn't seriously expect every single idiosyncratic twist and turn of every single neuron's PSTH to be stored on the other side and then somehow be magically restored by a bunch of axons going through. So in order to look at that, we turn to the same activity-space analysis, only now we're going to look at things along specific dimensions, which are rotations of the space; sometimes these are called modes.
The classical example is principal component analysis. Imagine you have two neurons and each dot is a trial: the two neurons have some firing rate on each trial, and I'm just scattering all of the trials on one plot. What principal component analysis tries to do is find a direction in this space, which is just a particular weighted sum of neurons, that maintains a lot of the variance. If I take a bunch of trials and, just to make visualization easier, project them all down onto this direction in activity space, some amount of variance is maintained; if I switch to a different direction, again just a different weighted sum, now 0.25 times neuron one instead of 0.75, it retains a different amount of variance, and eventually, as I twist things around, I find the direction, the first principal component, that captures the maximum amount of variance when you project the data down. Does that make sense? Good. That's not what we did; this is the point where usually people fall asleep, so I want to make sure. We did something conceptually similar but distinct, which is linear discriminant analysis. Now you have two trial types, red and blue, and what you want to maintain is not the maximum amount of variance but decodability: your ability to predict what the animal is going to do. If I take what was just the first principal component, it's not a very good choice, because in this bottom-left area things are heavily mixed, but as I twist the vector around, the separation changes, and I find this direction along which, once you project down, things are easily decodable just by throwing a threshold in the middle between the two groups. So this is linear discriminant analysis; it's of course a very old technique, and I purposefully chose the data so that the first principal component and the first discriminant direction are different.
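The PCA-versus-LDA distinction can be seen in a few lines on synthetic data. Like the example in the talk, the data below is deliberately arranged so that the direction of maximum variance and the direction of maximum decodability differ; the covariance and means are made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(1)
cov = np.array([[3.0, 0.0],    # lots of shared variance along neuron 1
                [0.0, 0.3]])
mu_red, mu_blue = np.array([0.0, 1.0]), np.array([0.0, -1.0])
red = rng.multivariate_normal(mu_red, cov, size=200)
blue = rng.multivariate_normal(mu_blue, cov, size=200)

# First principal component of the pooled trials: direction of max variance
X = np.vstack([red, blue])
pc1 = np.linalg.svd(X - X.mean(axis=0))[2][0]

# Fisher linear discriminant: within-class-whitened difference of the means
Sw = np.cov(red.T) + np.cov(blue.T)
lda = np.linalg.solve(Sw, mu_red - mu_blue)
lda /= np.linalg.norm(lda)

print(np.abs(pc1))   # dominated by the neuron-1 axis
print(np.abs(lda))   # dominated by the neuron-2 axis, where the classes separate
```

Here PC1 follows the big shared variance along neuron one, while the discriminant direction points along neuron two, the axis that actually tells the two trial types apart.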
But this doesn't have to be the case: if the data were arranged otherwise they could be the same, though there's no a priori reason to think they would be. Okay, so we find that direction, and of course the question is what it looks like in the data. We take the neural activity, project it along this linear decoder, and then just look at a single trial over time: that's one single blue trial, this is one single red trial, and they're very different. These are all of the trials from this session, and they're really different; it's very easy to decode what the animal is going to do. You can average across sessions, and it's an even stronger effect. But what's interesting is not just these dynamics, which are what we were kind of expecting given that single neurons are decodable; the question is what happens during the perturbation. I'm keeping the unperturbed data as dashed lines in the background and now showing you the perturbed data in thick lines. When you do an ipsilateral perturbation, of course, during the perturbation the difference between the two collapses, because the firing rates all go to zero, but once you stop the perturbation, things kind of spring back up and they're again highly separable. What was more interesting is the bilateral data: now we shut down both hemispheres, many of the animals are at chance, and the representation indeed looks highly mixed. But we can use this mixing as a feature, not a bug: if this really is the pattern of activity that's relevant for what the mouse is going to do, then we might be able to predict what the mouse will do even though the mouse himself apparently is unable to do it, because he is at chance. And how would that work? This average is a very wide average, in the sense that some trials recover beautifully, like this red trial, and some trials recover so poorly that they go all the way into the region that's normally associated with the lick-left decision.
Some trials recover beautifully, like this red trial, and some trials recovered so poorly that they went all the way into the region that's normally associated with the lick-left decision. If this really is the important direction in activity space, then we should be able to take these specific trials, look at where they stand at the end of the delay period, and see how that relates to behavior. That's exactly what we did: we took this direction and binned trials according to where they fell at the end of the delay period. If they fell all the way out here, into the maximal region, the mouse will be right on a hundred percent of those trials; if they fell into the region normally associated with the opposite decision, then on those particular trials the mouse is going to be completely wrong. So we can basically predict what the mouse is going to do even though the mouse himself is unable to do it. You can look at lick-right trials and it's quite symmetric. And this is the picture we get: we have this one direction in activity space with dynamics that can easily tell apart what the mouse is going to do, and these dynamics matter, because when a perturbation disrupts them, the mouse himself is confused. But that's not the only direction in activity space. You can look for more directions; you have more than one neuron, so generally you'll be able to find more than one. Depending on how the variance is spread there's a limit to how many directions you can find, but almost always you'll be able to find two. So then we looked, on purpose, for one that's very different: we saw that this direction recovered, and now we want the opposite, a direction in activity space that does not recover at all.
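The binning analysis can be sketched as follows, under the working hypothesis being tested, namely that the animal's choice is read out from this one direction. The spread of projections and the sigmoidal readout here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Hypothetical bilateral-perturbation trials, all instructed "lick right":
# the end-of-delay projection onto the coding direction is spread widely,
# because recovery after the perturbation varies from trial to trial.
proj_end = rng.normal(0.0, 2.0, size=n)

# Working hypothesis: the probability of actually licking right grows
# with the trial's projection onto this direction.
p_right = 1.0 / (1.0 + np.exp(-2.0 * proj_end))
licked_right = rng.random(n) < p_right

# Bin trials by where they fell at the end of the delay and measure how
# often the animal licked right in each bin.
edges = np.linspace(-3, 3, 7)
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (proj_end >= lo) & (proj_end < hi)
    print(f"[{lo:+.0f}, {hi:+.0f}): P(lick right) = {licked_right[m].mean():.2f}")

# Deep in the lick-left region the animal is wrong on nearly all trials;
# deep in the lick-right region it is right on nearly all of them.
frac_left_region = licked_right[proj_end < -2].mean()
frac_right_region = licked_right[proj_end > 2].mean()
```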
You can do that, and this is what the projection looks like: the light blue traces look very different from the blue and red traces, which is where the dynamics normally live. But it's not surprising that you can find such a direction; what's more interesting is what it looks like when you're not perturbing the system. One hypothesis is that it has nothing to do with the normal dynamics of the system, so that when you look along this particular direction in control trials you see no variance in it: it captures nothing of the normal activity. This is actually very different from what we find. We find that this perturbed mode captures almost as much variance as the first mode (sorry, I have to speed up, I gave way too long an introduction, I'm almost done). So it has a similar amount of variance to the first direction, the one that determines the animal's behavior, but when you try to decode things out of it, decoding is much, much worse: you can barely decode anything, and only on a subset of trials. What this might remind you of is the following. We have this direction in activity space on the right, out of which you can't decode anything, and the brain chose to leave it non-robust; it didn't bother to build robustness into it. And we have this direction on the left, which we can use to predict behavior one to one, and the brain chose to make it robust. When we wrote the theory paper about this interpretation of neurons through directions in activity space, we weren't smart enough to make this prediction; that happens often in theory, but here, you know, no one thought there would be robustness, so maybe that's an excuse. But if you were to build the system, this is what you would do: if robustness were expensive to build, you would put it into the components that are important, and you probably would not put it into the ones that are not.
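The contrast between the two modes, comparable variance but very different decodability, can be illustrated with a synthetic population. The directions, signal sizes, and noise levels below are all assumptions, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_trials = 80, 400

# Two orthogonal directions in activity space (both hypothetical):
# w_sel carries the choice signal, w_nonsel carries large shared
# fluctuations with no choice information.
q, _ = np.linalg.qr(rng.normal(size=(n_neurons, 2)))
w_sel, w_nonsel = q[:, 0], q[:, 1]

choice = rng.integers(0, 2, n_trials)            # 0 = lick left, 1 = lick right
x = rng.normal(0.0, 0.3, size=(n_trials, n_neurons))
x += np.outer(2.0 * choice - 1.0, w_sel)         # choice signal along w_sel
x += np.outer(rng.normal(0.0, 1.2, n_trials), w_nonsel)  # big non-selective mode

def variance_and_accuracy(w):
    """Variance captured along w, and accuracy of a threshold decoder on w."""
    p = x @ w
    above = (p > p.mean()) == (choice == 1)
    acc = max(above.mean(), 1.0 - above.mean())
    return p.var(), acc

v_sel, a_sel = variance_and_accuracy(w_sel)
v_non, a_non = variance_and_accuracy(w_nonsel)
print(f"selective mode:     var = {v_sel:.2f}, decode accuracy = {a_sel:.2f}")
print(f"non-selective mode: var = {v_non:.2f}, decode accuracy = {a_non:.2f}")
```

Both modes carry similar variance, yet only the selective one predicts the choice, which is the dissociation described in the talk.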
So if you're building a space shuttle, you should probably put robustness into the engines, but maybe not into the system the astronauts use to send email. And this is exactly what we found. But the thing that's so interesting about it is that the correct explanation is not at the level of single neurons. This is a scatter of all the neurons, their projections onto the selective and non-selective modes. The neuron that I showed you in the beginning, the one that looked like it recovered beautifully, was one that I purposefully chose: it has a projection of one on the selective mode and zero on everything else. And the neuron that I showed you that looked like a mess was again one that I purposefully chose, with a projection of zero on the selective mode and high projections on the other modes, so by eye its activity looks nothing like the trajectory. But if you look at all of the neurons, they are mixed; those were just two cherry-picked examples. So the single-neuron level is not the level where you find the correct explanation for this robustness. And it's not at the level of selective ensembles versus non-selective ensembles either. This next plot is not real data; it shows what you would see if there were a selective ensemble: a bunch of neurons with strong weights on the important modes and no weight on the unimportant modes, and a bunch of other neurons with no weight on the discriminating modes and a lot of weight on the unimportant mode. That is about as different as can be from the picture we find. Okay, there's a modeling section on how one obtains such robustness, which I'm going to totally skip over. Just to say that we can do it: in general, modeling can tell you one of two things. It can give you a serious hypothesis for
how things happen, or it can tell you that you don't need to invoke witches and dragons to believe that this thing is true. Let's just say that we don't need to invoke witches and dragons here: you can build very reasonable circuit models that have this property. I'm sorry I had to skip that section, but let me just summarize our findings. We find a surprising robustness of the detailed trajectory, and the important point is that the right way to explain this robustness is by looking at these directions in activity space. This goes along with our theoretical claim that not all population activity modes, not all directions in activity space, are created equal. Of course, the issue is that when you do population recordings, what you see is a sum, a superposition, across all of the modes, and then you need to somehow dissect that; in this case we really needed the perturbation experiments to do it, and that's not trivial in any case. So let me steal just one minute for a philosophical epilogue. I think this piece of work was very interesting not just for the stuff I showed you but for the number of questions that came out of it, which we're still trying to follow up on. Well, I think the stuff I showed you was interesting too, otherwise I wouldn't have bothered with it. But why did this thing work? It worked because we had a very straightforward conceptual framework for how the theoretical question we were interested in should play out in data, and I think that's why we were able to design the right experiments, get them done, and know what the right analysis was. That ends up being very tricky, because in order to do something like that you actually need to know what exactly the circuit is trying to compute, and you need to know what that would look like in the activity as it happens. Either of the two alone would be enough: if I knew what the circuit is
trying to compute, I could just look at the dynamics, say "aha, that's how it gets done," write that down, and we'd have an answer. If I had a book that magically listed all the computations and what they look like in population activity, I could just look and say, "oh, the mouse is doing computation 23 now." The difficulty is that we know neither; in fact, I don't think we even have good conceptual models of either. So what we're trying to do is solve both simultaneously, and that doesn't always work out. I think a lot of the job of theory and computation is judiciously picking out the cases in which this link to experiment is particularly strong, because there you can make a lot of progress. It's often difficult, because if you don't have a theory of how the activity relates to the thing you're interested in, it's hard to even know what the right experiment is, what you might reasonably try to look at. Okay, I'll skip Warren McCulloch's famous words... actually, Warren McCulloch gave this talk at Harvard, "What is a number, that a man may know it, and a man, that he may know a number?", which I find very inspiring, and I think it has a lot to do with what we're doing in neuroscience. It's a particularly human-chauvinist point of view, in that he thought we would understand the nature of being a person by understanding brains and neural computation, and that we would also understand what computation is by seeing what humans do. What we're trying to do is kind of similar, only in monkeys or mice. This business of understanding information processing will, I think, tell us a lot about what it is to be an animal, and vice versa: by struggling with the things the brain actually does, we learn a lot about computation. I think that's something very exciting, and I'll end there. Sorry, I probably stole all my
question time.

Sorry for cutting you off, but yes, we have discussion time. Let's start over there; quickly, and please use your microphones.

Fantastic talk, thank you very much. What I'm curious about is whether you have examined this in naive animals. In particular, you might expect that the robustness is only a product of the animal having learned that this is the null space for the problem, and that in a naive animal you wouldn't see it.

That's a fantastic question. Just to clarify, the question is: how come there are robust directions at all? Evolution never encountered optogenetic perturbation of an entire hemisphere. First of all, we tried the usual suspects: we thought maybe this would just be robustness to noise, and that would give it to you for free. That's not true. So then we have to track it through learning, and there are really two hypotheses. One is that nothing is stable in the beginning and learning stabilizes a direction. But if you work with nonlinear networks, it's not obvious; the opposite hypothesis is that at first there's a weird mix, everything kind of robust and kind of non-robust, and learning is about tuning the connections so that the most robust thing is used and damping down the rest, or some weird combination of both. We're trying to do that, but unfortunately you can't track the same neuron by ephys from day to day, so you need to do it with imaging, and unfortunately talking about population dynamics through imaging is very tricky. On a neuroinformatics note, that's something we actually tried: we looked at data collected from a few thousand neurons, from animals doing the same task, in imaging and in ephys, and we systematically looked at the differences when doing ostensibly the same analysis on the two data sets, and there are a lot of complicated difficulties. So we're still working through how much we think changes, and how to do
this properly, given that we have to do it through imaging. But yes, we really want to do that, and we really need to; I agree.

So there's initially some robust dimension...

Yes, exactly; that's the reason you would want to do these experiments. But notice that suddenly we're talking about directions, because what would we be comparing otherwise, histograms of robust neurons? You can see that this kind of thinking actually opens up very interesting sets of experiments.

I actually wondered: there's a paper that came out recently from Aaron Batista and Byron Yu where they did a subspace analysis on their perturbation of the BCI paradigm, and they show that the learning of new mappings happens within a fairly fixed part of the subspace; you remap everything into a fairly constrained space. At least on my interpretation (you can probably give a much better one), this indicates that there is a fairly restricted subspace that learning maps things to, maybe just because of whatever biological constraints make that the optimal subspace. I was also wondering more generally: the task you're using here is fairly low-dimensional, fairly simple and straightforward, and I'm not super familiar with mouse anatomy, so I wonder what else this brain area does, and whether you would see this sort of stability and consistency of the null space in different behaviors the animal engages in, or once you expand the dimensionality of what the animal is actually doing.

Okay, those are two long questions, and I'll try to give short answers to both. I'll start by explaining the first question for people who might not know the terminology. BCI is brain-computer interface, where you take a bunch of
neurons, you choose how to relate them to something the monkey is interested in, and then the monkey's job is to turn this brain-computer control into something specific in order to get a reward. That's an extremely powerful scenario for relating neural activity to what a monkey is trying to do, because you control every single neuron that's relevant for the task, which you can't do when you're just recording 50 random neurons from cortex. And their finding is that, importantly within a single session, there's a limit to how much a monkey can remap what he's doing: if you ask him to do something complicated that is very different from the natural dynamics of the system, that ends up being harder for the monkey to learn. So that's the other version of doing these experiments, on a faster time scale, and these are beautiful experiments, really worth reading. The second question had to do with the complexity of tasks. If I'm allowed to say something extreme: I think that's the thing that's holding our field back. There's a lot to be learned from two-alternative forced choice, but ultimately, if you need to make a binary decision, you can do it with two neurons; you might need 200 to fight signal-to-noise, but that doesn't explain why there are a hundred million neurons there. So I think the fact that we're looking at these very low-dimensional tasks is really impoverishing our view of dynamics. In fact it's also impoverishing for theorists, because, you know, I know how to generate two fixed points really well; people have known for 30 years. If we want to look at more challenging things theoretically, it would be ideal if the experimental world also switched to more difficult tasks, and training animals to do complicated yet controlled things is one of the things the field is not investing in enough. Crazily, I'm
thinking of starting to try to do this myself. Not because it's a good idea for theorists, you know; it could go very wrong when theorists start doing experiments. But I think it's really gotten to the point where this needs to be done, and the sooner the better, because I could make random predictions, but since it hasn't been done at all, I think it just needs to be done, and then you can look at it.

Okay, I think we should stop there. We have a little bit of time at the very end for people who want to come up with further questions. Okay, we'll go on to the next talk. Great, thank you very much.