All right, so welcome to the TSBVP seminar. It's a pleasure to have Professor Timothy O'Leary for this week's talk. After studying mathematics, Tim did his PhD in biophysics and physiology, and then did a postdoc in the Marder lab at Brandeis. That is a lab that works on the stomatogastric ganglion, a small network of around 30 neurons. He then moved to Cambridge, where he's currently a professor in the Department of Engineering. Today he's going to talk about feedback control and variability in the nervous system. So, Tim, please.

Thank you. And thanks for the informal demonstration of one of the downsides of feedback. We're going to be talking in this talk mostly about the upsides of feedback, and how it actually reconciles a large amount of variability and flux in the nervous system. So let me begin with an old idea. I'm very aware that I'm talking to an interdisciplinary audience here, so if anyone is lost, please put your hands up and I can clarify, hopefully online too. The old idea is that learning results from modifying connections in the brain. This is probably familiar to you, even through popular culture, but you may not have seen what a synapse looks like. This is a diagram of a synapse. There's a release of chemical neurotransmitter from the pre-synaptic side, and the post-synaptic side senses that and transduces the signal. These connections are responsible for maintaining the function of the nervous system. Now, when there's a modification to these connections, there's some kind of plasticity signal. I'm not going to go into detail about what that signal is. It might be an additional neuroactive substance, something like dopamine. It might be a particular pattern in the neural activity itself that causes the change. Whatever it is, it leads to some sort of change in the strength, and often the size, of the synapse. And there's a few other things that you need to know.
One is that synapses can enlarge (this is called potentiation), they can shrink (this is called depression), new synapses can grow, existing ones can die away, and there's a limit, fortunately, to the size of an individual synapse. So these are the key facts you need to know about neural plasticity for this talk. Another idea, which is supported by a lot of experimental evidence and also makes sense, is that potentiation, or any significant change to a synapse, needs some kind of switch, a kind of threshold to engage the process. What I mean by that is that at some point in time there's a discrete plasticity signal, some event: something surprising happens in the external world. There's some kind of residual trace of this event at the relevant synapses in the brain. This trace is held for a period of time, during which the strength, or synaptic weight, increases. And the trace enables a bunch of biochemical processes to occur: biosynthesis, insertion of receptors into the membrane, and even structural remodelling of the synapse. So all of these things occur because some kind of permissive signal is triggered, okay? Here's a conceptual diagram: the plasticity signal causes some kind of switch, and then we get growth. Now, how are such switches formed? Synapses are small biochemical compartments, and therefore quite noisy. There's been a tremendous amount of research in the last decades trying to identify the biochemical pathways that could act as switches. One famous candidate molecular pathway is the enzyme CaMKII (calcium/calmodulin-dependent protein kinase II). It sits close to the receptors of the synapse; these enzymes sense heightened activity and are then able to undergo phosphorylation. They can phosphorylate themselves and each other, and this type of trigger causes changes at the synapse and can lead to potentiation. And the model for this is very old.
So this goes back decades. The idea is that the net phosphorylation, or activation, rate of these enzymes becomes positive when the calcium concentration exceeds a threshold, and only becomes negative when the enzyme experiences a drop in calcium to somewhat below that threshold, showing hysteresis in the activity. This model is based on a conceptual picture and some amount of biochemical data. Okay, so the basic idea is that we've got two states, an on state and an off state, and some sort of mechanism that switches between them. And it's even possible to see this happening nowadays, in an individual dendritic spine, or synapse. Here, this little blob is a synapse, and at this point in time the experimentalists have triggered a surge in calcium coincident with a strong input to the synapse. What we see in pseudocolour is that the enzyme becomes phosphorylated (that's the red colour), and subsequently the synapse physically grows in size over the course of several minutes. So this is nice, and it seems like the end of the story: we've found these kinds of switches, and these enzymes seem to work as advertised. Unfortunately, it's very hard to dissect out the biomolecular pathways. If one takes the isolated chemical system, purifying the molecule and putting it together with all of its friends in a tube, the system doesn't appear to exhibit hysteresis. What do I mean by that? If we take the enzyme in its inactive state and ramp up free calcium, then the enzyme activates, as it should. However, if we start with active enzyme and then ramp down calcium, it deactivates, and the proportion of active subunits overlays the activation curve completely, showing no hysteresis. So this argues that this enzyme, at least when it's isolated from the synapse, doesn't really behave as a switch; any switching is very fragile.
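The ramp-up/ramp-down protocol described here is easy to sketch in simulation. Below is a minimal toy model (not the speaker's actual model; all parameters are illustrative assumptions): an autocatalytic activation variable `a` driven by a calcium parameter `Ca`, with cooperative positive feedback, `da/dt = Ca * a**2 / (1 + a**2) - a + 0.05`. Ramping `Ca` slowly up and then down traces different activation curves, which is hysteresis; replacing the cooperative term `a**2` with `a` removes the second stable state, and the two ramps coincide, as in the isolated-enzyme data.

```python
# Toy demonstration of hysteresis under a calcium ramp (illustrative parameters).
def steady_state(ca, a0, n, dt=0.01, t=50.0):
    """Relax the activation variable at fixed calcium drive `ca` from state `a0`."""
    a = a0
    for _ in range(int(t / dt)):
        a += dt * (ca * a**n / (1 + a**n) - a + 0.05)
    return a

def ramp(ca_values, n):
    """Quasi-static ramp: carry the relaxed state from one Ca level to the next."""
    a, trace = 0.0, {}
    for ca in ca_values:
        a = steady_state(ca, a, n)
        trace[round(ca, 2)] = a
    return trace

up_ca = [1.0 + 0.1 * i for i in range(51)]   # Ca ramped 1.0 -> 6.0
down_ca = list(reversed(up_ca))              # Ca ramped 6.0 -> 1.0

up = ramp(up_ca, n=2)        # cooperative: stays low until a high switching point
down = ramp(down_ca, n=2)    # cooperative: stays high until a low switching point
gap = abs(up[3.5] - down[3.5])       # large gap: hysteresis

up1 = ramp(up_ca, n=1)       # non-cooperative control
down1 = ramp(down_ca, n=1)
gap1 = abs(up1[3.5] - down1[3.5])    # tiny gap: no hysteresis

print(gap, gap1)
```

In this toy model the cooperative system switches on near one calcium level and off near a much lower one, so the state at intermediate calcium depends on history; the non-cooperative version has a single equilibrium at every calcium level and therefore no memory.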
And these issues have dogged many attempts to uncover the molecular pathways that control synaptic plasticity. So we entered the story at this point. This work in my group was led by Monica Giosha, and we went back to basics and asked: okay, what makes a reliable microscopic biochemical switch? We brought things right back to basics, forgetting specific molecules and so on; we're just going to think in generalities. So let's start with the conceptual idea that there are two states, an on state and an off state, and let's cook up a generic biochemical model of such a switch. Let's have two species. These could be inactive and active forms of an enzyme, or two separate species. And let's write down a plausible biochemical reaction scheme that controls the rate of change of the state of the system. Here, X is inhibited by Y, and Y is in turn inhibited by X: mutual repression. Those of you who have some biochemistry will recognise the form of these; they're just Hill equations. So it's a very straightforward, vanilla model. The classical theory, and I'll say what I mean by that in a moment, says that we need two stable states in the system in order to have a switch, okay? It's almost a statement of topology. If we plot the curves where the net rates of change are zero, the so-called nullclines, then if this Hill exponent is two or higher, we get several intersection points, okay? We end up with, in this case, two stable states flanking an unstable equilibrium. So we have a nice candidate for a switch: if you start the system near here, it will fall into this stable state; if we push it past this point, it will fall into the other stable state. It behaves like a switch. If, however, we take the same system and set the exponent to one, this very similar system now has only a single equilibrium. So this lets us take a basic biochemical motif, play with the number of equilibria it has, and ask what happens.
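The mutual-repression motif just described can be written down in a few lines. This is a sketch with illustrative parameters, not the speaker's exact model: `dx/dt = beta/(1 + y**n) - x` and symmetrically for `y`. With Hill exponent `n = 2` the deterministic system is bistable; with `n = 1` it has a single equilibrium.

```python
# Deterministic mutual-repression toggle (illustrative parameters).
def simulate(x0, y0, n, beta=4.0, dt=0.01, steps=5000):
    """Euler-integrate the toggle from (x0, y0) and return the final state."""
    x, y = x0, y0
    for _ in range(steps):
        dx = beta / (1 + y**n) - x   # X produced, repressed by Y, degraded
        dy = beta / (1 + x**n) - y   # Y produced, repressed by X, degraded
        x, y = x + dt * dx, y + dt * dy
    return x, y

# n = 2: two different initial conditions settle into two different stable states
a = simulate(4.0, 0.1, n=2)   # ends near (high x, low y)
b = simulate(0.1, 4.0, n=2)   # ends near (low x, high y)

# n = 1: the same two initial conditions converge to the single equilibrium
c = simulate(4.0, 0.1, n=1)
d = simulate(0.1, 4.0, n=1)
print(a, b, c, d)
```

The two `n = 2` runs end in mirror-image states separated by the unstable equilibrium, whereas the two `n = 1` runs collapse onto the same point: the same motif, switch-like or not, depending only on the exponent.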
So one thing to point out is that this situation with one stable equilibrium actually corresponds to the data I showed you earlier, where the equilibrium itself might move as a function of the amount of signal, or calcium, but there is only ever one equilibrium; that's a monostable system, okay? Now, I mentioned that this is a kind of classical model. What I really mean by that is that it's an approximation of what really goes on in a biochemical system. A synapse is small, about one cubic micron or less, and many molecular species at a synapse are present in low numbers, okay? In particular, CaMKII, the enzyme we talked about earlier, is present in roughly 80 to 250 copies, okay? And the reactions themselves are probabilistic. What that means is that this nice, smooth, ordinary-differential-equation, mass-action universe doesn't really apply, right? The dynamics instead are discrete, and the state of the system hops according to various probabilities which are themselves functions of the state of the system, okay? This is the appropriate framework for modelling a small system. So what we're going to do now is take the same system, with its one stable equilibrium and its mutually competing species. Just to remind you: when there's no cooperativity in this system, we have a single stable equilibrium. We're going to take this exact same system, the exact same set of equations, and transport it to the stochastic, discrete setting with the same parameters, the same rates, okay? Here's what this looks like in the case where there are thousands of molecules present. What I'm plotting is the probability density of the states of the system; if you think of them as concentrations of X and Y, this shows where they sit in the X-Y plane. They hover around in the vicinity of the stable equilibrium, and as you might expect, there's a bunch of fluctuations buzzing around that equilibrium, okay?
So nothing too surprising, nothing too exciting either. It's just a noisy version of the system we had. Now, what if we take the same system and constrain it so that it doesn't have thousands of molecules, but instead tens or hundreds? Well, this is what happens. Now we have a mode here, a concentration of high-probability states, and another concentration of high-probability states over here, and very little going on at the deterministic equilibrium, okay? And just in case you fell asleep in the last few seconds: again, this system has exactly the same dynamics as the large system. All we've done is constrain it so that there are fewer molecules, and this is what we get. So what's going on here, okay? What is going on? Well, the short answer is that there isn't a good theory for this. In a strict sense, I can say I don't know, right? The mathematics for handling these discrete stochastic systems when they're nonlinear frankly doesn't really work. There are lots of approximate ways of studying these systems. One way is to just simulate them and play around, okay? Another way is to build a bit of intuition, and that's what I'm going to show you now. The first thing to note is that the world this system lives in is no longer a smooth, continuous space; it's a lattice, right? I can't have half a phosphorylated molecule; I've got to have a whole molecule. So all of the states sit on this discrete lattice. In particular, it's very unlikely that the stable equilibrium will sit exactly on a lattice point, but that's a detail. So what difference does this coarseness, this discreteness, of the system make? Well, here's one observation, and a piece of intuition that I like, for understanding why this system is fundamentally different. Let's zoom in on the region where we saw that mode, that concentration of states.
What we notice is that the nullclines, the lines that mark where the net flux or flow changes sign or direction, start hugging the axes, and they line up so that they actually fit close to, or between, lattice points. So what that means is that if we're on any of these lattice points, and I've drawn the flow field here in yellow, say we start at this point, there's a high net probability of flowing upwards and only a slight probability of flowing out. So we can flow up several steps, because the probability per step of flowing out is quite low, okay? Similarly, on the other side of both nullclines, there's a net probability of flowing down and a small probability per unit time of flowing across. So this means the system can get kind of trapped here for some period of time: flowing up, accidentally hopping across the nullcline, flowing down, then accidentally hopping across again, okay? Now, I stress that this is just an intuition, but it seems to pan out quite well, and it seems to generalise a little bit to systems that don't have this degree of symmetry. In this case, the system is completely symmetric, so the same argument applies here as well, right? So we have a possible explanation for why we get this multimodality in this case. Regardless, what we have is a robust form of bimodality, or stochastic bistability, in these small systems, and the interesting thing is that this bistability is annihilated as the system grows in size, okay? So for a small system we have two modes, an on mode and an off mode we could call them, and then as the system grows this dissolves away, and we end up with a lone noisy equilibrium. This has been known for some time, but what was less appreciated is that this system can act as a switch, and if it's coupled to growth, interesting things will happen, okay?
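The small-versus-large comparison can be sketched with a standard Gillespie (stochastic simulation) algorithm. This is an illustrative instance of the motif, not the paper's exact model: production of each species is repressed by the other with no cooperativity, and the system size `omega` scales the copy numbers. The axis-hugging intuition above predicts that at small `omega` the system spends a large fraction of its time with one species at zero copies, while at large `omega` it never leaves the neighbourhood of the single deterministic equilibrium.

```python
import random

def gillespie(omega, beta=4.0, n_events=200_000, seed=1):
    """Stochastic mutual repression at system size `omega` (no cooperativity).
    Returns the time-weighted fraction of occupancy with one species extinct."""
    rng = random.Random(seed)
    x, y = 0, 0
    t, t_axis = 0.0, 0.0
    for _ in range(n_events):
        rates = [
            omega * beta / (1 + y / omega),  # production of X, repressed by Y
            x,                               # degradation of X
            omega * beta / (1 + x / omega),  # production of Y, repressed by X
            y,                               # degradation of Y
        ]
        total = sum(rates)
        dt = rng.expovariate(total)          # waiting time to the next reaction
        if min(x, y) == 0:
            t_axis += dt
        t += dt
        r = rng.random() * total             # pick a reaction by its propensity
        if r < rates[0]:
            x += 1
        elif r < rates[0] + rates[1]:
            x -= 1
        elif r < rates[0] + rates[1] + rates[2]:
            y += 1
        else:
            y -= 1
    return t_axis / t

frac_small = gillespie(omega=1)    # a handful of molecules: hugs the axes
frac_large = gillespie(omega=50)   # ~80 copies per species: essentially never extinct
print(frac_small, frac_large)
```

With these (assumed) parameters the small system idles for long stretches with one species at zero, switching occasionally, while the large system buzzes around its lone equilibrium, which is the size-dependent transition described above.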
What we went into in the paper, and what I'm not going to talk about today for time reasons, is that this does generalise to multiple species and to situations where the rates are not symmetric. And it turns out that this mutual-repression motif is fundamentally important for the behaviour to occur. In any case. [Question about how long the system dwells in each mode.] Right, that's a nice question. So we have these different modes, but a very relevant question, for this to be a reliable switch, is how long will the system hang out in a mode if it starts there? And the answer is an arbitrary amount of time: I can tune the parameters of the system so the dwell time is arbitrarily long. [Question: there's only one equilibrium in the large system; is there any way the bistability could carry over from small to large as the size is scaled up?] In this case, no. For this system, because of the law of mass action, as I scale up the size of the system, the fluctuations around that equilibrium are going to shrink in a relative sense. Any more questions? That was a helpful question, thank you. So, going back to this discussion of synapses having a switch and then growing, we can ask: well, what will happen if we couple this switch to growth? And the answer is quite nice. Let's take the same system and label these states an on state and an off state. And let's suppose there's a threshold for initiating synapse growth when the system is in the on state. Well, as the synapse grows, it's now going to switch itself off at some point, because growth destroys the bistability. So this will automatically prevent the synapse from potentiating past a certain point, because growth itself is coupled to the physics of how the switch actually works. So we can have a synapse that starts to grow and then kills its own switch once it gets past a certain point.
So this we put forward just as a nice idea. What's nice about it is that it works in spite of a high amount of uncertainty and noise in the system; it actually exploits a transition in the physics of the system. It's not the quantitative properties of the system that matter, it's the qualitative properties. So what we learned from this is that these mutually inhibiting species acquire stochastic, switch-like behaviour in the microscopic limit, and if this is coupled to growth, it gives a novel type of feedback loop that can regulate synapse potentiation. This is all speculative, but it might explain two-ish decades of contradictory findings: maybe when one tries to reproduce what's happening inside these microscopic systems, the size as well as the context and everything else might matter, and they play a key role. Okay, so that was just a kind of warm-up story. I'm going to tell you three little stories today; that was the first one. And they are all connected, I promise you, but we're going to go from molecules all the way up to entire organisms, entire animals. That's my plan; let's see how far we can go. So in the remainder of the talk, we're going to ask how stable these synapses are in the long run. This is a question you should all be concerned about, because if you believe everything I've told you, these synapses store your identity, your memory, your ability to function as a human being. [Question: has this kind of switching ever been observed completely in vitro?] You mean outside of the cell, cell-free in vitro experiments? Something like the imaging experiment I mentioned, but in a test tube? In principle, I think so. I think speaking to some clever microfluidics experts would be the way to go. There are nice analogues of this.
So the so-called BZ (Belousov-Zhabotinsky) reactions, which are inorganic rather than organic reactions, are oscillators, and there are examples of people confining those in droplets at an oil-aqueous interface. So something like that you could imagine: a clever biochemistry experiment where you confine all of these ingredients and titrate them so you end up with low copy numbers. So yes, in principle, but I've never seen it done for these particular molecules. That would be the thing to do. Yeah. So, how stable are synapses in the long run? That's what we're going to be concerned with. And are there any implications of all of this for neural circuits and behaviour? Physicists, particularly statistical physicists, like to point out that once you zoom out and average out behaviour, some of these fluctuations go away and they're not a problem. Unfortunately, biology doesn't always respect that. Okay, biology is not a ferromagnet or a piece of rock. Sometimes the molecular events, the microscopic events, actually matter a lot. Okay. So in answer to that first question, it turns out that synaptic connections in the mature brain seem to continually remodel, even when there is no apparent learning or change in behaviour, right? This might come as a surprise. Here's some data from a few years ago from Simon Rumpel's lab. What they did was image the dendrites of a fluorescently labelled cortical neuron, and they tracked these protrusions, the dendritic spines, which are the sites of most excitatory synapses on the dendrites, over many days. And what they found is that some of the synapses hang around and change shape a little bit, some of them disappear, and new synapses are born. Even though, in this case, the animal wasn't learning anything. It was doing something very boring: listening to two different tones associated with two different stimuli, and doing that same task for weeks on end.
Here's a summary of this data. When the spines are actually tracked and labelled over around two weeks, something like half of the original population remains, and the population is continually replenished each day by new dendritic spines, new synaptic connections, coming in and taking their place. So this is quite a lot of flux over the course of a few weeks, okay? Enough flux to be a problem. Now, what you might ask, and a very sensible thing to ask, is whether there is something systematic going on here, right? The animal isn't frozen in time. It has a life, possibly outside of the experiment that's being done to it, and it might be thinking about that life. So perhaps the brain is changing over time in a systematic way: there could be consolidation of existing memories, various types of metabolic housekeeping, et cetera. Regardless, one would then expect these changes to be the result of some kind of systematic plasticity signal of the type we were discussing earlier. So an obvious thing to do is to try to block these plasticity signals and see if that has any effect on this frenetic spine turnover. It turns out that some experiments of this nature were done, attempting to arrest this continual change by blocking its components, or by correlating the change with other physiological variables. And this is what we see. Here's an older experiment, in cultured neurons, where the remodelling is tracked over time. In control conditions, there's a large amount of remodelling over several hours. When activity is blocked by applying the drug tetrodotoxin (TTX) to the culture, the amount of remodelling slows down for sure, but it doesn't disappear; a significant amount remains. Doing things a different way, trying to regress the changes in synapses against other processes in a statistical sense was only able to account for less than half of the total change by attributing it to some kind of systematic signal.
And this isn't just something that one group has seen in neural cultures. This has actually been seen in intact animals as well. Obviously it's very hard to interfere with these pathways, and people have had to come up with very clever ways of trying to block the plasticity while leaving other things constant. So all of these results are subject to caveats. But what's interesting is just how consistent these estimates are: the amount of turnover that seems to be due to a systematic source is always less than half of the total amount of turnover. Okay. So, consistently, less than 50% of ongoing synaptic change seems to be due to these systematic signals. This seems like a small proportion. Why is so much of it apparent fluctuation? Is it just a measurement problem? Could be. But for now we're just going to take these results at face value. And what we're going to claim is that this proportion is actually optimal for maintaining learned behaviour. That seems a bit silly. The amount of systematic change is less than half of the total change, meaning that most of the total change is unsystematic; it comes out of nowhere. Well, if this isn't obvious to you already, this seems mysterious. I promise you, in a few minutes you're going to be annoyed, because it will seem obvious. That's what I hope. Okay. So what I'm going to show you next are some simulation results that corroborate this claim, and some intuition for the mathematics behind the results. This was work led by my former postdoc Dhruva Raman, who's now faculty at the University of Sussex. Okay. So what we want to do to tackle this question is cook up a general learning and memory model. We'd like a network of neurons, and we'd like them to be attempting to learn some task via some form of plasticity, some type of learning. And in a very general setting, you might think, well, it's hard to say anything specific about the type of data we've just been looking at.
What we really need is an extremely general modelling framework, okay? In particular, we need to decide what type of tasks to consider, and we don't want to be particularly constrained. So we'd like to consider all tasks, right? So how am I going to do that for some generic neural network? Well, first of all, we want to consider all tasks that this network can actually solve, okay? So what we did was borrow a very clever idea that's used in the connectionist network community, these days known as the AI community, where we take two neural networks, identical in structure. We call one of them the teacher and fix random connection weights in this network. We call the other one the student, and these are the connections that are plastic; they can change, they can be adapted. And we just feed random inputs into the teacher and get input-output pairs, okay? So this teacher just generates data that the student now attempts to learn. For any given input, the teacher gives an output, and now the student, when given that same input, has the goal of producing the same output. Now, we know in principle that there's a configuration of weights that will allow the student to do this, because it's got exactly the same architecture, okay? Not only that, we have access to an unlimited amount of data to train these networks. All we need to do is find an appropriate learning rule with which to train the student, okay? And here there's another problem, and that is that we don't know which learning rules the brain actually uses, okay? Because, after all, we want to relate this back to biology. So what kind of learning rules would we like to use? Well, again, we don't want to tie ourselves down, so we'd like to consider all possible learning rules. How are we going to do that? Well, for the simulations, we'll consider all possible learning rules up to first order. What does that mean? It means that we will model learning on some time interval as imperfect gradient descent, okay?
So the picture you should have in your head is that the performance of the network can be quantified in terms of its error, and the error as a function of all of the different weights forms a complicated landscape with hills and valleys in it. Learning corresponds to descending one of these valleys to a low point, okay? How do you get down a hill? You can follow the path of steepest descent. It's not always the best idea, actually; it turns out there are better ways to do it. But locally, over some time interval, anything that gets you from high error to low error can be approximated by a gradient step, plus some other stuff if it's not a very good learning rule. So this gives us a recipe for cooking up a family of learning rules, all the way from perfect gradient descent, which some people argue the brain might be able to do (we're not sure, we're agnostic about that), down to a learning rule that's not much better than random noise jiggling the weights around and occasionally moving in the right direction, okay? Good. So we now have a way of decomposing changes in synaptic weights into a systematic component, which comes from our approximate gradient descent rule, and a fluctuation, which is just a random disturbance to these weights. And we're going to be interested in how the steady-state error, that's the error this network settles down to when it's finished learning, depends on the relative magnitudes of the systematic and fluctuating components, because, going back to the experimental data, the cartoon version of what happened was that the experimentalists blocked the systematic component and were left with the fluctuations. So what we're able to do, because we can control these things independently, is look at how the steady-state error depends on the relative strength of each of these. Any questions at this point? Because I've covered a lot of bits and pieces.
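The decomposition into a systematic gradient step plus a random fluctuation can be sketched in a few lines. This is a minimal toy version of the teacher-student setup, assuming a linear network and illustrative parameter values (the actual work uses richer networks and learning rules): each weight update is the sum of a gradient term on the task error and an undirected random disturbance.

```python
# Teacher-student sketch: systematic (gradient) + fluctuating weight updates.
import random

random.seed(0)
D = 5                                                # input dimension (assumed)
teacher = [random.gauss(0, 1) for _ in range(D)]     # fixed random teacher weights
student = [random.gauss(0, 1) for _ in range(D)]     # plastic student weights

def task_error(w, n_samples=200):
    """Mean squared error of weights `w` against the teacher on random inputs."""
    err = 0.0
    for _ in range(n_samples):
        x = [random.gauss(0, 1) for _ in range(D)]
        y_t = sum(wi * xi for wi, xi in zip(teacher, x))
        y_s = sum(wi * xi for wi, xi in zip(w, x))
        err += (y_t - y_s) ** 2
    return err / n_samples

eta, sigma = 0.05, 0.02    # learning rate and fluctuation strength (assumed)
err_start = task_error(student)
for _ in range(2000):
    x = [random.gauss(0, 1) for _ in range(D)]
    delta = sum(wi * xi for wi, xi in zip(student, x)) - \
            sum(wi * xi for wi, xi in zip(teacher, x))
    for i in range(D):
        systematic = -eta * 2 * delta * x[i]       # gradient step on the error
        fluctuation = sigma * random.gauss(0, 1)   # undirected synaptic change
        student[i] += systematic + fluctuation
err_end = task_error(student)
print(err_start, err_end)
```

Because the fluctuations never stop, the error settles to a nonzero steady state rather than zero; sweeping the ratio of `eta` to `sigma` in a setup like this is the kind of experiment behind the curve shown next.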
Well, either everyone's on the same page, or some people are so hopelessly lost they can't be bothered to ask questions, but that's okay. [Question: did you use a fixed network size, the 60 neurons from the beginning? And is the error computed over the whole network?] Yes. The task error is always computed on the input-output behaviour of the whole network. So there are hidden neurons that... yes, exactly. Okay, so here's the result, the numerical result. This is the steady-state error as a function of the strength of the systematic plasticity relative to the fluctuations. So I'm just taking the magnitude of one relative to the magnitude of the other and plotting that on the x-axis. And what you see is that when these things are roughly equal, which is where the yellow and green regions meet, the steady-state error is approximately minimal, okay? If the systematic component gets stronger, the error starts to creep up. If the random component gets stronger, the error starts to creep up, yeah, okay? So this seems to agree, in spirit, with what we saw in the data. And what's nice about the setup is that we can modulate the quality of the learning rule, make it more and more accurate, and we can change the overall magnitude of the fluctuations. Changing the fluctuations doesn't really have much effect on this relationship. Interestingly, increasing the quality of the learning rule shifts the minimum to favour an even smaller proportion of systematic plasticity. Okay, so if you've got a good learning rule, you don't want too much of it. You actually want to let the fluctuations dominate: let the synapses do their thing, and occasionally nudge them in the right direction. But why is that? What's going on? So this is the part where all the action is, and I'm just going to give the intuition in a specific case. Let's imagine we only have one synapse, and we have an error curve that looks like this. Error is low at the bottom. This is where we've learned things.
And we're gonna consider the result of modifying our synapse according to a random component, and then a systematic component that's gonna bring us down the gradient of this error surface. So let's suppose we start at the bottom. A fluctuation comes along and knocks us up the slope. Now I ask: how big should our compensatory response be to fix the error that's been caused here? And the answer is that it should be equal and opposite, because if it's any larger, we'll creep up the other side of the slope. Now, this actually holds more generally, but the picture changes slightly as we increase the number of synapses, the dimension of the problem. So I'm gonna give you now a little more intuition in a high-dimensional case. Let's suppose now we have three synapses, constrained so that error is low on this kind of two-dimensional surface. Inside this surface, we have functional circuit configurations, and outside it, things are not so good. Well, if we again perturb so that we're off the surface, the goal of the systematic plasticity is to take the system back onto the surface. Now, what's the shortest path back to the surface? Well, the noise component pushed us in some random direction. An efficient systematic component is gonna take a beeline straight back to the surface. So it's going to take, in general, a shorter path back to that surface, as determined by this right-angled triangle. And if you do the math in high dimensions and ask, on average, what the ratio is, it's about 61%. This is the ratio of the magnitude of the systematic component, the short side of this triangle, to the magnitude of the fluctuation. And that's quite compelling, because this ratio seems close to at least some of those experimental estimates. [Question: I wonder, this is about maintenance, but how do we move the stationary point?] Absolutely. So this is all a steady-state analysis. This is under conditions where we're assuming that there's no net learning going on.
And Professor Foucault is right to point out that if we're in a state where the animal has not yet learned the task, this is no longer true. In an unlearned state, the systematic component should dominate. But all of the data that we're looking at here is at least nominally at some type of steady state. So that's a very important point. So, conclusions for this part. The experimental evidence seems to suggest that less than 50% of synaptic reconfiguration is driven by systematic changes, at some type of baseline steady state. And to optimally maintain function, the systematic component should not outcompete the fluctuations. And this is independent of the learning rule. This high amount of turnover in turn suggests that there are many spare degrees of freedom in neural circuit connectivity. The fact that these things can wander around and reconfigure, and the circuit can find some way to fix them without sending everything back where it started, suggests a huge number of spare degrees of freedom, which again shouldn't be a big surprise. But this has interesting implications, okay? And in the final part of the talk, I'm going to talk about what these implications are. So let's go back to this picture, where we are hanging out inside this curved space of circuit configurations where things work nicely. And some kind of ongoing learning or feedback or interaction with the world, internal or external, something, is pulling us onto this subspace, okay? Well, if there are many degrees of freedom, many spare degrees of freedom and room to move inside this surface, then generically, if there's any disturbance at all, any amount of noise in the system, for example, we're going to experience drift within this surface. So the circuit configuration itself is just going to change over time, okay? It can't not do that, in a sense.
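The drift-within-the-surface argument can be made concrete with a toy model (illustrative, not from the talk's simulations): two "synaptic weights" where the error depends only on the first, so the solution manifold is a line. Feedback restores the functional weight after every perturbation, but the spare degree of freedom receives the same noise with no restoring force, so it random-walks away: function is maintained while the configuration drifts.

```python
# Toy model of drift along a low-error manifold (assumed parameters).
import random

random.seed(42)
TARGET = 2.0           # error is (w1 - TARGET)**2; any value of w2 is functional
eta, sigma = 0.2, 0.05 # feedback strength and noise level (assumed)
w1, w2 = TARGET, 0.0

w1_errs, w2_path = [], []
for _ in range(10_000):
    # systematic component: gradient of the error, which contains no w2 term
    w1 += -eta * 2 * (w1 - TARGET) + sigma * random.gauss(0, 1)
    w2 += sigma * random.gauss(0, 1)   # pure fluctuation, never corrected
    w1_errs.append(abs(w1 - TARGET))
    w2_path.append(w2)

mean_err = sum(w1_errs) / len(w1_errs)        # stays small: function preserved
max_drift = max(abs(w) for w in w2_path)      # grows large: configuration drifts
print(mean_err, max_drift)
```

The functional coordinate hovers near its target while the spare coordinate wanders far from where it started, which is exactly the generic prediction: any noise at all, plus unconstrained directions, equals drift.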
So this picture, if it's really true, if all that's happening is that the feedback is pushing in a general direction and not trying to restore a former state or keep a synapse sort of frozen, then this predicts a huge amount of variability and some drift in the circuit configuration over time. So the obvious question is: is there any experimental evidence of this? And if so, is it possible to actually manipulate this phenomenon, right? We've got a certain type of dynamics here, which is somewhat related to ongoing learning, and that gives us a window into how the brain reconfigures and maintains information in the long run, okay? So for this, I'm going to turn to some experiments that were done a few years ago by our collaborators, in particular by Laura Driscoll, who was a PhD student with Chris Harvey. So what Laura did was set up the following experiment, where a mouse is housed in a virtual reality environment, with an optical window that allowed her to measure the activity of large populations of neurons in the brain using a calcium indicator. So she could literally watch neurons light up as they were firing. And this virtual reality environment allowed her to design a task that the animal could solve. It's a fairly simple task. The animal has to run down this virtual maze and turn left or right, and the left-versus-right decision is associated with a wall pattern. So if the animal sees black dots on a white background, it needs to go left. If it sees white dots on a black background, it needs to turn right. If it gets the right answer, it gets a reward. Okay? Very boring. Animals can do it very well, and they learned this to expert level in a few weeks. So what Laura did was take those animals that had learned this task, that were performing at steady state, no obvious behavioral change, and ask what was going on in the activity in a particular region of the brain.
And the region of the brain she was interested in is called the posterior parietal cortex. Those of you who are not neuroscientists don't need to worry about the details. All you need to do is think of it this way. You might be aware that there are parts of the brain that are involved in direct muscular activity, or motor output. There are other parts of the brain that are sensory regions, that receive signals coming in. PPC is neither of these, right? So it's a circuit that's sort of embedded somewhere in the middle of the brain, and what it represents is therefore a little bit more abstract. It's not directly related to the sensory or motor signal. Okay? And it turns out that if you lesion PPC or block it, animals can't learn and perform this task. Okay? So it's an important chunk of the brain for solving this task. So here's what she found. Importantly, the PPC activity actually represents the task in some sense. So there's a correspondence between the animal's position and where the neurons are maximally active, and that's what's being plotted here in this color code. So yellow is where each of these cells is maximally active; each cell is a row, and the x-axis is the position of the animal in the maze. So there's a whole bunch of neurons that are lighting up when the animal's at the start, and then as the animal progresses through the task and through the maze, a different subset of neurons becomes active. Okay? This is nice. And incidentally, she designed this task because what she wanted to do was acquire a baseline, and then train the animal to do something different and see all of this change. Unfortunately, that's not quite what happened. Ten days later, keeping the cells sorted in the same order, this is what she sees. So the activity is somewhat disorganized. And this disorganization continues, showing a drift in the activity pattern over days.
Now, it's not just the case that the neurons have become less active, because on any given day, if she goes back in and reorders the cells, she's able to find this nice representation of the task again. So somehow this association between neural activity and what the animal is doing is evolving gradually over a period of several days. We don't know why this is. The one thing that I'd like to ask you to think about is how neural circuits might cope with this. This is something that we looked into in a very simple way, and I'll quickly mention that. You could imagine that, with so many possible configurations and so many degrees of freedom, it might be possible to extract or read out the information in spite of the fact that things are changing. And the answer is that's almost true, okay? So what we found, and this is work by Michael Rue and Adrienne Lovac, is that it's possible to decode what the animal is doing simply by taking the neural activity in this population and decoding it through a model. Well, what type of model do we use? The simplest one you could think of: just a linear model. So plain old linear regression can actually reconstruct the trajectory of the animal, given enough samples. And if you look at the structure of this activity, a bit of intuition should tell you why. The neurons behave almost like little basis functions that encode what the animal is doing at any point in time. So a linear weighted sum of these basis functions can give you a decent estimate of what's going on. Another motivation for a linear decoder is that that's kind of what neurons and circuits do. So you could imagine another neuron that's connected to the cells in this population, taking weighted sums of what's going on, and then it's able to reconstruct some estimate and use the information that's present in the circuit. So anything that's a problem for a linear decoder might also be a problem for another part of the brain.
Conversely, if we can easily read out the information using a linear decoder, there's no reason why the rest of the brain shouldn't be able to do it either. And we can. About the data, can we go back to that slide? Is this from the beginning of the training? No, this is several weeks after training, once they've reached criterion performance, and performance is flat throughout this time. Have they looked at the activity during the course of the learning? They have looked at this. I didn't do this work; this is work by Laura. We don't know yet. People are looking at that, and that's an important question. Naively, we'd expect a lot more reconfiguration to go on during learning. And the behavior is quite stable here: basically they're performing at 95% correct all the time, nothing interesting going on. Is there a difference between drift within a day and between days? There's a smaller amount within a single day, but the largest amount occurs between days, which is interesting. So it could suggest that sleep is involved, and also just time. And in fact, it seems that both time and experience jointly determine how much the code evolves. And that's been seen in other parts of the brain as well; this is not the only part of the brain where this type of phenomenon is observed. This question is related to this idea of decoding from the brain: do you think the animals have to use some kind of cognitive trick? No, they don't. And that's interesting as well. So that's something we could pick up at the end. Okay, so where was I? Right, we have this linear decoder. This allows us to reconstruct what the animal's doing.
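The basis-function intuition for why a linear decoder works can be sketched with simulated data. This is an illustration only: the bump-shaped tuning curves, neuron count, and noise level are assumptions, not the recorded data from these experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated PPC-like population: each neuron is active around one position in
# the maze (a bump-shaped tuning curve), as in the sorted activity plot.
n_neurons, n_samples, width = 100, 2000, 0.05
position = rng.uniform(0, 1, n_samples)        # position along the maze
centers = np.linspace(0, 1, n_neurons)         # each cell's preferred position
activity = np.exp(-(position[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))
activity += 0.1 * rng.standard_normal(activity.shape)   # imaging noise

# Plain linear regression (least squares) from population activity to position.
X = np.hstack([activity, np.ones((n_samples, 1))])      # add an intercept
train, test = slice(0, 1500), slice(1500, None)
weights, *_ = np.linalg.lstsq(X[train], position[train], rcond=None)
prediction = X[test] @ weights

r2 = 1 - np.var(position[test] - prediction) / np.var(position[test])
print(f"decoding R^2: {r2:.3f}")   # high: a basis-function code reads out linearly
```

A weighted sum of bump-like responses tiles the maze, which is why plain least squares recovers position so well; this is also the operation a downstream neuron could plausibly implement.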
We can pull out its location, its velocity, and its heading in the maze. So this works. And unsurprisingly, if we build this linear decoder, fix the parameters, and then test that same decoder on subsequent or previous days, it doesn't do so well, because there's drift going on. But this raises an obvious question. There's a huge amount of redundancy in this code; we're only reading out a small amount of information from a large number of cells. Can we find a drift-invariant set of weights? How might we do this? Well, an easy way is to take data from multiple days and try to build a model that predicts as well as possible on any given day. And the answer is it kind of works. A concatenated decoder performs reasonably, but it's statistically always worse than a decoder that's trained and used on that same day. Worse still, even when you do this, and this is challenging statistically because of the limited data, our best estimate was that you will get degradation of a concatenated decoder no matter how large a subset of the data you actually use. So what this tells us is that the drift is going on in a way that doesn't completely destroy the decoder. It's not as destructive as it could be. So for example, if we had completely random drift, the decoding performance would degrade far more rapidly. So there's some sort of constraint, but the drift is not confined to a perfectly linear subspace. It's not occurring entirely in the kernel of some fixed linear decoder; it's not as nice as that. However, it is far more systematic and constrained than you'd expect by chance. And an obvious question then is: what's constraining it? Is it the behavior and the continual learning that the animal's doing that's constraining where this neural activity resides? We'd expect it to be, but an obvious thing to do now is to try and manipulate the behavioral feedback. So in order to do that, we asked if we could actually decode the behavior from the PPC activity in real time.
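The concatenated-versus-same-day comparison can be mimicked on synthetic data. This is a sketch: the drift model, in which a growing fraction of neurons shift their tuning each day, is an assumption chosen purely to illustrate the effect, not a model of the real recordings.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_trials, n_days, width = 80, 600, 5, 0.08
centers = np.linspace(0, 1, n_neurons)

def day_data(day):
    # Toy drift: each day, more neurons have shifted their preferred position.
    shifted = rng.uniform(size=n_neurons) < 0.15 * day
    shift = 0.3 * rng.standard_normal(n_neurons) * shifted
    pos = rng.uniform(0, 1, n_trials)
    act = np.exp(-(pos[:, None] - (centers + shift)[None, :]) ** 2 / (2 * width ** 2))
    act += 0.1 * rng.standard_normal(act.shape)
    return np.hstack([act, np.ones((n_trials, 1))]), pos

days = [day_data(d) for d in range(n_days)]

def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# One decoder per day versus a single decoder trained on all days concatenated;
# in both cases train on the first 400 trials of a day and test on the rest.
w_cat = fit(np.vstack([X[:400] for X, _ in days]),
            np.concatenate([y[:400] for _, y in days]))
same_day = [mse(fit(X[:400], y[:400]), X[400:], y[400:]) for X, y in days]
concat = [mse(w_cat, X[400:], y[400:]) for X, y in days]
print(f"same-day MSE: {np.mean(same_day):.4f}, concatenated MSE: {np.mean(concat):.4f}")
```

The concatenated decoder must compromise across inconsistent tuning, so it is systematically worse than each same-day decoder, the same qualitative pattern described above.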
We were working with offline data, where we fit a linear model, and this is a big model, right? There are hundreds of neurons and many minutes of data sampled at fairly high resolution. So it can be challenging to actually get a fit in a reasonable amount of time, but because it's a linear model, there are nice methods for doing this. Moreover, because it's a linear model, we don't need to isolate individual cells in an imaging frame. If the imaging frame is steady enough, then it contains the signal plus some other nuisance component as well. So by linearity, we can train directly on pixels, or super-pixels, of this imaging frame, build a decoder, and then again recover what the animal's doing. Now, why would we want to do this? The answer is that we can now potentially manipulate the feedback, so that the neural activity itself drives the behavior. So in summary, what we're attempting to do is this: we have a situation where the PPC is involved in controlling behavior in this virtual environment. We want to short-circuit this: eliminate the motor output, disconnect the physical control of the VR, and go directly from this part of the brain to drive the VR. Why? Because this should now act as a different constraint on the activity in PPC, and it should evolve differently than in the physical case. In particular, our conjecture is that this will maintain the performance of a linear decoder far better than chance. So here's where we've got to. And this was work with PhD students and postdocs in Chris Harvey's lab, including Dan Wilson. So they set up the same thing, except this time they have a real-time decoder. And without the mouse knowing it, they can switch between physical control of the VR and brain-machine interface control of the VR. And this is what it looks like. So here's a comparison of the physical interface and the brain-controlled interface.
The animal is running down the linear portion of this maze; it reaches a turn and then turns left or right. And if you look at trajectories of these animals, it's very hard to tell the difference between the brain-control case and the physical case. Now, the brain-control case is not quite as good. The animal definitely figures that something's a bit strange here, and the performance is not quite the same, but it's good enough to study what happens in the long run. And so far what we've found, and the broken lines here show this, is that even with a fixed decoder over several days, we don't get the drop in performance that we'd expect from the offline data, which is interesting. We haven't done this long enough, or across as many animals as we need to, to make strong conclusions, but this is promising. And another interesting thing is that the activation of the neurons, and what's being shown here as a function of location in the maze is where individual cells are most active, differs between the BMI case, shown in blue, and the physical case, shown in black, okay? So there's certainly some contextual difference that the animal detects when it's operating in this BMI case. But that contextual difference is not enough to completely destroy the decoding of the task, okay? So on a short time scale there's a big change in activity in PPC, but it's compatible with this linear decoder. On a longer time scale, what we're interested to see is: does this actually shape the evolution of the neural code as we predict it would? About the BMI: do you decode the motor intention rather than the current state? It's the current state, yes. So there's no history; it's a static decoder, actually. Okay. So in brain control, you use the currently decoded state to drive the feedback? Yes. So the neural activity determines what the animal actually sees? Yes.
Okay. So there's something probably a bit unnatural about this: the change in the virtual reality environment somewhat anticipates what the animal is actually doing. And we don't know what the appropriate lag would be, because when we looked in the data, there wasn't a detectable optimal lag between the activity and the behavior. If you go back in time, you can always reconstruct from history what's going on, but that's not a surprise. But there wasn't an obvious peak in predictive power between PPC and behavior. And some of that could be a limitation of calcium indicators as well, because they do have a sort of temporal smoothing effect on neural activity. Did you restrict the animal's movement? No, we didn't freeze movement. What's interesting is that the animal's physical movement starts to deviate from what it should be, so there's a gradual mismatch developing between the animal's stereotyped movement and its actual motion in the maze. They start to decouple a little bit. So one of the things we're trying to do now is to slow down the animal's movement by physically braking the ball, and we'll see if we can decouple the two things. Do you know if the PPC activity has something like replay events, as in the hippocampus? Yeah, I don't know. I don't know, actually. I don't know whether you get these little episodes that predict the animal's intentions; it would be interesting if you do. So we're actually working with Yulia Kruppek, who's been doing the same thing with hippocampus. Possibly, yeah. But it's very hard with optical methods. Okay, so let me quickly summarize, and I can take a few more questions after that. In this part, I talked about decoding signals directly from PPC, and this actually allows these animals to control movement in virtual environments.
Interestingly, the neural activity differs slightly between the BMI and the physical case. This is something that's seen in traditional BMI as well, where you use the motor cortex, so that's actually consistent. And interestingly, a fixed decoder seems to perform stably over several days, at least so far, despite possible changes in the neural activity. So that brings me to the end, and hopefully you've seen that the thread here has been that there is a large amount of ongoing change at every level in the nervous system. And that observation by itself is very hard to reconcile with anything coherent going on. But when you put feedback into the picture, you start to realize that a huge amount of reconfiguration can happen, as long as there is a minimal amount of feedback keeping things going. So I'd like to thank all the wonderful people that I've met and worked with over the years. It's a real privilege to be among young, creative, smart people as my nine-to-five job. The people here, so Monica, Druva, Michael, Charlie, Ethan, and Adriana, were directly involved in the work that I talked about today, as were these collaborators. I'd also like to thank Jonas and Lynn for hosting me while I'm here and making sure that everything's worked so smoothly, the TSVP committee for allowing me to come here in the first place, and of course OIST and yourselves for supporting this visit and for listening to my talk. So I can take more questions now. Thank you. Good, thank you very much. Has there been any experimental work, this is about spine turnover, that has been able to look at how much of the turnover preserves the same inputs, with spines coming and going on both sides? Yeah, I don't know that number. And if that number wasn't particularly low, what would that mean for your model?
Yeah, so what you're asking is: is there essentially a persistent axonal input, and the spines that are turning over are just passing the baton between each other, maintaining a connection with the same input? I suspect that's actually not the case, for the simple reason that when you look at the turnover on a single dendrite, these spines are appearing and disappearing in very distal parts of that dendrite. So it could be the case that an axon snakes along and makes contact in several places, as does happen quite a lot. But it doesn't seem to be the case that the turnover is just a zero-sum game at individual synapses. But I don't have a systematic data set that speaks to that, and I'm not aware of anyone having gathered one. On the last bit, I think it was super interesting to articulate the drift in the online situation, but I'm not sure how that addresses the failure of the linear decoder in the offline situation. Right, so I think what happens, and this is again subject to it being preliminary data, we haven't run this for a whole month, so we don't know what will happen in the long run. But I think that when the animal is actually in closed loop, it's able to correct the minor deviations from what's happening, right? So it intends to move forward; it kind of moves forward, but with a bit of a deviation; well, it will correct that. And that's fine as long as those corrections don't destabilize the overall loop, okay? In other words, as long as they're modest and the controller doesn't amplify them, which a linear controller could do in certain circumstances, then you'll get graceful, if slightly suboptimal, control of the BMI. It's a little bit like if I was to mess with your mouse pointer, and each day I slightly rotated the x and y axes without you knowing it. On any given day you'll come in and something will be slightly off, but you'll very quickly adapt. You can't mimic that situation in offline data, okay?
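The rotated-mouse-pointer analogy can be sketched as a simple feedback loop. This is an editorial toy model: the daily rotation rate, feedback gain, and number of corrective movements are all made-up parameters.

```python
import numpy as np

def rotation(deg):
    t = np.deg2rad(deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

target = np.array([1.0, 0.0])

def closed_loop_error(deg_per_day, n_days, steps=50, gain=0.5):
    """Each day the pointer mapping is rotated a little more, but the user
    watches the cursor and corrects online (simple proportional feedback)."""
    errors = []
    for day in range(1, n_days + 1):
        R = rotation(deg_per_day * day)
        pos = np.zeros(2)
        for _ in range(steps):
            intended = target - pos            # aim straight at the target...
            pos = pos + gain * (R @ intended)  # ...but the movement comes out rotated
        errors.append(float(np.linalg.norm(target - pos)))
    return errors

def open_loop_error(deg_per_day, n_days):
    """No feedback: just replay the movement calibrated before any rotation."""
    return [float(np.linalg.norm(target - rotation(deg_per_day * d) @ target))
            for d in range(1, n_days + 1)]

print(closed_loop_error(5, 10)[-1])   # stays near zero: feedback absorbs the change
print(open_loop_error(5, 10)[-1])     # grows with the accumulated rotation
```

As long as each day's perturbation is modest enough that the loop stays stable, closed-loop correction keeps performance high while the open-loop replay, the analogue of testing a frozen decoder offline, degrades steadily.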
So I think what's going on, if there is substantial change in this case, is that the animal is continually adapting. And then the question is: does that adaptation show up in the trajectory of the drift? So it's almost like forcing a form of ongoing latent learning and then looking at how the population activity evolves in its presence. I have a question about the biological underpinnings of representational drift. You could see drift as a sort of maintenance process, moving memories to a different place, but at the same time, especially in the case of the hippocampus, the brain is more of an automatic encoding system, incorporating new information all the time. And in your final experiment, the animal is of course living a life outside of the task. What if you introduced a new virtual reality task that produces other representational changes, which is quite different in nature from maintenance? Would you expect the representational drift to be very different from what you have observed and recorded? Yeah. And I should mention I'm not physically doing these experiments; it's done in Chris's lab, which very kindly offers the time to do this. What you'd expect from the theoretical point of view is, so let me repeat what you've suggested: instead of doing the fairly mundane task day in, day out, which doesn't really use up neural resources, what if suddenly the animal had more of a challenge, and had to do an auxiliary task, or a variant of that task, or something more complex each day? How might that affect drift? So the purely theoretical answer to that is: drift is occurring in spare degrees of freedom.
So if you reduce the number of spare degrees of freedom, as you would if you increased the complexity of the task or added additional tasks, the drift can now only occur in a lower-dimensional space. Okay. So that should actually be detectable. If the drift resembles something like Brownian motion in this subspace, then changing the dimension of the subspace changes the mean squared deviation of that random walk as a function of time. So you should actually be able to detect it, and what you should see is the mean squared deviation drop as the complexity of the task increases. So that's the naive, vanilla answer to that. I think it's an interesting variation. Yes. Well, Chris's lab has considered this, and we've sort of considered this. It's quite hard to do. One of the problems I foresee with this is that you really get a limited amount of time when an animal is engaged and motivated to actually do anything, and that tends to equal the amount of time that you need to gather the data. So you have to design this in a very clever way, possibly so that the animal is being monitored in its home cage, with something portable, to actually do that. But yeah, that's a good idea. And it's again something that's on this ever-growing list of things to try out: now that we have this decoder, we can mess with it, and we can apply transformations to that decoder to see whether it has any effect on the PPC activity in the long or the short term. Maybe, maybe. And there are probably very clever transformations that we could do that are task-dependent as well, that maximally use the neural activity that's available. So yeah, this is really a proof of principle, but it opens the way to doing all these types of manipulations.
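The predicted signature, mean squared deviation scaling with the number of spare dimensions, can be checked in simulation. This is a sketch under the stated Brownian-motion assumption; the total dimensionality, step size, and split between free and constrained dimensions are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def msd_curve(n_dim_total, n_free, n_steps=500, n_walkers=100, step_sd=0.1):
    """Mean squared deviation over time for a random walk confined to the
    first n_free of n_dim_total coordinates (the spare degrees of freedom)."""
    steps = np.zeros((n_walkers, n_steps, n_dim_total))
    steps[:, :, :n_free] = step_sd * rng.standard_normal((n_walkers, n_steps, n_free))
    paths = np.cumsum(steps, axis=1)                 # each walker's trajectory
    return (paths ** 2).sum(axis=2).mean(axis=0)     # MSD(t), averaged over walkers

msd_many_spare = msd_curve(50, 40)   # easy task: drift spreads over many dimensions
msd_few_spare = msd_curve(50, 10)    # harder task: drift squeezed into fewer dimensions
print(msd_many_spare[-1], msd_few_spare[-1])
```

For a simple Brownian walk the MSD grows linearly in time with a slope proportional to the number of free dimensions, so constraining the task harder should show up directly as a shallower MSD curve.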