University of Washington, as a Schwarz-Seoli postdoc fellow, which is known as a very prestigious postdoc in the United States. Her background, if I remember correctly, is physics. She is trying to understand cognitive processes, or the mechanisms of cognitive processes, from the viewpoint of statistical physics and neural dynamics. Today she is going to talk about: can we hope for simplicity when describing a brain? My brain will probably be simple enough, but probably not others'. So, please welcome her. And when you ask a question, please use the microphone; I get easily distracted. Hi, I'm Linoy. Thank you. Jonas, I can still hear myself echoing. I think I speak loudly, so I'm not sure you need it that high. That's fine; I hope it's not going to be too loud. I'm Linoy, and I'm coming to you from Seattle. I think we're just about at the flip point, so it's a little over half of my visit here, and it has been a delight. It's incredibly stimulating, and everyone I talk to does incredibly interesting things. It really has been a privilege to visit. And other than being sorry that I'm not here for longer, I'm also very grateful that I get to share with you some of my work from before I came here. Maybe the first thing to say, let me flip this back, is that I'm interested in the interface between physics and biology. When we do something this interdisciplinary, as all OIST members know, one needs to choose what kind of physics, what kind of biology, and see what actually matches and how it can be helpful. My biological system of choice, my favorite one, is the brain. It's my favorite because it is very complex. It's not the only complex thing, but when we're doing physics we're trying to write down something simpler, or maybe more intuitive, which creates a kind of dissonance with a system that is this complex. So let's think about what it means to write down something simple. And if I'm claiming that the brain is complex, let's talk for a second about why the brain is actually complex. Is it? It's been elusive for many, many years; despite a lot of research it is still very much a mystery. How many neurons do we have in the brain? Anyone know? In your brain? Order of magnitude? Yes, about a hundred billion. That's about as many stars as in the galaxy. That's a lot. And not only do we have a hundred billion, they also all furiously interact with each other. Every time you throw a ball, every time you listen to a symphony, every time you patch a cell, every time you run a fluid dynamics experiment in the lab, those can involve nearly all of your brain; even really, really simple things involve thousands of these neurons together. Any thought, any memory is not a property of one neuron; it's a property of a giant network of neurons in the brain. So we have a hundred billion neurons interacting with each other, which means trillions of connections, and this giant network indeed exhibits very complex dynamics. It is complex. It is only 2% of your body weight, yet there is so much interaction going on that it takes about a quarter of your energy: out of a 2,000-calorie diet for the day, about 500 calories go just to the brain, just to 2% of your weight, because of how much is happening there just to keep it running.
And no, it doesn't change much whether you're thinking really hard or not. So, the brain. I was chatting with a few of you last week about this talk and about the idea of doing physics with a brain, thinking about these kinds of interfaces, and someone from Nick's lab was kind enough to get me these. The brain is not the only complex thing. These are magnets. Magnets are also made of many, many things, many, many atoms. Normally a material is not magnetic, because the atoms have random orientations and directions inside it. But in this stuff, a ferromagnetic material, the atoms are aligned: they all point in the same direction, and that is why the property of magnetism arises in this material. Magnetism, then, is not a property of one atom. No single atom is a ferromagnet; it's the entire material. It's a collective property, much like a thought is to a neuron: a thought is not something that happens in one neuron, it's something that happens when a lot of neurons are simultaneously active and a computation happens between them. I'm taking questions at any moment, by the way, so feel free to raise your hand. Yes; I'm going to take the clarification and admit that I stretched the analogy. Spins are very much more like each other than neurons are. That being said, we're going to try to write something simple, which means I'm going to try to find out how far I can go while approximating neurons as relatively similar to each other. Can I get anywhere at all? Or, as was just said, do I need to think about every possible detail inside them, and every type of them, for anything that I do? Surely for some things that is incredibly important: it matters what type of neuron it is, what its specialty is, what kind of computation and what kind of structure it supports. Maybe for some things we can write something simple; maybe we can't. This is exactly what we're going to try to find out. More questions? Okay. Right, so we need some way of thinking about something this complex. I want to write down something collective, properties of the collective, and I have distributed four things that are going to help me out. Otoma, Tulaksmi, yes? Are you all good? Can I ask everyone to gather around them so you can see what's going on? There is food coloring next to you, and a transparent container of plain water, nothing special. I have given these intentionally to people whose experimental skills I really, really believe in. Sorry, I've given them intentionally to the physicists, who cannot do anything experimentally, other than Tomo, who can do everything. I'm of course kidding. I'm going to ask you to put in one drop of color and watch what happens. I know you all know what's going to happen, but please do it anyway. Just a drop from the bottle. Great. You can put in more than one. Watch it again. Great. Are we done? Okay, everyone back to your seats, or you can stand. So what's happening? Who can tell? It is diffusing. Yes, everyone's a scientist, that's a really complicated word. Okay, what is actually happening in the structure? Tell me what's happening spatially. Where was the color at the beginning? Even more than that.
Where was the color at the beginning, when you put it in? Concentrated. Okay. And what happened at the end? Yes: you had something that was relatively ordered, a condensed structure, and the disorder grew over time. I'm going to use the word entropy to talk about this process. It's not that an entire process like this happens in the brain, but I wanted to give you an intuition for what this word captures when we talk about a physical system. The entropy here grows; the disorder in the system grows. Yes? Okay. You can keep watching the water, but we won't need it for now. We're talking about the brain: it is something collective, we need collective properties, and we're interested in saying something relatively simple about the population, about the entire network. The first part of the talk is going to be about relatively large networks in the brain, and after that we'll get to even larger networks and see whether we need a different approach for those. Before going any further... I realize this is covering the slide, so maybe we can move it a little, or to the bottom. Thank you. Before moving any further, I need to say that I can do none of this alone, or at all. The work I'm about to show you is a collaboration; I am completely dependent on fantastic experimentalists and fantastic fellow theorists. This is Jeff Gauthier, who is now at Swarthmore but used to be a postdoc in David Tank's lab at Princeton University. Carlos Brody is also at Princeton, and on the theory side are Bill Bialek and myself. So this is a collaboration between theory-loving experimentalists and experiment-loving theorists. Here's what Jeff does for his work: he plays video games with mice. I get to brag about this because I'm not Jeff, so I can say how amazing it is. This is a mouse. The mouse is running on a styrofoam ball, there are 270 degrees of screen around it, and a virtual environment is projected onto the screen. What you don't see here is a two-photon microscope, which is why the head of the mouse is fixed, imaging the brain of the mouse while it does this. So there is in vivo imaging inside the brain of the mouse as it runs down this linear virtual track. The track is about four virtual meters long, and every time the animal gets to the end it gets a drop of water, and then it repeats, again and again. A typical mouse will do this for something like 30 or 40 minutes, and every trial, every run down the track, takes about 12 seconds. The head-fixing, and the fact that this is virtual reality, are what allow us to image inside the brain, because the brain is exposed. What you're seeing here is about 100 neurons; every ellipse like this is basically a neuron. I think you all know that nerve cells, the neurons in the brain, spike: they have this very fast electrical response that goes up and down, and those spikes are what we would like to see. When we image, because we're looking at a fluorescent indicator that the cells express, we get a correlate of this activity that is a little slower.
So something more like 100 milliseconds; we're looking at the fluorescence. But the idea is that we're recording, or imaging, the activity of the neurons, it correlates with the underlying spikes, and we can see when they are more active and when they are less active. Okay, so where are we recording? We're recording in the hippocampus. This is a mouse hippocampus. I actually need a volunteer; I swear I'm not going to harm you. Thank you. So this is the mouse hippocampus, and Milena is going to demonstrate a human hippocampus, because I know this picture is a little weird. The hippocampus is perhaps the most researched brain region in neuroscience, and the reason (thank you, all good) is that it is a region associated with memory and with learning; without it, many of those functions can no longer happen. Milena's brain is fantastic and intact, and I can completely attest to this because she does beautiful work. But if we were to actually cut this in half, right, Milena, and look inside, the hippocampus is a brain region deep inside the brain, a little above and behind the ear, and it has this kind of half-open heart shape. Thank you very much. That is what you can see here in red. It's called the hippocampus because in Greek hippocampus means seahorse, and I kid you not, someone back then thought it looked really, really similar; looking at the region you can kind of see their point. That's why it's called the hippocampus. And a beautiful thing that happens in the hippocampus is place coding. This is a phenomenon that was discovered in 1971 by John O'Keefe, who later received a Nobel Prize for it in 2014. We basically have something like a GPS in the brain, and this is where it is located. It means that I have neurons that become active, say, when I come to this room. I have been in this room before, for Shearer's talk and for Danny's talk; I've seen it, I know the room, I'm familiar with it. That means that every time I'm in a particular area here, I will know this is the back of the room: there will be cells in my brain, neurons, that respond and become more active every time I'm in this spot in the room. And there will be other neurons that respond every time I am over here, or over there, so that I have this kind of map, like a GPS. If you were to read out the activity in the brain of the mouse, as we're doing here, you find these place cells, cells that have spatial tuning in the environment. You can see the activity; these lines are the activity of individual cells. This cell responds every time the mouse runs on this linear track and is located right here. Cell four responds every time the mouse is at the beginning, cell three every time the mouse is at the end. And this is very reliable. So we have this place coding in the brain: about 30%, sometimes 50%, of the cells, when you record in the hippocampus or image the way we do, have place tuning, and the other 70% or 50% don't.
We're not sure whether that's because they code place in other environments or because they never actually code place, but we do know that we're not seeing much location information in their activity. So this is what it looks like when you have place cells. I organized the cells here so you can see that only the first 30% of them do this: every row here is a neuron, this is its activity, and this runs over time; this is a minute. Even if I hadn't told you that there are three runs here, you would have been able to tell, because I ordered the neurons according to where the center of mass of their activity occurs. You can see cells one, two, three, four, five, six, and they tile the environment: every time the mouse runs down it, one, two, three, four, five, until the end, you can see this coding. And the rest of the cells, as you can see, show nothing very significant in this environment. I'm taking questions now on the setup. Yes. Yes, the hippocampus has far more neurons; here we have maybe 80 neurons, and these are indeed a subset. A few thousand cells fire simultaneously, and we are recording only a fraction of them. Thank you. I'm going to repeat the question because I know you're not holding a microphone: the question was whether there is a specific location in the hippocampus where all these place cells sit. The answer, and thank you for asking, is no; it is distributed across the hippocampus. For imaging we need a relatively flat region, so remember the shape of the hippocampus: you get a specific flat part at one of those angles, and you image only a subset of the neurons, not all of the hippocampus. Yes. Thank you. More questions? So we're looking at this, and you can already see that even without looking at the neurons' shape or type or anything like that, we already have heterogeneity: neurons here that respond to place and neurons here that don't. But they're all connected, all connected in the same kinds of ways; they're all what we call pyramidal cells, all the same cell type, excitatory. So how are we going to capture this? Specifically, suppose I do want to write down something relatively simple, maybe staying blind to what they're actually doing, and find out how far I can get with it. Let's try. I'm going to look at the things I know I can measure really well, and those are what I will put into my model; and I'm going to put in only two of them. The first is the average activity of each one of the neurons; I take the mean for each neuron. And the second thing I put in, since I want to find out what the collective is doing, has to tie neurons together, because looking at individuals alone will not be enough. I could choose all kinds of other things, like what happens when ten of them are on and the others are off, but let's not. Let's do the simplest possible thing that ties neurons together: let's look at pairwise interactions.
Just the correlations between pairs. So I'm going to take the individual measures, just the mean of each one of the neurons, and the pairwise interactions between them, and that is all I'm going to put into my model: individuals and pairs. Then I'm going to try to infer what happens collectively. Can I say something about the entire pattern? Can I say something about the probability of getting a certain pattern where particular neurons are active and others are not? Can I say something about the place cells versus the non-place cells in this network, given that I put no such information into the model? I'm happy to take questions on this part later. I'm going to go through the next bit a little quickly, because it's just the technical part of how to turn the signal into a model. Each one of these neurons has a signal as it goes on and off. I'm going to bin this in time, and after I bin it, I'm going to binarize it: a zero or a one for each moment in time for each neuron, was it active or not. That's as simple as we go, and we do this for all of the neurons. Then I look at these snapshots in time. At a given moment, Jeff is recording the neurons and I'm sitting in the other room, because nobody lets theorists inside the experiment, and I ask: hey Jeff, what are you seeing right now? What is the probability that neurons three, four, and five were active while all the rest of the hundred were not? Can I say something about that? So I'm going to look at these population states, these configurations of the neurons, and put into my model, as we said, only second-order things: the means, the individual activity, and the pairwise interactions. From this I will build an entire probability distribution over the population states: what is the probability of getting any combination of silence and activity in this network? The way I'm going to do this is to assume the maximum entropy distribution, and this is why we played that game before, where I was trying to build your intuition. We assume nothing beyond the two constraints I just told you about, the means and the pairwise correlations, and nothing else. I take the probability distribution that matches those constraints and is otherwise the least constrained, the one with the most entropy; the end state, like what you saw at the end in the water, not the beginning. The physicists in the audience will recognize this as an Ising model with competing interactions, but that doesn't really matter here. What matters is that we put in only what we want and try to infer something about the collective, the entire network. Once we've done this, we have an entire probability distribution that I can sample from, and I can ask about population states that never even appeared in the experiment. I can ask about a new configuration of silence and activity: what is the probability that this happens in my data? Because I'm only recording a subset of the possible combinations of activity, not all of them, and I would like to know something about the entire collection.
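For reference, the distribution just described has a standard closed form; this is the generic pairwise maximum-entropy (Ising-like) expression, with fields h_i and couplings J_ij playing the role of the Lagrange multipliers (the notation here is mine, not from the slides):

$$ P(\sigma_1,\ldots,\sigma_N) \;=\; \frac{1}{Z}\,\exp\!\Big(\sum_i h_i\,\sigma_i \;+\; \sum_{i<j} J_{ij}\,\sigma_i\,\sigma_j\Big), \qquad \sigma_i \in \{0,1\}, $$

where Z is the normalization (the partition function), and the h_i and J_ij are tuned so that the model's average activities and pairwise correlations reproduce the measured ones; nothing else is constrained, which is exactly the maximum-entropy statement.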
And so I'm going to take samples, and now I can do all kinds of checks and predictions. First I can check the pairwise interactions from the data against the model, because I'm going to compute everything on the samples drawn from my model and on the data, and compare them. Yes; the question was whether the average I'm taking is the time average or the average over the area. It is the time average of each neuron's activity. But pay attention, and this is already an important point: there is no time component to this model, in the sense that there are no dynamics. That is information the model lacks. It is a probability distribution over snapshots in time. If I were to scramble the different moments in time but keep each configuration of silence and activity the same, I would get the same model, which is a huge caveat. Anything I get here, I get despite the fact that there are no dynamics in this model at all, and of course biology is a dynamical thing; there is an evolution of what's happening. So can we get anything at all? First, we can see that everything is on the diagonal: the pairwise correlations match between data and model. We get no points for this; it just means the fitting works, because the pairwise correlations were put into the model. But anything else I ask now is genuinely a prediction, because I put nothing else in. So I can ask about triplet correlations instead of pairwise, something the model knows nothing about. Let's find out. I actually put all 76,000 combinations the data has here for you, and you can see, again with the model on the x-axis and the data on the y-axis, that they fall on the diagonal, with very few points falling outside the experimental error. So almost all of the 76,000, which I think you'll agree is an enormous number, can be predicted by the model: the probabilities of getting activity in combinations of three neurons. Another thing I can do is ask about the probability of getting ten neurons active, any ten, not specific ones, with all the rest silent; or any three active and the rest silent; and ask the same of the model. I plotted in blue what happens in the data, and I'm going to overlay in red what happens in the model. You can see that activity is very sparse: it's very common for just one or two neurons to be active and everything else to be silent, because we're talking about time bins of roughly 70 milliseconds. Out in the tail we're not doing as well, because we barely have data there, but overall we're doing fantastically well; the model predicts this quantity too.
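As an aside, here is a minimal Python sketch of the two population-level checks just described: triplet correlations, and the probability that exactly K neurons are active in a time bin, computed the same way on the binarized data and on samples drawn from the fitted model. The array names and the triplet loop are illustrative only, not the authors' code.

```python
import numpy as np
from itertools import combinations

def population_stats(binary, max_triplets=None):
    """binary: (T, N) array of 0/1 population states, one row per time bin.
    Returns P(K), the probability that exactly K cells are active, and a dict
    of triplet correlations <s_i s_j s_k>."""
    T, N = binary.shape
    counts = binary.sum(axis=1).astype(int)
    p_k = np.bincount(counts, minlength=N + 1) / T
    triplets = {}
    for idx, (i, j, k) in enumerate(combinations(range(N), 3)):
        if max_triplets is not None and idx >= max_triplets:
            break
        triplets[(i, j, k)] = np.mean(binary[:, i] * binary[:, j] * binary[:, k])
    return p_k, triplets

# Usage: the same statistics on the recording and on model samples should agree.
# p_k_data,  trip_data  = population_stats(data_binary)
# p_k_model, trip_model = population_stats(model_samples)
# Plotting trip_model against trip_data should hug the diagonal if the model works.
```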
Let me do one more thing; I saw your finger, Milena, I'll get to you in a moment. You could tell me: fine, you can compute all these statistics that I don't really care about, but you said something about place. Place sounds like a function I care about; can you say something about place? Let's find out. If I look at, say, neuron number five: given the activity of everyone else around it, can I tell whether neuron number five is going to fire or not? Can I predict this collectively, looking at everyone else? What is the probability that neuron number five is active right now? What I put here for you is a window in time, and the orange dots are moments when one of the neurons, a place neuron, was actually active. I think you can see from the periodicity that this happens every time the mouse is in a particular location; it's definitely a place cell. I'm going to overlay the probability that we get from the rest of the network, at every moment, for this neuron to be active. If we did this right, the peaks of that probability should coincide with the orange dots. And you can see that this is going just fine: we indeed predict a very high probability for the neuron to be active when we look at everyone else, and not at that neuron itself. So there is definitely something collective going on in this network, and we're getting at it from just the means and the pairwise correlations, from very little. Okay, so place we can do; place is a very dominant feature in this data. What happens with the non-place cells, where it's much less clear what they're actually doing? Let's look. You can see, even if I hadn't told you that this is not a place cell, that it's not as periodic; it's a non-place cell, with much less location information, maybe none at all. I'm going to overlay the prediction here in a second; there, I just did. And you can still see that even when the activity is not as periodic and not as organized, this very simple model gives me really good predictions, even for neurons where I don't exactly know what they're doing or what they're coding. Ah, great question. Doya Sensei was asking, and I'll repeat the question: what happens if you only look at the place cells when you want to predict location, versus the non-place cells? Is that the question? Yes. Maybe two things to say. The first is that we do very well here both for the place cells and the non-place cells. If you look only at the place cells, the question is whether you can predict the location of the animal better, because clearly they're the ones with more information. The answer is that the non-place cells actually contribute: if you decode place from everything, the conjunctive coding of the non-place cells, despite their carrying very little information individually, adds to the collective information about place. So the best way to know the location of the animal, without looking at the animal and just reading the neurons, is to take the entire network, not just the place cells, because the others carry some component of this too. And, right, this slide is just the statistics of what I showed before, so you know I didn't cherry-pick a snapshot where it works well: these are all the moments where place cells were predicted to be active, and where non-place cells were predicted to be active, and you can see that it works really, really well. So there is information that is collectively coded in this network, and you can get at it with this model.
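The single-neuron prediction shown here has a simple closed form under the pairwise model above: holding everyone else fixed, the probability that neuron i is active is a logistic function of its own field plus the input from its partners (same generic h, J notation as before, assumed for illustration):

$$ P\big(\sigma_i = 1 \,\big|\, \sigma_{\mathrm{rest}}\big) \;=\; \frac{1}{1 + \exp\!\big(-h_i - \sum_{j \neq i} J_{ij}\,\sigma_j\big)} . $$

The peaks of this quantity over time are what get compared against the moments when the held-out neuron actually fired, which is what the overlay on the slide is doing.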
And maybe one last thing to say here. You could also ask me: maybe what's really happening is that you take the entire network, but there are two separate subgroups. Maybe place cells really predict the activity of the other place cells, so that if neuron number five is a place cell, most of the contribution when you look at the network comes from the other place cells; and the non-place cells are indicative of what's happening in the non-place cells, so the information is actually split. Is it the case that we really have two subpopulations here? The answer is no, because you can see that the place-cell contribution and the non-place-cell contribution are relatively equal, falling on the diagonal here, whether we're predicting the activity of place cells or of non-place cells. So we're really taking information from all of the network, which ties in to what was asked earlier: we look at all of it when we compute whether a given neuron was active or not. Milena, you're asking. Actually, I have two questions. The first one goes back a few slides, and it relates to a previous question as well: you don't have dynamics here, you don't really look at time anymore. But something you mentioned that I think is related to that, and to time, is the binarization process itself, right? Because that determines what your states look like and, in a way, what your dynamics look like. Yes. So I was wondering, since the results you got are very impressive, how specific do you have to be with the binarization to get them? Terrific question. If I understand correctly, Milena is asking about the binarization behind these population states. Indeed, we don't have dynamics; we take snapshots. Milena is saying: but you have determined the size of the snapshot, about 70 milliseconds, and if you were to choose a different bin size, that could throw off the entire thing. I did in fact try this: the imaging resolution is not the limiting factor here, and I doubled and tripled the bin, ran the same procedure, and built the same model again and again. As it turns out, other than making the correlations a little stronger, because you're integrating over a longer time, it does not harm the results at all. The thing is robust to this. Thank you for the question. Then, if I can ask my second question, which is related to the predictions you got: you showed the graphs where, for a place cell, you ignore the place cell itself but use everyone else to predict it. That's for one cell, not the entire place-cell population. Right, you ignore the place cell itself, but you use the activity of everyone else to predict it, and you showed that you cannot ignore the non-place cells, for example. My question is: if, instead of using everyone else, you use 90% of everyone else, or 80%, what is the limit at which you still pass chance level? I'm happy to talk about this later in detail, but let me say very succinctly that it starts to matter once you go below a certain threshold.
At 90% you're still fine; at 80% you're still fine. Once you go to 70% or below, it starts to matter very much who it is you discarded, because if you discarded all the place cells in the neighborhood of this place cell, your ability to predict the activity of that particular place cell decreases very dramatically. You still need some information about what's happening around that cell in the network. The redundancy is not such that every cell carries the same thing, or you wouldn't need any of them; what's really happening is that there is some redundancy, but it's not complete overlap. So the collective is very strong, but it starts to matter who you are discarding. I actually meant it in a spatial way, the neighborhood: if I am here, and the next neuron codes for the position next to this one, and I discard that neuron while this one stays in? Yes: if I discard both of its neighbors in terms of where they code, I no longer have information about that location at all. There is still some information in the rest of the neurons, but it is just not as high quality; it decreases a lot. More questions? Yes. The connections set up by this maximum entropy framework, how are they different from connections defined simply by correlation or covariance? Oh, they are the same thing. What I fit here are second moments, and what I'm predicting, what I'm comparing, are the covariances. The data is binary, so it doesn't matter whether you call it the correlation or the covariance: what I was plotting before were the covariances, what is fitted in the model are second moments, and they carry the same information. Thank you for the question. All right, let's go back. How are we doing on time? Okay. So this is the first part. I've shown you that there is collective information coded in both the place and the non-place cells, and I've also shown you that, surprisingly, simple models that include only individuals and pairwise interactions successfully predict population-level properties, which makes us very optimistic. There are some things we can't do with them, but at least we can get somewhere. So here's what happened next: even larger networks. Because this model took me a while, and while I was doing it the experimentalists were working much faster, and Jeff is far too talented. I'm going to finish this sentence and then take a question. What happened is that I basically blinked in order to write this paper, came back, and Jeff said: hey, I have two orders of magnitude more cells for you. And I thought, oh great, I wanted many neurons. And then I went back and thought, oh no, I have no idea what to do now. I cannot write these models for 2,000 cells; I'm not sure how to deal with it. And worse, if we keep going this way, every time I write a paper and come back he'll have two orders of magnitude more neurons, which is what's happening in neuroscience right now: we can record thousands and thousands of cells simultaneously, with imaging, with electrophysiology. I can't keep up, so we need something else. Just so you can see, this is what he was showing me, and remember when I said before that the neurons are kind of like the stars in the galaxy.
This is honestly what I was thinking when I was looking at this movie: it looks like the stars in the galaxy. These are about 2,000 neurons, and you can basically see the complexity right there; they are partially correlated, and there are all kinds of patterns going on. I'm going to take a question while you're looking at this beautiful thing. Yes. So, back to the Ising model: the Ising model is a nearest-neighbor interaction. Can you walk me through where the nearest neighbor is here? Because you're taking pairwise terms between neurons in different locations. Yes. Here's what I'm not going to do: I'm not going to walk you through this on the board right now. I am giving a talk with more technical details on July 12th, and Nick has the details, and I'm happy to talk about this more then. I will say, however, that it is nearest-neighbor in a functional sense: the correlated neurons are the neighboring neurons here. The concept is a functional kind of connectivity, not a spatial one: who am I talking to, not who am I next to, because neurons extend their arms and carry the conversation very far away, to thousands of others. Thank you. So the locality is in terms of interaction, not actual location? I'm not going to say yes to this, because locality is a complicated concept here, and I think you know that when you're asking. I'll keep the answer I gave you before, and let's talk more about locality later. And afterwards, yes. This is probably also a technical question, but maybe just in two words: how are you making the inference? That I'm happy to answer: I'm doing Markov chain Monte Carlo, MCMC. Specifically, here I was not running any approximation, so it was actually Gibbs sampling; in specific regimes and for specific networks you can also do quite well with all kinds of interesting approximate algorithms, but it is MCMC. Especially for larger systems, maybe? For the larger systems, yes, we will talk about that later; there are other solutions, none of them great. One thing I will take from this, and thank you for reminding me to say it, is that the inference procedure here, for those of you who care, is convex: there's no problem getting to where you want to go, you just need to not get stuck along the way in the energy landscape. Just a side comment. Yes. Something I didn't understand: your predictive model is linked to the type of experience the mouse is having, so maybe you get a different predictive model depending on the experiment you're looking at. Absolutely. Welcome to biology, this is not physics: everything is tied to the specific system in specific conditions, because you're sampling it in a particular condition. I would rather it not be this way, but it really is, and it's part of the magic, it's why it's beautiful. You need to look at many, many conditions. So every time I do this for a different mouse and a different day, I reconstruct a model; I'm not pooling those variables together. What I am comparing is how good the predictions are.
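The inference mentioned in this exchange is, in its standard formulation, convex maximum likelihood: nudge h and J until the model's means and pairwise correlations, estimated by Monte Carlo (Gibbs sampling is assumed here), match those of the data. A minimal, unoptimized Python sketch under those assumptions; all names and the specific learning-rate choices are mine, not from the talk.

```python
import numpy as np

def gibbs_sample(h, J, n_samples, n_burn=1000, seed=None):
    """Draw samples from P(s) proportional to exp(h.s + s.J.s / 2), s_i in {0, 1}."""
    rng = np.random.default_rng(seed)
    N = len(h)
    s = rng.integers(0, 2, size=N)
    out = np.empty((n_samples, N))
    for t in range(n_burn + n_samples):
        for i in range(N):                          # one sweep over all neurons
            field = h[i] + J[i] @ s - J[i, i] * s[i]
            s[i] = rng.random() < 1.0 / (1.0 + np.exp(-field))
        if t >= n_burn:
            out[t - n_burn] = s
    return out

def fit_pairwise_maxent(data, n_iter=200, lr=0.1, n_samples=5000):
    """data: (T, N) binarized activity. Gradient ascent on the log-likelihood;
    the gradient is simply (data moments) minus (model moments)."""
    T, N = data.shape
    mean_data = data.mean(axis=0)
    corr_data = data.T @ data / T
    p = np.clip(mean_data, 1e-4, 1 - 1e-4)
    h = np.log(p / (1 - p))                         # independent-model starting point
    J = np.zeros((N, N))
    for _ in range(n_iter):
        samples = gibbs_sample(h, J, n_samples)
        h += lr * (mean_data - samples.mean(axis=0))
        dJ = lr * (corr_data - samples.T @ samples / n_samples)
        np.fill_diagonal(dJ, 0.0)                   # no self-couplings
        J += dJ
    return h, J
```

The convexity of the underlying likelihood is what the speaker alludes to; the practical difficulty is only the Monte Carlo estimate of the model moments at each step.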
And if I keep doing this, the predictions are really, really good every time, across different mice, different days, different sessions; but I'm not building one model for the whole combined data set. Maybe you can classify, categorize some of them? Yes, and that question I would love to take later, because we have actually done this across about 900 different subsets, and there is a paper about it; I'm happy to tell you what we found, but it works very well when you take these kinds of metrics. Okay. So, about 2,000 neurons here, 1,500 in this recording. Here's what I'm going to do; I want to show you where I'm going with this. We need a different approach, and I was thinking maybe we'd look at the water again so I can show you what we want to do. Oh, I actually need one of the colors; take your green. Sorry, can someone help me out? Can you hold this above the glass, please? I do want to show you the following thing. Are you seeing this? What does it look like? Yes, and I can tell you're a first-year, because I wouldn't have answered that fast. Thank you, we're done; Deanna, thank you. I didn't have a bathtub or a sink here, so I had to do this in a glass, but I thought, well, we have water and containers, let's look at this. It looks an awful lot like a hurricane. And the reason is that whether this happens on a very small scale, or on a very, very large scale when we're looking at air molecules and water molecules, it's governed by the same equation. The same model, the Navier-Stokes equations, describes what is happening in this tiny glass and what is happening in a hurricane. And the reason it can work across such different scales is that we are not writing down the position and the velocity of every water molecule. We are writing down collective properties, aggregated measures: what goes into the model, and what lets it scale up and down like this, are things like the pressure, the temperature, the velocity of the whole flow. These are collective properties, aggregated measures, not each one of the individuals. So, looking at something like these 2,000 neurons, maybe the way to go is not to start from each individual, look at the pairs, and then try to get collective features, which worked really well but seems to keep growing in the number of individual units we need. Maybe we can move closer to what's happening here: let's look for something that carries across scales. I'll start from the smaller scales and go up, and see if I get something like this, because if I don't get something that scales up when I look at my data, then I can't write this kind of description down. There's no guarantee; this is not a system where this has been done before, and there is no guarantee that as I scale up it will look the same. What does it mean for something to look the same? Let me talk about this for just a moment. Looking the same means that as I zoom out, I ask whether the description stays the same. And how am I going to do this zooming out, what we call coarse-graining? Let's think about a bunch of neurons that are active or silent.
Or you can think about the atoms in the magnet from before, which point up or down, just arrows. Looking at all of them, it's very hard to tell what the trend of the whole thing is. So I'm going to partition this and look at just nine of them; still hard to describe. So in each partition I apply a majority rule: for these nine it's mostly up, so up; for these nine, down; and I can do this for all of them. And then I can do it again; it's an iterative process where I repeat the same procedure and scale up. Okay. I did not invent this: this is Kadanoff, and there's a lot of literature in physics on what we call the renormalization group, of which this is the first version. What matters here is that we are coarse-graining: I start from the small scale, go to the larger scale, and move from small individual units to aggregated properties, and I would like to know whether things stay the same as I scale up. And if you think that what I'm saying doesn't sound like it will work, I'll remind you that in a way we're doing this all the time, because writing down individual neurons is already taking an aggregate measure: the activity of a neuron involves a lot of ions and channels and organelles inside, and looking just at active-or-not is already, in a way, a coarse measure. It's just a matter of where you want to start the scales and how you move up. So let's do coarse-graining, and let's do it in the data. Say I have these 10 neurons and I want to scale them up. I'm going to look at how correlated they are; again, this is not a spatial thing, it's how much am I talking to whom: neuron one and neuron five could be talking a lot more, and be much more correlated, than two neurons that sit next to each other. So first I compute pairwise correlations, the same thing I was computing before. The pair that is most correlated, I sum up its activity: instead of a one and a five, I now have a one-plus-five variable that is aggregated and bigger. Two and nine might be the next most correlated pair, and I keep going; from the 10 neurons I get five of these summed variables. And because this is an iterative process, I can keep going: I look at those and say, oh, the new quantity that is one-plus-five is most correlated with three-plus-ten, so I put those together, and I get groups of four, and then of eight. And because we have about 1,500 cells in what I showed you, we can go up to clusters of 256. So I have groups going from 2 to 256 as I coarse-grain them up in this pairwise manner, and, exactly as I said, the new quantity is literally the summed activity of the two that came before. So what does it mean when I do this? I'm going to compute everything at the level of the individuals, the pairs, the fours, the eights, and I can look at anything I compute across the scales, as I scale the network up, and see whether it stays the same.
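A minimal sketch of one step of the coarse-graining just described: greedily find the most correlated remaining pair, sum the two signals into a new variable, and repeat; iterating the whole function then gives clusters of size 2, 4, 8, and so on. The greedy pairing below illustrates the idea; the published procedure may differ in its details.

```python
import numpy as np

def coarse_grain_once(activity):
    """activity: (T, N) array, one column per (possibly already summed) variable.
    Returns a (T, N // 2) array in which each new column is the sum of a maximally
    correlated pair, chosen greedily from the correlation matrix.
    (If N is odd, the one leftover variable is simply dropped in this sketch.)"""
    T, N = activity.shape
    corr = np.corrcoef(activity, rowvar=False)
    np.fill_diagonal(corr, -np.inf)                 # never pair a variable with itself
    unpaired = set(range(N))
    new_cols = []
    while len(unpaired) > 1:
        idx = sorted(unpaired)
        sub = corr[np.ix_(idx, idx)]
        a, b = np.unravel_index(np.argmax(sub), sub.shape)
        i, j = idx[a], idx[b]                       # most correlated remaining pair
        new_cols.append(activity[:, i] + activity[:, j])
        unpaired -= {i, j}
    return np.stack(new_cols, axis=1)

# Usage: iterate, starting from the (T, N) binarized activity.
# levels = [activity.astype(float)]
# while levels[-1].shape[1] >= 2:
#     levels.append(coarse_grain_once(levels[-1]))
# levels[k] then holds variables summed over clusters of 2**k original cells.
```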
What does it mean for something to stay the same as I scale up? There is a picture for this, which lets me do the fun thing of showing the Koch snowflake, which is always fun: as I zoom in or zoom out, it stays exactly the same. That is a spatial instantiation of the idea, but the same is true for any quantity that stays the same as I zoom in or out, and staying the same means a power law. Let me remind you what that means: if I have a curve like this for a thousand of those, and the same thing one order of magnitude lower, for a hundred, for ten, this is what they look like when I just plot them on linear x and y axes. But if I make the axes logarithmic, log-log, they come together and I get a straight line. Going back for a second: you can see that the shape of this function looks exactly the same, no matter the scale. So finding a power law means finding scale invariance, means that as I go up or down in scale I stay the same. It's like finding a Koch snowflake in the neural activity. Yes? If you aggregate with this pairwise cascade, versus aggregating in one shot at a certain level, do you get a different structure or the same one? Right, so you can think of clustering algorithms, or other ways of reducing dimensions, as a comparison to this kind of approach. The difference is that with those, we don't have a knob for moving between scales; I have to decide in advance that I'm clustering down to some number, so there's an arbitrariness there. But the point is not to cluster. The point is to find out whether I can actually scale up and down, which means that the thing I write down at different scales looks the same. It's about whether the model can live at the level of these collective properties, not just about reducing dimensions. But ask me more about this later, because, for the record, we also did this in a momentum-shell version, not only in real space, and it works really well, and that version does have a knob of more the kind you're describing. Yes. More questions? Let me maybe finish this, then. So I log-log my scales; the power law that had exactly the same shape at every scale now looks like a straight line, and you can see the equal spacings between the different scales. So if I compute something, and when I plot it on log-log axes it looks like a straight line, that's a power law, and that means, oh good, we actually have something that is scale invariant. So let's see. I'll tell you the ending, which is that we found this in many, many properties. When you do this, even though it is not guaranteed in any way, you can look at how the coarse-grained measures scale, and they really do exhibit power laws. You really can turn this coarse-graining knob, looking at individuals or at aggregated measures, and so we're very optimistic about it.
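One concrete way to read "a straight line on log-log axes" off the data, under the assumptions above: compute some quantity at each coarse-graining level (the mean variance of the coarse-grained variables is used as the example here, which is only one of several possible choices) and fit the slope in log-log coordinates. A hedged sketch:

```python
import numpy as np

def variance_scaling_exponent(levels):
    """levels: list of (T, N_k) arrays from successive coarse-graining steps,
    so level k holds sums over clusters of size K = 2**k.
    Fits  mean variance ~ K**alpha  by least squares in log-log coordinates."""
    sizes = np.array([2 ** k for k in range(len(levels))], dtype=float)
    values = np.array([lvl.var(axis=0).mean() for lvl in levels])
    slope, _ = np.polyfit(np.log(sizes), np.log(values), 1)
    return slope

# For reference: fully independent cells would give alpha = 1 (variances just add),
# perfectly redundant cells would give alpha = 2. Scale invariance shows up as a
# clean straight line in between, with the same slope across sessions and animals.
```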
We found this in multiple measures, statistical ones and dynamical ones. Let's keep an eye on our time here; yes, let's focus on one. You can believe me, or open the paper, that we found this in many, many places, and I will show you just one of them. Let's go to a dynamical one, because that's something we didn't do before: I'm going to look at the temporal correlations of the neurons, so actually in time, which here means asking what the timescale of activity in the network is. You can think of it this way: one individual neuron has a lot of fast fluctuations, but if you aggregate many neurons together, the result fluctuates more slowly, because the little things cancel out. So as we coarse-grain up, you would expect the timescale to grow, and that is exactly what happens: the collective, dynamical fluctuations of the network are indeed slower, which is the first good sign. And if we rescale the temporal correlations and plot them against the timescale tau, this is the straight-line business I told you about, and we indeed see a straight line. This is straight from the mouse data, I put nothing else in, and it really falls on the straight line, which is striking. It is especially striking because it lets me say something I can almost never say in biology: these results, between days, between completely different conditions, are incredibly reproducible, reproducible to the second place after the decimal point. That never happens. So we're really happy: the exponent, the slope of this line, is incredibly reproducible, it always falls on the straight line, and it holds across animals. Which means we have indeed found scale invariance, and we can indeed write down models for these collective properties. Now we have a handle on how to do it, and some faith that what we write down might behave like what you saw for the hurricane and the tiny swirl inside my glass, because it retains something that is important. And speaking of retaining something important, let's look at the one thing you might have thought was the most important from the start: there is place coding in this network. These are more neurons than we imaged before, but still about 50% of them respond to the location of the mouse. So what happens to place? If I lost the place coding while doing this procedure, maybe you wouldn't care that all these other things are scale invariant. But I'm not losing it: the location information actually spreads through the network as I aggregate up, from the individuals up through the coarse-grained groups, and it keeps going. You never lose the place information, and you can always decode place; it's not just that the information spreads, you actually have very good coding for place in the coarse-grained description. Okay, so what I have shown you is that we have this coarse-graining procedure that lets us write something simple that retains structure across scales, and that we have some faith in doing this because we actually see scale invariance. Even the place coding is preserved under the coarse-graining we came up with.
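And for the dynamical measure highlighted above, the temporal correlations, the same game in sketch form: estimate a correlation time for the coarse-grained variables at each level and ask whether it grows as a power of cluster size. The integrated-autocorrelation estimator below is one common choice, assumed here purely for illustration.

```python
import numpy as np

def correlation_time(x, max_lag=200):
    """Integrated autocorrelation time (in time bins) of one signal x(t)."""
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    if c0 == 0:
        return 0.0
    tau = 0.5
    for lag in range(1, max_lag):
        c = np.dot(x[:-lag], x[lag:]) / (len(x) - lag)
        if c <= 0:                                  # truncate at the first negative value
            break
        tau += c / c0
    return tau

def timescale_vs_cluster_size(levels):
    """levels: list of (T, N_k) coarse-grained activity arrays (clusters of size 2**k).
    Returns cluster sizes and the average correlation time at each level."""
    sizes = np.array([2 ** k for k in range(len(levels))])
    taus = np.array([np.mean([correlation_time(lvl[:, i]) for i in range(lvl.shape[1])])
                     for lvl in levels])
    return sizes, taus

# Bigger clusters should fluctuate more slowly; if the scaling holds, tau versus
# cluster size is again a straight line on log-log axes, and its slope gives the
# dynamic exponent.
```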
And together with what I showed you before, I'm also optimistic about the collective picture in that network: there is collective information coded in the place and the non-place cells, we can write simple models to get at it, and they successfully predict things at the population level. So the idea is to go for a language, for physics, that describes the collective and predicts at the level of the collective, rather than at the level of individuals, so that maybe we can actually understand something at the higher scales. I'll say I'm optimistic. I'm not sure I managed to convince you, but I think we can hope for simplicity. This makes me happy, and I hope it makes you happy. There is a lot of complexity that is not to be discarded, that is very important for particular questions; that is not what I'm saying. What I am saying is that if you choose the right things, there is hope that you can write something effective, something simple, and it can get you very far. Thank you. It was a great talk, it was a lot of fun, but I'm an engineer, so I start thinking about things. In the brain there are places where the neurons have very characteristic properties: in the basal ganglia there are the TANs, the tonically active neurons, which keep firing and pause at key points in a behavior. There's the VOR, where it's very obviously a rate code. But a tough one is the cortex, where all the experimenters just count the spikes and call out a number; it's easy to show that this correlates well with the experiment, but it can't be what the brain is actually using. I'm caught between liking your talk and wondering whether we'll have to know what the behaviors are first, before we can back into the physical model. Yes. Okay. I am entirely with you. The super unfortunate thing about my talk is that it does not answer the question of how the brain works; I am very disappointed by this too, but I'm working on it, and you're entirely right. There are many, many questions we could be addressing with this, and it's also true that it depends on the behavior. Your point, if I understand correctly, is that in a way there is no making sense of this except in light of the behavior. Like I showed you here: the mouse is running, something is happening, there is control over where it's going, there is coding for place. The function matters; this is not a passive medium, so don't talk to me about the weather, because there is actual biology here, and there is a particular computation that needs to be described. The point is well taken, and I am entirely with you. My hope is that, much as here, where even though we were blind to the computation we still managed to retain it and say something about it, whatever needs to be retained will survive in the physics of what we write down; that's the hope I have for the other things we will be writing, because I don't know how else to get a language for collective properties while also respecting all of this biological control. So I'm hoping there is a way, like here, to combine them. So take the basal ganglia context, an area very relevant to the motor habits we have and to many other things.
We know a little more about the activity there, as was mentioned; or take the cortex, where we have coding for many things and where, in our current understanding, it is much more of a disordered system, because we know less about what's happening there. Either way, no matter how much we know about what is being coded and what the area is relevant for, I'm hoping you can still do these things to understand the structure and the mechanism, and not lose track of the behavior, of the place, of the function. I'm hoping we can still do this. Yes. Hi. I'm not sure she's hearing me. Sorry, I can't read the name; maybe you can tell her. Izumi? I'm listening. Okay, can you hear me? Yes, I can. Oh, thank you. I really enjoyed your talk, thanks very much for that. As you mentioned in your talk, the drive in neuroscience now is to record from more and more neurons, with imaging or electrophysiology, and your method is fantastic; but I'm wondering, is there really a requirement to record from every single neuron, or is the implication of what you presented that you can actually learn something quite fundamental from a small sample? Yes, I think that's a terrific question; thank you for asking. In a way, because the brain is so complex, the theory has been lagging in neuroscience. We're struggling to catch up, because the system is very complex, and theory requires observations, and observations have only now started accumulating in sufficient quantity. There has been tremendous and beautiful work, but without access to more data there was no way to write down the things we can now write down. Then there's the question: now that we're in a situation where we just want to record more and more neurons, is that actually the answer to anything? We started doing it because we don't know how this works. And I'm hoping that by doing these kinds of things at the population level, we get answers like: actually, you don't need to go for a hundred million, I'm fine with a thousand. See what we can actually predict, at what scale, how far we can go, what we're missing; and then close the loop between theory and experiment, feeding back and saying, this is how far we can get and this is what we're missing, as the experimentalists in the field ask what to record next. Right now there's no real way to direct that; the notion of just recording more, what are we recording it for? I don't know. I do know that trying to answer these questions might give me some handle on it, and I'm hoping I don't need to send you off to record every possible ion channel in every possible cell. Great, thank you. Thank you. Yes. Or maybe let me do this: who has not asked a question before? Yes. Thank you. Thank you very much for the nice talk. My only question is about the preprocessing, because you are basically taking calcium data, which is not the "real" biological signal, it's calcium dynamics, and you are doing heavy processing on that calcium data; you are even taking another step and calling the events spikes. And then you are doing all of the modeling on this data set, and showing us beautiful correlations and fittings.
Do you think it really captures what is going on in the dynamics of the brain?
I do. I do, or I would not be standing here. But let me say a few things about this. I have taken liberties with what it is that I'm actually including, much as I've taken liberties with not including the cell types and not including the inhibitory neurons. I am taking specific information, building on it, and seeing how far it can go. I think the other way around would be very difficult: if I don't know what is relevant, I don't know how to take in all of it. So I need to start by deciding what is relevant and what can maybe be approximated. And the way to find out whether we were right is to get to the end, see how far we got and what we're missing, and say, okay, I'm going to go back, and actually I need this other thing too. But if I start with every possible thing, I will never be able to actually do a reduction. Specifically on the calcium point, I would also argue that we do not know that calcium dynamics are less important than spikes. We're very spike-oriented in how we think about things, and the calcium dynamics basically correspond to the spikes, just on a slower timescale, but they could be important in their own right, more than some other things. And it's true that we're not exactly sure what the unit is here; you're entirely right about that. You can play with some of the decisions about how to do the processing, like what Milena asked about, and we did, and we're hoping it's robust to this. But of course, these are choices. Yes? I'll repeat your question if people can't hear it.
Okay, thank you. So, going back to the Ising model. Sorry, I'm not hearing you. Can you hear me? Yes. Yeah, going back to the Ising model, because I believe you found a phase transition. Is that what you're describing at the end, when you're talking about scale invariance? You're asking me about the partition function? You're tying together the Ising model with the renormalization group, is that it, you're asking about the partition function? Okay. So I guess my question is: is there an analog quantity, let's say of temperature, on the Ising-model side, in the traditional sense? Did you find a disordered phase, an ordered phase, an order parameter, this kind of question?
I need a board for this; on July 12th there will be a more technical talk. However, let me say the following, and I'm going to combine both your questions: you're asking about order parameters, about the partition function, and about a knob here like temperature. These questions come in two different flavors, and you can tell what the person studied by how they ask. One flavor is: there is some kind of notion of temperature in your model, there is no notion of temperature in the brain, that makes everything completely irrelevant, I don't want to listen to you anymore. That is one type of comment, right? And the other is: oh, temperature is some kind of knob here, what is the parallel to it, what is the analogy, can I play with it? I'm going to take neither of these extremes and say that, indeed, temperature is not the relevant thing here, but knobs for changing things are very relevant. And so while there is no notion of temperature, there is a notion of how ordered and disordered things are, and this changes with the sparsity of the activity.
And so you can indeed be in situations where you have, and I'm going to say this very carefully, phase-transition-like things in your activity. Okay? The people who don't like this idea can fight with me later, but it is "-like" things when you write this down. I don't know that it is incredibly informative about function in the brain, but it is informative about your model, and we can chat more about this. Order parameters. The context for this, in the second half of the talk, is having some kind of knob, not temperature like in the first part, but some kind of knob in this idea of coarse-graining and moving between scales. I don't know what it is. You saw that I put numbers on the slopes of the lines I showed you, but I did not say anything about them, and the reason is that I am deriving this from the data, right? I had no expectation of what the slopes, what these exponents, should have been, and I don't know what the relevant thing is. So we are starting from the place of basically doing this in data space and trying to get to the place where we go for an order parameter once we write down a model for it. Talk to me later; there is also maybe something like a non-Gaussian fixed point in the data when you look at it, but I'm on unsteady ground here, we haven't done this before. I don't have a good answer for what the parallel to an order parameter would be here, but it is definitely the direction in which you would like to go. I think it is sensible to speak about knobs of change in the structure and in the function of the brain, and we can call them order parameters if it helps us with the math, much like with temperature. I'm going to be very careful and say that while I do hope we will get those, right now I don't have one in my hand, but I do think this direction is, for me, very promising. Sensible in that sense, not sensible in the sense of imposing an order parameter that I have chosen on a system when I'm not sure what it's doing, from a model that is not actually scale invariant and is not going anywhere. But if we can find a notion that is, right, new physics, and that is the beautiful thing about biology, it's so complex that we're finding new physics, finding how to tailor this so that we have this kind of language and write down something new for it, then very much so. Please help me out. Yes?
The last part, about coarse-graining. For example, if the place cells are responding to their place fields independently, like Poisson spiking, what kind of coarse-graining results do you expect?
Yes, right. So I didn't show this, but there is a control for the first part of the talk where, if you only use basically a place-coding response, it does not do as well on the predictions as the model we wrote down. So in that sense, if you had no overlap at all, no redundancy, nothing collective, and you were coarse-graining, you could be in a situation where you lose the place code as you sum things up. However, if the coding is strong enough, as you sum up two neurons, think about it: if you have silence in one and a very specific field in the other and you sum them up, you retain that amplitude as you sum. It is the same when the procedure then puts that neuron together with another neuron of similar activity. The way you are going to lose it is if your place coding was not very sparse: if you had a lot of fields in one, or a lot going on in the other, or very high noise, then as you summed them up you would lose the selectivity. But if you have silence in almost everything and a very strong amplitude in a very specific place in one of them, as you coarse-grain, that is the situation in which we actually retain it. And we are actually closer to that than we are to a complete mess in the place coding. That's a really good question. Thank you.
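To make that last point concrete for readers of this transcript, here is a minimal sketch in Python. It is not the speaker's pipeline: the helpers place_cell and selectivity, and every number in it, are invented for illustration. It sums a sharp, sparse synthetic place cell with a nearly silent partner, and separately sums two dense, noisy cells, and compares how much selectivity (peak-to-mean of the trial-averaged tuning curve) survives the summation.

import numpy as np

rng = np.random.default_rng(0)
n_bins, n_laps = 100, 200            # position bins, repeated traversals (made up)
positions = np.arange(n_bins)

def place_cell(center, width, peak_rate, baseline):
    # Bernoulli spiking per (lap, position bin) from a Gaussian tuning curve.
    rate = baseline + peak_rate * np.exp(-0.5 * ((positions - center) / width) ** 2)
    return rng.random((n_laps, n_bins)) < rate       # binary spike matrix

def selectivity(spikes):
    # Peak-to-mean ratio of the trial-averaged tuning curve.
    tuning = spikes.mean(axis=0)
    return tuning.max() / (tuning.mean() + 1e-12)

# Sparse case: one sharp field summed with a nearly silent partner.
sharp  = place_cell(center=30, width=3, peak_rate=0.6, baseline=0.01)
silent = place_cell(center=0,  width=3, peak_rate=0.0, baseline=0.01)
sparse_sum = sharp.astype(int) + silent.astype(int)

# Dense case: two broadly tuned, noisy neurons summed together.
dense_a = place_cell(center=30, width=30, peak_rate=0.3, baseline=0.3)
dense_b = place_cell(center=70, width=30, peak_rate=0.3, baseline=0.3)
dense_sum = dense_a.astype(int) + dense_b.astype(int)

print("sharp cell alone      :", round(selectivity(sharp), 2))
print("sharp + silent, summed:", round(selectivity(sparse_sum), 2))
print("dense + dense, summed :", round(selectivity(dense_sum), 2))

Under these made-up numbers the sparse pair keeps a peak-to-mean ratio well above one after summation, while the dense, noisy pair collapses toward one, which is the sense in which a sparse place code can survive coarse-graining while a dense, noisy one is washed out.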
Yes? Sorry. You mentioned that the function you're considering is a convex function, so it definitely has a global minimum. And it appears there is no thresholding on the correlations you're considering, so you have this big field, and you're doing the Gibbs sampling you mentioned, or some sort of MCMC sampling. So it's a very combinatorial thing you're approaching, for something that has a definite global optimum because it's a convex function.
So let me clarify, I might not have been clear. The convexity comment was a comment about the inference procedure, not about the function of the data. When you perform the inference, and I answered this quickly, the MCMC, which is not the data itself, just gives the guarantee that you actually get to a place where you fit the second moments, the pairwise correlations. It is not anything about the data, or how convex the data or the function of the data is. These things are separate; it's a technical comment about the computational procedure rather than a comment about function or about the data. Is that helpful? But, for instance, when you're talking about the Ising model you mentioned, there is this matrix of J's in there, and that would very much determine what kind of correlations are going to be plugged in. Yes. So by the end of this procedure, what I get is a set of coefficients, right? These h's and J's that I have inside are the things I get out of my model. I put in means and covariances, and I infer the coefficients, the couplings, the things that multiply each of those variables. Those parameters are what I infer out. And you're right that it's very large: it's N(N−1)/2 of them, because there are as many as you have pairwise correlations. But there's nothing convex about the J's themselves; it's the procedure of finding the J's that is convex. We can keep talking, yes, later, later.
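For readers who want the exchange above in concrete terms, here is a minimal sketch of this kind of moment-matching inference. It is not the speaker's code: the binarized data are fake, a 0/1 spin convention and exact enumeration over states are assumed, and at realistic N the exact average would be replaced by the Gibbs/MCMC sampling just mentioned. The free parameters are the fields h_i and the N(N−1)/2 couplings J_ij, and the convexity under discussion is a property of this optimization in (h, J) (the negative log-likelihood is convex), not of the data.

import itertools
import numpy as np

rng = np.random.default_rng(1)
N, T = 6, 5000
data = (rng.random((T, N)) < 0.2).astype(float)     # fake binarized activity "words"

emp_mean = data.mean(axis=0)                         # empirical <s_i>
emp_corr = data.T @ data / T                         # empirical <s_i s_j>

# All 2^N binary words; at real N this would be replaced by MCMC samples.
states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)

h = np.zeros(N)                                      # fields, one per neuron
J = np.zeros((N, N))                                 # couplings; only i<j used: N(N-1)/2 of them
for step in range(2000):
    # Model distribution P(s) proportional to exp(h·s + sum_{i<j} J_ij s_i s_j).
    energy = states @ h + np.einsum('si,ij,sj->s', states, np.triu(J, 1), states)
    p = np.exp(energy - energy.max())
    p /= p.sum()
    model_mean = p @ states                          # model <s_i>
    model_corr = states.T @ (states * p[:, None])    # model <s_i s_j>
    # Gradient ascent on the log-likelihood (concave in h, J): the step is the moment mismatch.
    h += 0.5 * (emp_mean - model_mean)
    J += 0.5 * np.triu(emp_corr - model_corr, 1)

print("largest mean mismatch       :", np.abs(emp_mean - model_mean).max())
print("largest correlation mismatch:", np.abs(np.triu(emp_corr - model_corr, 1)).max())

The updates are nothing but the gaps between the data's first and second moments and the model's, which is why matching those moments and maximizing the likelihood are the same fit here.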
Maybe we can take one more question from an online participant. Yes, you will be the last question. Tom, hi. Okay, just quickly, two things. Thank you for the talk, of course. One is: does your method allow any sort of guess about which neurons are maybe the most important for the higher-order interactions? And the second question, relatedly: could you guess which neurons ought to be coarsened together, or have different levels of coarsening, like maybe some things happening on shorter time scales and other things happening on longer time scales, something like that you could do in this framework?
Let me answer your first question together with Milena's suggestion, because they go together. Finding out which neurons are most important comes from looking at, and we can do this together later, basically the contributions from the J's from the rest of the network, the contributions from all the neurons. When you do this, you can see that while a neuron really does take contributions from all of them, some are more important than others, which is why I answered Milena that when you start discarding neurons, at some point it becomes really important which one you just discarded, because it takes away a lot of information. So yes, in that sense, it does find which neurons are most important to specific other neurons; that is something we can say. For your second question, about different coarsenings, I am actually hoping that this is robust to all kinds of coarsening. One can think about different ways to coarsen: I can coarsen not pairwise, I can coarsen not on the correlations, there are all kinds of things I can do, and people are trying all kinds of them. Some people here are talking to me about this, and I'm hoping to find out the answers. I haven't tried all the possible procedures, but I am hoping that this is the wrong question, not in the sense that yours wasn't a very good question, but in the sense that I'm hoping that exactly how I coarsen is not the thing everything depends on, because that would mean we have found something that cannot possibly be generalized. It's not something we can eventually find an order parameter for if it is incredibly dependent on the particular procedure. Thank you.
Thank you. We have a lot of discussion; stay here, you can stay. Those of you who need to go, thank you so, so much for coming. I'm staying here until the end of July, so please don't hesitate to be in touch. It's been really, really delightful, and every conversation is fantastic, so please talk to me more. Thank you.