Thank you very much. Thank you for inviting me. How's the volume if I talk from here? Can you hear me back there? Thank you. I'm in a difficult spot because lunch is at 12:10, and it's ten minutes to twelve, so I have to be done in 20 minutes. So I could use the Pillow approach, which is to go at five billion words a second. No, I'll try to make it; let's see how it goes. So first of all, let me start way too broad and argue that the fundamental mandate of neuroscience, in my opinion, is to relate what happens in neural circuits to what happens as we act, move our body, and behave. There are lots of institutes with this kind of name being formed around the world. It used to be that people wanted to go from genes to behavior, but now we're getting a little more rational: we want to go from circuits to behavior. So let's just reason for a few minutes, a couple of minutes at most, about what that takes. A very tired analogy compares machines that we can build to machines that someone else, namely evolution, has built. Imagine that this laptop is the most complicated machine we can currently build, and imagine that we were trying to reverse engineer it. It runs video games, all sorts of things. Suppose a bunch of laptops arrive on Earth. We can play with them, open them, and figure out that there are microprocessors, and people get Nobel prizes for discovering the transistor instead of inventing it. So what would be the level of understanding you would want for this kind of system? Arguably, the level you would want is the level of algorithms, languages, and operating systems. Once you realize that that level exists, you can decouple the computations from the circuits, which could be very useful if some of these laptops that have landed are from different companies, with different microprocessors, et cetera. And so you break the question into three. What is being computed?
And that is a question that, if you understand it, helps guide research into how it is computed and also why it is computed. OK, so if we now take this to animals and brains and people, the idea is that there is this intermediate stage of neural computations, and that is what we are trying to understand. If you understand what is being computed, then that really guides our research into how it is computed. This idea of breaking the question into three is basically David Marr's idea from the 1980s, and when enough time goes by, you can recycle ideas pretending that they are yours, which is what I did recently. About 80% of people consider this absolutely obvious, and 20% think it's complete crap and that it's crazy: that you should instead go from understanding the connectome forward to everything else. I find that a bizarre statement. OK, so what I think is the common interest in this session, and maybe this meeting, is the understanding that the computations of interest are performed by neuronal populations. This, for example, would be a bunch of neurons responding to one stimulus. So what I'd like to do now is first give you what I think is a very well developed example of a situation in which we can describe the activity of a whole population of neurons with very simple equations, which can then in turn guide research into the underlying circuits and into why you would want to compute those equations. That is going to be an example published four years ago, and I'll try to go fairly quickly over it. Then I'll tell you all the reasons why this approach runs into difficulties and where the frontiers of this approach are. That will be unpublished work. All right, so take this population. This is a population that we actually recorded from in my lab, in cat visual cortex, with the same technique that you heard about in at least two talks.
You insert a Utah array, which is a 10 by 10 array of electrodes, into (in this case) the visual cortex of an anesthetized cat. As you might know, in carnivores and primates, perhaps including humans, there are these maps of orientation preference. This is primary visual cortex, about four millimeters by four millimeters to give you an idea, and neurons that sit next to each other tend to prefer similar orientations of visual stimuli. So when you shoot a 10 by 10 array like this into the map, by chance some of the electrodes land in places that like one orientation and some land in places that like another orientation. The first thing you can do is pretend that all of these electrodes gave you nice, well-isolated neurons. That's actually not true: some of them do, and some just give you multi-unit activity. But it doesn't matter here, because all the neurons that sit next to each other have similar preferences for stimulus orientation. So I'll keep saying "neuron" when, in fact, I'm not sure that each electrode gave me an actual single neuron. What we can do is take all of the neurons recorded here and label them by their preferred orientation. Then, when I show a stimulus, for example a stimulus with this orientation, I get this pattern of population activity, which I can then make cuter by, say, averaging over bins of neurons and fitting with a Gaussian. Up to now, I'm making a statement that is absolutely circular. Let me actually say it, and then you'll realize how circular it is: when I show a stimulus at 45 degrees of orientation, the neurons that prefer 45 degrees fire a lot, and the neurons that prefer minus 45 fire very little. Well, duh, that's how I defined them. But still, at least this gives you an idea that things are behaving the way they should.
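The labeling-and-binning step just described can be sketched in a few lines of Python. This is a toy illustration with invented numbers, not the actual analysis: units are tagged by their preferred orientation, averaged within bins of preferred orientation, and the population response to one oriented stimulus comes out as a bump centered near the stimulus orientation.

```python
import numpy as np

# Toy sketch (invented numbers): tag each unit by its preferred
# orientation, average units in bins of preferred orientation, and
# check that the population response to one oriented stimulus is a
# bump centered near the stimulus orientation.

rng = np.random.default_rng(0)
n_units = 100
pref = rng.uniform(-90, 90, n_units)   # preferred orientation of each unit (deg)

stim_ori = 45.0                        # orientation of the shown stimulus (deg)
width = 20.0                           # assumed tuning width (deg)

# Firing rate of each unit: Gaussian in (preferred - stimulus) orientation
rates = 20 * np.exp(-0.5 * ((pref - stim_ori) / width) ** 2) + 2
rates += rng.normal(0, 1, n_units)     # trial-to-trial noise

# Average over bins of preferred orientation, as in the talk
bins = np.arange(-90, 91, 15)
centers = 0.5 * (bins[:-1] + bins[1:])
binned = np.array([rates[(pref >= lo) & (pref < hi)].mean()
                   for lo, hi in zip(bins[:-1], bins[1:])])

# The peak of the binned profile sits near the stimulus orientation
peak_center = centers[np.argmax(binned)]
print(peak_center)
```

Fitting a Gaussian to `binned` (for example with `scipy.optimize.curve_fit`) would then recover the smooth curve drawn on the slide.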
And now we can go on to ask a slightly more interesting question, which is what happens when you put two stimuli on top of each other. That brings us to these six stimuli that were shown to the cat. By the way, all of these panels show populations of neurons; for example, this is the activity of neurons preferring different orientations. When I show nothing, I get nothing. When I show something horizontal, the neurons that prefer horizontal fire a lot and the neurons that prefer vertical don't. When I show something vertical, the neurons that prefer vertical fire and the neurons that prefer horizontal don't. When you quadruple the contrast of the horizontal stimulus, you increase the responses. You don't quadruple them; there is a bit of saturation. But notice that this Gaussian keeps the same shape, it just scales. All right. There's a mildly interesting case shown here, which is when you put two orientations on top of each other at the same contrast, the same strength. As you might imagine, there's a symmetry there, and the symmetry is preserved in the brain: you get activity in the neurons that like horizontal and in the neurons that like vertical. These individual activities are a little smaller than the activities you measured when the stimuli were presented alone, but they're still there. The really striking case is this one, where you take a stimulus that gave a very reasonable response when presented alone, and you sum it with a stimulus that also gave a nice response but is stronger in contrast. The one that's stronger in contrast completely wins, as if it had completely killed the response to the lower-contrast stimulus. There's no trace here of a response to this orientation. So basically, you go from a regime of summation to a regime of competition.
Now, if you want, you can use an equation to summarize these results, and this equation is the computation that I was telling you is the final result of this analysis. It's called the normalization equation. It says that primary visual cortex makes the following computation: in response to two stimuli, the population gives a Gaussian response to the first stimulus plus a Gaussian response to the second stimulus, each scaled by its contrast, but divided by two numbers. One is a constant, and the other is the root-mean-square contrast, which is simply the square root of the sum of the squared contrasts. Now, I'm actually hiding the fact that there are some exponents, which are important; I'm simplifying the equation for you. But all I want to say is that this really, really simple equation gives rise to these curves. Once you've fitted the single-orientation cases, you have no more free parameters, and you can very well predict both this regime of summation and this regime of competition. And just to show you the behavior of the model, I'll go now to a case in which we have many more stimuli and many more population responses. Remember, each of these graphs is the activity of about 100 neurons in response to one of these stimuli, and the curves come from the model. You can see that the model does a very good job. In particular, it captures a regime of simple summation at low contrast, a regime of sublinear summation where you still have summation but the responses are smaller than you would have predicted by just summing, and a regime of vicious winner-take-all competition that you see when different contrasts are involved. And we think this kind of computation has wider roles in the brain; it's been observed in various places.
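As a sketch, the simplified normalization equation (exponents omitted, as in the talk) can be written out directly. The symbols are my own labels, not the paper's: `c1` and `c2` are the contrasts of the two superimposed gratings, `sigma` is the semi-saturation constant, and the Gaussian profiles stand in for the fitted population tuning. All parameter values are invented.

```python
import numpy as np

# Sketch of the simplified normalization equation (exponents omitted).
# c1, c2: contrasts of the two superimposed gratings (my labels);
# sigma: semi-saturation constant; the Gaussians stand in for the
# fitted population tuning profiles. All parameter values invented.

def gaussian_profile(pref, stim_ori, width=20.0):
    return np.exp(-0.5 * ((pref - stim_ori) / width) ** 2)

def normalization_response(pref, c1, ori1, c2, ori2, sigma=0.1):
    """Population response to two superimposed oriented stimuli."""
    drive = c1 * gaussian_profile(pref, ori1) + c2 * gaussian_profile(pref, ori2)
    c_rms = np.sqrt(c1 ** 2 + c2 ** 2)   # root-mean-square contrast
    return drive / (sigma + c_rms)

pref = np.linspace(-90, 90, 181)         # preferred orientations (deg)

# Equal contrasts: both bumps survive, slightly scaled down (summation)
r_equal = normalization_response(pref, 0.5, -45, 0.5, 45)

# Unequal contrasts: the high-contrast stimulus largely wins (competition)
r_unequal = normalization_response(pref, 0.06, -45, 0.5, 45)
peak_lo = r_unequal[pref == -45][0]
peak_hi = r_unequal[pref == 45][0]
print(peak_hi / peak_lo)                 # high-contrast bump dominates
```

The same division by the pooled contrast produces both regimes: with equal contrasts the two bumps are merely scaled down together, while with unequal contrasts the shared denominator is dominated by the stronger stimulus, suppressing the weaker bump.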
But even in visual cortex, you could imagine that you are looking at two orientations at the same time and want to pay attention more to one or the other, if you had a mechanism that effectively increased the contrast of one of the two. And by the way, that's how attention is thought to work, at least by many in the field. That would push you off the diagonal, and the orientation you're interested in would win over the one you're not interested in. All right, so the reasons to be happy up to here are that we have a very simple equation, normalization, which describes the collective responses of a ton of cortical neurons. It accounts for striking flexibility in the way the cortex operates depending on context: high contrast, low contrast, different contrasts, et cetera. And it seems to operate in many other neural systems, which I have no time to show you, from olfaction in flies to value estimation in humans (by value I mean economic value). This is written up in a review that we recently published. OK, now, the idea is that maybe normalization is a canonical neural computation, which you could define as a computation that is replicated across brain regions to apply similar operations to different problems, much like the operating system of a computer, or pieces of instructions in software, would be used by different programs to achieve different results. It's possible that I overemphasize this normalization equation just because it's the one I happened to study for the last 20 years, and I study sensory systems. If you studied other systems, chances are you would bring up different computations as probably canonical, and maybe you'd be right. So I only talked about this one, but there are lots of others that are very interesting and that we could talk about. What I'd like to do now is go to where this approach, I wouldn't say fails, but where the frontiers of this approach are.
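The attention-as-effective-contrast idea above can be illustrated with the same simplified normalization form. This is a toy sketch with invented numbers, not a model of attention from the talk: boosting one stimulus's effective contrast pushes the pair off the diagonal and lets it win the competition.

```python
import numpy as np

# Toy sketch: attention modeled as an effective contrast gain in the
# simplified normalization equation (exponents omitted, numbers invented).

def peak_responses(c1, c2, sigma=0.1):
    """Peak responses of the two stimulus-preferring populations."""
    denom = sigma + np.sqrt(c1 ** 2 + c2 ** 2)
    return c1 / denom, c2 / denom

# Two equal-contrast stimuli: symmetric responses (on the diagonal)
r1, r2 = peak_responses(0.3, 0.3)

# "Attend" to stimulus 1 by tripling its effective contrast:
# it now wins the normalization competition
a1, a2 = peak_responses(0.9, 0.3)
print(r1 / r2, a1 / a2)
```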
Mostly I'm going to go into the business of the noisiness of responses. For example, here are six population responses to the same stimulus, and as you can see, they're not the same: there's variability. I'd like to tell you a little bit about this variability, and the first thing I'd like to tell you is that its origin is largely cortical. So suppose you record from two parts of the brain. This one is the lateral geniculate nucleus, which I will call the output of the eye, which is a bit of a simplification. And suppose you also record in visual cortex, one neuron here and one neuron there. What you see is that if you repeat the same stimulus three times, you get essentially the same response all three times from the geniculate neuron, and three very different responses from the neuron in visual cortex. In other words, the lateral geniculate nucleus, or rather the eye, basically works as a camera: you take three pictures of the same image, you get three identical pictures. The visual cortex works the way you would actually like a brain to work. If I show you my face, you would like your brain to think, oh, he's got a big nose. The second time you see my face, you don't want your brain to think, oh, he's got a big nose; you'd like it to think, oh yeah, I've already seen that face, right? So you want your brain to not respond equally to every image. And this is what happens in cortex; it doesn't happen before. By the way, this is not because neurons in visual cortex are intrinsically noisy. This slide is just a quantification of that variability, which I'll skip. It turns out that neurons in cortex, like neurons in most other places, are extremely precise machines. The only reason we use a Poisson model to describe them is that we have no idea of the noise that goes into them.
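The camera-versus-cortex contrast above is often quantified with the Fano factor, the spike-count variance divided by the mean across repeats of the same stimulus. Here is a hedged toy illustration with invented rates and trial counts: a reliable, camera-like stage gives a Fano factor far below one, while Poisson-like trial-to-trial variability gives a Fano factor near one.

```python
import numpy as np

# Toy illustration (invented rates and trial counts): the Fano factor,
# spike-count variance divided by mean across repeats of a stimulus.
# A camera-like stage repeats the same response; a cortex-like stage
# shows Poisson-like variability.

rng = np.random.default_rng(1)
n_trials, mean_count = 200, 20.0

# Reliable stage: tiny trial-to-trial scatter around the mean count
reliable_counts = rng.normal(mean_count, 0.5, n_trials)

# Variable stage: Poisson spike counts with the same mean
variable_counts = rng.poisson(mean_count, n_trials)

def fano(counts):
    return counts.var() / counts.mean()

print(fano(reliable_counts))   # far below 1
print(fano(variable_counts))   # near 1
```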
But the neurons themselves are not noisy at all; they're incredibly precise machines. OK, so it helps when you give a talk to know what your next slide is, but I have no idea, so I will advance. Ah, and this is supposed to play by itself, and it's not, so I don't know how to make it work. What I'd like to do now is show you the activity of a bunch of neurons recorded with calcium imaging and two-photon microscopy. And, yeah, is it showing? If it's not showing, let's not worry. It's not working. It's loading right now. OK, it's loading. Play it outside? Yeah. When you give someone your slides, always bring your laptop; I think this is what I'm going to remember. OK, so there was a beautiful movie here of neurons firing together, and I think the analysis of this kind of data is one of the frontiers of contemporary neuroscience, because you get the activity of hundreds of neurons at a time and you see interesting patterns of activity that you want to understand. So what I'd like to do now is bring you to very current research in my lab, where we're characterizing the fact that some neurons act as soloists and other neurons act as choristers. That will help us predict pairwise correlations between neurons, which is one way to describe the activity of a population. Here we move to yet another technique, which has already been alluded to: these multi-site silicon probes. The advantage of these electrodes is that each spike gets collected by multiple sites on the probe, which means that, on the principle of tetrodes basically, you can much better discriminate the spikes of different neurons. I share the lab with Kenneth Harris, as Sonia mentioned, and Kenneth is the master of spike sorting.
So you can pretty much believe that what I'll show you next comes from very well characterized single neurons. Here's an example of traces recorded from the end of such a probe, and we will be talking about the neurons with these different spike shapes. What I'd like to do now is show you the spike trains of those neurons, which are hard to see: this blue one was one of them, this is black, this is red, and I think there's green here, yes. If you want, you can also compute an average across the population of the activity during these several seconds. What I'd like to point out is that some of these neurons, for example the red one, are extremely well correlated with the population; we'll call such a neuron a chorister, meaning it's a member of the choir, or of the orchestra that's playing. Other neurons, like the black one, are very independent of, or uncorrelated with, the population; we'll call such a neuron a soloist. So what I'd like to do in the next three minutes is tell you a few things about choristers and soloists. The first is how useful this concept is for describing pairwise correlations between neurons. Now, clearly, if you want to describe population activity, you need to look at more than pairwise correlations, but let's just take that as an example of a possible goal. Measuring pairwise correlations is a problem of order n squared: if you have 100 neurons, you need to measure on the order of 5,000 pairwise correlations. I'd like to show you that knowing whether neurons are soloists or choristers brings this problem down to order n. To do that, I'll use a modification of a model that we published, which works as follows. Take each spike train and convert it into a sequence of zeros and ones, as Jonathan showed, and maybe other speakers as well. And now allow me to measure the following things.
First, we measure the average activity over time of each neuron. That's just how much that neuron likes to fire: one number per neuron. Then allow me to measure the instantaneous population rate, which is how much the population is firing at each moment. I don't need to keep track of time; I can just accumulate that into a probability, which gives a distribution of population rates, shown here. Then allow me to measure one more thing, which is how much of a chorister or soloist each neuron is. That is, what is the correlation between the population rate and the activity of that neuron? That's again an order-n kind of measurement. So now I have order n, order n, and the distribution, and I'd like to show you that, knowing that, I can do a pretty good job of predicting the pairwise correlations between neurons. Here's an example. On the abscissa we have the observed pairwise correlations between neurons; this is spontaneous activity. So, for example, this is a pair with a 10% correlation between the two neurons, and this is a pair with a slightly negative correlation. On the ordinate we have the pairwise correlations predicted by this really simple model, and it does a really nice job. Here's how it does in terms of the percentage of variance it explains: in some cases it predicts a lot of the correlations, in some cases less, and it turns out that that depends on how variable the population rate is. But I'll have to move on; I can't get into the details. So this is one thing I can tell you about soloists and choristers: if you know who's a soloist and who's a chorister, you can really use that to predict pairwise correlations. I want to tell you two more things about soloists and choristers.
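Before that, the prediction scheme just described can be sketched in code. This is a hedged toy version of the idea, not the published model: on synthetic binarized spike trains, each neuron's coupling to the summed population rate (an order-n measurement) is used, via the product of couplings, to predict the order-n-squared pairwise correlations.

```python
import numpy as np

# Hedged toy sketch, not the published model: neurons with a known
# "population coupling" (correlation with the summed population rate)
# should have pairwise correlations roughly proportional to the
# product of their couplings. Data here are synthetic.

rng = np.random.default_rng(2)
n_neurons, n_bins = 40, 20000

# Binary spike trains driven by a shared fluctuating rate, with each
# neuron given its own coupling strength to that shared fluctuation.
shared = rng.gamma(2.0, 1.0, n_bins)                 # common fluctuation
coupling = rng.uniform(0.0, 1.0, n_neurons)          # soloist .. chorister
base = 0.05                                          # baseline spike probability
p_spike = base * (1 + coupling[:, None] * (shared - shared.mean()))
spikes = (rng.random((n_neurons, n_bins)) < np.clip(p_spike, 0, 1)).astype(float)

pop_rate = spikes.sum(axis=0)

# Order-n measurement: each neuron's coupling to the population rate
measured_coupling = np.array(
    [np.corrcoef(spikes[i], pop_rate)[0, 1] for i in range(n_neurons)])

# Predict each pairwise correlation from the product of couplings
obs, pred = [], []
for i in range(n_neurons):
    for j in range(i + 1, n_neurons):
        obs.append(np.corrcoef(spikes[i], spikes[j])[0, 1])
        pred.append(measured_coupling[i] * measured_coupling[j])

r = np.corrcoef(obs, pred)[0, 1]
print(f"correlation between observed and predicted: {r:.2f}")
```

On this synthetic data the product-of-couplings prediction tracks the observed pairwise correlations well; the real model additionally uses the mean rates and the distribution of population rates mentioned in the talk.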
The first is that it turns out that choristers are way more responsive to visual stimuli than soloists, and that's what I'm showing here. The second is that you can actually probe causal connectivity using optogenetics, which works by shining light on mice that have been genetically engineered so that neurons in certain places respond to light directly. When you do that, you discover that the choristers are way more causally connected than the soloists. I'm going to come to a summary now. I told you three things about choristers and soloists: if you know who's a soloist and who's a chorister, you can predict the n-squared pairwise correlations, you can predict sensory responsiveness, and you can predict causal connectivity. I'm into your lunchtime, so I'm going to skip the philosophy, skip the movie that isn't going to run anyway, skip the behavior, and tell you that basically I think we're making quite good progress in understanding the computations performed by populations. But from the philosophy, maybe I'll tell you just one thing: when we work in visual cortex, we're so lucky that we can tag each neuron by so many attributes, where it is in cortex, what orientation it likes, what retinal position it occupies, et cetera. When we work in all sorts of other places in the brain, we don't have this luxury of being able to tag neurons, and I think that's why we need to look for intrinsic structure in population activity. And that's it, thank you so much. I was just wondering, in the competition situation, where one orientation is inhibited, why is it that we can still perceive both orientations? Yeah, I get that question often. The first two quick answers are: number one, we're not cats; number two, the screen is not properly calibrated. But more generally, you can demonstrate these effects in humans; you just need a much more controlled situation.
There are lots of psychophysical experiments where this kind of competition has been observed, and actually the same equation works very well. Thanks for a very interesting talk. I think some people, at least in discussions I've heard, say that this is all for simplified stimuli, and that things are very different with natural images. What about natural images and natural image processing; what is your response to that? OK, so simple stimuli are very helpful for characterizing the properties of the early visual system, such as the retina, the LGN, and primary visual cortex. The moment you start going beyond that, you start running into serious difficulty, and especially when you get to areas interested in, for example, the recognition of faces and things like that, you'd be crazy not to use faces to study them. So there's that aspect. Personally, I had a research program that started a few years ago, which was: aha, I'm going to take, for example, the LGN, first characterize how it works in response to simple stimuli like gratings, and then test whether this extends to movies. And this worked; it was a lot of work, but it was OK. The moment I got to primary visual cortex, I saw that the primary determinant of responses seems to be this ongoing activity, and that seemed to me more interesting than the details of the mathematics of image processing, which is why my lab now focuses mostly on that. But I think if one has the guts to do it, sure: characterize the system with gratings, dots, and bars, then obtain models that are nonlinear, otherwise you're wasting your time, and then ask whether they predict the responses to movies. It's a tough job, though. Now, questions; let's start here. Thanks, Matteo, that was a really nice talk.
I'm curious about the soloists and choristers, and I'm wondering if you could tell us anything more about their tuning properties. First I was going to say maybe it's a sampling issue, that the soloists are choristers of some other population you didn't record, but then you showed us that they have different responsiveness. So did you characterize their tuning properties, or what they are good for? First of all, it's entirely possible that the soloists are choristers of some other population, absolutely. And what are their properties? We looked at visual responsiveness, and then we looked at a litany of other properties where we could find no difference: orientation tuning width, receptive field size, contrast sensitivity. That doesn't mean there isn't an effect there. Our main intent here is to reduce the dimensionality of the description of the population. And it's funny that when you start doing this stuff, you immediately lose contact with the good concepts that you learned, like, oh, each neuron should have its orientation tuning, which it does, by the way. You immediately go into this more abstract world, which many of us find ourselves in. So, same question to you, right? You've got these traveling waves, but wait, don't you have some retinotopic representation in there? How do the two relate? In a way, you don't need the first to talk about the other, and vice versa. So I think there's still a disconnect there. Yeah, so as someone who's had a foot in both the experimental side and some of this statistical and theoretical analysis, there are two things that worry me a little bit, and I'd be interested in the comments of anyone, which is why I saved this for the end. One is that for population activity we end up having to make some sort of random-connectivity theory or sparse-connectivity theory.
And yet, when you look through a microscope at the brain, it's not random, and we don't know if it's actually sparse; we think it is, but we don't know. I'm worried, number one: what if it's really just a few inputs that dominate the responses of any neuron you're looking at, but you just don't happen to be able to record them? And the second is that, with everyone talking about behavior, we've started trying to collect data and correlate behavior in a more complicated way than just training an animal to do one simple thing. And I would make an appeal to people in this room: we need methods to deal with those kinds of data, because it's tremendously bothersome that we can't quantify them. It's a lot to discuss between now and lunch, but I can tell you what I'm trying to do in the lab. What we think is true, though this is so new that it's possibly quite wrong, is that these kinds of top-down signals are an expression of connectivity. By top-down I mean signals that are driven not by the visual input but by other things in the brain: what the animal is doing, what it's thinking, what its reward is, what its task is, et cetera. We think that most of these signals land in the apical dendrites. So what we're doing now with two-photon microscopy is looking at the responses of the apical dendrites of neurons and comparing them to the responses of the soma, and if this movie were working, you would see how this happens as you zoom down. The question you bring up about connectivity is a huge one, and I don't have an answer to it; maybe others do. But what I can tell you, from the standpoint of single neurons, is that we can look at compartments that receive different types of connectivity, the basal dendrites or the apical dendrites, and try to compare the properties of responses and of activity at these different levels. Maybe that gets at connectivity a little bit, but that's just my personal take.
Thank you very much. So, I hope you've now got some idea of the problems we face. I want to thank the speakers again; this part of the program is over now. OK, if we could have all of the speakers first: all of the speakers, if you follow Rosa up there, she can take you to lunch.