Okay, welcome to today's integrative research seminar. It is my pleasure to introduce Rubén Moreno, who will be our speaker today. Rubén has a degree in physics and a PhD, also in physics, from the Universidad Autónoma de Madrid. He then spent several years in New York, working at NYU and also at the University of Rochester, until 2010, when he was awarded a Ramón y Cajal Fellowship, came back to Spain, and worked at the Hospital Sant Joan de Déu, which is one of the leading pediatric hospitals in Europe and a very active research institution. Since 2015 he has been part of our university, where he created a new group called Theoretical and Cognitive Neuroscience, and he is going to tell us about his research interests and his plans for the future of the group. So, please. Okay, thank you very much for the invitation to the department, and to Miguel Ángel and Vanessa for the organization of this seminar. Today I'm going to be talking about whether we can decode the brain. This work has been funded by this scheme. So, the brain consists of 100 billion neurons embedded in very dense circuits, like this one. If we take one single neuron in this circuit, can this neuron hear the rest of the neurons in the circuit, or can it just read out a few cells in the circuit? And what about us, theoreticians and experimentalists of the brain? Can we listen to one of these cells and determine what the circuit is doing, what the other cells are doing? In order to answer this question, we need to know how information is represented in the brain. One possibility, which I illustrate over here, is that every cell in the circuit represents a very complex feature, and the joint activation of the neurons that correspond to these features represents the perception of a high-level object, such as a face. This is an illustration of what is called the distributed coding idea of the brain.
Essentially, information is not in a single neuron; information is widespread across many cells in the brain to represent an item such as this face. Another possibility is that every neuron in a particular circuit represents a complex object, such as a face, with different neurons representing different, very special faces; in this case the activation of a single neuron will suffice to produce the representation, or the perception, of a given face, like this one. This illustrates an alternative code, which is called the sparse coding idea of the brain. What I feel is that the brain is even more complex than either of the two. Let's assume that at a particular moment in time there is an object in the world that is very important, such as this face. What I think is going to happen is that all neurons in this circuit are going to represent this object in a dynamical fashion. They're not going to be representing different features; they're going to have information about the relevant object in the world, in this case this face. I would like to use an analogy: this is a radio. I think that we can view the behavior of a single cell as a radio antenna that emits and broadcasts information to the rest of the brain. And at the same time you can view a neuron as behaving like a radio receiver, one that, if appropriately tuned, can listen to any channel that is played in the brain. For instance, any neuron here could listen to the relevant information in the world, in this case a face. So if information is widely spread in the brain, as we suggest, then we could pretty much take any bunch of cells in any area of the brain and decode the information that those neurons have about the stimuli. But what do we mean by decoding? My colleagues here in the department probably know very well what I mean by decoding, because they use this word in a very similar fashion, but for the general audience, what I mean is the following.
Let's assume that we stimulate the brain with a stimulus S and then record the activity of a few cells in the brain. Now we use an algorithm, a decoder, like a radio receiver, that interprets this information and tells us what stimulus was most likely presented to this person. What about this other scenario? What do we mean by decoding in this case? Let's assume that now there is no stimulus, but we know that the brain undergoes its own internal dynamics; you are thinking, you are feeling. In this case, we are very interested in trying to decode, or predict, what you are going to do next, what this person is going to do next. So what we are interested in decoding, or predicting, are the choices. And these choices, of course, are generated by some internal dynamics of the brain that doesn't fully correspond to the stimulus that has been presented and that is experimentally controlled. I would like to illustrate this with an example. This is a set of dots, many of them moving in random directions, and a fraction of those dots moving in the same direction. Your task here is to tell whether these dots are moving to the right or to the left. In this case, I don't need to be inside your brains to know what you are perceiving: you are perceiving rightward motion. But now let's look at this other example. In this case, we have a similar stimulus, but now all the dots are moving in random directions, and your task is still to tell whether these dots are moving to the right or to the left. In this case, I don't have any way to predict what your choice is going to be. I would need to be inside your brain and record from a few cells to predict whether you are going to tell me that these dots are moving to the right or to the left.
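The random-dot stimulus just described has one key parameter, the coherence: the fraction of dots moving in a common direction. A minimal sketch of that stimulus (parameter names and values are my own, not from the original experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dot_directions(n_dots, coherence, signal_dir=0.0):
    """Direction (radians) for each dot: a `coherence` fraction moves
    in `signal_dir`, the remaining dots move in random directions."""
    n_signal = int(round(coherence * n_dots))
    noise = rng.uniform(0.0, 2 * np.pi, n_dots - n_signal)
    return np.concatenate([np.full(n_signal, signal_dir), noise])

# High coherence: net motion is obvious. Zero coherence: pure noise,
# so the observer's choice cannot be predicted from the stimulus.
easy = random_dot_directions(200, coherence=0.8)
hard = random_dot_directions(200, coherence=0.0)
```

At zero coherence the stimulus carries no directional signal, which is exactly the condition where one would need to look inside the brain to predict the choice.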
In a classical series of experiments by Newsome and Shadlen, they trained monkeys to report the direction of motion in exactly the same stimulus that you have seen before. Their aim was precisely the question of whether it is possible to go inside your brains and find neurons that predict what the choice for that stimulus is going to be. This is an illustration of what the monkey has to do to solve the task. The monkey is fixating here; then two targets appear, one on the right and the other on the left, and then the random-dot stimulus appears in the center. The animal has to keep fixation over here and has to perceive, or try to extract information about, whether these dots are moving to the right or to the left, because that is exactly the task of the monkey: to tell whether these dots are moving to the right or towards the left. And of course, because we cannot directly ask the monkey to tell us its choice, these experimenters ask the monkey to make a saccadic eye movement from the fixation point towards the right target if the dots were moving to the right, or towards the left target if the dots were moving to the left. Of course, the interesting condition again is the one you saw last, the condition in which the dots are moving randomly in any direction, so we don't have a way to predict what the animal is going to do. And yet we are going to be listening to the activity of a few cells in this animal, hoping that by listening to the activity of these cells we can predict what the animal is going to do on that particular trial. I'm going to show you a video, but before the video I'm going to dissect it a little bit so you can interpret it correctly. This is going to be the fixation target, and this is the actual position of where the animal is looking on the screen. So this is the screen, and the yellow dot is where the animal is looking at a particular time.
Then two targets are going to appear, here and here, and the stimulus is going to appear in the center. On a trial-by-trial basis, the stimulus is going to be moving either in this direction, bottom right, or in this other direction, top left. And as I described, the choice of the animal is going to be reported with a saccade, a very fast eye movement from the fixation point towards the target. In this case the monkey perceived, or said, that the direction of motion was this direction, and that's why it chose that target. So let's listen now to one cell that was recorded in this animal while it was performing this task, in a condition in which the stimulus is non-informative: by looking at the stimulus, we won't know what the animal is going to choose. But let's see if we can predict what the animal is going to choose just by hearing this cell. So the activity is high now, activity low, activity high. What you just saw, what you just listened to, is the activity of a cell, and whenever the activity of this neuron was high, it was very predictive of the choice of the animal: top left. If the activity of this neuron was relatively low, it was very predictive that the animal would choose the other target. So this is a very simple scenario where we are very lucky: we find a cell such that just by listening to it we can predict what the choice of the animal is going to be. But in general, the problem is going to be much more complicated. So what are the tools that we use to decode information in the brain? With the advent of multi-electrode recording techniques, we are now in a position to record the activity of hundreds of neurons simultaneously using this kind of array. This is a Utah array that is inserted onto the surface of the brain, and as a result we can visualize the activity of hundreds of neurons simultaneously as a function of time at millisecond resolution.
This leaves us with very complex data that we typically summarize with statistical descriptors. One that we typically use is what is called the population vector: for each neuron, we count how many spikes there were in a particular time window, and we put this count in a vector; this is the response of a population of neurons at a given time. We know that the brain has a lot of variability, so we know that if we present a stimulus S to the brain, then we have to characterize the probability distribution of the responses under this stimulation. From this perspective, one can then use Bayesian inference or other machine learning techniques to extract information, and these are actually the mathematical frameworks that we use to read out information in the brain. Of course, I said that the problem of decoding is very challenging. So what are the challenges that we are facing? Well, the first challenge is that the response is never twice the same. Let me show you an example. This is a stimulus that is typically used in neuroscience; it is a grating moving in a particular direction. Now, if you record from a neuron in primary visual cortex, these neurons like this stimulus: if you put this stimulus in the receptive field of that neuron, you're going to get tons of spikes from the neuron. So you run an experiment, you record one trial from one such neuron, and this is what you get: a sequence of spikes as a function of time. By the way, this is what you heard before; these discrete events that you hear are called spikes. So what you see again is a sequence of spikes. This is telling us that this neuron is responding whenever this stimulus is presented, so that's good.
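The population vector and the decoding step can be sketched in a few lines. This is a minimal illustration, not the lab's actual pipeline: rates, window length, and the two stimuli are made-up numbers, and the decoder is a maximum-likelihood readout under an independent-Poisson model, one simple instance of the Bayesian machinery mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mean firing rates (Hz) of 4 neurons under two stimuli.
rates = {"S1": np.array([20.0, 5.0, 12.0, 2.0]),
         "S2": np.array([4.0, 18.0, 3.0, 15.0])}
window = 0.2  # spike-count window in seconds

def population_vector(stim):
    """One trial: a vector of Poisson spike counts, one entry per neuron."""
    return rng.poisson(rates[stim] * window)

def decode(counts):
    """Pick the stimulus that maximizes the independent-Poisson likelihood."""
    def loglik(stim):
        lam = rates[stim] * window
        return np.sum(counts * np.log(lam) - lam)
    return max(rates, key=loglik)

trials = [population_vector("S1") for _ in range(200)]
accuracy = np.mean([decode(r) == "S1" for r in trials])
```

Even with only four neurons and a short window, the decoder recovers the stimulus on most trials because the two rate vectors are well separated.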
But the surprise arises when exactly the same experiment is conducted, recording exactly the same neuron and using the same stimulus: when you repeat the experiment in a second trial, what you get is pretty much a completely different response. So again the neuron fires, but the way it fires is very different. And each time you repeat this experiment, recording from exactly the same neuron and stimulating with the same stimulus, you get a unique set of spikes, as you can see here. So this is the first challenge: we don't have a unique mapping between stimuli and responses. But it's not the only challenge; there is a second one. The second challenge is that this variability that is observed at the single-neuron level is also observed at the population level, and actually there is what is called covariability: the variability between neurons is correlated. This is a classical description, the cross-correlation function, which plots the joint probability density of having two spikes from two cells separated by some time lag. Here, the presence of a peak centered around time lag zero means that these two neurons tend to fire together in time. So this is what's going on when you have two neurons. But when you have a population of neurons, the correlated patterns that can arise are very, very complex, like this burst of activity around this time, as you can see here. This makes the whole thing very complicated, because both variability and correlations are going to make it harder, very difficult, to train the decoders and to learn the statistical models that would allow us to understand what the code of the brain is. So if we want to decode information from the brain, the first question we have to address is how much information is actually there in the brain. That's the first question.
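The cross-correlation analysis mentioned above can be sketched directly from two spike trains. This is a toy illustration under assumed parameters (spike times, jitter, and bin sizes are invented): train B is built to share spikes with train A, so the correlogram shows the central peak described in the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical spike trains (times in seconds) with a shared drive:
# half of B's spikes are jittered copies of spikes in A.
spikes_a = np.sort(rng.uniform(0, 100, 400))
shared = rng.choice(spikes_a, 200, replace=False)
spikes_b = np.sort(np.concatenate([shared + rng.normal(0, 0.002, 200),
                                   rng.uniform(0, 100, 200)]))

def cross_correlogram(a, b, max_lag=0.05, n_bins=20):
    """Histogram of time lags (b - a) over all spike pairs within max_lag."""
    lags = b[None, :] - a[:, None]
    lags = lags[np.abs(lags) <= max_lag]
    bins = np.linspace(-max_lag, max_lag, n_bins + 1)
    counts, _ = np.histogram(lags, bins)
    return counts, bins

counts, bins = cross_correlogram(spikes_a, spikes_b)
# A peak in the bins around lag zero means the two cells fire together.
```

With independent trains the histogram would be flat; the shared spikes produce the near-zero-lag peak that signals covariability.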
Psychophysical results, which measure the behavior of humans, have very often shown that our performance in the simplest tasks, such as orientation discrimination, is relatively poor, and we sometimes make very conspicuous mistakes. This relatively poor performance at low-level perception contrasts with our capability to deal with very complex problems, such as mathematics and philosophy. This means that somehow there is something limiting how much information we have about these very simple stimuli, and we don't know yet what limits our performance on very, very simple perceptual tasks. There are two possibilities here. One possibility is that there are tons of information in the input to the brain coming from our sensors, but the brain cannot make good use of this information; maybe it is suboptimal and loses a lot of information. That's one possibility. The other possibility is that actually the brain is very optimal, very well tuned, but the sensors are very crappy, they are very bad, so essentially the actual input information arriving in the brain may be very severely limited. The former possibility was excluded by this theoretical analysis, published recently. Here what I have is a neural network that is connected in an arbitrary way, with many neurons and connections, and we are going to study the dynamical properties of these networks and how they transmit or represent information. One very important point here was to fix the input information coming into the network. We did that by fixing a signal, adding some noise that is going to be shared across all the neurons in the network, and, in addition, some independent noise. The important factor here is the presence of shared noise: this shared noise allows us to control how much information is coming into the network.
But the point is that while we fix the input information, we can drastically change the dynamical properties of these networks. For instance, we can change the connectivity matrix, the way these neurons connect, in such a way that these networks work in what is called an asynchronous regime, a regime in which the correlations are very weak and neurons become almost independent. Or, by choosing a different set of parameters, I can make this network work in a very synchronous regime, a state in which you're going to have many neurons firing simultaneously in a very correlated way. This is a different visualization of the same result: these are the distributions of correlations across different networks, one network in which the correlation coefficients were very low, and another network, in green, in which the correlation coefficients were of order 0.1, which amounts to a two-order-of-magnitude difference in correlations between these two types of networks. So essentially we can span a huge repertoire of dynamical regimes by changing the parameters of this network. But now the question is whether the dynamical regime of this network affects how much information we can read out about the stimulus. That's the important question. And here you have to remember that the stimulus is fixed, so we know how much information is there. So now we can use the same techniques that we use to read out information in the brain to read out information in this artificial network. And this is what we found. What we plot here is the information as a function of the size of the network. The input information is indicated by this black line over here, and the dots are the results from simulations, from our decoders trying to extract information about the stimulus.
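The role of shared input noise can be captured in a highly simplified sketch (my own toy model, not the network simulations from the paper): each unit sees the signal plus a noise component shared by the whole population plus independent noise. Averaging over more units removes the independent part, but the shared part sets a floor that no amount of neurons can beat, which is exactly why shared noise fixes the input information.

```python
import numpy as np

rng = np.random.default_rng(3)

def estimator_variance(n_neurons, n_trials=2000,
                       shared_sd=0.5, indep_sd=2.0, s=1.0):
    """Variance of a population-average estimate of the signal s.
    Each neuron's response = s + shared noise + independent noise."""
    shared = rng.normal(0, shared_sd, (n_trials, 1))     # same for all cells
    indep = rng.normal(0, indep_sd, (n_trials, n_neurons))
    responses = s + shared + indep
    estimate = responses.mean(axis=1)                    # simple decoder
    return estimate.var()

small = estimator_variance(n_neurons=4)
large = estimator_variance(n_neurons=1000)
# With 1000 neurons the error variance collapses onto the shared-noise
# floor (0.5**2 = 0.25); adding more neurons cannot reduce it further.
```

The decoding error shrinks rapidly at first and then saturates: the information read out from a large network equals the input information, regardless of how many units you add.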
And what you can see is that, regardless of the working regime of this network, asynchronous or synchronous, there is very little dependence of how much information we can read out about the stimulus when the network is very small. And if we consider more realistic scenarios, where the networks are large, and in cortex networks are very large, here around 500 cells, then this holds exactly: regardless of the working regime of the network, we can extract all the input information. So it's pretty clear that the brain is very likely not responsible for our poor performance on very simple perceptual tasks, because it is very easy to transmit and decode that information; it would be very stupid to think that the brain doesn't know how to do this. The alternative is that there are some severe limitations in the input information. But how can this be possible? We are taught that the eye is an amazing device, but in reality we should acknowledge that it is not such an amazing device: it has aberrations, it has tremor, we need to wear glasses. So actually our sensors may introduce very strong limitations on how much pixel information there is about an image. The input to the brain, to start with, is severely corrupted by our sensors. This is the alternative that we have. And if this is the alternative, then we can make very interesting predictions about what kind of correlations will emerge in neural networks. This is what we did in this theoretical analysis, and I'm going to describe to you very intuitively what kind of correlations emerge when you have strong limitations in the inputs to your network. They are going to be a peculiar type of correlation. What I have over here is the average population response for a given stimulus. Each dot corresponds to a neuron in this population, and here we have on the order of 80 cells.
On the y-axis, what you have is the average firing rate at which every neuron would fire under this blue stimulus. We can call it the blue stimulus: there is a blue stimulus, and this is, on average, what every neuron would fire. Of course, you will have some neurons that fire a lot, like the V1 neurons that we saw at the beginning, and there will be some other neurons that fire very little. So let's make this a discrimination test. Let's assume that you don't know whether you're going to be exposed to a blue stimulus, a green stimulus, or a red stimulus. These three colors represent the stimuli, and these three sets of dots represent the average population response of a cortical circuit that you are studying and recording from. Okay? So let's think a little bit now. What happens if in the brain there is independent variability across neurons? We know that there is variability, so this average is not what you get to see, what the brain gets to see, on a single trial; on a single trial you may get something like that. Let's assume that there is independent variability across cells, and this will be the population response on a single trial. As you can see, there is variability around some mean. Now, if I ask you to tell me what stimulus was used, it is going to be very easy for you to tell me that it was the blue stimulus. And actually you would be right: what I did was take the average responses of this blue curve and add Poisson noise on top, and this is what I got. So you were right. But this is telling us that independent noise across neurons actually doesn't harm how much information there is: in the presence of this variability, when you have many neurons, you can average out this variability and tell very well what the stimulus was. It was the blue stimulus in this case.
But you can tell me, okay, this is a very, very stupid type of variability; here neurons are independent, and you were telling me before that neurons are correlated. Okay, so let's add correlations. Let's add very strong correlations. Let's assume that neurons are highly correlated with each other, and essentially all of these neurons on this particular trial experience a positive fluctuation above the mean. The black curve over here is what you got on this particular trial. And here I ask you the same question: what stimulus was used? Again, it is going to be very easy for you to tell that the stimulus that was used was the blue one. You are just using template matching, and you're going to say, okay, which one is closest? Yes, the blue one. So this is an interesting perceptual demonstration of a very important mathematical fact, which is that even very strong correlations in a network don't necessarily limit how much information you have in the network. Even if you have strong synchrony, maybe that synchrony doesn't limit information, like in this case. Okay, but then what type of correlations do limit information in the brain? Well, let's consider this scenario. Again, I'm going to be using the blue stimulus. Now let's assume that neurons on the left side of this hill have a positive fluctuation and neurons that lie on the right side of this Gaussian-like response have a negative fluctuation in their response, so that on a particular trial the response of this population looks something like that, essentially like a translation of the blue set of dots into this black set of dots. Now you are dead: if I ask you what stimulus was used, whether it was the blue stimulus or the green stimulus, you are not going to be able to tell.
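The three single-trial scenarios just walked through can be reproduced with a toy population hill and a template-matching decoder (all numbers here are invented for illustration): independent Poisson noise and a strong uniform gain fluctuation leave the decoder essentially intact, while a shift-like fluctuation that translates the hill, mimicking a stimulus change, is the one that hurts.

```python
import numpy as np

rng = np.random.default_rng(4)

n_neurons = 80
prefs = np.linspace(-40, 40, n_neurons)   # preferred stimulus of each cell

def tuning(s):
    """Gaussian population hill of mean rates centred on stimulus s."""
    return 30.0 * np.exp(-(prefs - s) ** 2 / (2 * 10.0 ** 2)) + 1.0

def template_match(resp, stimuli=(-5.0, 5.0)):
    """Decode by picking the stimulus whose mean hill is closest."""
    return min(stimuli, key=lambda s: np.sum((resp - tuning(s)) ** 2))

def accuracy(noise, true_s=-5.0, n_trials=500):
    """Fraction of trials on which the true stimulus is recovered."""
    hits = 0
    for _ in range(n_trials):
        if noise == "independent":       # independent Poisson noise
            resp = rng.poisson(tuning(true_s)).astype(float)
        elif noise == "uniform_gain":    # strong but harmless correlation
            resp = tuning(true_s) * (1.0 + rng.normal(0, 0.3))
        else:                            # shift-like fluctuation: the hill
            resp = tuning(true_s + rng.normal(0, 5.0))  # is translated
        hits += template_match(resp) == true_s
    return hits / n_trials
```

Independent noise and common gain fluctuations average out or stay orthogonal to the discrimination, but a translated hill is indistinguishable from a changed stimulus, so decoding accuracy drops.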
Was it the green stimulus with a fluctuation that moved the population hill towards the right, or the blue stimulus with a fluctuation of activity that moved the population hill towards the left? You won't know. Well, we have actually shown that this type of correlation is the only type of correlation that can severely limit information in the brain; you can have other types of correlations that won't affect how much information you have. So, a little bit more formally, but not much: we can vary a stimulus parameter S, which can be any stimulus parameter that you are interested in studying, and as a function of this S you can plot the average firing rate of your population in activity space. Here I plot it along two axes, two activities. This is going to define a one-dimensional manifold, something that moves you in a parameterized way as a function of S. The only type of correlations or fluctuations in the brain that can limit information are those that move you along this line. Essentially, these are the correlations that limit information: if those fluctuations move you along these lines, these one-dimensional manifolds, then you are not going to be able to tell whether there was a fluctuation generated intrinsically by the brain or whether the experimenter changed the stimulus. And that is the reason why these correlations are the only ones that limit information. So, a little more mathematically, but without going much into details: we can write down any covariance matrix that describes the correlation pattern, the synchrony patterns, across cells in a particular circuit as having two contributions. The first contribution is a contribution that doesn't affect information.
And there is going to be a second contribution that we call differential correlations, because it corresponds to the derivative of this function F at a particular point: we project onto this tangent, we call these differential correlations, and these are the only type of correlations that can limit information in a sensory circuit. Okay, so with this we have characterized what type of correlations will emerge if you have very poor inputs to a neural network. But what about this first term? Is this first term playing a particular role? The answer to this question was found, partially, in this paper that was very recently published in Neuron, by a student in the group. Here we went back to the tradition of describing the response properties of neurons. Just so you know: typically you have here a neuron that you are recording from, and a stimulus that is characterized by some parameter, for instance the orientation. The experimenter changes this parameter and plots the firing rate of this neuron as a function of that parameter, and typically you get a bell-shaped curve, something like that, indicating that this neuron likes particular orientations in the world. This is called the tuning curve of the cell. With Iñigo and other collaborators, we started to think about whether this tuning curve could actually be modulated by the activity of other cells in the circuit. This was the hypothesis. In reality a neuron is not isolated; in reality a neuron is embedded in a large circuit, and one thing that could happen is that the response properties of these cells depend a lot on what the other cells are doing. For instance, it could happen that if the activity of the population surrounding the neuron that we are recording from is low, the tuning curve of this cell over here is modulated negatively.
And it could also happen that when the activity of the surrounding population is very high, the tuning curve is modulated the other way, positively. In reality there are many possibilities here, and we were open to finding any of them. So we considered all these possible scenarios: we thought that maybe the modulation could be multiplicative, like this one, but why not an additive modulation, where essentially we add some activity at all orientations; and there could also be more complicated modulations, for instance a broadening of the tuning curve, or a displacement of the tuning curve. We were open to all these possibilities, and to our surprise the only two types of modulation that we found were multiplicative and additive modulations. We didn't find any other type of modulation, and here you have a representative set of examples. Here you have an additive cell: this is data from primary visual cortex, where we plot the response of this cell when the population activity was low and when it was high, and there is a very nice additive modulation. This other cell looks more multiplicatively modulated. The next interesting result we found is that neurons tend to have one flavor or the other, meaning that if a neuron had a strong additive modulation, it tended to have a very weak multiplicative effect, and vice versa. So here we find two different subsets of cells, multiplicative and additive cells. This is what kids have to learn in school when they start to learn math: multiplications and additions. And it looks like primary visual cortex has the tools to implement these two basic operations, and these are the only ones that we see in our data. So what is this doing?
So we looked more carefully at these two classes of cells, multiplicative and additive, and we found that the information that we could read out from these cells depended a lot on what kind of neuron it was. For instance, when we took multiplicative cells, then when population activity went up, the information that we could read out from them went up. And when we took additive cells, when population activity went up, the information that we could read out from them went down. So essentially these two types of cells transmit information in a way that depends on the population activity. It looks like these neurons are transmitting information, routing information within the neural circuit, in a very interesting way. But what about the total information? Is total information a function of population activity? This is what we address in this figure, where we have the coding performance as a function of the size of the ensemble. To our surprise, we found no dependence of total information on population activity. So population activity doesn't affect how much information you have in the neural circuit: there is some amount of information, and there is always that amount of information. But this gives us a general picture of what's going on. The circuit surrounding a neuron that you are looking at may be at a low firing rate or at a high firing rate. Okay? If your circuit has low population activity, then multiplicative cells carry little information, while additive cells carry a lot of information. But if population activity is high, multiplicative cells carry a lot of information and additive cells carry less. So it looks like population activity acts as a traffic light that controls which neurons are going to have more or less information at every moment in time. Okay?
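The two modulations just described, the two operations primary visual cortex seems to implement, can be written down directly. A minimal sketch with invented tuning parameters: a multiplicative cell scales its whole tuning curve by a gain tied to population activity, while an additive cell shifts the curve up by an offset at all orientations.

```python
import numpy as np

orientations = np.linspace(0, 180, 37)   # stimulus orientation (degrees)

def tuning(pref=90.0, amp=20.0, base=2.0, width=20.0):
    """Bell-shaped tuning curve: firing rate vs stimulus orientation."""
    return base + amp * np.exp(-(orientations - pref) ** 2 / (2 * width ** 2))

f = tuning()

# Multiplicative modulation scales the whole curve by a gain factor g;
# additive modulation adds the same offset a at all orientations.
g, a = 1.5, 5.0
multiplicative = g * f
additive = f + a

def depth(r):
    """Tuning depth: difference between peak and minimum response."""
    return r.max() - r.min()
```

The gain change increases the tuning depth, the distance between responses to preferred and non-preferred orientations, which is one intuition for why multiplicative cells become more informative when population activity is high, while a pure offset leaves the depth unchanged.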
So far we have described in quite a bit of detail how we can read out information from a sensory cortex and how much information we expect to find there, and we have seen how different aspects of population activity modulate this information. But let's come back to one of the questions that I asked at the beginning of the lecture: can we predict choices, something that is generated intrinsically in the network, something that has not been put there by the experimenter, or at least is not fully controlled by the experimenter? Well, this is the example that we saw at the beginning of the lecture: the monkey watching this random-dot stimulus. It is a very difficult stimulus, but we can find neurons that tell us what the monkey is going to do. But this was somehow an easy problem, because this neuron was read out while the animal was watching the stimulus. So what about trying to predict the animal's choices even before the stimulus has been presented? Is that possible? To address this question, we started a collaboration with Mavi Sánchez-Vives at IDIBAPS here in Barcelona, and we studied the behavior and the neural activity of rats. In order to maximize our chances of getting information about choices before stimulus presentation, we used this setup. Here there is a rat that self-initiates the trial by poking at the central port. This triggers a tone; then there is a delay, and finally there is a second tone. The actual stimulus in this task is the inter-stimulus interval, essentially the distance between the two tones; the two tones themselves are identical. The information is going to be in this ISI, and this ISI came in two flavors: the ISIs could be short, taking these four possible intervals, or they could be long, taking these other four possible values.
So this is a perceptually challenging task for the animal, especially when it has to distinguish between a long and a short interval. The task is to discriminate the ISIs, to tell whether the ISI is long or short. If the ISI is perceived as being long, the animal has to go to the right port and poke there to get a water reward, if that was correct. And if the stimulus was a short one, the animal has to go to the left port to get the water reward. If it does otherwise, it gets a time-out penalty, which is something the animal wants to avoid; the animal wants to do this task as well as possible. So this is the sequence of events that a rat will find on a single trial. But here we wanted to maximize our chances of finding choice-related activity even before the stimulus is presented. To do this, what we did with Sánchez-Vives was to introduce complex correlations across trials, an interesting structure in the environment. What was this structure? Let's assume that we start with a stimulus, and let's assume that it is a short ISI. And let's assume that the animal makes a correct choice, indicated by this green plus. If this happens, if the response is correct, then the next trial is going to be randomly drawn from a uniform distribution across all these eight possible values. So there is no information about what is going to come next after a correct response. Now let's assume that the next trial corresponds to a long inter-stimulus interval, and that the animal makes an incorrect choice. Then, in the next trial, the same stimulus is going to be repeated, and it is going to be repeated until the animal gets it right. 
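To make the repeat rule concrete, here is a small sketch of the trial sequence. The ISI values are hypothetical placeholders; the actual eight values used in the experiment are not given here:

```python
import random

SHORT = [50, 100, 150, 200]      # hypothetical short ISIs (ms)
LONG  = [300, 350, 400, 450]     # hypothetical long ISIs (ms)
ALL_ISIS = SHORT + LONG

def next_isi(prev_isi, prev_correct, rng=random):
    """Outcome-coupled trial sequence: after a correct response the next ISI
    is drawn uniformly from all eight values; after an error the same
    stimulus is repeated until the animal gets it right."""
    if prev_correct or prev_isi is None:
        return rng.choice(ALL_ISIS)
    return prev_isi

# After an error the next trial is fully predictable: the stimulus repeats,
# so the optimal policy is to switch ports.
assert next_isi(300, prev_correct=False) == 300
assert next_isi(None, prev_correct=True) in ALL_ISIS
```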
And when the animal gets it right, the next trial is again drawn randomly from the same uniform distribution across all these eight possible values, and so on and so forth. This structure is not commonly used, but it introduces a very interesting correlation that makes the environment a little bit more complex, because now the environment is formally described as an outcome-coupled hidden Markov chain. And this has some interesting predictions: we know that after a correct response the animal cannot predict what is going to come next, but after an incorrect response it should be able to predict what is coming next. After training across many, many trials, the animal is going to learn that after an incorrect response the same stimulus is going to be repeated next, and as a result the animal has to switch ports, has to go to the other port. So one interesting prediction here is that the animal should show behavior reflecting that it has learned this property of the task. So let's see the behavior of the animal. Here what you have is the psychometric curve: the probability of making a long choice as a function of the ISI. What you see here is something very sensible, which is that the probability of making a long choice is larger whenever the ISI is longer. So the animal is understanding the task; this is the perceptual part, the behavior of the animal on average. But what happens when the animal makes a mistake? When the animal makes a mistake, it should know that the next trial is going to be a repeat, and it should figure out what the correct response in the next trial is. When we plot the same curve for the performance of the animal after incorrect responses, what we find is that the performance has actually increased. So this indicates that the animal has indeed learned the structure of the task. 
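A minimal way to picture these two psychometric curves, using an invented logistic model rather than the fitted rat data; the midpoint and slopes are illustrative assumptions:

```python
import math

def p_long(isi, midpoint=250.0, slope=0.02):
    """Logistic psychometric curve: probability of a 'long' choice vs ISI (ms)."""
    return 1.0 / (1.0 + math.exp(-slope * (isi - midpoint)))

# The effect described above: after an incorrect trial the curve is steeper
# (better discrimination), because the animal exploits the repeat rule.
test_isis   = (150, 200, 300, 350)
p_all       = [p_long(isi, slope=0.02) for isi in test_isis]  # all trials
p_after_err = [p_long(isi, slope=0.04) for isi in test_isis]  # after errors

# Sanity checks: monotone in ISI, and a steeper slope gives more extreme
# (more accurate) choice probabilities at both ends.
assert p_all == sorted(p_all)
assert p_after_err[0] < p_all[0] and p_after_err[-1] > p_all[-1]
```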
So essentially, after an incorrect trial the slope of this curve has increased. We analyzed this behavior further by looking at the strategy that the animal was following on a trial-by-trial basis. Here what you have is a set of sessions for all three rats. On the y-axis is the probability of lose-switch: if the animal makes an incorrect choice, it switches to the other port. And on the x-axis you have the probability of win-stay: the probability that, if the animal gets it right, it tries the same port again in the next trial. What we see consistently is a win-stay probability that clusters quite close to 0.5, so pretty much indifference. But what we found, and this was very strong and very robust, is that the probability of lose-switch was consistently high for all sessions and for all rats. So essentially the rats mainly use the strategy of switching their choice after a wrong choice. And this is consistent with the experimental design; actually, it was designed that way. So with Mavi we wanted to record activity in orbitofrontal cortex. This is an area of the brain, in the rat and in other mammals, that is very high up in the hierarchy. It receives tons of highly processed information, but also sensory information. And it has been shown to play a very important role in decision-making, and in some interesting deficits of decision-making. But, interestingly, it had not been shown to convey signals about choices before stimulus presentation. So it was an interesting target for us to look at. And what we found was quite interesting: the first analysis that we performed on this data set showed very, very strong signals about the choices of the animal. Almost without any sophisticated analysis, we could find very strong signals about what the animal is doing. 
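The win-stay and lose-switch probabilities plotted on those two axes can be estimated from a session as sketched below; the session here is a made-up toy sequence, not recorded data:

```python
def strategy_probs(choices, rewards):
    """Estimate P(win-stay) and P(lose-switch) from a sequence of choices
    ('L'/'R') and outcomes (True = rewarded). A minimal sketch."""
    win_stay = win_n = lose_switch = lose_n = 0
    for t in range(1, len(choices)):
        if rewards[t - 1]:                          # previous trial rewarded
            win_n += 1
            win_stay += choices[t] == choices[t - 1]
        else:                                       # previous trial an error
            lose_n += 1
            lose_switch += choices[t] != choices[t - 1]
    return win_stay / win_n, lose_switch / lose_n

# Toy session mimicking the reported pattern: near-indifferent win-stay (~0.5),
# high lose-switch.
choices = ['L', 'L', 'R', 'R', 'L', 'L', 'R', 'L']
rewards = [True, False, True, True, False, False, True, True]
p_ws, p_ls = strategy_probs(choices, rewards)
assert 0.0 <= p_ws <= 1.0
assert p_ls > 0.5
```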
And we found neurons, for instance, whose firing rate is strongly modulated by the choice, long or short, at different time periods. But what was much more interesting and surprising were examples of single neurons such that, when we looked at them before stimulus presentation, before the animal could even possibly see the stimulus, we could predict from that cell what the animal would choose in the next trial. So here you have a cell whose activity is strongly modulated by the choice that is going to occur much later. In addition to these signals, we found other very interesting signals, for instance information about the past, which I'm going to describe in a second. So I'm going to show the results of a GLM analysis where we tried to estimate how many neurons had information about specific aspects of this task. This task is very complicated, so we introduced many possible regressors. To start with, we're going to look at what's going on before stimulus presentation. What I'm going to plot here is the fraction of single neurons that were found to have reliable information about each of the many possible regressors, and I'm going to digest this with you. So one important result was that information about events that happened two or three trials back was not encoded in orbitofrontal cortex. This is interesting, because information from that far back doesn't play any role in the behavior of the animal. And what we did find is information about what the animal is going to do: before stimulus presentation, we found a large fraction of cells that predicted what the animal is going to do next. And we also found information about "what I should do", a variable that takes into account whether there was a mistake in the previous trial or not. 
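As a toy version of reading out the upcoming choice from a single cell's pre-stimulus activity: the spike counts below are simulated under the assumption that the cell fires more before "long" choices, and the threshold decoder stands in for whatever readout was actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated pre-stimulus spike counts of one cell; trials where the upcoming
# choice will be 'long' are assumed to have a higher mean rate (toy values).
counts_long  = rng.poisson(8.0, size=500)
counts_short = rng.poisson(4.0, size=500)

# Threshold decoder: predict 'long' whenever the count exceeds the midpoint.
threshold = 6.0
hits         = np.mean(counts_long  > threshold)   # correct 'long' predictions
false_alarms = np.mean(counts_short > threshold)
accuracy = 0.5 * (hits + (1.0 - false_alarms))

# The cell predicts the upcoming choice well above chance, before any stimulus.
assert accuracy > 0.6
```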
And if there was a mistake, there is a variable that should tell the animal to switch ports, because it made a wrong response and the next trial is going to be the same stimulus, so it has to switch. So there was information about this variable, and there was also information about "what I'm going to do", about what the animal is going to do. Then we went to a different period of time, after the stimulus was presented, and we repeated the same analysis, and again we found strong signals about what the animal is going to do. Actually, the animal is still not doing anything, because it has just finished listening to the stimulus, so this is still prediction: we are predicting what the animal is going to do. And here, not very surprisingly, but as a very nice check, we found that there was tons of information about the stimulus: many neurons in this area encode information about the stimulus in this time period. And then we went to the final epoch, which is the actual movement. Here we found a very interesting observation, which is that pretty much all the neurons that we recorded in this frontal cortex had activity that correlated with the choice of the animal. So essentially the whole network represents the actual choice of the animal, and it was very, very surprising for us to find so many neurons encoding the choice. So here is just a summary of all these results: I'm plotting the fraction of neurons for the three epochs of time that I described, and let me just highlight two of these lines. The first one is the "what I'm going to do" variable: how many neurons tell you what the rat is going to do. We found a very sharp rise in the encoding of this variable, as you can see here, and, very interestingly, there was very strong and significant encoding of this variable even before presentation of the stimulus. 
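A minimal sketch of the kind of GLM analysis described above: simulated spike counts are regressed on a couple of illustrative task variables with a Poisson GLM fit by gradient ascent. The regressor set, weights, and data are all invented stand-ins for the real analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # trials

# Toy design matrix, one row per trial. Columns stand in for task regressors.
X = np.column_stack([
    np.ones(n),                      # intercept
    rng.integers(0, 2, n),           # upcoming choice (0 = short, 1 = long)
    rng.integers(0, 2, n),           # was the previous trial incorrect?
])
w_true = np.array([1.0, 0.8, 0.0])   # this toy cell encodes upcoming choice only
y = rng.poisson(np.exp(X @ w_true))  # simulated spike counts

# Minimal Poisson GLM: maximize the log-likelihood by gradient ascent.
w = np.zeros(3)
for _ in range(10000):
    mu = np.exp(X @ w)               # predicted mean counts
    w += 0.05 * X.T @ (y - mu) / n   # gradient of the Poisson log-likelihood

# The fit recovers a strong choice weight and a near-zero previous-error weight,
# so this cell would count toward the "what I'm going to do" fraction.
assert abs(w[1] - 0.8) < 0.15
assert abs(w[2]) < 0.15
```

In the real analysis one would repeat this per neuron and per epoch, and report the fraction of neurons whose weight on each regressor is statistically reliable.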
So it looks like this circuit is playing a very important role in generating the choices, or at least in representing the generation of the choices. The second line that I would like to highlight is the stimulus information. Stimulus information was very significant initially, ramping up around stimulus offset, which is very sensible. And when these two pieces of information, information about the past and information about the current stimulus, mix together here, the stimulus information goes down. So this whole set of results suggests the following picture: orbitofrontal cortex activity is consistent with a role in integrating past information with current information, and in the formation of choices, because essentially we have been able to detect the presence of very strong signals about choices. So let me summarize. We have described the type of correlations that limit information in the brain. We think that these correlations are going to be there in the brain; nobody has yet detected them, and they can be very tiny, but they will be there if we are right that information in our senses is strongly limited. I have also described another type of correlations that has nothing to do with stimulus encoding, but that may play a very important role in controlling how information is routed in the neural circuit. And we have seen that population activity controls not only the tuning curves of the cells, but also which type of cells have more information at any given time. And finally, we have found very strong signals about the choices of an animal before stimulus presentation. The fact that we can find so many neurons encoding choices in this task, and that we can even predict what the animal is going to do before stimulus presentation, suggests that a view like this may be correct, in which essentially all task-relevant information is widespread and broadcast throughout the brain. 
Every neuron will have access to the relevant information. So I would like to acknowledge the people who have been doing this work. Iñigo is responsible for the work on population-dependent tuning curves. Ramón is responsible for the last piece, the rat behavior and neural data. The first part was done in collaboration with Alexandre Pouget and other people who are not here. And I didn't have time to talk about the very interesting work on decision-making that another student in the lab, Philippe Sustek, is doing, nor about the very interesting work that Gabriela Mosul has started, trying to understand the neuroeconomic basis of choices in monkeys. So with this, thank you very much. Okay, thank you very much for this very interesting talk. So I wonder whether, well, as far as I understood, you focused on decoding visual stimuli, so visual information. Did you look at, let's say, more complex audio stimuli, such as language? And do you think that there is a difference between the two, the visual and the audio, or language more specifically? Well, we don't work on that line of research, but this is very, very interesting. The semantic representation of language information is really very, very complex. In contrast to primary visual cortex, where we know tons of things, where we know how the basic features are encoded, we know relatively little about how information is encoded in high-level auditory cortex. But in principle the same tools can be used to decode information relevant for any particular task. And actually, there is a Nature paper, where Tom Griffiths is one of the authors, where they use these types of techniques to create semantic maps that tell us how this information is represented throughout the brain. Yes, thank you for the talk. I have a conceptual question about the first part of the talk. 
The fact that the information that comes from the sensors is noisy, and that a little shift in cell activity can lead to indeterminacy about the effective stimulus: would that be related to the development of the interpretative capacity of the human brain? Well, if I understood you correctly, the point is why we have these very poor receptors? No, sorry. No problem. No, the question is whether the strong interpretative capacity of the human brain is somehow related to the poor performance of the sensors. Well, my feeling is that the complexity of the brain is most likely allocated not to processing very, very detailed sensory stimuli, but to doing very complex tasks. So the fact that we have relatively poor receptors... I mean, if you compare the optics of the eye with the optics of the camera in your cell phone, your cell phone is much better at getting pixel information; the eye is very bad at that. But maybe that was not a limiting factor in evolution, because we can see relatively well, and evolution went in a different direction, a direction in which the size of the brain grew just to make more complex computations based on this limited sensory information. I mean, it's okay to have limited sensory information, but it's not okay to not make good use of that information. And essentially, I think that all these high-level cognitive capabilities arise not to improve sensory information, because that is not the main role of the big chunk of the brain. So there are also other ways to record firing neurons, for instance calcium imaging, where you also try to record the signals from single cells firing, record the behavior, and analyze that. I do not know whether those are more accurate, and how do you see whether your methods or your tools can be applied to those signals for later analysis? 
I mean, the tools are general-purpose tools, so they can be applied, and actually they are right now being applied, to all sorts of data sets: fMRI, EEG, calcium imaging. But if you're asking me about the benefits of calcium imaging versus the spiking activity that is the main focus of our lab, I will tell you that calcium imaging so far doesn't have that great a temporal resolution. Its temporal resolution is around tens of milliseconds, if not worse, while the signals that you can record using electrophysiology go down even below one millisecond. So the temporal precision of the signals that you can record is much higher with electrophysiology, so far. And this can actually be important for some problems. For instance, for very fast behaviors, or animals that move very fast, having this very fine temporal precision is very important, because if you don't have it, then you can miss quite a lot of the computation that is going on in between. fMRI would be the poorest example in that direction: its time scale is half a second, so it is very difficult with this signal to target the processes that I described for, say, decision-making. Decision-making is very fast: you are presented with two options, and it can take less than 500 milliseconds. You process all the information right away, you integrate the information, and then you make a choice, all three things in half a second. All these fast processes are important per se, and we don't know how they are represented in the brain. That's why I think it's very important to go to very high temporal resolution. But all the techniques are being pushed in that direction: even fMRI will evolve in a direction where temporal resolution increases, and calcium imaging is almost guaranteed to reach very high temporal resolution soon. Thank you very much, it was a very interesting talk. 
I have a question looking, let's say, beyond the understanding of how the brain functions and trying to decode its activity. Are you focusing on some particular potential implications of this research, for neurodegenerative diseases or, I don't know, for enhancing brain performance, or some other application? Yeah, that's a very good question. In our lab we haven't yet targeted that very, very interesting topic, although our final goal is pretty much that: we want to understand what the neural code is, and to develop these tools in such a way that people will finally benefit from them. I mean, there are very, very strong groups that apply very similar techniques to move robotic arms with the brain, with the motor cortex, or to move a cursor on a screen. And so far we don't have a very good understanding of what the code is. In particular, the code for motor commands is very, very tricky; it's much more complicated, because there is an involvement of both the complexity of the neural signals and the encoding of properties as a function of time. Everything is very dynamic, and so far the tools that we have are not really nailing down that problem, not even in the best labs in the world. What these labs can actually do is move the cursor, but it's as if a two- or three-year-old were moving it: very, very jerky. So this is the state of the art right now, but it's going to improve. Okay, thanks very much. Then I would like to thank you for the great presentation. It closes the seminar today.