And now, in the second part, I will talk about the experiment that underlies the data and the type of data you will be presented with today and tomorrow. As I said, it is similar to the much larger data set that you can work on in the second and third week. I will talk about how to estimate firing rates, a very basic measurement, but one you will do today. And then you can also measure directional tuning, or decode direction, today or tomorrow. That's why I'm going to work on this. This is the experiment: it's a monkey experiment, and the monkey sits in a monkey chair in front of a vertical panel with touch-sensitive LEDs. There is a time point called trial start. The monkey puts his arm in the center of this panel, which is about 10 centimeters, so he sits in front of the screen with his hand on the center button, and then he has to move his arm. At the time point that we call time point zero, the preparatory signal, a cue signal appears. The monkey is cued with some information and can now prepare a movement, but only one second later, 1,000 milliseconds later, will the reaction signal, or go signal, appear, and only then is the monkey allowed to move. There are three different conditions. In the first condition, the monkey has its arm in the starting position and suddenly one of the six LEDs turns green. The monkey knows he should now do the center-out task: move his hand from the center to that LED and touch the button. If this is correct, he's rewarded with a drop of juice; if it's incorrect, he's not rewarded. This is a classical, if you wish, center-out task. He has to wait for one second, and when the LED turns from green to red, he actually moves his hand.
In this condition, he knows exactly where to go after one second, so he can anticipate: he learns this one-second delay, initiates his movement even before the second is over, and has very short reaction times. But there are more conditions. Sometimes two targets light up, and one of them, chosen randomly, will be the final target. Here the monkey has incomplete information: he knows something about the movement he is going to make, but only incompletely. What is interesting about this type of delay experiment is that the delay lets you separate the execution of a movement from its planning or preparation, and lets you analyze these separately in the recordings. And of course there is also a three-target condition, where one of the three will be the final target. There is data from two monkeys, but I think you will actually work on only one. These were acute recordings, at a different position on each of the monkey's working days, and this is primary motor cortex. Yes, please, a question. [Question: With more than one green target, do both of them light up, and does the animal then wait until one of them turns off?] No, one of them will turn red and one of them will turn off. He has to choose the red one, otherwise the trial is incorrect. So here he can still plan; he has a sort of directional corridor. We will see later what comes out of this, and I will come back to it in the modeling part much later. Okay, now this is one single neuron. Again, the figure quality is not so good, but I guess you see what is meant. These are the six directions, and the individual trials are grouped by direction. What you see here is, for example, 41 trials in direction one. Each black tick here is one spike, each line is one spike train, and the same directional movement is repeated.
So this is the preparatory signal, the cueing, and this is the reaction signal, where the monkey is allowed to move. What you can already see for this one neuron is that there is strong activity in these directions and less activity in this direction. This is what we call directional tuning, classical work done in particular by Georgopoulos and co-workers in the late 70s and early 80s of the last century. Okay, the number of trials is very different for different directions. Why? What do you think? [Maybe only hits are counted?] The suggestion is that only correct trials are included, but that's not the point here, actually. So why is it not the same number in each direction? [Does the monkey have a bias?] The monkey has a bias? Hopefully not; then we would have to throw away the data. [Is the neuron tuned to one direction or the other?] No, it's not about tuning. Actually, it's an experimental thing; we control this, not the monkey. [It's random?] It's random. The directions are drawn randomly, which means there is something like a Poisson distribution of how many trials come out for each direction. And that's important, because if the monkey knew there were, say, 120 trials in total, and he had already been sent to one direction very often, then he would get biased: a monkey, like us, is clever enough to think, I have been there quite often, so the other direction will soon come. That's what we sometimes do in video games, too; you predict. So the direction sequence is random, and that's why the trial counts differ. It's also something you will not like when you work with this data, because you will do everything from scratch today and tomorrow. I just give you the data and tell you what to do, and you will program it. You can choose Python or Matlab; we'll talk about this later.
But in any case, it's already a bit of a hassle that you have different numbers of trials for different directions; you can't simply work with two-dimensional matrices, and so on. Okay, how can we measure a firing rate? There is a classical method called the peri-stimulus time histogram (PSTH): you just make a histogram. You make bins, histogram classes, along the time axis, count the number of spikes in each bin, divide by the number of trials, and then divide by the width of the bin, and you get a number per second, which is a firing rate. Another very simple method, which is what you are going to do today, is kernel convolution with a triangular, piecewise-linear kernel. What does that mean? It means you convolve a spike train, these individual spike times, with a window of a certain shape, a weighted window. Instead of averaging across trials, you can nicely do this for single trials by averaging over time; there is always a trade-off between averaging across trials and averaging across time. Here is a single-trial spike train; this one is actually simulated from this point-process intensity, so it is a point-process simulation. Convolution is the same as replacing each spike by a certain shape, for example this triangle. If you sum up all of these triangles, that is your estimate of the rate, and it is actually quite a nice estimate of the firing rate. This is a symmetric kernel; you can also choose other kernels, or non-linear methods, and so on. But here you will just do this simple kernel estimate. Then you have firing rates for each of the directions. This is exactly the same data as before, and now you might see better, with the average in black and the individual trials in gray, that there is quite some tuning of this neuron in a certain direction.
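The two rate-estimation methods just described, the PSTH and single-trial kernel convolution, can be sketched in a few lines of Python. This is a minimal illustration, not the exercise code; the function names, the input format (one array of spike times per trial, in seconds), and the choice of a unit-area triangular kernel are my own assumptions:

```python
import numpy as np

def psth(spike_trains, t_start, t_stop, bin_width):
    """Peri-stimulus time histogram: count spikes per time bin,
    then divide by (number of trials * bin width) to get Hz."""
    edges = np.arange(t_start, t_stop + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for train in spike_trains:          # one array of spike times per trial
        counts += np.histogram(train, bins=edges)[0]
    return edges[:-1], counts / (len(spike_trains) * bin_width)

def kernel_rate(spike_times, t_eval, half_width):
    """Single-trial rate estimate: replace each spike by a symmetric
    triangular kernel of unit area and sum all the triangles."""
    rate = np.zeros_like(t_eval, dtype=float)
    for s in spike_times:
        d = np.abs(t_eval - s)
        # triangle: height 1/half_width at the spike, zero beyond +-half_width
        rate += np.where(d < half_width, (1 - d / half_width) / half_width, 0.0)
    return rate
```

Because each triangle has unit area, the kernel estimate integrates to the number of spikes, so its units are spikes per second just like the PSTH.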
What is also interesting is that the individual firing rates look very variable; you might think noise in, noise out. What we are mainly going to talk about today is what this cortical variability is all about. If you do this for all six directions, you have the firing rates for six directions, and you can put them in a color code, with time along one axis; that's something you can do today if you want. There are other measures you can derive from this, such as the more or less classical tuning vector; here, for a single neuron, this is the angular uncertainty. For six directions, circular statistics tells you that this is the baseline, and then it goes down, so we have an uncertainty of 20 degrees or so. And this is a signal-to-noise ratio. So there are different ways of measuring tuning in a neuron. And if you want to decode direction, or information, from one neuron or a population of neurons, then nowadays everybody says machine learning: we need some sort of classifier or decoder. I just want to show you how you can do it with Bayesian decoding, because it is nicely intuitive, and if you have time left tomorrow, you might try it out. So what is the idea? We have a movement direction D and a firing rate R. What you would like to know, what we'd like to predict or decode, is the probability of observing a certain direction given your firing rate. I want to be able to say: given this firing rate, the probability for direction six is 0.2, the probability for direction one is 0.4. I want to know the most likely direction in which the monkey is going to move. However, this we cannot measure directly. What we can measure is, for each direction, the probability of rates: a distribution that we can estimate. So for each direction, what is the rate distribution?
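The classical tuning vector mentioned above can be computed as a rate-weighted vector sum over the movement directions: each direction contributes a unit vector scaled by the neuron's mean firing rate for that direction. A minimal sketch, where the function name and the input format (matched arrays of mean rates and direction angles in radians) are assumptions:

```python
import numpy as np

def tuning_vector(rates, angles):
    """Sum unit vectors for the movement directions, each weighted by
    the neuron's mean firing rate in that direction. Returns the
    preferred direction (radians) and the vector length (tuning strength)."""
    x = np.sum(rates * np.cos(angles))
    y = np.sum(rates * np.sin(angles))
    return np.arctan2(y, x), np.hypot(x, y)
```

For an untuned neuron the contributions cancel and the vector length is near zero; a strongly tuned neuron yields a long vector pointing at its preferred direction.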
So let's say the monkey makes roughly 40 trials per direction. We take most of these and estimate the distribution; we call this the model. Some of the data we do not use for the model estimate; we use it for testing. This is the idea of what we call cross-validation, and I'll show you in a minute how this works. But first, what is important is that Bayes had a great idea, or result, namely the relation p(D|R) = p(R|D) p(D) / p(R): you arrive at what we want, the probability of a direction conditioned on the firing rate, from what we can estimate, p(R|D), together with priors that we can also estimate. The probability of a direction, p(D), is fixed in our case: one sixth. And the overall probability of a firing rate, p(R), is what you normalize by. To show you this intuitively, or practically: this is exactly the same data you saw before, now for one particular single neuron. This is one trial of estimated firing rate, a single-trial firing rate, and this trial we did not use to estimate the model. For each point in time, and right now we are at 750 milliseconds after trial start, this time point here, we measure in this single trial 5.5 hertz. The question is: does this give us a probability for a certain direction? What you see here are simply histograms over firing rate, the most trivial way of constructing the model. Firing rate is on the x-axis, and this is the probability of observing a firing rate, for each of the six directions. So this is the p(R|D) term. [Question: Is this also in time?] Yes, this is for exactly one time point; you have to have this for all times, or at least that's how we did it. Again, there are different conceptual ways of doing it. And at this time point... oh, sorry, that was the wrong button.
You now check this 5.5 hertz: how probable is it under this distribution, and this one, and this one? And you can see already that here it is more likely. These histogram values are proportions: the 5.5 hertz always falls into one particular bin, and each histogram is normalized, meaning you sum all of its bins and divide by the total number of trials, so that the total probability is one by definition. Then you find that with probability 0.458, or 45.8%, this is the most likely direction. That's quite cool in a sense, because this is long before the monkey moves. And what you find in individual single neurons now is the following. Forget about the first column; here in the second column, B, we see three different neurons, and they show different types of tuning. This is now the probability, and this is chance level, one over six. This neuron encodes movement direction just after the cue was given, after the preparatory signal; remember the monkey only moves after RS. Then there is another neuron that, very classically, is tuned during the execution of the movement, but not at all during the preparatory phase. And there is another neuron that gradually builds up and holds information throughout, similar to a working-memory-type neuron. It is also interesting that one and the same neuron, here in C, can be involved in the computation of this arm movement quite differently depending on the conditions. This was the one-target condition, then two targets, then three targets. For one target, the monkey already knows exactly where to go, and the dashed lines show the firing rate: where the firing rate is high, there is also a high probability of predicting the correct movement. For three targets and two targets it is quite different; there this neuron is again involved, but in processing information after the final target was given.
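The histogram-based decoding step just described, looking up a measured single-trial rate in per-direction rate histograms and applying Bayes' rule, could be sketched as follows. The bin layout, data format, and function names are assumptions, and in the actual analysis the histograms would be re-estimated at every time point:

```python
import numpy as np

def fit_rate_histograms(train_rates, bins):
    """Estimate p(rate | direction) from training trials:
    one normalized histogram of single-trial rates per direction."""
    return {d: np.histogram(r, bins=bins)[0] / len(r)
            for d, r in train_rates.items()}

def decode(rate, models, bins, prior=None):
    """Bayes' rule: p(d | r) is proportional to p(r | d) * p(d).
    Look up the observed rate's bin in each direction's histogram,
    weight by the prior, and renormalize."""
    directions = sorted(models)
    if prior is None:                       # flat prior, e.g. 1/6 per direction
        prior = {d: 1.0 / len(directions) for d in directions}
    b = min(np.digitize(rate, bins) - 1, len(bins) - 2)
    post = np.array([models[d][b] * prior[d] for d in directions])
    return dict(zip(directions, post / post.sum()))
```

A held-out test trial (not used in `fit_rate_histograms`) is decoded by passing its measured rate to `decode`, which is exactly the cross-validation idea from the lecture.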
Okay, I don't want to go into much detail here. The idea is that you can do something similar with this data. [Question: The p(R) in Bayes' rule, do we consider that to be stable? Is it just the probability of the firing rate, without conditioning?] No, in our approach here we did this continuously over time, so the prior p(R), as well as p(R) conditioned on D, were both measured at each time step. That's why we see these differences here so nicely. Otherwise, if you compute it only once and keep it fixed, you get a very different outcome. But of course, this is just a Bayesian decoder, a very basic way of doing this if you wish, and you can throw the methods you learn during this week at the data you're going to have next week, where, for example, it will be different grip types and different experimental variations that you can decode. And if you do population decoding, if you take more than one neuron, say five, 10, 20, or 100, then you go up to almost probability one, and these are the different conditions. Here you hit a probability of 50%, which makes sense because the monkey doesn't have more information than one out of two, and here you hit one out of three, which is exactly the information the monkey has. Okay, that makes sense? Good. So this explains the data and where it comes from, and gives you some idea of what is in the data that you will be working on in the exercises.
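One simple way to sketch the population decoding mentioned at the end is to combine per-neuron posteriors under a conditional-independence (naive Bayes) assumption with a flat prior over directions: multiply the posteriors and renormalize. This is an illustrative assumption on my part, not necessarily the combination rule used in the study:

```python
import numpy as np

def population_posterior(posteriors):
    """Combine per-neuron posteriors p(d | r_i) assuming the neurons'
    rates are conditionally independent given the direction and the
    prior over directions is flat: multiply, then renormalize."""
    directions = sorted(posteriors[0])
    combined = np.ones(len(directions))
    for p in posteriors:                    # one posterior dict per neuron
        combined *= np.array([p[d] for d in directions])
    return dict(zip(directions, combined / combined.sum()))
```

With many neurons the product sharpens rapidly, which matches the observation that populations of 20 to 100 neurons reach decoding probabilities near one, saturating at 1/2 or 1/3 when the monkey itself only has two- or three-target information.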