Fundamentally, I'm interested in the question of how our conscious visual perception arises from the coordinated activity of populations of neurons in the brain. My lab studies the sensory side of things: how the brain encodes the properties of environmental stimuli in a way that can actually drive perception. We combine a number of techniques, including electrophysiological recordings from multi-electrode arrays, behavioural testing in humans and animals, and decoding techniques adopted from machine learning to link our physiological and perceptual data.

In this talk, I'm going to start by identifying three major problems that sensory systems face, and I'll use those to motivate the rest of the talk. The first major problem is that individual neurons are noisy. If we look at sensory neurons, on average they have really nice smooth tuning curves like this; this would be a neuron encoding, say, direction of motion or speed of motion. But the responses to individual trials, or repetitions of a single stimulus, are really variable. Here I'm showing the scatter of responses to two repeated stimuli, A and B, and what you can see is that given the response of one neuron on a single trial, it's impossible to say whether stimulus A or B occurred, even if you knew it was one of just those two stimuli. And we know that this variability affects perception.

One might think this isn't a problem, because you could average across the responses of many hundreds of neurons and overcome the single-trial variability. But neuronal responses are correlated, and therefore redundant, so we can't just average out this noise. Here I'm showing the tuning curves of two neurons in different colors; the individual dots show the variability. If we plot the responses of one neuron against the other across repeated presentations of, say, stimulus A, the responses are correlated. It's a form of redundancy: a common noise that we can't get rid of just by averaging (there's a minimal simulation of this averaging problem below). These correlations come from common input. It could be common sensory input, but it could also be common feedback mechanisms associated with attention, alertness, or pre- or post-decision feedback.

The final thing, which I'm particularly interested in, is the fact that neuronal responses are context dependent. I'm showing that here in two ways. This is the direction tuning curve of a single neuron after exposure either to a blank screen or to motion in the direction indicated by the arrow. You can see that prior exposure to motion reduces the gain of the neuron, so it's less responsive, but it also shifts the tuning curve. So a neuron's tuning curve is not immutable. This kind of effect has been described many times before; what's particularly novel about our approach is that we're recording simultaneously from a true neuronal population and trying to relate that to perception in humans. So in my talk I'm going to look at how temporal context affects, firstly, human perception, and secondly, the encoding of neuronal responses in a non-human primate model.
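To make that correlated-noise point concrete, here is a minimal simulation; the rates, noise level, and correlation value are toy numbers, not recorded data. With a pairwise noise correlation c, pooling n neurons drives the independent noise toward zero but leaves a variance floor of c times the single-neuron noise variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 1000
signal, noise_sd, c = 20.0, 5.0, 0.2  # mean rate (sp/s), per-neuron noise SD, pairwise noise correlation

for n_neurons in (1, 10, 100, 1000):
    # Each trial's noise = a component shared by every neuron + independent noise.
    shared = rng.normal(0.0, noise_sd * np.sqrt(c), size=(n_trials, 1))
    private = rng.normal(0.0, noise_sd * np.sqrt(1 - c), size=(n_trials, n_neurons))
    rates = signal + shared + private        # trials x neurons
    pooled = rates.mean(axis=1)              # average across the population
    print(f"{n_neurons:5d} neurons: trial-to-trial SD of pooled response = {pooled.std():.2f}")
```

With these numbers the pooled standard deviation levels off near sqrt(0.2) x 5, about 2.2 sp/s, no matter how many neurons are averaged.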
And then I'm going to link the two and see how well we can predict perception from neuronal responses, and how well we can predict perceptual errors from neuronal responses.

Rather than just show you perceptual data, I thought I'd give you an example of this. If you look at the red dot there and keep your eyes fixed on it, what I'm essentially doing is adapting you to motion up and to the left. In a moment I'm going to change the stimulus. What I want you to do is keep looking at that red dot and judge the direction of the new stimulus. I'll tell you now that it'll be close to vertical, and I want you to tell me whether it's tilted left, tilted right, or vertical. So hopefully, if I set this up correctly, most of you perceived it initially as moving slightly to the right, and it very rapidly appeared to drift back to being vertical. Did people get that? Yeah, okay. So it should be a pretty powerful effect. That's evidence of the repulsive direction after-effect.

What this tells us is that the brain has made a mistake. The stimulus is moving straight up, but our sensory encoding has changed: by adapting you, we're changing how you read out your own neural encoding. One interpretation of this is that the brain is not actually self-aware of its own sensory sensitivity. So given that I've just shown that adaptation can cause perceptual errors, why might it have evolved? It's possible that it enhances sensitivity to other stimuli, or that it helps with energy efficiency. That's an aspect we're currently exploring, but I'm not going to talk about it today.

Instead, what I'm going to spend most of the talk focused on is our multi-electrode array recordings. We make these in area MT. The middle temporal area contains mostly motion-sensitive neurons, and my work and previous work have shown that activity in this area is causally related to the perception of motion direction and speed. We use Utah arrays with 96 extracellular electrodes. I should say that a typical experiment lasts for about 48 hours, and in that time, because we're collecting data at 30 kilohertz, we collect about 500 gigabytes of data. So there's a huge data-processing pipeline that we have to have in place, which I'm not going to go into today. What I find exciting about this particular work is that everything I'm going to show you is based on just three hours' worth of data. It's still a massive data set, though: a couple of years of analysis by one of my postdocs.

So we record these responses to moving stimuli. We have a continuous adaptation stimulus in which dots are presented on a computer monitor in front of the animal. They move in the same direction for 500 milliseconds, and every 500 milliseconds we choose a new direction, one of 12. In an hour we get 600 repeats of each of the 12 unique directions, so we have a lot of statistical power, and that's really important here. What we're particularly interested in is the effect of context: how does upward motion affect the response to subsequent motion up and to the left? We can look at this with what we call a one-back or a two-back paradigm, asking how the separation between the adapter and the test period, as I'll call them, affects tuning (there's a toy sketch of this stimulus sequence below). Okay. So I'll start by showing a single-neuron example. Here I've got the responses of a single neuron to two different adaptation directions.
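As a concrete picture of the stimulus protocol just described, here is a toy reconstruction; the uniform random choice of direction on each period is an assumption, but the numbers (12 directions, 500 ms periods, roughly 600 repeats per direction per hour, 144 adapter-test pairings) follow from the talk.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
directions = np.arange(0, 360, 30)             # 12 directions, 30 deg apart
n_periods = 7200                               # one hour of 500 ms motion periods
seq = rng.choice(directions, size=n_periods)   # a new random direction every 500 ms

# One-back: each 500 ms period is a "test" whose "adapter" is the previous period.
one_back = Counter(zip(seq[:-1], seq[1:]))
print(len(one_back), "unique adapter-test pairings")    # 12 x 12 = 144
print("repeats of each direction ~", n_periods // 12)   # ~600 per hour

# N-back generalizes this: pair each test with the period N steps earlier.
def n_back_pairs(seq, n):
    return list(zip(seq[:-n], seq[n:]))
```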
We talk about neurons having a preferred direction, so the yellow arrow indicates that the adapter is in the near-preferred direction; it evokes a high firing rate from this neuron. If we then present a stimulus moving in the neuron's preferred direction, you can see that the response to that test stimulus is affected by the previous adapting stimulus. And that's also true if we keep the adapters moving in the same directions but have the test moving in the opposite direction. So the stimulus presented during the adaptation period affects the response of the neuron to the test period. We can do this for all 12 possible adapters in association with each test direction, so ultimately we get 144 of these pairings. I'm not going to show you all of the responses for this one neuron; instead, I can summarize them as tuning curves. What these tuning curves show is how the response to the test direction is contingent on the previously presented adapter. I'm going to call the condition shown in blue the control: the average spiking rates after adaptation in the anti-preferred direction. You can see that in comparison with preferred-direction adaptation, shown in yellow, we get reduced responses here, but we actually get increased responses here. The yellow doesn't come out particularly well, but the fact that we get increased responses here is an indication that we're not just fatiguing the neurons, not just reducing their activity; we're fundamentally changing their tuning properties.

So that's just one neuron. To illustrate what's happening across the population, we have some simple summary metrics. For each neuron's tuning curve, we can look at the gain, which is the peak response; the preferred direction, which evokes that peak response; and the bandwidth of the tuning (there's a sketch of this kind of tuning-curve fit below). Across the population, we see robust reductions in gain as a result of adaptation. What's shown on the x-axis is the difference between the adaptation direction and the preferred direction: when the adapter matches the preferred direction, we get the maximal reduction in gain. I've got two animals here; we have a third animal now, and it shows an intermediate effect. So this is a very robust, highly statistically significant effect. But we don't see similar changes in either preferred direction or bandwidth. With fitted curves we do get statistically significant changes here, but we've got so much data that it's very easy to get a statistically significant effect; when we look at the effect size, it's trivial. So from now on, I'm just going to focus on the changes in gain and ignore changes in preferred direction or bandwidth. Adaptation also affects variability and co-variability, the correlated noise that I was talking about, but I'm not going to go into that today; there isn't time.

I mentioned at the start that we can look at the effects of different separations between the adapter and the test. The previous slide showed just the one-back effect, which corresponds to these data points here, where we're getting a 10% to 20% reduction in gain. But this effect is surprisingly long-lasting; we were actually astounded by how long it lasts. We can go back to adaptation periods that are separated from the test by up to four intervening motion periods and still see that they affect neuronal tuning.
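The talk doesn't say which functional form is fitted to extract gain, preferred direction, and bandwidth; a von Mises (circular Gaussian) curve is a standard choice for direction tuning, so here is a minimal sketch under that assumption, with simulated responses standing in for real spike rates.

```python
import numpy as np
from scipy.optimize import curve_fit

def von_mises(theta_deg, gain, pref_deg, kappa, baseline):
    """Direction tuning curve: 'gain' is the peak response above baseline at the
    preferred direction; larger kappa means narrower tuning bandwidth."""
    theta = np.deg2rad(theta_deg - pref_deg)
    return baseline + gain * np.exp(kappa * (np.cos(theta) - 1.0))

directions = np.arange(0, 360, 30)                   # the 12 test directions
# Fake mean responses for illustration: a neuron preferring 120 deg, plus noise.
rates = von_mises(directions, gain=40, pref_deg=120, kappa=2.0, baseline=5)
rates = rates + np.random.default_rng(2).normal(0, 2, size=directions.size)

p0 = [rates.max() - rates.min(), directions[rates.argmax()], 2.0, rates.min()]
(gain, pref, kappa, base), _ = curve_fit(von_mises, directions, rates, p0=p0)
print(f"gain = {gain:.1f} sp/s, preferred = {pref % 360:.0f} deg, kappa = {kappa:.2f}")
```

Comparing the fitted gain across adapter conditions is then the kind of summary the population analysis relies on.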
Another way of thinking about this is that 500 milliseconds of adaptation can cause changes in neuronal tuning that last for over two seconds. So we see this in the neurons; what we wanted to know was whether the same sort of thing is happening in human perception. The approach we've taken is essentially to treat it as a classification problem. Given the responses of 40 to 60 neurons from a single animal to the 12 different directions, how reliably can we predict what the stimulus direction was? And if we take the adaptation and test pairs, how well can we predict adaptation-induced errors like the ones humans make?

This is a very common problem that people have solved using things like support vector machines or linear discriminant analysis. We're using a logistic generalized linear model, partly because it has some similarities to brain function that we like. The aim of all of these classification techniques is, given a training set of data (in this case blue represents responses to one stimulus, orange represents responses to another, and each data point corresponds to the response of the population of neurons on a single trial), to find the discrimination boundary that best allows us to classify which stimulus was presented. We can use a training set, and then a test set that was not involved in training the classifier, and ask: for this held-out point, shown as a pentagon, which stimulus was it most likely to be? We can do this given the responses of all of our neurons, and for all of the stimulus directions we've used (there's a toy sketch of this decoding pipeline below).

When we do that, our decoder performs quite well. We get a continuous estimate of direction out of the decoder, which gives us a prediction error; the x-axis here is the prediction error. Everything in this pink band is where the error is less than 15 degrees, and because our test directions were separated by 30 degrees, we treat this as a correctly identified direction. On that criterion we achieve 55% correct decoding performance. A few caveats: this is based on 30 milliseconds of activity from 20 neurons, so we're severely limiting the amount of data that our decoder has access to, and our chance rate is 8.3% because we've got 12 directions. So we're performing significantly better than chance.

For us, the interesting thing came when we broke down the error distributions based on the previous adaptation period. This is all cross-validated decoding: we train and test the decoder on different trials, but we also train and test it at different time points. In this case, we're training the decoder on the adaptation period and testing it during the test period. What we can see now is that the error distributions of the decoding depend on the adapter; in this case, on whether the adapter was plus or minus 60 degrees relative to the test. You can see that they're systematically shifted left or right. I'm going to call the median prediction error the direction after-effect, or the predicted direction after-effect. If we take those medians (the green circles correspond to the plot at the top), we can see that we get different-sized direction after-effects depending on how far apart the adaptation direction and the test direction are. So how can we relate this back to human perception?
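Here is a toy sketch of that decoding pipeline: a multinomial logistic regression (a logistic GLM) trained and tested on disjoint trials, scored with the within-15-degrees criterion. The simulated population (random preferred directions, Poisson spike counts) is an assumption made only so the example runs end to end.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
directions = np.arange(0, 360, 30)          # 12 classes, chance = 1/12 = 8.3%
n_rep, n_neurons = 100, 20
labels = np.repeat(directions, n_rep)       # stimulus direction on each trial

# Toy population: each neuron gets a random preferred direction, Poisson counts.
prefs = rng.uniform(0, 360, n_neurons)
tuning = np.exp(2.0 * (np.cos(np.deg2rad(labels[:, None] - prefs[None, :])) - 1.0))
X = rng.poisson(5.0 * tuning + 1.0)         # spike counts in a short window

clf = LogisticRegression(max_iter=2000)     # multinomial logistic GLM decoder
pred = cross_val_predict(clf, X, labels, cv=5)   # train and test on disjoint trials

err = (pred - labels + 180) % 360 - 180     # signed circular prediction error
print("fraction within +/-15 deg:", np.mean(np.abs(err) <= 15))
```

Training on adaptation-period responses and testing on test-period responses, as described in the talk, amounts to fitting and evaluating on two different response matrices rather than one cross-validated set.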
If you think about the example I showed you at the start, you saw a repulsive perceptual direction after-effect, and that corresponds exactly to the type of shift we're seeing here in the decoding. So the decoding matches what we see in humans. And we see that this decoding error lasts for some time; I'm only showing it here for n-back 1 and n-back 2, and you can see it's starting to fade out after a couple of intervening motion periods. What was particularly exciting for us was that when we quantitatively assessed the human direction after-effect, it matched the decoded direction after-effect, both in time scale and in size. This is the data from two observers overlaid on the neuronal decoding, and we see that human observers also have this sustained direction after-effect, of the same size as we predicted from the marmoset data.

So to summarize our results: we can decode the activity of neuronal populations recorded in a non-human primate to accurately predict stimulus direction. We can also decode adapted populations, by manipulating the training and testing sets of the classifier, and then predict the size and duration of repulsive perceptual after-effects. The stability of this decoding over time (something I haven't shown you) within an adaptation period or within a test period, in combination with the decoded after-effect that I did show you, suggests that the brain is not self-aware of its own sensory tuning on these time scales. The brain's sensitivity is changing, but the brain doesn't have the ability to take that into account on short time scales of fractions of a second, and that can account for our perceptual errors. What we're currently exploring is, given these errors in perception and in decoding, what functional or perceptual benefits might arise from adaptation.

I should end by thanking my collaborators: in particular Elizabeth Zavitz, the redhead, who did most of the analytical work here, and Marcello Rosa, who helped out a lot with the surgeries and the animal maintenance. We like to divide things up: I take care of the hardware, Liz takes care of the software, and Marcello takes care of the wetware. And thanks very much, in particular, to the Centre of Excellence for Integrative Brain Function for supporting this event and supporting my research.

Does the length of time the adapter is presented for affect how long the perception changes, or is there a point at which the effect saturates if it's presented for too long?

Here we just used 33-millisecond motion periods, and we can see the same effect even if we have 8.3-millisecond motion periods. This is just showing decoding performance, but we also get the perceptual repulsion on these time scales. But you were asking about whether the effect saturates, I think, so maybe I missed that.

Yes, is there a sweet spot for how long you've got to present the adapter to see the maximal effect? Is there any point at which it saturates?

Okay, so we're not sure about the maximal effect. We've looked at time scales from 8 milliseconds up to 500 milliseconds; we think that's ecologically relevant. Other people have used adaptation periods of five minutes, and they seem to see similar sorts of effects, but more robust. So these effects do saturate, yeah.
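As an aside on the analysis above: the decoded direction after-effect is just the median signed decoding error, grouped by the adapter-test offset and by how many periods back the adapter occurred. A sketch with fabricated errors; the repulsion shape, its size, and the noise level are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical long-format table of cross-validated decoder outputs: for each
# test period, the signed error, the adapter-test direction offset, and how
# many periods back the adapter occurred (n-back).
n = 20000
n_back = rng.choice([1, 2], n)
offset = rng.choice(np.arange(-150, 181, 30), n)             # adapter minus test
repulsion = 8.0 * np.sin(np.deg2rad(2 * offset)) / n_back    # invented shape, decays with n-back
error = repulsion + rng.normal(0, 20, n)                     # plus single-trial noise

# The decoded direction after-effect: the median signed error per condition.
for nb in (1, 2):
    daes = [np.median(error[(n_back == nb) & (offset == o)])
            for o in np.arange(-150, 181, 30)]
    print(f"n-back {nb}: peak decoded DAE ~ {np.max(np.abs(daes)):.1f} deg")
```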
Oh, good morning. You used 12 directions of stimulation. Does this somewhat smear the response curve, so that if you used more directions you might find that the tuning curve is narrower? Well, you call it the bandwidth, which confuses me as an electrical engineer.

Ah, right. So the bandwidth, or the neuronal tuning, is not affected by the number of directions that we choose. I guess the precision with which we can estimate the bandwidth is affected by how finely we sample it.

Is there something magical about the number 12 in nature, or is it just something you like to choose, because you've got a set square with 30 degrees on it?

Yeah, we just divided 360 into a certain number of units. Actually, in this decoding here we had 24 directions, and so... no, we didn't. And not in that one either. But we have done 24 directions; it just means that it takes four times as long to look at all of the pairings that we get, in terms of pairing up each unique adapter with each unique test. So there's a trade-off there.

Great. Thank you. Okay, thank you.

So I have a few simple questions. The first one is: since you use array recordings, do you look at correlations or synchrony besides the firing-rate changes? The second is: for the effect you showed us, does it also happen in V1 rather than MT? Third: why do you use marmosets instead of macaques? Thank you.

Okay. I'll answer the last bit first. The reason we use marmosets is that they have a lissencephalic cortex. It's nice and smooth, so we can put our array into any brain region that's on the surface. I'm particularly interested in area MT, and we can access it with the array in a marmoset. We've also looked at other areas like DM and V1, and we see similar effects, but I've focused on motion here. I'm sure we'd see exactly the same effects in macaques.

The first part of your question was about synchrony and correlations. In our decoding, we can remove all of the correlation structure, all of the synchrony structure, by shuffling across neurons, and that has no effect on the success of our decoding. Some theories predict that correlations are going to be detrimental to decoding, and we didn't actually see that. So in our case, adaptation-induced correlations don't matter; all that matters are adaptation-induced changes in gain.
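To make that shuffle control concrete, here is one standard way to destroy noise correlations while preserving tuning. The function name and details are mine, but the logic (permuting each neuron's trials independently within each stimulus condition) is the manipulation described in the answer.

```python
import numpy as np

def shuffle_within_condition(X, labels, rng):
    """Destroy trial-by-trial (noise) correlations between neurons: within each
    stimulus condition, permute each neuron's responses across trials
    independently. Tuning curves (the per-condition means) are unchanged; only
    the shared variability is removed."""
    Xs = X.copy()
    for lab in np.unique(labels):
        idx = np.where(labels == lab)[0]
        for j in range(X.shape[1]):
            Xs[idx, j] = X[rng.permutation(idx), j]
    return Xs

# Toy usage: X is trials x neurons with a shared noise source across neurons.
rng = np.random.default_rng(5)
X = rng.poisson(5.0, size=(240, 8)) + rng.poisson(2.0, size=(240, 1))
labels = np.repeat(np.arange(0, 360, 30), 20)
X_shuffled = shuffle_within_condition(X, labels, rng)
assert np.allclose(X.mean(axis=0), X_shuffled.mean(axis=0))  # tuning preserved
```

Decoding the shuffled and intact responses with the same cross-validated classifier and finding matched accuracy is the result reported here: the correlation structure carried little extra stimulus information.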