The final speaker of this session, very much in the theme of the interaction between vision and action, and of how vision is shaped by action, is Riccardo Storchi from the University of Manchester. [Introduction partly unintelligible.] Okay, thank you very much. Again, it's a great honor to be here, and I see that I'm one of the people here who never actually met Mike Land, but I hope this can still be interesting. I'd like to change the title of the talk to include some of the most recent work we have been doing in the lab, and I have to say, it is a nod to that beautiful diversity of eyes. All this work is done in mouse, but what I hope to give you is some useful insight into a mouse perspective. As we all know, the visual world is complicated. In spite of this complexity, it is also well established that natural scenes share a number of common properties, or regularities. One such regularity, for example, is the relation between spatial frequency and the power associated with each spatial frequency: power falls off steeply as spatial frequency increases. As humans, we tend to focus on the tail of this long-tailed distribution, on the tiny details of the scene that allow us to manipulate objects, and this is also reflected in how our visual system is organised. So the question is: what about low spatial frequencies? What I want to tell you is that low spatial frequencies can also provide very useful information. Indeed, a hallmark of natural scenes is the gradient of light intensity along visual elevation, and this is important because it allows us to discriminate between different environments. For example, in an open field, just above the horizon the light intensity increases quite sharply, while in a forest, where we are shielded from the bright skyline, this gradient is much shallower. So here we asked two very basic questions.
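The regularity described above, power concentrated at low spatial frequencies and falling off roughly as a power law, can be illustrated with a small numpy sketch. The synthetic 1/f image and all parameter values here are illustrative, not data from the talk:

```python
import numpy as np

def radial_power_spectrum(img, n_bins=32):
    """Average the 2-D power spectrum over annuli of spatial frequency."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    fy, fx = np.indices((h, w))
    r = np.hypot(fy - h // 2, fx - w // 2)       # radial frequency (cycles/image)
    bins = np.linspace(1, r.max(), n_bins + 1)
    idx = np.digitize(r, bins)
    return np.array([power[idx == i].mean() for i in range(1, n_bins + 1)])

# Synthesize a "natural-like" image whose amplitude falls off as 1/f,
# then check that low spatial frequencies dominate the power.
rng = np.random.default_rng(0)
h = w = 128
fy, fx = np.indices((h, w))
r = np.hypot(fy - h // 2, fx - w // 2)
amp = 1.0 / np.maximum(r, 1)                     # 1/f amplitude envelope
phase = rng.uniform(0, 2 * np.pi, (h, w))
spec = np.fft.ifftshift(amp * np.exp(1j * phase))
img = np.real(np.fft.ifft2(spec))

ps = radial_power_spectrum(img)
print(ps[0] > ps[-1])   # low-frequency power dominates -> True
```

Radially averaging the spectrum is the standard way to summarise this regularity for roughly isotropic scenes.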
First, do such patterns of light intensity guide mouse behaviour? And secondly, if they do, which photoreceptors are used to capture this feature? To address this, we used a modified version of the light/dark box, a test that has been used for over 40 years in pharmacology. The logic of this test is quite simple: we have an arena in which the animal can sit on the light side or on the dark side, and mice prefer the dark side. In our lab, the setup is a bit more sophisticated. Around these two chambers we have a much bigger box that the animal cannot access but that acts as a diffuser: essentially, it damps the high spatial frequencies while the low spatial frequencies are retained. And we have a number of light sources, providing both visible and UV light, positioned at different heights around the chambers. With this, we can recreate the natural patterns of light intensity along visual elevation. To measure those patterns, we built an environmental light meter: essentially a calibrated camera with which we take a series of pictures of natural scenes. These pictures are then rectified and passed through an algorithm that removes all the high spatial frequencies but preserves the vertical gradient of light intensity. We used this environmental light meter in two ways: first, to acquire these gradients of light intensity from natural scenes, and secondly, to calibrate the same gradients in the lab. With this, we were ready to go. The first result we obtained, which was quite exciting, was that mice do indeed have a preference for specific patterns of light along elevation. Here you can see the two chambers: one has a constant light intensity all along elevation, and the other has the typical natural pattern of light intensity found above the ground. We also ran a number of other tests.
One of those used two chambers with a relatively constant gradient along elevation, but in one chamber we introduced high spatial frequencies, checkerboard patterns. What we find is that mice really do like the natural chamber, and they don't seem to care about the high spatial frequencies, for example the checkerboard patterns. I think this is quite interesting if you think about the traditional light/dark box tests, where the illumination is more like this constant illumination. It might actually explain why, over the years, people kept getting often inconsistent results: even when you place the mouse in the light chamber, that chamber may still not look right to the animal. Maybe they don't like it; it doesn't look natural. Then we tried to dissect this a little further, providing different gradients at different levels of illumination. What we find is that what mice really seem to care about is the light intensity just above the horizon; this is what really seems to drive the preference. That takes us to the second question: which photoreceptors are used to capture this pattern? So far I've been talking about light intensity, but of course there is also a spectral gradient along elevation. This was shown quite nicely, for example, by the Euler lab, who demonstrated that UV light is disproportionately more abundant above the horizon compared to green light. This is also reflected in the way cone photoreception is organised: short-wavelength opsins are more densely expressed in the ventral retina, which views the sky, and there is a region where cones almost exclusively express this opsin. Conversely, in the inner retina, we have retinal ganglion cells that express melanopsin, so they are in principle sensitive to these signals.
And they tend to be denser in the ventral retina, so looking at the sky. So to distinguish between these contributions, we used two types of genetically modified animals: one group had no functional cone photoreception, so it relied only on rods and melanopsin, and the other had no melanopsin. What we find is that when we remove cone signalling, the preference is still there. So cone signalling might be used when it is available, but it doesn't seem to be necessary for this preference. Conversely, when we remove melanopsin signalling, the preference is abolished. So, just to summarise this first part: low spatial frequency information, in the form of the gradient of light intensity along elevation, guides mouse exploration and allows the animal to choose the right habitat; cone photoreception is not necessary for this, while melanopsin is important. Right. So far we have focused on the properties of the external environment, but of course, as was shown repeatedly and very nicely throughout the previous talks, our visual input is also fundamentally determined by our own actions. Again, this was beautifully illustrated, and very accurately quantified, in Mike Land's work. We also know from decades of research that information about our own actions enters the visual system, for example in the form of proprioception, efference copies, or vestibular signals. So there are many ways in which we can acquire information about our own actions, and understanding how that information is integrated with the flow of visual processing is still today, I think, one of the fundamental challenges for visual scientists. This problem has also been intensively studied using the mouse as a model system.
We have known for over a decade now that, for example, in head-fixed mice running on a treadmill, the firing at different stages of the visual system can sometimes be dramatically modified. This has been shown and replicated in primary visual cortex, but also at earlier stages of visual processing, for example in the visual thalamus, and even at the level of the retinal output. One limitation of the studies I just mentioned is that most of them were performed in head-fixed animals, which cannot express the full range of natural behaviours. So what we did here was to try to understand how postural movements enter the visual system, specifically the mouse visual thalamus, during more natural, freely moving exploration. To do that, we performed thalamic single-unit recordings while simultaneously obtaining a 3D reconstruction of the mouse head and body. We then used these 3D data to extract a number of distinct behavioural variables that allowed us to measure the different postures and movements of the animal. With this, we were able to measure the coupling between neural activity and these behavioural variables. In order to dissociate the visual system from non-visual information about the animal's actions, we initially performed all these experiments in the dark. You can see here a couple of examples where the firing pattern seems to correlate either with specific postures of the animal or with an overall state of movement. We then wanted to quantify this more systematically, so we used machine learning: we fitted models that predict the neural firing rates from specific behavioural variables, and we measured the correlation between predicted and measured firing rates. That was our measure of coupling.
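That coupling measure, predicting firing rate from behavioural variables and then correlating prediction with data, can be sketched as below. The talk does not name the model class, so cross-validated ridge regression and every parameter value here are assumptions:

```python
import numpy as np

def coupling_score(X, y, alpha=1.0, k=5):
    """Cross-validated Pearson correlation between a measured firing rate y
    and a ridge-regression prediction from behavioural variables X."""
    n = len(y)
    folds = np.array_split(np.arange(n), k)
    y_hat = np.empty(n)
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        Xtr, ytr = X[train], y[train]
        # closed-form ridge solution: w = (X'X + aI)^-1 X'y
        w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]),
                            Xtr.T @ ytr)
        y_hat[test] = X[test] @ w
    return np.corrcoef(y, y_hat)[0, 1]

# Synthetic session: firing rate driven by "head pitch" plus noise.
rng = np.random.default_rng(2)
n = 2000
pitch = rng.standard_normal(n)          # up-down head posture (a.u.)
speed = rng.standard_normal(n)          # overall movement (a.u.)
X = np.column_stack([pitch, speed, np.ones(n)])
rate = 2.0 * pitch + 0.5 * rng.standard_normal(n)

print(coupling_score(X, rate) > 0.9)    # strong coupling recovered -> True
```

Held-out prediction is what makes the correlation a fair coupling measure: a model that merely memorised the training data would not generalise to the left-out folds.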
What we find is that a sizable fraction of neurons, even in the visual thalamus, are at least to some extent coupled to some behavioural variable in the dark. Sometimes the size of this coupling was relatively modest, and I think part of the reason is that these neurons often don't have very high firing rates, so they lack the dynamic range to encode continuous variables. The effect becomes more apparent when we pool more and more units and look at the population firing rate in particular postures. The first main result we obtained is that the coupling between postures and movements and activity in the visual thalamus can actually be explained by a few variables. The important message here is that some variables are more relevant than others. For example, we counted how many units were coupled to each behavioural variable: left-right turns of the head and body were coupled to very few units, whereas the up-down head postures, which capture the change in posture when the animal is looking up or down, were a strong predictor. The other strong predictor was overall body movement. The second question we asked was how many couplings can be expressed by individual units: in how many different ways can single units be coupled to postures and movements? For that, we performed a relatively simple clustering analysis, representing each unit as a 2D histogram in which the firing rate is calculated as a function of the two strongest predictors, the up-down posture and the overall level of movement. What we find is essentially two groups of neurons, which we called look-up and look-down units. Look-up neurons tend to fire most when the animal is moving a lot and looking up.
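The 2D-histogram representation of units just described can be sketched as follows. For simplicity the clustering step is replaced here by a deterministic asymmetry test along the pitch axis; that substitution, and all names and parameters, are assumptions, not the authors' analysis:

```python
import numpy as np

def tuning_map(pitch, move, rate, n_bins=6):
    """Mean firing rate in quantile bins of (head pitch, overall movement)."""
    edges = np.linspace(0, 1, n_bins + 1)[1:-1]
    pb = np.digitize(pitch, np.quantile(pitch, edges))
    mb = np.digitize(move, np.quantile(move, edges))
    m = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        for j in range(n_bins):
            sel = (pb == i) & (mb == j)
            m[i, j] = rate[sel].mean() if sel.any() else 0.0
    return m                            # rows: low -> high pitch

# Synthetic units: half fire when moving and pitching up, half when pitching down.
rng = np.random.default_rng(3)
n = 5000
pitch = rng.standard_normal(n)
move = rng.standard_normal(n)
maps = []
for u in range(20):
    sign = 1.0 if u < 10 else -1.0      # look-up vs look-down unit
    rate = np.maximum(0.0, sign * pitch + move) + 0.2 * rng.standard_normal(n)
    maps.append(tuning_map(pitch, move, rate))

# Stand-in for the clustering: split units by asymmetry along the pitch axis.
labels = [m[3:].mean() > m[:3].mean() for m in maps]
print(sum(labels))   # 10 "look-up" units recovered
```

With real data a proper clustering (e.g. k-means on the flattened maps) would be used instead of the fixed split; the map construction is the part that mirrors the analysis described in the talk.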
Look-down neurons tend to fire most when the animal is moving but looking down; otherwise, both groups are almost silent. So far, all the results I showed you were obtained in complete darkness, so with no retinal input. We then repeated all these experiments in bright light. I won't go through all the results again, but essentially everything replicated; the only quantitative difference is that the coupling is actually stronger in bright light. The important thing to show, though, is that the tuning itself was preserved in bright light. Here I'm showing the cross-correlation between firing rates and particular behavioural variables, measured either in darkness or in bright light, and as you can see the cross-correlations are very similar in the two conditions. This is true across most of our data sets. So, to summarise this second part: neurons in the mouse visual thalamus are coupled to specific postural and movement behaviours; this coupling is present in the dark, and it is maintained, and actually amplified, in bright light. With this I would like to acknowledge the people who really worked on this project: Chang [name unclear], a PhD student who performed with me the experiments for the first part of the talk, and Patrycja Orlowska-Feuer, a research fellow in Manchester, who worked with me on the experiments for the second part. And that's it. [Question] I have a question about how you predict the firing rate from the features that you used to describe behaviour. Do I understand correctly that it's something like the shape of the animal, the distances between points over time? I'm not sure I understood how you describe the behaviour. Yes, yes, yes.
So essentially there were two types of variables. One type measured postures: once we have the 3D reconstruction of the animal, we superimpose all the poses in the data set and run a principal component analysis to capture the main directions of head and body configuration. That was quite informative, because the results made a lot of sense: the first principal component was this arching of the body and head along the main axis, another captured left-right bending, and so on. So some of these variables are essentially the principal components. We then added some more hand-crafted postural variables, like the angle of the head in relation to the ground, and so on. We also took the temporal derivatives of some of these variables; those were the movements. On top of that, we had locomotion, measured by how fast the body centre of the animal moves on the plane, and an additional measure in which we calculate the distance between all the 3D points in one frame and all the 3D points in the next frame. [Question] I can think of two reasons why these neurons might increase their firing rates when a behaviour is happening, and they're not mutually exclusive; the neuron could be doing both. One is to tell other neurons that this is going on. The other is that the neuron itself is changing: changing its threshold, changing its sensitivity. Do you have any data indicating which of these is happening? No. No, I totally agree. It could just be some form of gain modulation, like we've seen in other examples, or it could actually be something more sophisticated. And as you said, they're not mutually exclusive.
But even as gain modulation it might actually be exciting, because it could be that these neurons are sensing, or tuned to, particular cues which are very important for that particular behaviour. Yes, I think that's one of the ideas. You can think about gain modulation on a global scale: I'm more alert, so I have gain modulation across most of my neurons. But you can also think of something more sophisticated, structured depending on the specific task, where only those neurons that are relevant for the particular task are modulated. I think it's very exciting to be able to break down both the behaviour and the neural responses in this way. [Question] I have a question about the first part. Did you measure what the mice, your lab mice, are experiencing in their cage? And if it's not like what they prefer, where does that preference come from? So, in their home cage, yeah. That's a good question; I have no idea. I guess it's very different from natural conditions, right? I mean, it's actually quite dark in the home cage. It would be interesting to actually look at that, and of course it might actually be different depending on where you are in the animal facility. [Question] Which inputs into the LGN are causing these neurons to respond to up and down? Is it coming from the cortex or from some other source? I have no idea; I think that's the answer. We are looking into this. [Unintelligible.] But as I said, we're thinking about it.
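As a footnote to the earlier question about behavioural features: the pipeline the speaker describes (pose PCA, temporal derivatives, locomotion speed, and frame-to-frame 3D displacement) could be sketched as below. This is a stand-in, not the lab's code; the keypoint count, frame rate, and all names are assumptions:

```python
import numpy as np

def pose_features(poses, fps=60.0):
    """poses: (T, K, 3) array of K tracked 3-D keypoints per frame.
    Returns posture PCs, their temporal derivatives ("movements"),
    locomotion speed, and overall frame-to-frame 3-D displacement."""
    T, K, _ = poses.shape
    flat = poses.reshape(T, K * 3)
    flat = flat - flat.mean(0)
    # posture = projection of each pose onto the principal components
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    pcs = flat @ vt[:3].T                          # first 3 posture components
    movement = np.vstack([np.zeros(3), np.diff(pcs, axis=0)]) * fps
    centre = poses.mean(axis=1)                    # body centre per frame
    speed = np.hypot(*np.diff(centre[:, :2], axis=0).T) * fps
    speed = np.concatenate([[0.0], speed])
    # overall movement: summed 3-D displacement of all keypoints
    disp = np.linalg.norm(np.diff(poses, axis=0), axis=2).sum(axis=1)
    disp = np.concatenate([[0.0], disp]) * fps
    return pcs, movement, speed, disp

# Toy trajectory: a rigid 5-point "body" translating along x while nodding.
T, K = 200, 5
t = np.arange(T) / 60.0
base = np.random.default_rng(4).standard_normal((K, 3))
offsets = np.stack([2.0 * t, np.zeros(T), 0.3 * np.sin(6 * t)], -1)
poses = base[None] + offsets[:, None, :]

pcs, movement, speed, disp = pose_features(poses)
print(speed[1:].mean() > 0)   # the toy animal is locomoting -> True
```

Centering before the SVD means the components capture variation around the mean pose, which is what makes interpretations like "arching" or "left-right bending" possible.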