really important for people working in this region. Anyway, without further ado, I wanted to introduce you to the session, and our first speaker is Ben, from the academic community sector. Okay, thank you. Thank you very much for inviting me to be a part of this; it's a privilege to be speaking amongst this group of speakers. You've already had a bit of a teaser of this aspect of Mike's work, and what I want to talk about today is Mike's work on human eye movements and behaviour. Mike turned his attention to this question around 1990, and when he did, most of the work being done on human vision and eye movements used very simplified stimuli, like dots and cubes on blank backgrounds, and very simplified responses, like pushing buttons. Mike recognised that if we really want to understand real human behaviour, this might not be sufficient; this might not be the way to do it. Because Mike recognised that eye movements and vision are not there just to passively receive information from the world; rather, they are fundamentally part of a system that is about driving our actions. In a way, perception isn't the goal of the system here. The goal of the system is behaviour: to be able to interact with the environment, move around and carry out our behaviours. These two things are intimately linked and interact with each other all the time, and that was the approach that Mike took. So to do this, he wanted to study behaviour as it really is, and he built his own eye trackers, some of which you've already seen. He did this because at the time there wasn't really the option to buy these things off the shelf like we can now. There were a few commercial systems around, but they weren't that great. So Mike developed his own, and they were great: they worked very effectively. This is the one you've already seen, the one that was used for quite a long time in his lab throughout the time that I was there. We kept evaluating commercial systems while I was with Mike, and they were never as good as the one he built. They never worked as well; they were never as accurate. To be honest, although the ones Mike built are very labour intensive to use, they're still as good as anything you can buy and, to a certain extent, a little bit better. They typically worked in the way we've already seen, by splitting the camera image so that we could get an image of the eye and an image of the world in front of the head at the same time. One that was slightly different was the one at the end: we had to come up with a slightly different design when we were working with the racing driver, because of the worry that if the racing driver crashed, something like this would go straight through their skull. So we had to fix it on with Velcro, in ways the safety people were happy wouldn't kill the driver in a crash. He didn't crash, thankfully. The first iteration involved us screwing it on. Anyway, this is the kind of data that we got from Mike's systems. You see here the image of the eye, which is upside down because of the mirrors, and then the little spot that indicates what's being looked at, where the fovea is pointing.
To get from the eye tracker to this video is a hell of a lot of work, because it involved manually fitting a model to the eye and linking that to the dot on every frame of the video. This meant that, particularly in the early work, it took an awful lot of time to get going and to get a lot of data. But this is what you see. The classic one is Mike making a cup of tea. This is someone, I think Mike but I'm not sure, driving around Falmer, which may or may not exist anymore; I don't know whether the football stadium is there now. And the one at the end is a slowed-down video of someone playing cricket, which is not Mike. So he worked on these, and some of these are his most well-known pieces of work, but he worked on a huge variety of real-world tasks. He worked on table tennis, on musical sight-reading, on drawing, on walking, on driving really fast, and also a little bit on magic as well. He did all of these things looking for both the things that define those individual tasks and also the common principles. Some of the common principles that Mike identified are now so well established that we almost take them for granted, but they weren't known before; these are real insights from Mike's work. The first of those, which you see as soon as you watch these videos and which you've heard about already, is just this tight coupling between vision and action. We spend the vast majority of our time looking at the things we're interacting with, the things we need for our current task. When Mike and Jenny did the tea-making work, they calculated that something like only about 5% of the places that we look are not directly relevant to the task we're engaged in. So we really are very much on task a lot of the time with our eyes, and that's true across all of these different tasks. But of course it's not enough just to say that, because a task is a very dynamic thing, and as we move through a task, the priorities we have, the things that are relevant and important to us, also change. In this little excerpt from a driving video, this is Mike driving around in Lewes, driving through quite busy traffic and occasionally having to stop at traffic lights. What you can see is that when Mike was driving, he spent almost all of his time looking at cars parked at the side of the road, the car in front, or oncoming traffic, as you would. But as soon as things stop, as soon as there isn't that pressure of potentially crashing and you're waiting at the lights, although lots of the signal around you is the same, lots of the information is just the same, Mike spent almost no time at all looking at any of the traffic, and instead spent all his time looking at people on the pavements on either side. So it's a very dynamic system: as task relevance changes, so does where we look. Also very important in this is the fact that the eye is really actively seeking out information. This is kind of the birth of the ideas about active vision that we're all familiar with now. And Mike had a really elegant way of taking a complex behaviour and finding the simple solution, the thing that explains all the complex data.
What you can see in his work on driving is that the places people tend to look are the right places to get the information we need for the task. When we're driving along a straight road, our eyes are typically slightly ahead of the car, and that turns out to be the right place to get the information we need to stay on the road effectively. He showed that with Julia Horwood by using a very simple kind of driving simulator setup where they showed only little strips of the road. They could selectively show a strip that was either in the far distance, so there's a little bit of road there and nothing else, the middle distance, which corresponds to where people look, or very close to the driver. What they found is that if you show only the very far distance of the road, people are quite good at anticipating the curvature of the road, anticipating the bends, but they're hopeless at staying in lane. If you show them just the near part of the road, there's really little error in lane maintenance, they're quite good at that, but they're pretty hopeless at anticipating the bends. If you give them the bit in the middle, they do pretty well at both. So where we put our eyes corresponds to the bit of the road that's going to give us the most information for these two competing problems we have to solve when driving. Similarly, Mike famously showed that as we turn into a bend, we look at the tangent point, which is the bit of the inside of the bend, the inside of the lane, that sticks out most into our visual field. And again it turns out that that's the bit we should look at, because Mike showed that if you can work out the angle between your current heading and that tangent point, and you also know how far you are from the edge of the lane, then that tells you everything you need to know about how much to turn the steering wheel to get around that bend. Because you're strapped to your chair, your body orientation is the heading of the car; so by looking at the tangent point and measuring the angle between your gaze direction and your body orientation, you know exactly what to do with the steering wheel to not crash.
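As an aside, that steering rule can be written down in a couple of lines. This is my reconstruction of the geometry from the description above, using a small-angle approximation; it is a sketch of the idea rather than a quotation of the original derivation. If the inside edge of the bend has radius $R$, the car is a lateral distance $d$ from that edge, and $\theta$ is the angle between the heading and the tangent point, then

\[
\cos\theta \;=\; \frac{R}{R+d} \;\approx\; 1-\frac{d}{R}, \qquad \cos\theta \;\approx\; 1-\frac{\theta^{2}}{2} \quad\Longrightarrow\quad \frac{1}{R} \;\approx\; \frac{\theta^{2}}{2d}.
\]

The left-hand side is the curvature the steering has to produce, and the right-hand side contains only quantities the driver has access to: the gaze-to-heading angle and the (learned) lateral distance from the lane edge.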
Similarly, in cricket, which we've heard about already, Mike figured out that you could reduce what is a highly complex problem. It's a very difficult thing to solve: to get your bat to the right place to meet the ball at the right place and at the right time to make the shot. So Mike looked at how we might be able to compute that, and what he showed was that if you're dealing with a ball that's bouncing, the two key things you need to know are how long it's going to take from the bounce to get to you and how high it's going to be when it gets to you. You can't work these out on the fly, because it's just too fast; relying on tracking the ball during this period is really not going to work. So you have to make some predictions and estimates in advance. And what he showed is that you can estimate those two quantities. You know the release time, so you know the time it takes to get to the bounce point, and you know where that bounce point is relative to your head. So you can solve this problem by looking at the bounce. If you look at the bounce point and you know when the ball was delivered, then you can estimate these two other quantities. It's not easy, because what the ball does at the bounce is going to depend on surface properties, but you can learn those mappings, and that's what batsmen seem to do: they learn them. So just by looking at the release point and looking at the bounce point, you can get all the information you need to make the stroke, to make the hit. Again it's a lovely example of how we can mathematically show that we're looking at the places where the most information available to us is actually to be found.
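To make that concrete, here is a minimal Python sketch of the kind of computation just described. Everything in it is illustrative: the function, the coefficients and the simple rebound model are my assumptions, standing in for whatever batsmen actually learn about a given pitch. The point is only that the two pre-bounce observables (delivery time and bounce position) plus a learned surface mapping are enough to recover the two quantities Mike identified.

```python
# A sketch of the cricket prediction described above. The coefficients in
# `surface` stand in for learned surface properties; the numbers and the
# linear rebound model are illustrative assumptions, not values from the
# original work.

def predict_post_bounce(release_to_bounce_s, release_to_bounce_m,
                        bounce_to_bat_m, surface):
    """Estimate (time from bounce to bat, ball height at the bat)."""
    # Pre-bounce speed follows from the delivery time and bounce position.
    pre_bounce_speed = release_to_bounce_m / release_to_bounce_s
    # The bounce costs some speed; how much is a learned property of the pitch.
    post_bounce_speed = surface["speed_retention"] * pre_bounce_speed
    time_to_bat = bounce_to_bat_m / post_bounce_speed
    # Height at the bat from a learned rebound slope (shallow-bounce approximation).
    height_at_bat = surface["rebound_slope"] * bounce_to_bat_m
    return time_to_bat, height_at_bat

# Hypothetical, previously learned pitch properties.
hard_pitch = {"speed_retention": 0.7, "rebound_slope": 0.15}

t_hit, h_hit = predict_post_bounce(release_to_bounce_s=0.45,
                                   release_to_bounce_m=14.0,
                                   bounce_to_bat_m=3.0,
                                   surface=hard_pitch)
print(f"time from bounce to bat ~ {t_hit:.2f} s, height at bat ~ {h_hit:.2f} m")
```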
What this illustrates is the tight coupling between vision and action, the active information seeking that the visual system is doing. But it's not just about coupling things in space, not just about looking at the right places; it's about looking at the right places at the right time. This is something that emerged from a whole range of different pieces of work, and I think it's shown very nicely by the work that Mike, Neil and Jenny did on making cups of tea. This was really about cataloguing a natural behaviour, a truly natural behaviour. When they set out on this project they didn't necessarily know exactly what they were looking for; I think that's true, though perhaps that's an unfair thing to say. So they created these really lovely visualisations of just everything that was going on in the task. Mike has a roll somewhere of long tapestries, essentially recording exactly what was going on in the video at every frame as you go through it. Having created these, Mike was able to start to see patterns in the data, to see the kinds of things that repeated over the course of the making of the tea. What you see here is a summary of some of these repeating patterns: they started to look at the relationship between what the body was doing, what the eyes were doing and what the hands were doing, and to group those things according to the component actions that make up the overall task. And then you start to see that they have these relationships in time. The similarly shaded bars not only come together, but they seem to come in a consistent order. And that's what was found. When you summarise the timing of these three different components across lots of different actions that make up the overall task, Mike was able to show that we tend to start by orienting our body towards the thing we're going to interact with, and that begins even before we start looking at it. Then we bring our eyes to bear on it, and then we act on it. These things lead each other in fairly systematic ways, and the key one here is that vision is leading; sorry, the eyes are leading the hands and the actions by about half a second to a second. That's a really nice observation, and it boils a very complex behaviour down to a simple description, and that simple description has been found in lots of other things that Mike did and others have done since. You see this tight coupling in time in driving. As I showed a little while ago, if you plot the gaze angle and also the steering angle of the driver, you can see that they're very closely related to each other, but not perfectly lined up in time. Gaze is slightly ahead of what the steering wheel is going to do, and it turns out that it's about 0.8 of a second ahead when we're driving around a very windy road. So vision is leading the action by about 0.8 of a second in this case. We also had the opportunity to do this, as I said already, with a racing driver. Here we were just curious to see whether things would be rather different when you're driving at 125 miles an hour instead of 30. It turned out that in a lot of ways things are very similar. In this case we plotted the head angle, because a lot of the looking in the racing driver is done by moving the head rather than just the eyes. But if you look at the relationship between the head angle and the car rotation, again we see very, very similar patterns, and in this case things line up about 0.9 of a second apart. So head turns are leading turns of the steering wheel by about 0.9 of a second; despite the fact that people are going so much faster, the leading time is about the same. So this principle of keeping the eyes about half a second to a second ahead of action seems to be quite common. Mike also showed it for people walking and playing music, in musical sight-reading, and in various other tasks, and other people have gone on to show it in a range of other activities. It seems to be one of those principles that underlies a lot of the behaviour we engage in. I think it's still an open question why it's that particular time; I don't think we've ever really come to a good answer about what's so important about half a second to a second, but it does seem to underlie a lot of different behaviours carried out in lots of different places.
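As an illustration of how a lead time like that 0.8 of a second can be measured, here is a small Python sketch: generate a steering trace, a gaze trace that does the same thing slightly earlier, and recover the lag that maximises their correlation. The traces are synthetic stand-ins, and the method is generic lagged correlation, not a claim about the exact analysis used in the original work.

```python
import numpy as np

# Estimate a gaze-leads-steering delay by sliding one trace against the
# other and keeping the lag with the highest correlation.

fs = 50.0                                   # assumed sample rate (Hz)
t = np.arange(0, 60, 1 / fs)                # one minute of winding road
steering = (np.sin(2 * np.pi * 0.20 * t)    # steering angle, arbitrary units
            + 0.5 * np.sin(2 * np.pi * 0.07 * t))
lead_s = 0.8                                # the lead we will try to recover
gaze = np.interp(t + lead_s, t, steering)   # gaze does the same thing earlier

max_lag = int(2 * fs)                       # search lags up to +/- 2 s
lags = np.arange(-max_lag, max_lag + 1)
corr = [np.corrcoef(gaze[max_lag:-max_lag],
                    np.roll(steering, -lag)[max_lag:-max_lag])[0, 1]
        for lag in lags]

best_lag = lags[int(np.argmax(corr))]
print(f"gaze leads steering by about {best_lag / fs:.2f} s")
```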
Of course, there are situations where keeping the eyes systematically half a second to a second ahead of the action is just not possible. We've heard about one of those already; basically, lots of ball sports have this problem that things are just happening too fast. In those cases it's simply not possible, but there are still key instances where we get vision ahead of action; the way we do it is probably slightly different. What you see here in table tennis, if you can make out the green trace versus the black one, is that there are key instances, when the ball is about to bounce, where the player is getting ahead, putting their eyes where the ball is going to bounce and waiting for the ball to catch up. It's exactly the same thing that was found in the cricket, and you heard about that earlier as well: the batsman will watch the release point of the ball, keep their eyes there for 100 to 250 milliseconds or so, and then make a big saccade, a big eye movement, down to the location where the ball is going to bounce. They get there about 150 milliseconds or so before the ball arrives, and then just wait for the ball to arrive. And this isn't just a case of "I detect motion, so I move my eyes down and wait to see what happens"; these are actually predictions about where the ball is going to bounce, because if the ball is delivered at different lengths, the eye movement adapts. That's what you see in these plots. Good batsmen will change their initial saccade depending on where the ball is going to bounce, so they really get to where the ball is going to bounce rather than just somewhere. So they're making these predictions, and that seems to be true, and others have shown it, in other ball sports as well: the key to success in these sports is being able to make these predictions and observe what's going on at these key, information-rich locations in the task. And there's a lovely illustration with their third participant: you can see that this is someone who doesn't adapt their behaviour. This was the less good player, who made basically the same eye movement for every delivery. They got the fact that they had to move their eyes down, but they weren't able to do it in such an anticipatory way. They still hit the ball, which is more than I would manage. This was also my first experience of actually using the eye tracker: going along with Mike to one of these recording sessions and seeing how it's done. Right, so one of the things that came out of all of this work, and particularly out of the tea-making work, was another of Mike's simple principles that explains a lot. That was the idea they put forward in the tea-making work of a basic unit of action, a basic unit of human behaviour: the object-related action. You can decompose complex tasks into these sub-units, these sub-components, that are all about the coordination between vision and action. We're using our visual system to identify what's out there in the world, to monitor our actions and to allow us to complete those behaviours, and sitting on top of all of that is what we could call a schema system, some kind of driver, for deciding what we need to do next. This schema system supplies information to the other systems, telling us what to look for, where we should find it, and what actions we should perform. And as we go through a task, as we complete our actions, we can send back the information that that unit is complete and select the next thing in the sequence of the task. This schema control system of course tells us that what's going on is a little bit more than visual; there must be something else there as well, allowing us to make predictions about where things should be and what we should seek out.
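A toy sketch, in Python, of how that decomposition might look. The class name echoes the object-related action from the tea-making work, but the three-phase structure (orient body, fixate, act) and all the code are my own shorthand for the description above, not code from the original work.

```python
from dataclasses import dataclass

@dataclass
class ObjectRelatedAction:
    target: str   # what vision should locate ("kettle", "tap", ...)
    action: str   # what the hands should do once gaze is on it

def run_schema(task):
    """Step through the task one object-related action at a time."""
    for unit in task:
        print(f"orient body toward the {unit.target}")   # body leads
        print(f"fixate the {unit.target}")               # eyes follow
        print(f"{unit.action} the {unit.target}")        # hands act last
        print("unit complete -> select next\n")          # completion signal

make_tea = [
    ObjectRelatedAction("kettle", "grasp"),
    ObjectRelatedAction("tap", "turn on"),
    ObjectRelatedAction("kettle", "fill"),
]
run_schema(make_tea)
```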
This question of how memory and representations feed into our everyday behaviour is an interest that Mike and I shared, and it's something I've been working on a lot since then, but you can see it in various aspects of natural behaviour. You can see it in the work we did on racing driving, because while this was one case of driving where, as I said, most things were quite similar in terms of timing, in terms of where people looked there were some differences. The difference was when approaching bends, because on this bend here on the track, people did line up with the tangent point, just as we would have expected. But on pretty much all of the others they didn't; they looked at other locations, and they were very systematic about it, looking at the same place every time they went around the track, more or less, but it wasn't necessarily lined up with the tangent point. In one case, this one here, you can see it's almost on the opposite side of the road. What that seemed to line up with was their planned route. Drivers learn a racing line; they learn where to drive, and that's not necessarily the tangent point: some corners are clipped and some are swung wide. What they were looking at was the information, the location, that was going to guide where they needed to go, rather than the tangent point. So there's clearly some component of memory there. There may be some component of memory in tasks like tea-making too. Mike got really interested at one point in these big eye movements, these big gaze relocations, where we take our eyes from one location to something that we can't even see at the moment. Frequently when we're doing a task we'll turn around and look at something behind us, and often that look-around is done in a single gaze shift, or maybe two, so we're really making very large movements. Mike was interested in this for two reasons. One was to understand the coordination of all the different systems that have to work together to allow that to happen. The other was what this told us about the kind of representation and memory that underlies our behaviour. This was a situation where what Mike needed to do was record all the different types of bodily movement that were going on at once. And, in a typically elegant and simple Mike way, rather than go out and buy lots of recording equipment, he went and bought a Sindy doll and a lazy Susan from Woolworths, and simply rotated the doll so that it lined up with the orientation of the person in the video. It turned out to be a source of some annoyance to the reviewers, but it was really effective. So here are some examples; I'll show you a little video example. This is someone in a kitchen, and they're making a cup of tea, of course; it's stuck with me. They're meant to be getting some tea; they've been told to get tea from a little pot that says "tea" on it, but they have unknowingly spotted some tea on the shelf, so they're actually going to get the wrong tea and then realise they've done the wrong thing. What you see here are two examples of initially searching for something and then going back to the same location; they do it for both things. What I want you to see is that the initial search involves quite a few eye movements, but when they go back again, when they realise they've made the mistake, they get there in almost a single jump. So here they go: they've found this tea that I thought I'd hidden. It took a few fixations to find it. Then they spot this one, which is the one they were meant to get. Now they go back with just one eye movement, back to the right place. And then they do the same thing to get back to the other tea. That's exactly the kind of thing you see here: these are some examples of a volume of gaze shifts that were in excess of 90 degrees, so to locations you just can't see. And yet people are bringing their eyes to within about 10 degrees of the intended target and then making a small correction.
So this led us to think about what kind of representations might be allowing this to be possible. We came up with the idea that, while obviously long-term memories of scenes must in some way be allocentric, what gets us through the environment at any one time is of course an egocentric model. And we had this idea that that model has to be quite expansive, because it has to allow us to look at things that are currently behind us. So it's going to be something like a 360-degree panorama of the scene, and of course, to be useful, it's going to have to counter-rotate as we move our heads; otherwise it's not going to tell us anything useful. That was the idea we had, and we described it, but the question is, could we find any evidence for it? Well, Mike already had that evidence, from a really elegant experiment, and I think it does characterise much of what he did. What Mike did, to show that we're likely to have this kind of rotating model in the brain, was to take some colleagues, sit them in a chair and spin them around until they were dizzy. He stopped them and asked them to fixate a landmark and then close their eyes, so that they experienced the illusory rotation, which lasts about 10 seconds or so. And then he asked people to do two things: to judge how much they thought they had rotated during that period, and then to point to where the landmark they had been fixating was. The correspondence between these two is really strong, and that is exactly what you would predict if their reports were being made on the basis of an internal, counter-rotating model in the brain. Now, to say there's some kind of rotating model of the whole world in the brain is perhaps to overstate how detailed it should be: it doesn't need to be particularly detailed, it just needs to be good enough that you know where things are that are currently outside your field of view, to get you close enough to then find them properly. Right. So that was the idea.
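Here is a minimal Python sketch of that counter-rotating egocentric model: remembered objects are stored as head-centred bearings, and every turn updates them in the opposite direction, so something now behind you can still be targeted with one large gaze shift. The class, the roughly 10-degree coarseness and all the numbers are illustrative assumptions, not a model from the original papers.

```python
import random

class EgocentricModel:
    def __init__(self):
        self.bearings = {}   # object -> bearing in degrees, 0 = straight ahead

    def remember(self, name, bearing_deg):
        self.bearings[name] = bearing_deg % 360.0

    def turn(self, delta_deg):
        # Counter-rotate every stored bearing as the observer turns.
        for name in self.bearings:
            self.bearings[name] = (self.bearings[name] - delta_deg) % 360.0

    def gaze_to(self, name, noise_deg=10.0):
        # Coarse memory is "good enough": land within ~10 degrees of the
        # target, then make a small corrective movement once it is visible.
        return (self.bearings[name] + random.uniform(-noise_deg, noise_deg)) % 360.0

model = EgocentricModel()
model.remember("tea caddy", 20.0)   # glimpsed slightly to the right
model.turn(120.0)                   # turn away to face the worktop
print(f"single gaze shift to roughly {model.gaze_to('tea caddy'):.0f} deg")
```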
And so I want to finish, as everyone else has, with a few pictures. Some of these are from outside the lab; this is Mike enjoying the paper at my wedding. Mike had a huge impact on me, on my life, on the direction things were going, because thanks to Simon I got in touch with Mike: I really wanted to study insect vision as a PhD student, and so I went to meet him. When I met with Mike, he showed me not only the fantastic work he was doing on insects, but also the stuff he was doing at the time on driving and making cups of tea. I never admitted it to Mike, but I had absolutely no idea about human behaviour at the time. I just thought what he was doing was so fascinating, and he gave me a bunch of papers to read on the train home. By the time I got home I had decided what I wanted to do, and that was to work with Mike. The beauty of working with Mike was, as we've heard already, his enthusiasm for science and his ability to put it across in simple terms. That was one of the big reasons I wanted to work with him, and it's why, at the end of my PhD, I didn't want to leave, and I spent another three years as a postdoc in his lab. I think he's had a huge impact on the community. He's had a huge impact on the way that we all approach science, and on the way that we interact with each other as well. So, thank you very much.

Some of us are also enthusiastic but incompetent drivers; maybe this is something you haven't studied, but how much of this, of where we look, is actually trained as we learn how to drive, and how much is innate?

Yes, so Mike had a PhD student, Matthew, who looked at something similar to this. It's increasingly difficult to find people who are really novice drivers now, because of things like PlayStations, but if you can find someone who hasn't driven, what they tend to do is start by looking at the front of the bonnet, essentially. They do the bit from the driving simulator experiment that is just the near road: that's where they look, and they spend all their time concentrating on lane keeping and not anticipating bends. What happens over the course of a few lessons is that people shift where they tend to look up to the middle distance, but it takes time. And what you're also doing at that point is shifting the time by which vision is ahead of action. So this half-a-second-to-a-second principle that seems to be everywhere, that in itself seems to be something you need to learn, because there was a nice experiment from Randy Flanagan's group, I think, where they took a novel control device that you use to move a cursor around on a screen, a mouse cursor (in this context I need to say "mouse cursor" rather than a real mouse). You move this device without initially understanding the mapping between your actions and what's going to happen on the screen. At first, people move and then move their eyes to catch up with the cursor. Over the process of learning how the control device works, the eyes start to go in time with, and then ahead of, the movements of the cursor, and at the end of the learning process they're about half a second ahead of where the cursor is. So it does seem to be something that we need to learn.

What about rapidly looming stimuli that you're not prepared for, coming from the side or something like that, which could be a threat to you? Do we actually look at those things?

We do, but there are task constraints on that as well. One of the things that's often debated is whether motion is an overriding signal that overrides even these task effects. But it's going to depend on the context, because when you're walking down a pavement, the fastest-moving things and the biggest objects are the cars in the road, but we don't spend a lot of time looking at them, except when we need to cross the road. So it's still about relevance to the task. If you detect a potential interception, if you detect someone walking towards you, which you can do in peripheral vision, then you start paying attention to them. But if you're not detecting that your paths are likely to cross, then even though they're coming towards you, you don't spend that much time looking at them. So the answer is yes and no, depending on the situation.

I was wondering how the eye movements would behave when we are walking or running: whether we would also look at the tangent point, look inside the bend, whether there would be head movements as well, and what the difference would be between the eye movements and the actual changes of direction.
So, when we're walking: the example I showed from Mike was a particular task where people had to avoid stepping on the cracks in the pavement, so there was a good reason to pay attention to the ground. But actually how much we look down and how far ahead we look depends on the terrain and how predictable it is. There's really nice work from Mary Hayhoe's lab showing that if you get people walking on rough ground, they look a lot more at the ground and a lot closer to their feet than if it's smooth ground. And I think Mike showed that when you're going upstairs, for the first few steps you look just ahead, but once you've figured out the height of the steps, and they're all the same, you start looking much further ahead. So you adapt all the time, depending on how predictable the future is and the ground we're on. Whether you look at the tangent point when turning, I'm not sure, but you do use your head in anticipation of turning your body. It's one of the reasons we can figure out how not to walk into people: we know that's what other people do, because if you're about to turn, you move your head and your eyes first, and we use that as a cue to negotiate with someone else where you're each going to go, so that you don't bump into each other.

You showed us how it works for us, how our vision and action work together. If you were to design artificial systems, do you think these would be the principles to go with, or would there be other ways, and is what we see just a product of evolution?

A bit of both, actually. I've worked on a couple of projects with engineers trying to design an active vision system, and the solution they kept trying to come to was completely different. They really wanted a camera above everything and reaching systems coming in from the side, so that they could map things in a very different way; they wanted to do the whole task in a different way. So I think there are probably completely different solutions, and this is the solution we've come to because of the apparatus we were given in the first place. But the big principle, I think, is that vision is there to serve action, and not the other way around.