So I'm going live now and I think we are officially live. Great. Hello everybody and welcome to another session of our Sussex Vision Seminar Series, as always within the World Wide Neuro initiative. I'm George Caffèdiz, a former master's student in Thomas Euler's lab and currently a PhD student with Tom Baden. As your host for today, I would like once again to begin by thanking Tim Vogels and Panos Bozellos for putting forward this initiative towards a greener and much more accessible seminar world.

Having said that, allow me to get back to the reason we all gathered here today and introduce our guest from Flinders University, Professor Karin Nordstrom. Following her studies in biology, math and chemistry at Uppsala University, Karin went on to obtain her PhD in cell and organism biology in 2003, supervised by Dan-Eric Nilsson in the Lund Vision Group and Dan Larhammar at Uppsala University. At the time she focused on the early evolution of vision, but from a molecular perspective. During her postdoc years with David O'Carroll at the University of Adelaide, she learned how to perform electrophysiology experiments, and that was it, basically. She returned to Uppsala University in 2009 and established the Motion Vision Group there, where they use hoverflies and a wide range of techniques, from single-cell electrophysiology to free-flight experiments, to understand how the nervous system encodes visual information. In 2015, Karin moved to Flinders University, where she has been located ever since, nowadays holding the title of Professor in Neuroscience. With a number of fascinating projects, including target tracking in rapidly changing natural settings, it is with great pleasure that I leave the stage to her, Professor Karin Nordstrom, for a talk entitled Target Detection in the Natural World. So without any further ado from my side, please all welcome Professor Nordstrom. Karin, the stage is officially all yours.

Thank you very much for that lovely introduction. I'm just gonna start my screen sharing. Is that all good, George? Yes, we're good to go. Yes, excellent. Thank you. So thanks for introducing me and thank you so much for inviting me to this great opportunity.

So I want to start by showing this picture of Hedvig Lindahl, who is one of the best goalkeepers, I think, in the world. This is from the 2019 World Cup. She's playing for Sweden, obviously, and she's making an amazing save right here. And I just want you to take a moment to first of all appreciate the beauty of this photo, but then also to think about her performance, what she's doing. You can see her flying through the air. She's virtually horizontal, parallel to the goal line. Her hands are exactly where they need to be at exactly the right time. So she has predicted: where is the ball going to be, and how am I going to get there to stop this ball from scoring a goal? It's an absolutely amazing performance. And if you just think about the visual input that she would have gotten before doing this: there are players everywhere, the audience is moving, there's sound, there are all these distracting stimuli, but still she has managed to calculate and predict this movement.

So it's not only Swedish soccer players who do this, it's actually also insects. Here are five still shots from a high-speed camera taken by my collaborators Paloma Gonzalez-Bellido and Trevor Wardill. And what you can see here, if you start down in the bottom left corner, is a killer fly sitting on the wall.
It then takes off and rapidly shoots diagonally across the image. So these are five successive frames from high-speed video. And what it's doing is catching the bead that you can see attached to a string on the right-hand side of the image. So if you look at how the killer fly is moving in these images, you can see that it's predicting, just like Hedvig Lindahl, the goalkeeper, did in the previous slide: how do I need to move to be where I need to be at exactly the right time? It's an incredible performance if you just think about the computations that need to take place for this to happen.

So we could study this in humans, but the human brain is incredibly complex. In this diagram they say we have about 86 billion neurons; you hear slightly different numbers depending on what textbook you're looking at. But we could also look at the fly brain, where we have about 100,000 neurons. So because we know that the same behavior can be done by a human, but also by a seemingly much simpler animal like the fly, we have a much higher chance of actually understanding what is going on, because it's a scaled-down nervous system, but they're still able to perform all these amazing behaviors.

So for those of you who don't work on insects, I just wanna introduce a little bit about how they see the world, because there are some important differences compared to our visual system. Hoverflies, flies and all insects have something called compound eyes. That means they have these massive eyes built up of thousands of lenses, and in a compound eye, each lens provides one pixel of the image. So it provides a coarse spatial resolution of the scenery. I'm just gonna illustrate what that means in the next slide.

So here's a scene taken from a typical hoverfly habitat, a botanic garden. And I've inserted a little black square in the scene, which is roughly two by four degrees of the visual field. Just for reference, one degree of the visual field, I can't really show my thumb here on camera, one degree of the visual field is the width of your thumb at arm's length, just as a reference for how big this is. You can all see that rectangle there, even when I remove the arrow. So all of you, I'm hoping, if you don't have a really poor internet connection right now, should see this as a sharp black rectangle with really sharp edges, and it really stands out. It's high contrast and it has sharp edges; it doesn't have this fuzzy shape like the background. However, if we model what this would look like through an insect eye, in this case a hoverfly eye, all of a sudden all the defining features of that black rectangle have disappeared. So the optics of the compound eye themselves provide really poor spatial resolution and they really limit what an insect can see.

Despite this, they perform amazing behaviors. So in the top photo, there's a male hoverfly hovering in its territory. Hoverflies are highly territorial; the males guard their territories and chase away intruders. And they're named hoverflies because they have this ability to just hover stationary mid-air. So this male hoverfly in the top picture, he's looking to the left in the diagram. And then in the bottom photo, which was taken just 30 to 40 milliseconds later, he's completely reoriented his stance and is now looking down to the left.
They have this cryptic coloring, so you often think that the head is facing the other way, but that's a way to trick predators into thinking that they're gonna fly off the other way. So you can see how the hoverfly has completely reoriented his stance. And that is in response to that little dot down in the bottom picture, which I don't think any of you paid any attention to before I circled it. That's another male hoverfly intruding in this territory. So in 30 to 40 milliseconds, he's completely reoriented his stance, despite having this really poor spatial input.

However, insects have much faster photoreceptors than vertebrates do. In the top diagram we see a recording from a toad photoreceptor. Toads are pretty slow even for vertebrates, but it still illustrates the point. And in the bottom diagram, you see the response of a fly photoreceptor. What I want you to notice is the difference both in the x axis and the y axis. In the x axis, there's a 10-fold difference, and in the y axis, there's a 5-fold difference. So fly photoreceptors, compared to vertebrate photoreceptors, are much, much faster and also give much stronger responses to small impulses of light.

So I just want to illustrate what it means to have faster photoreceptors. It's almost as difficult as explaining to a colorblind person what color is, but a way to illustrate it is to show a video and then slow it down a lot. If we could see faster, we could see fast movements better, and to give us an impression of that, we can just slow down a video. So first I'm gonna show you a video where you can all pretend that you're humans and see what that little dot looks like. I want you to focus on the video on the left, and you need to pay attention because it's very quick. All right, that was quick. If we then look at this slowed down 10 times, as a demonstration of what this would look like to a fly, you can see that all of a sudden we can see all these fine movements and all these turns and changes in trajectory. So even though flies get a really poor spatial image of the world, they see things much faster than we do.

So this is a summary of that background: we have the poor optics, we have fast photoreceptors, and we know that they have amazing behavior. And this is kind of the background to my research career: I've been wanting to understand the link between the front end, so the photoreceptors and optics, and the behavior, and to understand how the nervous system generates these amazing behaviors. That's what I've focused on since 2004.

So again, for those of you who are not fly people, I just wanna give you a bit of an introduction to the insect nervous system. Here is a lovely fruit fly, and in blue you can see the nervous system. What you can see is that there's a big blue blob in the head, that's the brain, and then there's another big blue blob in the thorax part of the body. And that's where you have all the motor control. So everything that controls the wings and the legs and how the abdomen and the thorax move, all of that is controlled in the body. The head is pretty much focused on dealing with sensory input; the brain is largely the sensory part of the nervous system. We are now going to look at a schematic of this, seen from above, and that's what you can see here.
So the central brain, the gray and the green parts, are the parts that are in the head, and the thoracic ganglia, that big blue part, are in the body. Sorry, I'm just gonna make sure what's on my next slide. Okay. So the neurons in the brain are connected with this big ganglion in the body via descending neurons, and you can think of them kind of like the neurons in our own spinal cord. We have processing happening in the brain, then we have a lot of processing happening in the body, a lot more than we have in our nervous system, and then we have these descending neurons that connect the two.

The green part in this diagram, that's the optic lobes. And if we look at the target motion pathway, that's the pathway that would detect small objects, such as the intruding hoverfly in the photos that I showed you before, where we had a male hoverfly guarding his territory and an intruder came through; that intruder can be seen as a small target. There are neurons in the brain that respond selectively to the motion of such targets, which I'm going to show you soon. Those neurons in the brain connect with descending neurons, which also respond selectively to targets, which I'm also going to show you soon. And we think that these neurons control the motion of the neck and the wing muscles and the halteres and the different body parts. The reason I've shown these as dashed lines is that direct synapses have still not been demonstrated. We've tried, but we haven't shown direct connections yet. So this is indirect evidence that these are connected this way.

So we can record from the neurons in the optic lobes, and for those of you who actually do electrophysiology, I thought it would be nice to see how we prep our flies. So there's a lovely hoverfly feeding from a flower. They are bee mimics; they feed from the same kind of flowers that honeybees and bumblebees would feed from, and they love nice sunny days, which is really good when you need to go out and catch them, because you get an excuse for getting out of the lab, and it's exactly on sunny days, about 20 degrees warm or so, that you find them. So it's a lovely excuse to get out. We usually immobilize them by putting them in an Eppendorf tube and waxing them down with a beeswax and resin mixture. So what we've done in that photo is cut the end of the Eppendorf tube. The head is tilted forward so we can gain access to the back of the head, and we've immobilized the whole thing with the beeswax and resin mixture. The bottom left picture shows what this looks like from the back of the head. What you can see coming in there is the electrode going into the optic lobes. We use landmarks on the brain to know where to target our neurons. We don't label them or anything, we just use landmarks on the brain, and then we use the physiological responses to know what neuron we're recording from. And then we put the whole thing on a stand in front of a screen, and we show movies and different things to the animals and record the responses.

So I want to show you a video here. What we've done here is record from a target-sensitive neuron in the brain. What you can see is the screen that the hoverfly was looking at; the hoverfly was roughly centered in front of the screen. We're showing a target moving back and forth across the screen, and you can see that we are varying the height of the target.
I'm just going to pause that right there before it starts over again. So what I want you to see is that when it's a really small target, we get a strong response; here you can see a lot of red spikes, so that means that we're getting a strong response. When we increase the size of this object, we're still getting a fair number of spikes. But when we increase the height of that target even more, all of a sudden the response is virtually gone. And you can see that when the bar is even higher, the response is completely gone. So these neurons respond selectively to the motion of small objects. This is summarized in the diagram here: you can see that the peak response is to targets a few degrees of the visual field. Again, just to remind you, a few degrees is the width of your thumb at arm's length.

So what we did next, and this is all data from my postdoc, is that we looked at what happens if we have a target either moving over a white background, or over a background that's moving at the same velocity as the target. What I'm hoping you can see, if you look at the red response down at the bottom, is that whether the target moves over a white background or over a moving background, the neuron still responds. There's virtually no difference. Again, I've summarized that in the diagram; this is across repetitions. You can see that we get a strong response when the black target moves over a white background, but we also get a strong response when the target moves over a moving background with no velocity difference.

When we discovered this, and this was one of the main findings out of my postdoc, it was completely against all prevailing models for target detection. And when I did this experiment, I had actually done it as a control experiment, to determine how small or how large a velocity difference between target and background they need to actually see a response. So I just want to remind you, from the beginning of my talk, remember the poor spatial input. That target on that background, this is what it looks like once it's blurred through the optics of the hoverfly eye. So it's absolutely mind-boggling that they can pick up such a clean signal when you know that this is the input.

But I was pretty happy here for quite a while. I thought, okay, we have poor optics, we find neurons in the brain that clearly respond to a target over a moving background, which could in principle explain this amazing behavior. So I left the target detection field for quite a few years. I worked on other parts of motion vision, I worked on natural images, I did pollination studies, I did a whole range of other studies. But then remember what I just told you a few slides ago: the neurons in the brain connect to descending neurons, and those control behavior. So even if I had a decent understanding of what is going on in the brain, the neurons that actually control the behavior are the descending neurons, so we need to know what they do as well. It was about this time, in 2016, that we started looking into this.

So the first thing that we started doing was to try to identify target-selective descending neurons (TSDNs), neurons that respond to targets as well. And again, I wanna show you how we did the prep. On the left is the hoverfly, ventral side up, because insects have a ventral nerve cord.
So the easiest way for us to access it is from the ventral side. You can see a little metal hook which we use to lift the ventral nerve cord, and that's just to give it a little bit of mechanical stability. It's very, very hard to see; it just looks like a little string of jelly across the metal hook.

So Sarah Nicholas, who did this work, quickly found neurons that had a very similar size tuning. They had a peak response to targets a few degrees of the visual field, and the response rapidly dropped off. And then I asked her, just to verify that these were the same neurons, to do a similar experiment to what I had done during my postdoc. That's what you can see in this video. First she shows a black target moving on a gray background, and then a black target moving on a moving background. And I'm hoping that you can all see from the red response down at the bottom that when we have the black target on the gray background, we get a response, but when it's on a moving background, there's no response.

For those of you listening now who are postdocs and PhD students, I can tell you that I was that incredibly annoying supervisor who did not believe the data. To me, this was just a control experiment, just to verify that these descending neurons were postsynaptic to the target neurons in the brain. So Sarah did these experiments, she showed me the data, and I said, you must have done it wrong. I asked her to reanalyze the data. We checked all her stimuli. I even hung over her shoulder while she was doing experiments. And eventually I had to admit that yes, this is a real response.

So just to remind you: in the brain, if we have a black target on a white background, the neuron responds the same way as to a black target on a moving background. We think these neurons synapse with the descending neurons, where the ability to see a target on a moving background is all of a sudden gone. And the descending neurons are the ones that control behavior. So it's a very frustrating result, but it also highlights, to me, one of the reasons why science is so much fun. You think you've understood the system, you think you've completed a story, and then you're like, oh, hang on, I'm just gonna check this. And then that turns into massive follow-on questions, and you realize that you haven't understood anything at all. To me, this is really the beauty of doing basic, curiosity-driven science: you discover these weird things along the way, and if you are lucky enough to have funding and to be in a lab that can support you, you can follow these weird, quirky findings and work out something completely new.

So, this slide is just summarizing what I just said, that it defies everything. What we decided to do was to start looking at other backgrounds. These are also experiments that Sarah has done. Instead of using these kinds of cloudy, cluttered background patterns, we designed a background pattern that itself consisted of targets. So it consists of thousands of targets that move coherently together to create background motion. First, I'm showing you a video where there's a target moving over a stationary background. And what you can see is that when the target moves over the stationary background, you get a robust response as the target moves across the receptive field. The receptive field is the thing that I've outlined with these contour lines.
What we did next was have the target move in the same direction as the background. I can't see the target anymore, even though I know where it's supposed to be. I'll just go back and show you the stationary movie again; look at the stationary movie and try to memorize where the target is. And then we look at the video where the background moves as well as the target. It's incredibly hard to see. And if you look at the data on the left, you can see that for the neurons as well, it was incredibly hard to see this target. It just disappeared.

But what Sarah did next was to play the background in the opposite direction, and that's what she's done here. What she discovered then was that if the background moves in the opposite direction to the target, the response is actually increased compared to when the target is on the stationary background. And I just wanna explain how we normalized the responses in these data sets on the left. You can see we normalized the response from zero to two, and one, in this case, is the response to a black target on a white background. So if you show a target on a background moving in the opposite direction to the target, you get a stronger response than if you show a target on a white background. Less contrast gives you a stronger response. Again, one of those results where I just went: how does this work? It's very, very bizarre.

So we went on to look at what part of the visual field is important for this effect. We divided the screen into four parts, as you can see in that top left diagram: dorsal, ventral, ipsilateral and contralateral. The dorsal part is the only part that coincides with the receptive field of the neuron; all the other ones are separate from the receptive field. First, this is when we move the background in the same direction as the target, so this is the same type of experiment as I showed in the previous slide. You can see that if the target moves in the same direction as the background and the background covers the full screen, we get a strong inhibition. If we show the background pattern only in the dorsal or the ventral part, we still get an inhibition. And remember, the ventral part does not overlap with the receptive field, only the dorsal part does. Then, if we show it only in the ipsilateral or contralateral part, there's no inhibition, no significant effect. We saw the corresponding thing for opposite-direction optic flow, so for the facilitation: if the background optic flow covered only the dorsal or the ventral part, we got significant facilitation of the response, but if it was in the contralateral or ipsilateral part, there was no significant effect. So it's really striking that there's something particular about the frontal visual field.

So we decided to go back and look more at the descending neurons and their receptive fields. Here we have the receptive field of an individual TSDN, color coded to show the spike frequency, and you can see the midline and the equator of the visual field. What you can see is that the receptive field is in the dorsal frontal part of the visual field. If we look across neurons, the blue outline shows the same example as in the heat map on the left, and then I show 10 different examples in gray outline. We have over a hundred neurons now; when I did this graph, we had 95. I can't show all of them because it just becomes a big gray mess.
So I've only shown 10 examples, but the pink lines show the median and the mean across 95 receptive fields. You can see that they're clustered quite tightly in the dorsal frontal visual field.

We also looked at the direction selectivity of these neurons. What this diagram shows is the preferred direction of each neuron. For example, if it's in the blue quadrant, that neuron prefers motion to the right; if it's in the yellow quadrant, it prefers upwards motion. And how far away from the origin it is, so how far away from the center of the circle, indicates how directional it is. For how directional it is, you can look at the equation down at the bottom: we just took the maximum response minus the minimum response and divided it by the sum of the two. So if a neuron responds maximally to rightwards motion but not at all to leftwards motion, you get a direction selectivity of one. If you still get a little bit of a response to the opposite direction of motion, you get a graded value. These neurons are all quite directional; you can see a lot of them cluster around 0.8 to one, which is very strongly directional. And what you can also see is that we seem to have a tendency for preferred directions to the left and right.

So we investigated this further and looked at the center of each receptive field, and then color coded it according to the preferred direction. That's what the diagram in the bottom right shows. And I think it's quite clear, just looking at this diagram, that the neurons that have their center on the left side of the midline prefer motion to the left, and the neurons that have their center on the right side of the midline prefer motion to the right. So this suggests that they tend to prefer motion up and away from the visual midline.

So we decided that to understand this a bit more, we needed to look at what the target image actually looks like during pursuit. The way we've done that, and this is work done by Malin Thyselius, a PhD student who is still in Sweden, is that she built this massive arena. It's a one-meter cube, and she has attached beads to a fishing line that's controlled by a rotor. That means we can program a bead to move at predetermined speeds. It moves along a horizontal path and goes back and forth, with a little pause in between each direction, and the speeds are shown in a random order. We can control speed and acceleration, and because it's a physical bead, we can change the physical size of the target as well. It took her quite a while to encourage them to behave in this arena, but after a while she managed, and she has filmed about 100 pursuits, or rather she has filmed more than that, but about 100 that we have completely reconstructed, of hoverflies pursuing targets.

So here's an example of one such pursuit. The pink data is the hoverfly, the gray data is the bead. The hoverfly starts at the blue circle. The bead started on the right-hand side, then moved to the left, paused for a bit, and then moved back to the right. So if you look at the blue dot, which is where the hoverfly starts, at that point the bead was all the way to the right. Then the bead starts moving to the left, at the same time as the hoverfly starts its pursuit.
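An aside on the direction selectivity index from a few slides back: it is simple enough to write down directly. Here is a minimal sketch of that calculation; the function name and the example spike rates are illustrative, not from the talk.

```python
import numpy as np

def direction_selectivity(responses):
    """Direction selectivity index as described on the slide:
    (max response - min response) / (max response + min response).
    1 means the neuron responds only in its preferred direction;
    0 means no direction preference at all."""
    r = np.asarray(responses, dtype=float)
    return (r.max() - r.min()) / (r.max() + r.min())

# Illustrative spike rates (spikes/s) for leftward, rightward,
# upward and downward motion of a small target:
print(direction_selectivity([5.0, 45.0, 20.0, 10.0]))  # 0.8, strongly rightward
```

A value of 0.8, at the low end of what is reported for most of these neurons, still means the anti-preferred response is only about a tenth of the preferred-direction response.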
So remember what I spoke about right at the start, how the goalkeeper, and also killer flies, predict where the target is going to be and when they need to be there. The hoverfly is flying towards the bead's future location; it's not flying to where the bead is at that time. You can see, if you look at dot two, that by that time the target has reached the left-hand side of the arena, and then the hoverfly does this bend towards number three, and that is because by that time the target has actually turned around and started its trajectory back again.

So we have, I thought I had a list of this, but I've deleted it. Okay, so we have about 100 of these reconstructions with different bead sizes. We can use this to look at what the target looks like and what the flies actually pursue. This diagram shows at what distance the hoverfly starts pursuing these artificial beads, as a function of bead size. I just want you to appreciate the massive range of distances at which they start pursuing: from just a few centimeters all the way up to 100 centimeters, which is pretty much the limit of the arena, since it's a one-meter cube. So they really use the whole space; it doesn't seem like they have a narrow little window where they want to pursue targets.

Because we know the physical size of the bead and we know the distance, we can then calculate the retinal size of the bead at pursuit start, and that's what this diagram shows. The numbers listed under each data set show the median size at pursuit start. You can see that for the smaller targets, the median is around one degree of the visual field. And if you remember from earlier in my talk, I said that the size tuning of target-tuned neurons in the brain and of the descending neurons peaks at around one to three degrees of the visual field. So this shows that the neurons that are involved in the behavior should clearly be able to respond to these targets. But again, you can see it's a massive range of sizes: from 0.4 degrees of the visual field all the way up to over 10 degrees when they decide to start pursuing the target.

We also looked at where in the visual field the bead is during pursuit. We can do that by calculating something called the error angle, and that is the angle between where the hoverfly is heading and where the bead is relative to the hoverfly. So if the hoverfly is flying straight towards the bead, the error angle is zero, but if it's flying directly away from the bead, it will be 180 degrees. This shows the error angle as a function of time, from 100 milliseconds before pursuit start, at pursuit start, and then up to 500 milliseconds after pursuit start. What you can see is that as the pursuit progresses, the bead tends to be more and more in the anterior part of the visual field, but there's quite a large spread, both on the left and the right side, if you look at the 500-millisecond data.

Oops, sorry, wrong way. Sorry, I should have had another pictogram here. In the bottom row, I show anterior to the left, dorsal up, ventral down, and posterior to the right. So in the top diagrams we're looking at this from above, and in the bottom diagrams we're looking at it from the side.
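The two quantities just described, the retinal size at pursuit start and the error angle, are both simple geometry. Here is a minimal sketch of how they could be computed from reconstructed positions; the function names and the example numbers are illustrative, and the talk does not specify the exact implementation used.

```python
import numpy as np

def retinal_size_deg(bead_diameter, distance):
    """Angular size (degrees of visual field) of a bead of a given
    physical diameter seen from a given distance (same units)."""
    return np.degrees(2.0 * np.arctan(bead_diameter / (2.0 * distance)))

def error_angle_deg(fly_pos, fly_pos_next, bead_pos):
    """Angle between the hoverfly's heading (direction of travel between
    two successive frames) and the bearing from fly to bead.
    0 deg = flying straight at the bead, 180 deg = flying directly away."""
    heading = np.asarray(fly_pos_next) - np.asarray(fly_pos)
    bearing = np.asarray(bead_pos) - np.asarray(fly_pos)
    cos_a = np.dot(heading, bearing) / (np.linalg.norm(heading) * np.linalg.norm(bearing))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# An 8 mm bead seen from 40 cm subtends roughly 1.1 degrees,
# right in the peak of the neurons' size tuning:
print(retinal_size_deg(0.008, 0.40))

# Fly at the origin moving along +x, bead off to the front-left:
print(error_angle_deg([0, 0, 0], [0.01, 0, 0], [0.5, 0.5, 0.0]))  # 45 degrees
```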
So if we look at the hoverfly from the side, again you can see that as time progresses, the hoverfly tends to have the bead anterior, but at the start it can be dorsal, ventral, anterior or posterior. The whole visual field is where it can have its target.

So we went on to calculate what this target image would look like on a screen that we can actually use as a stimulus in electrophysiology, and that's what Yuri Ogawa has done and what you can see here. From that same pursuit that I showed you before, she has calculated not only what the target image looks like at each point in time, but also how to project that onto the screen so it can be used in electrophysiology. I've color coded some of the same positions in this trajectory. Again, you can see how the target image moves up and around a bit, primarily in the dorsal frontal visual field. This is just another illustration of the same data: I'm showing the probability that the target is in a particular location on the screen, color coded for the probability that you would find the target image in that location. So that's for this one particular trajectory. If we look across six trajectories, remember I said I had about a hundred trajectories, so this is across six of them, you can see that we start getting a congregation towards the frontal visual field. If we look across 91 trajectories, you can see we get a clear congregation in the frontal visual field. So this is where it's most likely that you find the target image during pursuit. And remember, the receptive fields are dorsal frontal, and here we found that the target image tends to be in the frontal visual field.

One of the problems with our recordings was that to get the hoverflies to behave, we needed a really big arena; they didn't behave in smaller arenas. That meant that we couldn't reconstruct the head movements, so we have assumed where they're looking without actually properly reconstructing the head movements. We think that this slight discrepancy in the dorsal extent is largely because we haven't accounted for subtle head movements, which is something that we need to look at in the future, but we have to find a way to get them to behave in a limited space, because otherwise we can't reconstruct the head movements.

So this is the target motion from one short extract of a pursuit. If we replay these to the TSDNs, we can see how the target neurons respond to actually reconstructed pursuits, and that's what this data set shows. I'm just showing you a very, very short example of this. The top left diagram shows, across nine trials, how the neuron responds. I think it's quite stunning. It's a recording over several seconds, and look especially at the one spike happening at frame 165. If you'd only done one recording, you would have thought that was just noise, but you can see how it comes back in almost every single repetition, suggesting that there's something extremely specific about that part of the trajectory that really drives the response. And then we have nothing for a while, and then a burst of spikes. What Yuri has done in the top middle diagram is color code where the target was when the neuron was responding. That's where you can see that little small burst in orange, and then we have this really pretty rainbow color where we have the longer burst of responses.
And if you compare that to the receptive fields that we have on the right, you can see that the responses are almost like a compromise between where the target trajectory is most likely to be and where the receptive fields are. They tend to respond a little bit lower than the receptive fields, but they are largely driven by the receptive fields.

So what I hope I've made you appreciate is that you can think that you have a complete story, you can think that you understand the system, and then you start digging into it and you realize that there's a million questions left to answer. I've loved working on this. Hoverflies are amazing animals. They're very, very robust in the lab, but they also perform these beautiful behaviors out in the field. And I feel like every control experiment we do turns out to be another big study, because they never, ever behave the way we expect them to. These last few things that I showed you, the reconstructions and the target pursuits themselves, that's stuff we're working on at the moment. We're trying to make sense of a lot of the data, and hopefully we'll have something ready to submit over the next few months, but it's definitely work in progress.

And I want to thank everyone in my lab. One of the big things that we're working on at the moment that I didn't speak about is work done by Raymond, Chris and Luke. They have developed a virtual reality arena where hoverflies navigate in a virtual world. What we're hoping to get them to do, they haven't done it yet, is to pursue artificial targets. That way, we should be able to actually reconstruct the target image during the entire pursuit, because we can film them from a really close distance while they're navigating in the virtual world. But most of the work that I showed you today was done by Sarah Nicholas, Yuri Ogawa and Malin Thyselius. And I definitely want to acknowledge my funding bodies and my collaborators. So thank you very much.

Thank you very much, Karin. Fascinating stuff. And I'm really happy to see the clarity you presented with, given the late hour where you are right now. So I don't plan to torture you with many questions. There's already one question from Tom, posted while you were still talking. And before I start with the questions, I would like to remind our audience that they can either post their questions there, or, if they prefer, join us in this room in case they want to follow up on the discussion.

So the first question is from Tom: I might have missed this, but how does the target size relate to the interommatidial angle? Do we think these are possibly effects that are locked to the ommatidial grid?

Very good question. And I didn't say that; I probably should have done. I'll just start my screen sharing again. So this picture shows a female and a male hoverfly. It's not Eristalis tenax, which is the main animal we work with, but in most hoverflies, including Eristalis, the interommatidial angle is roughly one degree. So there's actually a very, very nice correlation between the peak sensitivity of the neurons, which is here, and the interommatidial angle. That seems like a really nice match. However, dragonfly target neurons have exactly the same curve, and the dragonfly interommatidial angle is 0.24 degrees. And there's also old work from cats.
Way back in the 70s, they discovered target-tuned neurons in cats, and those also peak at about one to three degrees of the visual field, and obviously cats have a lot better spatial resolution than the flies do. So in hoverflies there is a very good correlation between the interommatidial angle and the peak sensitivity of the neurons, but because of these other animals, I actually don't think that that is the reason.

Thank you very much for addressing this. Again, in the interest of time, I will proceed with the questions. People are congratulating you and thanking you for your talk. Marion Silies has the next one, and it kind of takes things in that direction; I also have some questions. So: TSDNs are probably getting many other inputs than just from the small target motion detectors. What do you actually know about the presynaptic circuit?

Nothing. No. So we think some STM... Okay, sorry, I'll just step back a few steps. There are about 20-ish STMDs in the optic lobes. In terms of the descending neurons, we've recorded a lot, but we cannot reliably cluster them into more than the left and right side. As I showed you with the direction selectivity, up and away from the visual midline, we find no good separation of the neurons. And in dragonflies, in work by Paloma Gonzalez-Bellido, she showed that there are eight target-selective descending neurons on each side, so 16 all in all. So we think some of the STMDs connect with the TSDNs, but because there's also this effect of the background optic flow, that input must come from somewhere. So there could be optic-flow-sensitive neurons that also synapse onto the TSDNs, or the STMDs do not connect directly with the TSDNs but via a detour, and that's where they get the interaction with the wide-field optic flow. But somewhere the optic flow has to interact, because you can't have a clean signal and then lose that signal without getting some kind of input from the optic flow system, or else the TSDNs just get their input from somewhere else entirely. So we don't know. This is something I've been wanting to look at for a long time. It's very, very hard to do intracellular recordings of the TSDNs and get robust recordings, and we need to be intracellular to get reliable dye fills, but this is something that I really want to look at.

And similarly downstream: there are sentences in an old, but amazing, thesis from Robert Olberg, where he showed that when he injected current into the TSDNs, the wings moved. But again, no one has shown direct evidence; we have no synapses or anything. So that's again something that I would love to do: either record from TSDNs and record from the muscles at the same time, or at least look at where they synapse. Sorry, that was a very long answer to probably a very simple question. I could have said: we don't know.

It's extremely interesting. Even if it's just speculation, or if the evidence is still scarce, it's super interesting. And a follow-up question to that, that I personally have: do we know what controls the head movements?

So we think that the TSDNs, and this is based on dragonfly work that Paloma has done, in dragonflies the TSDNs have their output synapses in the areas where the wings are controlled. But I've also tried to look at the Drosophila atlases and tried to work out which neurons could potentially be TSDNs. So I think it's reasonable to assume that they also connect with the neck muscles.
Because to be able to do these high-performance target pursuits, it makes sense that the head is really involved. So how are we going to address this? I think it has to be addressed by some kind of dual recordings, but also with proper dye fills, doing anterograde and retrograde tracing to actually look at those targets. We've tried, but haven't tried hard enough yet, so we haven't been able to solve this. But it's definitely something that I think needs to be solved before I'm satisfied that I understand the system.

Before I continue with the questions, I would like to remind the audience that they can join the Zoom room we are currently sitting in by clicking on the link that I posted. You have a really nice comment from Simon Laughlin, who says: nice talk Karin, going straight into my lectures on fly motion vision starting tomorrow.

Oh, thank you. Thanks, Simon.

And I will try to hold my questions for now. Ines Ribeiro has two questions; we'll start with the second and shortest one: is the error angle calculated as in Land and Collett?

I'm just gonna find the slide. Pretty sure that's how they calculated it. The annoying thing with the error angle and the bearing angle and proportional navigation and all these things is that the terminology is a mess in the field. I'm pretty sure this is what they did, but it's midnight here now, so my brain is not 100% functioning, and there are so many slightly different versions of how to calculate it. What we've done here is use the heading, so where the fly is flying: the angle between its heading and where the bead is, at each time step. So for this calculation it's not where the target is in the visual field; it's relative to where the fly is flying. And I can't remember if that is exactly what they did in that paper. I should check myself tomorrow whether I'm misremembering, because I have read these papers.

We're trying not to torture you too much with the questions. And of course we try to accommodate different time zones, but sometimes it's super tricky; most of our friends in the US are still asleep. So the second question from Ines, after "very cool talk", was actually her first question: TSDNs do not respond to small targets on a moving background, contrary to their inputs. It seems that the input target cell encodes all the complexities of the stimulus, but one of its downstream targets does not. Is there a computational advantage to dividing the signal provided by a target cell, for instance for control of self-motion?

It could be. So we just published this, the one with the dots moving in the opposite direction. When we published this in PNAS, we should have included that diagram, because what we were suggesting there was that if the background and the target are moving in the same direction, the optomotor response and the target pursuit follow each other. So then you don't actually need a target signal, because the optomotor response will drive you in the direction of the target anyway. But if the target and the background move in opposite directions, the optomotor response and the target pursuit go in opposite directions. So to override the optomotor response, you might need an extra strong signal to be able to do that.
Again, that could be, I guess, the advantage of doing this. But still, this result: why go through all the trouble of generating such a clean signal in the brain if you're then not sending it to the downstream neuron? When you've gone through all this trouble of getting the signal, there must be some kind of benefit of doing it this way. And I need to remember that the neurons in the brain are not just motor output neurons. They can send signals to other parts of the brain: to the contralateral part of the brain, to the central complex; they have all these interactions in the brain. So maybe for the processing that happens in the brain, for keeping motivation, for keeping focus on the target, you need to have a very clean signal. But the descending neurons are the ones that control behavior, and all you need to know to control behavior is: should I keep going the way I'm going, should I turn, or should I stop? So maybe in that context you actually don't need this clean signal anymore. You just need to know: should I follow the optomotor response or should I override it? And that could be a way to do that. But I'd like someone who's better at modeling than me to try to make sense of this, to dig into that.

Yeah, thank you very much for all the answers. I would like once again to thank you for giving a talk in our series. I have more questions myself, and Simon is already in the room with us, but I will be stopping the live transmission in a couple of minutes. So if anyone from the audience is still interested in tagging along for a couple of minutes before we let Karin go to bed, you can follow the link to the Zoom room that I posted in the YouTube chat.

So one question I had, maybe it's super stupid, but from one of the first images that you showed: males and females have differently shaped eyes, right? It looks like the males' eyes are connected, and they sample what's exactly in front of them. Do you think the mechanism could be different between males and females?

Yeah. So most hoverflies have sexually dimorphic eyes, and, hi Simon, it's what we use to actually tell them apart when we collect them. The males tend to have an acute zone, or a bright zone, in the part of the eye where the eyes join. So we've always thought that that's used to be able to visualize females, to visualize conspecifics, because you have better acuity in that part of the eye. It's the dorsofrontal part of the eye, so you're probably more likely to visualize a target against the bright sky, and you have faster photoreceptors in that part of the eye. All these pieces of evidence suggest that this could be used for target detection. However, when we looked at these reconstructions, the target is a bit lower than we expected. And, I didn't really point this out, but they pursue artificial targets in the arena from above as well as from below. So they did not at all always pursue the target from below; there was no asymmetry in whether it came from above or below. Again, a result I didn't believe; I thought my student had flipped the videos upside down. So I did not believe that result, but they can perfectly well pursue targets from above, and keep following the target from above.
So yeah, and what else was I gonna say? Females do actually have target neurons. We published one paper on their target neurons in the brain, and they are very sharply tuned as well, and they respond to small targets that can move really fast. But we haven't looked yet at whether they also have target-tuned descending neurons, so we don't know that.

Okay, because the descending neurons, at least in the males, it kind of looked like they sample space quite selectively. Like you said, the preference for horizontal motion, but for the vertical directions it looked like there were no neurons, right? Super low numbers in the figure.

So was that the direction data that I showed you? Yeah, I'll just try to find that picture, then it's easier to... Okay, I can't find it. Maybe a couple of slides back, I think. Yeah, I think it is. There, that one.

Yes, yes, from this I assume they will always approach their target from the bottom up?

Yeah, you'd think so, exactly. But what I just wanna point out here is that this is the preferred direction. It doesn't mean that they can't respond to downwards motion. If you look at the diagram down at the bottom there in pink, this is just one example neuron, and you can see that that one example neuron did respond to both downwards and upwards motion, but it had a preferred direction towards the right. So we never found a neuron, or I think there's one, that prefers downwards motion, but that doesn't mean that they couldn't respond to downwards motion.

Yeah, you're absolutely right. I misinterpreted.

No, no, no, I did too when I first saw this diagram. It's hard to find good ways to visualize data that include everything you want to explain, yeah.

So at this point, I think I will be stopping the live transmission and officially waiving my moderator rights. For the people who are already in the room, please go ahead and unmute yourselves in case you want to ask Karin a question. Thank you. Thank you very much.