Okay. And it appears that we are live. Great. Hello, everybody, and welcome to another session of our Sussex Vision Seminar Series, as always, within the World Wide Neuro initiative. I'm George Caffetzis, a master's graduate from Thomas Soyer's lab and currently a PhD student with Tom Baden. And as your host for today, I would like to once again begin by thanking Tim Vogels and Panos Bozelos for putting forward this ever-expanding initiative towards a cleaner and much more accessible seminar world. Having said that, allow me to get back to the reason we all gathered here today and introduce our guest from Janelia Research Campus, Dr. Kit Longden. Following his studies in physics at the University of Manchester, Kit went on to obtain his PhD in computational neuroscience from the University of Edinburgh in 2005. After a couple of years in the Neuroinformatics Network, studying the neural basis of geotaxis in Drosophila, he moved to the bioengineering department of Imperial as a research associate in the Krapp lab and focused on state-dependent visual processing in the blowfly. In 2014, Kit moved to Janelia, where he has been based ever since, nowadays as a research scientist in Michael Reiser's lab. With a number of fascinating projects, and by employing a plethora of techniques ranging from behavioral to anatomical, functional and computational, they seek to understand how different visual cues, with color, of course, in the spotlight, are detected by the visual system and processed throughout the brain. So today we have the pleasure of hearing about the latest and, I'm sure, exciting findings in his talk entitled "Synergy of Color and Motion Vision for Detecting Approaching Objects in Drosophila". So without any further ado from my side, please all welcome Dr. Longden. Kit, the stage is officially all yours. Cool, what a wonderful introduction. Thank you very much, George, it's very kind of you.
It's an honor to present here today, both to present for World Wide Neuro — I totally echo what you were saying about that, it's been such a good idea, so thank you to all those involved — but also to the department at Sussex, a long-time home of personal heroes. So thank you very much for the invitation. George does really interesting work on the retinas of sharks, so I highly recommend you talk to him about his work if you get the chance. Okay, I work on color and motion vision in flies, and today I'm going to tell you about a study I recently posted on bioRxiv, and also a little bit about a paper that recently came out in eLife, which was a collaboration with Mathias Wernet's lab in Berlin. So when I say color vision, I'm really interested in UV vision. We humans don't have very good intuition about just how useful UV can be, and the last few years have been a really exciting time for discovering how UV is used by different animals in different ways for very specific behaviors. So in the mouse, the retina is organized to see UV in the sky, not the ground. This graphic is from a great recent paper from Thomas Euler's lab and collaborators, and following their ideas and data, it looks like mice could use their color vision, for example, to spot predators in the sky. So that's a task that very much involves motion. Now, on the other hand, in zebrafish — lovely work by Takeshi Yoshimatsu in Tom Baden's lab at Sussex — it looks like they use luminance, the brightness of light, to watch the sky, color vision to see the world beneath and around them, and UV to spot prey like paramecia. So spotting prey is also very much a task involving motion. Traditionally, motion and color vision are thought of as fairly separate computations, and color can help with motion detection when there are isoluminant edges.
So say, if I was a fly seeing this image of a citrus fruit, there'd be some edges that were equally bright for my luminance channel. If we take these two edges here, for my luminance channel they could be equally bright, and then I wouldn't be able to see their motion. However, if I used color information, then I could. So I hope to persuade you today of a mechanism by which color and wavelength processing can do much more than this for detecting object motion. The mechanism isn't just about detecting features in natural scenes, but about having mechanisms matched to your behavior that allow your movements to augment your sensory experience. So in flies, we know that UV illumination is important for many behaviors — here I've just picked three: choosing where to lay your eggs, circadian entrainment, and navigation. These are all behaviors where, in addition to having genetic access to the neurons, we also have access to connectomes, wiring diagrams of the underlying circuitry. So we can really look into how wavelength information flows through the brain and supports very different kinds of important behavior. But these behaviors are all about the illumination. And if there's a weak point in looking at color in the fly, it's that we know very little about how flies use UV to see objects that they're interested in. That's partly because studies have used displays designed for humans, using blue and green wavelengths, not UV. Seeing UV can be really useful for the fly. For example, here you're seeing bananas illuminated by UV light, invisible to us, and UV is very helpful for detecting ripe or bruised fruit. This isn't just true of bananas: for instance, citrus growers use UV illumination to pick out bruised fruit. So I'm going to tell you about a mechanism I've identified that helps flies to see UV objects, like fruit, as they fly towards them. Okay. So not everybody listening is a vision scientist.
So vision in flies, as in mammals, is split into on and off pathways. In this image, when you look at the light increments, analyzed by the on pathways, you see a vase. On the other hand, when you look at the light decrements, analyzed by the off pathways, you see two faces. Both of these views are carried by your brain. And the mechanism I'm going to describe allows flies to process two views simultaneously, letting them see the motion of UV objects, such as fruit, more clearly. Now, you're used to thinking of on and off as a complementary pair, with on and off being opposites of each other. But we're going to add wavelength processing to one of those channels, so that there are two monochromatic luminance channels with different wavelength sensitivities. And this is going to have simple but counterintuitive consequences. Okay. So first, I built projectors to display UV and green patterns. In this setup, I tether a fly and she's flying. I illuminate her with infrared light and watch the shadows of her wingbeats. Here she's turning left, and I can see that because she beats her right wing with the greater amplitude. So by tracking these wing movements, I can tell how she's reacting to the visual stimuli. In this setup, the patterns are displayed onto Teflon screens, and when you do that, the irradiance patterns are very different for UV and green: the green is scattered much less than the UV. I corrected for this using a luminance mask for the green channel, so that the green irradiance was linearly proportional to the UV. As a result, in the experiments I show you, the green channel is going to be held constant and the UV intensity is going to vary between 0 and 50. And if you're a color vision scientist using spatial projections of UV, I really recommend you check the spatial distribution of irradiances. Right.
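To make that correction step concrete, here's a minimal sketch of a per-pixel gain mask of the kind described above. The irradiance maps and values are hypothetical, not the actual calibration from the rig; the only idea it captures is attenuating the green channel pixel by pixel until the green irradiance is linearly proportional to the UV irradiance.

```python
import numpy as np

def green_gain_mask(uv_irradiance, green_irradiance):
    """Per-pixel attenuation mask for the green channel so that the
    displayed green irradiance becomes linearly proportional to the
    UV irradiance. Inputs are 2D arrays measured on the screen."""
    uv = np.asarray(uv_irradiance, dtype=float)
    green = np.asarray(green_irradiance, dtype=float)
    mask = uv / green          # gain needed at each pixel
    return mask / mask.max()   # normalize: a mask can only attenuate

# Hypothetical measurements: green scatters less, so its pattern is
# flatter/more peaked than the UV one.
uv = np.array([[2.0, 4.0], [1.0, 2.0]])
green = np.array([[4.0, 4.0], [4.0, 4.0]])
mask = green_gain_mask(uv, green)
corrected = green * mask
ratio = corrected / uv
print(np.allclose(ratio, ratio.flat[0]))  # → True: green now tracks UV linearly
```

Because the mask can only attenuate, it's normalized to peak at 1; the overall green level is then set separately by the projector intensity.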
In a setup like this, flies respond really well to simple patterns like bars, stripes and approaching objects, and these generate robust behaviors. So I systematically investigated what their responses were like in UV, and that led to the discoveries I'm telling you about today. They weren't really something that I was looking for, but I was very happy to find them. And I'm going to show you the responses to some of the most revealing patterns: on and off edges, and approaching discs. Okay. So when I show her a green off edge moving left, she's going to turn left. And when I show her a UV edge moving right, she turns right. Now, I keep the green intensity constant and vary the UV intensity. When I do that, I can find the UV intensity at which the left and right turns balance and she flies straight ahead. So when the UV intensity is dark, she turns with the green. When the UV is bright, she turns with the UV. And at an intensity of about nine, the two balance, and that's the isoluminance point for off motion. Okay. Now we can repeat this for on motion. Again, when the green edge moves right, she turns right. And when the UV edge moves left, she turns left. And I can vary the intensity so that the green and UV balance and she flies straight. Now, the UV intensity needed to balance green on motion is about half of what's needed to balance off motion. This means that in this setup, on motion processing is twice as sensitive to UV as off motion processing. So that's a big difference. And this asymmetry was very surprising; we didn't expect it. This asymmetry, we realized later, is the basis for the fly's ability to see the movement of UV objects like food, and I'm going to walk you through understanding that. So when the UV object — here, a disc — gets bigger, we predict that when it's bright, brighter than the green, it will generate on motion. When it's dark, darker than the background, it will also generate off motion.
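A minimal sketch of how an isoluminance point like the one above can be read off behavioral data: treat the net turning response (positive when turning with the UV edge, negative when turning with the green edge) as a function of UV intensity and find its zero crossing. The response values below are made up for illustration, not measured data.

```python
import numpy as np

def isoluminance_point(uv_intensities, net_turns):
    """UV intensity at which the net turning response (positive =
    turning with the UV edge, negative = with the green edge)
    crosses zero, found by linear interpolation."""
    uv = np.asarray(uv_intensities, dtype=float)
    turn = np.asarray(net_turns, dtype=float)
    # index of the first sign change in the response
    crossings = np.where(np.diff(np.sign(turn)) != 0)[0]
    if crossings.size == 0:
        raise ValueError("no zero crossing in the tested range")
    i = crossings[0]
    # interpolate linearly between the two bracketing intensities
    frac = -turn[i] / (turn[i + 1] - turn[i])
    return uv[i] + frac * (uv[i + 1] - uv[i])

# Hypothetical responses to an off edge: the fly turns with green
# at low UV, with UV at high UV, balancing at an intensity of 9.
uv_levels = [0, 3, 6, 9, 12, 15]
net_turn = [-3.0, -2.0, -1.0, 0.0, 1.5, 3.0]
print(isoluminance_point(uv_levels, net_turn))  # → 9.0
```

Running the same procedure on on-motion responses would then give the second, lower isoluminance, and the gap between the two is the asymmetry the talk is about.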
At middle intensities, it will be bright enough to generate on motion, but also dark enough to generate off motion. So now let's think about what happens when we just invert the color pattern. We've got exactly the same intensities of UV and green, but the patterns are inverted, so now we've got a green disc expanding out of a UV background. When the UV background is bright, the green disc will look dark, dark enough to generate off motion. And when the UV background is dark, the green disc will look bright and generate on motion. But now, at the middle intensities, the green disc just won't be visible — even though the two patterns have the same colors, the same chromatic contrast and the same luminance contrast. So to back up this intuition, I'm going to show you responses to looming discs. When the looming UV disc is dark, she first begins to turn towards it, and then, as it fully expands as if for a collision, she turns away. When the UV disc is bright, she robustly turns away from it. And at all the intensities in between, she's responding to the UV disc. To quantify that, I'm taking the response in a time period around one second, which is when the disc is fully expanded, and I'm plotting that here. When we invert the color patterns, so we've got green discs against a UV background: when the UV is dark, the disc is generating on motion and there's a small response. When the UV is bright, there's a small response again. At the in-between values, there's just very little response at all. So this is a very striking effect, and it replicates across different control strains, setups, and whether the flies are male or female. For example, we tested the difference in UV sensitivity for on and off motion in different Drosophila species. In all the species we tested, on motion was more sensitive to UV than off motion, so all the on motion isoluminances were much lower than for off motion.
In this data, we were measuring all the flies for both conditions so that we could have pairwise comparisons, and because of that it was limited to the range between three and nine. So that's why the data lies between these two values. Okay. So to understand what was going on at the cellular level, we first looked at the photoreceptors. Flies have five spectral classes of photoreceptors across their compound eyes. Under each eye facet, you get one of two flavors, the pale or the yellow ommatidia. In both, you have the R1 to 6 photoreceptors, which are the outer photoreceptors, sensitive to UV and green. And then the inner photoreceptors are the R7s, which sit on top of the R8s. For both pale and yellow, the R7s are sensitive to UV, and the R8s are sensitive either to blue or to green. So the first question I wanted to answer was whether it's on motion or off motion that's affected by UV. To answer that question, we made norpA mutants. These are mutants where the phototransduction cascade is not functional, so the photoreceptors aren't working. And then we could genetically rescue norpA expression in the R1 to 6 photoreceptors. So here the luminance channel is for sure going to be working, but the inner photoreceptors that are mainly used for color vision are inactive. By design, these are colorblind flies — they have to be colorblind, because they've only got one photoreceptor wavelength sensitivity. And for these and subsequent experiments, I just want to say thank you to Ed Rogers for making many of the flies, with reagents made by Heather. So in controls, the on motion responses are much more sensitive to UV than the off motion responses. But in the colorblind R1 to 6 rescue flies, the on motion responses have an isoluminance that is not significantly different from that for off motion. So this means that UV is selectively augmenting on motion rather than suppressing off motion.
So next, we wanted to understand which of the photoreceptors are enabling this difference. Just to make the plots a bit simpler, I'm going to plot the difference between the on and the off isoluminances. Here I'm plotting the same data as in the last slide: the pairwise differences for on and off motion plotted here for the controls, and then the pairwise differences that are nearly zero in the colorblind flies plotted here. When we rescue R1 to 6 and one of the R7s — either the pale or the yellow R7s, the UV-sensitive photoreceptors — we rescue some of this effect. Not all of it, but some of it. In contrast, when we rescue either the pale or yellow R8, we either don't rescue the effect, or only with a very small effect size. And across different combinations of rescued photoreceptors: when we rescue an R7 as well as R1 to 6, we rescue the effect; but if we rescue combinations without an R7, so with the R8s, then we don't really rescue the effect, except for the yellow R8, where there's a small effect size. So overall, these data show that it's the R7s, which are sensitive to UV, that are providing the UV sensitivity of on motion in flies. Right, so what about the photoreceptor targets? Here I'm going to switch slightly, because we know quite a lot about the photoreceptor targets thanks to a really exciting project that was the brainchild of Mathias Wernet in Berlin and Michael here at Janelia. It was a really exciting project, and it's really opened up new avenues for looking at the circuitry of colour and polarisation vision in flies. We took the existing EM data of the full adult fly brain, the FAFB EM dataset. This is a whole Drosophila brain, published in Cell from Davi Bock's lab in the Zheng et al. 2018 paper. And then we traced R7 and R8 photoreceptors in the central eye and along the dorsal rim.
Along the dorsal rim of the eye, the photoreceptors are specialised for polarisation vision, and we wanted to compare the circuitry between these two regions to see how they were specialised for polarisation vision or colour vision. This was a big team project, and I was one of four people, along with Emil Kind, Aljoscha Nern and Arthur Zhao, who led the analysis. The tracers were amazing. So in the fly optic lobe, the R1 to 6 photoreceptors — the photoreceptors that are really important for luminance — terminate in the lamina, whereas the R7 and R8 photoreceptors pass through the lamina without making any synapses and then terminate in the medulla. The first thing we did was take a class of cell, the Mi1 cell type, and reconstruct the Mi1s across this whole EM volume, and that way we could establish the retinotopic organisation across the medulla. This is obviously quite a lot of work, and thanks to Arthur for this — he led a lot of that. Now, in the central eye, the R7 photoreceptors terminate in layer six, and the R8 photoreceptors terminate in the third layer. But along the dorsal rim, which is important for polarisation vision, they both terminate in layer six. So Emil and colleagues could work out which of the columns in the medulla corresponded to the dorsal rim. And then, for the central eye columns, pale and yellow, the individual genius of Aljoscha Nern noticed that the aMe12 cells — accessory medulla cells, really important for circadian entrainment — selectively innervate pale columns. We're showing that here in this double expression pattern, where in magenta you've got the aMe12 cells, and in all the columns where they're not present, we're staining yellow R8 cells. So you can see that they only innervate the pale columns. And this meant that by reconstructing the aMe12 cells, we could designate all the columns across the medulla as pale or yellow.
So I really like this picture — it's like the best Easter egg. This was a major advance, because previously it was really difficult to know which were the pale and which were the yellow columns; you couldn't tell when tracing EM circuitry. So now we could pick two pale and two yellow columns, and three DRA columns, and reconstruct the targets of these cells. This is a long paper, and it's a really good reference work. I think a standout feature is that the data is really high quality: we traced 95% of the targets, so for pretty much all of the synapses we could say what the postsynaptic partners were. And we also made driver lines for many of the new cell types described. So not only can you read about the circuitry, there are also reagents to be able to investigate it. So yeah, this is a really exciting project, and there's an awful lot in it. I'm just going to tell you about a very small number of results that relate to color processing and to this connection between color and motion vision that I'm talking to you about. Okay, so here we go — a very small number of highlights. The first thing: Christopher Schnaitmann, when he was in Dierk Reiff's group, did some very important work showing that R7 and R8 photoreceptors inhibit each other, and that this is a really important part of color opponent processing. We found that many of the photoreceptor synapses aren't in the brain regions, but sort of between the brain regions. These are the R8 synapses onto R7s, and particularly the R7 to R8 synapses, which were missing in previous reconstructions. And here they are, in the axonal projections between the lamina and the medulla.
So previously there'd been reconstructions of the lamina and medulla, but for some cell types a lot of the synapses are present here, in this axonal projection between the two neuropils. Retrospectively it might not be surprising, but at the time we were very surprised. And for a number of cell types, particularly L3 and L1 — these are cells in the lamina — more than 50% of their inputs are in these axons. So they were totally missed in previous attempts to trace the circuitry. That's going to be very important for picking up the story of how color can get into the motion pathway later. Now, as well as the motion pathways, a lot of the cells receiving input from the R7s and R8s that are involved in color vision pass through to the lobula. And in the lobula, there are a number of cell types that respond to approaching objects. So we think that the lobula is really important not only for color processing, but also for object processing, particularly in the kind of behaviors I'm describing, where flies are responding to an expanding object — there are a number of different cells that you'd expect to be involved. Now, earlier work, from Chi-Hon Lee's lab in particular, had shown that subtypes of Tm5 neurons — neurons that go from the medulla through to the lobula — are selective for pale and yellow R7 and R8 inputs. So these are cells that could maintain the wavelength specificity of the individual R7s or R8s in pale and yellow columns. And they showed that these cells are collectively required to discriminate isoluminant green and blue patterns: when flies are trained to discriminate between isoluminant colors, these cells are needed. In our tracings, we could confirm that and extend those findings. For instance, the Tm5a cells are selective: they only get inputs from the yellow R7s, and not from any pale R7s in the pale columns.
The Tm5b cells get pale R7 inputs, but not yellow. And the Tm5c ones get yellow R8 inputs. In some sense, if this isn't your system, I can understand this being like an alphabet soup. But the point is that there are specific kinds of neurons that get direct input from specific kinds of photoreceptors with specific wavelength sensitivities, and that information is being transmitted on into the brain, towards cells that are important for seeing colored objects. And not only could we confirm this, we could also show that there weren't many other cells — we could say exactly how many cells actually get these inputs. Another highlight of the paper, for me anyway, was an entirely new pathway — well, newly discovered — for information to get to the lobula. Rather than traveling through the medulla to the lobula like the Tm cells, these ML neurons just go around the side. And they also make synapses in the central brain. These cells are really interesting, and here I'm just geeking out. Here we've got light microscopy of the population of neurons and of individual cells — beautiful images from Aljoscha. The population of cells divides into a dorsal population and a ventral population, so there's some level of retinotopy. Collectively, they cover the whole of the medulla, so they're covering the whole of retinotopic space, but one population is basically looking at the sky and one population is looking at the ground. So this is an example of something where there might be quite specific color circuits going on, depending on whether you're looking at the sky or the ground. But I'm speculating wildly. Okay. So I'd like to talk more about all that, but at some point I'm going to have to get back to the story. So here are the kinds of things that we could do with all this data.
For instance, here, this is the summary diagram from the paper, where we could show all the cell types that care about the specific pale or yellow R7s or R8s. And we can quantify that: the thickness of the arrows is proportional to the number of synapses onto cells that are selective for the individual photoreceptors. So here we can begin to quantify the extent to which color information is progressing from the photoreceptors out to different pathways. And, going back to the UV objects, we can do that for the R7s and the R8s. This tells us that there's a relatively small number of connections from R7 to cells that can then inform motion pathways — the T4 and T5 cells are the on and off motion pathways that we know a lot about. But there are many more connections to neurons of the lobula, and also to the central brain, that may well be involved in seeing colored objects. That's something that we could quantify and know from this study, and I want you to keep in mind that these cells exist. But what I'm going to tell you about today is just the role of these R7 inputs to T4 and T5 up here. Right. So when we silence T4 and T5 by expressing an inward-rectifying potassium channel, we abolish all the responses to our looming UV discs. So we know that we need the T4 and T5 cells. But previous work by Krishna Melnattur, when he was in Chi-Hon Lee's lab, and by others, had shown that when you presented blue-green gratings, you could vary the colors to find an isoluminance point where the optomotor responses would disappear. So the strong expectation was that motion processing should have an isoluminance point where you should be able to null the response.
And so for T4 and T5, two cell types that are really important for motion processing in flies, the strong expectation was that if you showed off motion, you should get T5 responses; if you showed on motion, so bright discs, you should get T4 responses; but there should be some intensity in between where you shouldn't see responses in T4 or T5 cells. So, imaging from the T5 cells using jRGECO — we're imaging in red and showing UV and green stimuli very similar to what we used for the behavior experiments — you see the T5 cells respond when the discs are dark. And then, as you increase the brightness of the discs, the responses are eliminated. To quantify the responses, we're again taking the responses of the cells at the point where the looming discs are fully expanded and the response of the cells is maximal. So when the discs are dark, you have a T5 response, and then the magnitude of the response decreases to an isoluminance at around about eight. When I image the T4 cells, they're active when the discs are bright, and then as the discs get darker, the response disappears, at an isoluminance of around about five. So this means that when discs are dark, T5 responds; when discs are bright, T4 responds; and when they're in between, both T4 and T5 are responding. So, as for the behavior, there's no intensity of UV discs where you're not getting a response. Now, this was really surprising, and I didn't expect it. And the question immediately is: how is information getting from R7 to the T4 cells? We know a lot about the circuitry of T4 cells. Here I'm showing histograms of the inputs to T4; these are medulla cell types. Roughly speaking, information comes in from the photoreceptors to the lamina, and then through the medulla to T4 and T5.
The medulla cell types presynaptic to T4 are Mi1, Tm3, Mi9 and Mi4. And these four cell types are themselves highly driven by the lamina cells: L1 is particularly important for driving Mi1 and Tm3, while L5 drives Mi4, and L3 drives Mi9. This data is taken from the seven-column medulla reconstruction by my Janelia colleagues. So this is again a little bit of an alphabet soup, and we can make it simpler. Here's a diagram simplifying some of these connections, which we can then follow through some imaging experiments to see how T4 gains its UV sensitivity. I've included Dm9 because, from lovely work by Sarah Heath in Rudy Behnia's lab, we know that Dm9 is a cell that also plays a significant role in R7-R8 color opponency and is very sensitive to UV. So this cell can be a benchmark for UV sensitivity in our dataset. And I also just want to give a shout-out to Trevor Wardill's work on this: he did a beautiful paper in 2012 that showed that somehow R7 and R8 feed into the lamina circuitry. So I just want to acknowledge that. So, imaging: first of all, this is imaging the lamina cells. L1 is really important for generating on-motion responses, and L2 is really important for generating off-motion responses. These two cells are highly interconnected in the lamina, so you'd expect them to have very similar properties — it would be very difficult to explain if they were different. And they're both off-sensitive, so they respond when the discs are dark, and then the magnitude of their response decreases to an isoluminance, which we can see here. For L2 it's very similar to L1, so that's good news. And these two isoluminances lie between the isoluminances for T4 and T5.
Okay, so L1 is important for driving activity in Mi1 and Tm3, and when I image them, their isoluminances are also very similar to L1 and L2, as is that of L4, which is important for driving cells in the off pathway. Now things get really interesting. L5 is a prominent input to Mi4 in particular, and both L5 and Mi4 are much less sensitive to UV than L1, L2, and even T4 and T5 — they're more sensitive to green, as is C3. Now, L3 drives input to Mi9, and both these cells are much more sensitive to UV than T4. They're not as sensitive as Dm9, our UV-sensitive reference cell, but they're more sensitive than these mid-range cells. So overall, this was really not expected: that there'd be this range of spectral sensitivities across such an early part of the optic lobe, which is thought to get direct input only from R1 to 6, with just minor inputs from R7 and R8 onto the lamina cells in the medulla. But the data indicate that the path from R7 to T4 — the reason why T4 is UV-sensitive — is that UV information is getting there from Mi9, from L3, and presumably from R7. But it is quite complicated. For instance, Mi9 and Mi4 are both inhibitory neurons that are strongly coupled to each other, and so some of their spectral properties are probably coming from their mutual inhibition. So it's not as simple as a feed-forward circuit of just one spectral channel. I've shown you a circuit basis for how UV can augment motion vision in the T4 pathway, but it's important to remember that there are these UV-sensitive cells going through to the lobula, and cells in the lobula may also be contributing to the behavior. So one of the things I'm doing going forward is imaging the activity of these cells to see how they might contribute as well.
Okay, finally, I'm going to pull back out from fly world and return to the question of color, motion, and isoluminance. I just want to walk you through how this mechanism could work for your visual system, be it another animal or an artificial machine vision system. So I'm going to walk through how color could increase the motion signal for this orange that we met at the beginning of the talk. To do that, we're going to imagine moving towards or away from the center of the orange — motion along these directions. How we actually calculate the motion, or implement many aspects of the model, doesn't really matter, because it's quite a simple mechanism at heart. But the way I've done it is that for every pixel in the image — there are 170 hexagonal pixels — I can calculate what the on contrast would be. So for this little hexagonal pixel in the middle, I can look at the intensity in its neighbors: if it's greater than the intensity in these neighbors, then it's on motion that you'd see when you moved in that direction; but if the intensity is less in these neighbors, then it's off motion. And as flies have four cardinal directions, we do four cardinal directions: left, up, right and down. So we can simply simulate the motion that we experience as we either go towards this orange or away from it, and then we can take all those 170 motion signals and plot them. Say you calculate the on motion using a luminance channel — here I'm just adding up the red, green and blue in the photo; it doesn't really matter exactly what the function is, as long as it's a luminance channel.
If you calculate the on and the off motion using the same luminance channel, then you're going to get very similar motion estimates whether you're going towards or away from the orange. But now, if you calculate the on motion using red, so to go back to the example, if I calculate the red intensity for these pixels, and if it's greater than the red intensity here, then I've got an on motion signal going this way, and we do this across the whole image. So if you calculate the on motion so that it's sensitive to red, and you calculate the off motion for blue, then when you go towards the orange, you're going to get a greater motion signal. And I think that's pretty obvious; it's not that complicated. Something with red in it is getting bigger, and the background with blue in it is getting smaller. And conversely, when you go away from the object, you're going to have a lower motion signal. And I think, again, that's pretty intuitive. As you go away from the object, the amount of red on that you've got is going to get less, and the amount of blue off you get is also going to be less. So what you're doing is setting up the mechanism to favor the detection of something that's a very common feature of the scene, through your behavior. So by moving towards it in a certain way, you're going to see something that's just there, and you're not going to see it in other situations. But I think not seeing an object as you fly away from it is probably perfectly acceptable for many, many animals. And you can flip it around: if you set, for instance, the on motion to be sensitive to blue and the off motion to red, then you would augment the motion signal as you went away from the object, at the expense of not seeing it so well as you approached it.
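The asymmetry just described can be checked with a toy simulation: a red disc on a blue background that grows (approach) or shrinks (retreat) between two frames. Here on motion is simplified to brightening and off motion to dimming in the chosen channels; the frames, channel weights, and names are made-up assumptions, not the model from the talk.

```python
import numpy as np

def frame(radius, size=15):
    """RGB image of a red disc (the object) on a blue-tinted background."""
    y, x = np.mgrid[:size, :size] - size // 2
    disc = x**2 + y**2 <= radius**2
    img = np.zeros((size, size, 3))
    img[..., 0] = np.where(disc, 1.0, 0.1)  # red: strong in the object
    img[..., 2] = np.where(disc, 0.2, 0.6)  # blue: strong in the background
    return img

def motion_signal(f0, f1, on_ch, off_ch):
    """Sum of brightening in the on channel and dimming in the off channel."""
    on = np.clip(on_ch(f1) - on_ch(f0), 0, None).sum()
    off = np.clip(off_ch(f0) - off_ch(f1), 0, None).sum()
    return on + off

lum = lambda f: f.sum(axis=-1)   # crude luminance: red + green + blue
red = lambda f: f[..., 0]
blue = lambda f: f[..., 2]

small, big = frame(3), frame(5)  # approach: the disc grows from r=3 to r=5

approach_rb = motion_signal(small, big, red, blue)   # red on, blue off
retreat_rb = motion_signal(big, small, red, blue)
approach_lum = motion_signal(small, big, lum, lum)   # luminance for both
retreat_lum = motion_signal(big, small, lum, lum)

print(approach_rb > retreat_rb)               # True: approach is boosted
print(np.isclose(approach_lum, retreat_lum))  # True: luminance is symmetric
```

With red on and blue off, the growing disc drives both channels and the shrinking one drives neither, while the single luminance channel gives the same total either way, which is the point of the mechanism.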
So whatever your animal, as long as it's got on and off processing, you can choose how to add wavelength information to an on or an off channel to enhance seeing the motion of that color of object. So if you're a moth and you want to see red flowers, or you're a zebrafish that wants to see a shadow against a UV background, you can just add an asymmetry to your on and off channels to facilitate that happening. And you're not tuning or creating a filter that's matched to a natural scene statistic. You're just adding a mechanism that allows your behavior to give you more in one situation and less in another. I hope that makes sense. Right. So to sum up, I hope I've shown you evidence that, in flies anyway, there are two luminance channels, one for off motion and one for on motion that's supplemented with UV. And in flies, this is set up to detect UV objects. But whatever your animal, whether a mouse or a zebrafish or, to be honest, a collision avoidance system for a car, you can do this too, just to augment your motion detection as you fly towards the object. Okay. I just want to thank an awful lot of people. I'm going to thank Michael for being just a very clever and lovely PI, and Ed for making just a huge amount of flies and being a great colleague, and Heather for her reagents, and Alyosha for making some wonderful driver lines and being a great collaborator, and Jerry for being very supportive. David Stern's lab gave me the Drosophila species, the other Drosophila species. That was wonderful. And Kristin Branson's group were really helpful for discussing the machine vision algorithms. And then the fly EM work just pulls in so many people.
It's hard to thank them all, but Greg Jeffries and Martin and the FFP community in FlyWire and at Janelia, and after Janelia, Davy, Kazanori, Lujanir and Ian; Ruchi led the team of tracers, and the guys in Berlin are awesome. It's really fun working with them, and it's a really great community to be in. There are lots of people I'd like to thank for the discussions and everything. I just want to thank them all. Okay. Any questions? Thank you. Thank you very much, Kit. Impressive amount and quality of both work and presentation. As the audience is still trying to fully grasp what you presented (for me, at least, some of these maps are quite complex to fully follow), I have already posted the Zoom room link in the chat. And the first question will be from me. I think I will start with a technical question. At some point you showed some data for the L1 and L2 neurons. For the L1, if I remember correctly, its response zeroed and then went up again, right? Yes. Yes. So do you think it's meaningful, or some peculiarity in the data? I think, I'm not sure. Here we go. So some of what's happening here is the off response. The stimulus is becoming brighter, and then we're resetting to a green screen. So what's happening here with the gray response is just the offset of the stimulus; that's just a genuine off response. But there is a tiny little response that I think is genuine. I don't feel qualified to really say, but there is a tiny little bump up here that feels like it's genuine. But I don't record from L1s often enough to have good information. Thank you very much. So the first question appearing in the chat is from Ana Vlasic. Really cool idea that the system would perhaps go against scene statistics to highlight relevant materials in the environment. Are there other examples of this in the fly, or even other organisms?
Yeah, I mean, most of the ones that come to mind, I'm not sure they're going against the statistics. So there are many examples where you've got private communication channels that you can see and you can prioritize. But what I was trying to emphasize here was that in addition to seeing things in the natural scenes, you can also just give yourself a neural mechanism that, when you have the right behavior, gives you an edge in certain situations. And it's not really about the information being out there; it's about how you're set up to behave well. And would having more color axes either improve or refine this synergy of color and motion vision? So, yeah, I don't know how specific it would be. I haven't thought it through properly in a trichromatic system, you know, sort of how specific it would be. What I do think is, I imagine that it could be quite specific to a particular band of wavelengths. But one thing you could do is set up parallel pathways that were the opposite of each other, so that you then complemented the situation. So then, for instance, in my stimuli, you could always see the green discs or the UV discs; you'd just have two pathways to do that. But you'd have to double up on all the neural hardware, and whether the cost of doubling up outweighs the benefit, I don't know. So I think you could make it very specific. What's mind-blowing to me is that the whole of the T4/T5 system might be geared up to be biased in this way. But if you had too many parallel systems, you could be quite wasteful with your resources. So how that all plays out in practice, I'm not quite sure. Thank you very much.
There are a lot of people both greeting you at the beginning of the talk and congratulating you at the end; as a reminder to the audience, we have access to the messages right now, even though they're not on public display, because Kit is focusing on the talk and I'm focusing on moderating and the technical parts. I will be posting the link once again, as I will be terminating the live transmission and live broadcasting any minute now. So if you would like to continue this or any other informal discussion with us, make sure to follow that link. Another question I have personally: when you mentioned the yellow and the blue columns and how they are distributed, do we expect any regional functional differences in their properties, or do we expect them to be uniform throughout? Yeah, so they're stochastically determined during development, and so the organization of them is random and stochastic. There are other insects where you do see a gradient of blue or UV receptors in the adult eye, and there is a slight bias in the distribution for some Drosophila species, but basically it's random. And so one problem with, for instance, looking at blue-green stimuli is that only about a third of these ommatidia are specialized to see blue. So if you wanted to have an object detector, your spatial resolution is really quite poor, because you're stochastically sampling, you know, so you'd have to view over quite a large area. A favorite game is to think about why it's an advantage to have the stochastic arrangement. Yeah, maybe Simon wants to offer some insights. As people are already joining us, I can once again repeat that I'm officially waiving my moderator rights, so if someone wants to ask a question, you can freely unmute yourselves and go ahead with it. Thank you. Thank you. Simon, I see you are unmuted. I'm unmuted, yes. I don't know. I have a control here.
Yeah, so I mean, I think what really interested me about this was that it looks as if the rules for using different spectral sensitivities as a source of signal are really quite specific for the detection of relatively small moving objects. I mean, the old idea was that an achromatic system is better because you don't get contrast interference between color and motion. And I think what this talk has highlighted is that that's true if you're integrating elementary motion over large fields. But the minute you begin looking at local things, maybe there are some very useful signals that you can get from your spectral channels. But I'm still puzzled: if this is all so useful, why doesn't the fly have a better complement of color channels, right? I mean, it's invested incredibly heavily in achromatic inputs, and it's bundled color and polarization into a relatively small number of small photoreceptors. Yeah. I think the dynamics are really important. So I think when the stimuli are new, then it's really useful, and then once they've been there for a few seconds, the adaptation is kicking in, and you're back to sort of just a single luminance channel again. Yeah. I think it's hard to know how to weight it, how to present it; it's quite new. I think it is quite new, and different from, clearly, the luminance channel. Yeah. I mean, that is the way to present it. You're quite right, because it is new, and it is a different way of thinking about the processing of inputs in the optic lobe. And getting back to what Ana said, I think it would be very interesting to know more about these spectral signals in nature. Yeah. And I don't know that there's really been a lot done on the distribution of information in UV channels, period, because, you know, you can't buy a camera that does it. Yeah. Yeah. And it's quite difficult.
Yeah, it's difficult to measure them. You know, like, yeah. You haven't made a camera that you can mount on the head of a fly yet, so that you can just capture its natural world. But they did that. Did you see the beetle, when they did that? I think I did, yes. Beetles are bigger, right? Rob de Ruyter did that. He had a fly mounted on his bicycle helmet, recording from H1 extracellularly, and he found that it was too dangerous to cycle fast enough to drive it. And that's why he did the fly-on-a-stick experiment. So it's actually very difficult for a human being to mimic a fly's natural movement. So you want to buy yourself a nice little drone. I really hate interrupting you, Simon, just for the sake of clarity and of closing the broadcast. There was one last question from Claudio Alonso, and I'll read it. Brief naive question: is UV isoluminance wired into the system, or somehow learned via experience? What happens if you grow flies in the dark? Thank you, Claudio. So I don't know that. We haven't tried; if they're grown in the dark, I don't know. But I do think that adaptation in the R7s is playing a big role, so that there's a temporal dynamics to the effects. And in that sense, it's not fixed; it's affected by prolonged exposure to UV. Thank you very much, Kit. And with that, I terminate the broadcasting and we continue here. So thanks, everyone, for joining us. And of course, thanks, Kit, for honoring us with your presence and your talk. Thank you.