 We are officially live. So hello everybody, and welcome to another session of our Sussex Vision Seminar Series, as always within the World Wide Neuro initiative. I'm George Cafetsis, a graduate from Thomas Euler's lab and currently a PhD student with Tom Baden. As your host for today, I would like to once again begin by thanking Tim Vogels and Panagiotis Bozelos for putting forward this ever-expanding initiative towards a greener and much more accessible seminar world. Having said that, allow me to get back to the reason we all gathered here today and introduce our guest from the University of Würzburg, Dr. Anna Stöckl. Following her bachelor's in biology at the University of Heidelberg and a master's in neuroscience at LMU Munich, Anna went to Lund University in Sweden for her PhD, where she worked with Eric Warrant, studying neural adaptations for dim-light vision in hawkmoths. Following a postdoc year in the lab of Petri Ala-Laurila at Aalto University in Finland, in 2018 Anna joined the lab of Keram Pfeiffer at the University of Würzburg as a senior scientist, and she has been located there ever since, studying visual processing and how it underlies natural behavior. She is the recipient of many awards and distinctions for her research and, equally admirably, for science communication, and she has worked with a plethora of organisms, including jellyfish together with Dan-Eric Nilsson, weakly electric fish, mice, and hawkmoths. It is with great pleasure that I leave the stage to her for her talk, "Dynamic spatial processing in insect vision". So without any further ado from my side, please all welcome Dr. Stöckl. Anna, the stage is officially all yours. Yeah, thank you so much, first of all to the organizers for giving me the opportunity to present in this wonderful seminar series, which I also watch myself with great joy. And thank you, George, for this kind introduction.
I'm really excited that you're all here, giving me the opportunity to share work, some completed and some still ongoing, from the past few years on dynamic spatial processing in insect vision. I'm sure you're all aware from personal experience, as visual end users, that the natural and artificial environments we operate in present highly dynamic visual input: there is rapidly changing contrast as we move through the world, in this example through a forest, which differs in its spatial frequencies and also in its probability of occurrence across the visual field. For example, you see more contrast structures moving in the ventral hemisphere than in the dorsal hemisphere in this example. A range of studies has shown that the light intensities occurring within one of these natural visual scenes span about one and a half to two log units. But if we follow these scenes across the day, we get light intensity changes of up to eight orders of magnitude between day and night. And that really brings our visual systems, and the visual systems of other animals, to the edge of their capacity for extracting the visual information necessary to control behavior. How visual systems extract this relevant information from the complex and dynamic input, and how they adjust to perform behaviors optimally under these different light regimes, is what our group is interested in. We study this from the input at the eyes, to processing in the brain, to the behavioral readouts. The model animals we currently use for these investigations are hawkmoths. This lepidopteran insect family includes both nocturnal and diurnal representatives, among them some of the absolute visual specialists for night vision, like the elephant hawkmoth, Deilephila elpenor, which was the first animal shown to have trichromatic color vision at night.
But there are also diurnal representatives, as I said, that are directly comparable in body and eye structure and in their behavior, but that live in a completely different light environment. Today I want to present some of our work from the past years on how hawkmoths process the spatial structure of the environment across a large range of light intensities, looking at several levels of this processing at once: the structure of their eyes, neural processing, and the behavioral output. I will highlight some of our projects. One looks into how hawkmoths exploit the spatial structure of a visual scene to control different types of behavioral function, and how this segregation of information from a visual scene might come about from eye structure and visual processing. I will also show you how eye structure constrains the use of visual information, in particular sensitivity and spatial acuity across the eye sizes of different individuals, and how neural processing serves to adjust the visual system to different light intensities and potentially to exploit different spatial information in a given visual scene, that is, to extract different spatial structures. I will start this broad overview of spatial processing and spatial behaviors in hawkmoths with a recent project in which we looked at how the spatial structure of visual scenes can be exploited to control different behavioral functions, in a flexible behavioral switch that hawkmoths perform in response to visual information in different parts of their visual field. To give you an intuition of what the question, or the research focus, actually is, let's return to this little video, where you see an agent in a visual scene.
In this case, it's me on my bicycle moving through an environment, and you see how the visual scene of the environment moves past the eyes of the agent, or their phone, and how this so-called optic flow can be used to obtain feedback about the agent's movement. We can gauge from it, for example, how fast we're moving, the straightness of our path, and the distance to nearby structures like the ground and the surrounding trees. You can also already anticipate that in this environment there are different types of structures in different parts of the visual field: there are many more contrast structures in the ventral hemisphere than in the dorsal hemisphere, particularly in these more open parts of the scene. To obtain a much more controlled measure of this optic flow and of the visual information in natural visual scenes, my PhD student Ronja Bigge set up a camera rig with which she could film short videos in different natural environments, calculate from these videos, for example, the translational optic flow present in these different visual scenes, and also extract contrast edges across the visual field. In this brief example you see one visual scene where we obtained the strongest magnitude of translational optic flow, as already shown very intuitively in the video, in the ventral hemisphere, whereas we see the strongest magnitude of contrast edges in the dorsal hemisphere. That is because this is where structures like foliage contrast against the bright sky. Ronja did these measurements across different types of natural environments: what we call open environments, where there are no larger trees or structures above the horizon; semi-open, where there are some bushes and trees; and closed, with complete coverage, as you would find in a forest or when moving through the undergrowth of bushes, for example.
In all of these different types of environments she collected this data, and what we can see in this overview of averaged optic flow diagrams and contrast edges is that, for all of these environments, the strongest magnitude of translational optic flow is found in the ventral hemisphere of the camera's field of view, and the strongest magnitude of contrast edges in the dorsal part of the visual field. So this is the starting point of our project: the distribution of information in natural environments. From this, we wondered how animals actually use these natural scene statistics to optimally inform their behavior. We investigated a particular type of behavior to answer this question, a flight control paradigm that is very well established in insect research. In this paradigm the animals, in our case hummingbird hawkmoths, fly through a tunnel that connects two flight cages, and we can show different types of visual information on the different sides of the tunnel. This is typically used to investigate how visual information, from which optic flow information is extracted, helps to control flight in these tunnels. We used this paradigm to show the animals contrast edges oriented across their flight direction, which generate strong translational optic flow, and also contrast edges that do not generate strong translational optic flow but provide directional information that the animals could follow, and we wanted to know how they respond to these two types of information.
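To make the hemisphere comparison concrete, here is a minimal sketch, not the authors' actual pipeline, of how one might compare the average magnitude of a per-pixel quantity such as translational optic flow or contrast-edge density between the dorsal and ventral halves of a camera's field of view. The function and variable names are mine, purely for illustration:

```python
import numpy as np

def hemisphere_means(field):
    """Split a per-pixel magnitude map (rows = elevation, top row =
    dorsal-most) into dorsal and ventral halves and return their means."""
    field = np.asarray(field, dtype=float)
    mid = field.shape[0] // 2
    dorsal = field[:mid].mean()    # upper half of the image
    ventral = field[mid:].mean()   # lower half of the image
    return dorsal, ventral

# Toy example: flow magnitude increasing towards the ground,
# mimicking strong ventral translational optic flow.
elev = np.linspace(1.0, 0.0, 100)                 # 1 at top, 0 at bottom
flow = np.tile((1.0 - elev)[:, None], (1, 200))   # stronger in lower rows
d, v = hemisphere_means(flow)
print(v > d)  # True: ventral mean exceeds dorsal mean
```

Averaging such maps over many scenes of one habitat type would then give the kind of habitat-level statistics described above.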
Here you see a summary of our results: flight tracks of hawkmoths that entered the tunnel on the right side and crossed to the left side. When we present translational optic flow cues on the sides of the tunnel or on the floor, they very much fly through the center of the tunnel in rather straight and controlled paths. A very different picture emerged when we showed exactly the same stimuli on the top of the tunnel. All of a sudden the animals, first of all, didn't want to fly through at all, but if we waited long enough some would, and when they did, they produced zigzagging flight paths that look very different from what we saw in the other conditions. We quantified this zigzagging as the amount of lateral movement in proportion to the forward movement parallel to the tunnel's length. There was a massive increase in the relative amount of lateral movement when we showed these gratings on the top of the tunnel, whereas when we showed them on the floor of the tunnel, the amount of lateral movement decreased, reflecting that the animals could use this translational optic flow to improve the straightness of their flight paths. Similarly, when we looked at the average flight speed of the animals, we observed a phenomenon that is well described especially in hymenopterans, but also in dipterans, flying through flight tunnels: the animals use translational optic flow information to control their flight speed. They slow down when they see these grating patterns on the side or on the floor of the tunnel, and they slow down massively when the patterns are shown on the roof of the tunnel.
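As a hedged illustration of the kind of quantification described, assuming a flight track sampled as (forward, lateral) positions, the ratio of total lateral to total forward path length gives a simple "zigzag" index; the names are mine, not from the study:

```python
def lateral_index(track):
    """Sum of absolute lateral displacements divided by the sum of
    absolute forward displacements along a flight track.
    track: list of (x_forward, y_lateral) positions."""
    fwd = sum(abs(track[i + 1][0] - track[i][0]) for i in range(len(track) - 1))
    lat = sum(abs(track[i + 1][1] - track[i][1]) for i in range(len(track) - 1))
    return lat / fwd

straight = [(i, 0.02 * i) for i in range(11)]                 # nearly straight path
zigzag = [(i, 0.5 if i % 2 else -0.5) for i in range(11)]     # oscillating path
print(lateral_index(straight) < lateral_index(zigzag))  # True
```

A track dominated by side-to-side corrections scores high on this index, while a straight, controlled crossing scores near zero.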
So in the structure of the flight tracks, in the average speed, and in the lateral movement of the animals, we see clear indications that when you show these optic flow patterns on the roof of the tunnel, the animals perform a very different type of behavior than when you show them on any of the other sides. We investigated this phenomenon further and wanted to understand what the animals are actually doing there. Our first observation was that the animals do not like to fly through the tunnel at all when we put these patterns on the roof. So we expanded on that by covering half of the roof with patterns, either perpendicular to the animals' flight direction, generating translational optic flow, or parallel to it, generating very little translational optic flow. In either case, the animals preferred to fly in the open part of the tunnel. You can see this in the analysis of the position at which the animals crossed the tunnel, which was always on the side where the roof was pattern-free. That is absolutely not the case when we put these half-patterns on the floor of the tunnel. So again, a very different type of behavior from what the animals do when we show exactly the same patterns on the ventral side. The final indicator of what the animals were actually using these dorsal patterns for came when we showed them a single stripe that entered the tunnel on one side, crossed over to the other side, and exited there. This stripe provides very little translational optic flow but very clear directional information. And you can already anticipate from the flight paths that the animals tended to follow the switch of the stripe on the roof of the tunnel.
We quantified this further by measuring the lateral position of the animals as they entered and as they exited, and taking the difference, what we call their cross position. You see that this differs from zero with the dorsal stripes, and its sign matches the direction in which the stripe switches sides in the tunnel, but there is no such effect when we show the same directional information on the ventral side. From this, we conclude that our hawkmoths use visual information very differently in different parts of the visual field. They use translational optic flow in the lateral visual field to control their distance to potential obstacles, and they use translational optic flow in the ventral visual field in particular to control their flight speed and path straightness. But when we show them similar cues in the dorsal part of the visual field, they use these as directional information to guide their flight direction and position. So it seems that we actually have two separate systems operating in the hawkmoths, using visual information from different parts of the visual field. We wondered whether these systems operate in parallel, and whether we could pitch them against each other. So we created a stimulus regime in which our switching stripe makes the animals fly towards one of the walls while, at the same time, we show translational optic flow on that wall, which the animals should want to avoid. So what do they do? Well, this is what they do, in a quick summary. From this overview of the flight tracks, it looks very much like they mainly avoid the wall with the translational optic flow.
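The cross-position measure can be sketched as follows; this is my minimal reconstruction of the described analysis, assuming lateral position is signed about the tunnel midline:

```python
def cross_position(track):
    """Difference between the lateral position at tunnel exit and at
    tunnel entry. A nonzero value means the animal drifted laterally
    across the tunnel; ~0 means no net lateral crossing.
    track: list of (x_forward, y_lateral) positions."""
    return track[-1][1] - track[0][1]

# Toy tracks (illustrative numbers only):
follow_stripe = [(0, -0.3), (1, -0.1), (2, 0.1), (3, 0.3)]    # crosses midline
hold_position = [(0, -0.3), (1, -0.3), (2, -0.3), (3, -0.3)]  # stays on one side
print(cross_position(follow_stripe))  # 0.6
print(cross_position(hold_position))  # 0.0
```

Comparing this index between stripe directions, and against a no-stripe control, is what distinguishes stripe-following from mere drift.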
But if we dig into the structure of the paths a little more and again calculate this cross-position index, checking the lateral position at which they fly in and out and taking the difference, we see that they cross laterally just as much in this paradigm as they do when there is no translational optic flow and only the switching stripe. At the same time, they avoid the wall with the translational optic flow to the same degree as when there is no switching stripe. So they are actually trying to do both things at once: following the direction of the stripe and avoiding the wall with the translational optic flow. This very much suggests that two systems operate in parallel. One drives the typical optic-flow-based flight control of insects, keeping a distance from strong translational optic flow cues and regulating the speed and straightness of paths, and the animals measure this information mainly in the ventral half of the visual field. The other is a directional flight control system that mainly uses information from the dorsal part of the visual field. The two operate in parallel and produce mixed responses when we put their stimuli in conflict. And the nice thing is that these two systems seem to very much reflect the statistics of natural visual environments, where most of the optic flow information is present in the ventral hemisphere and most of the strong contrast-edge information is present in the dorsal hemisphere. So this is how much we could gauge from the behavioral responses of the animals. But of course we wondered how this comes about, and whether there are mechanisms in either the structure of the eyes or in neural processing that would explain the separation of information into these two systems. So here is a very brief glimpse into the neural processing that could support this.
And this is very much thanks to the invitation for this talk, because when I was thinking about it, I thought, wait a minute, we might already have some information on that. I dug into data I had collected for a separate project, in which we measured responses from wide-field motion-sensitive neurons, neurons thought to be involved in the optic-flow-based flight control system of insects. We had recorded these neurons to ask how they respond at different light intensities, but we also collected many receptive fields, with the hawkmoths looking at a screen centered on the front of their eye. If we look very briefly at all the receptive fields I collected from motion neurons in this hummingbird hawkmoth, with the horizon indicated, that is, the border between the upper and lower hemisphere of the visual field, you see that the vast majority of receptive fields are centered in the lower half of the visual field. I could find very few exceptions that clearly lie in the upper half, or at least have contributions there. So one answer for how this segregation is potentially supported is that the wide-field motion neurons that extract optic flow might have their receptive fields concentrated in the ventral hemisphere, and so primarily extract information there. It remains an absolutely open question which pathways drive the dorsal directional responses. This is something I would love to discuss, but it is certainly not inconceivable that there are directional response systems in insects specialized on information from the dorsal hemisphere. For example, celestial orientation in insects certainly relies on pathways that are very much focused on this part of the visual world.
All right, so we don't know much yet, but this could be an indication of how these pathways segregate visual information. A second thing we are very interested in, for many reasons, is how the eyes of the hawkmoths, and their structure, actually help them extract information from different parts of the visual field. We, well, I say recently, but actually quite a long time ago now, had the chance to collaborate with Emily Baird to learn more about the hawkmoths' eyes, and we recently analyzed this data, I should say, thanks to Rebecca Grittner, who did a master's thesis in my lab. Emily provided us with micro-CT scans of hummingbird hawkmoth eyes, from which Rebecca could reconstruct the facets and, in some cases, also the retinas and the rhabdom distributions within them. The color code you see here essentially shows the facet diameters and the distances between the rhabdoms, giving you an indication of how densely the facets and rhabdoms are packed: the darker, the denser; the lighter, the more widely spaced, if you want. With these 3D reconstructions, we were hoping to reconstruct the visual fields of these eyes, to know where the animals are looking and what resolution they might have in different parts of their eyes, very much as has been done, including by Emily and her group, in other insects with this type of data. But we face one major difficulty with our data: these reconstructions of visual fields from 3D data have so far been performed in animals with apposition compound eyes, where the optical elements are tightly coupled to the receptive elements, so that one can take the optical axes of these elements and reconstruct the receptive fields.
Now, our hawkmoths have superposition compound eyes, where the optics are physically, and potentially optically, uncoupled from the receptor mosaic in the retina. Reconstructing the visual axes and visual fields of the animals would still work fine if we had spherical superposition compound eyes, where we could still align the optical elements with the visual units in the retina. But as you see in this overview, we have anything but spherical superposition compound eyes. These eyes are absolutely not spherical, the curvature of the cornea differs from the curvature of the retina, and the packing density of the corneal facets differs from the packing density of the photoreceptors in the retina. So basically, it is impossible to deduce from the anatomy alone what the visual projections in this eye actually are. That is sad at first sight, but it is actually really, really exciting, because it means we have an eye that holds many, many surprises in store. I just want to give a brief glimpse of one of these exciting features. More than 20 years ago now, Eric Warrant and colleagues described that these hummingbird hawkmoth eyes have acute zones, in particular in the front of the eye, with relatively high resolution, and they measured this with optical means. The interpretation was that the rhabdom packing in the front of the eye should be very dense, because the rhabdoms there span very narrow visual angles, which become wider in the lateral part of the eye. But we do not see this represented in the rhabdom packing at all in our 3D reconstructions, which suggests that the acute zone is actually a property of the optics of the eye: of how the facets project light onto the retina, the particular shape of the retina, and the particular packing of the two.
So to really gauge what part of the retina sees what in the outside world, we need very careful optical modeling, and I am putting this out here because I would be very excited if someone interested in this would like to talk to me or collaborate with us, because I think there is really a lot to find out about how this system actually works. For now, it means that we cannot say from the 3D shapes of these eyes what different parts of the retina, or of the facet mosaic, actually see in the real world. However, we can learn something else interesting from 3D-reconstructing these eyes: what constraints they pose on extracting both resolution and contrast from visual scenes, and in particular what constraints they pose in large and small animals. So what are the structural constraints on spatial acuity and sensitivity in large and small eyes? Very much like in the human population, the hawkmoth population contains larger and smaller individuals; we actually find quite a large range of sizes in hawkmoths, as in many other insects, and these come with larger and smaller eyes, though not in a one-to-one correlation. Here you see an example: on the right is the eye of a larger hawkmoth, colored in orange, compared to the eye of a smaller hawkmoth in purple. They are to scale, with about a 30% difference in body length. Generally, when an animal changes body size, and correspondingly eye size, there are different strategies for how to do that. Starting from our larger animal, if we go to the smaller animal and squeeze the eye into a smaller size, we could retain its resolution, that is, retain the number of optical and visual elements in the eye.
But that means smaller facets, so smaller apertures through which light can be collected, and thus less light collected per visual unit: we lose sensitivity if we want to keep resolution in a relatively smaller eye. Or we could keep the facet sizes, and thus our sensitivity, but then we can pack fewer units into the relatively smaller eye, and thus we lose resolution if we want to retain sensitivity. It is a very common trend across insects with apposition compound eyes to retain sensitivity at the cost of spatial resolution in smaller animals, so that smaller individuals still have relatively good sensitivity but comparably lower resolution than larger individuals of the same species. We wondered how this plays out in our insects with superposition compound eyes, which are naturally already more light-sensitive than their apposition relatives, and whether we would find similar trends. So we collected many different eye parameters from our reconstructions of animals across a wide range of body sizes; here you see again where the two hawkmoths I showed you sit in this range. We calculated different measures of eye performance, including the sensitivity of a single ommatidium. What I show you here is this sensitivity, calculated per visual unit, for the eyes we investigated across different body lengths, and I also calculated what the sensitivity would be if the eyes scaled isometrically, that is, if we took an eye and made every component smaller by the same factor. What you see is that in our eyes, sensitivity drops a lot less in smaller animals than it would if the eyes scaled isometrically. So these hawkmoths also seem to preserve sensitivity in small individuals. Conversely, the acceptance angle of a single ommatidium is actually larger.
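For intuition, the optical sensitivity of a single ommatidium is commonly approximated, following Land's sensitivity equation and ignoring the photoreceptor absorption term, as S ≈ (π/4)² · D² · Δρ², with aperture diameter D and acceptance angle Δρ. A hedged sketch of how sensitivity falls under isometric scaling, with hypothetical numbers rather than the measured values:

```python
import math

def ommatidial_sensitivity(D_um, rho_deg):
    """Optical sensitivity S = (pi/4)^2 * D^2 * rho^2 (Land's equation
    without the absorption term); D in micrometres, rho in radians."""
    rho = math.radians(rho_deg)
    return (math.pi / 4) ** 2 * D_um ** 2 * rho ** 2

# Illustrative values only, not measurements:
large = ommatidial_sensitivity(D_um=30.0, rho_deg=2.0)
# Isometric scaling by 0.7: the aperture shrinks while, in this simplified
# picture, the acceptance angle stays fixed, so S falls with D^2 (~0.49x).
small_iso = ommatidial_sensitivity(D_um=0.7 * 30.0, rho_deg=2.0)
print(round(small_iso / large, 2))  # 0.49
```

The observed eyes, by contrast, lose less sensitivity than this D² penalty by letting the acceptance angle widen in small individuals.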
So the spatial resolution is worse in small individuals compared to a medium-sized individual, and also worse than in an isometrically scaling eye. Which is all nice and fine, but what does that really tell us about how eyes scale in general, and why this particular scaling was selected in evolution? What we really wanted to know is why this type of scaling arose out of the vast range of possible scaling factors, including isometric scaling. So what we did is take these calculations and extract the variance in the sensitivity and resolution parameters across body lengths, for the scaling we observe, for isometric scaling, and for many other scaling parameters. And that is what you see here: different scaling parameters, and the variance in, in this case, sensitivity across animals of different body sizes. With the scaling we observe, there is relatively low variance in sensitivity from large to small animals. This is not the case for resolution: its variance is relatively larger than under isometric scaling, where there would be no variance at all. But looking at this, we thought: what about what is called the eye parameter, the product of facet diameter and interommatidial angle, which captures the investment in sensitivity versus resolution? What we see, and it was really quite surprising how spot-on our scaling fit in this comparison, is that the variance in the eye parameter, that is, in whether the animals invest more in sensitivity or in resolution, was very nearly the smallest for the scaling we actually observed in these hawkmoths. What we conclude from that is that the scaling in these hawkmoths is optimized to keep a balance between sensitivity and resolution investment, and to keep the eyes of large and small hawkmoths as similar as possible in the resolution and sensitivity output they send to the brain.
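The eye parameter p = D · Δφ (facet diameter times interommatidial angle, in radians) is the standard way to express this trade-off. A hedged sketch of the variance comparison, with made-up eye measurements purely for illustration: the "observed"-style small eye widens its interommatidial angle just enough to keep p nearly constant, while isometric scaling shrinks D at fixed Δφ and so lets p vary:

```python
import math
import statistics

def eye_parameter(D_um, dphi_deg):
    """Snyder's eye parameter p = D * delta-phi (radians), in micrometres."""
    return D_um * math.radians(dphi_deg)

# Hypothetical large and small individuals (illustrative values only):
observed = [eye_parameter(30.0, 2.0),   # large eye
            eye_parameter(24.0, 2.5)]   # small eye: coarser angle, same p
isometric = [eye_parameter(30.0, 2.0),  # large eye
             eye_parameter(24.0, 2.0)]  # small eye: same angle, smaller p

# The observed-style scaling minimizes variance in the eye parameter.
print(statistics.pvariance(observed) < statistics.pvariance(isometric))  # True
```

In the study itself this comparison ran over many scaling exponents, not just these two cases, but the logic is the same: the selected scaling is the one with near-minimal variance in p across body sizes.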
So a large and a small hawkmoth would send relatively similar information about their outside environment to the brain, and it wouldn't differ much between a very large and a very small individual. And the beauty of this balance between sensitivity and acuity across animals of different sizes is that the neural processing that takes over this information from the eyes can then be relatively similar, because the variance between animal sizes is minimized. So we can learn at least something from our 3D reconstructions about scaling, about how these eyes extract visual information, in particular in large and small animals, and about what information is then sent to the brain for further processing. In the next and final bigger project I want to present here, we were concerned with this neural processing, especially the spatial processing, taking place in the hawkmoth brain, in particular with respect to processing information across different light intensities and adapting the visual system to function optimally at these different intensities. So we are looking into dynamic spatial processing across light intensities, in particular in the lamina of these animals, as I will show you. But first, we need to remind ourselves of the challenges that vision faces when it operates in very dim environments. As I mentioned before, the light available to the eyes can change by up to eight orders of magnitude from day to night. And I like to give this example: if photons were raindrops, then the photoreceptors in our eyes would experience a slight drizzle of rain at night and, comparably, would be standing under the Niagara Falls during the day, in terms of their photon input.
And I really like this example because it gives a very good intuition for one of the main challenges of vision in very dim light: we can easily imagine the consequences of the stochastic arrival of raindrops, or photons, for the signal-to-noise ratio. Imagine counting raindrops, as a photoreceptor counts photons, within this orange circle. We will count a slightly different number of raindrops at every point in time, because the arrival of raindrops, very much like the arrival of photons, is governed by statistics whereby, if the overall number is very low, there is a large variance in the counts we make over time. That is to say, we have a very low signal-to-noise ratio, because we have a low signal and very large noise. We can improve this, intuitively, in our raindrop example, as everyone who has collected rainwater in their garden probably knows, by collecting the raindrops over a larger area. That automatically means collecting more raindrops, and the nice side effect, because of the statistics governing raindrop and photon arrival, is that we also get a lower relative variation in our count. So basically, we have a larger signal-to-noise ratio, because we have a larger signal and lower associated noise, thanks to integrating information over a larger area; in other words, by summing visual information, in this case our raindrops, in space. We also find this spatial summation implemented in the visual systems of both vertebrates and invertebrates. In invertebrates, or in particular in non-dipteran insects, it is thought to take place in the lamina, in a very particular type of lamina neuron.
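The raindrop intuition follows directly from Poisson statistics: for a mean count N, the standard deviation is √N, so the signal-to-noise ratio is N/√N = √N, which grows with the collecting area. A small simulation sketch, with illustrative parameters of my choosing:

```python
import math
import random
import statistics

random.seed(1)

def poisson_sample(lam):
    """Draw one Poisson(lam) sample (Knuth's multiplication algorithm)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def photon_snr(mean_count, trials=20000):
    """Empirical signal-to-noise ratio (mean/std) of photon counts."""
    counts = [poisson_sample(mean_count) for _ in range(trials)]
    return statistics.mean(counts) / statistics.stdev(counts)

# Summing over 16x the area collects 16x the photons and should give
# roughly 4x the SNR, since the SNR of a Poisson process is sqrt(N).
drizzle = photon_snr(4.0)   # expected ~ sqrt(4)  = 2
summed = photon_snr(64.0)   # expected ~ sqrt(64) = 8
print(summed > drizzle)  # True
```

The same √N argument underlies temporal summation too; summing in space or time is interchangeable from the photon-counting point of view, at the cost of spatial or temporal resolution.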
The lamina in insects is the first visual processing area. It takes information directly from the photoreceptors of the retina, which terminate in the lamina in a retinotopic arrangement of individual units, called cartridges, that each process, if you want, information from one pixel of the image, that is, from one ommatidium of the compound eye. The main relay neurons of the lamina, the lamina monopolar cells, pick up information from the photoreceptors with lateral processes. Because of this beautiful anatomical arrangement, it is easy to conceive that if these lamina cells had extended processes, they could essentially sum visual information in space, because they could integrate over neighboring processing units, or pixels. And indeed, it has been suggested for many, many decades that this is what lamina monopolar cells do in those insect species where we find these extended processes. There is also a strong trend, reflected in hawkmoths as well, that nocturnal insects have longer processes than diurnal ones, supporting this idea. We finally wanted to find out whether these neurons actually perform spatial summation physiologically, following these anatomical hints. So we set out to measure, in the responses of the neurons, the neural equivalent of an increasing sampling area: essentially, increasing receptive fields of these neurons as they start to perform spatial summation. For that, I recorded intracellularly from lamina monopolar cells and measured their receptive fields by sweeping a black bar through the visual fields of these cells, to which they respond, as you can see here. Then I lowered the light intensity and checked what happens. And if you do that, you see very much what you would expect if spatial summation takes place: their receptive fields increase in size.
And just to convince you that this is not a property of the eyes or the photoreceptors: I did the same recordings in photoreceptors, and they don't show a corresponding increase in their receptive fields. More importantly, the photoreceptors didn't actually respond at the lowest light intensity, at least not consistently to the sweeping bar; they mainly had single-photon responses, showing that these lamina cells not only had wider receptive fields but also higher sensitivity, which is very much the expected effect of the spatial summation they perform on the information they get from the photoreceptors. In a next step, we then wanted to know whether our narrative is actually correct and whether these lateral processes of the lamina cells could be responsible for integrating information laterally and therefore producing the spatial summation. To answer this, while I recorded from these lamina cells, I injected dye to establish the morphology of these neurons, and we could identify them from a previous classification we had made using Golgi staining. Indeed, all the lamina cells I recorded were what we then called type-two lamina cells, which have these distinct long processes that leave their visual unit, and short ones within their visual unit. And we used the distributions of processes from these studies to generate very simple spatial filters: these are really just the distributions of lateral processes, the long ones in the solid line and the short ones in the dashed line. I simply took these distributions from the anatomy and convolved them with the photoreceptor responses to produce an estimate of the spatial responses of the lamina cells if they were integrating over their long or their short processes. The result you see here for the bright light intensities: the estimate from integrating just the short processes fits the responses very well.
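The estimation step described here, convolving an anatomical filter with the photoreceptor response, can be sketched in one dimension. In this sketch both the photoreceptor receptive field and the two anatomical filters are idealized as Gaussians with made-up widths; the real filters came from the Golgi anatomy:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)   # visual angle (deg), hypothetical axis
dx = x[1] - x[0]

def gaussian(x, sigma):
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / (g.sum() * dx)     # normalize to unit area

# Hypothetical widths: photoreceptor receptive field, plus anatomical
# filters for the short and the long lateral processes.
photoreceptor = gaussian(x, 1.0)
short_filter  = gaussian(x, 0.5)
long_filter   = gaussian(x, 3.0)

# Predicted LMC receptive field = anatomical filter (*) photoreceptor RF.
rf_short = np.convolve(photoreceptor, short_filter, mode="same") * dx
rf_long  = np.convolve(photoreceptor, long_filter,  mode="same") * dx

def fwhm(profile):
    # Full width at half maximum of a receptive-field profile.
    half = profile.max() / 2
    above = x[profile >= half]
    return above[-1] - above[0]

print(f"FWHM, short-process estimate: {fwhm(rf_short):.2f} deg")
print(f"FWHM, long-process estimate:  {fwhm(rf_long):.2f} deg")

# Two Gaussians convolve to a Gaussian with sigma = sqrt(s1^2 + s2^2),
# so the long-process estimate is markedly wider, as in dim light.
```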
And in dim light, the estimate from integrating just the long processes again fits the responses of the lamina neurons very well, whereas integrating just the short processes underestimates the width of the responses. So what we can conclude is that, at least theoretically, the distribution of the lateral processes is very much suitable to explain this dynamic change in spatial processing from dim to bright light. The final factor we were very interested in is how this spatial summation in the lamina cells actually affects visual processing and visual tuning more generally in the visual system of the hawk moths. For that, it's important to know that at least some types of lamina cells have been shown to be the essential input to the motion vision processing we talked about before; remember the motion neurons from which I showed you recordings. So we would expect that if the neurons performing the spatial summation form the input to the motion processing, we should see the same shift in spatial responses in the motion neurons' responses, and thus see the effect of the spatial summation on other pathways in the visual system. So what we did: we recorded our lamina cells in a paradigm very similar to what we would use with motion neurons. We showed moving gratings of different spatial frequencies to extract which spatial frequencies they still respond to at the different light intensities, and compared this to the spatial response filters we extracted from motion neurons at different light intensities.
And just visually you see that this actually fits reasonably well, suggesting that the tuning of our lamina cells across light intensities explains the spatial tuning of the motion neurons, and thus showing, or at least strongly suggesting, that the lamina neurons improve sensitivity in the motion pathway and thereby also shift its spatial tuning to lower spatial frequencies in dim light. So what we could show is that the spatial tuning of lamina cells, and in particular the increase of spatial summation in dim light, helps to improve sensitivity in the visual system, including in the motion pathway, but at the cost of sacrificing spatial information as the light intensities drop. For the final minutes, I want to share ongoing work that has puzzled me since starting this project. It is essentially a consequence of what I have just shown you, and it relates to how lamina cells generally extract the spatial structure of the visual environment. What I've shown you is that we have good reason to believe that the lateral processes of the lamina cells can integrate and spatially sum visual information, and can explain the spatial tuning of these neurons. But I've only shown you this for one type of lamina monopolar cell. In our hawk moths, we thought back then that there were four types of lamina monopolar cells, and this is very similar for many other insect species, which have different types of lamina monopolar cells with lateral processes of different lengths that could potentially extract different types of spatial information about the world: some with more potential for spatial summation and thus lower spatial resolution, and some with less potential to sum and thus retaining higher resolution.
And for everyone who works in vertebrate vision, this is probably not a big surprise, that you might have different channels extracting different spatial information from the visual world, because you very much have this in the different types of retinal ganglion cells. But it has really puzzling consequences in these insects. One of them is that we know in insects that visual information is segregated into two channels in the lamina, in particular for motion vision, and that what lamina cells one and two, L1 and L2 in flies, extract is on and off contrast, which then feeds into the motion vision system. And it always puzzled me that if our neurons here, and this is very similar to the classification in other butterflies and bees, if these are lamina cells one and two and they extract on and off contrast, then they would do that with different spatial tunings at different light intensities and feed this into the motion pathway, potentially with different spatial resolution. It always puzzled me how that works, or what effects it produces. So I really couldn't stop thinking about whether and how the lamina monopolar cells actually provide this parallel spatial processing. To understand more about it, we were lucky enough to collaborate with Kentaro Arikawa in Japan, who provided us with beautiful serial block-face sections of our lamina, which allowed us to reconstruct the morphology of our lamina cell types from these sections and really dig into the cellular basis of the spatial tuning in the different lamina cell types. This is work again by Ronja, who with the help of fantastic Bachelor and Master students has reconstructed one cartridge in our lamina section, that is, the morphology of all of the lamina monopolar cells in this one cartridge, and of the photoreceptors.
And we found, just briefly mentioned here, seven photoreceptors that terminate in the lamina and two long ones that go on to the medulla. Very surprisingly, we didn't find four but five lamina monopolar cell types, which you see here together and which I then highlight for you individually. This is what they look like; some of them look familiar, some of them don't. I should just point out that the fifth lamina cell type you see here hasn't been finished in its reconstruction: the backbone is there, the branches are there, but the branches are most likely much more feathered out, like you see in cell type three. So this is still missing, but the general layout of the branches you can already see. And so, yes, we wanted to know more about parallel spatial channels, and thus we wanted to know more about the exact structure of the lamina cells. We found out many surprising things, including that our expectations for what cells we would find were, well, somewhat right and somewhat wrong, because we found an additional cell type that we didn't expect, and we also realized that some cell types are, well, doubled, if you want. If we very briefly want to make sense of this: what we thought was type two before, which I recorded from in this spatial summation project, seems to be represented twice, or a very similar morphology seems to be represented twice, in one cartridge. Then we have cell types we can make sense of that look very similar to what we had before, and then something we've never seen before, our very interesting fifth type. And just to compare how I think this maps onto the Drosophila neurons, because that's important for function and that will be my final point: what we currently have is that we think these two blue cells with the long and the short dendrites, corresponding to lamina cells one and two, map onto the Drosophila L1 and L2. And yeah, L3 looks very similar, and about the rest we are very unsure.
But that these two might be L1 and L2 is also supported by the fact that they have the largest axons in the cartridge, they share a membrane along the entire lamina, and we have very good evidence that they are actually joined by gap junctions, all features that we also find in Drosophila L1 and L2. That really now saves me a lot of sleepless nights, because it can explain how L1 and L2 could provide the on and the off contrast channels to motion vision, if that's the same in hawk moths, because these two cells have the potential for very similar spatial processing, which is great. What we are currently busy with is to understand, from the morphology of these neurons and from the synaptic connections they form to other cells, how these neurons, or how their synaptic connections, could support the dynamic spatial processing that we see in physiology. Very briefly, our working hypothesis, because we have just two bands of long processes, is that in the upper part of the lamina, where the information comes in from the photoreceptors, they are performing spatial summation, and that at the other end, where the information is passed on to the medulla, they are performing lateral inhibition. That would be a beautiful segregation of spatial processing into summation and lateral inhibition in the different parts of the lamina. And this all sounds amazing until you look at the lamina monopolar cells of the same type in nocturnal hawk moths, which don't have this segregation of processes at all, and back we are to the sleepless nights and to many, many interesting things to discover. To wrap this up and show you briefly what we want to do with this in the future: we want to use our knowledge of the synaptic connections of the lamina cells to generate a computational model that could explain the dynamic spatial summation and also the lateral inhibition that we see in the neurons' responses.
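The working hypothesis above, summation at the photoreceptor-facing band followed by lateral inhibition at the medulla-facing band, can be sketched as a two-stage spatial filter. All widths and weights below are illustrative assumptions, not measurements, and the model is a cartoon of the hypothesis rather than anything fitted to data:

```python
import numpy as np

x = np.linspace(-30, 30, 6001)   # visual angle (deg), hypothetical axis
dx = x[1] - x[0]

def gaussian(x, sigma):
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / (g.sum() * dx)     # unit-area kernel

def lmc_output(stimulus, sigma_sum, sigma_inh, w_inh):
    """Two-stage sketch: spatial summation over sigma_sum at the
    photoreceptor-facing band, then subtraction of a more widely
    pooled copy of that signal (lateral inhibition) at the
    medulla-facing band."""
    summed = np.convolve(stimulus, gaussian(x, sigma_sum), mode="same") * dx
    pooled = np.convolve(summed, gaussian(x, sigma_inh), mode="same") * dx
    return summed - w_inh * pooled

# Response to a bright edge: the summation stage smooths it, the
# inhibition stage re-sharpens it (overshoot/undershoot around the edge).
edge = (x > 0).astype(float)
out = lmc_output(edge, sigma_sum=1.0, sigma_inh=4.0, w_inh=0.8)

# With w_inh -> 0 the second stage vanishes and pure summation remains,
# resembling the dim-light regime; stronger inhibition makes the net
# spatial filter band-pass rather than low-pass.
```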
We want to reconstruct the lamina neurons in nocturnal hawk moths to see how this all compares, and we have already started recording physiological responses from the different lamina cell types to get closer to an answer to whether the different types actually support parallel spatial processing of visual information. And with this, I come to a brief overview of what I have shown you: how hawk moths extract information about the visual world, how they extract information spatially segregated in different parts of the visual field, how their eye scaling balances the constraints on sensitivity and acuity, and how lamina monopolar cells provide spatial processing for vision at different light intensities, and potentially for extracting different spatial components from natural scenes, with many open and exciting questions still to answer that we're currently working on. This brings me to thank team spatial processing, I should say, in my group, which has been working on generating and analyzing this data and putting it together, and my collaborators on this project. I would also like to thank team pattern vision in my group for being a wonderful, wonderful team of people who work with me in Wurzburg, and to thank the department and the funding. And I just have a very brief announcement before we stop, hooking onto the keyword of pattern vision: this great group in Wurzburg will come to an end and will soon, by the end of the summer, move to Constance to dig more into the topic of pattern vision in insects and how it is actually neurally implemented. It is still in the process of being organized, but you can watch Twitter for announcements of open positions.
And in this way, I also want to say: I know that this probably all comes very late for the students and scientists who are fleeing from war and oppression and are in dire need of support, and also of a scientific host, to continue at least some of their work in what normality is left. But I want to say that I will do whatever I can to use this project to help support anyone who needs it, so please feel free to contact me to discuss possibilities for support that I could provide. And with this, well, somber reminder of the current state of the world, I will end my talk and invite questions. Thank you very much, Anna, for this wonderful and holistic overview of two different stories, including reconstructions, electrophysiology and behavioral work. Before I leave the moderation in the very capable hands of Maxime, who is already in the room, I would like to remind our audience that they can click on the Zoom room link that I just posted, so they can follow us in this room, because at some point we will be terminating the live broadcast. And again, I would like to thank you, and before I leave the moderation to Maxime, judging from where Tom's questions are taking it, I would like to seamlessly ask a question first. With respect to the lamina neurons that you mentioned, the dendritic trees: do we know anything about whether their ratio of short and long dendrites changes with different light intensities, or throughout the day? You mean whether they are plastic across the day? Yes, dynamically. So what we do know, especially from work on flies, is that the axon diameters of the neurons change across day and night. We don't know anything about how the processes change; especially the long ones would be very interesting, because they're likely supporting spatial summation. It's certainly something we have on the list and would like to investigate, but we don't know that yet. Right, thank you very much for that.
People are already joining us in the Zoom room, and because you don't have the YouTube tab open, I would like to let you know that there are a lot of people congratulating you on this wonderful talk, so it's not just me. And yeah, Maxime, are you taking over, please? Yeah, thank you, George. You can go ahead. Sorry, George has to leave us because he has some teaching duties, so we'll see you soon. Thanks again, Anna. Like George said, if you want to ask your question yourself, you can do so either in the chat that is still on YouTube, or you can join us in the Zoom room and directly interact with Anna, who is with us today. I don't see that many people; I hope that's because everybody's at Cosyne, I really hope that. I have one question from, well, Tom Baden. Tom, you're already with us, do you want to ask your question yourself? Sorry for forcing it. You are muted. Now I feel ambushed. Hey, Anna, great talk, very exciting. So I've got lots of questions, but the one that's actually in the chat is: these monopolar cells that you're showing, you sort of hinted at the end that the branches don't just sit homogeneously distributed across the entire thing, right? You've got the stubby ones and you've got the long ones. And, well, A, the stubby ones, there are way more of them than the long ones, right? So that's presumably good for something. My question is: what's that good for? And I guess the other question is: what are they doing in the different layers? Yeah, yeah. Well, it's a good question.
So, everything I will say now is hypothesis, because we don't know for sure. But very likely the large number of these short processes helps, in particular in bright light when there is a lot of input from the photoreceptors, to keep a good dynamic range in the responses: not to have only a few processes that pick up information and easily saturate, but a large range of processes of which some can be saturated while others are still responsive, and which thus keep the dynamic range of the system. And again, our suggestion is that the long processes at the retinal side of the lamina are performing spatial summation, because it would make a lot more sense if information is first integrated, and then, once you have tabulated all the information that comes into the cell, you perform lateral inhibition and see which cells are very strongly active and inhibit each other. It would make less sense to do it the other way around, inhibiting each other when you have only just started to integrate information from the photoreceptors, and only summing after that. But that's really just a hypothesis; we have no evidence for it at the moment at all. We could get a first hint of whether it is correct if we look at neurotransmitter stainings in the lamina, which is really the next step, to see if we get evidence for inhibition in particular at this band that faces the medulla. So I guess I'm gonna follow that with the obvious question that I always ask: can you image them? Probably. I mean, technically, yes. We can't access them genetically, so we would have to backfill them in the medulla. And then the difficult thing would be to hit only the one type of LMC and not the other, to get reasonable answers. I'm sure this is technically possible.
It would be much easier if we had genetic access to them and could individually image one type. Thank you for that. Unfortunately, we don't have that many questions in the chat today, so I would encourage anybody in the room with us to come forward and speak. Michael, hello, how are you doing? Hello, can you hear me? Yep. Thank you, Anna, it was an amazing talk. I could ask you like 30 questions; these are just random ones. What is the prospect of going one step beyond the monopolar cells? I guess I have two questions. One is whether you can tell us anything about the diversity of the processing in the five LMC types from your prior recordings, maybe even from some fills; maybe the smaller ones are harder to fill, but you can tell that they're somehow different. So if you can tell us anything about that, and then maybe, what's the prospect for going just one synapse downstream? Yeah, so interestingly, or maybe not surprisingly at all, we're mainly recording these types one and two with the large axons; because they have the largest axons, I think they're easiest to record. And they seem to have very, very similar responses, so I can't tell them apart at all from just looking at the responses. Unfortunately, also, whenever you record one of them, you get two cells labeled, so they very much seem to be coupled. And they look very classical-LMC-like: very high-pass filtering in their responses, with very similar receptive fields. We have obtained, I think, one or two recordings from what we think is L3, but only with two light steps, not the spatial stimuli yet. And they seem to have slightly different responses, in particular much stronger hyperpolarization to light-on steps. But this is all just what we have recently started, I should say. So we really just started to try to get the other cell types and record as many LMCs as we can, look at the anatomy, and see what we get out.
And we've actually also started to use not just spatial stimuli, but stimuli adapted from Marion Silies' group, who have recently shown this difference in luminance and contrast coding in L2 and L3. I was hoping that if we show similar contrast and luminance stimuli, maybe we would also see a difference in our cells. And we have one or two recordings, which might be L3, which might show slightly similar responses, but this is still so early and hand-wavy that it could go either way; we could get two more recordings and all of this might not hold. So unfortunately the answer is, we really don't have data that shows very clear differences, because mainly we record from L1 and L2 at the moment. And could we go one step into the medulla? Do you know, is it known from anatomy, based on your fills or prior work, whether, like in Diptera, the axon terminals go to different layers of the medulla? They do. It's not fully worked out yet, and I think this will become very interesting as more and more people do it. So there has just been, or is about to be, published a classification of a butterfly LMC cartridge. And I think as we do more of these, we will realize that not all of the classifications we had are correct, but there are certainly terminals in different medulla layers. Whether they belong to what we now call L1 or L2 will probably change a bit, but it looks very much like we have, very similar to Drosophila, layers one, two and three in the medulla where they terminate, in almost all non-dipteran insects that I've seen; same for us. So what we call L1 and L2 terminate in layers one and two of the medulla. We don't know which one is which yet, because they're always co-stained, but yeah, it looks very similar to Drosophila. Okay, so, just to follow Tom's suggestion for imaging, I'm not sure if this is worth doing, but we did it years ago using genetic tools.
We did pan-neuronal labeling, and you can image just across medulla layers and you see stuff, right? I would say it's better than fMRI in people, right? But you still have this volume-labeling problem, so you only see the largest things. But you will clearly see on and off pathways, you will clearly see that some layers are a bit faster than others. I think anything about spatial processing is probably hopeless because of the spatial pooling of the imaging process itself, but things that are stratified functionally by layers do pop out, right? So that was actually... Yeah, you're right actually. Honestly, I didn't think about it; of course we could image the layers. So, yeah, you're right, it could be possible. Yeah, okay, thank you. Hi, that was a really, really nice talk. I was wondering, just very simply: you're talking about spatial integration, but have you seen anything temporally as well? Yes. I left this all out because I thought, okay, spatial is enough, it fills the hour. Yes, and I haven't put this all together very nicely, but we see this at different levels: very much like in flies, you already see that the photoreceptors become slower as it gets dimmer. And beyond that, if we record from the motion neurons, we also see that the temporal frequency tuning goes to lower temporal frequencies, beyond what the photoreceptors explain. So there must be another slowing process in there. And in the LMCs, have you checked? Very much, yeah, but it's not the LMCs. They still retain their high-pass responses; they become slower as the photoreceptor responses become slower, but from what I can tell there is no additional temporal integration step there.
So we always hypothesized that it might be in whatever performs the temporal correlation, generally in the response dynamics of the motion correlator in our model, or in the medulla neurons responsible for that. But we have no evidence; I mean, we haven't recorded them, I don't know. But it's not the lamina cells, very much not. Interesting, thank you. Yeah. Thank you for that. Just stay with us. I will just say to the audience that I'm going to close the YouTube stream. So if you want to join us and continue chatting about today's topic, or whatever in vision you're into right now, whatever you may like, please do join us; I'm going to close the stream now. And I will now finish my moderating duties. We are offline.