Hello, everyone, and welcome to another online seminar. For those joining our vision talk series, let me quickly remind you that we are part of World Wide Neuro, the COVID-inspired seminar hosting platform for neuroscience. You will find all links and relevant information in the video description, but I would encourage you to take a look at the upcoming talks or to subscribe to the mailing list for a weekly reminder of upcoming seminars. There is usually more than one talk a day, and most previous talks are accessible on podcast platforms. As I said, all our series can also be found on our own YouTube channel; of course, hit subscribe if you want any updates. Anyway, I will be your host today. My name is Maxime, and I am part of the Baden Lab at the University of Sussex. Today we are receiving Aristides Arrenberg from the University of Tübingen in Germany. He obtained his PhD at the University of California, San Francisco, where he studied zebrafish visual, oculomotor, and cardiovascular functions. He then moved to Germany and joined the University of Freiburg, where he investigated dopaminergic neurons involved in locomotion. Finally, he started his own lab at the University of Tübingen, where he and his team continue to work on vision and oculomotor circuits. Most of all, he studies the ecological adaptation of the zebrafish brain to its habitat and behaviors. So hello, Ari, how are you doing today?

Pretty good. Thank you very much for the invitation, and thanks for the opportunity to present our work on motion processing in the zebrafish brain to you. We are using the zebrafish as a model for the vertebrate visual system because its brain is transparent, so we have access to every single neuron in the brain. We work with them at a developmental stage where they're only about four millimeters long and already show a diverse set of visually mediated behaviors. And their brain is really tiny.
It only contains about 100,000 neurons, which we hope makes the brain more tractable and helps us understand what subcortical visual processing is about. Here you can see a circuit diagram of primate cortical visual areas by Felleman and Van Essen, and you can see how complex it is. Since the brain of the larval zebrafish is smaller and does not contain a visual cortex, we hope that it will be easier to understand its visual processing. Here you can see in yellow the ten arborization fields, the ten input regions in the mesencephalon and diencephalon that receive inputs from the retina, which have been identified and are currently studied by others and by us. Zebrafish show a diverse set of visually mediated behaviors, for example prey capture, optomotor responses, and optokinetic responses. These three responses will be important in the context of my talk, so let me introduce them to you. The optokinetic response is a gaze stabilization behavior in which the animals try to minimize the motion slip on the retina and thereby hold the eye position stable relative to the stimulus. It consists of slow phases, or following phases, alternating with quick phases, or saccades. Humans perform a similar behavior: when you sit, for example, in a train and watch the trees pass by outside, your eyes will also perform an optokinetic response. When you present a moving grating below the animal, zebrafish will show an optomotor swimming response, trying to follow the direction of the moving stripes, which you can quantify by looking at the tail beats over time. Mammals also show optomotor behavior, in particular optomotor head and neck movements. The third behavior that will be important in the context of my talk is prey capture, a foraging behavior in zebrafish where they feed on paramecia, which they hunt down. In the last stage, maybe you can see it here, they have the paramecium in binocular view and use both eyes.
Our laboratory is studying the sensorimotor transformations underlying these visually mediated behaviors. We are currently focusing mainly on the sensory brain areas that receive input from the retina, chiefly the optic tectum and the pretectum, which will also be the topic of my talk today. Furthermore, we work in the premotor hindbrain, where circuits exist that stabilize the eye position, which I won't have time to talk about today. Given that the zebrafish brain is so small, it should make rather efficient use of its brain resources to be able to survive. To make efficient use of brain resources, the neural encoding should depend on the statistics of the sensory environment of the animal, so on the natural habitats in which these animals developed over evolution. Furthermore, we hypothesize that the circuits should be task-specific: only information that is relevant for behavior needs to be encoded. The concept of sparseness posits that neural activity should also be minimized to save energy, by having fewer neurons active at the same time, and also by making the encoding in individual neurons independent of each other. An example of such efficient sensory encoding is found in the visual cortex of mammals, where the receptive fields of simple cells look for oriented features at particular positions in visual space. This has been shown experimentally, and one can show in neural network simulations that when you train a network under the assumption of sparseness and feed it natural images, you get units whose receptive field structures look very much like the receptive field structures you actually find in the experiments, arguing that the encoding of sensory information in the visual cortex is efficient and likely adapted to the natural environment.
Now, it is also known that there are visual field anisotropies across the retina. In humans, for example, there is a point of sharp sight, the so-called fovea, which has a high photoreceptor density and also a high retinal ganglion cell density. In wolves, a visual streak is present, a region of higher photoreceptor density surrounding the horizon, likely where the prey animals of the wolves appear. For zebrafish, a region of increased retinal ganglion cell density has also been identified, namely in the lower temporal part of the retina, the area temporalis ventralis, which is not a real fovea but a region of increased retinal ganglion cell density. These visual field anisotropies are also relevant to the behavior of animals. Here is the example of a fiddler crab, which shows an escape response to an approaching seagull, for example. When you present motion stimuli and measure where you need to stimulate to evoke escape responses, you find that the likelihood of the animal showing an escape response is much higher when the stimulus is presented above the horizon than when it is presented below the horizon, likely because over evolution the animal has learned that predators will not appear out of the sand from below. In related work, Ziad Hafed at the University of Tübingen here has shown that in the primate superior colliculus, the visual field is also represented in an anisotropic fashion: in the upper visual field you have many neurons with small receptive fields, and in the lower visual field you have neurons with larger receptive fields. These adaptations are likely due to the different needs for processing far versus near space in the upper and lower visual field in primates. So next to these visual field anisotropies, there are probably also specific processing channels that the brain forms to drive particular behaviors.
And those can be anisotropic across the retina or visual space as well. Here, for example, the W3 ganglion cell of the mouse retina is present mainly in the ventral retina, looking up into the sky to detect small moving items and avoid predators; these neurons do not respond so much to larger stimuli. In the larval zebrafish, we do know the locations of the retinorecipient brain areas (I've shown you these ten arborization fields), but we don't know very much yet about their functions and their differential processing of motion stimuli, although there are notable exceptions. From morphology studies by Robles et al., we know that different retinal ganglion cell types exist, classified according to their dendritic morphologies, and that they project to different brain areas, so you can see which brain area receives what type of retinal ganglion cell input. In our laboratory, we have recently studied the unequal visual field sampling for different behaviors in zebrafish, and I'll show you these results in a minute. We also investigated the representation of visual motion stimuli in different subcortical processing channels, namely the pretectum and the optic tectum. In the last part of my talk, I want to tell you about the integration of optic flow information across the two eyes, and how the animal manages to distinguish different directions of ego motion, so that in the long run we will hopefully get a very detailed understanding of the mechanisms underlying efficiency and robustness of motion processing. So let us start with the optokinetic response. We can place our animal inside a spherical arena. The animal is embedded in agarose, so it cannot swim away, and we remove the agarose surrounding those body parts that are still allowed to move, here, for example, surrounding the eyes. So the animal can follow the visual stimulus, and we can measure eye movements in this optokinetic response.
This is the slow phase of the optokinetic response, and we can then change the size of the stimulus. When I say size, I mean the size of the stimulus in steradians, how much of the visual field it covers. We can characterize how large a stimulus needs to be to drive the behavior, and where in the visual field we need to place the stimulus to drive the behavior most strongly; we can place the stimuli anywhere on the spherical surface surrounding the animal, in an equidistant way, to characterize where the drive is best. For the optokinetic response, we make use of a software and hardware solution called ZebEyeTrack that we developed in the laboratory and for which the source code is freely available. And we use a spherical stimulus arena consisting of 14,000 or so LEDs, which draws on much of the technology developed in the Drosophila vision field. We can then stimulate the entire, or almost entire, visual field of the larval zebrafish, which is huge. Here you can see these optokinetic responses, sinusoidal slow-phase modulations. We quantify these for stimuli located at different positions on the sphere, and we then map these positions onto a two-dimensional space to look at them more easily. So let me explain how this Mercator plot works, because you're going to look at these types of plots quite a bit. We have here the azimuth, running from minus 180 to plus 180 degrees, and here the elevation, running from minus 90 to plus 90 degrees. This position here at zero, zero corresponds to the nose of the animal; these positions here and here correspond to the tail of the animal; this position here to the north pole, or zenith, of the animal; and this position here to the south pole, or nadir. And for the optokinetic response, the color code here shows where the response was strongest.
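As an aside, the coordinate convention just described can be written down in a few lines of code. This is an illustrative sketch (the function names and the solid-angle weighting are my own additions, not the lab's published analysis code); the plot itself is an equirectangular projection with azimuth and elevation as linear axes:

```python
import numpy as np

def sphere_to_map(azimuth_deg, elevation_deg):
    """Map stimulus positions on the sphere around the fish onto the 2-D plot.

    Conventions from the talk:
      azimuth   -180..+180 deg (0 = nose, +/-180 = tail)
      elevation  -90..+90 deg (+90 = zenith/north pole, -90 = nadir/south pole)
    The mapping is equirectangular: x = azimuth, y = elevation.
    """
    az = np.asarray(azimuth_deg, dtype=float)
    el = np.asarray(elevation_deg, dtype=float)
    if np.any(np.abs(az) > 180) or np.any(np.abs(el) > 90):
        raise ValueError("position outside the coordinate range")
    return az, el

def solid_angle_weight(elevation_deg):
    """Relative sphere area represented by one map pixel.

    Equal map areas near the poles cover less of the sphere, so responses
    should be weighted by cos(elevation) when averaging over the map.
    """
    return np.cos(np.deg2rad(elevation_deg))
```

The cosine weighting matters whenever response strengths are averaged over such a map, because the flat plot over-represents the poles.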
And we see the response is strongest for stimuli that are located laterally relative to the animal, pretty much centered on the eyes. Now we were wondering whether the photoreceptor density itself dictates whether or not the optokinetic response is good. So we mapped the optokinetic drive in the visual field onto the photoreceptor density maps in visual space and found that the region of highest optokinetic drive, shown here in white, roughly coincides with the overall position of the eye but does not coincide very well with the region of highest photoreceptor density. We were also interested to know whether the visual acuity of the optokinetic response, the ability to resolve fine structures, is related to the photoreceptor spacing, so to the photoreceptor density, or whether this is not the case. We performed an experiment where we used stimuli of different spatial frequencies, presented them at different locations of the visual field, and obtained this spatial frequency tuning curve for the different visual field locations, which peaks at pretty much the same value of about 0.05 cycles per degree everywhere, arguing that the photoreceptor density is not the limiting factor for the visual acuity of the optokinetic response. In another experiment, we wanted to see whether the response is mostly driven by retinal factors or maybe also by other factors. So we inverted the animal; it is sitting upside down here, looking toward the bottom, and we asked the same question: where in the visual field do we need to stimulate to drive the optokinetic response? What we found is that when we invert the animal, it still prefers upper environmental positions, which now correspond to slightly ventral positions relative to the animal, something that cannot be explained by the retinal photoreceptor densities or the position of the retina.
So there appear to be some extraretinal effects that determine where in the visual field you need to stimulate to drive the optokinetic response best. For the optomotor response, we immobilized animals, let the tail move freely, and then quantified the tail beats of the animal. We made use of bilaterally symmetric stimuli, presented at the same time on the left and the right side, and then used different stimulus sizes. Here a whole-field stimulus was used, and in another part of the stimulus protocol we used smaller stimuli, down to very small stimuli where only two tiny patches move in the anterior direction to drive forward swimming. To our surprise, we found that the regions that drive the optomotor response best are very different from those that drive the optokinetic response best. They are located in the temporal-ventral visual space; there the response is strongest, and actually very small stimuli are sufficient to drive the optomotor response, arguing that it is maybe not so much a whole-field-driven behavior but is driven by local circuits, which I think is exciting because in the past it has been discussed as a whole-field gaze stabilization behavior. So yes, it is responsive to global optic flow and likely serves to stabilize the animal relative to the visual environment, but it only makes use of a small patch of the visual field for this. For prey capture, other authors, for example Bolton et al., have shown that prey capture is performed best when the paramecium or rotifer is in front of the animal, which is the result of the zebrafish swimming after it. But the zebrafish also approaches it a little bit from below, as you can see here; the prey is not directly in front but also a little bit above. This is the side view of the animal. In the dorsal view, you can see that the stimulus locations that drive prey capture best are in front of the animal.
And these regions in the upper nasal visual field correspond very well to the regions of highest photoreceptor density, shown here for all cones or just for the UV cones, which are thought to be heavily involved in processing prey stimuli. So in summary, for these three behaviors, optokinetic eye movements, optomotor swimming, and hunting behavior, we have shown that each has a different peak in the visual field where stimuli drive the response best. The hunting behavior is driven best by the region of visual space covered by the highest photoreceptor density, shown here in pink. The optokinetic response is driven best by stimuli that are centered on the retina and a little bit in the upper nasal visual space, whereas optomotor forward swimming is best driven by stimuli in the lower temporal visual field, a region where the photoreceptor density is already quite low, suggesting that a high level of retinal processing specificity might occur at this position. So now that we had shown these visual field anisotropies for three different behaviors, we wanted to relate them to the encoding in the brain. For that, we investigated the two major visual brain areas: the optic tectum, which is thought to be involved in prey capture, and the area pretectalis, the pretectum, which is thought to mediate optokinetic and optomotor responses. The area pretectalis has its analog in the mammalian accessory optic system, consisting of diencephalic and mesencephalic nuclei that process different directions of motion. The optic tectum has its homolog in the superior colliculus of mammals, which is not depicted here. And mammals have on top of that the LGN and visual cortex pathway, which is missing in fish. To look at the neural representations, we use two-photon calcium imaging with the calcium indicator GCaMP; you can see here responses over time to moving stimuli on the right.
Every little circle here is one neuron, and this image is about 250 micrometers across. With this setup we can record brain activity at the same time as behavior, and we can evoke behavior using our visual stimuli. The first experiment is a receptive field mapping experiment, where we recorded in the optic tectum and in the pretectum using stimuli that either covered the complete stimulus arena (the arena was presenting the stimulus only to the right eye in this case, not to both eyes) or moved only in smaller patches of the arena, down to very small stimuli of only 30 by 15 degrees in size. We then record the calcium traces. Two example neurons are shown here that respond during these gray motion stimulus phases. We find neurons like this one, shown in yellow, which responded to the whole-field stimulus but did not respond at all to any of the small-field stimuli, showing that this is a large-size receptive field neuron; it was found in the pretectum. In contrast, we can also find neurons like this one, which do not respond to the whole-field stimulus but respond to very small stimuli, and only at particular positions of the small stimuli. This neuron is then a small-size-selective receptive field neuron, and it was found in the tectum. We quantified a few thousand of these neurons and found that within the optic tectum, the small receptive field neurons prevail: more than 80% of the motion-selective neurons in the optic tectum are small receptive field neurons. In the pretectum, very many large-size receptive field neurons can be found. Now, when we look at the distribution in the brain, we recover the textbook topography for the optic tectum. Here is the rostral region, this is the optic tectum, and the neurons are color-coded here according to the azimuth position of their receptive field centers.
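The classification logic just described follows a simple rule, which can be sketched as below. The response threshold and the category labels are illustrative assumptions for the sketch, not the criteria actually used in the study:

```python
def classify_receptive_field(whole_field_resp, patch_resps, threshold=0.5):
    """Classify a motion-responsive neuron by receptive-field size.

    whole_field_resp: response amplitude to the whole-field stimulus
    patch_resps:      response amplitudes to the small patch stimuli
    threshold:        illustrative response criterion (an assumption)
    """
    responds_whole = whole_field_resp > threshold
    responds_patch = any(r > threshold for r in patch_resps)
    if responds_patch and not responds_whole:
        return "small-size-selective"       # e.g. most tectal motion neurons
    if responds_whole and not responds_patch:
        return "large-size (whole-field)"   # e.g. many pretectal neurons
    if responds_whole and responds_patch:
        return "responds to both"
    return "non-responsive"
```

For the small-size-selective neurons, the patch position with the strongest response then gives the receptive field center used in the maps below.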
Such a topographic map does not exist for the pretectum. However, we do see a certain type of map for the receptive field size distribution, which is color-coded here: the more caudal neurons tend to have larger receptive field sizes. Now, to link these neurophysiological results to the three behaviors I showed you before, we mapped the receptive field centers in these two brain areas to visual space. We found that in the optic tectum, the small-size receptive field centers are mainly located in the upper nasal visual field, whereas in the pretectum, the large-size receptive field centers are biased toward the lower visual field, which roughly matches the results for the optomotor responses. So let me summarize these first results. We have shown that in the optic tectum there is a bias for the upper nasal visual field, which matches the visual field bias of prey capture behavior; so prey capture and tectum match. In the pretectum, the neurons have larger receptive fields and are biased below the horizon, and the optomotor drive regions, which are strongest in the lower temporal visual field, roughly match the locations of these large-size receptive fields. For the optokinetic response, which is centered more centrally on the retina, the link is not so good here yet. So now I have shown you the different visual field anisotropies for three behaviors. We would also be interested in relating the processing in the brain to the sensory environments, to see how efficient the encoding is with regard to the environments in which these animals have evolved. For this, we have a collaboration with Tod Thiele, who is spearheading this project, and Emily Cooper and Scott Juntti, in which we recorded videos with a robot that carried a camera and moved underwater in Africa and India for two fish species, the zebrafish and an African cichlid, Astatotilapia burtoni. This work is ongoing.
If you're interested to know more about it, then please have a look at the presentation by Tod Thiele and Emily Cooper. In our laboratory work, we noticed that optical artifacts can pose a huge problem in aquatic vision experiments. They are due to the different refractive indices of water and air, where the electronic equipment is usually placed because it does not like water. We get physical effects like total internal reflection, light refraction, light dispersion, the water meniscus, or light absorption, which can all affect what the stimulus looks like. To quantify this more precisely, we developed a computer graphics simulation approach in Blender, where we present a visual stimulus pattern outside the animal, model the experimental water chamber in the computer, and place a camera inside to see what the stimulus looks like at the position where the fish eye would be. We have also developed, in practice, a water container that is spherical and made out of glass, which we think is much superior to many previously used water containers. So let me highlight one result here. Here you can see what the stimulus looks like from within this water container: it looks very clean and clear. And from within the petri dish, you see all these optical artifacts, which mostly result from total internal reflection at the water surface, but also at the petri dish bottom, at the interface between the plastic and the air. So we think that our spherical glass bulb arena is an improved setup. We also showed this experimentally, in an experiment where we measured direction selectivity. We have a stimulus that moves in eight different directions and we measure the neural responses; this is the response of a single neuron here. This neuron is direction selective. When we plot its responses in a polar plot, we see that this neuron prefers the direction of 90 degrees.
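The preferred direction read off from such a polar plot is commonly estimated with a circular vector sum over the tested directions. Here is a minimal sketch of that generic textbook computation (the talk does not specify the lab's exact analysis, so this is an assumption):

```python
import numpy as np

def preferred_direction(responses, directions_deg=None):
    """Estimate a neuron's preferred direction from its responses to motion
    in several directions using the circular vector sum, plus a direction
    selectivity index (vector length divided by summed response)."""
    responses = np.asarray(responses, dtype=float)
    if directions_deg is None:
        # evenly spaced directions, e.g. 8 directions 45 deg apart
        directions_deg = np.arange(0, 360, 360 / len(responses))
    theta = np.deg2rad(np.asarray(directions_deg, dtype=float))
    # sum the responses as vectors in the complex plane
    vec = np.sum(responses * np.exp(1j * theta))
    pref_deg = np.rad2deg(np.angle(vec)) % 360
    dsi = np.abs(vec) / np.sum(responses)
    return float(pref_deg), float(dsi)
```

A neuron responding mostly to upward (90 degree) motion thus yields a preferred direction near 90 and a selectivity index close to one.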
We can then measure hundreds of neurons in the optic tectum, both in the setting where we use a petri dish lid as a container and with the glass bulb, to see which preferred directions prevail. We find that when we use the good glass bulb, four preferred directions are represented in the optic tectum. However, when we use the petri dish lid, we only find the two horizontal directions, not the vertical directions; they are missing. So why is that? We think we can fully explain this by the total internal reflection that happens with the petri dish lid, because these reflections lead to an inversion of the vertical stimulus motion. You can see these arrows point in both directions and the letter L is inverted, whereas for horizontal motion the stimulus is reflected, yes, but it does not change its direction; the small leg of the letter L is still pointing to the right. So only the vertical direction is affected, not the horizontal direction. With these direction selectivity experiments, we have found that both in the pretectum and in the tectum, the same four directions are represented, which roughly correspond to the horizontal and the vertical directions. Now, how can the animal make use of these different directions that it has encoded in its brain to judge ego motion? This is not a trivial task, because the animal needs to decide whether to move its eyes or to swim. When the stimulus is rotating, it can best stabilize the stimulus by just rotating its eyes, for which the pretectum needs to activate, via different processing steps, the abducens nucleus and the oculomotor nucleus to drive the extraocular eye muscles. For the optomotor response, in contrast, the pretectum needs to relay the information to the reticulospinal system, to the nMLF (nucleus of the medial longitudinal fasciculus) and to the reticulospinal cells in the hindbrain, which then drive the muscles in the tail to produce swimming.
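Coming back briefly to the reflection artifact above: it can be made quantitative with a standard Snell's-law calculation (basic optics, not from the talk). Rays travelling from water toward the surface at more than the critical angle from the normal are mirrored back into the water, and it is this mirroring that inverts vertical stimulus motion seen through a flat dish:

```python
import math

def critical_angle_deg(n_inside, n_outside):
    """Critical angle for total internal reflection when light travels
    from a denser medium (n_inside) toward a rarer one (n_outside).
    Returns None when no total internal reflection is possible."""
    if n_inside <= n_outside:
        return None
    return math.degrees(math.asin(n_outside / n_inside))

# Water (n ~ 1.33) to air (n = 1.00): rays hitting the surface more
# obliquely than roughly 49 degrees from the normal are reflected back
# into the water, mirroring the stimulus for the fish below.
theta_c = critical_angle_deg(1.33, 1.00)
```

With the spherical glass bulb, rays from the fish meet the glass close to perpendicular, staying well below this critical angle, which is why the artifact disappears.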
This is shown here mainly for optomotor forward swimming in response to a forward-moving stimulus. So how can the animal distinguish between these two behaviors, or make the decision between them? Given the optic flow that is present, it likely needs to decompose the rotational components from the translational components for this task. Our laboratory has shown in the past that neurons exist that are selective for translation in the horizontal direction, and more recently also translation-selective neurons encoding vertical directions, and we also found rotation-selective neurons. So we have come to the conclusion that the most likely common circuit motif is this: direction-selective cells in the retina project to the other brain half, because the optic chiasm is completely crossed, and activate monocular direction-selective neurons in the pretectum, shown here in green. Then, already within the pretectum, binocular selective neurons exist that receive input from two such neurons, giving them connections to both eyes and making them binocular. Such a neuron can then obtain a translation selectivity or a rotation selectivity. In our data sets, we find every single one of the six possible degrees of freedom for rotation and translation to be encoded; we found neurons for each of them. These results are in contrast to recent findings by the Naumann group, by Naumann et al., who claim that selectivity for forward-moving stimuli is not present in the pretectum, and that selectivity for sideward-moving stimuli is also not present there. So in their model, a processing step that we would claim is visual processing, namely the comparison between the two eyes to decide whether a stimulus is moving forward or sideward, is not happening in the pretectum; instead, much more of the brain, the early and late anterior hindbrain, is needed for this.
Whereas we think that the visual processing is occurring within the canonical visual pathways and likely no further processing is needed, so these neurons could, at least in principle, directly inform optomotor and optokinetic behavior. So now that we have these binocular selectivity data, we wondered what the receptive fields of these neurons actually look like. There is beautiful data from fly research showing that receptive fields such as these exist in the tangential cells of the lobula plate in flies, where a neuron, with its local optic flow direction selectivities, is really selective to the type of global flow that occurs during a particular rotation direction. Here we have rotation about the body axis, this is roll, and this is the resulting global flow; this neuron then directly forms a template for this type of motion and can be used to judge rotation versus translation. These beautiful experiments were done using electrophysiology, which is difficult to do in zebrafish. So to do something similar in zebrafish, we first had to develop a new stimulus protocol that works for calcium imaging, which is slower but with which you can record hundreds of neurons at the same time. And we succeeded in doing that. We call this stimulus protocol the contiguous motion noise stimulus, in which the motion at any given pixel position is locally correlated in space and time. We can then record the resulting brain activity and perform reverse correlation to ask: what did the stimulus look like when the neuron fired? This allows us to estimate the receptive fields of the neurons. On the following slide, I show you four example neurons and their estimated receptive fields. These two neurons are unimodal and these two neurons are bimodal. The unimodal neurons have a single patch in the receptive field where they prefer a certain range of directions, and the bimodal neurons have two such modes.
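The reverse-correlation step just mentioned can be sketched as a motion-triggered average: collect the motion-noise frames that preceded high activity and average them. This is a deliberately simplified sketch (the array shapes, the threshold, and the single-frame lag are my assumptions; a real calcium-imaging analysis would deconvolve or model the slow indicator kernel):

```python
import numpy as np

def motion_triggered_average(stim_flow, activity, lag=1, threshold=1.0):
    """Estimate a receptive field as the average of the motion-noise
    stimulus preceding high-activity frames of one neuron.

    stim_flow: array (T, H, W, 2) of local motion vectors per frame
    activity:  array (T,) of calcium activity (e.g. dF/F)
    lag:       frames between stimulus and response (assumption)
    """
    stim_flow = np.asarray(stim_flow, dtype=float)
    activity = np.asarray(activity, dtype=float)
    # indices i such that activity at frame i+lag exceeded threshold;
    # i indexes the stimulus frame that preceded the response
    events = np.where(activity[lag:] > threshold)[0]
    if len(events) == 0:
        return np.zeros(stim_flow.shape[1:])
    return stim_flow[events].mean(axis=0)
```

The result is one average motion vector per stimulus location, which is exactly the kind of local-preferred-direction map shown for the example neurons.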
In this case, for example, it is a translation neuron that prefers translation with this point here as the point of expansion. When we look at the unimodal neurons, of which you find very many, this is the receptive field center distribution in visual space; you recover the same directional tuning for the preferred directions. So these neurons are maybe not the most interesting ones, but the bimodal ones are really interesting. We find them mainly in the pretectum. For those, we can look at the mode distance, the distance between the two modes of the same neuron. For example, this neuron here has one mode here and its other mode here, connected by the line; every pair here is one neuron. The mode distance is not uniform. With a clustering algorithm, and according to whether the neuron is monocular (both modes contained within the visual field of one eye) or binocular, the bimodal neurons can be sorted into three groups. These neurons here are the monocular neurons, where both modes lie in the same eye, and these two groups are binocular neurons, either with a smaller or with a larger mode distance. We can also ask how these different modes combine the directions. Here we sort the bimodal neurons according to the direction selectivity of their right mode, so what the right eye likes to see. And we see that when the stimulus goes to the left in the right eye and to the right in the left eye, this makes the neuron translation selective. When we summarize these data in these polar plots, we find that for vertical motion, neurons exist that are selective to translation and also neurons that are selective to rotation, whereas for horizontal motion we mainly find neurons selective for translation, not so much for rotation; so neurons that like forward motion or backward motion, but not so much clockwise or counterclockwise rotation.
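The binocular combination rule described here, opposite world directions in the two eyes for translation and the same direction for rotation, reduces to a one-line classifier for horizontal motion. A sketch with hypothetical direction labels (the labels and function name are illustrative, not from the study):

```python
def binocular_selectivity(left_eye_dir, right_eye_dir):
    """Ego-motion selectivity implied by a binocular neuron's two monocular
    preferred directions, simplified to horizontal motion.

    Directions are given in world coordinates as 'leftward' / 'rightward':
      opposite directions in the two eyes -> translation (forward/backward)
      same direction in both eyes         -> rotation (yaw)
    """
    if left_eye_dir == right_eye_dir:
        return "rotation-selective"
    return "translation-selective"
```

For example, leftward motion in the right eye combined with rightward motion in the left eye is the flow pattern produced by forward translation.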
To really understand how these neurons are capable of helping the animal judge a particular ego motion direction, we can now relate the receptive fields that we measured to the closest translational and rotational flow fields, asking how similar each receptive field is to one of these global flow fields. In this example here, you can see a neuron with its receptive field in red, and behind it, in gray, its closest global optic flow field. You can see that this neuron apparently likes translation, whereas this neuron down here likes rotation about this point. We find that the receptive fields generally match one of these global flow fields very well, and we are currently investigating how well this really helps the animal to make the distinction. One thing we are quantifying in this ongoing work is the similarity of the receptive fields to rotation and to translation. We find a lot of neurons, these gray ones here, which can be modeled by either rotation or translation, and we find other neurons, these ones here, which are translation selective: there is no global flow field for rotation that would match the receptive fields of these neurons well. And there are other neurons that seem to be more rotation sensitive, where we cannot find a translational global flow field that matches the neuron very well. So in summary of the second and last part of the talk, I have shown you that in the brain, in the pretectum and optic tectum, there is a representation of four orthogonally arranged preferred directions. I have shown you this new contiguous motion noise stimulus that we use to precisely map the flow fields of the neurons and to find out how they really process rotation and translation and help the animal to make a decision, also in mixed environments where rotation and translation are combined. And I have shown you evidence for binocular selective neurons and how they could in principle inform optomotor and optokinetic behavior.
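The template-matching idea, comparing a measured receptive field against candidate global flow fields, can be sketched as a mean cosine similarity between local preferred-direction vectors evaluated at the same sample points. The metric is my illustrative choice, not necessarily the one used in the ongoing analysis:

```python
import numpy as np

def flow_similarity(rf_dirs, template_dirs):
    """Mean cosine similarity between a receptive field's local
    preferred-direction vectors (N, 2) and a candidate global flow
    field sampled at the same N positions. 1 = identical directions,
    0 = orthogonal on average, -1 = opposed."""
    rf = np.asarray(rf_dirs, dtype=float)
    tp = np.asarray(template_dirs, dtype=float)
    rf_n = rf / np.linalg.norm(rf, axis=1, keepdims=True)
    tp_n = tp / np.linalg.norm(tp, axis=1, keepdims=True)
    return float(np.mean(np.sum(rf_n * tp_n, axis=1)))

def best_matching_template(rf_dirs, templates):
    """Pick the translational/rotational flow template most similar
    to the measured receptive field; templates is {name: (N, 2) array}."""
    scores = {name: flow_similarity(rf_dirs, t) for name, t in templates.items()}
    return max(scores, key=scores.get), scores
```

A neuron whose two modes move in opposite world directions would score high against a translation template and low against a rotation template at those positions.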
Of course, it still needs to be shown which pathway is actually used in the fish for driving optomotor and optokinetic responses; that is still unknown. So with that, I would like to thank you for your attention, thank our sponsors, and thank the PhD students and scientists in the laboratory who performed the work I presented today: Kun Wang, Yue Zhang, Florian Dehmelt, Rebecca Meyer, Julian Hinz and Tim Platnik. So yeah, thanks again. I'm open to any questions you might have.

Thanks a lot, Ari, that was very impressive. I will ask everybody in the audience to join us in the room if you want to ask our guest a question yourself, or if you just want to join us to keep talking about this topic. So Ari, I have a question in the chat for you, from Tom Baden, who is asking: how do you think we can link the four direction-selectivity axes separated by 90 degrees that you see in the spherical arena to the three directions separated by 120 degrees found previously in the retina?

Yeah, that's a good question. So in the retina we have three directions encoded; in the tectum and pretectum we have four directions encoded, and it's really a question why the animals do that. I mean, there is a publication from the Martin Meyer lab that shows at least how, in principle, you could obtain these four response directions by neurons in the tectum that combine retinal inputs, establishing these new response types. So that is at least an explanation of how you can get these responses, but it is still unclear how the animal actually makes use of it, so why it is better to have one representation in the retina and the other representation in the brain. And yeah, right now I don't have a very good explanation for it.
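The combination idea raised in this answer can be illustrated with a deliberately simplified linear model: three signed (push-pull) cosine channels with preferred directions 120 degrees apart form a basis from which a downstream neuron can build any preferred direction, including 90-degree-spaced ones. This is only a toy sketch, not the proposed tectal circuit, which involves rectified inputs and inhibition; the weights below are assumptions chosen for the example.

```python
import numpy as np

# Preferred directions of the three retinal channels, 120 degrees apart.
prefs = np.radians([0.0, 120.0, 240.0])
theta = np.radians(np.arange(0.0, 360.0))  # probe motion directions, 1-deg steps

# Signed (push-pull) cosine tuning curves, one per retinal channel.
channels = np.array([np.cos(theta - p) for p in prefs])

# Weights chosen so the weighted vector sum of the preferred directions
# points at 90 degrees: 0.577*(1, 0) + 1.155*(cos 120, sin 120) ~ (0, 1).
w = np.array([0.577, 1.155, 0.0])
combined = w @ channels

pref_deg = float(np.degrees(theta[np.argmax(combined)]))
print(pref_deg)  # ~90: a new preferred direction built from 120-deg-spaced inputs
```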
I mean, for the vertical directions it has been proposed that the rotation axes that are encoded match the axes of the semicircular canals in the vestibular system, and also the extraocular eye muscles, to make the coding more efficient. So it's maybe something you need at the level of the tectum and pretectum, but maybe not so much yet at the level of the retina. Yeah, but it's still an open question, a very interesting one.

Very interesting indeed. Sorry, a quick personal question: would you consider spectral stimulation for your motion stimuli? For example, the relationship between UV cone densities and the distribution of motion-sensitive photoreceptors?

Yeah, so far we have not been able to investigate that, because our stimulus arena is monochromatic. It looks green to us and activates mainly the red photoreceptors of the fish. That is something we would also be interested in for the future, in an effort to understand how color processing and motion processing are disentangled, to what extent they go together and to what extent they are different. I mean, it has been shown that the optomotor response is more strongly driven by red and green stimuli, not so much by blue stimuli; for UV stimuli, the data is still lacking. And it would be interesting to find out what exactly these dependencies are, something I can only speculate about right now.

So, do we have more questions? While I'm looking, can I just ask you: what is the maximum resolution you can achieve with this stimulator, in your spherical arena?

Yeah, so as I said, it's about 14,000, a little more than 14,000 LEDs. Along the horizon, the azimuth, where the spacing of the LEDs is optimal, we obtain about one degree per LED, which is about the resolution limit of the retina. And in other parts, it's maybe a little bit worse.
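As a rough consistency check on these numbers: if the roughly 14,000 LEDs were spread uniformly over the full sphere, each would cover about 4*pi/N steradians, giving a mean angular pitch of about sqrt(4*pi/N). The round LED count below is taken from the talk; the exact count is not stated.

```python
import math

n_leds = 14000  # "a little more than 14,000" -- round figure from the talk
# Mean angular pitch for n_leds spread uniformly over the full sphere:
pitch_deg = math.degrees(math.sqrt(4.0 * math.pi / n_leds))
print(round(pitch_deg, 2))  # ~1.7 deg on average
```

That average sits between the quoted ~1 degree per LED along the densely packed horizon and the coarser spacing elsewhere, so the figures are mutually consistent.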
So yeah, it's on the order of one LED per photoreceptor. If you want to look at prey capture behavior or so, then it's maybe better to use an arena with a still somewhat higher resolution, but right now we are mainly interested in global flow processing for the optokinetic and optomotor responses, where it's not so important to have a very highly spatially resolved stimulus.

Thanks. I have a follow-up question from Tom Baden, who is asking: how are these DS cells linked to the optic flow response you showed?

How are they linked to the optic flow response? So, it is known that the DS cells in the pretectum are responsible for driving optokinetic responses. I mean, that is data also from other species, rabbit and other mammals, which show that this region is sufficient and necessary for driving optokinetic responses. And also Fumi Kubo has performed optogenetics experiments in zebrafish showing that you can activate the pretectum and drive the optokinetic response. For the optomotor response this is not as well established; it is also thought to be mediated by the pretectum, but the direct evidence is maybe not as good yet. So, arguing that the direction-selective neurons in the pretectum are responsible for mediating these behaviors, that would mean that the responses we see in the tectum have a mostly modulatory function. I mean, there is data showing that if you ablate the optic tectum, the optokinetic response is not completely abolished. So that would be my take on it, that there is a modulation, but we don't know very well. Also, within the pretectum there are several pretectal areas, and we don't know which ones are really driving the behavior, so I think for that we need to do single-cell ablations and similar things to see what each neuron really does.

Thank you for that. I have a comment in the chat. Hi, great talk.
A neuron can have a large receptive field but still be selective for small-size stimuli. Maybe I missed it, but did you find such neurons?

Yes, we found such neurons, which have a large receptive field but really prefer small stimuli, in part of the pretectum, but they were a little bit difficult to grasp in terms of reproducibility. So in some recordings we found many of them, and they would be really exciting, because they might be neurons that alert the animal: oh look, there are prey items swimming around; that would be really cool. But in other animals we did not find so many of those neurons, so for that reason I did not include that data today; we still need to do further work to really understand better what these neurons are doing. The other way around is very clear: neurons that have a small excitatory receptive field but are inhibited by large-field stimuli. I showed you one small-size neuron in my talk that I called a small-size-selective neuron, because it is suppressed by stimuli that are larger. And 80% of the motion-sensitive neurons in the optic tectum that are small-size responsive are also small-size selective, meaning this is a general concept: the tectum will not get activated very much by global flow stimuli that move everywhere.

I have a comment from Michael Reiser, but I see he is here with us in the room. Maybe, Michael, you want to ask the question yourself?

Sure, can you hear me? Yes. Okay, great. I actually have two questions. It was a really great talk, very thought provoking.
So one is about the optic flow structure, the fields you find: they seem primarily to be centered close to the equator, if I understood correctly, and I wasn't sure if that was more about that particular stimulus and that visual display system, or if you think this is fundamental for the neurons; that is one question. And the follow-up question is about speed tuning: when you do all this mapping, are you following through something like temporal frequency or speed tuning, from the direction-selective neurons through the flow fields to the behavior? Because in flies we have big mismatches.

For the first question: we don't think, as of yet, that this is necessarily what the animal only cares about, that they are all oriented along the horizon, although we think it would also make sense. But we have to be cautious there, because, and I may have glossed over this in my talk, for the receptive field mapping that we perform we still use the cylindrical arena, which only covers the visual space from minus 40 to plus 40 degrees elevation; we don't have our full-globe setup present yet. And we would like to repeat the experiments with the globe setup to really see whether this arrangement, where the bimodal neurons are mainly located along the horizon, really holds or not. But it looks from the data like, yes, they care more about translation and rotation information taken from those regions of space. But yeah, that's something we need to look into in more detail. For the other question, the speed tuning: a very interesting question, and one that is difficult to answer with our current data, unfortunately. We do have a stimulus that has a certain speed statistic, and when we draw the stimulus arrow, the size of the arrow, it looks like maybe it's representing speed, but it's not really speed; it's more like how much of the stimuli that went in this direction were present when the neuron fired. So it's not really only about
speed; it's probably a mixture of speed tuning and of how direction selective the neuron is. And that's something where we need new experiments to test the speed tunings precisely, so we can really nail it down and find out what these neurons really like. And this is an important question also for the concept of mixed optic flow regimes: you have rotation and translation mixed, and speeds of a certain size, and how well the animal is really able to distinguish those may depend on the speed tuning. So to really nail it down, we will have to understand that as well.

Thanks. Thank you. So, do we have any more questions? Yes, I have one from Keisuke Yonehara: it's a great talk; is it possible that retina-independent mechanisms generate direction selectivity in the relevant brain areas, or do you have evidence that it can be re-established independently of retinal ganglion cells?

Yes, so we think that the most parsimonious explanation is that the direction selectivity already exists in the retina and does not need to be established in the optic tectum or pretectum. However, there is also evidence that it is actually possible to establish direction selectivity in the tectum independently of the retina. Those experiments were performed in Florian Engert's laboratory, where they forced the neurons to rewire in such a way that the retinal ganglion cells from both eyes would end up in one tectal half but not in the other. Then they presented stimuli to one eye and the other in such a succession that for each eye alone it was not a motion stimulus, but if you put the two eyes on top of each other, then there was a motion stimulus. And they found neurons in the optic tectum that were selective for that, arguing that, yes, novel direction selectivity can be established in the tectum. But if you ask me, I would think that, since you already have it for free in the retina, why should the brain not use it? I would think that probably the tectal
neurons make massive use of the direction selectivities in the retina. But yeah, we need more data to really say; that would be my take on it.

That's very interesting. Okay, well, thank you everyone. I will soon end the live stream, so if people want to join us, I suggest they do now, because I will close the link in a few seconds. Thanks again, Ari, for accepting our invitation and talking to us today. And for the audience, we will have another talk in two weeks' time, so I hope to see you there. Bye, and the live stream is off.