And I will start the live stream and I believe we are live. Welcome everybody to another session of the Sussex Vision Seminar Series, which is part of the World Wide Neuro initiative established by Tim Vogels and Panos Bocellos. I'm Marvin Seifert, a PhD student in the lab of Tom Baden at Sussex University, and I'm delighted to host today's guest, Professor Dr. Herwig Baier, who is a director at the Max Planck Institute of Neurobiology in Munich. Herwig studied biology at the University of Konstanz in Germany and did his PhD with Friedrich Bonhoeffer at the Max Planck Institute for Developmental Biology in Tübingen. He then did a postdoc in the lab of William Harris at the University of California, stayed in California, and was a professor at the University of California, San Francisco from 1998 to 2012. Since 2011 he has been a director at the Max Planck Institute of Neurobiology in Munich. Herwig and his team use a diverse array of methods to study vision, mainly in zebrafish: molecular, genetic, optical, anatomical, behavioral and computational approaches. I think Herwig's studies have given quite a sophisticated insight into different aspects of zebrafish vision, which is why I'm really excited to see what work and what data he has prepared to show us today. With that, I would say the stage is all yours, Herwig, and I want to thank you once more for sharing your work with us today. Well, thank you very much, Marvin, for the kind introduction. I'm also looking forward to sharing my data with you. I'm going to share the screen first. Okay, so it's a pleasure to be speaking in this series, which I've been following for a few months now, and I'll gladly talk about our data. I hope there's going to be some audience there, or everyone is tired of all the Zoom talks and Zoom meetings.
But if I understand correctly it's going to be on YouTube, so you can also check it out later and tell your friends. So I will talk about our work on the visual system of zebrafish, and I'm going to try to keep it interesting by being somewhat provocative in my statements and in my conclusions from this work, because I think that our work on this vertebrate visual system has really changed, at least, my view of how vision works. I'm going to guide you through the fish visual brain, and I'm going to go from molecules to behavior. So the first video that I'm going to show you was not taken by us; I lifted it from Wired Magazine. Someone took the pains of filming fish from below. So here's a minnow in a lake, frolicking happily in the water. Unbeknownst to this little fish, there is a looming predator that is going to snatch it out of the water. Here's a kingfisher that has successfully caught this prey item and is going to eat it or feed it to its young. So what exactly is happening in this situation? Let's break down the situation for the two visual systems involved here: that of the kingfisher and that of the fish. The bird detects a moving object of the right size from above. The bird categorizes the object as prey. The bird locates the prey. The bird selects one prey against a background of distractors; maybe some of you saw another fish crossing the field of view, so the bird really has to pay attention to one target, select it, and not let itself be distracted. The bird aims its movements to catch the fish. The fish recognizes a looming shadow and tries to escape at the very last minute, in this case unsuccessfully. The fish tries to escape but is not fast enough; clearly, though, since this fish has been around, it has had a line of ancestors that were quite successful at avoiding being caught. The bird catches the fish.
So if we summarize and try to derive some basic principles from these observations, we can conclude that successful behavior, be it hunt or escape, requires object detection; classification by valence, good or bad; localization in space; and a directed movement, which of course involves a great number and variety of different muscles, and that movement can be toward the prey or away from the predator. Notably, the behavior does not require a veridical representation of the visual environment. It does not require a coherent image or higher-order object recognition. The bird does not have to determine the species of fish. The bird does not have to distinguish individual fish from each other. The fish is not interested in identifying the species of bird; the fish simply has to escape a looming shadow. I'm emphasizing this because now I'm asking how this works, what the textbook version of how the visual system might do it looks like, and why I think that textbook version is wrong. So a cartoonish view of the visual system that you find in textbooks, in reviews, in systems neuroscience papers is the following. There is assumed to be serial processing of visual information: people usually cite the retina, projections to the thalamus, so the LGN, and from the LGN to V1. Twenty years ago, people felt the LGN was simply a relay station and all the interesting stuff was happening in visual cortex, V1. That has changed a bit, but this view of a serial processor still permeates the scientific literature. There's also the idea of hierarchical processing. The retina is often treated, maybe not by the community that is listening currently, but especially in the primate literature, as a kind of camera that does a pixel-by-pixel representation of the visual image.
It then sends this relatively crude pixel-level information to the LGN and on to the visual cortex, where it is pieced together and gives rise to the representation of features such as orientation, direction of motion, size, color, et cetera. There's also the idea, encountered quite often, of parallel distributed processing. It is assumed that the cortex encodes information by means of distributed population vectors, so combinations of neuronal ensembles encode the information about the visual scene. In a weird twist of science history, I think this was taken from computer science, from the idea of neural nets first developed in the mid-1980s by McClelland and Rumelhart, and it has found its way back into the visual system literature. There's also a very vocal camp that claims that the visual system is concerned with coding image information in an efficient way. That started in the 1950s, very much influenced by information theory, and Attneave was the very first to apply this idea of efficient coding to the visual system. Then Horace Barlow proposed three principles, one of which was efficient coding. The first two are, I think, undoubtedly true and uncontroversial; efficient coding, I think, is still up for debate. And finally, there's the view of complex circuitry in the brain being shaped by sensory information during development, at least by activity or patterns of activity, and by experience; this is subsumed under the term self-organization. So I will provoke you all today and tell you that I think all of these ideas are probably wrong, or at least misguided, or, to put it more leniently or graciously, they apply to certain specialized visual systems but are not generally true.
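As an aside, Barlow's redundancy-reduction version of efficient coding can be made concrete with a toy computation: correlated input channels carry redundant information, and a whitening transform removes those correlations. The following is a generic numpy illustration with synthetic signals, not a model of any real retina discussed in the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

# highly redundant "pixel" signals: four channels sharing one common source
n = 5000
common = rng.normal(size=n)
X = np.stack([common + 0.3 * rng.normal(size=n) for _ in range(4)])

# efficient coding in Barlow's sense: remove redundancy by whitening
C = np.cov(X)
evals, evecs = np.linalg.eigh(C)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T   # ZCA whitening transform
Y = W @ (X - X.mean(axis=1, keepdims=True))

# before whitening, channels are strongly correlated; after, they are not
off_diag_before = np.abs(C - np.diag(np.diag(C))).max()
off_diag_after = np.abs(np.cov(Y) - np.eye(4)).max()
print(off_diag_before, off_diag_after)
```

After the transform, the channel covariance is the identity matrix, so no channel carries information already present in another, which is the sense in which the code has become "efficient".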
So I'm showing here a cartoon version of the rodent visual system, starting in the retina and conveying information to the LGN and from there to the visual cortex. And on the right is the picture that has been advertised for the primate visual system, which of course has influenced our thinking, since it is very important to us primates. This is the view proposed by Felleman and Van Essen's important work, really heroic work of tracing the connectivity in the middle part of this schematic, which contains all the cortical areas that have been implicated in visual processing. If you pay attention, you see at the bottom the retina with two output channels, the M pathway and the P pathway, and here the LGN, also an inconspicuous gray box. So the information goes through channels, through the LGN, all the way to where the interesting stuff happens. Now let's turn to zebrafish. I showed you a bird catching a fish, and now the tables are turned. This is a little zebrafish larva, just half a centimeter in length at this age, maybe a week old. Duncan Mearns in the lab has recorded these prey capture events at high speed, with high temporal resolution. What's indicated in this red circle is a paramecium; there are other paramecia swimming around. The fish chooses and aims at one of these prey items and catches it. This is actually the same fish filmed from above, from a different angle, with a different camera. You can see that the fish sneaks up on the prey, likes to come from below, then opens its mouth and sucks the prey in. A hungry little fish larva can empty a dish of about a hundred paramecia in an hour. Here is just a still showing the last moment in the life of a paramecium as it disappears into the gaping mouth of its predator.
So Julia Semmelhack, when she was in the lab, pioneered the imaging of brain circuitry during behavior. She developed a paradigm where she mounted zebrafish in agarose. Here's a little fish larva embedded with its head in a slab of agarose; you can actually see the border of the agarose gel here. The tail is still free to move and tells us about the movement direction and also the intention of the fish. In a variation of this assay, we can also free the eyes and observe eye movements and jaw movements in this preparation. Julia did some psychophysics by showing different types of stimuli to the fish to determine which ones would evoke prey capture. Here she was just having fun moving a little white dot around with the mouse, and could evoke these prey capture behaviors in this head-fixed preparation. You can even see the so-called J-turns, orienting movements of the tail which, if the fish weren't stuck in agarose, would steer the larva towards this assumed prey. She then did systematic psychophysics and found that in this preparation a fast-moving, five-degree white dot on a dark background was the most efficient stimulus to elicit hunting behavior. Now, when Julia used a different stimulus, such as a black disc on a white background, I'm going to play this again, that was expanding in size and displayed on the side, she could elicit an entirely different behavior: a very vigorous escape maneuver which, if the fish weren't stuck in agarose, would have propelled it away from this object approaching on a collision course. With this kind of paradigm we can do imaging, because now we can place the behaving fish on the stage of a two-photon microscope, and with GCaMP transgenic fish we can observe the activity in the brain. I'll show you several examples of this in my talk. We can also do optogenetics by directing laser light into the brain and activating certain populations of neurons.
So let's take a very short detour to confirm that the retina of the zebrafish looks like a regular vertebrate retina. There's a photoreceptor layer that passes the visual information via synaptic connections to a layer of interneurons: horizontal cells, bipolar cells, amacrine cells. And then there is a row of cells here in the ganglion cell layer that receive synaptic inputs from these interneurons and send long projections across the midline of the brain into visual centers. We have been interested for a few years now in how the central visual brain areas are organized, and Estuardo Robles in the lab carried out a heroic series of experiments where he labeled hundreds of individual retinal ganglion cells, determined their patterns of dendrites in the retina, and, for the very same neuron, also traced the connections into the brain. In this case, the retinal ganglion cell has a somewhat diffuse dendritic arbor. We don't see the axon here because it is deep in the brain; it crosses the midline at the optic chiasm and then dives up to project to the optic tectum in the zebrafish midbrain, where it arborizes in one of the tectal layers. Estuardo collected close to 500 of these images; we now have close to a thousand retinal ganglion cell morphologies, and he classified the shapes of these retinal ganglion cells by their dendritic morphologies. We found the typical assortment of retinal ganglion cell shapes that you can also see in chicken or in mammalian retinas. So we have monostratified types that arborize their dendrites in different layers of the inner plexiform layer of the retina. We also have bistratified and diffuse types, and also funky types like this one here, which reminded us a little bit of a girl in a dress upside down with a very long foot. This is actually a very funky, zebrafish-specific retinal ganglion cell type.
And then there is also this one here, which is interesting because it actually reaches out to the outer plexiform layer and potentially receives direct input from the photoreceptors; this is where the photoreceptors make ribbon synapses onto horizontal cells. Here is a schematic depiction of all the shapes that Estuardo could see in the retina; this is probably a very conservative schematic. So, conservatively, we find about 14 dendritic stratification patterns, which is in the same range as has been shown in other vertebrate retinas. Just to remind you, the idea behind these different shapes of retinal ganglion cells is that each of them represents a type, and each type serves as a feature detector. Each type responds to one aspect of the visual scene, such as OFF or ON information, edges, chromatic information, motion, direction of motion, et cetera. All of this is then combined and sent through the optic nerve into the brain, where it is used to analyze the visual scene. This, of course, is not our work but work from many labs, including the Euler, Baden, Meister, Sanes, Masland, Wässle and Roska labs; I'm probably forgetting some important contributors here. But I told you that for the very same neurons for which we have the dendrite shapes, we can also determine where in the brain they project. Here Estuardo could identify 20 axon projection patterns. He recorded them individually and found some simple ones; for example, this retinal ganglion cell just arborizes in one layer of the tectum. This one is a bit more complicated because it makes a collateral here in an arborization field in the pretectum, so-called AF7. And then we have all kinds of shapes, as I said, 20 different combinations, all the way to very complicated ones. This is the record holder, with collaterals in the hypothalamus, in the thalamus, in one nucleus of the pretectum, and here in the deepest layer of the tectum.
Now, when Estuardo counted up all the combinations of dendrite and axon shapes, he came up with 75 of what we call morphotypes. These are specific types of retinal ganglion cells as defined by both the dendritic and the axonal patterns. The notation here is such that this is the cell body, these are the layers of the inner plexiform layer into which the dendrites project, and these indicate the different pretectal and thalamic areas and the tectal layers. You can see that all of these different combinations are encountered in the zebrafish brain. However, counting and registering more of these ganglion cell shapes doesn't give us more shapes, so we believe we have saturated the system and there are not going to be many more than 75 or so of these types. So the conclusion, deviating from the textbook view, is that there are in fact many more than two channels originating in the retina. We find 20 parallel processing streams that originate in the retina of the zebrafish, and these different types of retinal ganglion cells are indicated here in different colors. They project to nine different targets in addition to the optic tectum, which receives the majority of the input. These pie charts are meant to indicate that different dendrite types combine their projections in each of these arborization fields in the brain. I don't have time to show you all the experiments; suffice it to say that in a handful of cases we have also shown a defined function for particular processing streams. For instance, we identified populations of retinal ganglion cells that project into AF7 here. These retinal ganglion cells respond to small motile objects and represent the frontal visual field. AF7 is organized retinotopically, that is, there is a miniature map of visual space in this area.
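The saturation argument here, that labeling more cells stops revealing new types, can be illustrated with a small simulation: if cells are drawn from a fixed pool of types with unequal frequencies, the number of distinct types discovered plateaus as sampling continues. This is a generic Python sketch; the pool size of 75 and the skewed frequency distribution are illustrative assumptions, not the lab's data:

```python
import random

def discovery_curve(type_probs, n_cells, seed=0):
    """Number of distinct types seen after sampling 1..n_cells cells."""
    rng = random.Random(seed)
    types = list(range(len(type_probs)))
    seen, curve = set(), []
    for _ in range(n_cells):
        # sample one cell according to the type frequencies
        cell = rng.choices(types, weights=type_probs, k=1)[0]
        seen.add(cell)
        curve.append(len(seen))
    return curve

# 75 hypothetical morphotypes with a skewed frequency distribution
probs = [1.0 / (i + 1) for i in range(75)]
curve = discovery_curve(probs, n_cells=1000)

# the curve flattens: later samples rarely reveal a new type
print(curve[99], curve[499], curve[999])
```

When the curve stops growing despite further sampling, it is reasonable to conclude, as in the talk, that the catalog of types is close to complete.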
Of course, there is a big and prominent retinotopic map in the tectum, but there is also a little outstation of the map here in AF7. And this area, as we have shown with laser ablation, is necessary for prey detection. Isaac Bianco's lab showed a few years ago that optogenetic activation of neurons that receive input from AF7 could elicit parts of the hunting behavior. So we believe that this processing channel into AF7 is important for prey detection and for the initiation of hunting. These are the two types of retinal ganglion cells that project to AF7; there are just two of them, a bistratified and a diffuse type, these guys here. We don't exactly know what their individual functions are, but one cool idea that we are currently investigating is whether one of them may encode the speed and the other the size of the object, and together these two features may confer prey identity on a visual object. But we are still working on this, so it is currently a working hypothesis. So, do the morphotypes also correspond to molecularly defined cell types? For this, a few years ago, Yvonne Kölsch in the lab, a very talented graduate student, teamed up with the lab of Josh Sanes and his postdoc Karthik Shekhar, who now runs his own lab at the University of California, Berkeley. Yvonne took advantage of single-cell RNA sequencing and sequenced the transcriptomes of tens of thousands of retinal ganglion cells isolated by FACS sorting. When she did that, using dimensionality reduction, she could identify 32 different types of retinal ganglion cells, indicated by these clusters. Each of these dots represents the transcriptome of one cell, plotted here in two dimensions to allow visualization of the clusters. Many of these clusters are very small, and there is also one cluster that encompasses about 30% of the retinal ganglion cells.
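The workflow just described, reducing each transcriptome to a few dimensions and then clustering, can be sketched in miniature. Real single-cell analyses use dedicated tools and tens of thousands of cells; everything below is synthetic, numpy-only, and only meant to show the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "transcriptomes": 300 cells x 50 genes drawn from 3 planted types
centers = rng.normal(0, 5, size=(3, 50))
labels_true = rng.integers(0, 3, size=300)
X = centers[labels_true] + rng.normal(0, 1, size=(300, 50))

# dimensionality reduction: project onto the top two principal components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:2].T          # each cell becomes a 2-D point, as in a cluster plot

# naive k-means on the reduced data, seeded with one cell per planted type
k = 3
cent = np.array([pcs[labels_true == j][0] for j in range(k)])
for _ in range(20):
    assign = np.argmin(((pcs[:, None] - cent[None]) ** 2).sum(-1), axis=1)
    cent = np.array([pcs[assign == j].mean(axis=0) for j in range(k)])

print(np.bincount(assign, minlength=k))   # cells per recovered cluster
```

With well-separated synthetic types the clusters come out cleanly; real transcriptomes are far noisier, which is why cluster number and subclustering (as mentioned in the next part of the talk) require care.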
So we now have a pretty good estimate of the number of molecularly defined cell types, which is less than the number of morphotypes. I should mention that when you look closer and recluster individual clusters, we find further subdivisions, and when we do so, we end up with about the same number of molecular types as morphological types. A nice side product of these sequencing approaches is that we can identify molecular markers for each of these clusters, and many of these markers are specific to particular types. Those can then be the starting point for making a transgenic line that gives us genetic access to individual types. Four of these lines, which Yvonne generated using CRISPR-Cas9 genome engineering, are shown here. It is very hard to see the retinal ganglion cells; here you can see a few, and this of course is the tectum, to which these ganglion cells project. But as is common for pretty much all of these transcription factor markers, they are also expressed in other subpopulations of neurons outside the visual system. So Yvonne needed to resort to an intersectional genetic trick to label just the retinal ganglion cells in these populations. She took advantage of a combination of the Q system, which, for those of you who are not familiar with it, is similar to the GAL4-UAS system, and the Cre recombination system. When she used a pan-retinal-ganglion-cell Cre line and crossed it into a type-specific QF2 line, Cre would cut out anything in our QUAS cassette that lies between the loxP sites, which is the GFP, so the resulting cells, the retinal ganglion cells, would then be labeled red, while everything that is not a retinal ganglion cell would remain green. So here we find that the muscles are still green, the cells in the hindbrain are still green, but the retinal ganglion cells here in the retina are now colored red.
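The intersectional logic of this labeling strategy (QF2 marks the cell type of interest, Cre marks retinal ganglion cells, and Cre excision flips the reporter from green to red) can be written out as a small truth table. The function below is just a sketch of the scheme as described in the talk, with placeholder names:

```python
def reporter_color(expresses_qf2_marker: bool, is_rgc_cre_positive: bool) -> str:
    """Color of a cell under the Q-system / Cre intersection described above.

    QF2 activates the QUAS reporter cassette; Cre excises the GFP between
    loxP sites, so only cells with BOTH drivers end up red.
    """
    if not expresses_qf2_marker:
        return "unlabeled"          # no QUAS cassette activated
    if is_rgc_cre_positive:
        return "red"                # Cre removed GFP, revealing the red reporter
    return "green"                  # intact cassette, GFP still expressed

assert reporter_color(True, True) == "red"      # type-specific retinal ganglion cells
assert reporter_color(True, False) == "green"   # e.g. muscle or hindbrain cells
assert reporter_color(False, True) == "unlabeled"
```

This is why, in the image described, muscles and hindbrain cells stay green while only the retinal ganglion cells of the targeted type turn red.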
So we can now do this on a decent scale and investigate individual populations of retinal ganglion cells using this approach. When Yvonne did this for five different lines, she found that each population of retinal ganglion cells projects into a subset of the retinofugal targets. For instance, here is the mafaa type, which labels AF7 and the superficial layers of the tectum. Here is one that just labels one layer, the SFGS, a middle layer of the tectum. Here is a line that labels a mix, probably several types, that project into these hypothalamic and thalamic nuclei in addition to the deepest layer of the tectum, et cetera. So I think we now have genetic access to each of the subpopulations of retinal ganglion cells; a lot of work still awaits us, and everyone is welcome to check out the catalogs and lists that we have published. Here is an example of one line that Yvonne created which labels the eomesa subpopulation of retinal ganglion cells. These are enriched in the ventral retina, and when Yvonne ablates these cells, the fish is unable to perform phototaxis; they no longer prefer the light side of the dish. We also, of course, know the projection patterns, which go into the pretectal and thalamic arborization fields, including the thalamic area AF4, which other researchers before us have implicated in phototaxis. Then there are these candidate prey-detector cells that express the transcription factor mafaa. This transcription factor is expressed in the two subpopulations that I indicated before. Unfortunately, mafaa is not terribly clean; it is also expressed in other subsets of retinal ganglion cells. We are currently trying to use genetic tricks to isolate this population of cells and to do an experiment similar to the phototaxis experiment that Yvonne did, ablating or otherwise manipulating these cells to show that they are in fact involved in hunting.
We were luckier with this one here, where we have one transcription factor that labels just one type of retinal ganglion cell. So the picture that has emerged is one of combinatorial processing of feature-selective ganglion cell inputs. We find 32 different ganglion cell types that project into these different areas of the brain, including the layers of the tectum shown on the right, and form 20 different channels into the visual brain. Several of these feature detectors are often combined in individual areas. One of these, what I dare to call a labeled line, is responsible for prey detection and for the initiation of hunting; these retinal ganglion cells come in two flavors, and they project into AF7 and into the most superficial layer of the tectum. Another line is important for the detection of looming. There is a complicated system that detects OFF, or dimming, of the entire visual scene. There is a phototaxis system here in the pretectum and thalamus. There is also a dedicated direction-of-motion line that forms a collateral in AF5, in the accessory optic system, and then continues to arborize here in this particular layer of the tectum, and so forth. Ongoing and future work is probably going to map out the entire visual system and assign functions to each of these channels. Okay, so these are parallel channels for what you may call visual reflexes. But what about real vision? What I mean by real vision is the things that machine vision people might be interested in: how can we detect and categorize objects in space? The best candidate area for doing these kinds of things in the larval zebrafish visual system is the tectum. The tectum is the largest area; it receives input from more than 95% of the retinal ganglion cells, often only via a collateral, but still.
It has a complicated architecture with different layers that represent different features of the visual scene. It has a beautiful retinotopic map, so a complete representation of the visual environment. It is therefore in a position to do interesting things by distributed coding, by population activity, and might encode visual objects in complicated ways. So we did an experiment to test this hypothesis, to give distributed coding an opportunity to show itself. For this experiment, Dominique Förster in the lab created a triple transgenic line that expressed cytoplasmic GCaMP in the axons of retinal ganglion cells. This here is the neuropil of the optic tectum, and these bright structures are the axons of the retinal ganglion cells, which very densely innervate the tectal neuropil. In the same fish, he also used a nuclear-localized GCaMP that is restricted to the cell bodies of the tectal neurons. So, simply by recording in a fish that carries both of these transgenes, he could determine input and output relationships of tectal processing in the same fish. This is of course done on the stage of the two-photon microscope, and via a projector we can present different stimuli to the zebrafish. The battery of visual stimuli that we used was quite diverse, all the things we could think of: dark ramps, meaning a slow change in ambient illumination, dark and bright ramping, sudden flashes of darkness or brightness, prey-like stimuli, larger moving stimuli, as well as grating motion, looming, et cetera. When Dominique hierarchically clustered the responses of the tectal neurons to these diverse stimuli, which are plotted here on the left, he found that there was a group of cells that was very specific and responded consistently to small, either stationary or moving, stimuli.
So these cells have the strongest responses to the kinds of stimuli shown here, five-degree dots moving in two directions; they were silent when the luminance of the whole visual field was changed, and only some subclasses responded, weakly, to some other stimuli. So they were quite specific to small moving objects. Another group responded quite specifically to looming and dimming stimuli, and in this clustering they were really not related to the ones that responded to small moving objects. Then, in the middle, there was a large group, maybe half of the tectum, that responds to intermediate sizes and might respond to close or approaching objects. Here is a pie chart that shows the different classes, a simplified schematic of what I just showed you. These are the most frequent classes of retinal ganglion cell responses, and these are the tectal neurons. Dominique, with the help of Thomas Helmbrecht in the lab and Joe Donovan, investigated how you can transform the retinal ganglion cell population responses into the tectal population responses. The answer was actually quite shocking, in that most of the tectal responses could be explained by a linear combination of the retinal ganglion cell inputs. There is a little bit that the tectum contributes: it introduces more directions and direction selectivity in some of the channels, but the contribution of the tectum to the sensory encoding was quite limited. Dominique was then interested in whether the tectum is organized in such a way as to represent a prey object in retinotopic coordinates. He started with a simple assumption about hunting episodes: what happens to the visual image as the fish initiates a hunt? In the typical case, the fish will first detect an object in its peripheral visual field, at a distance.
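The claim that most tectal responses are explained by a linear combination of retinal ganglion cell inputs corresponds to fitting a simple linear model from the input population to the output population. Here is a minimal numpy sketch of that kind of analysis; the matrices are synthetic and every number is invented, it only shows the form of the fit:

```python
import numpy as np

rng = np.random.default_rng(1)
n_stimuli, n_rgc_types, n_tectal = 40, 12, 30

# synthetic RGC population responses: stimuli x RGC response types
R = rng.normal(size=(n_stimuli, n_rgc_types))

# tectal responses built as linear mixtures of the inputs plus a little noise
W_true = rng.normal(size=(n_rgc_types, n_tectal))
T = R @ W_true + 0.1 * rng.normal(size=(n_stimuli, n_tectal))

# least-squares fit: how well do linear combinations of inputs explain outputs?
W_fit, *_ = np.linalg.lstsq(R, T, rcond=None)
T_hat = R @ W_fit
var_explained = 1 - ((T - T_hat) ** 2).sum() / ((T - T.mean(0)) ** 2).sum()
print(round(var_explained, 3))
```

A variance-explained value near 1, as in this synthetic case, is the signature of a mostly linear transformation; any systematic residual would point to genuinely nonlinear processing contributed by the downstream stage.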
So it will appear small and will be represented by the nasal retinal ganglion cells and in the posterior part of the tectum. The fish will then detect this object, categorize it as prey, and turn towards it to bring the prey into the binocular field of view right in front of it. There it will of course appear much larger, because the fish is now closer, and it will be represented in the anterior part of the tectum, actually in the anterior halves of both tecta, in the binocular zone. So during a hunting episode there will initially be a small object represented in the posterior tectum, which then moves across the tectal surface, grows bigger, and is represented in the anterior tectum. So are the cells that represent these kinds of objects distributed in the way predicted by this sensory ecology? The answer is a resounding yes. The back of the tectum is enriched with cells that respond to a small object that is actually moving. There is sensitivity to small objects all over the tectum, but in the anterior part of the tectum the responses to larger objects are enriched. This is also shown here in the population distribution: this is the axis of the tectum, and you can see this very nice peak where the cells responding to small moving objects are enriched. This recapitulates the sequence of hunting as I described it before and very nicely matches the images that develop during prey capture. And I'd like to give a shout-out here to work from Tom Baden's lab on asymmetries in the zebrafish retina, which has discovered asymmetries in the arrangement of photoreceptors, but also of retinal ganglion cells, in the zebrafish retina, which also presage or beautifully match the expected sensory ecology. It will be very cool in the future to link these different patterns in the retina to those in the tectum.
Okay, this is the fate of even the most beautiful work of PhD students: five years of work are summarized in just one or two slides when the PI gives a talk. This is a summary of what Thomas Helmbrecht, a super talented student, did in the lab. Thomas was interested in understanding the sensorimotor transformation that goes on in the tectum and in identifying the output channels of the tectum. The tectum is connected to areas in the hindbrain and in the midbrain tegmentum and of course steers the movements of the fish. He was interested in whether he could identify dedicated channels that were specific to the behavior of the animal, and the answer is yes. Here is the summary of his thesis. He identified two bundles of axons that originated in the tectum. One was dedicated to escape and avoidance responses and projected along the medial tract of the tectobulbar fascicle. When he activated these cells using optogenetics, the fish would make a very vigorous tail movement away from the eye that was connected to this tectum. He could also show with calcium imaging that this tract of axons responded to a looming stimulus. Another group of axons, here in the lateral region of the tract, was activated when the fish was seeing a prey-like stimulus. In fact, there was a topography among these axons, in that the further laterally you went, the greater the amplitude of the tail bend he could elicit using optogenetics. What we conclude from this is that the connections from the tectum to the hindbrain are organized, at least partly, according to a space code and not a rate code, as many people would have predicted. Distinct labeled motor lines exist for approach and escape.
Okay, so I hope I've convinced you that the zebrafish visual system is organized in a way that contradicts, or at least challenges, the textbook view of how the visual system of a vertebrate might be organized. Now let's challenge another dogma of the field, namely that of the self-organization of neural circuits. Many years ago, Linda Nevin, when she was in the lab, performed the following experiment. She exposed the fish to different light regimes. So this would be the normal light-dark cycle; she kept the fish in the dark; she changed the balance of ON and OFF by blocking the ON channel; or she actually inhibited and blocked any synaptic transmission using botulinum toxin. And she could not, in these experiments, find any difference in the layering of retinal ganglion cell projections into the tectum, which led us to conclude that, at least for layer-specific targeting, experience or even retinal synaptic activity was not necessary. But we wanted to revisit this issue now with better, molecular methods. So Shachar Sherman, a talented graduate student in the lab, decided to use single-cell RNA sequencing to survey the cell types in the visual forebrain and also the tectum of fish that have never seen a ray of light through the retina. So this is the single-cell RNA sequencing data from 100,000 cells, done from a transgenic line that labels the pretectum, thalamus and hypothalamus. And he finds in this collection GABAergic neurons and glutamatergic neurons, about 40 or so cell types each. He even finds progenitor cells, precursor neurons, et cetera. And the habenula is also part of that pattern. Here are just the GABAergic and glutamatergic populations. And beautifully, he can also follow the differentiation path from the progenitors. So these are still dividing. 
And then there's a neurogenin-1 path that leads both to the habenula and to the glutamatergic neurons, and also a developmental trajectory to the GABAergic neurons, which use the transcription factor ascl1b. The habenula then has its own private path that uses both neurogenin-1 and the chemokine receptor cxcr4b. Now, look at lakritz mutants, which are devoid of all retinal ganglion cells. These are fish that look normal. They are dark because they're unable to do visual background adaptation. They lack all retinal ganglion cells because they have a mutation in a transcription factor that is important for the fate determination of the retinal ganglion cell lineage. Doing the same kind of single-cell RNA sequencing in the same transgenic line, Shachar found no major difference between the wild type and the lakritz mutants. I don't have time to show you all the statistics that he did in order to detect any differences. He basically could not detect any cell type that was missing or not induced in the lakritz mutant forebrain. He did find individual genes that were up- or down-regulated, but the cell fates themselves were unchanged. And even the proportions of the cell types were unaltered in the lakritz mutant. So this was very provocative for us, because of course there's a huge literature on how visual experience and activity shape the neural circuits downstream of the retina. So we asked, well, can we maybe find a deficit in the behavior itself? Now, of course, these lakritz mutants lack all retinal ganglion cells, so they don't respond to any visual stimuli; they are completely blind. So we resorted to optogenetics to elicit an optokinetic response. This is based on work that Fumi Kubo did when she was in the lab, which was recently picked up by Yunmin Wu, a graduate student in our lab. 
So what you do here is you put the fish in an arena, and a wild-type fish, when exposed to a grating that moves around it, will respond with optokinetic eye movements, which are probably familiar to all of you. Fumi and Yunmin now managed, by placing an optic fiber near the accessory optic system and using channelrhodopsin-transgenic fish, to elicit optokinetic eye movements. You can see the two eyes moving here in this animal, notably without any visual stimulation. So we can evoke optokinetic responses in this animal without any visual stimulation. Now the question was, can we do the same in lakritz fish? And the answer is yes. So here is the OKR index plotted for the wild-type and lakritz genotypes. When there's no stimulation, there's no optokinetic response. With visual stimulation, only the wild type responds, with a strong optokinetic response. And when we do optogenetic stimulation, then both lakritz and wild type show these typical eye movements. So this suggests that the circuits that evoke these eye movements are assembled properly in a fish whose brain is not innervated by retinal ganglion cells. Okay, so let me summarize. Oh, I need to wrap up. In what way is the view of vision changing? The visual system of larval zebrafish employs matched filters at each stage of processing. There's no such thing as a pixel-wise representation of the visual scene by the retina; retinal ganglion cells are single feature detectors. The downstream neurons then receive inputs from a combination of retinal ganglion cells, and in the pretectum and tectum we observe combined feature detectors. The 20 or so visual pathways resemble labeled lines. They are often dedicated to particular behavioral outcomes. Attractive and aversive stimuli are processed in parallel, segregated circuits. So not parallel distributed circuits, but parallel segregated circuits. 
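An OKR index of the gain-like kind plotted above could be computed from an eye-position trace roughly as follows. The exact metric used in the study is not specified here, so this definition, the saccade threshold, and the toy trace are all assumptions:

```python
def okr_gain(eye_deg: list[float], stim_deg_per_s: float, dt: float,
             saccade_thresh: float = 100.0) -> float:
    """Toy optokinetic-response index: mean slow-phase eye velocity divided
    by stimulus velocity. Samples whose instantaneous velocity exceeds
    `saccade_thresh` (deg/s) are treated as resetting saccades and skipped."""
    slow = []
    for a, b in zip(eye_deg, eye_deg[1:]):
        v = (b - a) / dt
        if abs(v) < saccade_thresh:
            slow.append(v)
    if not slow or stim_deg_per_s == 0:
        return 0.0
    return (sum(slow) / len(slow)) / stim_deg_per_s

# A fabricated trace: the eye tracks a 10 deg/s grating, with one reset saccade.
trace = [0.0, 0.5, 1.0, 1.5, -8.0, -7.5, -7.0]   # eye angle, sampled at 20 Hz
print(okr_gain(trace, stim_deg_per_s=10.0, dt=0.05))
```

A gain near 1 would mean the slow phase tracks the stimulus; a stationary eye gives an index of 0, as for the unstimulated fish.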
I did not have time to show you that there are circuits that ensure that motor commands are not in conflict with each other. A fish cannot approach and avoid at the same time. This is done by interneuron populations that inhibit the different lines in a reciprocal fashion. Elementary forms of spatial attention, which I also didn't have time to show you, are implemented by dedicated circuits that suppress responses to distractors while enhancing responses to the most salient stimuli. Then finally, I showed you that cell types, layers and connectivity in visual brain areas are assembled by activity-independent genetic programs. And as an aside, maybe for those of you who don't care about zebrafish, here is something to think about: why is biological intelligence still vastly superior to artificial intelligence? The problem is that even the most sophisticated deep neural networks fail at simple tasks. They need very large training sets that need to be labeled to train the net. And they're also very inefficient in terms of their energy consumption. The possible reasons, and these are maybe more points for discussion: machine learning ignores the sources of innate knowledge that our brains have. Machine vision is trained to classify images, to name objects and to recognize faces; advanced systems may estimate an agent's behavior. These are all very human-centric tasks. This is not what most visual systems care about. Brains have been selected to survive by responding properly to external cues, and for this they don't need a veridical representation of the external world; that by itself does not confer any advantage to the animal. Brains are so efficient because they use the energy spent by development and learning over the lifetime of the organism, and indirectly over evolutionary timescales. And then something that I think the AI community is also beginning to realize is that the brain assigns meaning to stimuli. 
So we don't know how those meanings are encoded, but clearly brains have found clever ways to develop some semantics. So the common claim that deep neural nets are biologically inspired can only be justified if you accept the common misconceptions about the visual system that I mentioned: serial, layered, hierarchical, parallel distributed, self-organizing. Okay, with that, I'd like to show you a few more slides to advertise our atlas of the zebrafish brain. I showed you a lot about the anatomy of the brain, but what I did not emphasize is its small size. This is the larval zebrafish brain at the bottom left, compared to that of the adult mouse. So any connectomic or functional neuroanatomy project is going to be much more daunting in the mouse. Over the past couple of years we've developed atlases of the zebrafish brain, in which we map lots of individual cell morphologies into a standard reference brain. These are the tectal interneurons; these are the tectal projection neurons, mirrored to the other side of the brain. And we have over the years collected many more morphologies of neurons and are now filling the entire central nervous system of the zebrafish with literally thousands of morphologies. This is a pipeline in the lab that we are using. Check out the atlas at the website that is given, and you can find the shapes of neurons in your favorite area. You can actually also determine where these areas are receiving input from and what their outputs are. In the same space we've also mapped over 450 markers, most of them transgenic lines, that you can also check out and pick and choose and go shopping for. And we are now adding, this is in preparation and not published yet, gene expression patterns using HCR in situ hybridizations, which are super sensitive and can be multiplexed. We are adding this information about gene expression as we speak. Okay, so with that I'd like to finish up, and I'm happy to take questions. 
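The wild-type versus lakritz comparison of cell-type proportions described earlier can be sketched like this, with toy cluster labels standing in for the real annotations (the labels and counts are fabricated for illustration):

```python
from collections import Counter

def cluster_proportions(labels: list[str]) -> dict[str, float]:
    """Fraction of cells assigned to each cluster."""
    counts = Counter(labels)
    total = len(labels)
    return {c: n / total for c, n in counts.items()}

def max_proportion_shift(wt: list[str], mut: list[str]) -> float:
    """Largest absolute change in any cluster's proportion between genotypes."""
    p_wt, p_mut = cluster_proportions(wt), cluster_proportions(mut)
    clusters = set(p_wt) | set(p_mut)
    return max(abs(p_wt.get(c, 0.0) - p_mut.get(c, 0.0)) for c in clusters)

# Toy labels standing in for annotated cells from the two genotypes.
wild_type = ["gaba"] * 40 + ["glut"] * 40 + ["progenitor"] * 20
lakritz   = ["gaba"] * 42 + ["glut"] * 38 + ["progenitor"] * 20
print(max_proportion_shift(wild_type, lakritz))  # small shift -> same composition
```

A missing or uninduced cell type would show up as a large shift; the finding in the talk corresponds to this statistic staying near zero across all clusters.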
Well, the most important slide is this one here. This is the group of superheroes that have done all the work, in the happier pre-corona times at one of our lab parties. I mentioned the individual people as I showed the data that they contributed. With that I'd like to thank them, and thank you for your attention.

Thank you, thank you very, very much. It was a really great talk; I really enjoyed it. And I can see here that you haven't lost a single viewer during the whole hour. I mean, there was so much in there, it's incredible. I see there are some questions. One question that I had, I think you mentioned it on the summary slide, but maybe you could say one or two sentences more. The real environment of the larval zebrafish, I guess, is very confusing, because all of the stimuli arrive at the same time: different paramecia are swimming around, maybe the light becomes darker, some spot comes closer. So do you think that this might be the really important role of the tectum, that it kind of decides which stimulus is actually the most important at that very moment?

So that is certainly, I wouldn't call it a function of the tectum, but I think it's a necessary task, or a necessary feature, of processing. The function of the tectum in my view, which actually conforms to the classical view of the tectum, is that it transforms a sensory map, a visual map, into a motor map. This is what all this complicated neuropil connectivity is for. And we don't understand how this transformation works. We know the inputs and we know the outputs, thanks to Thomas Helmbrecht, but we don't know how these two maps are converted into each other. This is a fantastic problem. But of course, in order to choose one visual target and initiate the appropriate behavior, you need to ignore lots of other stimuli. 
So I want to refer you to one paper that we published recently, which showed that for stimuli presented to the same eye, it is in fact probably already in the retina that there is lateral suppression: an enhancement of a cluster of retinal ganglion cells at the position that represents the most salient object, and a suppression of responses in the neighboring regions. And we claim that because we see this kind of prioritization in the activity of the axons in the retinotectal system. And it does not depend on the tectum. So we ablated the tectum, and we just had the retinal axons as an intact sheet, and they as a population could still choose the most salient stimulus. Now, if you present the stimuli to the two different eyes, then there's an additional nucleus involved, the nucleus isthmi. And we think, well, the case is not closed yet, but all the evidence, the correlative evidence, points to a role for the nucleus isthmi in determining which tectum is going to be favored, which tectum is going to activate the behavioral program and which one is going to be suppressed. Okay. Check out the paper that came out in Neuron a few months ago.

Yeah, thank you. So I'm just going through the questions that are being written in the chat. One question is: larvae are quite reflexive in their visual behaviors, but this becomes less obvious as fish mature. Perhaps what one might call traditional image-forming vision becomes the luxury of the bigger brain?

Yes. I mean, that's a possibility, certainly. We start with the larva because it's small and can be imaged, et cetera. But of course, the fish grows substantially, and the tectum grows with the rest of the organism. So it is, I mean, I don't know in volume, but probably a million times larger. Maybe that's an exaggeration, but it's much larger. And there is room for population coding in the adult tectum, yeah, I grant that. 
But at least it shows that this visual system at this stage doesn't need that; it doesn't employ that kind of coding.

Another question, from Emery: impressive talk. Can you please tell us more about ganglion cells innervating the zebrafish equivalent of the mammalian LGN? Do LGN-innervating ganglion cells have features different from others, or do they have mixed features?

Okay, so LGN, that's a big debate in the field. This branch of vertebrates, fish, some reptiles, birds, have used the tectum as the major route of information transfer from the retina to the telencephalon. So there's a tectothalamic projection. A visual LGN probably does not exist in the way it does in mammals. There are three retinorecipient thalamic nuclei in zebrafish, but none of them really projects to the telencephalon, so none of them is a good candidate for the LGN. I was very much in doubt of this, but I've become persuaded by the neuroanatomists who tell me that. There's a nice review article by Thomas Müller about the zebrafish thalamus, the visual thalamus, that makes the point very clearly, I think.

All right, Philipp Bartel asks: what do these data mean for perceptual spaces in zebrafish?

Perceptual spaces. Yeah, maybe Philipp can join the discussion later. Yeah, I mean, I don't know more than what I showed you. In the first 10 years of my life as independent faculty, we published papers claiming perception in zebrafish. And we got flak from the reviewers, that fish don't perceive anything. We defended that. And now I've come full circle and think maybe there's no such thing as perception in larval fish. So I'm becoming very skeptical of the papers that do whole-brain imaging and claim, I don't know, chromatic processing in distributed networks in the fish. 
And I think, first of all, you have to show that there's color vision in fish. I mean, there's certainly selective cone usage; you from the Baden lab know that, right? So certain behaviors use particular cone channels. But is there really something like color vision, like we have? I'm not sure; that's up for an interesting discussion. Yeah, perception is a difficult concept. It's like cognition; cognition is problematic as a concept. Maybe Philipp will join the discussion afterwards; I'm looking forward to talking with him.

So Leon would like to ask, on the topic of what happens in the optic tectum: you said that you think that the responses of neurons in the optic tectum are linear combinations of inputs from the ganglion cells. What do local interneurons do? And he extended the question a bit: do they adjust weights, perhaps depending on signals from other senses or changes in internal state?

Exactly, all of the above. First of all, I think the tectum, as I said before in response to your question, has a sensory function but also a motor function. That transformation, I think, is what these interneurons do. The tectum is also important for the saliency encoding that you asked about. And the tectum is important for mediating internal state. We've shown a few years ago that a hungry fish processes prey differently than a satiated fish: hunger recruits more neurons that respond to prey. And interestingly, it does not switch the feature selectivity of the tectal neurons, but it wakes up from a quiescent state neurons that in a satiated fish do not respond to prey. So the tectum has lots of functions. The linear combination is sort of the general rule, but there are additional things. For instance, Martin Maier showed very nicely that the tectum has additional direction-selective activity. 
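The picture of tectal neurons as thresholded linear combinations of retinal ganglion cell channels, with hunger lowering the threshold rather than changing the tuning, can be sketched as follows (all weights, thresholds, and channel names are hypothetical):

```python
def tectal_responses(rgc_inputs: list[float], weights: list[list[float]],
                     threshold: float) -> list[float]:
    """Each tectal neuron linearly combines retinal ganglion cell channels;
    a threshold then gates whether it responds at all."""
    out = []
    for w in weights:
        drive = sum(wi * xi for wi, xi in zip(w, rgc_inputs))
        out.append(drive if drive >= threshold else 0.0)
    return out

# Two RGC feature channels: [small-prey-like, large-looming].
prey_stimulus = [1.0, 0.0]
W = [[0.9, 0.1],   # strongly prey-tuned neuron
     [0.4, 0.2],   # weakly prey-tuned neuron
     [0.1, 0.9]]   # loom-tuned neuron

satiated = tectal_responses(prey_stimulus, W, threshold=0.8)  # high threshold
hungry   = tectal_responses(prey_stimulus, W, threshold=0.3)  # hunger lowers it
print(satiated, hungry)
```

Lowering the threshold recruits the weakly prey-tuned neuron into the responsive pool without touching any neuron's weights, i.e. more neurons respond to prey while feature selectivity stays the same.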
Tectal neurons have direction-selective responses that are not imposed on them by retinal ganglion cells.

Right, I think that's all the questions for now. I will now post the link so you can join the Zoom room. I will let this live stream run for a few minutes longer, but then afterwards I will shut it down, and unfortunately the chat will also disappear. So if you want to join the Zoom discussion, you kind of have to decide now. I think Tom is already joining. The Zoom link was also in the email that you sent me, right? Is that different? No, you just stay where you are; it's just for our viewers. So for example, Tom has already joined, and I think some more people will join. All right. We're still live, are we? We're still live. No secrets, huh? No, not yet.

That was great. I was particularly struck by one of the very last slides that you had, which showed the in situs; they looked really lovely. And then you put yourself in the way, so I couldn't really see it properly, but it looked really good. That is the method that we shared with you, right? The HCR in situs.

Yes, this is very nice, and it took us a while to register these in situ staining patterns to our standard reference brain, because you have to do all this warping. But this works now very nicely, and it's going to be something like 150 or so genes, really interesting genes that come out of the scRNA-seq data, that we've mapped to the brain. There will be a lot of information.

No, I mean, I like the idea that these little fish use labeled lines for a lot of their tricks. This is actually something that's starting to be more and more obvious in mice as well. I don't know if you saw the recent paper from Daniel Kerschensteiner, the one where they killed some alpha cells and then the mice don't hunt so well. Yeah. Yeah. 
No, I think the idea is there, and I'm kind of trying to put a nail in the coffin of the parallel distributed processing idea. I think it lingers below the surface of many of these studies. And I think initially the mouse field tried to make the mouse a little monkey, and now it turns out, no, the mouse visual system is not really interested in faces, right? But it's interested in escaping, or in finding crickets to eat.

I'm just wondering to what extent it's mutually exclusive. The mouse could still be interested in faces as well as escaping.

Uh-huh, yeah. Well, yeah, maybe that was Philipp's question also, you know, where's the perceptual space? Yeah, no, it's true. I mean, I was trying to be provocative to make that point.

No, I think for the larvae... I believe in it, but you can still be provocative with what you do. No, I think for the larvae, this is probably the right direction. I wouldn't go quite as extreme as saying there's no general processing. I mean, what you're basically saying is everything's specific, right? So cell one does this, cell two does this. And then once you've done all the cells, that's all the functions that the fish has.

Okay, you are more lenient, more gracious. I think putting out the strong hypothesis has the advantage that you can refute it, right? With one counterexample. So I'm waiting for that.

Except that general coding is really hard to prove, whereas specific coding, yeah, it's easy in the sense that you kill the cell, it's gone, what else is it going to be? Right? I guess, what would the proof be? What would be the experiment that would show that there's, well, I don't know the term, general visual processing, essentially interactions between these channels, I guess? Is that what we would need to demonstrate? The interactions downstream of the retina between channels, is that the criterion?

I mean, actually the interactions do happen. I went through this too quickly, probably. 
You would still have mutual inhibition so that you don't have conflicts, right? I think I would accept an experiment that did the following: you activate one population of cells and get one behavior, and then activate a different but overlapping population and get a different behavior. So this is a relatively simple experiment. That would show me that, yes, it's a population code.