Hello, and once again thank you for joining us today for another Sussex Vision talk. This talk series is part of the World Wide Neuro initiative, which hosts online seminars in many fields of neuroscience. The initiative is about to reach 150 seminars hosted since March, and many others are still to come. You can already watch most of them as a podcast; you'll find the links in the description. Today I will be the host; my name is Maxime and I'm a member of the Baden Lab. We are very happy to receive Thomas Euler from the University of Tübingen. Thomas obtained his PhD at the Max Planck Institute for Brain Research in Frankfurt. He then worked as a postdoc with Richard Masland at Harvard. He later came back to Europe, to the Max Planck Institute for Medical Research in Heidelberg, where he worked as a research fellow with Winfried Denk, the inventor of two-photon microscopy. He is now a full professor at the Institute for Ophthalmic Research and the Centre for Integrative Neuroscience in Tübingen. Thomas is mostly known for his work on retinal signal processing. His laboratory has established an impressive methods catalog for the optical measurement of light-driven population activity based on calcium sensors, complemented with single-cell electrophysiology, immunocytochemistry, and large-scale data analysis. With his collaborators, he aims to unravel the function and organization of retinal microcircuits toward a better understanding of the underlying computational principles. Thomas is also a close collaborator of ours, and we are happy to share his inclination toward an open-access and open-source philosophy. I will finish by saying that we are all looking forward to the next ERM, the European Retina Meeting, which Thomas and his collaborators will host in the beginning of 2021. So good afternoon, Thomas. Thank you, Maxime, thank you very much for the invitation. Hello, everyone. As Maxime already told you, my lab is focusing on computations in the retina, especially in the mouse retina. But today I'd like to summarize our ongoing, and partially very fresh, work in our attempts to connect work on the retina with visual ecology. The projects I will tell you about are collaborations between several labs: first of all with Laura Busse's lab in Munich, as well as with long-term collaborators here in Tübingen, the labs of Katrin Franke, Philipp Berens and Matthias Bethge, also with Alexander Ecker, who is now in Göttingen, and more recently with Fabian Sinz. So it's quite a big collaboration. Okay. I think we all agree that when we study the visual system we have to take into account that animals are adapted to their specific habitats. You can see here a figure from a recent review article that Tom, Philipp and I wrote together, where we show adaptations of different species to their environment, here in terms of a high-acuity center in the retina, where it is located and how pronounced it is. Clearly these animals have to deal with different environments, and those environments are also imprinted in the visual system: the environment is reflected in the representations in their visual system. This is what we have to consider when we try to understand how, for example, a mouse sees the world. Approximately seven years ago, in 2013, we started to look a little bit into what is actually interesting for mice, instead of just stimulating them with very artificial stimuli.
And in our first paper we looked at photoreceptor responses to different wavelengths, and I'll show you in a moment what we did. But first I want to show you a couple of sentences from the review we got for this paper. The reviewer said that mice have very poor vision in general and are colorblind. Excuse me, Thomas, I will stop you; you are not sharing your screen here. You didn't realize it was not transferring, sorry. From when on? From the beginning, apparently. Yeah, now it's on. I'm sorry about that; my host didn't tell me earlier. Okay, so I go one slide back, I think. This is the slide I was just referring to: different visual environments, different specializations of the retinas, for example. So we always have to consider the environment and the animal that lives in this environment. And this is also true for mice, which have become a very important model in vision neuroscience in the past couple of years. And the reviewer's comments, which I thought were very telling at the time: he or she claimed that mice have very poor vision in general, are colorblind, live in nests and navigate hidden tracks using their whiskers in the dark, and not their eyes, because they rarely see the sky. This was quite revealing of how people conceived of mice as animals in visual research. In the end we were able to publish the paper. But I think times have changed quite a lot, and I'm really excited to see more and more mouse studies where sophisticated recording techniques are used and where different behaviors are tested. So I think we're now on the right track of studying mice as animals in their visual world. So what do we know about mice? The mouse retina has three types of photoreceptors, two cones and one rod, and the short-wavelength-sensitive cone opsin has its peak in the UV. And this is a point I want to make right at the beginning: if you show a mouse a TV screen, it won't be able to see the blue component. So you're missing quite a lot if you're using a human-made device to display stimuli to mice. This is my first point. The second interesting point about mice is that they have a gradient of opsin expression across the retina: the dorsal retina is dominated by green-opsin-expressing cones, while the ventral retina is dominated by UV-opsin-expressing cones. And in addition to this gradient, there is a population of cones that is UV-sensitive in both parts of the retina. This is illustrated by labeling from a paper by Silke Haverkamp. This is not a very unusual arrangement; at this point we usually say, okay, there are a lot of animals that also have this division into a UV-sensitive lower part of the retina and a green-sensitive upper part. This is the case, to a certain degree, in rabbits, for example, but also in guinea pigs and many insects, or in shrews. Recently, Heinz Wässle pointed out a nice paper on mice that I had almost forgotten about, which shows that several mouse species have this division, for example these two, the house mouse and the steppe mouse, but that there are also mice that don't: here, the wood mouse has no gradient in opsin expression along the retina, and this pygmy field mouse even lacks the S-cone. Interestingly, the two lower mice here live in wooded areas, while the upper mice, except maybe the house mouse, live in open fields. So they are more interested in open space and in what appears on the horizon.
But this is just a side note. It just means that even if we say we are studying "mice", we have to consider which species and how they are really adapted to their environment. Recently, a new study on the mouse photoreceptor distribution came out, from Wei Li's lab, where they showed not only this strong distribution of the opsins, with the green opsin up here and the UV opsin down here, but also that the postsynaptic circuitry, the bipolar cells that specifically contact the true S-cones, has a high density in the ventral retina. So there is not only this expression gradient of UV opsin, there is also a high density of the circuitry that processes UV signals in the ventral retina. And to point this out again: all of this circuitry is ignored if you use a normal screen to display stimuli to mice. So this is the photoreceptor side; what about the behavioral side? There have been a couple of studies. Probably the first was the study by Jacobs and coworkers, where they showed that mice can indeed discriminate short and long wavelengths. More recently, there was a study by Denman et al. that basically confirmed this result, that mice can discriminate colors in the UV and green range. Okay, so the mouse has everything it needs to process color, and now we want to know what the chromatic properties of its environment are. I will briefly summarize the study we did in 2013. We built a little scanning device, basically a spectrometer that scans the environment, as shown here. You then have a full spectrum for every pixel, and with the absorption curves of the mouse opsins you can calculate how the image looks for a mouse. And this is the mouse view of a scene in the woods. We converted the S-cone band and the M-cone band into images and then looked at how contrast is distributed in these two bands and in different parts of the visual field. In a nutshell, we found that in the upper visual field, in the UV band, the contrasts are distributed in a skewed manner: there is more dark contrast in the sky region compared to the ground region. In the green band we found that, especially in the lower part of the visual field, the contrasts are more symmetrically distributed. In parallel, we had done recordings from mouse cone photoreceptors, using a transgenic mouse line where we can read out the output of cones under the two-photon microscope with a calcium indicator. We displayed light flashes of different contrasts and different colors, for example dark and bright contrasts, and recorded the responses. What we found is that many of the S-cones in the ventral retina are actually biased towards dark contrasts, while most M-cones in the dorsal retina, the green cones, show a more symmetric response to these contrast steps. This means that already at the level of the photoreceptors, this bias in the contrast statistics is reflected in the function of the photoreceptors. And what we suggested at the time was that this might be a useful feature for mice if they want to detect dark contrasts in the sky: dark contrasts in the sky are usually something that is not good for a mouse, for example a bird of prey. So this is something we'd like to study very much. Shortly after that, Tom moved to his own lab, and we pursued similar questions in the two labs.
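To make the "mouse view" computation concrete: given a hyperspectral scan with one spectrum per pixel and the two mouse opsin sensitivity curves sampled at the same wavelengths, the projection is just a weighted integral per pixel. Here is a minimal sketch; all array names are hypothetical placeholders, not taken from the published analysis code:

```python
import numpy as np

def cone_view(scan, wavelengths, s_opsin, m_opsin):
    """Project each pixel's spectrum onto the mouse S- (UV) and M- (green)
    opsin bands.

    scan        : (H, W, n_wl) hyperspectral image, one spectrum per pixel
    wavelengths : (n_wl,) sampling wavelengths, e.g. 300-650 nm
    s_opsin     : (n_wl,) S-opsin spectral sensitivity (UV-peaked)
    m_opsin     : (n_wl,) M-opsin spectral sensitivity (green-peaked)
    Returns the scene as two images, one per cone type.
    """
    dw = np.gradient(wavelengths)  # wavelength bin widths for the integral
    s_img = np.tensordot(scan, s_opsin * dw, axes=([2], [0]))
    m_img = np.tensordot(scan, m_opsin * dw, axes=([2], [0]))
    return s_img, m_img
```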
And it became clear that, in the long run, this approach needs more. One issue is illustrated here: this is the field work with the scanning spectrometer, with Tom here on the right side. The fact that he was able to drink a beer means that taking the whole picture took a terribly long time. So what we actually need are movies of the mouse habitat: we need a camera that is able to record chromatically correct movies of the mouse habitat, and we need a way to display stimuli in a spectrally correct manner, but also spatially; we need a spatial stimulator. And this is what I want to talk about in the next half hour or so. First about the stimulator, because that's already published. We recently published this paper as a collaboration between the labs of Katrin Franke and Tom Baden; Maxime here was also involved. We published open-source designs of a spectral stimulator that can be adapted to the needs of the species. It has up to six spectral channels; it uses a LightCrafter and a light engine with different LEDs to project the light stimulus, for example onto the retina or into a chamber with a zebrafish. It has the capability to produce spatially resolved patterns, as you can see here, and it is highly adaptable. If you're interested, there is a GitHub repository with all the details, and it is being updated constantly. For our purpose, for the mouse, we built a version that can stimulate the UV cones and the green cones separately and, at the same time, is able to use fluorescence imaging to record signals from different calcium indicators. This is done by spectrally separating the stimulation light, for example in the yellow part to stimulate the green cone and in the UV part to stimulate the UV cone, from the recorded green and red fluorescence. In addition, stimulation and scanning are separated in time; you can read all the details here. So the stimulator does not only work for electrical recordings, it can also be used for imaging experiments. We also have detailed worksheets in the repository that explain how to calibrate the stimulator, for example to correct the gamma curve of the stimulus presentation, or to calibrate the intensities so that you can directly calculate photoreceptor excitation from your two wavelengths. And this is a demonstration that it really works. These are again recordings from mouse photoreceptors, very similar to the 2013 study. You see the responses to a green sine wave and a UV sine wave for cones in the dorsal retina, in the ventral retina, or somewhere in the middle, and you see how the sensitivity of the cones shifts from green-sensitive to UV-sensitive. From the calibration, we can also determine what a silent-substitution stimulus would be, and you can see that we can separate the response to the green stimulus in the green cones from the response to the UV stimulus in the UV cones. So, using these calibrations, you can very precisely project a natural scene onto the retina. Last week, Katrin Franke presented her work on color vision in this series as well. I just want to repeat this because it fits so nicely with the need for studying color vision in mice.
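The silent-substitution logic mentioned above boils down to a small linear system: a calibration matrix says how strongly each LED excites each opsin, and you solve for LED modulations that change one opsin's excitation while leaving the other untouched. A minimal sketch with made-up calibration numbers (the real values would come from the spectrometer calibration):

```python
import numpy as np

# Hypothetical calibration matrix: entry [i, j] is the excitation of
# opsin i (row 0 = S/UV, row 1 = M/green) per unit power of LED j
# (column 0 = UV LED, column 1 = green LED). Illustrative numbers only.
A = np.array([[0.90, 0.10],
              [0.15, 0.85]])

# Silent substitution: modulate the M-opsin excitation by +0.5 while the
# S-opsin excitation stays constant (change of 0).
target = np.array([0.0, 0.5])           # desired (S, M) excitation change
led_modulation = np.linalg.solve(A, target)
print(led_modulation)                   # required (UV, green) LED changes
```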
So Katrin showed work from Marie, who is a student in both our labs, where she recorded cone photoreceptors across the whole retina and found basically the opsin distribution: the dorsal cones are green-selective while the ventral cones are more UV-selective. But for the surround, the result was different, resulting in color-opponent cone responses, a center-surround color opponency, in the ventral cone photoreceptors, while dorsal cones are not, or only to a lesser degree, color-opponent. This was already done with this kind of stimulator, and it nicely confirms that the ventral retina is indeed able to pick up chromatic signals. So much for the stimulator. Now, how do we get movies of the natural environment? Basically, we want to stick our heads into the places where mice live, close to the ground, and capture what is relevant for mice: the immediate environment, and also bad things possibly in the sky. And we want to do this the way mice see it, with the proper resolution and in the required chromatic channels. The best way, obviously, would be to take a mouse, put a camera on its head, and put it in the field to record what it sees, which is obviously not that easy. So we decided to go a simpler way. We thought: what kind of movies can we record that capture more or less what mice see when they move around? For this, we looked into eye movements, not directly, but in the literature, and there are very nice papers, just recently out, that study mouse eye movements in high detail. As it turns out from the last couple of papers, there are at least two different kinds of eye movements. Mice move their eyes over long stretches of their behavior to compensate for the position of the head, for head tilt, just to stabilize the gaze relative to the horizontal plane of the field of view; this was very nicely shown in this paper by Meyer et al. And just recently it was shown that mice also do something else: they use their head and their eyes to perform a saccade-and-fixate pattern. This is something we did not consider here, but in terms of behavior I think it is extremely interesting. To come back to the first point: in that Meyer paper from 2018, they showed, by recording eye motion and head position, that you can basically predict the eye motion from the head position and vice versa. This means that when the mouse is roaming around, it moves its eyes to counter the head motion, trying to stabilize the image on the horizontal plane of the retina. And this makes the job a little easier for us if we want to record movies of the environment. So what we did was to say: we consider just these stabilizing eye movements, and we built a camera mounted on a gimbal. This is Yongrong, a student in my lab; he and CC built this camera over the last couple of months. The camera sits at the bottom of the stick; this is the gimbal, you can see it here, and it keeps the camera always stabilized relative to the horizon. No matter how the whole thing shakes when he walks, the camera stays stable. The fisheye lens sits at the bottom of the camera, so you can bring it extremely close to the ground, walk around, move the camera along the ground, and record the scenes. The camera is actually two cameras.
It's built from two Raspberry Pi cameras; one is here, one is here, and they sit behind two spectral filters to select the bands that are meaningful for mice, the UV band and the green band. We have a dichroic mirror in the middle to split the image into these two color pathways, and here you see the fisheye lens that captures the image. So how do these movies look? The next slide shows an example. This is the full fisheye movie on the Raspberry Pi camera: the UV channel on the left, this is the green channel, and this is the overlay of the two channels. You can see the camera is very stable relative to the ground and the horizon. Let me forward that a little bit... I cannot forward it now. Okay, but you get the point. So Yongrong and CC then went out to very different scenes close to Tübingen and took footage of the environment at different times of day, mostly during the day but also some scenes in the evening. We then took these movies and preprocessed them. As I said, we use two cameras, which means we have to align them spatially and temporally. For the temporal alignment, we record signals from two built-in LEDs, which we can use as synchronization markers to assign the frames of the two cameras to one single overlay frame. For the spatial alignment, we use features in the two frames and then overlay them in a non-rigid, pixel-wise manner. And for the spectral calibration, we use defined LEDs and a spectrometer; I will show you this in more detail. We know the sensitivities of the camera, due to the filters. We have LEDs whose spectrum we know very precisely and whose intensity we can tune very precisely, so we can record the LEDs with the camera and obtain calibration curves relating the intensity on the camera to the power of the UV and green LEDs. With this information we can turn the raw images from the camera into calibrated images. I show you the same scene here, gamma-corrected for better visualization on the screen. This is the calibrated image from the camera, and here, in comparison, you see the image from a scanning spectrometer, like the one we used in the earlier paper. We took a couple of scenes with the spectrometer so that we can compare the intensities in every pixel with the camera images, and you can see that they match quite nicely. We then looked at different features of the scenes, like grass, trees, and sky, compared the calibrated images with the spectrometer results, and found that we can correct the images spectrally very well. Okay. So we took a lot of different scenes from different environments. This is an example: single trees in a grassy area. We also have footage from scenes where the grass is very high, or close to the forest. We took these scenes and first looked at the intensity distribution along the vertical axis. This is shown here for one movie as an example, where we take the intensities in the two channels in the upper and the lower visual field. What you see from these plots, with the green and UV intensities plotted against each other, is that in the lower visual field the two chromatic channels correlate quite nicely: you have a high correlation between green and UV intensities, while in the upper visual field the correlation is not as clear.
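The intensity calibration described above can be thought of as inverting a measured, monotonic camera response curve, one per spectral channel. A minimal sketch, with made-up calibration points standing in for the LED and spectrometer measurements:

```python
import numpy as np

# Hypothetical calibration data: raw camera counts recorded while imaging
# a calibration LED at known optical powers (measured with a spectrometer).
raw_counts = np.array([12.0, 40.0, 95.0, 160.0, 210.0, 245.0])
led_power = np.array([0.1, 0.5, 1.5, 3.0, 4.5, 6.0])  # arbitrary units

def calibrate(raw_image):
    """Map raw camera counts to calibrated intensity by interpolating the
    monotonic calibration curve; in practice one such curve would be used
    per spectral channel (UV and green)."""
    return np.interp(raw_image, raw_counts, led_power)
```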
So you have much less correlation in the intensities here. The second thing you see is that at the ground, the intensity range is usually lower compared to what we see at the sky, at least in these clear-sky areas or areas with a few trees. Another thing we noticed is that the brightness of our scenes is quite different, and we wanted to look at the scenes in a more organized way. So we decided to divide the movies into three categories: a category with low mean intensities in the two channels, a category with medium intensities, and a category with high intensities. These are divided simply by the mean intensity in the channels, by splitting the histograms. But you can see already from these example images that they also represent slightly different scenes: in the low group you usually find scenes close to the forest or in shady areas, while in the high-intensity scenes you find areas with little grass on the ground, high reflectance from the ground, or where the sun comes from behind, for example. So in our analyses we always consider these three intensity groups. Okay, the first thing we looked at was how contrast is distributed in different parts of the visual field. For the following analyses, we always focus on image crops from the upper and the lower visual field; we just consider these central areas here. The main reason is that we have distortions due to the fisheye lens towards the sides, so we wanted to focus on parts with little distortion. This shows example image crops from the upper visual field and the lower visual field, the UV and the green channel, and the scene overlaid. We can then filter these scenes with different receptive field sizes, and within these receptive fields we calculate the contrast. We use the RMS contrast, the root mean square contrast, as a standard measure of contrast in natural scenes. This is the same scene filtered with these receptive field filters; you can see how the contrast is highlighted in the two channels, and this is the difference between the two channels. Already from this you see that there is much more going on in the upper visual field in terms of RMS contrast. This even increases when we use larger filters: with a 10-degree filter, you see that the contrast in the two channels gets higher. And these are the histograms for the contrast in the UV and the green channel; the distribution of contrasts is much more diverse and much broader in the upper visual field compared to the lower visual field. So we did this systematically for the different intensity scenes and for different receptive field sizes. The range of receptive field sizes we picked ranged from something corresponding to the smallest receptive fields in the mouse retina up to approximately the receptive field sizes found at further stages of the mouse's early visual pathway. Then we looked in detail at the distribution of the contrasts in the two channels in the upper and the lower field. I show this for the different intensity ranges and for two different receptive field sizes. You can see that the larger the receptive field size becomes, the more different these distributions get, and they are more different for the low- and medium-intensity scenes and less different for the high-intensity scenes.
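One common way to compute such a local RMS contrast with Gaussian "receptive field" filters of different sizes is sketched below; the exact definition used in the study may differ, and the pixels-per-degree factor is a placeholder:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_rms_contrast(img, rf_sigma_px):
    """Local RMS contrast: standard deviation of intensity within a
    Gaussian 'receptive field' of width rf_sigma_px, normalized by the
    local mean intensity. img should be a float array."""
    local_mean = gaussian_filter(img, rf_sigma_px)
    local_mean_sq = gaussian_filter(img ** 2, rf_sigma_px)
    local_var = np.clip(local_mean_sq - local_mean ** 2, 0.0, None)
    return np.sqrt(local_var) / (local_mean + 1e-9)

# Example: compare 2-degree and 10-degree receptive fields, assuming a
# hypothetical scale of 10 pixels per degree of visual angle.
# c_small = local_rms_contrast(img, 2 * 10)
# c_large = local_rms_contrast(img, 10 * 10)
```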
We can summarize this also in plots where we show the RMS contrast for the UV channel in the upper visual field; these are the solid lines here, as a function of receptive field size, and you see that with increasing receptive field size you get a higher difference in contrast for the upper visual field. This is similar for the medium-intensity scenes and less so for the high-intensity scenes. Since we are only looking at the medians of the distributions here, we also wanted a more unbiased measure of how different the distributions of contrast are in the two channels and the two visual field parts. So we used the Jensen-Shannon divergence as a measure of the difference between distributions. You can see here that the difference between the distributions of UV and green contrast is much higher in the upper visual field than in the lower visual field, and this is also true for the medium-intensity scenes. Okay, to summarize: we have high RMS contrast in the upper visual field in the UV channel, and it increases with the receptive field size of the filter. We also see that UV and green are much less correlated in the upper visual field, and less correlation means there is chromatic information that can be used by the mouse. These findings resonate quite well with what Katrin presented last week about color vision in the mouse. This is the work from Claudia in Katrin's lab, the paper that was published just at the end of last week, where she showed that there are ganglion cells with color-opponent responses and that these cells are found predominantly in the lower, ventral part of the retina. You see an example of one of the ganglion cell types that showed a high frequency of color-opponent responses, and here is the summary across the multiple ganglion cell types she found. The red dots mean that there is color opponency in many of the ventral retina ganglion cells. This fits extremely well with what we see here, that there is a lot of chromatic contrast in the upper visual field. And it fits as well with the behavioral experiments in the study by Denman et al., where they showed that color discrimination in mice is much better in the upper visual field than in the lower visual field. Okay. The next step was to look at the contrast distribution in terms of positive and negative contrasts. For this we use a different measure we call on-off contrast, which basically describes whether there is a positive or a negative contrast in a certain region of the scene. Again we have example images from the upper and the lower visual field; these are the overlays, analyzed with a receptive field of two degrees, and these are the filtered maps. You see again that there is higher contrast in the upper visual field compared to the lower visual field, and again this contrast increases with receptive field size. Here is the same scene filtered with 10-degree receptive fields. You see the bias in the histograms: this is the distribution of on-off contrast for UV and for green, and you see this bias towards dark contrasts in many upper visual field scenes, similar to what we found in the 2013 study. In the lower visual field, both distributions are much more symmetrical and also narrower.
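A plausible way to implement a signed "on-off" contrast of this kind is to compare a Gaussian center against a larger Gaussian surround, so that bright (ON) regions come out positive and dark (OFF) regions negative. The sketch below is one reasonable variant, not necessarily the study's exact definition:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def on_off_contrast(img, rf_sigma_px, surround_scale=2.0):
    """Signed local contrast: Gaussian center versus a wider Gaussian
    surround, with a Michelson-style normalization bounding the result
    to roughly [-1, 1]. Positive = ON (bright), negative = OFF (dark).
    img should be a non-negative float array."""
    center = gaussian_filter(img, rf_sigma_px)
    surround = gaussian_filter(img, surround_scale * rf_sigma_px)
    return (center - surround) / (center + surround + 1e-9)
```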
So to do this more systematically, we looked at low-, medium- and high-intensity scenes from the upper and the lower visual field. You see here that the distribution in the upper visual field for the UV channel is much broader and skewed towards dark contrasts. This is also true, to a degree, for the green channel in these scenes. And if you look at larger receptive fields, you see again that this difference increases. So when you look at the upper visual field with larger receptive fields, there is a predominance of, or bias towards, dark contrasts. This is something that has been described earlier, for example in the paper by Ratliff et al. in 2010: there is a bias towards dark contrasts in natural scenes. We find that this dark bias also increases with receptive field size. In general, this dark bias fits well with what we had already found in the photoreceptors, but it also fits with what we see in the ganglion cells. When we plot the distribution of on-off contrast on this axis, as a function of filter size from small to large receptive fields, you can see a bias towards dark contrasts already in these histograms for the different filter sizes we used to analyze the scenes. But when we take the data published in the study by Baden et al. in 2016, where we recorded a lot of ganglion cells, we can also look at the relationship between the on-off index, which is something similar to the on-off contrast here, and the receptive field size of the cells. When we make the same plot for the real cells from that study, we find a similar distribution: there is a bias towards dark contrasts. And when we look at which cells have this bias towards dark contrasts, they are actually cells with large receptive fields. This is the on-off index of the recorded neurons as a function of receptive field size, and you can see that the larger the receptive field size gets, the more the on-off index is biased towards dark contrasts. So we think that the distribution of contrasts we see in the natural scenes of mice matches the neural representations at the level of the retina very well. Okay, we did one further analysis, because we wanted to know whether these contrasts could also be picked up by a neural network. We set up a convolutional autoencoder network. The idea was that we train it with scenes from our natural environment and look at how well it can reproduce the scenes, but at the same time we put a bottleneck in the middle: we restricted the information transfer between these two layers by different parameters, to see whether the model still learns important features of the input movies. As a measure of performance, we compared the input and the output images. This is shown here for different levels of regularization, which we used to narrow the bottleneck between these two layers. When we have almost no bottleneck, we can reconstruct the scenes very well; if we restrict the information flow very much, we get basically no reconstruction.
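In PyTorch, a toy version of such a convolutional autoencoder might look like the sketch below. The two input channels are UV and green, the layer sizes are illustrative guesses rather than the published architecture, and the bottleneck is narrowed by increasing an L1 penalty on the latent activations:

```python
import torch
import torch.nn as nn

class ChromaticAutoencoder(nn.Module):
    """Toy two-channel (UV, green) convolutional autoencoder with an
    information bottleneck in the middle; all sizes are illustrative."""

    def __init__(self, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, n_latent, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(n_latent, 16, 5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 5, stride=2, padding=2,
                               output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)          # bottleneck activations
        return self.decoder(z), z

def loss_fn(model, x, sparsity=1e-3):
    """Reconstruction loss plus an L1 penalty on the bottleneck; a larger
    `sparsity` value corresponds to a narrower bottleneck, as in the
    regularization sweep described above."""
    x_hat, z = model(x)
    return ((x_hat - x) ** 2).mean() + sparsity * z.abs().mean()
```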
So we picked a parameter combination where we still get reasonable images reconstructed from the original movies, but where we already restrict the information flow, making the bottleneck quite narrow. Then we checked what kind of filters the autoencoder learns from the movies. This is an example for this combination of parameters. These filters can be viewed as receptive fields, and what we found was that if you train the autoencoder with movies from the upper visual field, you find color-opponent filters much more often. This is an example that responds in an opponent way to UV and to green. We did this systematically for different parameters and repeated the experiment, and we found consistently that training the autoencoder with upper visual field scenes lets it learn color-opponent kernels, while if you use scenes from the lower visual field, you very rarely get color-opponent kernels. To us, this is a different way of showing that there is interesting information that neural networks, in this case, can learn from the different parts of the scenes. Okay, so this was all about movies recorded at daytime. We also took some movies close to dusk and dawn. The reason we did this is, first of all, that this is when mice are also very active. The other reason is shown in this plot from a study by Johnsen et al.: towards sunset, the short-wavelength band in the blue and UV range increases relative to the long wavelengths, so there is a relatively high ratio of short to long wavelengths shortly before sunrise and shortly after sunset. These are example images that we took, at dawn in this case. You can see the sun slowly rising in this sequence of images; this is the UV channel and this is the green channel. And you can already see from these images that in the UV channel the sky is much more homogeneously illuminated than in the green channel. We also looked at this more quantitatively: at different time points, the intensity is plotted along this line, leading away from the sun. You see that the UV intensity along the sky is usually much more homogeneous, and at some points it is even a bit brighter than at the green wavelengths in this range. In terms of RMS contrast, the contrast in these regions away from the sun is higher in the UV channel than in the green channel, and again this increases with receptive field size. So this suggests that the UV channel is also quite useful at these low light levels, especially around sunset and sunrise, for detecting dark objects in the sky. We tested this by flying a drone into the field of view of the camera. This is an image of the dark drone, and you can see very clearly that it is much better discernible in the UV channel than in the green channel. From this we suggest that the UV channel, at least at these light levels, might not be that important for color vision in the classical sense, but it might be an extremely useful channel for detecting dark objects against a relatively homogeneously illuminated sky. Okay. In the last slides, I want to show you what the retina thinks about our natural movies, because so far we have just analyzed the movies; we haven't asked a mouse what it thinks about them. The next slides are very preliminary, and I just want to give you this as an outlook, because I think this is an exciting approach. So we took our movies.
And this is an example: the same crops that we used to analyze the contrast properties, now used as a stimulus for ganglion cells in the whole-mounted mouse retina. This is an example of a stimulus movie playing on our stimulator; the whole range here is 30 degrees of visual angle. We have several test sequences and training sequences, and we play this to a field of about 100 cells in the ganglion cell layer, labeled with a calcium indicator and imaged with two-photon microscopy. This is also something Claudia did here in the lab. And this shows you that the cells actually like the movie: this is a movie presented to a field in the dorsal retina and to a field in the ventral retina. The cells are blinking; there is quite some activity while the movie is playing on the retina. So the retina actually responds to the movies that we think should be ideal for a mouse retina. We then analyzed the activity in these neurons. We again used a CNN, a convolutional neural network model, that was motivated by the structure of the retina: you have mosaics of cells of the same type, and they cover different parts of the visual field. So we read out the network at different points. A key feature of this network is that it consists of a feature space that is shared by the different neurons and is learned jointly across them. This concept of a shared feature space was developed in work by David Klindt in collaboration with Alexander Ecker and Matthias Bethge, published in this paper. The whole modeling framework we used for the analysis was developed by the group of Fabian Sinz, and it is called nnfabrik. The model was restricted so it can use 16 kernels per color channel, 32 channels in total. So how do the responses of this network look when we train it with the neural data from the ganglion cells? This is an example fingerprint of such a neuron: these are the responses to a moving bar and a chirp stimulus; we still play these stimuli in order to compare with our older data set. And here you see the response of the neuron to a small section of the movie: you see the UV mean intensity and the green mean intensity of a small movie sequence, in gray the actual response of the cell, and in red the response predicted by the model. And here we calculated MEIs, most exciting inputs, that are predicted by the model: basically movies that are predicted to excite this specific cell type best. You can understand them a bit like receptive field filters, but they actually represent the best stimulus. You see the spatial organization of these stimuli here and the temporal time course; in this case this would be an ON cell. And the last image shows a frame from the movie and where the model cell was located within the image; this is the center where we recorded the cell. So this is a good result. We have such movie recordings from the dorsal retina, and you can see here that we get different cells that respond to the movies, and we also get different MEIs. These are ON cells, and here we have two OFF cells. You see that the model's prediction of the responses is quite good.
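Conceptually, an MEI is obtained by gradient ascent on the trained model: start from a noise stimulus and iteratively adjust it so that one model neuron's predicted response grows. A minimal sketch, assuming a trained `model` that maps a stimulus tensor to the target neuron's predicted response; the tensor shape, step size and intensity range here are placeholders:

```python
import torch

def compute_mei(model, shape=(1, 2, 50, 18, 16), n_steps=200, lr=1.0):
    """Most exciting input (MEI) by gradient ascent. `shape` is assumed
    to be (batch, color channel, time, height, width) for a UV/green
    movie stimulus; `model(stim)` is assumed to return the predicted
    response of the neuron being optimized."""
    stim = torch.randn(shape, requires_grad=True)
    optimizer = torch.optim.SGD([stim], lr=lr)
    for _ in range(n_steps):
        optimizer.zero_grad()
        response = model(stim).mean()   # predicted response to maximize
        (-response).backward()          # ascend rather than descend
        optimizer.step()
        with torch.no_grad():
            stim.clamp_(-1.0, 1.0)      # keep within displayable range
    return stim.detach()
```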
We also see cells in the ventral retina here. And if you compare the time traces for green and UV in the dorsal versus the ventral retina, you can see that in the dorsal retina the green response is much higher, as you would expect, while in the ventral retina the green response is lower and you see a stronger UV response; this matches what we expect. These MEIs, these most exciting inputs, are not only stationary images; you can also visualize them as movies. This is an example for one of the cells: this is the stimulus that this cell is supposed to respond to best, an OFF cell in this case. And in one of the very last experiments, we also found what we think is a color-opponent cell. This is an example fingerprint; here you see the two MEIs, with opposite polarity in the surround and in the center, and here the MEI animated: you can see that the green and the UV stimuli look quite different. I think this is very promising, but it is also very preliminary, because before you can really say that this is the most exciting stimulus for this particular neuron, you have to test it, and ideally you do this in closed-loop experiments, as has been shown in these papers. So what we are actually doing is: we record cells with the natural movies, then we calculate the most exciting stimuli for these neurons, play them back to the retina, and then see whether each neuron indeed responds optimally to its specific MEI. That would be the confirmation that these are the specific stimuli these cells respond to. I think I'm out of time; let me just summarize. I've shown you that we have an open-source spatial stimulator that can be adapted to different animal species; this is an important tool for dealing with the different visual requirements of different species, and we have a mouse version that we think works very well for presenting natural scenes to the mouse retina. I showed you some analyses of our scene-camera movies, where we recorded UV and green footage of the mouse environment. We analyzed specifically the contrast statistics, and we think they suggest that there is rich chromatic information in the upper visual field of the mouse. We tested with different methods, with the autoencoder model but also with more traditional statistical methods, whether chromatic information can be extracted from the upper visual field. We suggest that the UV channel may support predator detection, specifically at dusk and dawn. And at the end I gave you an outlook: I showed you that the ganglion cells actually respond to these natural movies in an interesting way, that we can estimate most exciting inputs from the responses, and we hope to learn from that which features in the natural scenes the mouse retina responds to best. With this, I'd like to thank all the collaborators, and especially all the students and postdocs who were involved in these studies and contributed different parts. Here are all the groups that directly collaborated on this work; it was a lot of fun and I really enjoyed it. I hope that you have enjoyed this too and that you have questions. Thank you. Thank you, that was quite interesting. I have many, many questions myself, but they're mostly technical, I guess; I'll bother you later with those. I'm just going to share a link to the Zoom room we are in, if you want to join and interact with us.
We have actually quite a few questions. Can we just start from the top? You have many questions, and most of them concern receptive fields. I think you have to ask the questions and then I answer them. So, I have a question from Greg Schwartz: is the difference big enough at the receptive field size of bipolar cells, approximately one and a half degrees, for the mechanism to start there, or is this likely happening after bipolar cells? This is a good question. I think one and a half degrees could already be large enough to pick up these differences in contrast for specific scenes, maybe not for all scenes. So it could start happening there, I would say, but we haven't looked at filters smaller than two degrees so far. There's a follow-up from Greg Schwartz as well: on average, does a dorsal retinal patch respond better to a dorsal movie than to a ventral movie, and vice versa? Yeah, that's one of the motivations for doing these experiments; that would be cool, right, to see different responses of ventral cells to dorsal movies versus ventral movies. We don't know yet. We think there are differences: in the populations in the ventral and dorsal retina we have so far, there seem to be differences in the responses, but we haven't quantified that, and it's a bit early. But that's clearly something we want to look at. I'm sorry for the name, I will say it wrong: I have a question from Dimokratis. Does the model also predict the artificial stimulus responses you recorded, for example the chirp? No, we haven't tested this. The responses I showed are the raw responses from the cells that were modeled, and currently we just use them to get an idea of what cell type we are looking at, in the traditional way, from our knowledge of how the different cell types respond to the chirps. We could then make a link to these previously described cell types, but we haven't tested that. Right. I have a more general question from Marla Feller: does the mouse lens have a flat transmission curve across all wavelengths, or does it cut out some part of the spectrum? I think it is flat, but I'm not sure. I remember very long discussions with Frank Schaeffel, who is a specialist on mouse lenses, or lenses in general, and he did a lot of measurements. I'm not sure, but that's something I should probably check. Marla continues with another question, and actually we are thinking about the same thing in our own lab: have you ever compared the response properties of wild-caught mice versus lab mice, which are raised with very different color and contrast statistics? The question, I guess, is mostly: since those lab mice have never seen UV, do you think that wild mice would be more tuned to this kind of input? This would be a very interesting experiment. Unfortunately, I'm not really sure how to do it; we cannot just catch mice here in Germany. I have to be very careful what I say. So we cannot do this. But we had, or have, a mouse line that is supposed to be genetically closer to what is considered the original wild form, and these mice certainly behave very differently; they are wild mice, they really behave differently. And we wanted to test whether they respond differently.
But so far we have just looked at general immunohistochemical markers to compare the retinas. We had a collaboration with Christian Puller in Oldenburg, but so far we haven't seen anything really exciting, so it looks as if their retinas are very similar to those of the normal Black 6 mice. But, in terms of what I showed at the beginning, the differences in photoreceptor distributions between mouse species that are partially closely related: yeah, it's a good question whether we would see differences in a more wild-type mouse. I don't know yet; that's something we're wondering about for the near future. I have a question from one of your students: approximately what percentage of cells within the imaging field respond to the natural movies? Sorry, which question is this? What percentage of cells respond to your natural movies? It depends on how you define responding well, I think. If I remember correctly, the percentage of cells that give good responses following our quality criteria is very similar to what we get when we display dense noise stimuli; I don't think there's much of a difference. So they're not obviously responding better, but also not obviously worse. I have one from Baden: are the MEI polarities and the chirp responses inverted relative to each other? You mean in this presentation? For this we have to look at it again. I know we had a long discussion about how to display the MEIs and the temporal filters correctly so that they represent the response of the cell, so there might just be a mismatch in the presentation, but we can look at this afterwards. I guess we'll continue talking about that later on. I have one from Martin Spacek, who was apparently very interested in your drone-as-predator demonstration. He wonders what happens to UV when the sky is completely overcast; does that have implications for avoiding predators on cloudy days? Good question. I must say I failed to make my students go out in the rain or in bad weather to record movies, so a surprising number of the movies we got were taken in very nice weather; I don't think that's a coincidence. I mean, there are overcast recordings, and I don't think it makes much of a difference, but I will have to look at it. Good question. We have one from Tim Gollisch: how different are the MEIs from classically measured receptive fields, for example with white noise? Also a good question; this is something we're comparing right now. In these closed-loop experiments we've started, we also play other stimuli, and we try to play the dense noise together with the natural movies. We don't know yet how different they are, because you cannot get classically measured receptive fields from the natural movies, so you're using two different methods to determine the features of the cells. That's something we have to look into, but we don't know yet; as I said, these MEI recordings are something that happened in the last three or four weeks. Looking forward to more results. I will finish with Greg Field, who has a question and a comment. First, he apparently really enjoyed the talk, and he has a suggestion for making your movies better, which he will discuss with you directly. Then his question: what are your plans for looking at local motion and flow in these movies?
Yeah, we looked at the optical flow in these movies, but I don't think it makes much sense, because the optical flow reflects the student who is carrying the camera, whether he has shaky hands or whether he thinks he moves like a mouse. I think this needs to be done with cameras that are mounted on the animal itself. And there are these very exciting papers, I think a preprint that came out recently, where they have four cameras on the head of a mouse and record almost everything. I think this is the way to go: if you want to look at motion statistics and optical flow, you need something that moves like a mouse head, in combination, of course, with the movement of the eyes. That would be interesting to develop. Thanks a lot, Thomas. I encourage everybody to join us in this little Zoom room if you want to discuss with us, continue asking questions, or follow up. While we're waiting for others to join us, I just want to say that it was very nice hosting all these Sussex Vision talks. Our lab is hosting Michael Tri Do next week; after that we're going to take a summer break, and we will start the talks again in September on a more regular basis, starting with Thomas's lab. So hopefully we'll see you again in September, and we hope to have more talks. So, hello everyone. I see that Jeff Diamond is with us; I'm not surprised. Hello. You are all muted; the room is filling up quite quickly. Jeff is muted. Oh my god, Thomas, that was so awesome. Nice talk, Thomas, that was awesome. Thank you. Can I just ask you guys to mute your YouTube streams? We hear the feedback through your mics. So, while everyone is switching off the YouTube: Thomas, have you in the meanwhile recorded a video of a drone approaching in the UV channel? I haven't done this, but we have to get funding for a drone. A biological drone, like a bird. Yeah, that would be really cool. I mean, we were looking into falconers, if that is the right name, people who handle birds; maybe you can train the birds to fly onto some target and record that. That would be really cool, but we haven't found anyone who does this. I can help you out with that in France, if you want. Yeah, sure, that would be nice. A drone probably does the same thing, but for the talk it would be nicer to have an actual bird. Natural wing-beat statistics. So, those color-opponent cells: how many of those do you have that are really color-opponent in that way? The one at the end was very different from the ones Katrin was showing, right? That was not a rod-surround thing; that was color-opponent in the receptive field center. Right. And you have both? I forget whether that was UV-on/green-off or vice versa. It was UV-on/green-off. Not many; I mean, not many that we trust. As I said, this is something that happened in the last couple of weeks, and we did two of these closed-loop experiments. One worked; the other at first seemed to have worked even better, but in the end didn't work as well, and we don't know yet why. In the second experiment we lost the cell, so we don't know whether it would actually have responded best to this stimulus. Yeah. Right.
You have to check with the neural network. Yeah, it's just interesting, because the dense noise, or everything Katrin was doing, mostly didn't pick out things like that, right? So it suggests that perhaps you actually might need a natural movie there. The dense noise is also tricky. I mean, we tried to get color-opponent receptive fields with dense noise; I don't know, maybe it's the short recording time, I don't know what it is, but I think there's just too little surround stimulation. So we try to play stimuli where the surround is more pronounced, and then you see that better. I'm still a little puzzled, so I hope that we can find more cells with this MEI method that are really color-opponent. And the cell I showed you was actually also in the dorsal retina, which is also a bit weird; but Katrin's data showed that dorsal cells can also be color-opponent, just not as many. How much data do you need before you get these MEIs? The movies are 15 minutes, but it looks as if we need a bit more, so maybe 20 minutes of recording time. And then you have to calculate the MEIs quite quickly, because you want to display them to the same cells before they have changed over the time course of the experiment. I think this is the most tricky part: you have to get an estimate of the quality of the cells in the second run and take into account that you have probably also lost some of the fluorescent indicator to bleaching and so on. But yeah, it seems to work. So the calculation of the MEIs is not currently the limiting issue; it's placing the ROIs reliably in the field, because that's still mostly manual checking. Thomas, when you said you went back to confirm that the MEIs were really the MEIs: were you then comparing each cell's response to its presumptive MEI with all the other MEIs? That would be the control experiment. Okay, so you want a confusion matrix where ideally you have just a diagonal: every cell responds best to its own MEI. Or maybe, if cells have very similar MEIs, they cluster around the same MEI. If you compare this to what people see in the visual cortex: there the MEIs look much more complicated, lots of Gabor-like patches and stripes, while most of the MEIs we see here are roughly center-surround with a little structure around them, so it's not so unexpected for the retina, I would say. But I think they won't be that cell-specific; that's currently my guess. I'm not sure, we have to see. Marla, you're muted. So I have a question about that, and this is very naive; you guys are all much more familiar with these methods. When people have done reverse correlation in the past, like with EJ, there's been this argument about linearity, right? Being able to do reverse correlation rests on assumptions of linearity that don't apply when using natural scenes; EJ has made a big point that any structure in the stimulus makes the reverse correlation mathematically illegitimate. So can you explain to me, having heard EJ talk about this for decades, why it's okay to use an MEI to identify an MEI? His argument for that, or my argument for using MEIs? Whether he would approve of you using MEIs, I guess, maybe that's another way to put it. I mean, first of all, I think I've never given a talk knowing so little about what I'm talking about.
So I'm learning a lot about the MEIs and so on; this really is exciting, I think, but I'm still learning. As I understand it, compared to the traditional methods you're letting the network find the best-fitting movie, and this is a nonlinear process, so it's not the same thing as the traditional linear regression approach. I think it gives you more degrees of freedom. And EJ would make the argument that if you're going to say that a stimulus caused the spike train, you have to assume that there is this linear relationship, and that's why you have to use an unstructured stimulus. I don't know; Greg or someone else can probably make this argument better. Yeah, it mostly solves that; it solves it statistically, as long as the model is not overfit, as long as the neural network actually converges on something reasonable. It doesn't have to; there's always the danger in a neural network approach that you just get garbage out because it didn't train well enough. You can always get that, but it allows for spatial nonlinearities, I believe. And this is also why we restricted the number of kernels it can use, to make sure it's not overfitting. And as an experimentalist, I think the key step is really the closed-loop experiment: you have to show that a cell responds better to its own MEI. And then, if you also take the receptive field measured with a traditional noise stimulus, calculate the preferred stimulus from that, and compare, the cell should still respond better to its MEI. That would be the ultimate test that this works. Yeah, but I don't know that anyone has actually done that for any neural network; it's similar to maximally informative dimensions, but I don't know that that direct comparison has been made either. How would you go about analyzing the differences between the MEI and the spike-triggered average, the linear approach, once you have those two things and the MEI elicits a better response? What do you do to distinguish which elements are in the MEI that aren't in the linear result? That's a good question. Compare the two kernels and see what the difference is, what features make the cell respond better. And then, I guess, you could take those features, put them onto the linear filter, and see when it starts to become better or not. Something like this: you morph them into each other and see what the key feature is that makes the cell respond better. I guess that problem has a lot of dimensions; it could be, for example, the temporal structure, the timing of the two color channels. Thomas, can I ask a question about optics? When you're making these movies, and this is a question about mouse eyes: do mice accommodate? Is there accommodation? I don't think so, so it's just as blurry near as far for the mouse. I mean, Frank Schaeffel claims that everything from some tens of centimeters out to many meters is basically in focus for the mouse; I think that's the specialty of the mouse eye and the advantage of it being small. And I just see that Laura is now in the Zoom, so everything concerning mouse eyes you can ask her; she knows this much, much better than me. But, Rowland, you were saying?
Well, the question is whether it's reasonable to use this kind of lens and just have everything in focus. Yeah, I would say it's a reasonable assumption; it's unclear. So that is one reason why we take these central crops of the images. Tom pointed this out in comments on the manuscript: there are chromatic aberrations that you cannot really prevent towards the edges of the lens. In the center they are fine, but towards the edges you have certain chromatic aberrations. And somebody asked this in the question list, I think: what about chromatic aberrations in the mouse eye itself? I know the mouse lens has a fairly flat transmission curve, so for UV it goes in the same direction.

How much of the retina is your stimulus covering? I'm just wondering about longer-range interactions. It's 30 degrees of visual angle; on the retina, that's about one millimeter. So a big chunk, yeah.

Hey Thomas, I was wondering why you chose to use a fisheye lens. Is it really just so that you could project it across more of the retina, somewhat representatively? Yeah, the decision at the beginning was that we wanted the camera to match the properties of the mouse eye as closely as possible; the mouse eye covers almost 180 degrees, so we would really see the movies from the mouse's perspective. But then it's really difficult to deal with the spatial distortions at the edge of the lens, so in the end we decided to use the central crops. But in terms of visualizing what might be the input to the mouse, I think it's still useful to do this. If you want better optics, though, you would probably replace it with a lens with a smaller field of view.

I was thinking about the data you showed on the differences in color and on/off contrast, dorsal versus ventral, and then the differences across receptive field size. In the dorsal retina we have much larger receptive field sizes, right? We have that in the mouse retina, we have that in the guinea pig retina, and I was wondering if you ever considered how that behaves and how it then really looks for the mouse, if you consider these differences. And why do we then have larger receptive field sizes in the dorsal retina? It's probably not because there's less information; maybe it's to pick up more of the little contrast you have in the lower visual field, or something like that. Did you ever think about that, or did I miss something? No, I'm not sure how to relate this, really. But did you get my point? You're saying there's a mismatch between the receptive field sizes? Not necessarily a mismatch; maybe it's a perfect match, depending on the function. You know what I mean: you have larger receptive fields in the dorsal retina to be more sensitive. I don't know. Is that a general feature or type-specific? I'm not aware of this. A little bit type-specific, but generally I would think so, just based on the general density distribution in mouse and also in guinea pig. There are exceptions, because, you know, there are the alpha ganglion cells in the mouse, and of course in the guinea pig you have the visual streak where it's different; but if you look at the really dorsal periphery, there are so few cells that basically almost all types must be larger.
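The "30 degrees is about one millimeter" statement above follows directly from the retinal magnification factor of the mouse eye; the value of roughly 31 micrometers of retina per degree of visual angle used below is a commonly cited approximation, assumed here for illustration.

```python
# Visual angle to retinal distance for the mouse eye,
# using an assumed retinal magnification factor of ~31 um/deg.

UM_PER_DEG = 31.0  # rough literature value, assumed for illustration

def visual_angle_to_retina_mm(deg: float) -> float:
    """Convert degrees of visual angle to millimeters on the mouse retina."""
    return deg * UM_PER_DEG / 1000.0

print(visual_angle_to_retina_mm(30.0))  # ~0.93 mm, so 'about one millimeter'
```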
I mean, if you say dorsal periphery, that looks straight down, right? There's probably a huge amount of motion noise there. Well, it's not straight down; I mean, not straight down, but it's closer to the ground, and motion there would be much faster, so you could also speculate that you need larger cells in order to pick up motion differences better. I don't know. Wasn't there this paper about V1 in mice and the different motion statistics in different parts of the visual field? Can you comment on this, Tom? That came out, like, yesterday or something, right? I did comment, but I forgot what I said. I think they were showing that there seem to be different preferences for motion in the upper and the lower visual field; the lower visual field likes motion more. Yeah, yeah. And then I think you said, well, flow fields and so on. Yeah, because fish do that; Ari Arrenberg showed that. Yeah, so it seems to be similar in mouse.

Could I ask about ecology then, a little bit? How much time do mice spend out in the open? Did I cover that at the beginning? No, no, I'm just wondering: if we want to do serious experiments on mice now, do we have to set up a UV system like the one you showed? Or, I mean, mice also spend a lot of time moving around at night, probably most of the time, right? So for the most part, you could argue that they're using the rod system, and that we could study the mouse right now just through the rods. What are your comments on that? I mean, you probably know the papers where they study mouse activity over the year, depending on how much food is available and when they come out. If there's enough food available, they stay hidden during the day, wherever they are, and come out at night; but if food is scarce, they also extend their activity periods. And from my personal experience, if I walk around here in Tübingen, you see mice everywhere. So I think they're not active just during the night; they're active whenever nobody is disturbing them. I mean, I guess you could argue... I don't know, I would argue against this.

I think it's an interesting point, though, just to think about the fact that the whole system has to work in parallel at night and in the day, right? There are probably interesting differences in the motion statistics at night, dorsal versus ventral, and those might be very different from the ones during the day. And that all has to work in parallel; you only have one set of ganglion cells and every other kind of cell. So it's an interesting question. I mean, there are owls at night that eat mice, right? Yeah, there are all kinds of predators; the mice still have to run, they still have to move, so all the motion statistics are still there, although with, you know, different absolute luminance. But the owls will still use their cones when the mice have to use their rods, because of their huge eyes. It's a bit unfair, I think. But don't owls also have rods? Obviously, but the point is that the big eyes of the owl mean that it can use cones when the mice can't, assuming they are equally sensitive, which we don't know. They use auditory cues as well, right?
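Coming back to the motion-statistics point earlier in this exchange: the speculation about faster motion in the lower visual field could be checked directly on head-mounted-camera footage of the kind discussed in the talk. Here is a minimal sketch, assuming a grayscale movie with the horizon at mid-height; using frame differencing as a proxy for motion energy is an illustrative simplification, not the analysis from any of the papers mentioned.

```python
import numpy as np

def motion_energy_by_field(movie: np.ndarray):
    """Crude comparison of motion energy in the upper vs lower visual field.

    movie -- array of shape (frames, height, width), grayscale, with the
             horizon assumed to sit at mid-height of the frame.
    Frame-to-frame absolute differences serve as a rough proxy for
    local motion energy.
    """
    diff = np.abs(np.diff(movie.astype(float), axis=0))
    mid = movie.shape[1] // 2
    upper = diff[:, :mid, :].mean()  # sky side
    lower = diff[:, mid:, :].mean()  # ground side
    return upper, lower

# Call-signature demo with random data (real footage would go here):
movie = np.random.rand(100, 64, 64)
up, low = motion_energy_by_field(movie)
print(f"upper field: {up:.3f}, lower field: {low:.3f}")
```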
You could also argue that, since the rods seem to be active also at high light levels, they play a role at all times, so you have to consider them anyway, also in their interplay with the cone signals. Maybe one should define the rod-cone system as a functional unit in the mouse. Because, I mean... Whoa, whoa, whoa. Steady on, steady on.

I do think this question, though, about how we're studying lab mice. I mean, maybe it really doesn't matter; maybe visual experience doesn't matter at all for how these circuits wire up, and that's possible. But the more natural-scenes work I see from mice, the more deprived I think our lab mice are. I mean, their UV cones are, like, never stimulated, ever, right? And it's hard to imagine that that doesn't impact something about the downstream circuits, but it might not. Yeah, and McKellis's paper had those amazing wild-caught mice with the UV cone clumps, right? Yeah, they found a few in Israel, I guess, and they have this insane clumping of the UV cones, as if there's a circuit sitting underneath them. That was just in the supplement of her paper, but I thought that was amazing. Yeah, that's exactly what I was thinking of. And whenever you talk to people who have done brain plasticity work: people have done, you know, LTP in a wild-caught mouse, and it works entirely differently. So there seems to be evidence for that; but of course these are wild-caught, and, as Thomas has pointed out, there are many different species, and I don't know which species they were. It does make me think that, even if we don't do wild-caught mice, maybe we can come up with ways of raising the mice in cages where at least they have some sort of visual UV stimulation versus zero. Right, it could be interesting. Yeah, no, I agree. I just say: guinea pig. Thank you.

Right. So I know this is really for Thomas, but I just want to convey a very short little story, sorry to interrupt, but we had Gerald Westheimer come to our lab meeting yesterday. Gerald Westheimer is the one who brought Horace Barlow to Berkeley, which led to the discovery of direction selectivity, and I asked him to tell us the story, so I just want to tell you very briefly. The discovery of direction-selective ganglion cells is apparently very similar to Hubel and Wiesel finding that orientation was important in cortex: there was one researcher at Berkeley who was doing rabbits, and nobody else was doing rabbits at that time; everybody was using cats and primates. Who was that, was it Roland? It wasn't Roland. Little, I think his name was; is that right, Roland? And so Gerald Westheimer convinced Horace to come to Berkeley, because they were all looking for primary components of perception, like what are the primary components of what people can see. And so they put electrodes in the rabbits, and while they were setting up for the experiment they shone a flashlight on the ceiling; the flashlight went in one direction and they saw lots of spikes, and when the flashlight went in the other direction they didn't see any. And that's when they started studying direction selectivity in rabbits. They did this short little series of experiments while Barlow was there, and then they stopped working on it, and everyone stopped working on it because nobody was working on the rabbit, and a field was born.
So, there you go; I just wanted to convey that little story. I recommend everyone invite Gerald Westheimer: he's 95 years old, incredibly lucid, wonderful to talk to about the history of neuroscience, and if any of you are interested he's very open to doing this. I'm very interested. Alright, you should, because he's the father of the Stiles-Crawford effect work, by the way; he studied a lot of that stuff. Well, we can organize a little chat with Gerald if you'd like. I think I'd love to. Please, please count me in on that. Gerald Westheimer is the reason I study the retina: I had no idea what I was doing in first-year graduate school, and he taught the retina lectures in the Berkeley neurobiology course. Yeah, and that was it. He's from Berlin, actually, so he has a lot to talk about; yeah, he has a great history. So, I'm sorry to divert the conversation, but maybe you can organize a social with him. I can organize a social with Gerald; that is such a great idea. Okay, I will do it. It can be our online virtual beer hour. A virtual beer, you mean? A virtual beer.

Since we've moved to a different topic: I just want to let you know, thanks for coming; I'm just going to end the stream now, so this discussion will not be private. Sorry, I forgot to do it earlier, so this discussion wasn't private; it was just streaming on YouTube. Now we can be honest: Thomas, that talk sucked. Yeah, you know, there's, there's a...