Okay, hello everyone. After a short break, we are back for another episode of the Sussex Vision series. I know that for some this will sound a bit redundant, but let me say a quick word to remind you that this weekly series of talks about vision and visual neuroscience is part of the World Wide Neuro initiative, a platform for neuroscientists in various fields to exchange their most recent work. I have to say that up until now it has been a great way to exchange with our peers while traveling to conferences was still difficult, so we hope that this kind of talk will remain a regular practice in the future. Well, only the future will tell. Anyway, I'm Maxime, a PhD student in Tom Baden's lab. Today I'm very glad to receive Maximilian Joesch from IST Austria, the Institute of Science and Technology Austria, near Vienna. Maximilian started studying astronomy at the Pontificia Universidad Católica in Santiago, Chile. Later he moved to Tübingen and Munich to finish a degree in biochemistry. He then obtained his PhD in the laboratory of Alexander Borst at the Max Planck Institute of Neurobiology in Munich, and afterwards worked as a postdoctoral fellow with Markus Meister and Joshua Sanes at Harvard University. Now Maximilian is an assistant professor at the Institute of Science and Technology Austria, where he leads a research team focused on understanding the neuronal basis of visual transformations and their role in innate behavior. We're very happy to receive him today, and we are really looking forward to hearing about his panoramic view on vision. Hello, Maximilian. How are you doing today?
Hello, Maxime. Thank you very much for inviting me and giving me the opportunity to give this talk. First, shall I share my screen, or how shall we start?
Please use yours; you can go ahead.
Okay, let me share the screen. Can you tell me if you see everything fine?
I see your presenter mode.
Okay, it should be on one screen. So, hello, everybody.
Oh, I don't see your full screen.
You don't see my full screen? Let me see... again. What about now?
Better. Thank you.
So, hello, everybody. It's always difficult; I'm still not used to talking to an invisible audience. But I have to start by saying a few words on something that's completely unrelated to the work we're doing in the lab, because of what's happening on the world stage. I know many people who are affected, who have family there, including in my lab, where the morning discussion is: how is your family doing, are they safe? So with that, I would just ask that you please support everybody who is going through this hardship. And, as somebody much smarter than me once said, let's hope for genuine peace, the kind of peace that makes life on earth worth living. I just wanted to say those words because they were important to me. Okay, let's jump from here on to science, and I will try to be as excited as I can, given the situation, about something I find really neat that we've been doing in the lab over the past couple of years. It's a study that combines theory, experiments, and some new low-tech development that might be useful for the community. The story I'm going to tell you today is basically reflected in this picture. I chose this picture, one, because I think it's beautiful; second, because it's my dog; and third, because you can see the most relevant parts of what I want to talk to you about. You can clearly see the horizon, and you can see the difference in statistics and in luminance between the sky and the ground. The entire talk today will try to address to what extent these statistical features might have shaped the efficiency, or the way, our visual system encodes our panoramic surround. I'm going to start by introducing the team who did the work. This is Divyansh... let me see, a pointer, please.
Divyansh Gupta, Wiktor Młynarski, Olga Symonova, and Jan Svatoň. This work was just recently posted on bioRxiv, so if you're interested, you can look at all the details over there. I know that this lab is well aware of how the retina works, but I don't know if all of the audience is, so I want to start with a very quick introduction to the part of the visual system we're working on, which is the retina. The vertebrate retina can be divided into, let's say, three gross stages. One would be the photoreceptors: you have three types of photoreceptors in the mouse retina that sense light and send this information to a very complex circuitry of hundreds of different cell types. But roughly you can think of it like this: there are excitatory bipolar cells that excite the retinal ganglion cells, and a whole set of different amacrine cells that are inhibitory and sculpt the information coming from the photoreceptors, so that a particular message is sent to the brain via these retinal ganglion cells. Today's study will focus on the response properties of these retinal ganglion cells, basically the message being sent from the eye to the brain. Why? Because we think this is a good relay where we can perhaps start looking for differences and adaptations to the statistics I was mentioning at the beginning. So what do we know about these retinal ganglion cells? Well, there are a lot of different types of ganglion cells, as beautifully shown in this study by their morphologies. You have the small ones, bigger ones with asymmetries, very big ones, and so on. Each of those can be genetically defined, and you can also identify them physiologically by their response properties to light stimuli. For example, this would be a local edge detector.
This one might be more sensitive to color opponency, others might be sensitive to motion, luminance, et cetera; whatever you think might be important to divide the visual channels into what people think of as streams of information. In addition, each of those types independently tiles the retina, as you can see here in one beautiful review from Richard Masland: each type, say these smaller cells, will have its receptive fields organized in such a way that they uniformly sample retinal space. And you have that also for the bigger ones, let's say these blue ones, which also sample uniformly. The idea is that, overall, each spot and position in space will send information to the brain carrying all possible streams of information there to be read out. So this is one way of seeing the entire story. But as I was telling you before, there are strong asymmetries in the statistics of natural scenes. These were studied in more detail last year in some work from Hiroki Asari's lab, where they tried to match how the mouse (all the work we're doing is in mouse) would perceive the world and how the statistics would change. As in the image I was showing you before, what you see is that there is a strong change in luminance when you look at the ground, a strong gradient of luminance when you look at the sky, as depicted over here, and there is this very crisp line, on average, that you can find at the horizon. So we said: if the system should efficiently encode the natural world, perhaps it takes this into account to shape the way the retinal ganglion cells relay information to the brain. The question we were asking is how all of those channels would be modified such that you efficiently match those statistics.
But this is kind of a challenge, because, as I was telling you, each of those different ganglion cell types has its own function; for example, the small ones might be important as edge detectors, this one for direction selectivity, and so on, and these are functional response properties tied to a particular type. So we asked: is there a more general way of describing them? Let's go a little old-school, to how people used to describe those cells, namely by their center-surround receptive field. What is a center-surround receptive field? You can see it over here. The idea is that the center of the receptive field reflects the input the cell receives directly, whereas the surround, depicted here by the bigger donut, is the information these cells integrate via the inhibitory amacrine surround, shaping the way they respond. The most classical way of showing it: this would be one example cell, an off-center cell. If you stimulate the receptive field center, it spikes when the light goes off, whereas it has a very strong antagonistic surround: when you present an annulus, basically exciting only the surround, it spikes when the light goes on. A different way of describing this phenomenon is with spatial filters, and that's what you see here for the same cell. The center part is blue, meaning the cell spikes when the light goes off, and has a particular shape, whereas the surround is red, meaning the cell is driven most strongly when the light goes on. We will use this feature of each of the ganglion cells we have measured to ask to what extent they match the statistics of natural scenes. And this is something that has been done previously.
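To make the center-surround picture concrete, here is a minimal sketch (not the paper's code; all parameter values are made up for illustration) of such a spatial filter written as a difference of two Gaussians, with an excitatory center and an antagonistic surround:

```python
import numpy as np

def difference_of_gaussians(size, sigma_c, sigma_s, w_s):
    """Center-surround spatial filter as a difference of two Gaussians.
    sigma_c / sigma_s: center / surround widths (pixels);
    w_s: relative surround strength (0 = no surround)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2))
    surround = np.exp(-r2 / (2 * sigma_s**2))
    # Normalize each lobe to unit volume so w_s directly sets the
    # surround-to-center weight ratio.
    center /= center.sum()
    surround /= surround.sum()
    return center - w_s * surround

rf = difference_of_gaussians(61, sigma_c=3.0, sigma_s=9.0, w_s=0.8)
# Near the middle the filter is excitatory; 15 px out it is inhibitory.
print(rf[30, 30] > 0, rf[30, 45] < 0)
```

Two numbers (center-surround strength ratio and center size) then parameterize each cell, which is exactly the kind of compact description the rest of the talk relies on.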
All our work is basically inspired by the predictive coding work of Srinivasan, Laughlin, and Dubs. We implemented this normative model and expanded it so that we can actually ask our questions. I'm going to first describe how we did it and what it can do. Perhaps one way of thinking of our work is as a refreshed view on inhibition in the retina. To start describing this model, I first want to describe efficient coding theory. Efficient coding, as formalized by Laughlin in the 80s, is an intuitive description of how the stimulus has to match the response. Assume this particular box that you see over there: it will have a distribution of intensities. Now, assuming stimulus-independent additive noise, an efficient neural code maximizes the stimulus-response relationship that makes each possible response equally likely. Basically, if this is the intensity distribution and these are the response properties, you would like to have a sigmoidal curve that efficiently relays the information from the sensory neurons to the brain. One implementation of this very classical idea is predictive coding. As Barlow said, and this is perhaps a very nice way of putting it, this could be a neat packaging of information: it's the best way for the system to encode information such that the resources are used efficiently. Predictive coding is basically one implementation of that essence. And let's see: this is the same thing I was telling you before. You have the box, you have the distribution, and it matches the stimulus-response curve. Now imagine you are at a different point, which elicits a different distribution of inputs. Here the stimulus-response curve is not neatly representing the stimulus distribution.
What we would like, under the predictive coding hypothesis, is to shift this distribution so that the intensity and the stimulus-response curve match again and the code becomes more efficient. How would you implement something like this? First, you need to know that natural scenes have spatial correlations, meaning that the statistics at nearby points in space are similar. Then you can implement it with spatial filters, for example this center-surround filter. In this example, the center encodes the luminance of the scene, whereas the surround encodes a subtractive prediction of the spatial correlation you would expect to see based on natural statistics. The measurement minus the prediction is the prediction error, and that is basically what we would like to relay. One important point is that all of this depends on the signal-to-noise ratio: if you shift the signal-to-noise ratio, the optimal filter changes from one form to the other, such that the subtraction stays efficient. One way of formalizing this is with a very simple encoding model of a retinal ganglion cell, probably the simplest way of representing what happens in the retina: you have an image sensed by the photoreceptors, you add noise, convolve it with a spatial filter, and you look at the output. The whole game we're playing here is to optimize the filter numerically so that we reduce a cost function, which is basically reducing the firing rate. I was telling you that noise is an important aspect of this, and that by changing the noise you change the properties of the optimized filter. So what about noise in the retina? One important thing to know is that there is a ton of intrinsic noise in cone photoreceptors.
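The encoding model just described, image plus noise, convolved with a spatial filter, can be written in a few lines. This is only a 1D toy with invented numbers, not the optimization from the study, but it shows why a balanced center-surround filter is cheap: on a smooth, correlated input, the surround's prediction cancels most of the center's measurement, so the output (and hence the "firing") stays small:

```python
import numpy as np

rng = np.random.default_rng(4)

def encode(image, rf, noise_sd):
    """Minimal encoder: photoreceptor signal + additive noise,
    convolved with a spatial receptive field."""
    noisy = image + rng.normal(0, noise_sd, image.shape)
    return np.convolve(noisy, rf, mode="same")

image = np.sin(np.linspace(0, 4 * np.pi, 200))   # smooth toy 1D "scene"
rf = np.array([-0.25, -0.5, 1.5, -0.5, -0.25])   # toy center-surround, sums to 0
response = encode(image, rf, noise_sd=0.1)

# The prediction-error output is much smaller than the raw image.
print(np.abs(response).mean() < np.abs(image).mean())
```

In the actual model the filter weights are the free parameters being optimized; here they are fixed by hand just to illustrate the subtraction.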
At photopic levels, it seems to be three orders of magnitude higher than what you would expect from rods. And you can think of the change of natural statistics in azimuth and elevation, in luminance and contrast, as some sort of signal-to-noise ratio across the natural panoramic scenery. This is basically the value we're going to use to optimize the filters. In addition, we realized that there is this very clear and sharp change at the horizon, which makes a very clear asymmetry that might also affect the way those receptive fields are shaped. When you run the simulations, you see three effects. One: if you change the signal-to-noise ratio, the center-surround relationship changes; the stronger the signal-to-noise, the stronger the surround. Two: the center size changes; the stronger the signal-to-noise, the smaller the center. And three, a prediction about asymmetry: the stronger the asymmetry in the statistics, the more asymmetric the receptive fields become. So, assuming the predictive coding hypothesis is implemented, we should be able to see some of these aspects in the retinal responses. To summarize the same prediction in a simpler way, think of it as ground versus sky: the most ventral cells, which look at the highest points in space, should have the strongest surrounds and smaller centers; toward the ground, the surrounds should decrease and the centers increase; and somewhere at the horizon you should see the asymmetry. So now we have a model and we have a prediction, but what would we need to test it? To test it, we would like to sample as many cells as possible in one particular retina, ideally across the entire retina. And we didn't have a two-photon microscope to use for these recordings.
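The first of those simulation effects, a weaker surround at lower signal-to-noise, can be sketched with a tiny 1D predictive-coding toy in the spirit of Srinivasan, Laughlin and Dubs. The subtractive surround is the least-squares predictor of the center from its neighbors (the Wiener solution), and its overall strength shrinks as photoreceptor noise grows. All numbers here (correlation scale, noise variances) are invented for illustration and are not the study's parameters:

```python
import numpy as np

def surround_weights(n_neighbors, corr_scale, noise_var):
    """Optimal subtractive-surround weights for predicting a center pixel
    from its neighbors, assuming exponentially decaying spatial
    correlations and additive noise: w = (C + noise * I)^-1 c."""
    pos = np.arange(1, n_neighbors + 1).astype(float)
    # correlations among neighbors (C) and neighbor-to-center (c)
    C = np.exp(-np.abs(pos[:, None] - pos[None, :]) / corr_scale)
    c = np.exp(-pos / corr_scale)
    return np.linalg.solve(C + noise_var * np.eye(n_neighbors), c)

w_bright = surround_weights(5, corr_scale=4.0, noise_var=0.1)  # high SNR ("sky")
w_dim = surround_weights(5, corr_scale=4.0, noise_var=2.0)     # low SNR ("ground")
print(w_bright.sum(), w_dim.sum())  # total surround strength shrinks with noise
```

The same logic, run over measured panoramic signal-to-noise values instead of these toy numbers, is what generates the dorso-ventral predictions above.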
MEAs have different issues if you want to know exactly what you're recording from. So we thought: perhaps we can find a neat idea to solve this issue, and this neat idea is extremely simple. Let me just... this thing here is a little annoying, give me a second. If you think of what has been done before, particularly by Tom Baden during his postdoc, in a beautiful work where you can see response properties characterized across thousands of different cells with two-photon imaging, they were scanning small positions in space across the retina, which required several retinas to build, let's say, an overall picture. Other people have done it with somewhat bigger areas, which also works, but it was still small if you were aiming to image, let's say, 40 to 50% of an entire retina. Long story short, we figured out a very simple way to start imaging across large regions of the retina. What you see here are the responses of one particular retina, which we can later tile, and the size of this field is around 1.4 millimeters. So what is the trick? Why can we do this so easily? The trick is remarkably simple, and it's based on the fact that most of the cone photoreceptors of the mouse are UV-sensitive, with an absorption spectrum peaking around 360 to 370 nanometers. Normally, when we're using two-photon microscopy, we use a very long wavelength to excite the system. However, there is so much energy that even the tail of the absorption spectrum, the exponential decay that you don't see here, will also be activated. So, if we're activating the photoreceptors anyway, why not just use a red indicator and move the wavelengths apart, so that we can image at longer wavelengths with epifluorescence while stimulating in the UV?
What we built is a very simple epifluorescence microscope, which allows us to record very large fields of view for very long periods of time. I'm going to try to convince you that for ganglion cell recordings, stimulating with UV light is a great plus and very easy to adopt. So how did we approach this? We used a genetic line expressing jRGECO specifically in ganglion cells, the Vglut2 line. We know that basically all of the labeled cells are RBPMS-positive, meaning that they are ganglion cells. But this line, just because of the genetics, labels about 40% of ganglion cells in a very specific way. For example, this is a staining for alpha cells; just one type of alpha cell seems to be in our line. And we can image those across a very big field of view. This would be one example retina; what you see depicted by this orange box is one of the fields of view I was showing you before. We can basically tile: image here, then another one, a third one here in the middle, and then we have a very large perspective on retinal coding. And that's what we need, because the differences we are looking for extend spatially over a very large range. Can we look at response properties? Of course; we can look at single cells and see their response properties. The other interesting thing is that we can record for very, very long periods of time on one field of view. We have recorded up to an hour and a half, if I remember correctly; there is some bleaching coming up, but the responses are still there. And all of this is roughly 20 times cheaper than a classical two-photon microscope, and it spits out at least 50 times more data for retinal recordings. So if you do the calculation, it's perhaps worth trying out. So what about response properties?
Here are some classical response properties. This was recorded from over here: you see a direction-selective cell stimulated with a bright bar, and you see the ON-OFF responsiveness. This one behaves like an alpha cell, responding regardless of the orientation of the stimulus. We can map direction selectivity and ask which preferred directions we see, and we basically see the cardinal directions. But interestingly, we can reproduce in one retina some very neat findings from the past, for example this work by Sabbah et al., where they looked at how direction selectivity is mapped across the retina. This is the same kind of data in one of our retinas, where we stimulated just to see if our system allows us to reproduce prior results. And this is not a quantitative statement but more a qualitative one: if you look at the crosses here and here, the preferred orientations change depending on where the cells are in the retina. So it seems that our approach at least allows us to reproduce this type of data. Chirp responses, of course, are also a classical way of looking at how cells represent the stimulus. Even though we label 40% of the cells, and we believe they are always the same, genetically defined 40%, we can very easily classify them based on these response properties alone. So we have ON, OFF-transient, ON-transient, suppressed-by-contrast, and we even see some responses that we haven't been able to match with previously published results. One small error here, if you'd like to compare: we realized after the fact that normally people present the frequency chirp first and then the intensity one, so ours is basically inverted.
Okay, so I hope I've convinced you that at least the classical response properties can be matched with this very simple approach, that we can reproduce previous results. But we're not interested here in studying chirp responses or direction selectivity per se; we're interested in receptive fields, and for that we also need a good assessment of the center response and the surround, in a way that is parameterizable, which is tricky. What people normally do is use a white-noise stimulus to reconstruct those receptive fields. The issue with white noise is this: if you have big checkers, you don't have very good spatial resolution, but you get a very strong response; if you use small checkers, you might get very good spatial resolution, but many of the cells barely respond, so you need to record for a very prolonged period of time. So we said: why not combine both ideas and have a big checkerboard that shifts in space? If your checkers are of a size similar to the receptive field center, then by shifting them, sometimes they will excite the center, sometimes the surround, and on average you will get a nice representation. And that worked remarkably well. So now we have a method, with our system, to reconstruct spatiotemporal receptive fields in a very simple way. It doesn't matter where exactly the checkers are at the beginning, because they shift randomly, so at some point they will be exactly on top of the center. The size of one of those checkers was chosen to match roughly the size of the smallest retinal ganglion cells. Using this approach, we could actually reconstruct quite a few receptive fields.
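The shifted-checkerboard idea can be sketched in a few lines. This is a toy generator, not the stimulus code from the study; grid size, check size, and frame count are arbitrary. Each frame is a coarse random checkerboard whose whole lattice is jittered by a random sub-check offset, so coarse checks still sample fine spatial positions over time:

```python
import numpy as np

rng = np.random.default_rng(0)

def shifted_checkerboard(n_frames, grid, check_px):
    """Binary checkerboard stimulus whose lattice is jittered by a random
    sub-check offset on every frame."""
    frames = np.empty((n_frames, grid * check_px, grid * check_px), dtype=np.int8)
    for t in range(n_frames):
        # random black/white assignment per check, padded by one extra
        # row/column of checks so the shifted crop always fits
        checks = rng.integers(0, 2, size=(grid + 1, grid + 1))
        img = np.kron(checks, np.ones((check_px, check_px), dtype=np.int8))
        dx, dy = rng.integers(0, check_px, size=2)  # sub-check lattice shift
        frames[t] = img[dy:dy + grid * check_px, dx:dx + grid * check_px]
    return frames

stim = shifted_checkerboard(n_frames=100, grid=8, check_px=10)
print(stim.shape)  # (100, 80, 80)
```

Averaging responses over the random shifts is what recovers both center and surround despite the coarse checks.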
We imaged and tiled, let's say, nine retinas, giving us around 30,000 spatiotemporal receptive fields. Here you can see some examples: some don't have a surround, those here have a very strong surround, some have an asymmetric surround; whatever you want, you will find it there. The nice thing is that the signal-to-noise is good enough that we can use a very simple difference of Gaussians to fit and parameterize them, because when you have 30,000 cells you need some unbiased approach. On top you see some example receptive fields from cells that we recorded, and in the lower part you see the fits. If we compute an R² to quantify the error, most of the cells seem to have a very good fit, such that we can start using this data set to ask whether the theoretical predictions actually hold. So let me remind you what the theoretical predictions were. We said that the relative center-to-surround strength will depend on the position along the dorso-ventral axis, in such a way that the more ventral you go, the stronger the surround. The second prediction is that the center size will diminish: the more ventral you go, the smaller the centers. And the other prediction is that there must be a particular position, exactly where the horizon should be, with more signal-to-noise above the horizon and less below, where you will have a very strongly asymmetric band. So let's see what the data says. This was our prediction; we use this receptive field parameterization, and now we can look at the responses that we measured. So, this is on top.
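As an aside, the difference-of-Gaussians fit and R² check just mentioned can be sketched like this. This is a stand-in on synthetic data (all parameter values invented), not the paper's fitting pipeline, but it shows the shape of the procedure: fit a concentric DoG to a noisy 2D receptive field and score the fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def dog_2d(coords, amp_c, amp_s, x0, y0, sig_c, sig_s):
    """Concentric difference of Gaussians used to parameterize a receptive field."""
    x, y = coords
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return amp_c * np.exp(-r2 / (2 * sig_c**2)) - amp_s * np.exp(-r2 / (2 * sig_s**2))

# Synthetic noisy receptive field as a stand-in for one measured cell.
ax = np.arange(41.0)
xx, yy = np.meshgrid(ax, ax)
rng = np.random.default_rng(1)
rf = dog_2d((xx, yy), 1.0, 0.3, 20, 20, 3.0, 8.0) + rng.normal(0, 0.02, xx.shape)

p0 = [1, 0.1, 20, 20, 2, 10]  # rough initial guess
popt, _ = curve_fit(dog_2d, (xx.ravel(), yy.ravel()), rf.ravel(), p0=p0)
resid = rf.ravel() - dog_2d((xx.ravel(), yy.ravel()), *popt)
r2 = 1 - resid.var() / rf.ravel().var()
print(round(r2, 3))  # close to 1 for a well-fit cell
```

With each cell reduced to a handful of fitted parameters, population-level comparisons across 30,000 cells become straightforward.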
Here you see all cells from nine retinas aligned into the same coordinate frame. What you can see is a gradient from ventral to dorsal in the relative surround strength: going from purple to green, the strength diminishes with position, so the more dorsal you go, the less surround you see. The center sizes, for all cells, also get bigger as you move dorsally along the dorso-ventral axis. And when you ask how strongly asymmetric the surround is, you see a very, very clear stripe at this position; if this is the optic nerve and these are the cells we recorded, you see a very strongly asymmetric streak. If you plot this not along the dorso-ventral axis but naso-temporally, along the horizon, you don't see any correlation. So, beautifully, you can see this relationship in one single retina. That's what we did here: we pooled the receptive fields of all cells in a particular bin, and inverted the ON ones, because we just want to see the relative change from center to surround; so ON and OFF spatial filters are grouped together, all in the same retina. This is the response of one retina, and what you can clearly see is that there are strong surrounds in the upper visual field; when you go dorsally, perhaps you can see it, the centers get bigger; and over here you have the streak of very strongly asymmetric surrounds. When we look at this, we also stained for S-opsin, so we can see exactly where we are in the retina, and we make a little box and ask how asymmetric these cells are. You will see how strong the effect is. Sorry, so this is exactly where the streak is.
If this is the number of cells pointing ventrally, the cells here in this box are strongly asymmetric, whereas if you go a little lower or a little higher, they become less asymmetric. Of course, you might have some bias depending on how you're sampling, but overall, outside the streak this directionality of asymmetry is gone. So if we reconstruct this streak, put it back into visual coordinates, and ask, for all the cells that are very strongly asymmetric, what position they're looking at, you basically see the horizon, as we would have expected. So now I can tell you that at the population level we can match our predictions: the centers get bigger the more dorsal you go, the surrounds get stronger the more ventral you go, and you have an asymmetric streak. Then we wanted to ask: is this just because of particular neurons that are much more asymmetric, like a population of cells sitting exactly on the horizon, or some cells that for whatever reason have a very strong, varying surround that would dominate the entire response?
And so we asked whether we could functionally classify the cells, to see if the effect holds across cell types or is just a property of a particular subclass. Using the temporal filters we had, we said: perhaps we can use the central temporal response to classify cells. We did so, and classified them into 10 different types, which you can see here on the left; time is on the axis, and the color code shows the temporal filters. You see that there are different groups of responses: some are OFF, some are biphasic, some are ON, some are more sluggish, some are faster. If we take those groups and ask where their receptive fields are, it seems that some of them tile more or less, at least given how we sampled, so there is not much overlap between the cells within those groups, whereas for others there is strong overlap. For example, group number eight seems to have many more cells than we would expect, so it is probably a mixture of cell types, but the classification does clean things up a little bit. So, for all those groups, some of which might be specific cell types and some mixtures of cell types, how do those properties change along the dorso-ventral axis? What you see here is basically the same as in the entire population: the relative surround strength becomes weaker the more dorsal you go, the centers become bigger for all cells, and you have this very neat peak of asymmetry at the horizon for most of the cells. So, I've now told you three things: first, that we have a model with certain predictions; second, that we established a system to ask whether these predictions hold; and third, that when we tested them, at least for the properties we probed, we see a very clear positive match. So you might ask what mechanisms could give rise to this gradient, or make these differences
so apparent. One thing we've been thinking of, and we don't know if this is true, is the gradient of opsins. In the Mus musculus visual system you have three opsins: the rod opsin, which is uniformly distributed; S-opsin cones, which show a strong gradient, as depicted here, being more prevalent ventrally; and M-opsin (green) cones, which are more prevalent in the dorsal retina. Given this gradient in the S-opsins, it could be that, for whatever reason, you need more drive in these cells to excite the surround, such that you start seeing these differences across elevation. Why is this interesting? I think this is interesting because it would be, evolutionarily, a very simple way to adapt your visual system to the constraints of natural scenes. And there is evidence that something like this might actually happen. There is some old work (I don't remember the author's first name) looking at the opsin distribution across different mice. Interestingly, mice that live in the prairie, basically in more open fields, which would be Mus musculus or these other field mice, have this very strong gradient of S-opsin, but if you look at mice that live in dense forests, these gradients are basically gone, either because there is no S-opsin or because the S-opsin is uniformly distributed. So perhaps that could be a mechanism to tune the visual system such that you match the response repertoire in a way that efficiently encodes the natural surround. One thing that I would like to discuss is that this also has repercussions for how these cells might encode information. I was telling you at the beginning that we as a field think very much of each of those cell types as relaying a particular type of information, be it color opponency,
direction selectivity, and so on. This is perhaps nicely illustrated by a cell type that I worked on for a long time during my postdoc, the J-RGC. Here I have inverted everything, such that the ground is at the bottom and the top would be the sky. These cells have been described to have an asymmetric surround, and this asymmetric surround basically makes them direction selective to a very particular set of stimuli. Of course, there are other properties involved as well, but each asymmetric cell will have a particular set of parameters where, if you stimulate it, you will elicit a direction-selective response.

Now, if you look at the responses of all of those cells, suddenly all of the ganglion cells that we measured have some particular asymmetry. Here we clustered all the asymmetries for all cells, by where they are pointing. What you see in color are the cells that are very strongly asymmetric, and in gray the ones that are not. Our parameterization makes every cell have some asymmetry, because it is built from two Gaussians, so a cell might be only very slightly asymmetric; but the colored ones are those where the asymmetry is very strong, just like what you would expect from J-RGCs. Interestingly, in the ventral retina you see some peaks of asymmetry too, but these match the four cardinal directions and do not seem to be direction-selective cells, whereas in the dorsal retina you have this huge peak over here; this is basically the streak that I was telling you about before. Somehow we can think about this as: okay, this is important so that you somehow compensate for changes in the statistics of the natural scene, or so that you get some new emergent properties. I have no idea how to interpret that, but for myself it makes me wonder, and this would be nice to have a discussion about, to what extent
these global changes in asymmetry, which seem to match an efficient coding framework, might influence how these cells actually convey information.

Last but not least, there is some early work on response properties in the superior colliculus that we have been trying to match, basically from Dräger, reporting a very strong upward bias in the superior colliculus. Those cells were recorded from positions towards the horizon, because of how the experiments were done. So perhaps you see this very strong upward bias because you also have this very strong asymmetry in the visual system, such that when you record from that region and reconstruct the spatiotemporal filters of collicular neurons at that position, you see something very similar: an asymmetry that is stronger in the upper field of view.

So, to summarize what I have told you so far: the global architecture of receptive fields in the retina matches what we expect from our efficient coding framework. Depending on the signal-to-noise level and the natural statistics at photopic levels, you will see differences in receptive field structure, and you can see that both in the model and in vivo. In this work, Victor Milanski implemented this normative model and expanded on the predictive coding framework, such that we could make three different predictions: on center sizes, on the relative strength of center and surround, and on the asymmetry at the horizon. We developed a new method that I think will be very useful for many questions. It is very cheap: the first time I implemented it, it cost me 500 bucks, and if you want to do it fancy it will cost you, let's say, an order of magnitude more than that. It is very robust and gives you a lot of data for the effort involved. We used that system to verify our model, and qualitatively all the predictions match. At the end, we also see that this holds across all ganglion cell types, at least as we cluster them. Here I would just like to open the discussion of what it actually means if these gradients change the way the cells encode the natural scene.

The take-home summary is basically that the retina implements graded sunglasses: in the upper field of view you have a stronger surround, whereas in the lower field of view you do not. Lastly, I have to thank the many people who worked on this project, particularly Vivianca, Olga, Victor, and Jan, who did the work, as well as the entire lab and the former members of the lab. The lab is now five years old and is starting to produce real output. I hope you like the work we are doing, and of course thank you to all the people who supported us financially, ISTA and many other granting agencies and foundations. With that, thank you very much for listening.

Thank you a lot, Maximilian, that was very interesting; the technique and the results you got are particularly interesting, especially this trick with the asymmetry along the vertical axis. I would just like to remind the audience that if they want to join us in this room to interact, ask questions, and possibly discuss, the link is in the chat. So, actually, I have a couple of questions. Sorry, we had a lot of spam today; give me a minute to find it. I saw a discussion about UV cones between Marla Feller and Anna Vlasits. Hello YouTube, by the way. So, about your stimulation of UV cones: Anna was saying that she was curious whether you have thought about whether a UV-only stimulus affects the properties of the receptive fields you collect. If we used only UV stimuli, would the properties change compared to
using a green stimulus? Green, that's the question. I think so, of course; some of the properties will change, that is clear. The best example is the J-RGC, where you have, let's say, a green surround and a UV center in some part of the receptive field. So this is not the ultimate truth of how the receptive field will look; it will change depending on the light level and spectrum that you use. Unfortunately, we cannot use a green stimulus right now. We are testing synthetic dyes to shift things a bit more to the red, and in theory we could then actually use green in our recordings. The problem is a stimulus-contrast issue: depending on the light that we use, the stimulus contrast is too weak, and if we increase the stimulus contrast we get too much bleed-through. It is just a technicality. But I think that, in general, the predictions we made for the UV channel would be very similar for green. If you think of J-RGCs, their surround would be asymmetric and would also be positioned ventrally, so there is an indication that at least some of the cells will have similar asymmetry properties, pointing contrary to what you would expect from the distribution of green cones. And if green cones and rods were simultaneously active, for which there is evidence, perhaps the UV would still add an asymmetry on top of the green: even if the green receptive field were symmetric, so that with a green stimulus you would only see a symmetric receptive field, the UV would add the asymmetry, and perhaps that makes the difference. I don't know if I am explaining myself well, but basically I don't know to what extent it would change at this level. My impression is that it would, but everything points towards it not mattering much for our conclusions, given how we think
about it.

Well, I think that answers the follow-up question from Anna. I mean, we have Marla and Anna in the chat if you want to follow up on this question.

Sorry, there is a little bit of background noise, but I could follow up a little. Sorry, there was a time difference, so I missed the beginning of your answer to the question. But I guess I am wondering, because there is a lot of work showing that receptive fields can be color-specific in different regions of the retina. So if we are in the dorsal retina, where green is the dominant photoreceptor, and you are also not really stimulating the rods very strongly with your stimulus, how can you say that the properties of the receptive fields that you find are the overall receptive field properties?

No, no, no, I am not. First, I have done work showing that it is important to look at the difference between green and UV, and that this might change the receptive field; that is for sure. I mean, this is clear: this is not the ultimate truth, because we do not have green, and it is impossible to do this work with green right now. We would love to do it, but at the scale at which we are working it is not possible. I think that, generally, if you think of a system where green cones and rods are active, if you bunch them together you will have basically a uniform input across the retina. So one way of thinking about it is that even if there were no asymmetry difference, because now we think of the mechanism as being the UV cones: if the green input were basically uniformly spread across the entire retina, assuming that is the case and assuming that, as a consequence, most of the green response would be symmetric, the UV might break that symmetry and make the cell stronger or weaker to the UV channel. And
everything we optimize is for the UV input that people have measured. And again, whether you use UV or green, the framework is basically the same. So I would love to know, but it is technically not possible right now.

Yeah, it is really cool data, especially seeing the DS tuning across the whole distribution in just one retina; that was really neat. We have Michael, who would like to ask a question, if you want to go ahead.

Can you hear me? Yeah? Hey Max, that was very clever and totally convincing, very impressive. I have a hard time thinking about the trade-offs for the actual animal. As you know, eyes can move, the head can move, and as far as I know the world is not flat, so you could imagine that over-applying this optimality for the average case is actually maladaptive. So one question is: do you find that the animals, or rather the retinas, are not fully optimized, as your theory predicts, and sit one step below, rather than really pushing for this global maximum? Alternatively, maybe you could speculate about some kind of dynamic compensation: do you think you have to re-weight all the outputs of the RGCs if you tip your head a little, or if you are on a crooked surface? It would be fascinating to know what is going on.

So, there is a new paper from the Care lab that came out just recently, I think in eLife, where they look at how the animal stabilizes its eyes, and it is remarkable how well the horizon is stabilized.

That's all in the lab, right? It is in the lab.

It is in the lab, while hunting. But I think there are at least mechanisms to compensate. I am not saying that eye movements are there to compensate such that the horizon stays at the same position, I do not know, but there is a lot of stabilization going on. And as far as I can tell, if on average the horizon is basically kept
there, you do not need a perfect match to increase optimality. But this is a very difficult question, because we have not looked at when this would break, and of course at some point it will no longer be optimal. But seeing what the animal does, at least in these lab conditions with cameras on the head, where you can see what the eyes are doing, it seems that the eye movements, relative to how the animal shakes and runs around, are highly stabilizing, such that the eyes are looking at basically one particular position, which would at least serve the efficiency constraint that we are imposing.

No, it is interesting, because it means that you cannot ignore the behavioral side of it: the behavior has to clamp the visual system in a way that lets you actually take advantage of all of these trade-offs.

Yes. And to what extent that actually improves things... because all efficiency arguments are energetic-cost arguments, so whether the system is actually saving calories, or something like that, is a very difficult question to ask. We could estimate it, but I think it would not be very accurate.

Thanks, Michael. Okay, let's continue; we have quite a few interesting questions, so I will jump to the next one. I have one from Minesh; I mean, she is here with us, if you want to ask it yourself.

Yes, hi, thanks for a nice talk, Max, and thanks for letting me ask it myself. My question is this: you looked at a lot of cells, and presumably a lot of stimuli as well, with an hour and a half of stimulation. Do any of the spatial or temporal properties of the centers and surrounds that you have observed predict any specific tuning to particular stimuli? And I know, I now see that you only looked at UV, but that would be good to know.
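The prediction exercise being asked about here is, in its simplest form, a linear-nonlinear (LN) model: filter an arbitrary stimulus with the fitted spatiotemporal receptive field, then pass the result through a static nonlinearity. A minimal sketch, where all filter shapes and parameters are illustrative placeholders, not values from the talk:

```python
import numpy as np

# Temporal filter: a biphasic kernel, 200 ms long at 100 Hz sampling.
t = np.arange(20) / 100.0
temporal = t * np.exp(-t / 0.03) - 0.5 * t * np.exp(-t / 0.06)
temporal /= np.abs(temporal).sum()

# Spatial filter: difference of Gaussians (center minus weaker, wider surround).
x = np.linspace(-1.0, 1.0, 41)
spatial = np.exp(-x**2 / (2 * 0.1**2)) - 0.6 * np.exp(-x**2 / (2 * 0.3**2))

# A chirp-like full-field stimulus: luminance with ramping frequency, 3 s.
dur = np.arange(300) / 100.0
stimulus_t = np.sin(2 * np.pi * (0.5 + dur) * dur)
stimulus = np.outer(stimulus_t, np.ones_like(x))   # time x space

# LN prediction: spatial filtering, temporal filtering, then rectification.
drive = stimulus @ spatial                          # spatial projection
gen = np.convolve(drive, temporal)[: len(drive)]    # temporal convolution
rate = np.maximum(gen, 0.0)                         # static nonlinearity

print("predicted peak rate (a.u.):", rate.max())
```

Comparing such LN predictions against the measured chirp responses, cell by cell, is one concrete way to ask how much of the response repertoire the spatiotemporal receptive field alone explains.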
That is a very nice question. There is a master's student starting who will use the data to see to what extent we can predict any of the response properties from these spatiotemporal receptive fields, but no, we have not yet. And we do not record for an hour and a half; that was just at the beginning, when we were pushing the system, and we could record for an hour to an hour and a half maximum on one region. Normally we record, let's say, 20 minutes: roughly 15 minutes of white noise and then some chirp responses, and then we move to the next area, because we want to tile the retina. So it is a good system, but it has its limits in how long we can record. But yes, it is a very interesting question, and we are now trying to ask exactly that: we have, I don't know, a few thousand chirp responses, and receptive fields for all of them, so what should we do to be able to predict one from the other, and to what extent can we do that? Hopefully we will know soon.

Very interesting things, yeah. Good morning, Max.

Hello! I'm good, how are you?

Good, go ahead. Sorry, I am just waking up, groggy, and can barely speak, but while going through that part of the talk I was wondering: you emphasized the spatial filters; do the temporal filters change in the same way?

They do not, at least the centers. No, not really, they do not change in the same way; otherwise we would not have been able to use them for clustering. If you look at all our temporal clusters, they are somewhat uniformly represented across elevation, so we do not see them adapt.

So is that expected or not, based on your predictive coding hypothesis?
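The expectation being probed here comes from the efficient coding picture the talk relies on: for natural, 1/f-like input spectra, the optimal linear filter is strongly band-pass at high signal-to-noise (a strong surround suppressing low frequencies) and shifts toward low-pass at low signal-to-noise (a weak surround). A toy calculation of the spatial side of that argument, in the Atick-Redlich spirit; the exact filter form and all constants are illustrative, not the model from the talk:

```python
import numpy as np

f = np.linspace(0.01, 10.0, 1000)    # spatial frequency axis (arbitrary units)
signal_power = 1.0 / f**2            # natural-scene-like 1/f^2 power spectrum

def optimal_filter(noise_power):
    # Wiener-style denoising term times a whitening term: a standard
    # sketch of the noise-dependent efficient-coding filter.
    wiener = signal_power / (signal_power + noise_power)
    whiten = 1.0 / np.sqrt(signal_power + noise_power)
    w = wiener * whiten
    return w / w.max()

def peak_frequency(w):
    # Where the filter peaks: the further the peak sits above the lowest
    # frequency, the more band-pass the filter, i.e. the stronger the
    # low-frequency suppression contributed by a surround.
    return f[np.argmax(w)]

peak_high_snr = peak_frequency(optimal_filter(noise_power=0.01))  # bright field
peak_low_snr = peak_frequency(optimal_filter(noise_power=10.0))   # dim field

print(peak_high_snr, peak_low_snr)
```

The same logic applied in the temporal domain would predict faster, more biphasic temporal filters at high signal-to-noise, which is why the lack of temporal adaptation across elevation is a fair thing to press on.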
We did not ask that specifically, but you would expect that there would also be some adaptation in the temporal domain, and, at least if I remember correctly, we do not see that; but we also have not looked at it in much detail so far.

I mean, mechanistically you might expect those to be linked, right? Or not. Or maybe it will give you insights into the mechanisms that are generating those surrounds. Normally, as you tighten in space you tighten in time as well, right? Or maybe Tom, or anyone else, the fly people, can tell us what happens elsewhere.

Perhaps if we add the green, then they tighten in time. I don't have an answer.

I actually have a question for the retina people, speaking of mechanisms of center-surround receptive fields: is there a consensus circuit architecture that actually accounts for them? Do we have good evidence that really explains how a center-surround receptive field comes about? I mean, I have seen the cartoons; I just mean, how do the data, the connectomic work, and the neuroanatomy actually line up with that?

I mean, yes and no. Yes, because we know horizontal cells are involved and amacrine cells are involved, and it all makes sense. But what has not really been addressed is the fact that you are going to get layers of center-surround, right? It is like putting rings into rings into rings, and whether that is a good thing or a bad thing I do not think has been looked at much. I don't know, do you have any thoughts on that?

This is what I was alluding to with those different mechanisms of surround: some might act on the temporal parts and some on the spatial parts, and maybe you can disentangle them. That would be great, because to me it is otherwise surprising that you can have this gradient of spatial filters and not change the temporal tuning properties. But again, one other thing: you said you are looking at calcium, no? So it could be
that the calcium responses... Ah, yeah, it could be that, even though we have relatively fast filters, we would not be able to see it, because the change is not very big.

Yeah, I think that is possible, but I bet you won't see it, right? Because if you measure a fast versus a slow process through a low-pass filter, and you measure it well enough, it will just get systematically longer. That makes sense: we can tell apart a fast cell from a slow cell even with calcium quite happily, right? It is just that both of them are really slow, one even slower than the other. But resolving a gradient of a certain type across the retina might be pretty challenging. Maybe you would actually see it in the chirps before you see it in the kernels.

No, but otherwise it is an amazing technique. Victor has been looking at this in a very systematic way, and when you try to cluster the chirps based on position and response properties together, he suddenly sees systematic changes that you will not see if you let k-means do the clustering, because they are very, very slight. Perhaps exactly what we are seeing in the ventral retina might be a little faster in this component, but if you average without taking the spatial contribution into account, it is very difficult to put them together. Perhaps we have already seen it.

So, we are on a tight schedule, let's try to follow up. I was about to ask Marion's question, but she just joined us.

Yeah, I decided to come online as well. Thanks, Max.

Thanks, Marion.

My question is related to the conversation you were already having: are there other properties of receptive fields that change across space? Especially because there were so many receptive fields that you showed, in this zoo of receptive fields, that are not really plain center-surround, right?

No, no, there are some that look
basically like a perfect Gabor, and reproducibly so; it is not just one cell, there are a lot of them. So yes, there is a lot to unpack here, but you do one unpacking after the other, and to what extent you can use the entire spatiotemporal receptive field to define cell types would perhaps be the next question to ask. There is a lot happening in there: right now we are basically looking at a static view of the spatial filter, but of course there is also a dynamic view, where for some cells the surround might come in and move in one direction or another, and others might be more static, with the surround always staying in one particular place. All of that we see.

And these Gabor-like receptive fields, is this something that you specifically pull out with the stimulus that you are using to map them?

No, I mean, you can see that... I am still presenting; let me be sure. Okay, let me see if I go back to this slide. If you look over here, at number two, it somehow looks like that, but with the bigger receptive fields; and when we look at the small receptive fields, as you would classically, you see something, and it becomes much, much nicer when you use the new stimulus, which works very nicely. Actually, I have to say that yesterday I saw a paper, published completely independently of us but from a more theoretical perspective, describing this receptive field method; they call it super-resolution mapping of receptive fields, and it really works. It also works for the superior colliculus, cortex, whatever you want.

Okay, cool, thanks.

Yeah, I think they have also implemented this in time: you shift the temporal properties of the stimulus relative to the measured response, and you can get high temporal resolution as well.

Yeah, see, our temporal resolution... where is it... here. So, yes, it is kind of slow.

So the next question is: can you use these techniques
for iGluSnFR imaging? Do we have new iGluSnFR variants that might match?

I assume so. I think the difficult thing is that with iGluSnFR you would like to have sparser expression, or to zoom in more, so that you do not mix signals; if there are lots of different terminals at different positions, it might get blurry, so that is perhaps a disadvantage. But if you are sparse enough, then yes, for sure.

But iGluSnFR is green; wouldn't you need a red sensor?

I think it will work, in green, with UV too, yes. I do not think you will fail to see responses just because you saturate the green pathway. Here, I do not think we are saturating it; rather, we cannot get responses because our contrast in the green channel is just too weak, around four percent by our calculation, and if we make it bigger we get very strong bleed-through, so we cannot use it anymore. But for ganglion cells it reports really well.

Before we continue, because it has been an hour already, I would just like to tell our audience still live on YouTube that we will soon close the stream, so if you want to continue this discussion and join us, you can still do so via the link currently in the chat. After this short question I will stop the YouTube stream. What a success, because there are a lot of questions that we can still discuss. But Tom had suggested a discussion regarding spatiotemporal asymmetries; do you want to bring that up?

So, Gautam has since elaborated on that, but I can just relay Lee's question, which was: well, you can manipulate the mouse to be less asymmetric, at least with respect to the opsin gradient, right, either genetically or by, well, taking away thyroid hormone, I guess, messing with hormones. So imagine you now take your mouse, which is super asymmetric, and you force it to be symmetric developmentally. Would you expect that to give you symmetric receptive fields, or do you think that is a different property?

Um, I mean, there is this albino mouse, which basically has no gradient, and ideally you could test that by crossing, or back-crossing, all of the lines into it. And yes, that would be my prediction, that it becomes less asymmetric, and that would be a nice way of testing the hypothesis. What the benefit is for the mouse, I think, is very difficult to test behaviorally, if that is the next question you might ask; whether a mouse without a gradient copes better in a natural environment or not, these questions are very difficult to address. But I would assume, at least at the physiological level, that the asymmetry might not be there anymore. I might be wrong, though; there could be other mechanisms that are important, perhaps a gradient of synaptic strengths, and it is just that we think of the opsin gradient as being the direct, obvious link.

It must be quite hard, right, because these are poor mice sitting in some cage somewhere; it is not like they have ever seen a horizon.

Exactly. Okay, thank you for that. So we will close this discussion now; see you all next week for another talk. I am now closing the live stream. Please continue, thank you, and we are