Three, two, one, and live. Hello everyone and welcome to another vision web seminar. Once again, I will start by reminding you that these webinars are part of the World Wide Neuro initiative; you will find all the relevant links in the description below this video. Today I'm very glad to receive Petri Ala-Laurila. Petri obtained his PhD in Engineering Physics at the Helsinki University of Technology. He then went to the US East Coast for a postdoctoral position at the Boston University School of Medicine, and then moved to the West Coast for another postdoctoral position at the University of Seattle. Now Petri runs two labs in Finland, one at the University of Helsinki and the other at Aalto University, where they aim to bridge expertise in bioscience and engineering physics. The goal is to understand how the retina processes signals originating from single photons to drive visually guided behavior in mice and humans. So, hello Petri, how are you doing today? I guess we were supposed to host you back in September; we had to postpone so many times due to COVID and due to weird schedules. I'm so glad to finally have you here. So, hello Petri. Okay, hi, hi, and thanks for the very kind invitation to this World Wide Neuro seminar and for the nice introduction. Just a little detail there: I was doing my second postdoc at the University of Washington in Seattle, but I think it's close enough, same coast. All right, the stage is yours. All right, thanks. I'm going to try to share my screen; let's see if that works. So, if I did everything right, you should see my first slide now. I don't see any comments or questions right now, but I think it's fine. If there is something else I should see, just let me know. I'll let you know if there's something important. All right, so, hello world. The title today is the dark side of vision and resolving the neural code.
So, if you have watched Star Wars, you know that Darth Vader says that if you only knew the power of the dark side, you would have incredible opportunities. Today I will try to lay out what the dark side of vision really means and what the power of the dark side actually is. If you think about sensory processing across the senses, all sensory information is encoded in spike trains, in action potentials, whether it's seeing, olfaction, taste, hearing, or touch. The brain is listening to these sensory signals, integrating the signals carried by action potentials to form a reliable picture of the world around us. How this process happens and how the brain decodes these signals is an incredibly difficult problem in neuroscience, and today I will explain one step, in one dimension, of how we can make progress on these kinds of problems. That relates to studying vision in darkness, at very low light levels. When I talk about vision in dim light, I will normally mean light levels where a single ganglion cell sees something between zero and 100 photons. These ganglion cells collect signals from 10,000 or so rods, so a single rod basically hardly ever sees a photon. So this regime is insanely dim; it is pure rod vision. Across species, from human to mouse to amphibians, it has been known for decades that visually guided behavior can get very close to the absolute limit set by physics in encoding extremely low light levels. That in itself means that the entire sensory pathway, from the retina to the higher-order circuits, has been optimized extremely well for these tasks. So from the perspective of the first question I asked, this means that vision at really low light levels is a great opportunity to try to link behavioral performance end to end, from photons to circuits to behavior.
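To make the photon arithmetic above concrete, here is a minimal sketch (all numbers illustrative, assuming Poisson photon arrivals and a pool of 10,000 rods as stated in the talk) of why a single rod "hardly ever sees a photon" in this regime:

```python
import math

def photons_seen(mean_photons_per_pool, n_rods=10_000):
    """Toy Poisson arithmetic for the dim-light regime described above.

    If a ganglion cell's pool of ~10,000 rods collects, say, 10 photons
    per integration time, each rod absorbs on average 10/10,000 = 0.001
    photons, so a single rod almost never sees one.
    """
    per_rod = mean_photons_per_pool / n_rods
    # Poisson probability that a given rod absorbs at least one photon
    p_at_least_one = 1.0 - math.exp(-per_rod)
    return per_rod, p_at_least_one

rate, p = photons_seen(10)
# a rod absorbs ~0.001 photons on average; P(at least one) is ~0.1%
```

This is just the Poisson identity P(k >= 1) = 1 - exp(-lambda) applied to the numbers quoted in the talk.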
Additionally, I would like to highlight that in the mammalian system, the retinal circuit encoding the lowest light levels is extremely well known. It's called the rod bipolar pathway, and it's probably the best-known retinal circuit. The feature of this circuit is that single-photon signals originating in rods get collected by rod bipolar cells, sent through AII amacrine cells to ON and OFF cone bipolar cells, and then sent out as spike trains to the brain by the most sensitive ON and OFF ganglion cells. There is a lot of convergence in this circuit, such that the most sensitive ON and OFF ganglion cells collect signals from thousands of rods. And the circuit is preserved across mammalian species, from mouse to primate to human. Translation across species in vision is always difficult, but if there is any place where it can work well, it's in this stimulus-detection paradigm. Today I'm going to show results where we actually translate findings and hypotheses across all three species to make progress end to end, from photons to behavior. So my talk has three parts. Part number one focuses on how behavioral detection of the weakest light increments depends on the retinal ON and OFF pathways in mouse vision. This work was published at the end of 2019 in Neuron, and Lina Smeds was the first author. I will go through this work very carefully, because parts two and three, which are unpublished new results, build on the approach we take here, so it's important that we go through this paper carefully first. If we think about the connection between retinal circuits and behavior in this stimulus-detection paradigm, it's important to know that in 2014, when I was in Fred Rieke's lab, we figured out that the last synapse of the ON pathway operates as a nonlinear mechanism.
It's like a coincidence-detection mechanism, and it is missing from the OFF circuit. So the rod signals converge via the AII amacrine cells, the split between ON and OFF pathways happens here in the inner retina, and the ON pathway has a unique nonlinearity which is not in the OFF pathway. For those people who work on the retina, one very important thing to highlight here: you might have heard that even the first synapse of the circuit is nonlinear, but we have to remember that we are now talking about such dim light levels that that nonlinearity operates effectively linearly, acting just as a loss of single-photon responses. So this nonlinearity, the one we found in the primate, is the only known nonlinear coincidence-detection mechanism in the system at these light levels. What is important here is that ON cells send information about incredibly dim lights by increasing their firing rate, whereas the most sensitive OFF ganglion cells actually have a very high firing rate in darkness and then become silent in response to a few photons. In the mouse these would be alpha cells, alpha-type ganglion cells; in the primate, the ON and OFF parasol cells are the closest analogs of alpha cells. So then the question is: how does the mouse brain, how does behavior, depend on the most sensitive ON and OFF ganglion cell outputs? The challenge here is that even though the responses are very different (ON cells have a nonlinearity and increase their firing; OFF cells are linear and decrease their firing rate), their sensitivity is very similar in wild-type mice. So in order to take a step toward this question, we needed a tool to create an asymmetry in sensitivity between the ON and OFF pathways, and then to develop the tools to correlate these outputs with behavior.
So the trick here, what Lina Smeds and colleagues who worked on this part of the project did, was to use a mouse model called the Opn1lw mouse, which expresses a small fraction of human L-cone pigment in its rods. It's just 0.5% or so human L-cone pigment, and the rest is rhodopsin, so basically it's almost pure rhodopsin. We can quantify the expression of this small fraction of L-cone pigment by measuring, with the suction pipette technique, the spectral sensitivity of these rods. As you can see (this is a log scale), there is a slight increase in sensitivity to red light in this model mouse compared to a wild-type mouse, shown by the black symbols. Based on this difference in spectral sensitivity, and by fitting the Govardovskii nomogram here, one can even estimate the fraction of human L-cone pigment. What is important here, for reasons I'm not going to go into too much, is that these rods, even with this little amount of human L-cone pigment, appear to be in a somewhat light-adapted-like state, and their single-photon response is only one third in amplitude compared to wild-type mouse rods. The hypothesis then was that if you have a thresholding nonlinearity in the ON pathway, this will have a much bigger impact on the ON pathway than on the OFF pathway in the Opn1lw mice, and so we would create an asymmetry in sensitivity between the ON and OFF pathways. And as you can guess, because I'm giving a talk about it, the hypothesis turned out to be correct. That's how we always phrase these things; in the starting phase of the project we actually figured this out as a bit of a surprise. So then the question is: how does this asymmetry actually impact the ganglion cell output? Now what we did was take flat-mount preparations of dark-adapted mouse retinas, and we recorded responses of ON and OFF sustained alpha cells to repeats of extremely dim flashes of light.
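The logic of the hypothesis above can be sketched in a few lines. This is only a toy illustration (the amplitudes, the one-third scaling, and the threshold value are made up for the sketch; they are not the lab's fitted parameters): a thresholding "coincidence-detection" nonlinearity discards responses that fall below threshold, so shrinking the single-photon response, as in the Opn1lw rods, hurts a thresholded pathway far more than a linear one.

```python
def pathway_output(amplitudes, threshold=None):
    """Sum single-photon response amplitudes through a pathway.

    threshold=None models a linear (OFF-like) pathway; a numeric
    threshold models an ON-like pathway that only passes responses
    exceeding the threshold. Values here are purely illustrative.
    """
    if threshold is None:
        return sum(amplitudes)                          # linear summation
    return sum(a for a in amplitudes if a > threshold)  # thresholded summation

wt_responses  = [1.0] * 10        # wild-type single-photon responses (toy units)
opn_responses = [1.0 / 3.0] * 10  # ~one-third amplitude, as in Opn1lw rods

linear_loss = pathway_output(opn_responses) / pathway_output(wt_responses)
thresh_loss = (pathway_output(opn_responses, threshold=0.5)
               / pathway_output(wt_responses, threshold=0.5))
# the linear pathway keeps 1/3 of the signal; the thresholded pathway keeps none
```

Under these toy numbers the linear pathway degrades gracefully while the thresholded pathway collapses, which is the qualitative asymmetry the talk predicts.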
What I'm showing here are response rasters for two different flash strengths: in the upper rasters the intensity was 0.01 R* per second, and in the lower ones 0.02. So if you think about collection from 10,000 rods, you can imagine that within the integration time of the ganglion cell, the cell sees maximally just several tens of isomerizations, even for the brighter flash. We then quantified the sensitivity of the cells by running a two-alternative forced-choice task, basically between pre-stimulus and post-stimulus windows. Essentially, we ran an ideal-observer detection task such that the algorithm had to decide whether the response happened where it really should happen, after the flash, or preceding the flash. As you can see, as the flash strength gets stronger, the algorithm always detects that there is light, and if you make the flash really dim, eventually the algorithm reaches chance level. The basic finding here is that ON and OFF ganglion cells show very similar sensitivity; ON cells are slightly more sensitive at the very lowest light levels because of the nonlinearity. Okay, this basically is not too much new information; it is exactly like parasol cells in the primate, already published before this work. So how about the Opn1lw model mouse? Well, surprise, surprise, our hypothesis was correct. We saw a big sensitivity shift in the ON pathway compared to the OFF pathway, such that ON ganglion cells are now about tenfold less sensitive than OFF cells. So we got exactly what we wanted, and now we have a tool to ask how behavior depends on these ganglion cell outputs. Before that, though, we were lucky to have tough reviewers on this submission; they required more evidence that alpha cells are indeed the most sensitive cell types in the mouse retina. We don't do multielectrode arrays like Greg Field and colleagues.
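The two-alternative forced-choice readout described above can be sketched as follows. This is a minimal stand-in, not the lab's actual ideal observer: it assumes the observer simply picks whichever window (pre- or post-flash) contains more spikes, guessing on ties, and the simulated spike counts are arbitrary toy numbers.

```python
import random

def two_afc_percent_correct(pre_counts, post_counts):
    """Toy 2AFC readout: on each trial, choose the window with more
    spikes (split ties by guessing). Performance approaches 50%
    (chance) as the flash becomes vanishingly dim."""
    correct = 0.0
    for pre, post in zip(pre_counts, post_counts):
        if post > pre:
            correct += 1.0
        elif post == pre:
            correct += 0.5  # tie: the observer guesses
    return 100.0 * correct / len(pre_counts)

random.seed(1)
# simulated ON-cell spike counts: baseline window vs. baseline + flash drive
pre  = [sum(random.random() < 0.20 for _ in range(20)) for _ in range(1000)]
post = [sum(random.random() < 0.35 for _ in range(20)) for _ in range(1000)]
pc = two_afc_percent_correct(pre, post)  # well above 50% for this flash
```

Making the `post` rate approach the `pre` rate drives `pc` toward 50%, mirroring how the algorithm in the talk falls to chance for the dimmest flashes.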
So when we first got the reviews, we were slightly scrambling to get our bells and whistles together and figure out what to do. But thanks to Lina Smeds's heroic efforts, we took an approach that worked for us. In flat-mount preparations she went after every soma: she found the ON and OFF alpha cells and then went after every soma nearby, and during the three or four hours while the preparation was in good condition she could sometimes record up to 30 or 40 cells in these kinds of experiments. Since we know that all of the ganglion cell classes tile the retina, and the mosaics are independent, or semi-independent if we take into account the recent meta-mosaic findings and so on, you can compute how many cells you have to go through and what the probability is of missing any class, by taking into account the cell densities published by Tom Baden and colleagues. We found that if we go after 200 or more cells, which is what we did, then for each class the probability of having recorded at least one cell is high; at worst we might miss about one type or so. And as you can see in this sample preparation, the ON and OFF alpha cells in this one mount indeed were the most sensitive, and the other cell types span even several log units in sensitivity for these kinds of stimuli. This is the population data: in wild type, here on the left, you can see that ON and OFF alpha cells are indeed more sensitive than all the other cells, which group into a single cluster. And interestingly, in the Opn1lw mouse, the OFF cells are indeed more sensitive than the ON cells, but even the ON alpha cells were on average still more sensitive than most of the other types.
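The sampling argument above reduces to a simple calculation. The sketch below uses made-up class fractions (the real densities come from the literature, as the talk notes); assuming each blindly sampled soma belongs to class i with probability `fractions[i]`, the expected number of classes missed after n cells is the sum of (1 - f_i)^n:

```python
def expected_missed_classes(fractions, n_cells):
    """Expected number of ganglion cell classes with zero cells sampled,
    after blindly recording n_cells somata. fractions[i] is the fraction
    of all ganglion cells belonging to class i (illustrative values
    below, not real published densities)."""
    return sum((1.0 - f) ** n_cells for f in fractions)

# hypothetical fractions for five rarer classes
fractions = [0.05, 0.03, 0.02, 0.015, 0.01]
e200 = expected_missed_classes(fractions, 200)
e50 = expected_missed_classes(fractions, 50)
# sampling 200 cells makes missing a class far less likely than sampling 50
```

With these toy fractions, 50 cells would be expected to miss more than one whole class, while 200 cells reduce the expectation to a fraction of a class, which is the spirit of the "200 or more cells" criterion in the talk.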
So the key part from the point of view of our story is that for ON cells there is a tenfold sensitivity difference between Opn1lw and wild type, while for OFF cells there's only a slight, threefold sensitivity difference between wild-type and Opn1lw mice. So now we have a really nice tool animal to try to figure out the behavioral connection to ganglion cell outputs. Okay, so then of course the question was how to correlate behavior with the retinal outputs, and how these sensitivity differences, especially between the ON and OFF pathways, manifest at the level of mouse behavior in a maze-based behavioral task. For that we have spent a lot of time over several years, even preceding and after this paper, and we have developed this "black maze": end-to-end quantification of behavioral decisions from single-photon stimulation. We have a black room where the experimenters work wearing night-vision gear, and these days also blackout clothing based on an ink technology that doesn't reflect infrared light, and the mice come to this room for the experiments. We have a black maze there; everything is dark, and the only way to visualize anything is with infrared light, at such low infrared levels that its activation of the system is below the noise level. For me, that is the definition of darkness: any activation that is insignificant compared to, for example, pigment-related noise in the system. In this behavioral paradigm we place mice into this water maze, and in the center we have a transparent tube, shown here, such that they will be orienting and trying to see a dim light in one of the six channels. If they are trained on this task, they try to swim towards the light, and there is an escape ramp below the water surface if they find the light. This way we can run these experiments, and then the data flows from there.
We track the mice with our own deep-learning-based neural network, which is very fast, and we can store the raw videos and metadata and also analyze the data online. Before the experiments we need to train the mice, and generally it takes a bit more than two to three weeks, using bright light, for them to learn to associate the light with the escape ramp. I have to highlight that they only swim four times a day, so those two weeks or so amount to just some minutes of real swimming time. If one, inspired by Markus Meister's recent papers, computes the learning rate, this is actually an incredibly effective task compared to many other tasks: the mice learn very fast when they are actually doing it. Most of the two weeks they are just sitting in the mouse facility, so this is a very effective technique. Then, when they have reached the steady state of learning, which we call 80 to 100% performance with bright light, we start the real experiment: we start to manipulate the light level and map the probability of finding, in this case, the correct channel. It goes without saying that the location of the stimulus is randomized across the six channels such that they can't use any other cues for finding the light. So then let's look at the data. This slide shows, with black symbols, data from the wild-type mice, and with red symbols data from the Opn1lw mice, and the insets show population track data at one intensity, in this case the intensity where we see the biggest difference.
As you can see from the population data highlighted in the inset, where we have aligned the data such that the right choice, the stimulus channel, is at the top, the wild-type mice easily find it at this light level, whereas the Opn1lw mice are all over the place. Now when we quantify the whole psychometric function, you can see that at the dimmest lights they find the right channel roughly 17% of the time, which is the chance level, and at the highest light levels they basically find it always. There are some other phenomena happening at really bright light levels, but they reach that high performance before this kind of light avoidance starts to happen. And the key region here is where the transition between chance level and always finding the right place happens; you can see that there is about a tenfold difference between wild-type and Opn1lw mice. Okay, so: a tenfold behavioral difference, a tenfold difference in the sensitivity of the ON alpha ganglion cells, and only a threefold difference for OFF cells. Bingo, it looks like behavior is defined by ON alpha cells. We were pretty happy with this, but again we were really lucky: we had reviewers who wanted to see this done properly. They said, okay, you need a full model to do this. So what is the full model then? What I showed is of course just a qualitative comparison, a tenfold effect here, a tenfold effect there. In the real task, the mouse is looking for the light; the light is traversing the mosaics of ganglion cells in the retina, and the ganglion cells convert these sparse photons hitting the retina into spike trains, which are fed to the brain to make decisions.
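The psychometric function described above has a natural parametric sketch. This is a generic Weibull-style form with a 1/6 chance floor matching the six-channel maze; the threshold and slope values are illustrative, not fitted to the lab's data. A tenfold threshold shift, as between the wild-type and Opn1lw mice, simply slides the curve sideways on a log-intensity axis:

```python
import math

def psychometric(intensity, threshold, slope, chance=1.0 / 6.0):
    """Weibull-style psychometric function: performance rises from the
    guessing rate (~17% with six channels) toward 100% as intensity
    increases past the threshold. Parameters are illustrative."""
    p_detect = 1.0 - math.exp(-(intensity / threshold) ** slope)
    return chance + (1.0 - chance) * p_detect

# at the same intensity, a tenfold-higher threshold leaves performance near chance
wt  = psychometric(1.0, threshold=1.0, slope=2.0)   # well above chance
opn = psychometric(1.0, threshold=10.0, slope=2.0)  # still near 1/6
```

At zero intensity the function returns exactly the chance level, and for intensities far above threshold it saturates at 1, reproducing the two asymptotes visible in the slide.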
We can then make an ideal-observer model such that the ideal observer, based on how the individual mice behaved, can rely on either the ON or the OFF alpha cell mosaic and try to make optimal decisions. Of course, we don't know exactly how the mouse brain works, but we can make the optimality argument with the model and then look at what would be the best behavioral decisions one could make based on it; that is what we implement here as an ideal-observer model. And that's what we did. In order to do that, you need tracking, and we luckily had the tracking in place. Maybe I didn't mention it, but our camera is on top, and since the mice are swimming in the water plane with their head always a bit upward, that is actually a really great setup for tracking, because we can identify the head direction for every frame while they are swimming. By pushing this a bit further, we can even do uncalibrated eye-movement measurements. If the mice were moving in 3D and had pitch and roll dimensions as well, this would be incredibly difficult. So it's almost like head-fixed tracking, but the head is not fixed; the animal can actually express a lot of behavior and strategy here. This water maze is actually quite a remarkable place to look at behavioral decisions. What I'm showing here is just a couple of examples of how we do the tracking. We also did uncalibrated eye tracking, and as you can see, the eye and head directions are somewhat uncorrelated, so there are corrective eye movements, which we can take into account in our model as well. This video will now show one trial where the mouse is placed in the center; the tracking system also identifies the exact position of the maze.
These two eyecups here demonstrate how, while the actual mouse is making its behavioral track, the stimulus spot located in one of the channels moves across the mosaics of ganglion cells. Even though the stimulus is highlighted as a green spot, remember that it really consists of very sparse hits of photons; it would be like an extremely sparse photon rain rather than a continuous spot. So here the mouse does the behavioral task; the stimulus is located in this channel, and you can see how the stimulus moves, in our model, across both eyecups based on what the animal does. Our task is then, based on that, to understand how the stimulus gets mapped into the spike trains of the ganglion cells. Johan Westö, a postdoc in the lab, worked mainly on this part, and Nataliia collected nice ganglion cell data for it. They constructed a very simple linear-nonlinear model to quantify the mapping of sparse photons into the spike outputs of the ganglion cells. We measured the temporal and spatial filters of these ganglion cells, and then we validated the model by actually driving moving spots of light across the receptive fields of these ganglion cells, where the rate of motion was constrained by the movement we measured or predicted from our tracking. It turned out that the model predicted the responses to these moving spots very well. And for this audience, an important thing to highlight, which bridges back to the Darth Vader quote, is that these cells sum photons in time and space such that time and space are independent; there is real time-space separability in terms of summation. And that means that motion, for example, doesn't engage different nonlinearities.
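A minimal sketch of the kind of linear-nonlinear (LN) mapping described above, assuming (as the talk states) a separable space-time filter followed by a static rectifying nonlinearity. All filter weights, the gain, and the baseline are invented for illustration; they are not the lab's fitted parameters:

```python
def ln_rate(stimulus, spatial_w, temporal_w, gain=5.0, baseline=0.5):
    """Separable LN sketch: stimulus[t][x] holds photon counts per time
    bin and spatial bin; the drive is a spatial filter applied per frame,
    then a temporal filter over past frames, and a rectifying
    nonlinearity converts the drive to a firing rate. Illustrative only."""
    rates = []
    for t in range(len(stimulus)):
        drive = 0.0
        for lag, w_t in enumerate(temporal_w):  # temporal filtering over lags
            if t - lag < 0:
                break
            frame = stimulus[t - lag]
            drive += w_t * sum(w * s for w, s in zip(spatial_w, frame))
        rates.append(baseline + gain * max(0.0, drive))  # rectification
    return rates

spatial = [0.2, 0.6, 0.2]     # toy center-weighted receptive field
temporal = [0.5, 0.3, 0.1]    # toy decaying temporal filter
stim = [[0, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 0]]  # one photon hit at t=1
r = ln_rate(stim, spatial, temporal)  # transient rate increase that decays
```

Because the filter is separable, a moving sparse stimulus just re-weights the same spatial and temporal kernels; nothing new is engaged by motion, which is the point made in the talk.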
And that makes these really low light levels a very powerful regime, because you can get a very good prediction from sparse photons to actual spike rates in these cells. So then we are getting towards the end of the first part of the talk. There is one free parameter in these models: we don't know how long the mouse brain integrates information. If you were totally ideal, you could make the integration time infinite, meaning the entire swimming track; that would be this dashed line, and as we can see, the behavioral data falls short of that, so the mice are not integrating perfectly. We can then adjust this integration parameter in the wild-type mice such that the data matches the model, and if we use, for example, about a 300-millisecond integration time, there is a nice match between the data and the model. In wild-type mice, the ON and OFF cells do about the same thing. Now comes the important step: we did this in wild-type mice and we locked the parameter, so we no longer have any free parameters. Then we asked, okay, how about the Opn1lw mice, without any free parameters? And here is the punchline figure of this paper. You can see everything plotted in the same figure: the wild-type data, the behavior and the model, and now the Opn1lw behavioral data. As you can see, the Opn1lw behavioral data is in extremely good agreement with the ideal-observer readout of the ON-cell mosaic, whereas it falls clearly short of the OFF-cell mosaic. So behavior can't optimally use the OFF cells, which encode information by silencing their firing rate; it really relies on the ON cells. The conclusion here is that the brain doesn't optimally use the information content in the decreases of OFF-cell firing rate. If you just think in terms of information, if you could build any kind of decoding mechanism, it would be better to read the OFF cells, but that's not what nature seems to be doing here.
So you need behavior to know how nature works. Concluding the first part of the talk: behavioral detection of light increments at visual threshold relies on the information provided by the retinal ON pathway. And secondly, this happens even though the OFF pathway would allow higher visual sensitivity, at the cost of lower fidelity. This conclusion is based on the Opn1lw mouse model system. I think I'm doing quite fine with timing, so now I'm going to switch gears. Everything so far has been published; now we start to move towards unpublished science, and I'll show you some of the newer stuff from the lab. The second part: you might ask, okay, what are the OFF cells then doing, if nature is not using them optimally? Specifically, the finding was that for increment coding, the decoding mechanism was reading the increasing firing rate of the ON cells. If we think of ON cells and OFF cells as counterparts, the ON cells respond to light increments by increasing their firing rate and the OFF cells by decreasing theirs; for light decrements, it would be exactly the reverse. So we wanted to ask how behavioral detection of the weakest light decrements depends on the retinal ON and OFF pathways. This is ongoing work which is already quite far along. Regarding these stimuli: we have been asking, for maybe a hundred years, what is the dimmest light we can see, but we haven't really asked how many photons can be missing. In incredibly dim light, I would call these stimuli quantal shadows, and this would be a quantal-shadow detection task. Imagine that you are outside in extremely sparse rain, and the mean rate of the rain changes just a tiny little bit, and you have to detect that, okay, this is less rain.
So these are the stimuli where we create shadows which are quantal shadows; this is the darkest of the dark stimuli you can imagine. And then the question is: how do the most sensitive ON and OFF ganglion cells encode these quantal shadows, and how does behavior depend on these signals? We had to develop a bit of new technology here again. Since we are in Finland, we thought about the Winter War, where Finnish soldiers were wearing white outfits, and we came up with our own modification: the white maze. Here you see an experimenter wearing a very similar outfit to our soldiers in the late 1930s, and this shows the white maze. Everything in the setup reflects the sparse photons, and there is a reflective maze. One of the channels now has a dark spot which doesn't reflect any light, or rather is made out of a material whose reflectance is as close to zero as possible. It is highlighted here in bright green light, but you can imagine that when you make this extremely dim, you don't see anything but a green fog of photons, and your job is to figure out where some photons are missing. This shows a video of that maze. The whole approach is completely analogous to the dim-light detection task, except that now the stimulus is this dark spot; the crosses and marks you see here are just orientation markers for our tracking system, and Tuomas, who is here, has optimized our DeepLabCut-based tracker for this setting. Here you can see a mouse put into the center of the maze to start looking for the dark spot, and the tracker detects the position of the mice and also the head direction. I'm sorry this visualization is not perfect, but just believe me, the tracker does a good job.
We then take a similar modeling approach: when we know what the mouse is doing and where the head is moving, we can map the dark stimulus spot onto the receptive-field mosaic of the cells. I forgot to highlight this; actually this figure shows it quite well. Here you see a small fraction of the cell mosaic, and as you can see, when the mice are in the center of the maze, the stimulus size is on the order of the size of the receptive fields of these cells, so it is not hitting many cells at once; it hits a small number of cells of a particular type during one frame. We then made ganglion cell recordings and again constrained an LN model to quantify the ON and OFF alpha cell responses to these stimuli, and we feed them to an ideal observer; Nataliia Martyniuk did the recordings and Johan did the modeling. In the interest of time, I'm not even showing you the ganglion cell data; I'll cut directly to the end conclusion of this part of the talk. This is the key figure, panel C here: percent of correct choices as a function of background light, with behavioral data shown by black symbols, and the theoretical limit, defined by physics and the losses in the system, if you did a perfect job. You can see that behavior falls short of that limit, but not very far. So the mice are doing an incredibly good job detecting, first of all, a light decrement: they can see shadows where only a few photons are missing in pools of hundreds of rods. It's incredibly good. And then the second key conclusion: as you can see, the behavior, with very similar modeling, really fits the OFF alpha cell prediction, whereas if you relied on ON cells, the behavior would fall short of the ON-cell prediction. So it really seems, based on the data we have at hand, that the dimmest light decrements are encoded by OFF alpha cells in the mouse retina.
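The physical limit mentioned above can be approximated with standard Poisson signal-detection arithmetic. This is a rough back-of-the-envelope sketch (Gaussian approximation to the Poisson counts, arbitrary example numbers, and no eye or retina loss factors, which the real model includes): discriminating a photon count with mean N from one with mean N(1-f) gives a discriminability of roughly d' = fN / sqrt(N) = f*sqrt(N).

```python
import math

def decrement_dprime(mean_photons, fraction_missing):
    """Rough ideal-observer discriminability for a quantal shadow:
    a fraction f of photons missing from a Poisson pool with mean N
    gives approximately d' = f * sqrt(N). Gaussian approximation,
    illustrative numbers only."""
    return fraction_missing * math.sqrt(mean_photons)

# a 10% dip in a pool that collects 400 photons is already well detectable
d = decrement_dprime(400, 0.10)
```

The square-root scaling is the reason "a few photons missing among hundreds of rods" is detectable at all: the shadow signal grows linearly with pool size while the Poisson noise grows only as its square root.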
So the conclusions of this short part two of my talk are that behavioral detection of the dimmest light decrements, or quantal shadows as we call them, depends on the most sensitive OFF ganglion cells, the OFF alpha cells, in the mouse; and that the most sensitive OFF ganglion cells can detect light decrements that amount to only a few missing photons among hundreds of rods. To conclude parts one and two together: it is quite remarkable that there is a division of labor between increment and decrement coding starting already at extremely low light levels, such that ON alpha cells carry the role of encoding light increments and OFF alpha cells the role of decrement coding. So the division of computational labor happens already at these light levels, and particular cell classes at least seem to correlate with particular behavioral tasks. Then I'm going to move to part three, the last part of my talk, which is also unpublished data. I said that in this paradigm one can move from one species to another, and now we did exactly this: we started to look at human and primate vision, in collaboration with Fred Rieke, and we asked the hundred-year-old question of what is the dimmest light that humans can detect, and which neural mechanisms actually limit our detection of the dimmest lights. The approach relies on using the monkey retina as a proxy for the human retina. I think it's a really good proxy, because these recordings require extremely good preparation condition, which is more achievable with the monkey retina, and we then correlate that with human psychophysics. So there is a little bit of being liberal across species here, but I think being liberal is a good thing in general. So we went back to the classic psychophysics question, which of course was already asked soon after quantum physics came about: can humans see a single photon?
Especially important is the landmark 1942 paper by Hecht, Shlaer and Pirenne, which shaped our understanding of this question. This is data from Hecht, Shlaer and Pirenne: they delivered sequences of extremely dim flashes to their own eyes, in optimal dark-adapted conditions, in the peripheral retina, and asked whether they could see the flash or not. They found that a 60% or so performance level corresponds to a bit more than 100 photons at the level of the cornea. Then, based on the estimated loss factors in the eye and the shape of this curve, the conclusion was that humans can detect something like seven to nine isomerizations, absorbed photons, or so. This paper already showed one of my favorite results in science: since the photons were so sparse that they landed on about 500 or so rods, it already implied that a single rod must be able to encode a single photon. And this psychophysics experiment preceded by almost 40 years the recordings by Denis Baylor, Trevor Lamb and King-Wai Yau, where they showed electrophysiologically that a single rod can indeed encode a single photon in the vertebrate system. So this was, I think, quite cool. Now, having said that, the exact number, whether it's seven or nine photons, et cetera, has been debated for decades. The original model that Hecht, Shlaer and Pirenne applied was basically such that you have a linear retina and then a decision mechanism in the brain which implements a critical number of absorbed photons needed for detection. And if you tweak this number higher, you can see that this impacts the shape of the psychometric function, which was the key parameter they extracted: the steeper the shape, the higher the number.
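The Hecht-Shlaer-Pirenne model just described, a linear retina plus a criterion of K absorbed quanta, can be written down directly: a flash delivering a Poisson-distributed quantum count with some mean is "seen" when the count reaches K, and raising K steepens the frequency-of-seeing curve. A small sketch under those assumptions (the particular K values and intensity range are illustrative only):

```python
from math import exp, factorial
import numpy as np

def p_seen(mean_quanta, k):
    """Hecht-Shlaer-Pirenne frequency of seeing: probability that a
    Poisson-distributed quantum count reaches the criterion k."""
    return 1.0 - sum(mean_quanta**n * exp(-mean_quanta) / factorial(n)
                     for n in range(k))

def log_width(k, lo=0.2, hi=0.8):
    """Width (in log10 mean intensity) of the psychometric curve between
    two performance levels: a crude measure of its steepness."""
    means = np.logspace(-1, 2, 2000)
    p = np.array([p_seen(m, k) for m in means])
    return np.log10(means[p >= hi][0]) - np.log10(means[p >= lo][0])

# A higher criterion k gives a steeper frequency-of-seeing curve,
# i.e. a narrower transition on a log-intensity axis.
w1, w7 = log_width(1), log_width(7)
```

This is exactly the property the speaker mentions: the steeper the measured psychometric function, the higher the inferred quantum criterion.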
Since then there has been a lot more progress in this domain, but essentially all the later models, even with differences in the noise model et cetera, still fall within this general picture: a linear retina and then a single nonlinearity or decision mechanism. Later it was shown, especially by Greg Field, Fred Rieke and others, that since the system is not constrained well enough, with the noise model and so on, those numbers and their interpretations are ambiguous. One of the challenges is that you just have the psychophysics data and the photon statistics, but no access to an estimate of what the retinal output is doing. So this is where we tried to make progress. There are two key points in the approach, and I'll go through the data quite quickly. First, the work that Fred and I did showed that there is indeed nonlinear coincidence detection in the ON pathway. And now Lina's work showed that behavior in mice really relies on that pathway. So if we are species-liberal and think that all vertebrates work the same way, the hypothesis would be a new kind of model: behavioral detection of light increments relies on the nonlinear ON pathway of the retina, read out by the brain, which can add a further nonlinearity. We wanted to test this hypothesis against the classic model: a linear retina, but still with a potential nonlinearity in the brain. That has also been discussed. Sakitt and others have proposed that the brain might actually have access to each individual photon, and there is also newer work claiming that humans could see single photons. So our hypothesis is that nonlinear signal processing in the retinal ON pathway sets a fundamental limit to the detection of the weakest light flashes in humans.
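Purely as an illustration of this hypothesized architecture, thresholded subunits in the retinal ON pathway followed by one more nonlinearity in the brain, here is a small Monte Carlo sketch. Every number in it (subunit count, noise level, both thresholds, photon amplitude) is a made-up placeholder, not a value from the study; the point is only that such a two-stage cascade rejects noise and most single-photon events while still detecting multi-photon flashes.

```python
import random

def poisson_sample(mean, rng):
    """Knuth's Poisson sampler (adequate for small means)."""
    l, k, p = 2.718281828459045 ** (-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def two_stage_response(flash_mean, n_cells=3, subs_per_cell=20,
                       noise_sd=0.25, sub_theta=0.7, brain_theta=1.5,
                       rng=random):
    """Photons land on bipolar subunits; each subunit carries continuous
    noise and is thresholded (retinal nonlinearity); ganglion cells sum
    their subunits; the pooled signal must then cross a second, central
    threshold (brain nonlinearity)."""
    pooled = 0.0
    for _ in range(n_cells):
        amps = [rng.gauss(0.0, noise_sd) for _ in range(subs_per_cell)]
        for _ in range(poisson_sample(flash_mean / n_cells, rng)):
            amps[rng.randrange(subs_per_cell)] += 1.0    # one photon, unit amplitude
        pooled += sum(a for a in amps if a >= sub_theta)  # subunit threshold
    return pooled if pooled >= brain_theta else 0.0       # brain threshold

rng = random.Random(1)
def seen_fraction(mean, trials=2000):
    return sum(two_stage_response(mean, rng=rng) > 0 for _ in range(trials)) / trials

fa = seen_fraction(0.0)   # false alarms in darkness: rare
h1 = seen_fraction(1.0)   # ~1-photon flashes: mostly sacrificed
h5 = seen_fraction(5.0)   # multi-photon flashes: reliably seen
```

The qualitative behavior (false alarms rare, single photons largely lost, coincident photons reliably transmitted) is the "sacrifice singles for reliability" tradeoff the talk argues for.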
To really test this hypothesis we needed a bit more than just recording primate ON and OFF ganglion cells and human psychophysics. It turns out the detection paradigm by itself is not powerful enough to nail down which model is correct. So we did the detection paradigm, which means darkness-versus-flash trials, but we also did a discrimination task. I'll come back to why this is really important, but the task is such that your job is to decide, between two extremely dim flash intensities, which one is brighter. So we had the detection task, where you have to decide whether there is a flash or not, darkness versus a dim flash, and then the brightness discrimination task between two extremely dim flashes of light. We made recordings of ON and OFF parasol cells in Fred's lab, on macaque retinas in optimal conditions, and then did human psychophysics in my lab. Now I'm coming to this important feature of the detection versus discrimination comparison, and I'm showing it basically by demonstrating how it splits the model predictions between linear and nonlinear processing. Let's focus first on the left side of the slide. This describes the linear processing regime, no matter whether it's the retina or the entire system. If you have purely linear processing, and for the sake of simplicity this assumes a model where you have darkness, you have some intrinsic noise in the system, and then you add flashes of increasing intensity that add Poisson-distributed photon counts, you get a noise distribution, then a signal-plus-noise distribution, one at a higher signal intensity, and then again a higher intensity.
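The slide's argument can be made concrete with a small ideal-observer calculation. In a 2AFC trial the observer picks the interval with the larger response, so percent correct is the probability that the signal interval produces the larger count, plus half the probability of a tie. The threshold nonlinearity below (counts under a cutoff silenced to zero) is only a cartoon of the retinal thresholding discussed in the talk, and the dark-noise rate and threshold value are illustrative numbers rather than fitted parameters:

```python
from math import exp, factorial

def poisson_pmf(mean, n_max=60):
    """Poisson probability mass function, truncated at n_max."""
    return [mean**n * exp(-mean) / factorial(n) for n in range(n_max)]

def pc_2afc(pedestal, delta, dark=0.2, theta=0):
    """2AFC percent correct for Poisson counts, with an optional
    threshold nonlinearity: counts below theta are silenced to 0."""
    f = lambda n: n if n >= theta else 0          # cartoon retinal nonlinearity
    p_ref = poisson_pmf(pedestal + dark)          # reference interval
    p_sig = poisson_pmf(pedestal + delta + dark)  # brighter interval
    win = tie = 0.0
    for ns, ps in enumerate(p_sig):
        for nr, pr in enumerate(p_ref):
            if f(ns) > f(nr):
                win += ps * pr
            elif f(ns) == f(nr):
                tie += ps * pr
    return win + 0.5 * tie                        # guess on ties

# Linear observer: detection (pedestal 0) beats discrimination (pedestal 5)
# because Poisson variance grows with the mean.
pc_det_lin = pc_2afc(0.0, 2.0)
pc_dis_lin = pc_2afc(5.0, 2.0)
# A threshold nonlinearity specifically hurts detection of the dimmest flashes.
pc_det_nl = pc_2afc(0.0, 2.0, theta=3)
```

This reproduces the qualitative split on the slide: in a purely linear Poisson system, detection is the easiest condition, whereas a thresholding system loses the dimmest signals and pays its largest penalty exactly there.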
In the detection task, in a two-alternative forced-choice framework, your job is to discriminate between noise trials and signal-plus-noise trials. In a pure Poisson distribution the mean and the variance are equal, so the distributions get wider the further you go in intensity. That means that if your job is to figure out the discriminability between two distributions separated by a fixed mean difference, you always do the best job when the intensity is lowest, because the distributions get wider. So the detection task, shown in blue, gives better performance than the discrimination task in a purely linear, Poisson-limited system; you can then add noise et cetera, and that shifts things somewhat. Now, if we plot the intensity needed for a just-noticeable difference as a function of the pedestal intensity, the comparison intensity, you can see that the detection task is always this one point, and as your pedestal increases you just do worse. For a nonlinear processing model it works differently, because if you set a threshold, you lose everything below the threshold. That means there is an early part where you do worse: the detection task does worse than the discrimination task, and then things start to get worse again when you ramp further into the Poisson-limited domain. So the linear and nonlinear processing models lead to qualitatively different performance. We wanted to test this, first at the retinal level in ON and OFF parasol ganglion cells by driving this task, then at the psychophysics level in humans, and then bridge the two in the framework of the model. Some of these recordings I was doing while still a postdoc with Fred, and Fred has been collecting more of this data in his lab now, which is really fun; I was still able to put my own fingerprint even on the data collection. I hope that
I will get back to that more in the future. So now here you can see spike rasters of parasol ganglion cells at really dim light intensities. This, for example, is about two isomerizations in the entire receptive field of the ganglion cell, up to eight at the highest intensity, and even the highest intensity is still incredibly low. Then this shows the ON cells, and you can see the big difference in the firing rates: OFF parasols have a high firing rate in the dark, while ON parasol cells are almost silent and respond to light by increasing their firing. When we looked at the percent-correct plot as a function of intensity, you can see that OFF cells indeed perform better in the detection task than in the discrimination task, which is essentially what the linear processing model predicts; for ON cells it's the opposite, the discrimination performance is better than the detection performance. Then we plot the just-noticeable intensity difference as a function of the pedestal intensity, meaning the flash intensity against which we test, and the dashed line shows the theoretical limit computed from physics. You can see that the OFF cells sit here for the detection task, and the discrimination task just gets worse; for ON cells we see exactly this dip. In the detection task they are almost a log unit off from the theoretical limit, so detecting a single photon is very difficult, but in discriminating between, let's say, 8 and 10 isomerizations, the system is almost perfect. So they do an incredibly good job in contrast discrimination by sacrificing single photons. How about human psychophysics? This shows the human psychophysics data. We did a 2AFC task at 18 degrees of eccentricity in dark-adapted humans, and you can see that, exactly like the ON parasols, in the discrimination task humans do way better than in the detection task for the same difference of means. The just-noticeable difference and behavioral
performance for detection, just whether there is a flash, is almost a log unit off from the theoretical limit, but in the discrimination task humans get extremely close to the theoretical limit set by physics, lining up qualitatively with the data from ON parasols. Okay, so how do these two things link together? With Johan's help we did some modeling to summarize the final part of the talk. This shows the just-noticeable difference in intensity, from one isomerization to ten, so incredibly dim light, as a function of pedestal intensity; here is darkness, so the detection task. You can see the theoretical limit, and that ON and OFF ganglion cells are both almost a log unit off from the theoretical limit for single-photon detection, but then the ON cells dip towards the theoretical limit whereas the OFF cells get worse. And here is the human psychophysics, which is shifted to higher intensities. First of all, from the OFF cell data there is no simple way, at least with any realistic model, to read out the human psychophysics performance; the OFF cells are doing worse here. But from the ON cells: if we just take a model with the subunit nonlinearity in the retina and then sum the outputs without any further nonlinearity in the brain, we can't quite get the psychophysics performance. But by building a model with one more nonlinearity in the brain, we can basically perfectly predict the psychophysics data. Essentially we need to increase the subunit numbers a bit beyond a single ON cell, so we are talking about integration over a few ON ganglion cells, with a second nonlinearity after that, and then we get exactly what we see in psychophysics. So the data line up with the idea that, as in mice, humans rely on the ON cell code in detecting the dimmest light increments. Incredibly interesting, I think, that they are really sacrificing performance in the
detection task, which has been the classic question, where humans don't do so well, but in contrast discrimination they are incredibly good. So I want to summarize the last part of my talk by saying that human visual performance for detection and discrimination of the dimmest light increments can be predicted from ON but not from OFF parasol ganglion cell responses, and secondly that nonlinear signal processing in the retinal ON pathway sacrifices single-photon events for optimal contrast coding at the sensitivity limit. So, in maybe a slightly sad way, we may have been asking the wrong question for 100 years. Human vision is not about seeing a single photon; what would you gain from it? But if you can sacrifice single photons, buried in a ton of noise anyway, and reach incredible contrast discrimination through the ON pathway, perhaps that is the more correct interpretation of the optimization. That, of course, is a point of discussion rather than data. Finally, I want to acknowledge the incredible people who actually did all the work, my group. Here is a picture from our summer get-together, which was outside in August because of the COVID times, and we have added to the side Tuomas, who was traveling at the time, Gabriel, who just joined the lab, and Alexandra and Krishna, the new PhD students in the lab. On that note, I would like to thank you for your online attention, and I'm happy to take questions.

Well, thank you very much, Petri, that was very interesting, thanks for that. I'd just like to point out to the audience that if they want to come into this room to ask questions themselves, or to continue the discussion, the link has been shared in the chat; just follow that link and you will join us in this room. Now, does anyone have any questions? Okay, no real questions? Really, we won't have any? Okay, that is surprising, you are a terrible audience. Or maybe I'm a terrible speaker. Well, okay, so I guess we can move to the next part, and so we're just
waiting for people to join us here in this Zoom room so we can continue on the topic. So we're waiting for you; I put the link in the chat, so do join us, as we're going to end the YouTube stream very soon. Now I have one from Greg Schwartz: in that last slide, what was the number of subunits in the nonlinear model? Am I right in assuming that this is basically two or three overlapping ON parasol receptive fields, i.e., the coverage factor?

Yeah, that's exactly correct. That was the number of ON cone bipolar subunits, and as Greg brilliantly read out from it, the model would be consistent with the idea that the higher-order readout has access to a slightly bigger number than a single ON parasol cell, just on the order of a couple of parasols, which is in line with what we believe is happening in these tasks.

Thanks for that. Hello Simon, we have Simon Laughlin with us. You appear to be muted, we cannot hear you.

Yeah, I have a question. In Barbara Sakitt's papers from the 1970s she reported that her human subjects' absolute threshold was sensitive to whether they had smoked a cigarette, whether they had drunk a cup of coffee, or whether they had been on a very bright beach in the previous three months. I was wondering whether you have evidence that mice are similarly sensitive to their previous history of visual experience and diet; they don't smoke, but still.

Yeah, thanks, that's a great question. We haven't looked at that in particular, not smoking of course, nor diet or hunger, but we have looked into something related, and we published it recently in Current Biology. We showed that even if the mice are dark-adapted, if they are in a different diurnal state, living their nominal daytime versus nighttime, they do way better in this psychometric task when they are living their nominal nighttime. And since in this paradigm we have an estimate of what the retina does: first of all, they used different strategies
for the light spot during nighttime, but that only explained part of the difference; the higher-order integrator was also in a better state at night. So it's somewhat related: there was high sensitivity at least to the diurnal rhythm in these tasks.

That's really interesting; that was going to be my second point, whether there is a diurnal rhythm, but there obviously is.

And that came out in the paper we published in Current Biology about a year ago, January 2020 I think; I'm happy to send you the reprint.

So is it known whether there are any structural changes in the mouse retina between day and night?

That was the underlying reason we started to look into this. For increment detection in darkness we didn't see a difference, at least in our flat-mounted preparation, in the sensitivity of the ganglion cell outputs, but there is indeed a lot of physiological evidence that, for example, gap junction coupling and multiple other features change. So what we hypothesize, also in the paper, is that detecting the dimmest light spot in the dark may not be in circadian conflict at the retinal stage, but probably some other computations are. The interesting domain to look at, I think, would be mesopic light levels, where the tradeoff between spatial and temporal coding, for example, could be much more utilized; just detecting the dimmest light is not necessarily the right task to look at. We have a bit of analysis, and now we are trying to dig deeper into which computations could be under circadian control; that is ongoing work.

Great, thank you. So before Tom asks his question, I have one from Ines Filippa. Apparently she missed part of the talk, but: where do you think the brain nonlinearity, the modulation of the signal, comes from?

You mean the brain part, I guess? She doesn't specify. Yeah, we don't know, and we don't even know the precise mechanism of the retinal nonlinearity either, so
that would be interesting to look into. There is some previous work, especially based on Horace Barlow's approach and later follow-ups, where instead of a behavioral task in which you just say whether you see the light, you can answer yes, maybe, or no; Barlow showed that there is a tradeoff with sensitivity, that the criterion level of the higher-order readout can be adjusted. That was one of the reasons why we ran our experimental paradigm in a two-alternative forced-choice framework. I can see Markku, the psychophysics expert on our team, is also part of this discussion, and we particularly discussed that this would be important: we tried to set up the paradigms so that we alleviate the higher-order criterion issues as much as possible. It has also been a debated issue, especially in relation to Sakitt's work and a recent paper by Tinsley, whether humans could have access to single-photon responses. In our hands it seems that one photon is almost impossible, but two or three, et cetera, across ON parasols are very doable.

Tom, you had a question?

Yes, actually I had a question about salamanders and such, but I'll spare you that for now; I've got another one. What really intrigues me is that you're saying it's all about the OFF channel at these light levels, but it's the wrong one, because the rod bipolar cells are all ON. This is a terrible design; what's happening?
But in this regime, if we focus on the vertebrate circuit, we are looking at the primary rod pathway, where the ON and OFF pathways are shared all the way until the AII amacrine cells, and only then does the split happen in the retina. It would be a very different ballgame if you went to light regimes where you drive the rod signal through cones, or if you start to drive the cone signal through those pathways. So I think this extreme dim-light regime is a very particular place, especially because the ON/OFF split happens so late, basically one synapse from the output. What I find super exciting is that there is already a division of labor mapped onto well-defined behavior: for mice in the dark, if you are playing a survival game, shadow detection is probably a very important computation, and OFF alpha cells do an incredibly good job of encoding it. It was also interesting in that sense. Sorry, maybe a long answer, but to keep going: for the increment part we needed this OPN animal to really create the asymmetry; for the decrement part, when we put on really dim backgrounds, ON cells don't do a very good job, because with light decrements you can only silence the light towards darkness, and your stimuli are also limited by the fact that the largest contrast you can present is bounded by absolute darkness. In that way the ON cell response properties don't allow very effective encoding of decrements. So there was a natural split in the retina which made it easier for us, in a way that we didn't need to maximize anything.

I do buy that, and it is an explanation of why the OFF ganglion cells are doing it, not the ON cells, why they're better; but it's not an explanation of why the rod pathway is that way round.

Yeah, that's true. So basically you're saying that maybe at these light levels it's not about the rod bipolar cells at all, or maybe it's about the direct pathway. Yes. And the other, more coding-related point is that, of course,
we looked at just two computations, increment and decrement, but for these two computations, where we can really link behavior to the retina, the decoding algorithm seemed very simple. You have, and you know this better than me, 44 or so ganglion cell types, and if a cell sees the feature it likes, it increases its firing rate, and the brain says, aha, this cell saw the feature it's interested in. The decoding doesn't necessarily do all these fancy things we can do in MATLAB, information theory and all of that, at least not in these two computations, which suggests that maybe nature is doing simple and beautiful things with the design.

Yeah, well, it certainly seems so; let's still remain intrigued as to why the rod bipolar cell circuit is that way round, and why there isn't a second one. I guess it just works, and that's what it is. Well then, let me ask about salamanders as well: they've got much bigger rods. Are they better?

I don't know if anyone has looked into extremely dim light detection in salamanders. I worked on salamander rods in my first postdoc, and the rods are brilliant at single-photon detection. I actually have a recollection, which comes now with a bit of an error bar, that the overlap between the continuous noise distribution and the single-photon response distribution, in terms of amplitudes, might be even more favorable in the salamander system. But I don't really know the downstream circuitry. My hunch is that one couldn't identify the ganglion cell types there as well, and with current understanding, you might know better how well one could hit the most sensitive ganglion cell types in salamander and what would be a good readout.

Yeah, I don't know, I think we looked at that once.

One thing here, and I think this is why mouse and
even monkey retina, especially the parasols, have been really great, is that you can really go after cell-type-specific readouts for this question, and then constrain them, under extremely well matched conditions, against the psychophysics or behavioral experiments.

Right, but then you're linking the parasol cells and the alpha cells; is that a link that has actually been established? Are they the same cell in a different animal?

Well, that's a good question, but to my knowledge the parasol is the closest homologue of the alpha cell. To what extent that has really been nailed down, for example that no other cell in the primate retina would come closer, I don't think that's been done; it would be the closest proxy in a non-ideal world. When I was in Fred's lab I was interested in this question at the time, so it's not published, but I looked across multiple ON and OFF types, and it seems that in darkness even ON midget cells became highly nonlinear, so multiple ON cells share this feature. This linear-versus-nonlinear split might be much more universal across many of the types, and it might manifest somewhat differently, but I don't think anyone has mapped it fully. One thing I also didn't highlight is that getting even the monkey retina into this condition is not trivial. It might be one preparation out of five where you really have the ultimate sensitivity. Even when you do the dissection and feel the attachment of the RPE, when you see the pigment granules and everything, then yes, this could be a good preparation; but often it's not the case that the whole retina is in great shape. Most of the time when I was in Fred's lab I was seeking out the best, highest-sensitivity parts of the retina, and once I identified those I
started to work, sometimes spending seven hours mounting piece after piece to find something in great shape, and recording from there. If you just took arbitrary pieces of primate retina, most of the cells would be in a less sensitive state, with high firing rates; only in the best of the best preparations do you see this feature. That is the hardest part. So when I worked there I had a couple of projects; I worked on the other ones whenever I didn't have the ultimate-sensitivity tissue, and only on this one when the preparation was perfect. I think Fred's lab is a bit further along with this now, but it's still not an easy preparation, and with a multi-electrode array, because you're squeezing the tissue, it might be even more difficult to get everything into that state.

Let me just jump in here to say to the audience that I'm going to end the stream now, so if you want to join the room, do that now; I'll give you two minutes to do so. Thanks, Petri, very much, that was a very interesting talk. I'd like to remind our audience that we will have another talk next Monday, another vision talk, so I hope to see you there. Thanks again, Petri.

Thanks. All right, well, I'm still wondering about the whole evolution thing, so you know the