Hello everyone, and welcome to the vision seminar series, as always on World Wide Neuro. I'm George Kafetzis, [inaudible], and as your host for today I would like to start by thanking Tim Vogels and [inaudible] for making this seminar series possible. With that said, let's turn to the reason we are gathered here today and welcome our speaker from the Institut de la Vision, Olivier Marre. Olivier did his undergraduate training in engineering [inaudible], and his PhD with Yves Frégnac, combining theoretical and experimental approaches to study the dynamics of cortical networks. He then moved to Princeton for his postdoc, to study the population code of retinal ganglion cells. In 2012 he set up his own group at the Institut de la Vision in Paris, where his lab works on how the retina encodes visual scenes, on topics ranging from population coding to vision restoration through optogenetics and gene therapy. In today's talk we will hear about how the retina processes natural scenes and how context shapes that processing. Without further ado, Olivier, the floor is yours. Thank you very much, George, for the very kind introduction; I don't think I deserve that many nice words, but thank you. [inaudible] First, I want to thank George and [inaudible] for the invitation, but also, in general, for organizing this series, because this series has been really informative and great, and also probably one of the few really positive outcomes of COVID, I guess. So I really wanted to thank them for organizing that. This has been a great resource that I've used a lot in my lab, for group meetings in particular. So, as George said, I'm going to talk about recent progress on how the retina processes natural scenes. But before that, I just wanted to make a general point, which goes beyond the retina, about the way we study that kind of question. Let's say you have a complex natural scene. The neural coding question is basically trying to understand how this complex visual scene is transformed into a set of spike trains from different neurons. So here you see a raster where this is time passing, every line is one cell, and every point is a spike.
And essentially, the neural coding question is trying to understand the relation between the stimulus and this response, wherever you're recording these neurons from. Broadly speaking, there have been many approaches to address this issue, to understand how this visual scene can be transformed into a set of spike trains. On one end of the spectrum, I'd say there's something like a classical approach, where you use relatively artificial stimuli, like these checkerboard patterns, or these gratings with various orientations. And this has been extremely useful to get insight about how neurons process visual information. I've mentioned, of course, a few historical papers and reviews and more recent ones; there's obviously tons of it, I could only mention a small piece. The idea is that out of that, you can get very interesting properties: you know that neurons that process visual information have receptive fields, or can have orientation tuning if you talk about the visual cortex. Similar examples can be found in the retina. And this is definitely very useful. However, the drawback on that side of the spectrum is that you still have to address the issue of whether this is relevant to how these neurons will process natural scenes. The other approach is to use natural scenes directly. Because, hey, what's more ecologically relevant than a natural scene? And this tradition dates back, in various forms, a long time, at least to Jerry Lettvin, shown here, casually smoking in his rig; you can see the Faraday cage behind him, I think. On one hand, this is great because this is ultimately what you want to know: how does that work in real life. On the other hand, the issue is that a visual scene is composed of many parameters. While with artificial stimuli you can vary the parameters systematically and find when the neurons are responding, here many things evolve at the same time, so it's obviously harder to tease apart who is doing what. The most recent flavor in that direction is to learn complex models and fit them to the data: for example, deep network models, which are basically just successive layers of filtering and non-linear processing, trying to predict as well as possible the neurons you're recording from. This is obviously difficult. First, you have to find the right model. Second, you have to evaluate if the model is right or wrong at predicting, and this is actually more challenging than it sounds; I'll come back to that. And finally, even if you get a model that works, there's still the issue that you need to interpret it, and of course, if it has many parameters, that's not an easy task. So for that reason, several people have tried to look for a middle ground. Among many others, I'll just mention, in a very biased way, the ones from the retina side that have given talks in this series, so I'm citing the talks, basically: Fred Rieke, Tim Gollisch, E.J. Chichilnisky, Markus Meister. There are obviously others; I just cited one of the most recent reviews, but there are many papers and many attempts to, on one hand, keep the ecological relevance of natural stimuli, but at the same time manage to do something more systematic and better parameterized, like artificial stimuli, to try to have a more exhaustive look at what's going on there.
So that's basically what we have been trying to do here too: make small progress on this question that was already explored by others with different means. Now let me be a bit more specific. I'm going to talk about the retina. I guess most of you know where the retina is, and this is the retina viewed from the side. Most of you know that processing starts with light being transduced by photoreceptors; then the signal goes down through several layers, ending up with ganglion cells that send their spikes to the brain. Ganglion cells come in different types, and the textbook view is that each ganglion cell type encodes a different feature of the visual scene, extracting this feature at different places in the visual scene. Broadly speaking, the kind of question I'm asking here is: what actually is this feature? So how to address this question? First, you need to record these neurons. I'm not going to say a lot about methods, but let me just show you my methods slide here. We're performing, as George alluded to during his introduction, large-scale recordings in the retina. Basically, we just flatten the retina against an array of electrodes and record from many electrodes. This is now, I'd say, pretty much a standard method, so I'm not going to expand on it. Out of this, you can get the activity of a couple of hundred cells, and you can record for several hours. In this work, I will talk mostly about mouse and axolotl retina. Sorry, no sharks for George. So now let's say you can record one cell and you want to know what this cell cares about when it's looking at a visual scene; what is the feature it is extracting? To address this question, the classical method is to compute the spike-triggered average. Most of you probably know about that, but let me just describe again what we're talking about here, in a simplified way. You present various random frames, random checkerboard frames here, while recording the activity of the cell, and the cell will respond to each of them, let's say, neglecting a bit the temporal axis here, with a number of spikes. If you want to compute the spike-triggered average, the standard method is to take each frame, multiply it by the number of spikes it evokes in the cell, and then take the weighted sum of all that. By doing so, you obtain what is usually called a spike-triggered average (STA), which is a good linear proxy for what people usually call the receptive field. And thanks to that, you start to have a good idea of what these ganglion cells care about. More formally speaking, what this tells you is what you should do to increase the firing rate of that cell. And here it's pretty clear what you should do in that simple case: there is a region that this ganglion cell cares about, so this is where you should do something, and what you should do is increase the light inside this area, because it's an on cell. If you were to do that for various ganglion cell types, you would obtain various types of receptive fields; some would prefer light increase and some would prefer light decrease. And if you were to expand beyond two cell types, you would also find different selectivities for the size of objects; basically, these receptive fields would have different sizes.
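Coming back to the STA computation for a moment: here is a minimal sketch of the spike-count-weighted average just described. The array names, shapes, and placeholder responses are illustrative assumptions, not the lab's actual code.

```python
import numpy as np

# Hypothetical data: 10,000 random checkerboard frames (20x20 pixels, zero-mean
# around gray) and the spike count each frame evoked in one ganglion cell.
rng = np.random.default_rng(0)
frames = rng.choice([-1.0, 1.0], size=(10_000, 20, 20))   # stimulus ensemble
spike_counts = rng.poisson(2.0, size=10_000)              # placeholder responses

# Spike-triggered average: weight each frame by its spike count, then average.
# With a zero-mean stimulus this is a linear proxy for the receptive field.
sta = (spike_counts[:, None, None] * frames).sum(axis=0) / spike_counts.sum()
print(sta.shape)  # (20, 20): one weight per pixel
```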
And of course, there's also the surround and all that. So that is already giving you an interesting insight into what ganglion cells care about. But as I said in the introduction, you have to check if this is all there is when the retina is watching a natural movie. A lot of people have addressed this question; I'll just take one example from the Chichilnisky lab. There are others, including from my own lab. The idea is that you take this spike-triggered average and use it as a filter: any stimulus that comes in, you just filter it with this spike-triggered average. And then, well, I'm not going to spend time on what happens after; it's just something to transform the scalar number that comes out of this filtering into a set of spike trains, and I don't think the details matter here. The point is that if you do that, and if you do it correctly, you can test whether that model predicts how the ganglion cell responds to, let's say, a sequence of spatiotemporal noise. In that case, you just display a checkerboard evolving over time, and you repeat the same sequence of checkerboard stimuli over and over again. Each line is just a repeat of the same sequence, and it's just one cell responding here. You see in black the experiment, and in red what the model is predicting, and in that case, the model is doing surprisingly well. However, if you now try to do that for a natural scene, the same model actually fails miserably. I mean, it's not completely horrible, but let's say it's really missing a lot of stuff here. So obviously, from this result, there's more to the processing performed by a ganglion cell on natural scenes than just the linear receptive field. And it's not about trying to fix this last part here, the non-linearity part; at least in some cell types, you can really show that there's something more than just the spike-triggered average. So now you're left with the question: ganglion cells care about more than just the receptive field when they're watching a natural scene, so what do they care about? Can we find the feature they care about? As I said, many people have tried to address this question, and we tried it our own way. It's an important question, because we have seen this on and off selectivity here, and we can wonder whether that even holds for more natural stimuli. Our own way to address this question is to start again from this receptive field, take the hypothesis it's making seriously, and push it until it breaks, basically. So the idea is that, as I said, this classical spike-triggered average tells you that your ganglion cell is selective to light increase inside this small area. If this is to be true all the time, including during natural scene stimulation, then this is what should happen. Let's say you're recording that ganglion cell, you flash a natural image, and you obtain a response. If this is all there is, if the only thing your ganglion cell cares about is the luminance inside this area, then the best way to increase the response to that natural image should be to increase the light in this area: literally, to add this receptive field to the image. You would obtain something like this, which is a bit ugly, but you get my point here.
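As an aside before testing that idea: the LN-style prediction pipeline mentioned a moment ago (filter the stimulus with the STA, then convert the scalar into a firing rate) can be sketched in a few lines. The softplus output nonlinearity and the variable names here are my assumptions for illustration; real fits estimate the nonlinearity from data.

```python
import numpy as np

def ln_predict(frames: np.ndarray, sta: np.ndarray, gain: float = 1.0,
               threshold: float = 0.0) -> np.ndarray:
    """Linear-nonlinear prediction of firing rates for a batch of frames.

    frames: (n_frames, h, w) stimulus; sta: (h, w) filter from the STA.
    """
    drive = np.tensordot(frames, sta, axes=([1, 2], [0, 1]))  # linear filtering
    return gain * np.logaddexp(0.0, drive - threshold)        # softplus: rate >= 0

# Example: rates predicted for 100 random frames with a random placeholder 'STA'.
rng = np.random.default_rng(1)
rates = ln_predict(rng.normal(size=(100, 20, 20)), rng.normal(size=(20, 20)))
print(rates.shape)  # (100,)
```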
So, coming back to the flashed image: this should be the best way to increase the response of the cell to that image. Now, this is actually a hypothesis we can test, and this was the motivation for the following experiment. What we did here: we were recording ganglion cells, as I explained before, and first we presented a set of reference images, like this one. We just flashed the image for 300 milliseconds, interleaved with a gray screen, and we recorded the response to this image. Then we perturbed this image by adding a checkerboard pattern. One important point here is that we calibrated the amplitude of this checkerboard pattern to be the smallest possible that would evoke a slight change in the response; basically, the smallest amplitude that evokes a detectable change in the response of ganglion cells. So we really tried to make it as small as possible, some sort of small perturbation. And so we obtained basically the same image with a perturbation; here the checkerboard is enhanced a bit to make it visible. The idea is that now we show this and record how the ganglion cell changes its responses. Then we repeat the same thing, so we always have the same reference image here, but every time a different pattern. One thing I should say, to really put you on the right track: there are many perturbations for the same image, but what I'm not showing here is that this is interleaved with the presentation of other images. So there is no specific adaptation to this image; it's not really pattern adaptation we're looking at here. It's really just poking the nonlinearity in the system, and I'll come back to that. Okay, so if you find this a bit twisted, let me give you two other ways to see it. The first one is the more abstract way. If you think about the stimulus space, which is obviously a very high-dimensional space, this natural image is one point, and what we're doing is staying in a neighborhood, just poking this stimulus in various directions and seeing how these pokes impact the response of the ganglion cell we're recording. If you're more on the biological side and all this seems very abstract to you, well, I like to think about it this way too. Basically, what we're doing is looking at the system at work, which is the retina processing natural stimuli, which is what it's supposed to do, and then we introduce some perturbation. Classically, when you think about biological perturbations, you think of something like knocking out some genes, or maybe doing some optogenetic stimulation, or silencing some group of neurons. Here, the perturbation we're doing is a bit different: we're perturbing the stimulus itself. So you might think that this is absolutely different, or you can think about it as something conceptually similar or related. Hopefully that helps. Okay, so now we're flashing these different perturbations on top of the image, and each of them will evoke different responses in the same ganglion cell. And now we can try to correlate the change observed in the response with the pattern of perturbation. The way we do it is very similar to the spike-triggered average we did before: we take each perturbation, multiply it by the number of spikes evoked by that perturbation in the response of one ganglion cell, and then take the sum.
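In code, this is the same kind of weighted average as the classical STA, but computed over small perturbation patterns around one fixed reference image. A minimal sketch, with assumed array shapes and placeholder data:

```python
import numpy as np

def local_sta(perturbations: np.ndarray, spike_counts: np.ndarray) -> np.ndarray:
    """Estimate a local (image-dependent) STA around one reference image.

    perturbations: (n_trials, h, w) small checkerboard patterns added to the
    same reference image on each trial; spike_counts: (n_trials,) responses.
    Subtracting the mean response isolates the perturbation-driven change,
    so the result approximates the gradient of the response at that image.
    """
    delta = spike_counts - spike_counts.mean()
    return np.tensordot(delta, perturbations, axes=1) / len(spike_counts)

# Example with placeholder data: 500 perturbation trials around one image.
rng = np.random.default_rng(2)
perts = rng.choice([-1.0, 1.0], size=(500, 20, 20)) * 0.05  # small amplitude
counts = rng.poisson(3.0, size=500).astype(float)
print(local_sta(perts, counts).shape)  # (20, 20)
```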
Doing this, we obtain something like this. So that's what we have called the local spike-triggered average: spike-triggered average, of course, in reference to what I was talking about earlier, and local because, as I said, we're always staying close to the image, and what we infer is, at least potentially, dependent on this image. So that's what we call the local STA. You could also call it an image-dependent receptive field, but local STA is what I will call it in the following. Okay, so what does it look like? Here are two examples. This is a classical off cell in the axolotl, and this is its receptive field measured with classical tools; and this is an on ganglion cell in the mouse, also with the receptive field measured with classical tools. Now, for both cells, we have measured this local STA starting from different images. Here, one column corresponds to just one cell, and this is the other cell here. At first you see that if you start from this image, for example, or that one, or that one, or even the gray screen, the local STA doesn't seem so different from the classically defined spatial receptive field; it looks like kind of the same story. So, so far, nothing really surprising. But for other natural images that we start from, still for the same cell, if we measure the local STA starting from this image, we actually see something very different: it seems that we get an on local STA. That means that the best thing to do, if you start from this image and want to increase the firing rate, is not to decrease luminance in this area, as the classical receptive field would suggest, but to increase it. And conversely for the mouse cell here, the on ganglion cell: if you start from this image, the best thing to do to increase the response to this natural image is indeed to increase light in this particular area; but if you start from this one or that one, what you should actually do is decrease light in this small area. So it seems that, depending on the image, what you have to do to increase the firing rate can change from light increase to light decrease, or vice versa. And usually when I show these results, at least to retina people, the first question is: okay, so for what kind of cell types do you see that? And the answer is: actually, for many cell types. Each line here on the right corresponds to a different type of cell where we have consistently seen this effect, this on-off selectivity changing with the natural image we start from. One thing which is interesting to notice, first, is that it's not just one cell type; we see this for various cell types. The way we've classified the cell types, I should have mentioned, follows an approach by Tom Baden, but also Katrin Franke and Thomas Euler, where we display a spatially uniform stimulus. So this is the luminance of the spatially uniform stimulus over time here, and here you see the average response of the cells, where each line is a different cell type. What I think is most interesting is the pink area, where you have, you know, the on flash here and the off flash here.
And you see that even some cells, like this one for example, have a really strong on response and almost no off response. And as I said, cells with almost no off response can actually show this switch between on and off local STAs measured before; so it really depends on the natural context. One question I'm often asked is whether it's only on-off cells that show this behavior, and I think the answer is clearly no. You do have on-off cells, like this one for example, here or here, but they are not the entire story. On the other hand, not all cell types show this behavior. One thing to say here is that if you have a type that you could call pure on or pure off, with absolutely no response of the other polarity, then usually you don't see this. So in fact, what's really happening here is that, if you take that example again, there is a strong on response, and there is a really small but visible off response that you might neglect. Well, it seems that with the proper natural context, with the proper natural image, you can actually change that balance and make that off response much bigger, much more dominant than the on response. So in effect, we're not saying that all cells can be on and off depending on the context, but they can definitely change the balance between on and off as a function of the stimulus. If I say it like this, it might remind you of previous work from Tikidji-Hamburyan et al., but also from Pearson and Kerschensteiner, where they showed that when you change the background light level by a large amount, you can actually change the respective on and off responses. Here we're not changing the background light level, which is maintained constant, but we see the same kind of effect by simply changing the natural context, if you want. So, really, the relative importance of the on and off responses can depend on the stimulus you start from. Okay, so polarity, by which I mean on-off selectivity, is context-dependent. Of course, it's a strange result, and you'd like to understand it better, and if you want to understand it better, you have to resort to modeling. So, can we find a model that reproduces this finding? This was the next step of our work. To address this question, during the same experiment, we were also presenting unperturbed natural images that were different from the images we perturbed, so this is really a disjoint set of images, and we used this data set to train two types of models. The first one is a classical LN model: basically just a linear filter plus a non-linearity. This is something that has been used a lot in the retina, usually as a null model. And then we also used something fancier, a convolutional neural network model, a CNN model, which is a two-layer network: several convolutional filters in the first layer, followed by non-linearities, and then another layer of linear pooling. If you don't know much about these networks, it doesn't matter; the point here is that it's a two-layer network, and so far this is the only thing you need to know. So we learned these models, and first we assessed them by asking them to predict the responses to repeated, unperturbed flashed natural images, and seeing how well they do at predicting those responses.
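For concreteness, here is a minimal sketch of such a two-layer CNN in PyTorch. The channel count, kernel size, image size, and output nonlinearity are illustrative assumptions, not the architecture actually used in this work:

```python
import torch
import torch.nn as nn

class TwoLayerRetinaCNN(nn.Module):
    """Two-layer CNN: convolutional filters + rectification, then linear pooling."""
    def __init__(self, n_channels: int = 8, kernel: int = 9, img: int = 40):
        super().__init__()
        self.conv = nn.Conv2d(1, n_channels, kernel)   # first-layer filters
        self.relu = nn.ReLU()                          # thresholding nonlinearity
        n_feat = n_channels * (img - kernel + 1) ** 2
        self.pool = nn.Linear(n_feat, 1)               # second layer: linear pooling
        self.out = nn.Softplus()                       # output nonlinearity

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.relu(self.conv(x))                    # (B, C, H', W')
        return self.out(self.pool(h.flatten(1)))       # (B, 1) predicted rate

model = TwoLayerRetinaCNN()
rates = model(torch.randn(4, 1, 40, 40))   # four flashed images
print(rates.shape)  # torch.Size([4, 1])
```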
And of course the CNN, which is more complex, does better than the LN model at that. But the thing we really care about is whether they can reproduce the local spike-triggered averages. So here you see the same two cells I was showing before, and now we ask whether these two models can reproduce this finding; we basically ask them to predict the local STAs. And the LN model performs really, really badly on this: essentially, its prediction never changes. I'll come back to that in a minute. It's really not capturing this change in polarity that we see, nor the displacement that we often see as well. On the other hand, the CNN model does pretty well. It's not perfect, but it captures most of the polarity and position of these local STAs across different images and all that. So okay, we're happy: we have a model that seems to work. Before moving on, I'd like to give a bit of intuition about why the LN model fails and why you actually need to resort to this kind of non-linear CNN model, by stepping back a bit and being a bit more abstract. Let's consider for a minute the function that transforms the stimulus into a response: simply the function that takes the stimulus as input and outputs the response of one neuron. Of course the stimulus has more than one dimension, but for simplicity I'm plotting here as if there was only one dimension. Now, what have we done when we computed these local STAs? Essentially, we focused on a few points, which are the reference natural images, like these two guys here, and we explored a small neighborhood around each given point. We poke the stimulus in the neighborhood with various deltas here, and we ask: okay, what is the response in that case? And the idea of computing the local STA, computing the linear approximation of what happens here, means that we're approximating what happens locally with a linear function, with a slope. If you remember your math classes, maybe they date back some time, this should ring a bell, because what we're doing here, approximating a complex function locally by a linear function, is calculating the slope, taking the derivative of that function. In a multidimensional space, that derivative is called a gradient. So essentially, what we're doing is estimating empirically the gradient of our stimulus-response function around a few points; that's what you can think of as the local STA. The local STA we're computing here is an empirical estimate of the gradient of the stimulus-response function. And there's one well-known property of the gradient: if you have a linear function, its gradient is a constant. That explains why the LN model, which is not strictly linear but almost linear, let's say, has a constant local STA: its gradient is essentially constant. That's why it just cannot work in that case. And that's also why estimating these local STAs is really estimating a signature of nonlinear processing, because what nonlinear processing does is precisely to change the gradient.
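To put that argument in symbols (the notation here is mine, a sketch rather than the paper's formalism): write the LN model as a filter $w$ followed by a pointwise nonlinearity $f$. Then

$$
r(s) = f(w \cdot s), \qquad \nabla_s\, r(s_0) = f'(w \cdot s_0)\, w ,
$$

so across reference images $s_0$ only the scalar $f'(w \cdot s_0)$ changes. For a monotonically increasing $f$, the local STA of an LN model can rescale but never flip polarity or move, which is exactly what the data rule out.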
Okay, so at this point we can call ourselves happy, because we have found a model that does well at predicting these local STAs. But now we need to understand why this model works and what's critical here. For that purpose, we tried to simplify the model as much as possible, while trying to retain the qualitative feature of predicting, at least qualitatively, these local STAs and this inversion of polarity. And we ended up with a model still composed of two layers, but where the first layer has at least two filters. There must be at least two filters: one which is off-like, a filter which selects for light decrease, and one which is on-like, selecting for light increase. They are both convolutional, which means that they are convolved with every point in the image, but that doesn't matter for the moment. The output of this filtering is then passed through a nonlinearity which is basically thresholding: setting to zero everything that is negative and keeping as such everything that's positive. And then there's a second layer of linear pooling plus a nonlinearity. Looking at this, we can start to have a better idea of what's going on. Let's say you have a stimulus which strongly activates the off kernel but not so much the on kernel, so the on kernel output is negative. Then the on kernel output will be set to zero, while the off kernel output is intact. When you perturb this stimulus, you're going to see the impact of the perturbation at the output of the off kernel, through the nonlinearity here; but the on kernel, since it's at zero, will stay at zero, so the perturbation has no influence on that zero output after the nonlinearity. For that reason, the ganglion cell response predicted by the two-layer network will basically reflect the response of the off kernel, and if you start from there, you're going to get an off local STA. Conversely, if you change the input so that you now have another image, one that strongly activates the on kernel but not so much the off kernel, then the off kernel is thresholded out while the on kernel output is intact, and you get the opposite behavior: once you perturb around this image, the output of the off kernel is always zero and doesn't change, while there is some change you can observe in the on kernel. In that case, you're going to get an on local STA, because you're dominated by the on kernel. Okay, so this model, which at least qualitatively explains what we see, is obviously very reminiscent of the retinal circuit as we know it. In the retina, as most of you know, you have an off pathway of off bipolar cells, and you have the on pathway, the on bipolar cells, and they can converge onto the same ganglion cell, either directly through direct connections, or indirectly through polysynaptic connections. So a single ganglion cell can definitely be influenced by both pathways. And out of that, we thought: okay, suppose there is a mapping between this kind of kernel and the off bipolar cells, and this kind of kernel and the on bipolar cells.
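Here is a toy numerical version of that argument, my own illustration rather than the paper's code: two rectified pathways, and a finite-difference gradient (the "local STA") that flips polarity depending on which pathway the reference image leaves above threshold. Zeroing the on pathway also mimics the pharmacological experiment described next.

```python
import numpy as np

rng = np.random.default_rng(3)
w_on = rng.normal(size=100)          # on-like kernel (toy, 100-pixel image)
w_off = -w_on                        # off-like kernel: opposite polarity

def response(s: np.ndarray, on_gain: float = 1.0) -> float:
    """Two rectified pathways, summed; on_gain=0.0 mimics blocking on bipolars."""
    return on_gain * max(w_on @ s, 0.0) + max(w_off @ s, 0.0)

def local_grad(s: np.ndarray, eps: float = 1e-4, **kw) -> np.ndarray:
    """Finite-difference gradient of the response at image s (the local STA)."""
    g = np.zeros_like(s)
    for i in range(len(s)):
        d = np.zeros_like(s)
        d[i] = eps
        g[i] = (response(s + d, **kw) - response(s - d, **kw)) / (2 * eps)
    return g

bright = 0.5 * w_on                  # image driving the on pathway
dark = -0.5 * w_on                   # image driving the off pathway
print(np.sign(local_grad(bright) @ w_on))      #  1.0: on-like local STA
print(np.sign(local_grad(dark) @ w_on))        # -1.0: off-like local STA
print(local_grad(bright, on_gain=0.0) @ w_on)  # ~0: on local STA is gone
```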
Well, one way to kill the on local STA would be to block the on bipolar cells, and that we know perfectly well how to do; it's a very easy experiment. You can just apply L-AP4, a chemical that blocks the transmission from photoreceptors to on bipolar cells. If you do that, it's basically equivalent to removing this on kernel. And since this on kernel is necessary to get on local STAs, the prediction is that, by doing so, you would still have intact off local STAs, but the on local STAs should be completely gone. And this is what we see. Here this is just one cell; it's an on cell, but it showed both off local STAs and on local STAs in the normal condition. And then we measured the same local STAs again after applying L-AP4, and in that case, the off ones are pretty much intact while the on ones are completely gone. This is something we see over many cells. I mean, it's probably relatively obvious, but yes indeed, this change in the polarity of these local STAs really depends, we think, on the convergence of the on and off bipolar cells. I'm not saying that all the cells that show this are bistratified and receive direct connections from on and off bipolar cells, but at least indirectly they are influenced by both on and off bipolar cells, we think. So we can find a nonlinear model that predicts this finding, and it relies on a nonlinear combination of on and off inputs. Now, this is great, but there's one last question remaining here: we want to better understand the relation between the image and the local STA we get. The reason is that a lot of studies just show that whatever processing is done by sensory neurons depends on the context, and sometimes they end there. The problem is that if, every time you change the image, your ganglion cell cares about something different, then how can you extract anything reliable from that ganglion cell? If you want to extract robust information, if you want to be able to decode something reliably, or at least transmit some reliable information with these cells, there must be some order in that chaos. So this is what we wanted to look for next. We went back to the initial idea that I showed already. Imagine that you have your ganglion cell responding to these four natural images. In the first place, we said: well, if it was always the same local STA, the best way to increase the firing rate starting from each of these natural images would be to just add this blob to each of them, and this should increase the firing rate. One way to see it, more abstractly, is to say: okay, in the stimulus space, each of these stimuli is one point, and this change corresponds to always moving in the same direction; always adding the same thing, so always moving in the same direction in the stimulus space. Okay, so that would be nice if it were true. But of course, the problem is that we've shown that this local STA depends a lot on which natural image we start from. So in practice, the picture might be a bit more like this: every time you start from a natural image, to increase the firing rate you should go in a different direction.
So the question is whether there is some sort of order in this chaos. To assess it, our goal from there was to make the plot that you see on the left, but for real, and to find a subspace, because of course the space of possible stimuli is too high-dimensional; to find a subspace where we could visualize it and still keep the information. For that purpose, we took advantage of the fact that we have a model, the CNN model, that seems to predict pretty well what the local STA is for various images. So we took a leap of faith, saying: okay, maybe this model is good for any image. And we took 3,000 different images, and for each of them, for just one cell here, we generated the corresponding local STA according to the prediction of the model. What we noticed is that, of course, they change polarity depending on the image. So here, every time, you have one image and the corresponding local STA; one image, local STA, one image, local STA; you always have these pairs. And now we can ask: okay, how much do they change? They do change in terms of polarity, and sometimes in shape, but they don't change that much. So we asked: can we describe them in a lower-dimensional space? To do that, we did the simplest thing, which is to run a PCA on these local STAs. Each local STA here can be taken as a vector, then you can do PCA on all these vectors and see what the PCA tells you. And the PCA tells you that the first two components explain a large fraction of the variance, like 80% to 90% depending on the cell, which means that you can decompose each of these local STAs as a weighted sum of these two components. So it's going to be this component times a number, plus that component times another number; with two numbers, you can describe almost entirely what a local STA is about. Now, this is great, because we can project everything onto these two components, by taking the dot product between each of these local STAs, or each of these images, and the components, and describe everything that's happening. In practice, it looks like this. If you didn't follow exactly how we do that, that's fine; just remember that we have a relevant subspace where most of the changes in the local STA are captured. So here, if you have a pair of one image and the corresponding local STA, we plot it like this, in this subspace described by the two principal components I was talking about earlier: the image is described as a point, and the arrow corresponds to the local STA. Again, this makes a lot of sense, because what we keep saying is that the local STA is basically where you should go if you want to increase the firing rate of the cell. So if you start from this point in this subspace and you want to increase the firing rate of your cell, that's the direction in which you should go. Now you can plot it for another pair of image and local STA: here is another image with another local STA, so you start here and go in this direction; and then you have the orange one, where you start here and go in this direction. As I said, with the model, which is learned for each cell, we could do that for many different images: predict the local STA and plot all of them for a single cell.
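A minimal sketch of that dimensionality reduction, assuming a stack of model-predicted local STAs as input; scikit-learn's PCA is my implementation choice here, not necessarily the paper's:

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder: 3,000 predicted local STAs for one cell, each 20x20 pixels.
rng = np.random.default_rng(4)
local_stas = rng.normal(size=(3000, 20, 20))

X = local_stas.reshape(len(local_stas), -1)   # one vector per local STA
pca = PCA(n_components=2).fit(X)
coords = pca.transform(X)                     # arrow coordinates in the 2D subspace

print(pca.explained_variance_ratio_.sum())    # ~0.8-0.9 for the real data
print(coords.shape)                           # (3000, 2)
```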
And this is what it looks like. Immediately you notice that it's not pure chaos; there is definitely some order there. Every time you start from a given point here, no matter where you are, what you need to do to increase the firing rate is to get away from a central point. So you always have to go centrifugally, away from this point. So what is that point that everybody is running away from? Well, that point is exactly a gray screen. So all these arrows are telling you: get away from the gray. What does that mean exactly? It means that, for example, here, if you have a lot of dark inside your receptive field, what you should do is get even darker; dark gets darker. On the other hand, if you have something bright inside the receptive field and you're here, then to increase the firing rate of your cell you should get even brighter. So it's actually very simple: brighter gets brighter, darker gets darker, if you want to increase the firing rate of your cell. From there, it's very easy to realize that, in fact, this cell is coding some sort of contrast here. And what do I mean by contrast? Because there are many definitions of contrast that people have used. Here we're really referring to (luminance minus gray) squared. So we're really removing the sign here: it doesn't matter whether you're getting away from the gray by being brighter or darker; in both cases, the square removes the sign. So in effect, we tried to model this with a very simple model where we compute this contrast quantity, averaged over the receptive field, more or less. We just compute this local contrast, and we ask: okay, how well is that local contrast able to explain this? It's not a model as good as the CNN, but let's see what it predicts in terms of the kind of vector field I'm plotting here. And the answer is that, qualitatively, you get more or less the same thing: you get this centrifugal pattern. So there's a lot we can explain about the responses of these cells, this context-dependent on-off selectivity, by just realizing that these cells are here coding for contrast, for local contrast.
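That simple contrast model can be written in a couple of lines. A sketch, where the Gaussian receptive-field weighting and the gray level of 0.5 are my assumptions; note that an image and its inverse give the same value, which is the sign removal just described:

```python
import numpy as np

def local_contrast(image: np.ndarray, rf_weights: np.ndarray,
                   gray: float = 0.5) -> float:
    """Squared deviation from gray, averaged under the receptive field.

    image, rf_weights: (h, w) arrays; rf_weights should sum to 1 (e.g. a
    normalized Gaussian centered on the receptive field).
    """
    return float(np.sum(rf_weights * (image - gray) ** 2))

# Example: a Gaussian receptive field on a 20x20 image.
y, x = np.mgrid[0:20, 0:20]
rf = np.exp(-((x - 10) ** 2 + (y - 10) ** 2) / (2 * 3.0 ** 2))
rf /= rf.sum()
dark_patch = np.full((20, 20), 0.1)    # darker than gray
bright_patch = np.full((20, 20), 0.9)  # brighter than gray
print(local_contrast(dark_patch, rf), local_contrast(bright_patch, rf))  # equal
```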
So, knowing this, we know that if I want to increase the firing rate of the cell, if it's bright I should make it brighter, and if it's dark I should make it darker. But we found that based on these CNN models. So let's go back to the experiment, back to the data, and, without any kind of modeling assumption, try to find clues that go in the same direction. First, we went back to the data we had already collected, and we tried to cherry-pick cases, and these are really cherry-picked examples here: cases where there was some interesting stuff happening inside the receptive field, let's say very bright stuff and very dark stuff. And indeed, in these examples, we find that there is a correspondence between the bright regions in the image and on regions in the local STA, like here and here, while the dark regions, like here or here, correspond to the off parts of the local STA. You can see all these fancy Gabor-like things and all that, but there is definitely always a correspondence, pretty striking, between dark regions and off parts, and bright regions and on parts. So, without doing any modeling, looking at this data, it seems that there is indeed a correspondence here. But we wanted something more predictive, and not cherry-picked. For that, we reasoned the following way. If we start from an image and we have a corresponding local STA, then in that space, as we said before, the image should be one point, and the local STA should point away from that central point. So we thought: if we take the negative image, and by negative here I just mean inverting black and white, then, according to our qualitative interpretation, the local STA should go in the opposite direction, which means that if it was on it should become off, and if it was off it should become on. So that's a hypothesis we can test directly in experiments, without any modeling assumption. We went back to experiments and just did that. Here is one example of a cell where we measured these local STAs for various images: some natural images, and then some inverted or negative images, which are the same images but with black and white inverted. And indeed, in that case, we found that all these off-like local STAs that we had here became on-like once we took the inverted images. So this gives support to this idea of a contrast encoder that we were formulating, but without having to rely on all the assumptions that come with models; this is really in the data, you can see it. Okay, so basically what we found here is that this on-off selectivity can be reshaped depending on the natural context, but we can find a model that predicts this finding, and this context dependence does not keep these cells from robustly encoding something. It's just that what they're robustly encoding is not luminance. If you were to rely on these cells to find out the luminance, you would probably have some trouble; you'd have a hard time decoding luminance. But contrast is very well encoded, and you can actually decode it from these cells. So this is a point I think is worth making: when people mention context-dependent processing in sensory neurons, remember that context-dependent processing can sometimes just mean extracting a slightly more abstract feature than we thought in the first place. Now I'd like to conclude with a few discussion points. The first one: well, of course, as I said, on-off selectivity depends on the context; that's the take-home message. But I'd like to make a few discussion
points about the approach that we have used here. First, I'd like to mention that this idea of computing this gradient, this local receptive field, can be seen as a way to test models. What I mean by this is that usually, when we want to test a model, we have learned the model parameters on some dataset and we want to test it by trying to predict the responses to other stimuli. Of course, the number of stimuli on which you're going to test it will always be finite, while the space of possible stimuli is essentially infinite. So let's say you have these two stimuli here, you have measured the responses, and now you want to check if your model does well at predicting them. Let's say that this is your model, the black curve: here you should call yourself very happy. But on the other hand, the issue is that there could be other models that perform equally well at predicting these average responses. So how do you know if your black model is better than these two gray guys? With just that kind of data, you can't. On the other hand, if you have access to these local STAs, which are a good proxy for the gradient, and which correspond here, in one dimension, to the derivative, then suddenly your black model is consistent with the estimation of this derivative, while these two gray ones aren't that much. So you can see this computation of local STAs as a tighter constraint on models, a way to discard some alternatives that actually weren't that good. Of course, this might look very abstract, so let me give you an example. Okay, it doesn't happen all the time, but my student actually just found an example which I think is interesting from a pedagogical perspective. Let's say here we had one cell, and we had these local STAs here, these are the data, and we had two models. You don't need to know much about them; they're basically just two models learned with different sets of hyperparameters. And if you look just at how good they were at predicting the average responses to flashed natural images, their R squared values are pretty similar, right? It's like 0.88 here and 0.84 here. Of course you're going to prefer the one at 0.88, and you would be right, but honestly, 0.84 is something that might seem satisfying. But if we ask these two models to predict these local STAs that we have access to, we realize that the first model is actually not so bad; it's not perfect, but it's pretty good. On the other hand, the second model is really a disaster here. I mean, it works well here, but otherwise, here it's actually the wrong polarity, and I don't even want to talk about these guys. So this small difference in R squared ends up as a pretty big qualitative difference in terms of predicting local STAs. And that's what I really mean by saying that measuring these things puts a tighter constraint on what models have to achieve to really be good models. I think it's important to think about this modeling process not just as designing the model, getting the R squared, and being satisfied, but as thinking about what should be the next thing to do, the next perturbation, the next thing to do with stimuli and recording experiments, to find tighter constraints and
tease apart different solutions. Now, the second thing, going back to this plot here: I've been asked a couple of times how we can relate this to some recent work. Recently there have been several works where people, with very nice and various techniques, were trying to look for the image that would maximize the response; usually that's called the maximally exciting image. There are various works on that, and there's more to come, I guess. The idea is to change the image that you're flashing on your visual system to find the stimulus that will evoke the largest response. I think the first work was in V1, there was also work in IT, and there is probably some ongoing work in the retina, as far as I know. So it's a great tool, and I think it's very insightful, especially in some cases, for some cell types. But one thing to keep in mind is that in our case, in the case of that cell, you can see that, if you believe the arrow plot I'm showing here, there are a lot of points, a lot of stimuli, a lot of images, along this ellipse more or less, where you should always get the same response. So there are some directions, some changes, for which the response is left invariant, and this feature is at least as important as the maximally exciting image. The second point is a kind of cautionary note about these maximally exciting images: because of that property, because you can have the same response for various stimuli, the answer can also depend a lot on your experimental ensemble. Oversimplifying and caricaturing things a bit: imagine that you're trying to find the maximally exciting stimulus in that pink ensemble. This is probably the right point here, and it's informative, and it's really good and interesting for modeling, but you have to be careful about the fact that the answer also depends on the explored ensemble. So again, I'm not saying this is a bad idea; I think it's a great idea, and these maximally exciting images are a really useful tool. But in terms of interpretation, the limitation is to believe that this maximal image summarizes everything about what your ganglion cell is doing. Anyway, these are complementary tools to what we've been doing, of course. Okay, so to finish, I'll say that there is obviously a limitation in what we have done, which is about time: we have collapsed everything along the temporal axis. We have just looked at flashed natural images and, basically, at how many spikes were emitted in response to these images. We're now trying to extend this kind of perturbative approach to more dynamical stimuli. I think the reason why we only found, basically, luminance-encoding cells, where all the arrows go in the same direction, and contrast-encoding cells like this one, is probably because we have neglected time here, and I'm guessing that we're going to find more diverse patterns if we manage to extend this to include the temporal axis. Finally, if you have listened to me up to here and you're not a retina person, bear with me, you'll be rewarded: I think there's not that much of what I said which is really retina-specific, and it might be possible to use similar approaches, I
don't know, but it might be possible to use similar approaches to characterize the processing in other areas, like V1 for example. Okay, so with that, I'd like to conclude by thanking the people who really did the work here. This was a joint work spearheaded by Matías Goldin, a postdoc in my lab, with Baptiste Lefebvre and Samuele Virgili, a PhD student. It was also in collaboration with Mathieu Pham Van Cang and Ulisse Ferrari, who is also a PI in the team, and in collaboration with Thierry Mora at the École Normale Supérieure and Alexander Ecker, who is at the University of Göttingen. If you find this interesting, and if the prospect of doing some science in Paris interests you, we're actually recruiting. We are a kind of multi-PI group; we have several PIs working together, combining theory and experiment, to figure out things about the retina. So drop me an email if you think this is interesting for you, or if you just want to discuss. Of course, I should also acknowledge the various funding sources. And of course, thank you for your attention. Thank you very much, Olivier, for this very interesting talk. There are already a couple of questions appearing in the chat, but let me remind the audience at this point that after this first round of discussion, which I will moderate by conveying the questions you posed to Olivier for him to address, we will be continuing offline. So in case you are interested in following the conversation, I'm already making available the Zoom room that we are currently sitting in, so you can join us there. So the first question is from Suvaroi: how does the STA of classical on cells and off cells depend on the distribution of contrast in these natural images? Because natural images with the same average contrast can still have different structures, and this seems to influence the location and polarity of the STA. Yeah, so, the thing is, here, in all the images we were flashing, global contrast and luminance were equated, but of course we are recording from many cells, and they are looking at different parts of the images, so we could not equate the local contrast. So they do depend; I don't know if that's the correct answer to what Suvaroi is asking, but they do depend exactly in the sense you see here: the further away you get from that central point, the more contrast you have, and that's the way you're going to increase the firing rate of these cells, most of the time. So I don't know if that addresses Suvaroi's question or not. We can give it a second, until maybe a follow-up or elaboration appears. But in the meantime, given that you have this plot here, one question I have: we see that it's symmetric in the amplitude of this centrifugal motion, right? But it can be a matter of how the center interacts with the surround, because you are also interleaving your stimulus with random images that are perturbed with noise. So the temporal sequence that the cell sees is completely different, in terms of the spatial structure that Suvaroi mentioned in the natural scenes, and also for the temporal window, because it changes very abruptly. Yeah, so that's a great question. In principle, the role of the surround here, I think, is not absolutely crucial, although it does play a role. The reason why I believe that is that if you look at the model we ended up with, the simple interpretation we ended up with here, there's no absolute need
for the surround, although it certainly plays a role, because you actually see that there is some surround in the first layer. So, about the exact role of the surround in generating these local STAs: the short answer for me is that if you want to see this change in on-off selectivity, roughly speaking, I don't think you need the surround, but I'm also pretty much convinced that the surround must play a role in changing the answer. Right, because in principle, if with the STA we could get the surround perfectly, then instead of changing the center, where you increase the luminance and see the strongest increase in firing, you could also expect changes from changing the surround, right? Leaving the center as it is and changing the inhibition the cell receives from the surround. Yeah, absolutely. So here, for example, you could imagine that you change the surround, and this changes where you are on this kernel here, and then it would change the answer, absolutely right. One thing we have thought about so far is to try masking the surround for some cells. There are some technical issues here, because what does it mean to mask the surround when you have 200 cells? But yeah, basically that's the idea. And before I continue with the questions that Tom has posted, one follow-up of mine: have you tried this experiment at different light intensities, changing the ratio of center and surround? Yeah, that's a great question. We have not. I think it would be very interesting, because, well, first of all, you do change the center-surround interaction when you change the luminance, but also all these ratios I'm talking about here would change. I mean, there are a lot of things that are different if you really go to low luminance levels, and it would be very interesting to know what's going on. So we haven't carried out these experiments, but definitely, it would be very interesting. Also because we know that different background luminances by themselves can actually change the polarity, and it would be interesting to see the interplay of both, right? Thank you very much, Olivier. So there is a clarification from Suvaroi: just for my clarification, what do PC1 and PC2 capture in terms of image properties, like the variance of contrast? I went through that pretty quickly, so I'm guessing. I mean, here the PCs are relatively dumb, if you look at them. If you look at the predicted local STAs, there's a first component which is really a blob in the center, with maybe a bit of surround, basically here. And then the second one, if you look at its appearance, is kind of a way to displace your local STA a bit; it's almost some sort of derivative of the first one, in some way. So what they are capturing, essentially: you can think of this axis as, let's say, the local luminance inside the receptive field, zero being gray, and then going to bright on one side and dark on the other. The second principal component is a bit harder to explain, but I think it has to do with this local STA displacement, which is also an interesting feature that we see in this data. So hopefully that helps. So now I can ask one of Tom's questions, because I think it fits perfectly. He says that, by eye at least, quite often the positions of the same
Now I can ask one of Tom's questions, because I think it fits perfectly. He says that, by eye at least, the positions of a given cell's on and off components are quite often only partially overlapping. Are these cells therefore driven asymmetrically in space by on and off?

That's a great question. Some of these cells might be asymmetric, but I have to say we haven't looked at it systematically, because there is an issue here. Let me go back: the first thing that determines where you will see these on-like or off-like local STAs is the image itself, right? It depends on where you fall relative to the image. For example, here you see the local STA mostly on the right, the on-like one, but I think that is mostly because the bright region is actually on the right. And when we flash natural images, we don't yet know where the receptive fields are, so we cannot explore that systematically. So we don't know. That said, you are absolutely right that this could be a way in; I think we would have to depart from natural images and use something a bit more regular, and then we could actually probe different regions of the on-like and off-like local STAs and find out. That would be an interesting idea. The fact that they don't overlap is absolutely clear, and it is definitely something that the CNN reproduces as well. If you look at the real examples I showed before, not the ones from the models, well, this is probably not the best one, but you see that they definitely change location here and there. But I don't think we have seen any systematic bias, like on always being in one place and off in another. Though to be honest, maybe it is there for some cell types and we just haven't looked.

Right. And a follow-up on that, which Tom posted here in the Zoom room: if you were to do a covariance analysis, do you think you would find similar spatial receptive fields for both on and off?

So you mean some kind of local spike-triggered covariance? That would be challenging to do experimentally, but my guess is that if the STAs are already so different, I don't see any reason why the spike-triggered covariances would be similar. If you go back to this abstract picture, the local STA is basically telling you the best straight line to fit this function locally. If you add spike-triggered covariance, you are asking, in a high-dimensional space of course, what is the best curved function. Here it is pretty flat, but if you were here you would see something like this, whereas if you were there you would see something like that. So my guess is that the spike-triggered covariance would give you different results, if you were able to do it. That said, experimentally it is quite hard to do properly.
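The straight-line-versus-curved-function picture maps directly onto the standard STA/STC recipe. A minimal sketch, with hypothetical inputs (flattened stimuli and per-trial spike counts); this is the textbook construction, not the specific analysis that would be run on these recordings:

    import numpy as np

    def stc_axes(stimuli, spike_counts, n_axes=2):
        """stimuli: (n_trials, d) flattened stimuli; spike_counts: (n_trials,)."""
        X = stimuli.astype(float)
        w = spike_counts / spike_counts.sum()
        sta = w @ X                                # the "best straight line" part
        Xc = X - sta                               # look at curvature around the STA
        c_spike = (Xc * w[:, None]).T @ Xc         # spike-weighted covariance
        c_prior = np.cov(X, rowvar=False)          # covariance of the raw ensemble
        evals, evecs = np.linalg.eigh(c_spike - c_prior)
        order = np.argsort(np.abs(evals))[::-1]    # strongest curvature first
        return evals[order[:n_axes]], evecs[:, order[:n_axes]]

Eigenvalues near zero correspond to the "pretty flat" regime described above; large positive or negative ones correspond to the two curved cases.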
Before I ask the last question that Tom posted in the YouTube channel, I would like to remind our audience that if they want to continue, either to follow or to participate in the discussion, they should click on the Zoom room link that I posted earlier. And the last question is, of course, a cross-species comparison: salamanders have a lot more on-off cells than mice; how is this reflected in the proportions that show this switching?

Unfortunately we haven't looked at that specifically; I don't think we have a good estimate, because we haven't done the proper comparison of which cell types show this. The thing is, this is something we can look at for the mouse, based on, well, your work, Tom, but we haven't done it for the axolotl, at least not systematically. My expectation would be that you see it more often, just because there are more on-off cells, but it is hard to make the count. One limitation here is that because we flash natural images and don't yet know which part of the image each cell is looking at, there are a lot of cases where we don't see a change in the on and off local STA but we think it is just because the cell was looking at the wrong part of the image. So if you don't see these on and off changes, it can be for two reasons. The first is that this cell type really does not show them. But it can also be because, at that moment, to simplify, there is always something bright inside the receptive field, so you always get an on-like receptive field. And measuring this local STA takes time; in the best cases I think we could do it for about 12 images. So a cell can simply be unlucky, always looking at something not dark enough, or not bright enough, to yield the other local STA. We have actually estimated the ratio of cells we should see based on the modeling, compared it to what we see in experiments, and there is a gap which is exactly due to that. So that is definitely a limitation. I guess if you could remove it, you could choose exactly what to show each cell.

Right, thank you very much, Olivier. One last question just appeared, from Simon Laughlin: there are many RGC types; if the majority behave as you say, does this indicate that photoreceptors and horizontal cells extract local contrast and code it as transmitter release?

I don't think so, although I haven't measured those responses. Of course, here I am focusing on the cells where we see something interesting, but there are also other cells where we do not see this inversion of polarity, and where, if we plot the same kind of vector field and run the same analysis I was showing before, we see that kind of pattern, which we actually call luminance-coding cells. You can argue whether this is really coding luminance or not, but the idea is that all the arrows point more or less in the same direction, to some extent. So these are cells where you do not see a change in the on-off selectivity, and there are several cell types, which we reported in the paper, where we never see a change. So if the information has been preserved up to some type of ganglion cell, my guess is that it is also preserved at earlier levels, at least for some types. I would guess that at the level of the horizontal cells and the photoreceptors it is probably still there, and that whatever creates these complex contrast-coding cells is downstream. But of course that is only a speculation, and it could be wrong.
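The "all the arrows point in the same direction" criterion can be summarized by one circular statistic. A toy sketch, assuming each cell's local response changes have been reduced to 2-D vectors (a hypothetical reduction for illustration, not necessarily the analysis used in the paper):

    import numpy as np

    def arrow_alignment(arrows):
        """arrows: (n, 2) local response-change vectors for one cell.

        Returns the mean resultant length of the unit vectors: close to 1 if
        all arrows point the same way (luminance-like coding), close to 0 if
        directions cancel (polarity-switching, contrast-like coding).
        """
        unit = arrows / (np.linalg.norm(arrows, axis=1, keepdims=True) + 1e-9)
        return float(np.linalg.norm(unit.mean(axis=0)))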
Right, thank you very much, Olivier. At this point I have a number of follow-up questions myself, with respect to generalization, transient versus sustained responses, and so on, but let's continue that offline. So I would like to thank you once again for giving this very interesting talk, and of course to thank the audience for attending yet another SAS X vision seminar. Thank you very much, and we are now officially offline.