Online? Well, that is good. So hello, hello, hello everyone, and welcome to the latest talk in the World Wide Neuro series. First of all, we absolutely have to thank Tim Vogels of Oxford, because he was absolutely instrumental in organizing this series of talks. I am Phillip Bartle of the Baden Lab at the University of Sussex, in very sunny Sussex over here.

Today we are hosting Professor Dr. Tim Gollisch — I hope I did not forget anything there — of the University of Göttingen. He will talk for something like 45 to 50 minutes, and after that we will hopefully have a very lively discussion. So as Professor Gollisch is talking and enlightening us about retinas, please post your questions in the YouTube chat section. They will be monitored by our colleagues, and we will hopefully have a very good discussion of them afterwards.

If you are not entirely familiar with our speaker today: we have done a thorough background check on him, and we know that he knows a thing or two about retinas. His original background is in physics, which he studied at the University of Heidelberg. That was followed by a PhD in biophysics at the Humboldt University of Berlin, then an enviable career path as a postdoctoral research fellow at Harvard, followed by a group leader position at the Max Planck Institute of Neurobiology in Martinsried near Munich. To the best of our knowledge, he now resides as a professor of sensory processing in the retina at the University of Göttingen. I hope I have not revealed too much, so that he still has something to talk about. I give you Tim Gollisch.

Okay, I hope you can hear me now. Thanks so much for the nice introduction — you are up to date, I am still in Göttingen doing sensory research on the retina. I will try to share my screen here, and meanwhile I can thank you, Philip, as well as Tom, for this wonderful invitation and opportunity. This is really exciting.
To be part of this — it is a great initiative — so also thanks to Tim Vogels for putting this all into motion; I am very happy that I can join.

Okay, so I will be talking about linear and nonlinear encoding of natural stimuli in the retina. My lab is interested — let me just get a pointer here — in understanding how the visual information that enters the eye is translated by the retina into spiking activity of retinal ganglion cells: in particular, what sort of processing, computation, and function is conveyed in this correspondence between visual stimuli and spiking responses.

If you ask about function in the retina, the classical picture that you typically get, if you look for example into a textbook, is that ganglion cells have these famous center-surround receptive fields. For an on-center cell, for example, the cell is stimulated by light increases in the receptive field center, which is captured by this center-surround receptive field. By the way, I just see that the internet connection may sometimes be unstable here — I am calling from a small village close to Göttingen — so I hope someone will interrupt me if I freeze entirely; otherwise I will just continue.

In this classical view of retinal function, the cells are thought to act more or less as spatial filters, in the sense that they integrate the light that falls onto their receptive field in a linear fashion, weighted by the strength and the sign (plus or minus) of the receptive field. The activity of the ganglion cell then represents whether the visual stimulus matches, more or less, the preferred contrast pattern given by the receptive field. This idea has been highly successful for understanding retinal function. It explains a lot of things — for example, how retinal ganglion cells respond to illumination with spots of different sizes: you can see that when a spot gets larger, responses decrease because of the surround. The surround is thought to enhance spatial contrast, which is a useful function, because the most interesting aspects of visual scenes are often edges and contours. This has also been formalized in a very famous and successful theory, which states that part of the function of the retina is to reduce the redundancy inherent in natural images — an idea used by Barlow as well as by Atick and Redlich to understand the purpose of these center-surround receptive fields.

But at the same time, we also know that ganglion cells can be nonlinear, as shown for example in the famous work by Enroth-Cugell and Robson: when you present a reversing grating, as shown here, that goes back and forth, then for different spatial frequencies cells may respond to one reversal of the grating or to the other. But when the grating straddles the receptive field, so that the receptive field sees equal amounts of brightness and darkness, black and white, some cells show cancellation of responses — their firing rate stays flat, indicating linear integration, because the contributions cancel. Those cells are the famous X cells. Other cells, at this point where brightness and darkness are balanced, show responses to both reversals; they do not cancel the light intensities that hit the receptive field, are therefore considered nonlinear, and are called Y cells.

This raises the question of whether this apparent nonlinear spatial integration of light intensity over the receptive field is important for natural vision. About natural stimuli there are surprisingly few studies in the literature, and those that exist show somewhat mixed results. Some papers emphasize that linear receptive fields, or models based on linear receptive fields, work fairly well across cells in the mouse retina as well as in some primate data, primarily
from the lab of Sheila Nirenberg. Other papers have emphasized more of a mixed behavior, with good performance of linear receptive-field models for some cells but failures for other cells. There seems to be a cell-type dependence in the primate retina, emphasized in particular by the labs of E.J. Chichilnisky and Fred Rieke, and in the salamander as well, in the work of Stephen Baccus and Surya Ganguli as well as in our own work.

So we aimed to investigate this a bit more thoroughly, and in particular to connect the performance of linear receptive-field models to the actual computations happening inside the receptive field, for the mouse. But before I tell you how we do this exactly, let me tell you where the data come from. In my lab we do multi-electrode array recordings of isolated retinas. The retina sits in a dish on top of the multi-electrode array, with a number of small fine wires — up to a couple of thousand, depending on the array we use — and at the same time we stimulate the photoreceptors by focusing a computer screen onto the photoreceptors of the retina. We primarily use mouse as well as salamander (axolotl) retina, and occasionally we have also had the chance to record from primate retina, from the marmoset. Because I will be showing you data from each, I will always put a small icon into the plot that indicates which animal we are talking about at any point in time.
Okay, so using this method — these extracellular recordings, from which you can get the spike times of individual ganglion cells — a graduate student in the lab, Dimokratis Karamanlis, has been analyzing how ganglion cells in the mouse retina respond to natural images that are briefly flashed onto the retina. Each image is shown for 200 milliseconds, with a gray-screen interval of 800 milliseconds between images. He focuses on images rather than movies because that is a more direct way of getting at the spatial integration we are interested in here. He uses a batch of 300 images in total, in a random sequence, and each image is repeated a couple of times, so he can collect the spike times elicited by a particular image and quantify the response of a cell to that image by counting the average number of spikes elicited during the stimulus.

Okay, so how do you connect this to whether the cell integrates linearly or nonlinearly across its receptive field? The most typical approach is to use a model, and this is also what Dimos has been doing. The model is very simple: he takes a given image and compares it to a filter that quantifies the receptive field. This receptive field comes from a different measurement — it is not so important how we get it; we obtain it from a spatiotemporal white-noise stimulus, compute the spike-triggered average, extract the spatial component, and fit it with a difference of Gaussians, so there is a center part and potentially also a surround part, which here is rather weak and hardly visible in the filter. We then apply this filter to the image that was shown to the cell in order to compute the activation. This is done very simply, in a pixel-by-pixel fashion: we weigh the contrast values — the deviations from mean gray in the image patch — by the strength of the filter, sum all these pixel contributions up, and get one number for each image, which we call the linear prediction. Essentially, it tells us how much luminance signal there is inside the receptive field for a given image.

We do this for all 300 images in the batch and compare it to the actual response of the cell, and we may get a picture like this: for 300 images, how the linear prediction corresponds to the actual spike count that we measure. If this is a nice relationship that we can fit, for example, by a simple nonlinear function, then we can say the simple model captures well how the cell responded to the batch of images, based on this linear receptive field. This model is none other than the famous and often-used linear-nonlinear (LN) model. Here we would say the LN model captures the responses quite nicely, and therefore this cell is more or less consistent with a linear receptive field.

Here are some other examples. Cell one we have just seen. Cell two here has a downward-sloping nonlinearity rather than a positive peak, so this cell likes negative contrast inside the receptive field — blackness — and is therefore an off cell. You can also see that we can capture on-off cells, which have these U-shaped nonlinearities. For these two cells the linear-nonlinear model seems to be doing fine. But then we also have examples where it really fails, where there seems to be no good relationship between the linear prediction of the LN model and the actual spike count, even if we allow the nonlinearity to do funky things like having this kink here — which we allow because we want to be flexible enough to capture all these different potential nonlinear relationships between the linear prediction and the actual spike count.

We can then quantify this, to obtain an overview, by computing the correlation between what the model predicts and the actual measured response. We do this with a bit more care: for the correlation, we use what is called a normalized correlation coefficient.
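To make the computation concrete, here is a minimal Python sketch of this linear-prediction step — with a made-up difference-of-Gaussians filter and illustrative parameter values, not the lab's actual code; images are assumed to be normalized to [0, 1] with mean gray at 0.5:

```python
import numpy as np

def dog_filter(size, center_sigma=3.0, surround_sigma=9.0, surround_weight=0.1):
    """Difference-of-Gaussians receptive field: a strong center minus a weak surround.
    All parameter values here are illustrative, not fitted to data."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x ** 2 + y ** 2
    center = np.exp(-r2 / (2 * center_sigma ** 2))
    surround = np.exp(-r2 / (2 * surround_sigma ** 2))
    return center - surround_weight * surround

def linear_prediction(image_patch, rf):
    """Weigh each pixel's contrast (deviation from mean gray) by the filter
    strength and sum: one number per image, the 'linear prediction'."""
    contrast = image_patch - 0.5          # mean gray assumed at 0.5
    return float(np.sum(contrast * rf))
```

An LN model then only needs an output nonlinearity (monotonic or, as in the talk, more flexible) fitted to map these linear predictions onto measured spike counts.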
This measure is taken from the literature; it accounts for the variability of the responses — the noise in the responses — given that we only have a limited number of trials. A value of one in this performance measure means the model is as good at predicting responses as can be expected, given that the responses themselves are noisy and come from a limited number of trials. I should also mention that we do this with cross-validation: we always leave out a number of images, fit the nonlinearity, do the prediction on the held-out images, and in the end take the average normalized correlation coefficient over all folds.

What we then see is that we have a bunch of cells whose performance is near one, where the linear-nonlinear model with the linear receptive field is doing a good job. But there is also this long tail here, containing a lot of cells where the model is not doing so well — potentially a sign of nonlinear spatial integration under natural images. But how do we connect this model failure, for these cells here, to actual things happening inside the receptive field?

Let me show you. This is one of the cells where the linear-nonlinear model apparently failed. What we can do now is look at pairs of images — say these two data points — where the linear prediction was very similar, almost the same, so there was the same luminance signal inside the receptive field for both images; yet for one of the images we got a strong response, and for the other a fairly weak one. If you look at the images, you may already guess that this has to do with the structure inside the image: here we have a lot of contours, a lot of white and black, and here we have a more homogeneous illumination of the receptive field. So we can ask: does this difference in spike count have anything to do with the spatial structure inside the receptive field?
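The normalized correlation coefficient is not spelled out in the talk; one common estimator from the literature (often called CC_norm) divides the raw correlation by the best correlation achievable given the trial-to-trial noise, and can be sketched as follows — details vary across papers, so treat this as one possible variant:

```python
import numpy as np

def cc_norm(prediction, trial_responses):
    """Normalized correlation: correlation between prediction and trial-averaged
    response, divided by an estimate of the maximal achievable correlation given
    trial-to-trial noise (one common estimator; exact definitions vary)."""
    R = np.asarray(trial_responses, float)       # shape (n_trials, n_images)
    n = R.shape[0]
    ybar = R.mean(axis=0)                        # trial-averaged response per image
    cc_abs = np.corrcoef(prediction, ybar)[0, 1]
    # signal power: variance of the summed response with per-trial noise removed
    sp = (np.var(R.sum(axis=0), ddof=1) - R.var(axis=1, ddof=1).sum()) / (n * (n - 1))
    cc_max = np.sqrt(sp / np.var(ybar, ddof=1))  # best achievable correlation
    return cc_abs / cc_max
```

With noiseless, identical trials and a perfect prediction this returns 1, matching the interpretation in the talk that a value of one means the model is as good as the data allow.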
To quantify this, we compute a measure of the spatial contrast inside the receptive field: the standard deviation of all the pixel contrast values, weighted by the receptive field. For this particular image we get a certain value, 0.46, and for this image here we get a smaller value, because there is less contrast inside the receptive field. We can then hypothesize that the difference in spike count, when the linear prediction is the same, has to do with this difference in spatial contrast inside the receptive field. Doing this systematically, we can pair all of our 300 images into 150 pairs that have approximately the same linear prediction.

What we then get is a picture like this. Here, for example, is a cell where the linear-nonlinear model did not do so well, and the cell was sensitive to spatial contrast, as you can see in these examples. If we do this over all pairs, we see a systematic relationship between the difference in spike count for each image pair and the difference in spatial contrast for that pair: if the luminance signal is the same for two images, then for this cell we almost always get a higher spike count when there was more spatial contrast inside the receptive field. Here is the same analysis for a cell that does not care so much about spatial contrast, where the linear-nonlinear model has been doing well: you get very similar spike counts for different images even though they have different spatial contrast, and a more or less flat relationship between spatial-contrast difference and spike-count difference across image pairs.

Then, interestingly, we also find a third type of cell — a third type of response — where the sensitivity to spatial contrast is reversed. Here is a cell where, when the luminance information inside the receptive field is the same, the cell actually responds more strongly when the receptive field is illuminated more homogeneously.
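A sketch of the two ingredients just described — the receptive-field-weighted standard deviation of pixel contrasts, and the pairing of images with matched linear prediction. The exact weighting and pairing scheme here is my assumption, for illustration only:

```python
import numpy as np

def spatial_contrast(image_patch, rf):
    """RF-weighted standard deviation of pixel contrasts inside the receptive
    field (weighting by |RF| strength is an assumed choice)."""
    contrast = image_patch - 0.5                 # deviation from mean gray
    w = np.abs(rf) / np.abs(rf).sum()            # normalized weights from RF strength
    mean_c = np.sum(w * contrast)
    return float(np.sqrt(np.sum(w * (contrast - mean_c) ** 2)))

def pair_by_linear_prediction(linear_preds):
    """Pair images with approximately equal linear prediction: sort by the
    prediction and pair neighbors, yielding 150 pairs from 300 images."""
    order = np.argsort(linear_preds)
    return [(order[i], order[i + 1]) for i in range(0, len(order) - 1, 2)]
```

A homogeneous patch gives zero spatial contrast regardless of its luminance, which is exactly why this measure can separate the two members of a matched-prediction pair.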
As shown here in this analysis, with the luminance signals equal for image pairs, you get stronger responses when the spatial contrast inside the receptive field was weaker. We can capture this sensitivity to spatial contrast by computing a measure of spatial-contrast sensitivity, which in these plots is just the regression slope between the difference in contrast and the difference in spike count, and then relate it to how well the LN model performs for different cells. What we see is that, essentially, we have a bunch of cells that seem to be more or less linear — they have good LN model performance and no sensitivity in this spatial-contrast analysis — and a group of cells that are essentially nonlinear: they have not-so-good LN model performance and do show sensitivity to spatial contrast, like this cell here with a large slope in the spatial-contrast analysis. So here we now have a connection between sensitivity to spatial contrast and encoding of natural images.

If we have some cells that care about spatial contrast and others that do not, we should also be able to see this when we reduce the spatial contrast, for example by blurring the images. Here I show you responses of cells from the same three categories when we show original natural images as well as blurred versions. This cell, for example, reduces its spike count considerably when the image is blurred; this cell does not really care much about blurring and responds the same as before; whereas this cell — you see again that it prefers more homogeneous illumination — actually increases its spike count when the image is blurred. This is a systematic effect, which we see when we take a number of images, in this case 40 different images, in either their blurred or their original version: this cell decreases its activity, and that cell rather increases its activity.
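The spatial-contrast sensitivity — the regression slope over image pairs — can be sketched like this (hypothetical variable names; a positive slope marks a contrast-preferring cell, a negative slope the reversed type, and a slope near zero a linear cell):

```python
import numpy as np

def contrast_sensitivity(pairs, spike_counts, contrasts):
    """Regression slope of spike-count difference against spatial-contrast
    difference across image pairs with matched linear prediction."""
    dc = np.array([contrasts[i] - contrasts[j] for i, j in pairs])
    dr = np.array([spike_counts[i] - spike_counts[j] for i, j in pairs])
    return float(np.polyfit(dc, dr, 1)[0])       # slope of the best-fit line
```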
So that confirms the sensitivity to spatial contrast. Just a note on the side: if we have here cells that decrease their activity with blur, and here cells that increase their activity with blur, that suggests that the difference in activity between these two groups gives you interesting information about how well an image is actually in focus. You can imagine that if an image goes out of focus, these cells will mark this by increasing their activity relative to the cells that decrease their activity — and who knows, maybe the brain uses such a signal to adjust focus, or to learn the right focus for objects at particular distances.

In this analysis we also looked a little bit at cell types; let me show you this briefly. In the multi-electrode array recordings there are some cell types we can identify fairly easily, and one of those groups is what we call IRS cells. We call them that because we have previously shown that these cells form a functional group that is particularly sensitive to image recurrence: if you show images or gratings of different identity or different spatial phase, these cells respond particularly strongly at a transition when the same image was shown before and after the transition — they like the recurrence of an image. That is an older story that I do not want to go into detail on here; for us it just means that we have a fairly easy way of identifying a particular group of cells, which as far as we know correspond to the so-called transient OFF alpha cells in the mouse retina. For these cells we find that the LN model typically does a fairly good job, with a high performance measure under natural images. By contrast, we also have cells that seem to be more nonlinear, with worse performance — for example, direction-selective cells. That is maybe not surprising, because we know that these cells do interesting nonlinear things with
visual stimuli, and they show this here too in the performance of the LN model. Then there are cells in between, for example orientation-selective cells: some of them have fairly good LN model performance, others not so good. It actually turns out that when you look at the distribution of the performance measure — of how well the LN model does — for orientation-selective cells, you see a more or less bimodal distribution. This becomes more pronounced if we separate the cells into on-type and off-type orientation-selective cells: the on-type cells seem to be mostly linear, in the sense that they have good LN model performance, but the off cells show a really strong bimodal shape.

Here we went a bit further and tried to cluster these cells based on how well they do in the linear-nonlinear model, how much contrast sensitivity they show in the analysis I have shown you, and some other measures. It turns out that we can find more or less distinct clusters of off orientation-selective cells. When we then look at the preferred orientations of these cells, we find that a pair of orientation-selective cells from the same cluster tends to have similar preferred orientations, with the difference in preferred orientation peaking near zero, whereas two cells from different clusters — a more linear cell together with a more nonlinear cell — tend to have different preferred orientations. That at least indicates that there is some relationship between which subtype of orientation-selective cell we have, which preferred orientation it is sensitive to, and whether the cell is more linear or more nonlinear, with better or worse model performance.

Let me summarize this first part by saying that we have seen that nonlinear spatial integration is indeed relevant under natural images for some or many ganglion cells in the mouse retina; that
we have also found some cells where the nonlinearity acts in such a way that the cell actually prefers homogeneous image patches with little spatial contrast; and that this property of being linear or nonlinear appears to be cell-type specific, as we have seen from the direction-selective and image-recurrence-sensitive cells.

This now raises the question of how we can capture these nonlinearities in models, which brings me to the second part of this talk. I will be very brief on this, because it is essentially work that has already been published, but it fits in well here as a reminder. Typically, it is thought that the nonlinear spatial integration in ganglion cells comes from the fact that ganglion cells integrate nonlinearly over presynaptic excitatory bipolar cells, and this nonlinearity is thought to come from nonlinear signal transmission in the bipolar cells' transmitter release onto the ganglion cell. That means the ganglion cell receptive field can be subdivided into what are called subunits, which correspond to individual bipolar cell inputs. If we want to incorporate nonlinear spatial integration into models of ganglion cells, a good or even necessary step seems to be to find out where the subunits of a particular ganglion cell are.

So we devised a method a couple of years ago, spearheaded by a former postdoc in the lab, Jian Liu. The method is based on using spatiotemporal white noise as a stimulus and then extracting those stimuli — those images — that trigger a spike, that are successful in activating a given ganglion cell. If you average all these images, you get a receptive field; that is the famous spike-triggered average. But if you do a different analysis — what we do is use a method called non-negative matrix factorization, so together we call this spike-triggered non-negative matrix factorization — you get a number of receptive-field-like structures, small filters. Interestingly, some of them look like noise, or turn out to be noise, shown here in blue, while the others, shown in red, have a nice localized structure and look like subunits. If you fit these with small 2D Gaussians, you see that they cover the receptive field quite nicely, and we take them to be the subunits of a given ganglion cell.

I should mention that we are not the only ones interested in devising methods for finding subunits in ganglion cell receptive fields: there are interesting related approaches, for example from the lab of E.J. Chichilnisky using spike-triggered clustering, or from the labs of Stephen Baccus and Surya Ganguli, fitting layered neural networks to ganglion cell responses — and they also find similar layouts of subunits inside ganglion cell receptive fields.

For us, the question was then: are these subunits really bipolar cells — do they correspond to individual bipolar cells? Helene Schreyer analyzed this in heroic experiments where she recorded with sharp microelectrodes from individual bipolar cells while at the same time recording spikes from ganglion cells, so that she could get the receptive fields and subunits of the ganglion cells and compare the subunits to the receptive fields of simultaneously recorded bipolar cells. Here are two ganglion cell receptive fields, shown in a pixel-by-pixel fashion, overlaid with the fitted receptive field of a simultaneously recorded bipolar cell. If we compare this bipolar cell layout to the inferred subunits of these two ganglion cells, we often see a very nice match of individual bipolar cell receptive fields to individual subunits, giving us some confidence that what we get in this analysis are actual bipolar cells that provide input to the ganglion cell. So we believe this is a very nice method for looking into the substructure of receptive fields.
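In spirit, the factorization step can be sketched with an off-the-shelf NMF, though the published method adds important ingredients (such as sparsity regularization and module selection) that are omitted here — a simplified illustration, not the lab's algorithm:

```python
import numpy as np
from sklearn.decomposition import NMF

def stnmf(spike_triggered_stimuli, n_modules=20):
    """Spike-triggered non-negative matrix factorization (sketch): factorize the
    ensemble of spike-eliciting stimulus frames into non-negative spatial
    modules; localized modules are candidate subunits, the rest is noise."""
    E = np.asarray(spike_triggered_stimuli, float)   # shape (n_spikes, n_pixels)
    E = E - E.min()                                  # shift contrasts to be non-negative
    model = NMF(n_components=n_modules, init='nndsvd', max_iter=500)
    weights = model.fit_transform(E)                 # per-spike module weights
    modules = model.components_                      # candidate spatial subunits
    return modules, weights
```

Each recovered module can then be fit with a small 2D Gaussian, as in the talk, to separate localized subunit-like modules from noise-like ones.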
You may also be interested to know that we have since considerably improved the method, thanks to this colleague in the lab, who has devised a much faster and more robust algorithm and is currently using it to get subunits also from primate retina — here from marmoset data, where you also find really nice subunit layouts for individual ganglion cells. He is using this, for example, to check how a given subunit layout relates to other simultaneously recorded ganglion cells, and whether there are overlaps between the subunits of different ganglion cells, as an indication of common input received by different ganglion cells.

But with this I want to leave the subunits and quickly summarize: with this method of spike-triggered non-negative matrix factorization we can computationally infer subunits, and these inferred subunits match individual bipolar cells — for us, at least so far, in the salamander data. This all rests on the assumption that the ganglion cells pool nonlinearly over bipolar cells while the bipolar cells themselves are mostly linear. So another question we asked is whether that is actually true: are bipolar cells really linear, providing nonlinear input to ganglion cells only through their synaptic output? There are reasons to believe that they are, or could be, linear — essentially because they receive input from photoreceptors, modulated by horizontal cells, and in the retina, up to the stage of the bipolar cells, there are essentially no spikes (except perhaps for some exceptions in some species). All these cells seem to operate mostly on graded potentials and continuously modulated transmitter release, so everything could be linear, and is typically thought to be linear. But Helene Schreyer — I already told you that she records from individual bipolar cells — went to check this with single-electrode recordings: poking a sharp microelectrode into the retina until she finds a bipolar cell, and then checking, for
example, how these cells respond to spots of light on their receptive field, comparing spots that are bright to spots that are dark, and checking whether the responses are linear in the sense that you get sign-inverted responses. What she finds is that for some cells this is more or less the case. Here, for an off cell, you see the graded membrane potential measured by the sharp microelectrode: she gets a depolarization, and for the non-preferred contrast — here a white spot — she also gets a hyperpolarization, dropping below baseline (zero here corresponds to the resting or baseline potential of the cell). But for other cells this is not the case: cell three here, for example, shows strong rectification — you get a strong depolarization for a black spot but essentially no response, no hyperpolarization, for the opposite white spot. And then there are cells in between, of course. So it looks like, at the level of the membrane potential, bipolar cells can at least show an asymmetry between preferred and non-preferred contrast.

Helene then went on to check whether this nonlinearity also holds for stimuli that are continuous, such as spatiotemporal white noise, by checking how the contrast signal inside the receptive field of a cell relates to the membrane potential. If you are familiar with the linear-nonlinear model that I already mentioned, this just measures the nonlinearity of that LN model, which tells you how the filtered signal corresponds to the output — the membrane potential of the cell. If not, you can think of it this way: at every point in time, she checks whether the average contrast inside the receptive field of a given bipolar cell over the recent past was activating or not. We call this average contrast inside the receptive field the filtered contrast, and we can relate this filtered contrast to the actual membrane potential measured inside the cell.

Here we see a similar picture as above: some cells have a linear correspondence between the membrane potential and this average contrast over the recent past inside the receptive field, whereas others show a nonlinearity indicating rectification of the contrast signal. These are from the same cells as above, so you can already see that the cell that is more linear in its hyperpolarization is also more linear under the white-noise stimulus, and this holds true over the population of recorded cells. We can quantify the degree of nonlinearity by computing an index, and we find that if the nonlinearity we quantify from the hyperpolarization is large, then we also get a large measure of nonlinearity under the white-noise stimulus. So whether the cell is more linear or more nonlinear seems to be a property of the cell. Here we see that the relationship between the contrast inside the receptive field and the membrane potential can be nonlinear for some bipolar cells.

Having established this, Helene went on to ask whether we can also find nonlinear spatial integration in bipolar cells. For this she first measured responses to spots of light switching between black and white on the receptive field, and she sees that the cells respond — here these cells all respond nicely to the black spot with a depolarization, and then a hyperpolarization, or decrease of the response, when the white spot is presented. Then, instead of the homogeneous spots, she also used patterned spots, separated for example into a bright and a dark half, or here into quarters. This is essentially the Enroth-Cugell-and-Robson type of experiment for bipolar cells: checking whether you get cancellation of the bright and dark signals, or responses to both reversals, which would also show frequency doubling. So let's see what these cells are doing.
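The exact nonlinearity index is not defined in the talk; one simple hypothetical form, quantifying the asymmetry between depolarizing and hyperpolarizing responses, would be:

```python
def rectification_index(depol_mv, hyperpol_mv):
    """Asymmetry between preferred- and non-preferred-contrast responses:
    0 for equal-and-opposite (linear) responses, approaching 1 for a cell
    that depolarizes but barely hyperpolarizes (strong rectification).
    A hypothetical form -- the talk does not define the exact index."""
    d, h = abs(depol_mv), abs(hyperpol_mv)
    return (d - h) / (d + h)
```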
Cell one here, for example, has an essentially flat response: it seems not to respond to these patterned spots, integrating linearly, or canceling out the bright and dark activation of the receptive field. But other cells show a reliable response to these reversals — here, for example, you see that the cell responds very strongly to both reversals and has a response at twice the frequency of that for the homogeneous spots. This frequency doubling — these responses to both reversals of a patterned stimulus, as in the old X-cell/Y-cell work of Enroth-Cugell and Robson — is an indication of nonlinear spatial integration in these bipolar cells. This nonlinear spatial integration is also related, or correlated, to the nonlinearity I showed you on the previous slide, the nonlinear representation of contrast: cells that are nonlinear in their representation of contrast under the white-noise analysis also tend to be spatially nonlinear, like this cell here with frequency doubling for patterned spots.

This raises the following question. We have a potentially nonlinear contrast representation as well as nonlinear integration of the local contrast signals — local signals coming in that are nonlinearly transformed — so let us call the combined contrast signal the effective contrast, or the integrated contrast signal. We thus have two potential stages of nonlinear processing, and the question is whether we actually need the second stage, or whether the nonlinear representation of contrast in the membrane potential is simply inherited from the nonlinear integration of the inputs that the bipolar cell receives — so that you would not need a second nonlinear stage transforming the effective contrast into a membrane potential. You can check whether a single nonlinearity is sufficient in a simple model.
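As an aside, why rectified inputs produce frequency doubling can be seen in a toy calculation with two half-field subunits seeing opposite contrast (illustrative numbers only, not the model discussed next):

```python
def response_to_reversal(phase, rectify=True):
    """Two subunits see opposite contrast under a split (patterned) spot.
    phase = +1 or -1 selects which half is bright. With rectification,
    either reversal excites one subunit, so the summed response is positive
    at every reversal (frequency doubling); without it, the halves cancel."""
    left, right = phase * 1.0, phase * -1.0           # opposite-contrast halves
    if rectify:
        return max(left, 0.0) + max(right, 0.0)       # nonlinear subunit outputs
    return left + right                               # linear summation

# linear cell: both reversals cancel; rectified cell: responds to both
linear = [response_to_reversal(p, rectify=False) for p in (+1, -1)]
nonlinear = [response_to_reversal(p, rectify=True) for p in (+1, -1)]
```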
in this model you can check it, because this model makes a prediction about how the membrane potential for such a patterned spot, such a white-black spot, is related to the membrane potentials for homogeneous spots of white and black. This turns out to be a fairly simple relationship, shown here: the membrane potential that you should get for the patterned spot should be the sum of half the membrane potential for the white spot and half the membrane potential for the black spot. I'm not going to attempt to explain in detail how this relationship comes out of the model, but the main ingredient is this: you can imagine that if you have half a spot, the membrane potential that you measure for this half spot already contains all the non-linearities that are part of the model. So if you then combine it with another half spot on the other side, the membrane potentials just add, because the signals for these half-spot activations already have their non-linearities, and there's no further non-linearity when you add them up to predict the membrane potential. So let's check whether this relationship holds true. Here for three cells we have responses to the reversal of a dark spot and a bright spot; let's subdivide this response into the response to the dark spot and the response to the bright spot, then multiply them by a half and add them up in order to get a prediction for this patterned spot, this black-white spot. This prediction here is for one reversal, and then the same prediction of course for the other reversal, because these are essentially identical stimuli according to this model, so you have this response shown twice for both reversals. And then we can check whether this prediction that comes from the model actually matches the measurements, and this is shown here: the measurements for the split spot actually match this prediction quite
nicely, and this is actually true for most of the bipolar cells that we measure: we get a fairly nice match of the simple model to what we measure, indicating that indeed we don't need any additional non-linear transformation of the integrated signal into the membrane potential. All the non-linearities in these bipolar cells can essentially be captured by the non-linear input that they receive locally inside the receptive field, which means that most likely the site of this non-linear processing is before the soma, before signals are integrated, for example on the dendrites or dendritic receptors of these bipolar cells, so fairly early in the bipolar cell processing chain. A lot more could be said about signal processing in these bipolar cells, but I want to get to one last point, so let me sum up this third part as well. We've seen that bipolar cell non-linearities exist in the shape of non-linear spatial integration. What may this be good for? This may, for example, retain sensitivity to spatial detail, which may be particularly important for animals that have small eyes, where receptive fields quickly get large in terms of degrees of visual angle: if you want to be sensitive to a small object, you may not want to do too much spatial averaging and spatial cancellation. So that could make sense from a functional perspective. And then, if you have this non-linear spatial integration, we found that this also translates into a non-linear representation of contrast, which is essentially inherited from this non-linear spatial integration, and the potential site of this non-linearity, we hypothesize, is maybe somewhere on the bipolar cell dendrites. Okay, finally I want to ask: are there also other dimensions besides space that show interesting linear and non-linear integration? So let me show you an example of a recording that was done by Mohammad Khani in the lab, who is analyzing a different type of integration, and here there are responses
from two cells for three stimuli, the original stimuli and, here in the second column, the contrast-reversed version of each stimulus. And if you look at this, and at the fact that cell one is not responding for the third stimulus while cell two is responding for both reversals, you may think this looks very much like what we've seen from Enroth-Cugell and Robson, where for spatial gratings you get responses for one reversal of the grating or the other, and then at the sweet spot where there's balanced excitation of the ganglion cell from black and white, some cells don't respond and others respond to both reversals. And it turns out that this analogy holds quite well for what we are doing here, but we are not using spatial gratings; instead the stimuli that Mohammad has been presenting are full-field color stimuli. Here the first stimulus is a flash of green light, either an increase or a decrease in green light intensity over a gray background, and the second stimulus is the same for UV light, either a decrease or an increase of UV, and here in the third row you have the mixture of the two, an increase in green and a decrease in UV. I should add that the mouse retina has two types of cones, or rather two types of color-sensitive opsins, and they are sensitive to green and UV, and therefore we here use this combination of UV and green. And what we see is that for cell one, apparently, when you have the right combination of green-on and UV-off, there's a cancellation, you don't get a response, analogous to the X-cells from Enroth-Cugell and Robson, so this would be linear integration. But cell two actually seems to integrate color in a non-linear fashion and responds both to this original green-on/UV-off stimulus and to the contrast-reversed version, so this would be a sign of non-linear chromatic integration. Having seen this, Mohammad devised a way to study this
chromatic integration more systematically, and what he needs is a way to find the sweet spot where you have a balanced activation, a balanced sensitivity to green and UV. He does this very similarly to a spatial grating that you slide across the receptive field of a ganglion cell to find the balanced activation for both halves of the receptive field: here he slides a contrast signal that goes from all green to all UV through intermediate combinations, together with the contrast-reversed version of the set. Then you can look at the responses of ganglion cells and, for example, check, as you go along this curve, this set of stimuli, how the responses decrease from being activated by green light to potentially going below baseline activity level, and compare this to the responses to the contrast-reversed stimuli, shown here, which increase the activity. And then you find the sweet spot where a cell is equally sensitive to one stimulus and the reversed stimulus at the point where these two curves cross, and then you can check, at this crossing point, is there activation or no activation? In this particular example we see that there is only baseline activity, no activation, which would correspond to a linear cell, but for other cells we see that this crossing point is above zero, there is actual activity at this crossing point, corresponding to non-linear chromatic integration and responses to both contrast-reversed versions of color combinations.
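In outline, this crossing-point analysis can be sketched with a toy model. The contrast values, the mixture axis, and the rectified-cell wiring below are all illustrative assumptions, not the actual recorded cells or the lab's analysis code:

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

mix = np.linspace(0.0, 1.0, 101)   # 0 = all green ... 1 = all UV (toy axis)
c_g, c_u = 1.0 - mix, -mix         # green/UV contrasts of the original series
                                   # (the reversed series has both signs flipped)

# linear cell: sum the chromatic inputs first, then one output rectifier
lin_orig = relu(c_g + c_u)
lin_rev = relu(-c_g - c_u)
# non-linear cell: rectify each chromatic channel before summing
nl_orig = relu(c_g) + relu(c_u)
nl_rev = relu(-c_g) + relu(-c_u)

def crossing_activity(resp_a, resp_b):
    """Activity where the original and reversed response curves cross:
    ~0 (baseline) -> linear chromatic integration, >0 -> non-linear."""
    i = np.argmin(np.abs(resp_a - resp_b))   # nearest sample to the crossing
    return 0.5 * (resp_a[i] + resp_b[i])

print(crossing_activity(lin_orig, lin_rev))  # 0.0: stays at baseline
print(crossing_activity(nl_orig, nl_rev))    # 0.5: activity at the crossing
```

The linear cell cancels the opposing green and UV drives, so both curves meet at baseline, while the rectify-then-sum cell responds to both contrast-reversed stimuli at the balance point.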
Let me show you a few more examples of responses at this crossing point. Here is a linear cell with no response for the right combination of green and UV; this is a response similar to what we have seen before, increased responses to both contrast-reversed stimuli, in this case for an off-cell. But then we also find a third type of response, here for an on-cell, where at this balance point we actually don't see a peak in activity but rather a decrease of activity below the baseline. And this interesting pattern, that for some cells we see an increase of activity and for others a decrease, is actually systematic for on-cells and off-cells: at this balance point, where UV and green stimulation are equally effective, if cells are non-linear, then all off-cells as well as on-off cells show this increased activity, and all on-cells, if they are non-linear, show a decrease in activity. That's very systematic and probably tells us something about the circuitry of how this non-linear chromatic integration is established by the retina, how it comes about.
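A minimal sketch of how opposite response signs at the balance point can fall out of rectify-then-sum wiring. The off-type model follows the rectify-each-channel-then-sum idea discussed in the talk; the mirror-image on-type wiring is purely an illustrative assumption of this sketch, not the circuit identified experimentally:

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

# At the balance point the green and UV contrasts are equal and opposite.
g, u = 0.5, -0.5   # hypothetical opsin contrasts at the crossing point

linear = relu(g + u)            # 0: the opposing drives cancel, baseline
off_nl = relu(g) + relu(u)      # rectify-then-sum: activity above baseline
on_nl = -(relu(-g) + relu(-u))  # mirror wiring (assumed): below baseline
print(linear, off_nl, on_nl)
```

With opposing chromatic contrast, the linear combination cancels, the off-type rectified sum rises above baseline, and the mirrored on-type version is pushed below it, qualitatively matching the systematic increase/decrease pattern.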
So just a few quick words about where we believe this non-linear chromatic integration comes from. The first thing that you may think is, well, we already know that for spatial integration we have linear and non-linear cells, and this comes from bipolar cells, so maybe it's just the case that some ganglion cells are always linear and some are always non-linear, regardless of whether the experimenter looks at integration over space or over chromatic channels. So we checked this, but we found that this is not the case: the non-linear integration under chromatic stimuli is different from the non-linearity that you get under spatial integration, essentially because we find cells for all combinations of linear and non-linear chromatic integration with linear and non-linear spatial integration. Here for example is a cell that is chromatically linear, the crossing point is at baseline, but the cell responds very strongly to reversing fine spatial gratings, indicating spatial non-linearity, and here we have a cell that's chromatically non-linear, with the crossing point above baseline, but little response to a reversing grating. We find that these two properties are just not correlated; they don't have much to do with each other. And then we also find that this non-linear chromatic integration seems to come from the surround; the receptive field surround of the cells seems to be important. We do find the non-linearity when we use full-field chromatic stimuli, as shown here for an off-cell and an on-cell, crossing point different from baseline, but when we do the same experiment with small spots, placed on the screen at different locations so as to cover the entire screen with spots presented at different times, and then for each cell find the one spot that matches the receptive field location of that cell, then we see that this chromatic non-linearity disappears. Now these
crossing points drop down to baseline activity; we don't get the activation at the crossing point anymore, indicating that for local stimulation we now have linear chromatic integration, so the surround seems to be important for this. Finally, let me just make another remark about this chromatic integration that we find: the non-linear cells are not distributed homogeneously across the retina. So this is an image of a recording from an entire retina with the recorded receptive fields, the dorsal side of the retina and the ventral side of the retina, and if for each cell we check whether it's linear or non-linear in chromatic integration and relate this to the position of the cell on the retina, shown here by the distance to the midline of the retina, then we find that in the dorsal retina we mostly find linear cells, those are the blue spots here on the dorsal side, whereas the non-linear cells, shown in orange, are primarily on the ventral side of the retina. So whether a cell is chromatically linear or non-linear seems to depend strongly on where it is on the retina. We also find some cells that don't integrate UV and green light, that are exclusively sensitive to UV, and those are also mostly on the ventral side of the retina. Okay, finally some speculation about a potential function. This we study by setting up a model of linear and non-linear chromatic integration, where you take two signals from green and UV light and either you just add them to get an activation of a cell, or you first rectify them and then add them up, which is a simple phenomenological model for the type of responses that we've found here in the off-type non-linear chromatic cells, because this allows the cell to respond to opposing contrast in green and UV: for such a stimulus we would get a response in these non-linear cells but no response in the linear cells. And then we took images from a database that photographed the
same images through both a UV filter and a green filter, so that we have the two color channels, and then checked how these model cells respond to the color information in these images. Here I just show you the difference, depending on the location in the image, where the non-linear integration gives a different response than the linear integration. It turns out that the non-linear model differs from the linear model approximately at the line that divides the foreground from the background sky illumination, the skyline so to speak. This is where you can have this opposing contrast of green and UV light, probably because of differences in the reflectance properties of the foreground, and therefore you see that the non-linear cells pick this up and respond more strongly than linear cells, giving a potential channel for finding the skyline by comparing responses between chromatically linear and chromatically non-linear cells. Okay, let me sum up this last part as well. We find that chromatic integration is another interesting dimension where you find linear and non-linear integration, very similar to spatial integration. This non-linear chromatic integration appears to depend on activation of the receptive field surround, and it's primarily found in cells in the ventral part of the mouse retina. So let me just very quickly recap the four parts of the talk. We found non-linear receptive fields in the mouse retina that are relevant for natural stimuli in many cells; I've shown you that you can get the subunits of receptive fields through computational techniques; I've also shown you that, in the salamander retina at least, bipolar cells can also show non-linear spatial integration; and finally that there can also be non-linear chromatic integration, where different color channels are combined in a non-linear fashion. And with this I would like to thank the people involved, the people in the lab; I've
mentioned the people that have done the particular parts that I've talked about here. I should also mention that this spike-triggered non-negative matrix factorization was developed in collaboration with Stefano Panzeri and Arno Onken from the Italian Institute of Technology, and finally I would like to acknowledge the funding sources that have supported us, the ERC and the Seventh Framework Programme of the European Union, as well as the German Research Foundation through a collaborative research center. And with this I'm at the end; I'm very much looking forward to questions and an online discussion, thank you. Hopefully I'm audible, can you hear me, Tim? I can hear you. Oh yeah, well, thank you; there was actually a lot more interest in things than even I expected, and I invited you, so this was great. We have a great deal of questions, so shall we start right now? Absolutely. So I'll just remind you to look out for what will be happening in the chat: we'll be posting a link to this very Zoom room in which we too find ourselves, so that you can join the discussion live a little bit later. Right, so questions, questions, questions; there are plenty of those. I think I will group two together: there's a question by George Caffetzis and a question by a certain Markus Meister that sound a bit similar. So, probably minor, but given that you talked at the start about resolution and in- and out-of-focus images: how does the stimulus reach the retina on the MEA, physiologically, through the RGCs? And Markus Meister asked something similar in spirit, regarding whether these effects are important for natural vision: were the natural images scaled and filtered to emulate what would actually appear on the retina of a mouse or a marmoset? Yeah, good questions. So in the experiments we actually simply present the stimulus on a computer screen, and that is projected onto the retina from the
photoreceptor side, in some sense from the wrong side of the retina, which may mean that there is, for example, less blurring of the image. This is something that we don't take into account at the moment, which is definitely an interesting follow-up question. At the moment we simply focus on comparing, let's say, natural images that have objects as well as contours to the typically used artificial stimuli like gratings and on-off stimuli, and seeing whether there's a relevance on this level. But a good next step that we haven't done yet is to match this better, for example by trying to match the blurring that occurs through the optics of, let's say, a mouse or a salamander eye and the passing of light through the retina; at the moment we haven't done this yet. Right, thank you for the response. I'll just remind everyone that you can keep asking questions as long as Tim is alive and willing to respond. So, somewhat more technical perhaps, Brent Young is asking: it is very interesting that there is such a strong on-response in some cells with a uniform gray field; how different is the light intensity of these natural image grays versus the solid field you used before? That was at the beginning of the talk. Yeah, so I guess strong on-response may refer here to a response at the image onset; I'm not sure whether I understand this correctly. The way the experiment is done is that there is an intermediate gray illumination between images, and the images all have the same mean luminance over the entire image, so in that sense there's an equal chance that the ganglion cell sees an increase or a decrease of light intensity. But of course, since each individual cell only sees a small part of the image, it's, let's say, unlikely that it sees the same average gray inside the receptive field as the background illumination. It can be that a patch does average out to the background gray,
and then, if the cell is a linear cell, the cell should not respond, right, because it would see an average zero activation. But a non-linear cell can then of course still respond if there's structure inside this part of the image, so that it sees decreases as well as increases in light intensity, and that can be considerable. So even for an image patch where the net activation, the average luminance signal, is essentially zero, the same as the background gray light intensity, you can have strong responses just because of the spatial structure inside the image. Right, I think that answers it. So we have another, rather technical, question about the non-negative matrix factorization and the penalties that you've used: what loss and regularizations have you used, and would you expect different results, I guess qualitatively different results, if you changed the losses and regularizations? Yes, that's a good question from someone, I guess, who knows what he or she is talking about. In the original paper we used regularization that we essentially took out of the box: it was an L1 regularization on these subunits, which helped to make them sparse; if we don't do this, they are more blurred, let's say. And we essentially took it out of the box without tuning it much, because it worked well in the sense that it gave us consistent results: if we wiggled it a little bit, the results didn't change much, whereas if we increased the regularization too much we would get individual pixels, and if we reduced it too much we didn't get nice subunits. Sometimes that could be considered a weakness, and therefore we've worked on this since then, and in the new version that I hinted at, we now have a version that works without regularization, where we don't need the regularization; the marmoset receptive fields that I've shown you, you get without any sparsity prior, so without any L1 regularization. But it probably will still take a few weeks
before this is really finished and hopefully out sometime soon, but that's coming up, so you can work on this and improve on this. And we've also looked at other ways of checking the regularization: you can use regularization and find ways of gauging the right regularization level, for example by checking that, if you use different subsets of the data or different initialization conditions, you get particularly stable solutions, and that seems to be a useful way of doing something like cross-validation to find a good tuning of the regularization. Well, that's particularly exciting for some people, I guess, myself included, but I think there are more questions that are connected to this and to the later bit of your talk. So Whaley exclaims that that was a great talk and asks whether you have found any chromatic subunits within the RGC receptive fields; if so, what do they look like? And somewhat connected, maybe, Maximilian Yosh is asking: since UV light also stimulates M-opsins, wouldn't that explain the linearity in the dorsal retina? Yeah, good questions. So first about the subunits: the short answer is no, we have not, because we have not looked sufficiently. So far we have only analyzed data where we used monochromatic stimuli, just grayscale, and therefore we have no information about chromatic aspects of the subunits. Interestingly, we do have data where we just haven't gotten around to analyzing this: in some recordings we also use chromatic stimuli, you know, two color channels, high spatial acuity stimulation, long recordings. This becomes a bit more difficult because you essentially double the number of parameters by taking two color channels, so this becomes a bit more involved, but we may have an answer there sometime in the future; too many things to analyze. The second
question, stimulation of M-opsins: that's true, the UV light does activate M-opsins. A thing that I skipped over is that whenever I talk about UV and green contrast, what I actually mean is S-opsin and M-opsin contrast, because we do all these experiments with cone-isolating, or opsin-isolating, stimuli. So really, when we use UV light, or talk about UV contrast, we at the same time modulate the green contrast in such a way that the M-opsins are not, or should not be, activated; otherwise this could indeed be contributing to a linearization effect, that's true. Right, so we have some more questions; one is from a certain Leon Lagnado, I don't know who that might be. So he mentions spatial non-linearities in bipolar cells: are you, Tim, thinking about variations in the density of photoreceptor inputs to passive dendrites, or active conductances, or both, or neither? I guess the question is what we're considering as a mechanism, is that the question? I mean, you can look at the text of the question, I haven't read it fully. It does mention thinking about... okay, yeah, now I got it. There are probably several mechanisms there, so I guess both. At the moment, what we can say is that it's likely that something is happening at the dendrites of the bipolar cells, because it seems to be before spatial integration, and it doesn't seem to be happening at the level of the photoreceptor per se, for two reasons: we also record from photoreceptors and we find fairly linear contrast representation in photoreceptors, at least at the level of membrane potential, and one could also say that if it were the photoreceptors, then probably all bipolar cells should have the same linear or non-linear properties. So that's the chief argument, so to speak, for saying it's not the photoreceptors per se; it could still be something in the synapse between photoreceptors and dendrites, and maybe more
likely post-synaptically. Yes, those two aspects are then the primary candidates that we would consider: that it's somewhere, so to speak, at the synapse, maybe on the post-synaptic side, in what the dendritic receptors do with the photoreceptor neurotransmitter signal, whether they locally transform this in a non-linear fashion; and the second aspect could be active channels in the dendrites, which are known to exist, so that's a good candidate as well. At the moment we cannot really distinguish between the two. Oh, we have more and more questions; can you carry on? We have many more questions coming in. Yeah, I can carry on. All right, that's great. So we have several interesting ones, but I'll go with the one from Anton Nikolaev: can the noise subunits not be noise, but reflect the activity of amacrine cells? Mm-hmm, yeah, I see the point. Most of these noise subunits from the spike-triggered non-negative matrix factorization, if you looked at them closely, seem to have their pixels mainly in the surround, so one could say maybe they have something to do with the surround, maybe they are a contribution of amacrine cells. The reason why I say they are mostly noise is that if you look at how these filters, if you take them as stimulus filters, are related to the response of the ganglion cell, you essentially see a mostly flat line. So if you filter the stimulus with one of these filters that we call noise, and then compute, let's say, the non-linearity for this filter, or correlate the activation of this filter with the response of the ganglion cell, then it's really markedly flat, except for something that is most likely noise, and therefore we interpret them as noise signals. The thing is that we do this analysis with a fixed set of filters that we typically try to tune so that it's larger than the number of subunits that we have, because this allows us to capture some of the noise that's in
there, just because the stimuli that we use are Gaussian-distributed pixel values, so there's never, you know, the exact stimulus that drives a subunit; there are always variations, and it turns out that it's easier to fit the subunits if you provide the fit with a trash can where the variations in the pixel intensities can be stored, and this is essentially what these other subunits then do. So if we don't allow for these noise subunits, then the fit of the actual subunits also gets worse. Hmm, that makes sense. So we have more questions; one, I don't know if this was answered during the presentation, right, it came somewhere in the midst of your talk. Etienne Roche is saying that these computations are interesting for the processing of features in natural scenes; to what extent do you think they would be involved in processing, exempli gratia, symmetry, edge detection, et cetera? Do you think this was answered, or do you want to... Probably not; I mean, I can say a few things. So the interest in non-linear integration partly comes from interest in the circuitry: how do signals from bipolar cells relate to ganglion cells, is that linear or non-linear, that tells you something about the synapses, maybe. It's also triggered by the idea that we want good models, and we want to know what we have to integrate in these models in order to get good predictions for ganglion cells, both as a framework for checking whether we understand what the retina is doing, and maybe in the future also to build prosthetics or whatever; good models are always useful, right? So that's one interest. The second interest is that, most likely, I mean, we know that some cells in the retina are doing specific computations, some detect the direction of motion, others maybe a looming stimulus or whatever, and these computations are really only possible if you allow the cell to do
interesting things inside its receptive field, and these interesting things mean that somehow the cell has to combine signals over its receptive field in a non-linear fashion, otherwise it cannot really do anything, let's say, interestingly specific that goes beyond contrast filtering; I mean, in some sense, interesting is here defined as being non-linear, right? So in that sense, if you're looking for a computation like detecting a direction of motion, or maybe edge detection, which could be implemented by a similar mechanism in a particularly robust fashion, then it's likely that this will involve non-linear mechanisms. So I would say yes, all these non-linearities that we see are likely something like, maybe you can call it, the scaffold or the layout on which non-linear integration, on which specific computations, can be built, right? You can then combine these non-linearly extracted elements in a certain way in order to extract some information. What that is, we probably need to do a little more work on; just saying that a cell is non-linear doesn't tell us what computation it may be doing, right? But you can then go on and ask what kind of non-linearity is there, and in the talk we've seen two different types, in a way: some cells that prefer contrast inside the receptive field and some cells that prefer homogeneous stimulation, and that may give you some idea about the function. But you can go further: you can not only ask whether the cell is sensitive to spatial contrast, but also whether it uses the spatial contrast in, I don't know, a rectified fashion, or computes the square of the contrast, and how it combines these things. There are examples in the literature that give you an idea of what sort of computation is then implemented by these non-linear circuits. I have no idea whether that answered the question. Well, they gave me more questions to ask, right? So I'm being told to move the conversation to Zoom,
although we have more questions that were asked in the chat room. So what I will now try to do is interact with my colleagues and ask them where the link to the chat room is, because it should already be in the chat; so please, colleagues, someone do the feat and post the link to this chat room. And meanwhile I will ask you maybe one more question. This is color related, so we will have that: Marcus Howlett is asking, how well do your opsin-isolating stimuli isolate cone output, considering that horizontal cells are spectrally broader than cones? Yeah, so I guess there's a general question of how well cone-isolating stimuli are really cone-isolating, and therefore maybe the safer term to use here would be opsin-isolating rather than cone-isolating, right? Because, given the feedback from horizontal cells, there's the general question of how well... I'm hearing myself now in the background, that's a bit irritating, but never mind, I'll just try to go on. The cone output signal may not reflect the opsin signal inside the cone, and the second aspect is that also in the mouse we have a lot of co-expression of opsins, and therefore we obviously cannot isolate these cones, but we can try to isolate the opsins. So in that sense I would rather go with opsin-isolating stimuli; that should work fairly well, because this is sort of well established in the literature, and from our measurements it also looks like it's working quite nicely, given the fact that we see a good correspondence with how cells in the ventral and dorsal retina are activated by our opsin-isolating stimuli, and given the fact that we see UV-sensitive cells that are not responding to green light, for example, or M-cone, M-opsin activation I should say. I guess I will nearly disappear into the background now; I know that there will be plenty
of questions, at least with these two around.

Okay, hi everyone, and do enable yourself to speak, of course. Great, I really enjoyed that, lots of new stuff. Mostly all new, and it turns out that it's like jazz, right? The retina is like jazz: eventually we all end up with color. Some time ago I heard someone say: eventually you all end up with direction selectivity. Color, yes, is a stepping stone; color is the new black. Color is certainly interesting. So I don't know if you know the study in detail, but some time ago we showed that UV cones in the mouse are very non-linear in their contrast response, whereas the green ones are linear-ish. So do you think that's driving some of these effects you see?

It could be, yeah, absolutely. That's a potential candidate for adding non-linearities, I guess, both for the chromatic non-linearity as well as, although that was not in mouse, potentially a contribution to the non-linear bipolar cells that we found in the salamander, if something similar is happening there. For the chromatic non-linearities, I guess our primary candidate for a mechanism probably goes through amacrine cells, because of this connection that we have via the receptive field surround: we really see that we need receptive field surround activation in order to trigger these non-linearities. I've shown you these spot stimuli; for local spots the non-linearity disappears, but we can also make it disappear with certain inhibitory blockers. So I would guess that non-linearities arising in the photoreceptors should be more robust to blocking inhibition. And I guess, on the level of the types of non-linearities that we see here at the ganglion cells, part of the non-linearities that come from photoreceptors could be washed out a little bit by integration in the bipolar cells; if you have chromatically mixed bipolar cells, the result could potentially even be more or less linear. I would have to think through what
contribution you should expect from that.

I guess the bipolar cells are funny in that way, right? Because they get these massive inputs from amacrine cells, which you may or may not be seeing the way they really are when you poke the cells in the soma. So it's an enigma.

Yeah, absolutely. So in a way, for the salamander data, we would assume that the non-linearities that we've seen in the bipolar cells already arise before potential contributions from amacrine cells, because the recordings are at the soma, and I guess we don't expect to see too much of the amacrine cell input that goes into the terminal. But we cannot fully rule this out; it could be a contribution, right? The cells are not that big, and something could be back-propagating. One argument that we have there is that we see the same non-linearities in the bipolar cells regardless of whether we use small spots or large spots, and as we've seen from the recent literature on bipolar cells, at least the inhibitory contributions to bipolar cell activation seem to depend on, or are strongly modulated by, whether you have a small spot or a large spot. Therefore we believe that what we see there, because it also appears for small spots, does not depend on amacrine cell activation and back-propagating signals.

Okay, there seems to have been a question, but please remember to enable your mic before you start talking. There's a question from Jennifer, to everyone: one thing that always confounds me is how to think about how such specific spatial information, coded by these cell types, will translate to a rapidly moving eye. How do you think this information would look different to animals with rapid eye and head movements versus those without such movements, like the salamander?
A question mostly for Tim, I guess. Yeah, I think this is an important, and probably strongly under-investigated, question that I find particularly interesting, and therefore we have been looking a little bit into this. I mean, a hand-waving argument is that, at least in the first part of the talk, I showed you responses to more or less briefly flashed images, 200-millisecond flashes of