Hello and welcome, everyone, to another session of our vision seminar series, part of the World Wide Neuro initiative. Today our speaker will share her findings in her talk entitled "Connecting structure and function in early visual circuits". So without any further ado from my side, please welcome Rudy Behnia. Rudy, the stage is officially all yours.

Thank you, George, I'll start sharing. Okay, so thank you so much, George, for this nice introduction, and I also want to thank you for the invitation; it's really lovely to be here. My lab is interested in how visual circuits extract behaviorally relevant features from visual scenes, such as motion, color and so on. We want to understand the computations that build these feature representations, but also the neural circuits that implement them. And we have recently been more and more interested in understanding how different states and different stimuli can alter these representations, and even more recently in how these representations can be used for higher cognitive functions. We do this in the fruit fly, not only because of the very well known advantages of this model system, such as genetic tractability, but also because of the increasing availability of connectomics data in the fly, which has really, in my view, transformed our field.
So we are experimentalists, but we collaborate very closely with theorists to build biologically constrained models that use connectome-derived synaptic information to understand how these transformations are actually supported by real neurons and circuits. And so today I'm going to tell you about two recent stories from my lab where I hope you will see this thread of connectomics-driven circuit investigation at play. But first I'm going to give you a short introduction to the fly visual system, to get everybody on the same page. So the fly eye is made of about 800 independent units called ommatidia, each corresponding to one pixel in the field of view of the animal. Each ommatidium contains eight photoreceptors, R1 to R8. R1 to R6 form a trapezoid around R7 and R8, which sit on top of each other and are in the same light path. R1 to R6 are involved in achromatic vision, whereas R7 and R8 are thought to be important for color vision. There are two types of ommatidia in the main part of the eye, and they are distributed stochastically across the fly eye. These are defined by the expression of wavelength-specific opsins in R7s and R8s. In pale ommatidia, R7s express a UV opsin and R8s a blue opsin. In yellow ommatidia, R7s express a different UV opsin and R8s express a green opsin. R1 to R6 always express a broadband opsin called Rh1. Here are the relative spectral sensitivities of the opsins that we measure in the eye. You don't have to remember any of this; I'll have little diagrams as we go through the talk. So now for the circuitry. All photoreceptors send their axons to the optic lobe, where visual information is processed. This is a cross-section showing the eye and the four neuropils that make up the optic lobe. Photoreceptors are in green. R1 to R6 target the first optic ganglion, called the lamina.
And R7 and R8 go through the lamina and target the medulla. The lobula and lobula plate complex represent higher levels of processing, which then lead to the central brain, where sensory information is integrated and used to direct behavior. And I really like this image because it beautifully highlights the retinotopic nature of the visual system, which we use to our advantage in our experiments. So here's a schematic that I'll use throughout. A lot of what my lab is doing is trying to figure out what kinds of computations are taking place in the optic lobe, with a focus on the medulla and the lobula. And for the vertebrate people here, I would equate the level of processing that we're dealing with, in the medulla at least, to what probably happens in the vertebrate retina. The lobula is still quite a mystery, but we're very curious about what's going on there. And just like the vertebrate retina, whose role is partly to deconstruct the image into multiple representations within parallel circuits that each extract specific features of a visual scene, there are parallel pathways in the medulla that extract different features. And today I'm going to tell you two stories: one about achromatic vision, specifically a circuit that extracts the direction of local motion, and one about color vision, a circuit that extracts spectral information. So first I'll talk about the motion vision project. This is the work of two extremely talented students in my lab, Jesse and Jacob. Jesse did all the experiments and Jacob did all the modeling, and together they formed a really great team to tackle this problem. So local direction-selective signals are very important and versatile signals. They're building blocks that can be used by the brain to construct much more complex motion signals.
For instance, building optic flow signals, which can be used for self-motion estimation, which I think is nicely illustrated by this roller coaster video. These signals can also be used for object recognition, as illustrated here: in the still image you don't quite know what's going on, but once you see the motion, you can recognize the dogs playing around. Okay, so in the fly, the first local motion detectors are called T4 and T5. T4 gets its inputs in the medulla. Here are many, many different T4s, and here I show you a clone of one T4 cell. T5 gets its inputs in the lobula, same here, and they both send their axons to the lobula plate, which is a very important part of the brain in terms of motion detection. Both of them are direction selective, meaning that they respond to their preferred direction of motion and not, or less, to the opposite direction. T4 detects ON motion while T5 detects OFF motion. And the question I'm trying to answer here, and in the next few slides, is: how do these neurons achieve this property? Motion itself is not accessible to single photoreceptors, since they each look at one point in space. What the brain needs to do is to compare luminance changes in space and time somewhere downstream to extract the direction of motion. Phenomenological models of motion computation have been extremely useful for thinking about motion. I cite two very famous ones here, and they have in common that they use inputs that are displaced in space, together with a so-called delay line that transmits information to a downstream direction-selective neuron on a slower time scale than the other input. So I'm going to give you an intuition for a very famous model in insect vision, the Hassenstein-Reichardt correlator, which works by enhancing responses to the preferred direction of motion.
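A minimal numerical sketch of such a correlator may help here. Everything in it is illustrative (the exponential delay filter, the time constants, the step-edge stimulus are my own toy choices, not parameters from the talk), but it shows the preferred-direction enhancement the speaker describes:

```python
import numpy as np

def hrc_response(stim, delay_tau=10, dt=1.0):
    # One Hassenstein-Reichardt half-detector: the signal from the first
    # point in space goes through a low-pass "delay line", then is
    # multiplied with the undelayed signal from the second point
    # (coincidence detection). Returns the time-averaged output.
    t = np.arange(0, 10 * delay_tau, dt)
    lowpass = np.exp(-t / delay_tau)
    lowpass /= lowpass.sum()
    delayed = np.convolve(stim[:, 0], lowpass)[: len(stim)]
    return np.mean(delayed * stim[:, 1])

# A bright edge crossing two neighboring points, 10 time steps apart
T = 200
pd = np.zeros((T, 2))
pd[50:, 0] = 1.0          # delayed arm sees the edge first (preferred)
pd[60:, 1] = 1.0
nd = pd[:, ::-1].copy()   # opposite direction: swap the two points

# In the preferred direction the delay brings the two signals into
# coincidence; in the opposite direction it pulls them further apart.
assert hrc_response(pd) > hrc_response(nd)
```

The full correlator subtracts a mirror-symmetric half-detector, but the asymmetry is already visible in one half.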
So as an object moves in front of the correlator and activates the delayed line first and then the non-delayed line, the two signals arrive at the same time at the next step, which detects coincidence, and you get a motion signal. As the object moves in the opposite direction, the delay actually separates the two signals further in time, so they arrive one after the other at the coincidence detection step, leading to no signal. So this is a very intuitive model that actually accounts for many aspects of insect motion vision, but a problem with this and other models like it is that they don't consider two key properties. One is the complexity of the processing properties of the actual neurons that make up the circuit, and the fact that neurons can adapt to various conditions. The other is that they do not consider the connectivity of actual neurons in the brain. To stress the first point, I want to show you a simple toy model that demonstrates, I hope clearly, that incorporating more complex filtering properties can have a strong effect on the output of a motion detector. So here we have two inputs that are separated in space, and for the sake of simplicity, I'll just add them up at the second stage of motion detection. In the first scenario, we use two monophasic low-pass filters to describe the two inputs, one slower than the other. In the PD direction, the responses look like this, and in the ND direction they're more separated in time; you can just sum them up and compare the outputs. As you can see here, this model shows a very modest difference between PD and ND, if any. In the second scenario, instead of using two monophasic inputs, I use one monophasic input, and the second will be a biphasic band-pass filter.
In this case, so you see this is the PD and this is the ND, if you compare the sums you see that there's a large difference between the PD response and the ND response, and this is because the trough of the band-pass neuron coincides with the peak of the low-pass one, leading to some cancellation. So I hope this convinces you that differences in the shape and filtering of inputs to motion detectors can have very important effects on the extent of direction selectivity in such a scenario. To illustrate the second point, about the importance of incorporating our knowledge of the connectivity, I'll focus on the OFF pathway, which will be the focus of my talk. The pathway leading to T5 looks like this. T5 gets feedforward inputs from two columns, equivalent to two points in space. The first comes from the TM9 neuron and the others from TM1, TM2 and TM4, all three of which look at the same point in space. I'll schematize this like this from now on. It's important to note that all four neurons are cholinergic, so putatively excitatory. Many experiments have been done recording from T5 itself or from its inputs, but most models that have been proposed so far, and I've added a couple of examples here, whether they're filter-based or conductance-based, invoke some sort of direct inhibition. So although they can account for specific experimental measurements, they're still hard to interpret in terms of neural implementation: it's not quite clear in this case where the inhibition would come from. So we took a very intentional approach in my lab to incorporate these two aspects: the constraints from the structure, as well as our hunch that the actual temporal properties of the circuit might be under-described and matter more than what people have been using.
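The two filtering scenarios of that toy model can be reproduced in a few lines. Everything here is an assumption for illustration: the filter time constants, the 20-sample spatial delay, and the peak-of-the-sum readout are mine, not the speaker's actual parameters:

```python
import numpy as np

t = np.arange(200.0)

def norm(f):
    return f / np.abs(f).max()

def lowpass(tau):
    return norm(t * np.exp(-t / tau))   # monophasic low-pass impulse response

slow = lowpass(30.0)                           # the slower input
fast_mono = lowpass(25.0)                      # scenario 1: faster, monophasic
fast_bi = norm(lowpass(10.0) - lowpass(20.0))  # scenario 2: biphasic band-pass

def summed_peak(first, second, delay=20):
    # An edge reaches the second point `delay` steps after the first;
    # the toy detector simply sums the two filtered responses.
    shifted = np.r_[np.zeros(delay), second[:-delay]]
    return (first + shifted).max()

def selectivity(a, b):
    # PD: input a's point is crossed first; ND: input b's point first
    return summed_peak(a, b) - summed_peak(b, a)

ds_mono = selectivity(slow, fast_mono)  # two monophasic inputs: modest
ds_bi = selectivity(slow, fast_bi)      # biphasic trough cancels ND peak
assert ds_bi > 2 * ds_mono
```

With both inputs monophasic the PD and ND sums differ only modestly; the biphasic filter's negative lobe cancels the other input's peak in the ND direction, producing a much larger difference.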
And so this led us to think about adaptation, because related to this is the obvious fact that motion circuits have to work in many different behavioral states and in response to a variety of stimulus statistics. We know that sensory neurons adapt their processing properties in different ways, such as frequency tuning, gain tuning, or what I'll call here biphasic tuning. And inputs to T5 have actually been shown to display at least frequency tuning and contrast gain adaptation; these are the papers that have shown this. So our hypothesis was that accounting for the state- and stimulus-dependent adaptation of T5's inputs could clarify the core motion computation in T5 and its implementation. We decided to test this very explicitly. What Jesse did was perform whole-cell patch-clamp recordings of all four inputs to T5, in response to different stimuli and in different states. In terms of stimuli, I'll show you some responses to white noise as well as to some simple flashes. In terms of state, we did this indirectly, by adding octopamine to the bath while recording. This is because motion circuits have been shown to be modulated by locomotion in flies, which correlates with the release of octopamine. And octopamine has been shown to broaden and increase the range of temporal frequencies that T5 neurons are sensitive to, presumably to match it to the statistics of motion of flying and walking flies. Okay, so now I'm going to get into the data itself. Here are the linear temporal filters that Jesse extracted with the white noise stimulus. They're mildly band-pass: you can see they've got a prominent first negative lobe and a very shallow second positive lobe. And you can see this better when you look at it in frequency space. These are the parametrized versions of these filters. They all peak slightly below one hertz, with TM9 being the slowest of them all, which was already known, of course.
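For intuition, here is one way to construct a mildly band-pass temporal filter of the kind just described, with a dominant first lobe and a shallow opposite-sign second lobe, and confirm that its frequency tuning peaks below 1 Hz. The time constants and lobe weights are invented for illustration, not fits to the recorded filters:

```python
import numpy as np

dt = 0.01                         # seconds per sample
t = np.arange(0, 3, dt)

def lowpass(tau):
    f = t * np.exp(-t / tau)
    return f / f.max()

# Mildly band-pass filter: prominent negative first lobe (these OFF-pathway
# inputs depolarize to darkening) and a shallow positive second lobe
filt = -lowpass(0.12) + 0.3 * lowpass(0.35)

# Its amplitude spectrum peaks well below 1 Hz
freqs = np.fft.rfftfreq(len(t), dt)
gain = np.abs(np.fft.rfft(filt))
peak_hz = freqs[np.argmax(gain)]
assert 0.1 < peak_hz < 1.0
```

Subtracting two low-pass kernels with different time constants is a standard way to get a band-pass, mildly biphasic impulse response; making the second lobe weight larger would make the filter more biphasic and shift the peak to faster frequencies, as the octopamine data below show.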
When adding octopamine, we see this very interesting change in the waveform of the responses, which all become narrower and speed up, and TM1, TM2, and TM4 acquire this very obvious second positive lobe, making the filters much more biphasic. In the frequency domain, this corresponds to a shift towards faster frequencies of motion, as well as a narrowing of the bell-shaped curve. And I think one of the interesting properties that comes out of these experiments is that octopamine spreads the sensitivities of these four inputs across frequency space, which is something that's actually seen in T5 and could be a reason for this apparent redundancy. I'm not going to show you the spatial filters or nonlinearities, because we've measured them and they don't change much, but we'll use them in the models eventually. So the question now becomes: can we use these linear temporal filters, extracted with white noise, to predict the responses of these neurons to stimuli with different statistics? For example, if we convolve the temporal filter with a flash stimulus of a specific duration, can we predict the responses of these neurons? We also record, of course, from these neurons in response to this flash and compare the two. Okay, so this is what we did here. Here are the white-noise-predicted responses of TM1 to flashes of different durations, in dashed lines, and these are the actual responses that Jesse recorded from TM1. It's very clear that the white noise filters don't do a good job of describing these responses: TM1 responses are much more biphasic to these flashes, and the short-duration flashes have responses with much higher gain than expected from the white noise predictions. This is the data for the rest of the cells. TM4, and to a lesser extent TM2, have similar properties. TM9 doesn't acquire a biphasic response, but both the shape and the gain are not well described by the white noise filters.
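The prediction pipeline being tested here, estimating a linear filter from white noise by reverse correlation and then convolving it with a flash, can be sketched as follows. For a purely linear toy cell (my assumption; the kernel below is invented) the prediction is near-perfect, which is exactly the baseline that the real, adapting inputs violate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth linear cell with a mildly band-pass temporal kernel
t = np.arange(60)
true_filter = np.exp(-t / 10.0) - 0.5 * np.exp(-t / 20.0)

def respond(stim):
    # causal convolution of the stimulus with the cell's kernel
    return np.convolve(stim, true_filter)[: len(stim)]

# 1) Estimate the filter by reverse-correlating response with white noise
noise = rng.standard_normal(100_000)
resp = respond(noise)
est = np.array([np.dot(resp[k:], noise[: len(noise) - k]) for k in range(60)])
est /= len(noise)

# 2) Convolve the estimated filter with a flash to predict that response
flash = np.zeros(400)
flash[100:300] = 1.0
pred = np.convolve(flash, est)[: len(flash)]
actual = respond(flash)
corr = np.corrcoef(pred, actual)[0, 1]
assert corr > 0.95   # linear cell: white-noise filter predicts flashes well
```

The talk's point is that for real TM neurons this correlation breaks down for high-contrast flashes, because the cells adapt: their effective filter depends on the stimulus statistics.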
But then we thought that flashes with contrast changes more similar to white noise, where contrast steps are very small from one presentation to the next, might be better predicted by these white noise filters. And that's exactly what we found. Jesse used low-contrast flashes, as opposed to the high-contrast ones, and you can see throughout that the responses are much better predicted by the white noise filters. Okay, so to summarize the data: we find signatures of frequency, gain and biphasic tuning across the inputs; specifically, we find frequency and gain tuning in all inputs, and the biphasic changes are specific to TM1, TM2, and TM4. I think it's important to point out that we're still making very discrete measurements of the temporal processing properties, but we think that these neurons are actually spanning a space of parameters, which we've tried to illustrate here. I won't go into the details because there's a lot of data here, but I just wanted to illustrate that if we look at the temporal processing properties of each of these cells in a space defined by biphasic tuning and speed tuning, these cells span this space. It's also very clear in this kind of depiction that TM9 changes much less than the other cells. In terms of stimulus dependence, we think that information content is the main driver of the moves that push the cells towards specific points in this space, something that was also pointed out by these two papers, which looked at these inputs themselves. The idea is that in low signal-to-noise regimes you want to integrate in time, corresponding to a monophasic filter, whereas in high-SNR regimes you might want to transmit information about change, to reduce correlations.
But these are things that will need more work to be established at the level of these cells. Okay, so we have our data; now, can we explain the output of T5 across stimulus and state using what we now know about the responses of its inputs? Before building our motion model, we wanted to get an intuition as to whether we could recapitulate T5 responses as a combination of our recorded inputs, starting with a simple linear sum. Eyal Gruntman in the Reiser lab performed some beautiful experiments, recording from T5 while presenting flashes at different locations in its receptive field. The responses are very large in the center and get smaller as you move away from it, as expected; but something he found that was not expected, and is very interesting, is that at the trailing edge of the receptive field the responses are biphasic: they depolarize and then they hyperpolarize. We wanted to see if we could explain these responses, specifically this asymmetric property, using our recordings. So Jacob performed a linear regression using our input measurements as variables, with the only constraint being that the weights should be positive, because we wanted to match the excitatory nature of the inputs. Okay, so this is the data from Eyal, in dashed lines, at six different positions. Jacob only used two inputs, TM1 and TM9, because TM1, 2 and 4 are so similar. This is the result of the linear regression using the white noise predictions: as you can see, we do okay, but we really don't manage to reproduce this asymmetric hyperpolarization here. We then tried the flash responses, which are closer in their statistics to the flash stimuli that Eyal used in his experiments. In this case we do better, but we still undershoot the hyperpolarization in the average T5 responses here. And then we thought we could use the flash responses recorded in octopamine.
And in this case, we do very, very well at reproducing these measurements. We were very excited about this, because our flash responses are more similar to the bars that Eyal was using, so it fit with our hypothesis. So next, we actually built our model. The idea is to predict the responses of T5 to various stimuli, and we had a number of very important constraints that come from everything I've told you earlier in this talk. These are the ones for the inputs, to start with: TM9 is spatially separated from TM1, TM2 and TM4; all inputs are excitatory; we use connectome weights to constrain our model; and we use our measured processing properties of the inputs, which vary with stimulus and state, so we try to match the inputs' filtering properties to the stimulus that's used to probe T5. Finally, we use a simple linear summation, informed by the success of our linear regression on the static flashes. The first thing we did was to simulate the responses of T5 to moving sine waves. First we used our white noise filters to describe the inputs, because we can actually show that those account very well for our measured responses of the four inputs to actual sine waves. This is the response of T5 to sine waves of increasing temporal frequency, moving in either the PD or the ND direction. The curves peak just below one hertz, as we expected, and PD is larger than ND, showing direction selectivity. Then we did the same thing using the filters that we extracted in octopamine. As expected again, the peaks shift to the right, towards faster speeds, and you still see the direction selectivity. And from this we plotted the direction selectivity index of our model across temporal frequencies and compared it to actual data from two studies that recorded voltage from T5, from the Clandinin lab and from the Reiser lab.
And our model recapitulates the data pretty well, both without octopamine and, even better, with octopamine, in terms of the magnitude of the DSI responses. Then we wanted to check whether the connectome constraint was important. So Jacob built models with random weights; you see the averages here, and you can see that the performance decreases quite a lot, showing the relevance of these weights in our model. I should specify that our model has very few parameters. We next used our model to predict the responses of T5 to high-contrast moving bars. We first used our white noise filters, with the intuition that this would not work well, and that is the case: if you look at the responses of our model to 80 millisecond or 160 millisecond moving bars, you see that the PD and ND responses are very similar. If we use the flash responses, we start seeing some direction selectivity, and this gets better when we use the flash responses recorded in octopamine. Here is a summary of this data compared to the data from Eyal's paper. As you can see, with the white noise filters we really don't do well, but when we start matching the properties that we've measured to the stimulus that was used to probe T5, we do much better. Okay, so now I can summarize this part of my talk. I think what I've shown is, first of all, that direct inhibition is not necessary to reproduce T5 responses across stimuli. We've also shown that a simple linear core computation is sufficient for direction selectivity at the level of T5, but that it's very important to account for the adaptive, nonlinear properties of the inputs. So a thorough description of the filtering properties of inputs to motion detectors, rather than a simplistic delay, is really important. And finally, knowledge of the synaptic connectivity in this circuit is also very insightful.
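As a toy version of the model class summarized above, one can filter two spatially offset inputs, sum them with positive weights (standing in for the connectome-derived weights), and compute a direction-selectivity index, DSI = (PD - ND)/(PD + ND). All filters, weights and the spatial offset below are invented for illustration, not the fitted model:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 4, dt)

def lowpass(tau):
    f = t * np.exp(-t / tau)
    return f / f.max()

# Two excitatory inputs: a slow monophasic one (TM9-like) and a faster,
# biphasic one (TM1-like). Toy kernels, not the recorded filters.
slow = lowpass(0.25)
fast = lowpass(0.08) - 0.6 * lowpass(0.2)

def t5_response(freq_hz, direction, w_slow=1.0, w_fast=1.0, offset=0.05):
    # Sine-wave luminance sampled at two points separated in space; the
    # second point sees the wave `offset` seconds earlier or later
    # depending on direction. T5 is a positively weighted linear sum.
    phase = 2 * np.pi * freq_hz * t
    shift = 2 * np.pi * freq_hz * offset * direction
    s_slow = np.convolve(np.sin(phase), slow)[: len(t)] * dt
    s_fast = np.convolve(np.sin(phase + shift), fast)[: len(t)] * dt
    return (w_slow * s_slow + w_fast * s_fast).max()

r1, r2 = t5_response(1.0, +1), t5_response(1.0, -1)
pd, nd = max(r1, r2), min(r1, r2)   # larger response defines the model's PD
dsi = (pd - nd) / (pd + nd)
assert 0 < dsi < 1
```

Even this purely linear sum of two excitatory inputs is direction selective, because the spatial offset changes how the two filtered sinusoids interfere; which direction ends up preferred depends on the filter phases, which is why the fitted model needs the measured, state-matched filters.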
Okay, so now I'm going to switch gears completely and tell you about the second story, which focuses on color vision: specifically, circuits downstream of the wavelength-specific R7 and R8 photoreceptors, instead of the broadband neurons I've been telling you about. Before I get into this, I want to acknowledge the two wonderful students who together led this project; again, this is a theme in my lab: Sarah and Gouki. Sarah did all the experiments, and Gouki is behind the analysis, the modeling and the stimulus design. Okay, so spectral information is very useful for understanding the world around us. One of its most obvious uses is to add information when achromatic contrast is just not enough. For instance, in this monochrome image it's very hard to see what's going on: you can see leaves and the trunks of these trees, but if I add the color information, the berries pop out, and it's really thanks to spectral information that we can get a lot more information about our natural surroundings. One of the most important concepts in color vision is the principle of univariance. If you consider the spectral sensitivity curve of a particular photoreceptor, which represents the probability of absorption of a photon as a function of wavelength, and you shine a purple light close to its peak or a brighter green light closer to its tail, you'll get the same response. So there's a confusion between wavelength and intensity when you're dealing with a single photoreceptor. Because of this, an observer with only one type of photoreceptor is colorblind: it will not experience color and will see the world in grayscale. The way visual systems can start dealing with this problem is to add another photoreceptor with different but overlapping sensitivities, such as this green photoreceptor here.
And these two photoreceptors will respond differently to these different combinations of wavelength and intensity, and using these signals the system can tease apart wavelength and intensity; but this can only happen if, somewhere downstream, the signals are actually compared within the visual circuits. This comparison is apparent in neurons called color-opponent neurons, which get antagonistic inputs from different photoreceptor types and are therefore activated by one range of wavelengths and inhibited by a different range. Color-opponent neurons with this type of tuning have been described across the animal kingdom, but much of what we know about color processing has come from work in trichromatic primates. There, cone photoreceptors come in three flavors: short, medium and long wavelength. Their signals are compared in the retina and the LGN along two main color-opponent axes: the L versus M axis, which is the red-green one, and the S versus L-plus-M axis, which compares blue and yellow. Deeper in the brain, various chromatic signals exist which start to encode specific aspects of the quality of light. Perhaps most notable are the hue-selective neurons found in IT cortex, which are very interesting because they respond to a very narrow range of wavelengths. And whereas the transformation from cone responses to color opponency is starting to be pretty well understood, involving horizontal-cell circuits that connect neighboring cones, the transformations that take color-opponent signals and make higher-order chromatic signals, such as these hue-selective cells, are not well understood. How any of these types of signals relate to actual color perception is also still an open question in the field. Of course, these are not easy questions to answer, and one of the main aims of my lab moving forward is to use Drosophila to understand fundamental principles of chromatic encoding.
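The principle of univariance, and the stimulus-design corollary it enables later in the talk (mimicking a target light by matching photon-capture vectors with a handful of broadband primaries), can both be demonstrated with toy sensitivities. None of the spectra below are real fly opsins; centers and widths are invented:

```python
import numpy as np

wl = np.linspace(300, 650, 351)          # wavelength axis (nm)

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Toy spectral sensitivities for five receptor types (NOT real opsins)
opsins = np.array([gauss(c, 40) for c in (335, 355, 375, 435, 515)])

def capture(spectrum):
    """Photon capture of each receptor: sensitivity x spectrum, summed."""
    return opsins @ spectrum

# Univariance: a dim light near receptor 4's peak and a brighter light in
# its tail give identical captures, hence indistinguishable responses.
dim_peak = gauss(435, 5)
bright_tail = gauss(500, 5)
bright_tail *= capture(dim_peak)[3] / capture(bright_tail)[3]
assert np.isclose(capture(dim_peak)[3], capture(bright_tail)[3])

# Corollary: a target light can be mimicked for ALL receptors at once by
# mixing a few broadband primaries so the capture vectors match.
leds = np.array([gauss(c, 15) for c in (340, 375, 420, 470, 525, 590)])
A = opsins @ leds.T                      # capture of each LED per receptor
target = capture(gauss(480, 5))          # 5-dim capture vector to mimic
w, *_ = np.linalg.lstsq(A, target, rcond=None)  # LED intensities
assert np.allclose(A @ w, target)
```

A real stimulator additionally needs non-negative intensities, which plain least squares does not enforce; this sketch only shows why matching the capture vector is sufficient.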
So, going back to my schematic: color vision of course starts with the photoreceptors R7 and R8. I've put the tunings here again, and I want to point your attention to the axons of R7 and R8 here in the medulla. R7 and R8 target two different layers in the medulla, but each pair of R7 and R8 coming from the same ommatidium in the eye occupies the same column in the medulla. And EM reconstructions have actually identified axonal synapses between R7 and R8 coming from the same ommatidium, so from the same point in space in the visual field of the animal. Something I haven't told you yet is that insect photoreceptors depolarize to light and are inhibitory, histaminergic, as opposed to those of the vertebrate retina. So these antagonistic interactions between R7 and R8 are of course well poised to support the emergence of color opponency. And indeed, the lab of Derek Reif found that these synapses are functional and showed very nicely that they lead to two types of color-opponent signals: one that compares UV and green, and another that compares UV and blue. However, these direct interactions are not the only ones that can support opponency in the medulla. The same EM study also identified a new type of neuron, DM9, which both gets inputs from R7 and R8 and sends outputs back to R7 and R8. DM9 is glutamatergic, so putatively excitatory, which I've depicted with these arrows here. It has a really interesting shape: it wraps the terminals of, on average, six R7 and R8 pairs very closely in the upper layers of the medulla, and I'm schematizing DM9 like this in the rest of my talk. So we hypothesized that in addition to this direct columnar pathway, there's also an indirect pathway, akin to horizontal-cell circuits in the vertebrate retina, that allows comparisons between ommatidia, so inter-columnar comparisons, and adds to the richness of the comparisons that fly photoreceptor axons can actually achieve.
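A cartoon of how reciprocal inhibition between two photoreceptor axons yields opponent tuning. The Gaussian sensitivities and the inhibition weight are made up; the point is only that mutual subtraction turns two purely excitatory tuning curves into sign-changing, opponent ones:

```python
import numpy as np

wl = np.linspace(300, 650, 351)                # wavelength (nm)
uv = np.exp(-0.5 * ((wl - 350) / 40) ** 2)     # toy R7 (UV) sensitivity
blue = np.exp(-0.5 * ((wl - 440) / 40) ** 2)   # toy R8 (blue) sensitivity

# Reciprocal histaminergic inhibition between the two axons: each axon's
# output is its own drive minus a fraction of its partner's drive.
w = 0.6
r7_axon = uv - w * blue
r8_axon = blue - w * uv

# Opponency: the R8 axonal tuning changes sign across wavelength
assert r8_axon[np.argmin(np.abs(wl - 350))] < 0   # inhibited in the UV
assert r8_axon[np.argmin(np.abs(wl - 440))] > 0   # excited in the blue
```

An inter-ommatidial pathway such as DM9 would add further subtractive terms drawn from neighboring columns, enriching the comparisons beyond this pairwise case.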
So here's a bit of a recap of everything I've told you: at the level of the rhabdomeres we know the spectral sensitivities; we know there's a circuit that can account for opponency; and there's this neuron DM9 whose role we want to understand. I'll use this little schematic to frame the rest of my questions. The first thing we wanted to do was to measure the spectral tuning curves of these axons, which hadn't been done. Our prediction was that we would find a color-opponent axis in addition to the UV-versus-visible ones that had already been described. Then we wanted to relate the circuit connectivity to the function of these axons, and ask whether we can mathematically define the transformation that happens between the rhabdomeric and axonal responses and build a model that explains this transformation while taking the circuit architecture into account. And finally, we wanted to ask: what is the function of all of this? So I'm going to step through these questions one by one. First, let's build the tuning curves. To do this, Gouki set up the following experimental system. Here are the spectral sensitivities again; we wanted to be able to excite these as independently as possible. So we chose six LEDs spanning from deep UV to orange, and we built a color mixer, which allows us to present full-field flashes of light from these LEDs, either alone or in combination, to a fly mounted under the objective of a two-photon microscope. Our goal is to measure the spectral tuning curves of R7 and R8 outputs over the fly's visible spectrum. Ideally, we would like to show many different light sources with narrow tuning, but we only have these broader LED spectra. So how do we do this?
The idea is to make use of the principle of univariance, which states that once a photon is absorbed, its identity is lost, to mimic the effect of a target light source on the fly's array of spectral sensitivities. What we do first is to consider one of these wavelengths and calculate the photon capture that this light source would elicit at the level of each opsin, which basically gives us a five-dimensional vector in photon-capture space. Then, to figure out which combination of LEDs will achieve the same vector, we just use a simple linear regression to get the weights, that is, the intensities of each of the LEDs we need to use to get as close as we can to this point in the space. This is really basically the same idea behind the screens you're looking at, which use three LEDs to give you the perception of millions of hues; but with our method we can do this for any animal for which the spectral sensitivities of the opsins are known. And Gouki has since greatly developed this method, so I'm plugging it in here: we have a preprint where he explains these algorithms and extends them to allow users to deal with uncertainties in spectral sensitivities, to reconstruct natural images, and so on. He's also worked to make a web app to make this very easy to use. So I hope that those of you who are interested in color vision will check it out and maybe even give Gouki some feedback; I'm sure he'll be thrilled to hear from you. So now that we have a method, here are the measurements that we did at the level of the axons of the R7 and R8 photoreceptors. As you can see, they're all opponent. As expected, R7s compare UV and visible, with slightly different tuning curves. Yellow R8s also compare UV and visible, with a flipped tuning curve. And pale R8 was very interesting to us, because it was the only one that showed this trilobed shape.
It's activated by blue, of course, but it's inhibited by UV and also by green. We also wanted to measure the photoreceptor responses, to eventually be able to relate the two. We can't record directly from the photoreceptors themselves with our methods, but we use a genetic trick, which I won't go into for the sake of time, that allows us to measure cell-autonomous responses, which should correspond to putative photoreceptor responses. So here they are: as expected, they're only excitatory. We wanted to be sure that these curves directly relate to the spectral sensitivities of the opsins that each of these photoreceptors expresses, and this relationship has been shown by others to correspond to a simple log transformation of the photon capture. If you plot that, they really match perfectly well, which is a nice control of our methods as well. Okay, so now we have the photoreceptor responses and the axonal responses, which are color opponent. And I can point out now that there's at least one opponent axis which cannot rely only on these direct intra-ommatidial connections: this blue-green comparison in pale R8s, which needs to rely on inter-ommatidial interactions within this circuit. So we wanted to test this hypothesis and also look at the role of Dm9. We did many genetic manipulations; I'll show you just a few, very targeted ones, focused on pale R8, which is very convenient because its trilobed shape helps us interpret our results. So this is the control, which I'll show in black from now on. We made flies where only pale R7s and pale R8s are active, as well as Dm9, of course. In this case, we keep the inhibition in the UV but lose it in the green, which fits with our model. The opposite is true when only R8s are active, which also fits, since here we see that we keep the green opponency but completely lose the UV opponency. I should also mention that we've looked at the contribution of R1-6, and we don't see any contribution to the responses of pale R8s here.
So in pale R8 we see a signature of both intra- and inter-ommatidial interactions. We then did a series of experiments to confirm the role of Dm9, because so far nothing tells us that Dm9 is the one doing this. Here's a control. The first thing we did was simply silence Dm9, and as expected, we completely lose the opponency in the green and maintain a little bit of opponency in the UV. This is because the direct R7-R8 intra-ommatidial interactions are still present in this case. Then we did another manipulation: we removed all histaminergic transmission, which abolishes both intra- and inter-ommatidial interactions, and then restored the neurotransmitter receptor only in Dm9, restoring only the inter-ommatidial interactions while leaving the reciprocal intra-ommatidial inhibition impaired. In this case, we still have some opponency in the green, but also in the UV; you just have to compare the dashed lines and the black line to the actual responses. So this shows that Dm9 is sufficient to drive this inter-ommatidial opponency. Okay, so far I've shown you that we're dealing with a dual circuit composed of intra- and inter-ommatidial interactions, and now we want to build a model. Before getting into the circuit structure, just as we did with the motion model, we wanted to ask if we could express the axonal responses as a simple linear sum of the photoreceptor responses. With an unconstrained linear regression, using the photoreceptor log(q) responses as independent variables, Gouki showed that a linear transformation can account for this neural transformation. So then the important question we wanted to ask is whether we can constrain our model with the architecture of the circuit, which I've schematized here again.
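The unconstrained-regression step can be illustrated on synthetic data. The numbers below are invented (hypothetical photoreceptor responses equal to log photon captures, and a made-up opponent mixing matrix `W_true`); the sketch only shows the mechanics: for noiseless linear data, ordinary least squares recovers a linear map that fully explains the axonal responses.

```python
# Sketch of the unconstrained linear-regression question: can axonal
# responses be written as a weighted sum of photoreceptor log(q) responses?
# Synthetic, noiseless data with a made-up mixing matrix W_true.
import numpy as np

rng = np.random.default_rng(1)
n_stim = 30                                    # number of test wavelengths

# Hypothetical photon captures for 5 input channels, then log-transform.
X = np.log(rng.uniform(0.1, 10.0, size=(n_stim, 5)))

# Made-up "true" opponent weights: each of 4 axonal outputs is excited by
# one channel and mildly inhibited by the others.
W_true = np.eye(4, 5) - 0.2 * np.ones((4, 5))
Y = X @ W_true.T                               # fake axonal responses

# Ordinary least squares recovers a linear map explaining the outputs.
W_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)
r2 = 1.0 - np.sum((X @ W_fit - Y) ** 2) / np.sum((Y - Y.mean(0)) ** 2)
print(round(r2, 4))                            # → 1.0
```

With real, noisy measurements the fit would of course be imperfect, and the interesting question becomes whether the fitted weights resemble the actual circuit, which is where the connectivity constraints come in.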
So Gouki built a linear recurrent network that is constrained by the circuit connectivity and the signs of the connections, and he fitted it to the steady state of our responses. We call this the fully parameterized model, and you can see it works well. But then we also wanted to constrain the model further by using synaptic counts as a proxy for synaptic weights, the same way we did in the motion model. So Gouki simply plugged in the synaptic counts instead of the fitted weights, and as you can see, this works very well. This was a very nice result, but it could be that the model is very robust and that any weights would work. To control for this, he replaced the weights in our model with randomly drawn weights, as we had also done in the motion model, and we can see that our model with the synaptic weights performs much better than with random weights. So we figured out how this circuit works, and now we want to understand why it works the way that it does; I think it's really important to ask this kind of question. The first thing we did to answer this was to compute the correlation coefficients between the different photoreceptors at the level of the photoreceptor responses. You can see this matrix is very red, because there's a lot of correlation between the photoreceptors, which is simply due to the high degree of overlap between the sensitivities of the opsins they express. But now we can do the same for the axonal responses and show that there's much less correlation between them, which is a very useful thing in terms of signal processing. So clearly these opponent circuits have the benefit of decorrelating photoreceptor responses, but now the question is: why do they do it the way that they do? Why do they use opponent responses that vary along two main axes, one comparing UV and visible and the other comparing blue against the UV and green parts of the spectrum?
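A connectivity-constrained linear recurrent model of this kind can be sketched as follows. The cell list, signed "synaptic counts" and gain below are invented placeholders, not real connectome numbers; the sketch only shows the mechanics: weights are signed counts times a single fitted scale factor, and the steady state of dr/dt = -r + W r + s is r* = (I - W)^(-1) s.

```python
# Sketch of a connectome-constrained linear recurrent model.  The counts,
# signs and scale below are invented placeholders, not real connectome data.
import numpy as np

# Signed "synaptic counts" (rows receive, columns send) for a hypothetical
# 5-cell circuit: pale R7, yellow R7, pale R8, yellow R8, Dm9.
counts = np.array([
    [ 0.,  0.,  5.,  0., -8.],   # pale R7: input from pale R8, inhibition from Dm9
    [ 0.,  0.,  0.,  6., -7.],   # yellow R7
    [ 4.,  0.,  0.,  0., -9.],   # pale R8
    [ 0.,  5.,  0.,  0., -6.],   # yellow R8
    [10., 11., 12.,  9.,  0.],   # Dm9 pools all R7/R8 inputs
])

scale = 0.02                      # single fitted gain turning counts into weights
W = scale * counts

# Feedforward drive, e.g. log photon captures for one stimulus.
s = np.array([1.0, 0.2, 0.1, 0.3, 0.0])

# Steady state of dr/dt = -r + W r + s:  r* = (I - W)^{-1} s.
r_star = np.linalg.solve(np.eye(5) - W, s)
print(np.round(r_star, 3))
```

The robustness control described above then amounts to swapping `W` for randomly drawn weights of comparable magnitude and comparing how well the resulting steady states fit the measured responses.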
To answer this question, we were inspired by a very influential paper by Buchsbaum and Gottschalk, who showed that the axes of opponency measured in the human retina, blue versus yellow and green versus red, correspond to the principal components obtained by PCA on the spectral sensitivities of human cones. This simply means that recoding the chromatic signal from three cone channels into two opponent channels is optimal, in the sense that it reduces redundancy and the dimensionality of the encoding while still capturing a maximal amount of information. So we did basically the same thing and performed PCA on the spectral sensitivities of the opsins expressed in R7 and R8. The first principal component is achromatic, of course, and accounts for about half of the variance. The second opposes the R7s to the R8s, corresponding to a comparison between the UV and visible parts of the spectrum, and the third compares Rh5 to the UV and green parts of the spectrum. These first two chromatic PCs, together with the achromatic PC, explain 97% of the variance. The last one compares Rh3 and Rh5 to Rh4 and Rh6, but has a very tiny explained variance. And so it is very interesting, of course, that the first two chromatic PCs broadly describe the two types of responses we measure at the output of R7 and R8, and I've shown them here overlaid. So again, just as in the human retina, by aligning to these two axes, opponent mechanisms can efficiently decorrelate chromatic signals and reduce the encoding space, while still allowing for a nearly full reconstruction of chromatic information. A small caveat to this analysis is that it assumes a flat spectrum and does not take into account the reflectances of naturalistic scenes. So what Gouki did was perform the same analysis using a set of natural stimulus spectra, flower reflectances, which are very easily available.
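The PCA logic can be sketched with stand-in Gaussian sensitivity curves (fake peaks loosely placed at two UV, one blue and one green position; not the real Rh3-Rh6 data). Because the channel-overlap matrix of such curves is all-positive, the leading principal component always has same-sign weights, which is the achromatic axis, while the later components oppose channels.

```python
# Sketch of Buchsbaum & Gottschalk-style PCA on spectral sensitivities.
# The four Gaussian curves are stand-ins, not real Rh3/Rh4/Rh5/Rh6 data.
import numpy as np

wavelengths = np.arange(300, 621, 1.0)

def gauss(peak, width=25.0):
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

# Fake peaks loosely mimicking two UV, one blue and one green opsin.
S = np.stack([gauss(330), gauss(355), gauss(440), gauss(510)])

# Channel-overlap (Gram) matrix; its eigenvectors are the PCA axes.
C = S @ S.T
eigvals, eigvecs = np.linalg.eigh(C)           # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

pc1 = eigvecs[:, 0]
explained = eigvals / eigvals.sum()

# PC1 of an all-positive overlap matrix has same-sign weights: achromatic.
print(np.all(pc1 > 0) or np.all(pc1 < 0))      # → True
print(np.round(np.cumsum(explained), 2))
```

With the real opsin sensitivities in place of the Gaussians, the second and third components would be the opponent axes discussed above, and their cumulative explained variance is what the 97% figure refers to.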
In this case, PCA on all four channels gives basically the same principal components. And when he projects the hyperspectral reflectances onto PC1 and PC2, they are very nicely spread out, as you can see. He did the same thing on the pale and yellow inputs separately. A decomposition that segregates the pale and yellow pathways gives two chromatic components, which we call CP and CY, that are still somewhat correlated, and you can see this when we look at the actual reflectances. This shows that the full circuit, combining both intra- and inter-ommatidial interactions, implements a more complete opponent mechanism. So to summarize the second part: we have identified a dual circuit for color opponency at the level of Drosophila photoreceptor outputs, an intra-ommatidial, pathway-specific circuit mediated by direct synapses, and an inter-ommatidial, evolutionarily convergent circuit mediated by Dm9. And we show that these two opponent axes allow for an efficient and comprehensive representation of chromatic information. Just as with the motion circuit, the constraints provided by the circuits of the brain really allow us to understand and describe these behaviorally relevant computations. Before my last acknowledgement slide, I have one little plug: we're recruiting, so if anybody is interested in doing a postdoc in my lab, or in an opening we have for a research technician, please email me. And if you want to know more about what we're up to these days, we have three posters at Cosyne, which is coming up very soon. So please go talk to Gouki, Shani and Sharon, who will be presenting work on color-space geometry and stimulus design, normative models that try to explain the proportions of photoreceptor types in the fly eye, and spectral information for navigation.
With this, I would like to thank everybody in my lab as well as our collaborators, with a very special thank-you to Larry Abbott, with whom we collaborate very closely on the more theoretical aspects, and our funders as well. And I'd like to thank you for your attention. Thank you very much, Rudy, for this wonderful presentation of two distinct, but not really distinct, stories of motion and color. People probably think that I'm doing a bad job of asking questions, so they are keeping their questions to ask them live themselves. I will be posting the link to the Zoom room we are currently sitting in so people can start joining us, and then I will start with my questions. Since you mentioned the color story second, I will start with that. Do we know anything about the population or the dendritic spread of the Dm9 neurons? Yes, from work that was done at Janelia by Aljoscha Nern, we have a very good idea of their span. On average, each Dm9 spans about six or seven columns, which would basically correspond to one central ommatidium and a ring around it. It's not quite clear that all Dm9s share this structure; some of them are more elongated than others, so we're looking into the diversity of these Dm9s. And of course that will tell you something about integration, and maybe about center-surround receptive fields and things like that, which we've started to model as well, in terms of opponency in space, which I have not touched on at all here and which is part of ongoing experiments in my lab. Yes, and it makes sense to follow that direction. The first question comes from Tom: first he says, very nice talk. You mentioned the hue cells that might be expected further into the brain, but do you happen to see them? We are working on this, and we have some really interesting data which hopefully we'll be able to share very soon. Sorry for the confusion.
I think so; we do see more specific signals later on. We are very much interested, of course, in the whole pathway downstream of these cells, as are other people in our field, and I think some very interesting data will come out soon. Great, thank you very much for that clarification. People are starting to join the room already, and at this point, before I continue with the questions, I would like to let you know that there are a lot of people thanking you for your talk. Simon Laughlin is already here with us. I would like to go to the motion story: because you tried to make models that are biophysically and biologically constrained, I was wondering whether we know anything about the integrative properties of the T9 neurons, like whether they can somehow modulate the input they get postsynaptically. Tm9 neurons, you mean? Sorry, not Tm9, the T5 neurons or the T4 neurons. A lot of work has been done downstream, in the lobula plate, on the integration of these T5 and T4 signals into higher-order motion signals, such as optic-flow signals or even looming signals. I think focusing on the core computation was important because historically there's been a lot of debate about what's going on, and that was really my interest there. And I do think that, even though we haven't answered everything, of course, looking more closely at the adaptive properties of these neurons is really giving us a clue about what's going on at the level of T5 itself. Right, but when it comes to single-cell T5 processing of the inputs, we don't know much, because it's not included in the model, right? Like I saw you mentioned... No, there's spatial information included. Right, the synaptic weights, right? No, but the receptive fields of the inputs are also included. Right, okay, thank you very much for that. I'm sorry, it seems I did not understand your question.
Another question I have: you mentioned that octopamine is involved in perceiving optic flow. Would we also expect circadian changes in the level of octopamine expressed? It's very interesting that you mention circadian effects. We haven't looked at circadian involvement in the motion pathway. It's true that at different times of day you might have different overall intensities, but I think that is taken care of by contrast-gain adaptation or other things that are happening at the level of the photoreceptors themselves. I think the circadian part is more interesting for the color pathway, and we're definitely looking into different neuromodulators, like serotonin and dopamine, that might be involved in dealing with circadian changes and affecting color circuits. So we've been looking more at the circadian effect on color processing, and I think we've seen some interesting data: you would expect maybe some shifts in the tuning, and we're seeing things like that. Before I continue with another question that Tom posted, which takes us back to the color story, I would like to remind our audience that they should join us in the Zoom room if they want to keep track of what we will be discussing shortly; I will be stopping the live broadcast, and the next part, the informal chat, will not be recorded. So before I leave the stage to Simon or Tom or whoever wants to join and ask questions directly, one last question from Tom: do you have an overall spectral tuning function for the brain? Is it green versus UV, and which of the two activates more stuff? An overall tuning of the... sorry, I don't quite understand. Yes, maybe Tom wants to clarify, because he's here already: do you have an overall spectral tuning function for the brain? What do you mean, Tom?
So imagine you've got GCaMP in all the neurons of the brain and you flash lights of different colors, different wavelengths. Some wavelengths are going to be much better at activating things than others, right? For example, if you do this in fish, UV and red work really well and the in-betweens don't work so well. Do you have that level of data at all? We don't have that level of data yet. Taking a step back: at the level of the brain itself, I don't know, because what happens downstream of these opponent circuits is that the information gets split into different pathways and used very differently by each of them. One of the things that we're studying, for example, is UV-versus-green opponency in navigation. So I think different parts of the spectrum, or different types of comparisons, are used for different behaviors, and I don't know that I can talk about a brain-wide sensitivity. Do you know what I mean? Some behaviors are going to care more about the short end of the spectrum, some more about the long end. So we're really looking at this at the level of the outputs, where all this information is spread among different pathways. Even circadian entrainment, right? You can entrain the circadian clock by knowing what type of light you have during the day; that's another pathway, and in this case you want to compare UV and green. So I think the UV-green comparison is a very important one. Sorry, the UV-visible one is a very important one. Okay, thanks. As there are no more questions appearing in the chat, I have one last one myself before I stop the broadcast, again going back to the motion story. I was wondering whether we know... like, you showed that when it comes to the subspace of parameters, Tm9 stays much more local compared to Tm1, Tm2 and Tm4, right? Do we know more, molecularly, about why that might be the case? Actually, the story is a little bit more complicated.
So you're talking about Tm9 looking at one point in space, and Tm1, Tm2 and Tm4 also look at one point in space. I don't know if that's what you're getting at, but there's some really nice data from Marion Silies showing that some Tm9s actually look at wider parts of the visual field. We see this also in our data; we've separated those cells out, and we don't really know what they could be doing in local motion detection. So there are more complications; I was trying to streamline the story as much as I could for a broader audience, but there's a lot more complexity there. Right, I was going back to the diagrams that you had, with a lot of information, where for Tm9 it looked like it inhabits a smaller space with respect to the stimulus parameters than the others... Oh, sorry, that, yes. We don't know why that is the case. Maybe you need one of these inputs to be much more stable, I guess, to really allow the rest to shift so that the circuit can deal with this. We don't quite know, and we don't know what the cellular and molecular foundations of all these adaptive changes are. There's some work from Tom Clandinin's lab suggesting that a circuit might be involved starting at the inputs of these Tm1, Tm2 and Tm9 themselves; L2 shows this biphasic shift. So there's something going on that we don't quite understand, and it will be really interesting to see what the cellular and molecular basis for these adaptive changes is. Right, right. Thank you very much, Rudy. At this point, I think I will stop the live broadcast. Thank you very much to everyone who joined this talk, and again, thank you very much, Rudy, for presenting these really nice stories. Thank you. So I'm stopping the stream, and I'm officially waiting.