[Greek greeting, partly inaudible] Welcome, welcome, and thank you for joining another session of the seminar series. I'm George Kafetzis [partly inaudible]. As your host for today, I would like to start by thanking Vangel and Panos Vozelos for this very initiative towards a greener and much more accessible seminar world. Having said that, allow me of course to get back to the reason we all gathered here today and introduce our guest from Janelia, Dr. Michael Reiser. With undergraduate and master's studies in computer and electrical engineering from the University of Florida, he went on to Caltech and the lab of Michael Dickinson for his PhD in Computation and Neural Systems. During that time, his research interests centered on modular electronics for behavioral neuroscience and sensory-guided control of insect flight, while keeping an eye open for vision-based or vision-inspired robotics. In 2007, he joined Janelia as a fellow and has remained there ever since, nowadays as a senior group leader. Throughout the years his research interests have broadened but haven't changed: contributing to the development of tools and computational methods, and employing a number of techniques ranging from behavioral to anatomical, functional, and computational. In his lab, using Drosophila as a model system, they seek to understand visual circuits, how they encode information, and ultimately how they control specific behavioral programs. Therefore, today we have the pleasure of hearing about the latest and, I'm sure, exciting findings in his talk entitled "Vision Outside the Visual System in Drosophila." So without any further ado from my side, please all welcome Dr. Reiser. Michael, the stage is officially all yours. Great, George, thank you so much.
Can you hear me? I just want to make sure, and I'm going to share my screen to confirm that you see the presentation full screen. Give me a second, it's still loading. Yeah, good to go. Okay, perfect. All right. Well, thank you very much, George and Tom and everyone, for the introduction and the invitation and the opportunity to present. Thank you everyone for joining. Those of you who are joining live, I hope to get some interesting questions and look forward to a lively discussion. Of course, the interesting thing about these online seminars is that they can persist indefinitely, so if you're seeing this later and you have questions and want to get in touch, by all means, you can find me on Twitter or just Google me and send me an email. I'd love to talk about anything that comes up. So what I'll talk about today is what we've learned by thinking about vision outside of the visual system. We weren't particularly setting out to learn things about brain organization, but it turns out that, across a couple of studies, this is where we've ended up, and I think it's a pretty interesting and exciting new area to discuss, so I'm looking forward to presenting it. Just by way of introduction, to let you know a little bit more about me and my interests and what we study in the lab: I came originally as an engineer into Michael Dickinson's lab and was interested in behavior, really thinking about it in a kind of reverse-engineering way. Suppose you were to design agents that could do complicated acts of navigation by using sensory information: what is the algorithmic basis for things like that, and how do animals exemplify it and inspire us? That's carried over into the lab, where we've focused on vision and visually guided navigation. And for most of the last decade, the main project we've been working on in the lab is trying to understand motion vision.
We've approached it at the behavioral level. This is the famous optomotor response, where you can move visual patterns in one direction or the other and the animals turn and rapidly follow. We've used that relatively straightforward behavior, which is incredibly robust to measure in flies, as a way in to think about circuitry. We've spent a lot of time in the lab working at the behavioral level as well as at the cellular and circuit level, and most recently we've had a series of papers from the lab patching the direction-selective neurons. So we've made quite a lot of progress working through the visual pathways, to the point where we have a fairly good understanding of the algorithmic basis of how direction selectivity is generated in the fly, and we're working towards an organismal, behavioral understanding; there's still plenty of work to be done there. I've talked about this a lot, and fortunately I don't have to talk about it today, because if you're interested there is a World Wide Neuro talk you can find from almost exactly a year ago; it's relatively easy to find. So it's exciting that I can present new work. And again, just to let you know, as George said, we've expanded our portfolio, let's say, and worked on a range of topics in the lab. The motion vision project has been a combined top-down and bottom-up approach, where we're working both at the cellular level and the behavioral level. Similarly, for the last few years we've gotten into color vision, again with behavioral projects, circuit mapping, and some physiology; we have a recent connectomics paper up on bioRxiv if you're interested to see a little of what we're up to. We've also been working quite a bit on how animals can use visual patterns to learn where they are and where to go, and that's very much been a top-down approach; there's still a lot of work to be done getting to circuits there.
And then one of my favorite topics, which we have a little bit of work on, is behavioral state modulation: the idea that the activity in visual circuits actually depends on what the animal is doing. This is a project that absolutely has to be worked on at both levels, behavioral and circuit. What I'll be talking about today is what you might think of as a bottom-up project, where we've been looking at the circuitry and the cell types that detect particular visual features; we'll get into that. But the additional perspective that's going to run through the talk today is that these anatomical methods, in particular the connectomics that I'll get to, but also the earlier methods of just cataloging cell types and generating specific driver lines, have provided what I think of as a kind of middle-out approach that tends to organize the entire set of experimental projects across the whole range of topics we're talking about here. There are unique challenges to working both at the behavioral level and at the cell and circuit level, and I don't think connectomics makes them go away, but it provides a really beautiful context for everything else we're doing, and I'll try to give you some examples of that. So today we'll be talking about visual feature detection, and at the very end I'll quickly talk about some instrumentation projects that we're doing, just for something refreshing and fun; I always like to talk about those. Okay, great. All right, so I know this is a vision seminar, and many of you are already convinced, so I don't have to convince you. And I like to tell my friends who work on other sensory systems that it's not a competition, but vision is special. The reason it's special among the senses is that it's spatial.
Here's a lovely picture from the Potomac River, a few miles from my house, where you can go for a very nice hike along the riverbank, and it's quite rocky. With just this snapshot, and of course the much more exciting experience of being there in person, you can readily imagine how you would navigate this somewhat challenging terrain. You would figure out where to put your feet next; you could go down to the water; you could plan a path. That is of course because your visual system allows you to interpret scenes and understand what is in the scene and where things are: what is the spatial relationship between different features? Where is the water, where are the rocks, where should you put your feet, and so on. And this facility comes at a massive cost. Maybe the easiest way to see that is just to look at the footprint of the visual areas. For example, in the macaque cortex, the visual areas are at least 50% of cortex, according to this estimate from David [inaudible]. And in the fly brain, if we just take a cross section through the brain and look at the volume, we see that the brain is conveniently segmented into two regions: the central brain here in the middle, and the optic lobes off to the sides. Just by volume, it turns out the optic lobes, where vision is primarily processed, are about 50% of the fly brain, and thanks to a very recent estimate of the neuron count, we also know that in terms of numbers of neurons, the optic lobes hold about half of the neurons in the brain. One of the reasons these visual areas are so large is that they need to maintain the spatial relationships between things. In the fly brain that's particularly easy to see, because it manifests as the property of retinotopy if you look in the early visual areas.
I've illustrated this here with a single pink column representing the circuitry in the lamina, the first neuropil, underlying the processing for one spatial location, and that organization is propagated through the deeper layers. Again, these pink tubes illustrate the columnar circuits that all map to a single location in space, and because this organization is inherited from the retina, if you were to move over to an adjacent location within any of these neuropils, you would be processing neighboring visual information. That requirement to maintain spatial relationships actually requires a huge amount of circuitry, so that you can process different features in different areas while keeping track of these spatial relationships. This retinotopic organization has been incredibly convenient, and is really the key to understanding things like the computation of direction selectivity that I mentioned in the introduction. But the more we think about it, and the deeper we go into the fly brain trying to understand where vision goes, the more it makes you scratch your head a little, because so much of how we think about vision depends on being able to map a receptive field and doing structure-function correlates that rely on retinotopy. So it's a curious question: what do we make of visual spatial information outside of these strictly visual areas, where the organization doesn't simply tell us what might be going on? That's what I'll be touching on today. The way this project started is with a collection of neurons, the lobula columnar (LC) neurons. I'll show you plenty of examples, but the starting point was the generation of driver lines to be able to target all of these neurons one cell type at a time.
This was a project initiated together with Gerry Rubin's lab and Gwyneth Card's lab, a beautiful collaboration which was detailed in a fairly long paper five years ago. So I'm going to give you an appreciation of the different visual projection neurons that project into the central brain, and to start off we'll focus on a single cell type, LC6. Here are just two examples of LC6 neurons, labeled with different colors, from within a population of about 70 neurons. They project from the optic lobe into the central brain, and they receive their inputs through dendrites that each span a region of roughly 4% of the field of view of the entire eye. The lobula is a retinotopic structure, so it's relatively easy to do this back-of-the-envelope calculation of how much visual space these neurons cover. They project into the central brain into structures that we call optic glomeruli, and in this case the glomerulus adorably looks like a bit of a chili pepper. But what you'll notice is that these neurons already manifest a pretty interesting transformation, from what looks to be strictly visual and quite retinotopic to something organized in quite a different way, where the outputs of different neurons are spatially intermingled. So this is already the first chance to start to think about this transformation from strictly visual to something else. When these structures were first described, say 30 or 40 years ago, one of the original proposals was that perhaps this represents an abstraction of visual information: these neurons might be encoding something like "what" has been imaged while discarding the spatial information. That was one idea, and that's a question we'll be tackling today.
So, to remind you, I showed you the driver lines separately, but of course these neurons all project into the central brain, and they have a pretty interesting organization there. We'll be talking about that organization in two parts. In part one we'll be investigating within an optic glomerulus, and the question we're going to grapple with, which you can keep in the back of your mind, is: why even use this glomerular organization, and what might it be good for? This is a story that was led by a former graduate student in the lab, Mai Morimoto, and a current postdoc in the lab, Arthur Zhao. Then I'll talk about the organization across optic glomeruli, where the question we end up answering (again, not the one we set out to answer, but it turns out to be what we were working on) is why the glomeruli are where they are and why they are organized this way. That was a project led by [name unclear], a joint graduate student between my lab and Gwyneth Card's. The entire project is really supported by all of these amazing driver lines that I briefly showed, and by support from Aljoscha Nern and Gerry Rubin's lab, who helped us generate tools to assay all of these neurons. So the project begins, like many things do at Janelia, with making excellent, clean driver lines; here's the driver line for the LC6 neurons. It turns out, from an initial behavioral observation, that if you express an optogenetic depolarizer in these neurons and drive it with red light, the flies take off, and they take off with very high probability and very short latency. The takeoff behavior looks essentially identical to what Gwyneth Card's lab has measured for years in flies responding to visual looming stimuli. So the takeoff looks completely naturalistic and very much like something evoked by vision.
And so it was quite exciting and reassuring when Mai Morimoto imaged the visual response properties of these neurons, expressing GCaMP, doing two-photon microscopy, and delivering controlled visual stimuli, to see that the neurons are quite responsive to looming stimuli, and that they respond with quite a bit of selectivity. Just in this simple cut of the data, if you show looming stimuli versus a receding loom (the exact same stimulus played in reverse order), you find that the neurons strongly prefer the outward-expanding looming stimulus. Okay, so we have neurons that apparently drive takeoff behavior, directly or indirectly, to be determined, and that encode a looming stimulus. Then we come back to this question of spatial detection of a looming stimulus: I showed you that anatomically the output of these neurons becomes quite mixed. So how does that actually appear, and what kind of information do downstream neurons have access to? One of the simplest experiments you can do is just to show looming stimuli at different locations on the retina. Mai did that, and here is an example from a single experiment where looming stimuli were shown at different locations, roughly along the equator, some above, some below. What you find, entirely consistent with the anatomy I introduced earlier, is that the outputs of these neurons are intermingled spatially. Looming stimuli at different locations tend to excite something like two to four LC6 neurons, and if you image in the glomerulus at the axon terminals, you find that within very small regions of space, within one or two microns, or let's say four microns, you essentially have all locations of visual space represented in a very tight volume.
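As a quick sanity check on those numbers: with roughly 70 LC6 neurons, each covering about 4% of the field of view, uniform coverage would predict around three neurons responding to a loom at any one spot, which matches the observed two to four. A minimal sketch (the uniform, independent coverage assumption is mine, not the talk's):

```python
# Back-of-the-envelope check: if each of ~70 LC6 neurons covers ~4% of the
# eye's field of view with roughly uniform, independent placement, how many
# neurons should a small looming stimulus at a random location excite?

def expected_coverage(n_neurons: int, fraction_per_neuron: float) -> float:
    """Expected number of neurons whose dendritic field covers a random point."""
    return n_neurons * fraction_per_neuron

mean_hits = expected_coverage(70, 0.04)
print(mean_hits)  # 2.8, consistent with the observed 2-4 LC6 neurons per loom
```

The agreement is only approximate, of course, since real dendritic fields overlap unevenly, but it says the anatomy and the imaging are at least mutually consistent.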
So again, it's entirely consistent with the anatomy as I introduced it, and it does suggest a quite challenging readout problem if the goal is to extract spatial information. In the context of looming, do flies even need to keep track of spatial information? Indeed they do, as shown in beautiful experiments like the ones Gwyneth Card did. She showed that as a fly is approached by a looming visual object from different directions, it essentially always jumps away from it. Just keep your eye on these green examples here: when looms come from this direction, the flies jump almost 180 degrees away. So, at least to something like 10, 20, or 30 degree resolution, the flies need to know where a looming object is coming from so they can get away from it. The question, then, is: do these LC6 neurons, or any loom-sensitive neurons which project into a glomerulus (we happen to work on LC6), actually signal the location of a looming object to the takeoff circuitry, or is this done in maybe a slightly more complicated way? The approach we took might at first seem slightly unusual. The good news is, I'm going to give you just a very high-level overview, and the paper is published, so you're welcome to read it; it's quite long, and there are many details I'm omitting. Since we're asking a question about readout, the idea was: let's just go for the readout neurons. So we asked Aljoscha to identify cells that appear to be integrating from the LC6 neurons in the glomerulus, and to provide tools for us to assay what the readout mechanism looks like. Aljoscha was able to find five cell types which he thought were very likely to be downstream of the LC6 neurons. To make a very long story short, we subsequently now have connectomic data, so we know that they are directly downstream of LC6 neurons; at the time, we didn't.
We're going to focus on two cell types that exemplify what's going on here: the first one, which we called LC6G1 (for glomerulus neuron 1), and an ipsilateral projection neuron, which we call the G2 neuron. Then, working with Allan Wong at Janelia, we did a pretty simple experiment; we wanted to confirm that the neurons are connected. These are ex vivo experiments where you dissect the fly brain out, put an optogenetic depolarizer in the presynaptic LC6 neurons, and express GCaMP in these putative postsynaptic integrating glomerulus neurons. And indeed, for all the cell types that Aljoscha thought might be connected, we find things like this: a short-timescale, rapid postsynaptic depolarization. We call this approach functional connectivity. So it appeared as if the neurons really do integrate LC6 inputs in the glomerulus. Then Mai went ahead and did a whole series of calcium imaging experiments on the different targeted neurons while showing visual stimuli, and indeed what we find is that these targeted neurons integrate looming signals within the glomerulus. To make a very long story short, we did a whole panel of visual stimuli, and we find that between the two cell types I'm telling you about there are quite some interesting differences in how they integrate LC6 information. The bilateral neurons turn out to be inhibitory interneurons, and they integrate in an essentially one-for-one fashion with LC6: any stimulus to which LC6 has a response, you essentially find that response strength, proportionally, in these bilateral output neurons. Whereas the ipsilateral neurons are actually excitatory projection neurons. They look to mostly overlap with the glomerulus, but they do have an axon that sits just behind it. And they integrate looming stimuli with an enhanced selectivity relative to LC6 itself.
So LC6 neurons are looming sensitive, as I told you, but you can think of them as being fooled by some other stimuli. These output neurons appear to require integration of activity from multiple LC6 neurons within relatively short time windows, and by doing that, they achieve an enhanced selectivity for looming in comparison to these other stimuli. Again, this is all detailed in the manuscript. Okay, so we have two different output neurons, and coming back to the spatial question, it's quite straightforward to map receptive fields for these neurons; but because they're primarily looming sensitive, we have to do it with tiny looming stimuli, presented on a dense grid all across the eye. Summarizing a fairly complicated data set, we find something like this: the bilateral neurons have a much larger receptive field. Essentially everywhere we can present a looming stimulus, we get a nonzero response, except for this upper right corner, which we learned after the fact was spatially occluded by the mount we used to hold the flies. You'll notice that these stimuli are all presented within one eye, and the responses essentially go all the way up to the frontal midline, and maybe just slightly across it. In contrast, the ipsilateral neurons have a much smaller region over which they respond. They really appear to have a much more focal visual spatial readout, something like what you would consider a typical receptive field, again for looming stimuli. So we have two cell types with very different integration properties, both spatial and featural. This is quite surprising, given the original challenge of extracting spatial information from this glomerulus, so we wanted to know whether we could explain it through connectivity.
Around the time we had these data, 2017 or 2018 I think it was, when my former graduate student showed me those results it was a bit confusing, because again, the dogma had been that you're not supposed to have spatial information in the glomerulus. So she repeated the experiments many times, we tried it many different ways, and it was a very consistent finding. Around this time, Davi Bock had finally established a full adult female fly brain transmission EM volume, and convinced us that you could actually identify neurons and trace them all the way across dozens and dozens of sections. Using those data, we were able to find the LC6 neurons. At the beginning there were no tools for this other than the CATMAID tracing environment, and it's kind of amazing what we can do now, a few years later; at the time, it took a few weeks just to find an LC6 neuron. Then we found all the neurons, and this got much easier with a team at Janelia called the CAT (Connectome Annotation Team), which was managed by Ruchi and had a whole group of excellent students who helped us trace neurons. So we were able to find all the LC6 neurons, find the target neurons, reconstruct them, and tag all the synapses between the target neurons and LC6 neurons. I'm just showing a few examples here of one of the ipsilateral neurons and two of the bilateral neurons, one from each side of the brain, and we found many hundreds of synapses between LC6 neurons and these targets. They're clearly connected, and clearly integrating from LC6 in the glomerulus. Then Arthur Zhao, a postdoc in the lab, took on the computational anatomy project of trying to predict receptive fields from these reconstructions. In the manuscript there's a very beautiful and interesting take we have on assessing this question of retinotopy; I'm not going to talk about it here, but if you have questions, I'm happy to.
The idea is that we asked whether there's any evidence for retinotopy at any level in the glomerulus, and the answer is we find a little bit. The glomerulus is not organized in a fully random way; there's a tiny bit of spatial information that's preserved, let's say, trivially, so if you were just locally integrating within parts of the glomerulus, you would inherit a very weak spatial bias. But what we can do, having reconstructed all of these neurons, is that Arthur can represent every single neuron by its center of mass, shown as these blue dots, within the flattened layer of this retinotopic lobula structure. Here we've highlighted two example neurons, this blue one and the red one, and from them we can computationally estimate anatomical receptive fields. We do so in eye coordinates, which is just a flattened view of one eye. Here, this blue neuron corresponds to a downward-looking part of the eye, essentially along the midline and almost directly below the animal, and this red neuron is looking more or less directly behind the animal along the equator. With that, we can estimate the expected receptive fields for the target neurons in a very simple way: we essentially just scale the receptive field of each individual LC6 neuron by the number of synaptic connections it makes onto each target neuron. We've done that for multiple target neurons, and to make a long story short, here's what we find. Within the field of view of one entire eye, the EM-based receptive field estimates look like this: we expect a much larger receptive field for the bilateral neuron and a smaller one for the ipsilateral neuron.
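The estimation step described here, scaling each LC6 neuron's anatomical receptive field by its synapse count onto the target, can be sketched in a few lines. This is a minimal illustration, not the paper's analysis code: the neuron centers, synapse counts, and the Gaussian shape of each LC6 receptive field are all made-up assumptions.

```python
import numpy as np

# Hypothetical inputs: each LC6 neuron gets a receptive-field center in
# flattened eye coordinates (azimuth, elevation, in degrees) and a synapse
# count onto one target neuron.
lc6_centers = np.array([[10.0, -20.0], [15.0, -18.0], [60.0, 0.0], [90.0, 5.0]])
synapse_counts = np.array([40.0, 25.0, 5.0, 1.0])

def anatomical_rf(grid_az, grid_el, centers, weights, sigma=8.0):
    """Sum of Gaussians, one per LC6 neuron, each scaled by its synapse
    count onto the target; returns a map normalized to a peak of 1."""
    az, el = np.meshgrid(grid_az, grid_el)
    rf = np.zeros_like(az)
    for (caz, cel), w in zip(centers, weights):
        rf += w * np.exp(-((az - caz) ** 2 + (el - cel) ** 2) / (2 * sigma ** 2))
    return rf / rf.max()

rf = anatomical_rf(np.arange(0, 120, 2.0), np.arange(-45, 45, 2.0),
                   lc6_centers, synapse_counts)
# The predicted receptive field peaks near the heavily connected LC6 neurons,
# so biased synapse counts alone produce a focal spatial readout.
```

A target neuron that samples many LC6 inputs nearly uniformly would instead produce a broad map, like the bilateral neurons in the talk.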
Before we get into the comparison, we were already excited by this, and I would say it's the more significant result, independent of how well the connectome explains the functional measurements (there are a bunch of caveats I'm happy to talk about). The first-level observation is that the connectome supports the proposal that within a glomerulus you can generate a visual spatial readout: the target neurons can have biased connectivity to the input neurons, and that alone is already sufficient to generate very different readout structures. Now, of course, we have not implemented or taken into account any network effects; there could be recurrence within the glomerulus, there could be sharpening through inhibition, and all of these things could change the actual receptive fields. But just at this first level, we already find these striking differences, and I would say, impressively, they roughly corroborate the functional measurements, especially the features I pointed out before. Just to be clear, I'm carrying the 70th-percentile contour level over to the functional measurement. The main thing we find is that for the bilateral neurons, the connectome supports input going all the way up to the midline, and for the ipsilateral neurons, we find these smaller receptive fields that avoid the midline. You'll also notice an interesting discrepancy down here that we have no explanation for. So the maps roughly agree, and connectivity alone is able to explain at least some amount of these differences. Okay, I'm going to end the first part here with a quick summary, and if there are any clarifying questions, you're welcome to type them into the YouTube chat.
I'll take a brief pause to see if there are any questions; I think we'll have a more interesting discussion at the end, once I present the rest. So, just to take stock of the question I laid out earlier: why would you use a glomerulus rather than a retinotopic, layered structure? First of all, I don't know if you noticed, but on my title slide I had this sort of haystack; it's actually a snapshot of the reconstructed skeletons within the middle of the glomerulus. At this half-micron scale it really is a thicket of neurons, and it really is the case that every part of visual space is essentially within the sort of distance you might think of as potentially being able to support connectivity. So there really is a pretty serious developmental, and an actual functional, readout challenge. The original proposal was that this glomerulus represents an abstraction of visual information, and I think logically that doesn't quite make sense; it's not clear that flies should throw away all spatial information, and hopefully the data I've shown you support the idea that some spatial information is extractable from this structure. Our proposal for what might be going on, and it's just a proposal that would require a bit of formalization to demonstrate, is that the glomeruli present an interesting trade-off that may minimize the total volume and wire length you need if you're trying to support two kinds of operations: the sort of all-to-one integration I've shown you, represented by these bilateral G1 neurons, and the more focal, classical receptive-field-type readout of the LC6G2 neurons. Whether it actually makes more sense to do these things within layered, well-organized retinotopic structures like we have in the visual system, or within these glomeruli, may just depend on which kind of readout is more prominent; that balance may tip one way or the other.
Right, so. All right, let's see, are there any questions before we move on? There are two questions in the chat. One viewer has asked whether the LC6 and the bilateral neurons respond to a checkerboard loom. I think what that means is one of these loom controls, where you have local bits of darkening without all of the coherent motion. So the short answer is yes, but it's not as strong as a proper loom; that gives you about a 50% response, if I remember right, and we can talk about it later. And then Tyler, hi Tyler, how are you doing, asked: do they integrate non-visual inputs? That's a great question. We don't know, but I'm almost sure they do. I would say the LC6 inputs do not explain the full input to these target neurons, and some of the inputs come from outside the glomerulus. So yeah, that's a fascinating question to follow up on. Okay, thank you for those questions. So, a very quick summary of what we've shown so far: hopefully you're convinced already that you don't need simple retinotopy to have spatial vision. There's something we should already re-evaluate about what the brain is capable of, just from looking at its structure: you don't actually need a simple retinotopic organization to encode spatial information. Okay, a couple more questions. How do you compare these visual glomeruli to olfactory ones? That's a good question, hi Catherine, how are you.
So the simplest comparison is just that they're incredibly dense with synapses, so they really just pop out, even in these initial comparisons of fly brains. I mean, it's actually kind of astonishing the density with which we have synapses in these regions, and I'm not sure exactly what sort of comparison you're looking for, but it'll be a slightly more interesting question after I present the comparison across glomeruli. So let's come back to that question later. And then Greg was asking about the two hemispheres. Unfortunately we never did the left-hand side of the brain; we were sort of exhausted after doing one side, but we'll do it, Greg, I promise, especially with a larger data set. That's one of the challenges; well, we'll talk about it later. Yeah, it's a great question, we didn't do it. The consistency check that we did do, and what we show in the paper, is across multiple independent output neurons of each cell type, and they're more consistent than not. Okay, so since at the very end there I raised this issue of wire length and volume minimization, it's worth just recapping it briefly, because it's a really lovely idea. It's been proposed in a bunch of different ways, and I think formalized by Mitya Chklovskii in a really elegant set of papers. The basic intuition is that if you're designing brains from scratch, you want your axons to be as wide as possible, because they'll be fast and the timing of signals will be more reliable; roughly, for non-myelinated axons, conduction speed should be proportional to something like the square root of the diameter. But then you run into a trade-off, which is that if your neurons are made up of tubes, of wires, of cables that are too wide, you get into this problem that you start to push everything apart. So okay, you have fast transmission, but then it's transmitting over longer distances, and then everything gets big.
Right, and so in this simulation you can see that as you make a sort of imaginary axon wider and wider, you get this very rapid reduction in the conduction delay, and then all of a sudden it starts to take off, because you're making the brain larger and larger and pushing everything apart. And this is not just a hypothetical issue, it's really a serious constraint on brain organization. From this example from the neocortex, what you might consider the wiring, the axons and dendrites, really does comprise something like 60% of the volume. So it really is a major constraint. And just to be clear, when we talk about minimal wiring, nobody really has in mind that any single neuron you pull out of the brain is optimized so as to be as small as possible. The idea is that across the entire population of neurons, for all the connections they need to form, you have a global minimization. So that's the sort of trade-off we have in mind. What's fun, and why I like to bring it up, is that the implications of this are quite profound: if circuits are optimized to minimize volume and minimize wire length, what you get essentially for free is that most brain computations should be local. And what that means is that neurons are close to each other because they're actually participating in some shared circuitry, so that's the basis of a lot of the organization we tend to see in the brain, and it provides perhaps the simplest, and maybe too simple, but a really intuitive and appealing proposal for why we see maps in the brain at all. So the first example I showed you today was about retinotopy: you sort of propagate the organization of the retina, presumably to keep these operations local and to minimize connectivity and wire length. Probably the most famous examples of that are these homunculus-type maps in motor and sensory areas of cortex described by Penfield.
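The trade-off described here, faster conduction from wider axons versus a brain that swells as total wire volume grows, can be captured in a toy model. This is purely illustrative: the square-root-of-diameter speed scaling for unmyelinated axons is from the talk, but the constants and the cube-root scaling of path length with total volume are my own assumptions, not values from the papers cited.

```python
import numpy as np

def conduction_delay(d, wire_cost=100.0, base_volume=1.0):
    """Toy conduction delay for an axon of diameter d (arbitrary units).

    Speed grows like sqrt(d) for unmyelinated axons (as in the talk).
    Total wire volume grows like d**2, inflating the brain, and typical
    path length is assumed to scale with the cube root of total volume.
    """
    speed = np.sqrt(d)
    path_length = (base_volume + wire_cost * d**2) ** (1.0 / 3.0)
    return path_length / speed

diameters = np.linspace(0.01, 2.0, 400)
delays = conduction_delay(diameters)
best = diameters[np.argmin(delays)]
# Delay first falls steeply as the axon widens, then rises again once the
# expanding brain dominates: a single interior optimum, as on the slide.
```

With these placeholder constants the optimum lands around d of 0.17; the point is only the shape of the curve, not the numbers.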
And these are even more astonishing, because these are many, many synapses from the periphery, and yet you still have this fairly well preserved organization. So it seems like a really striking feature of brain organization, and it seems to come out of this minimal-wiring proposal. And thinking about this with colleagues, we realized that this is such a fundamental principle, but it's kind of hard to think about how the maps are built and used, other than just observing that they're there and using them as a way to find neurons to record from. Because in order to understand them, at a minimum, in the modern neuroscience fashion of trying to understand brains, you would want to know: what are the cells and the cell types that actually make up these maps, and what do the neurons encode? And then you'd like to know something about how the map is actually used, how it's read out downstream. And so we realized that these LC neurons I've introduced today are a really neat place to ask these questions about mapping, because for these three criteria, we essentially already chose the first one: we're going to focus on these visual projection neurons. We have some ideas, and can find out more, about what they encode, and then connectomics now allows us to look at downstream readout mechanisms. And so I'm going to give you an overview of some recent work where we were able to address all of it. This is the project of Nathan Klapoetke, who is a shared postdoc with Gwyneth Card. And so the first question that we tackled is moving beyond LC6, taking on another whole set of LC neurons.
What are they even encoding? Right, so we published a whole set of neuron driver lines about five or six years ago, and since then there have been many papers from many labs; these neurons have become quite useful to silence and look for behavioral effects, or to do some imaging. So there are a bunch of ideas, but we thought it was worth taking another look at a bunch of neurons which have been hard to understand, and Nathan has really perfected a kind of complicated experiment; if you have ideas on how to do a simpler one, we'd love to hear it. What we find is that it's a bit of a Goldilocks problem: if you image in the optic lobe, in the lobula, the density of neurons is too high and the calcium signals are too low, so it's hard to get specificity; if you image in the glomerulus, you have this packing problem, everything is all together, and it's very hard to map receptive fields and deliver targeted visual stimuli. So what Nathan has been doing is imaging small volumes in the region in between, where the neurons are anatomically separable: you can find these initial axon segments and you can actually segment them out. And in so doing, you're able to map receptive fields for individual neurons and then deliver visual stimuli that are quite targeted to the center of the receptive fields of those neurons. It turns out this is almost essential, because these neurons are quite selective in their visual stimulus properties, and so it's quite easy to miss a lot of their remarkable selectivity if you deliver stimuli which are not within these mapped receptive fields. So here's one example, LC18, just to show, for three adjacent neurons, three adjacent receptive fields, and here's a typical receptive field mapping for the second neuron in this set; these are just small square stimuli that have been flashed. And I'm sorry to say you cannot read this paper yet.
So I'm going to give you a quick overview. It should be on bioRxiv within weeks. You can ask as many questions as you'd like. All right, so having mapped receptive fields for a whole panel of LC neurons, Nathan found actually a quite straightforward relationship between the anatomical size of the neurons and the receptive field we measure. Right, so it's always nice when data provide simple answers, and in this case it's a simple answer: if you just look at the size of the dendrites in the visual system and then look at the functionally measured receptive field, in general these correlate very well, so there are no surprises here. That's good. And then, after mapping receptive fields, Nathan goes on to deliver something like 100 different stimuli. I have to say, people who know me know this is my favorite approach to science: don't choose your favorite hypothesis, choose all of them. Or as many as you can squeeze into a sane experiment. So we test speed, size, looming, non-moving stimuli, different motion types, things like that, with a few controls in the mix. And so there's LC18, which I showed you, and it turns out LC18 is quite sensitive to small moving objects while ignoring most other stimuli. And across the ten cell types that we focused on, we find a wide range of response properties to the different stimuli, such that you can essentially distinguish every single cell type from every other one based on its response properties. And at the very first pass, you can make one very simple cut, which is that half of the cell types are primarily responding to looming stimuli, which is here in the first two columns, with much lower responses to other things, and half of the cell types essentially ignore looming stimuli and respond to things you might call small moving objects. Okay, so I'll give you a second to take that in.
And then we're going to go and talk a little bit about the size tuning of these various neurons, because that turns out to be another very interesting way to distinguish the cells, and then we'll come back to brain organization, all in the next few minutes. Okay, so just to give you a flavor of how Nathan is mapping the size tuning of these neurons, I'm going to show you data that look quite a bit like a receptive field map, but it's actually a map of size tuning. So here are the kinds of stimuli we show: little tiny two-by-two-degree squares going up to 90-by-90-degree squares, and we vary either the width or the height of the object relative to the axis of travel. So this is a hypothetical neuron. Oh, oops, sorry, that's supposed to be 30 degrees, not 30%, I apologize, I made it this morning. Okay, so imagine a neuron with a 30-degree receptive field, and, essentially traversing through this column of the map, imagine we're just passing objects: that is the nine-degree-wide bar, and now it's a nine-by-nine square, and then a nine-by-15 rectangle, and then nine-by-30, and then we can keep going to a smaller and taller one, but this neuron couldn't tell the difference, because it has a small receptive field. So that's the rough idea of the kind of stimuli that were passed through. And of course we do all of them; we do the full matrix, presented to all the neurons. And so for the looming neurons, you can see here we tend to get quite different responses, so they're all distinguishable based on their size tuning already. But for today I think we're going to focus on the small-object neurons, because they're even more interesting.
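The full width-by-height matrix of swept rectangles is easy to picture as a grid. Here is a hypothetical reconstruction; the 2- and 90-degree endpoints and the 9/15/30 steps are mentioned in the talk, but the exact stimulus set is my guess.

```python
import itertools

# Hypothetical size-tuning stimulus grid (degrees of visual angle).
# Treat the exact set of sizes as illustrative, not the ones used.
sizes = [2, 9, 15, 30, 60, 90]

# Every width x height combination of a rectangle swept across the
# receptive field along a fixed axis of travel.
stimuli = [{"width_deg": w, "height_deg": h}
           for w, h in itertools.product(sizes, sizes)]

# As in the slide: a neuron with a 30-degree receptive field cannot
# distinguish a 9x30 rectangle from a 9x60 one, since both fill the
# field along the height axis as they pass.
```

The point of presenting the full matrix is exactly what the talk describes: test all the hypotheses at once rather than a favorite few.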
And so here we find, for these five neurons which primarily respond to small-object motion, very different response selectivity. So this LC18, I'm just going to give them names, but casually you might think of this neuron as being a small-point detector: it's sensitive to objects that are so small that they're smaller than the stimulus sizes to which the earliest neurons in the fly visual system are tuned. So there's some really interesting size-selectivity enhancement that goes on in subsequent processing. I'm not going to talk about it today, but I have a slide, so if people in the Q&A want to know how flies see, and actually prefer, things which are smaller than what the earliest stages of the visual system encode, just ask. So then we have LC21 here, which seems to like things which are not too big in either dimension; LC11 seems to like things which are not too tall; LC25 is primarily a vertical line detector, so it likes things that are tall and skinny; and LC15 is mind-boggling, because it's a neuron that is essentially an edge detector, almost like the edge detector from an image processing toolkit: it likes things that are thin and long. The point is that we find very different response properties for all of these neurons, and in the manuscript, which hopefully you'll be able to read soon, we spent quite a lot of time deconstructing the response properties of LC18 and LC25, because we are interested in how these features are encoded. I'm not going to talk about that today; I'll just say that their encoding properties are different from each other, and they are different from the classic motion detector. So there are already at least three or four examples of different ways to detect the movement of objects in the fly visual system, which is quite cool.
But the one thing I want you to take away is that, unlike the general receptive field mapping, the size tuning is in no way explained by the receptive field size of the neuron. Just to give you one example, LC18, which is the cell type that encodes the smallest features of all the ones we found, actually has one of the largest receptive fields. So those are unrelated, and it's in part due to the fact that this size tuning is actually generated and computed by the neuron itself on its inputs; it's not just something that's inherited from upstream. Okay, so we're going to come back to this brain organization question. So the glomeruli essentially form this contiguous group, and that's why we focused on these ten, because they're all neighbors in this protocerebrum. And so we have this contiguous group of optic glomeruli, and there's high stereotypy between individual brains, and so one would hope that that stereotypy is actually good for something. And what it might be good for is organization for higher-order circuits. So if we just apply the labels I've already introduced to the anatomical map, you'll see that by and large they're already separated by these features: the looming-sensitive neurons are next to each other, the object-sensitive neurons are next to each other, with this LC18 sort of being on the fence. And so in order to take a first pass at thinking about this organization, what Nathan did was take all of the response properties of the neurons, that big matrix that I showed you a piece of, and then apply principal components analysis to reduce and project this down to two dimensions. And there, what we find, and this is quite cool, this was done on individual recordings: from individual recordings of each cell type, you find that all the cells essentially cluster with the other cells of the same type that were recorded separately.
Right, so that's the first result, that all the neurons are most similar to others of their own type. And then they're organized, based on their response properties, in a way that has this kind of shocking similarity to what we find in the anatomy, such that the relative positions of the clusters really resemble the anatomical positions. So for example, here's LC25 and LC15 right next to each other; here's LPLC2 sitting right next to LC4; and really the only outlier is LPLC1, which is kind of the troublemaker. Just to give you a bit of intuition for what seems to be going on there: roughly speaking, the first principal component seems to capture just the thing I've already introduced, which is the extent to which the neurons respond to looming or not. So all the looming-sensitive neurons are down here, at positive values of PC1, and the object neurons are up here. And the second principal component seems to capture the size tuning: the neurons which are more responsive to larger objects tend to be on the right-hand side along PC2. So, roughly speaking, that's what seems to be captured by the two principal components, the looming sensitivity and the size tuning. And of course, we only showed about 100 stimuli; as people know, you would need literally an infinite number of stimuli to completely define the properties of these neurons and to fully specify their relationships. But we already find this kind of astonishing similarity between the encoding properties and the spatial locations, which really shows that nearby neurons have similar visual responses and suggests that the glomeruli are organized into a feature map. So then we get to this question: does the brain actually use this feature map? Do downstream neurons actually integrate from nearby glomeruli, and why are the glomeruli where they are? And so, this is now the 2020 version of this project.
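The PCA step is straightforward to sketch. Below is a minimal numpy-only version on synthetic data; the cell-type labels stand in for real ones, but the response matrix here is simulated noise around invented prototypes, whereas the actual analysis used the measured responses to the roughly 100 stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the data: 5 individual recordings of each of
# 3 cell types, each recording a response vector over 100 stimuli.
prototypes = rng.normal(size=(3, 100))  # one tuning profile per type
responses = np.vstack([p + 0.1 * rng.normal(size=(5, 100)) for p in prototypes])
labels = np.repeat(np.arange(3), 5)

# PCA via SVD: center the matrix, then project onto the top two components.
X = responses - responses.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pcs = X @ Vt[:2].T  # shape (15, 2)

# Recordings of the same type should cluster: within-type distances in
# the 2-D projection are smaller than between-type distances.
def pair_d(i, j):
    return np.linalg.norm(pcs[i] - pcs[j])

within = np.mean([pair_d(i, j) for i in range(15) for j in range(i + 1, 15)
                  if labels[i] == labels[j]])
between = np.mean([pair_d(i, j) for i in range(15) for j in range(i + 1, 15)
                   if labels[i] != labels[j]])
```

The first result in the talk corresponds to `within < between`; the second, the resemblance between cluster positions and anatomy, needs the real anatomical coordinates and is not modeled here.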
We don't have to go and find the neurons one by one. We benefit from massive efforts by a large number of people, especially from the FlyEM project at Janelia, who've generated this hemibrain connectome, which fortunately contains all of the glomeruli, and unfortunately doesn't contain most of the visual system. But it's hard to complain when you get, essentially for free from the work of many talented people, something like 20,000 reconstructed neurons with almost 10 million presynapses, et cetera. And within that data set, we have about 1,600 neurons across the 20 LC types, so it's a really nice starting point. Briefly, I mentioned before that we have this remarkable stereotypy between brains in the position of the glomeruli. I just have to show this, because the concordance is amazing. If we take, out of the hemibrain, the spatial locations of the T-bars, which are a structural component of presynaptic active zones, and just color-code them, it looks remarkably like essentially the exact same data generated from antibody labeling. So again, the structure of the glomeruli looks remarkable, even down to peculiar details, like this LC4 glomerulus having a little stick down at the bottom. So again, I find this remarkable, and it's because of this stereotypy that it actually becomes relatively straightforward to reason across very different data sets. All right, so getting to the main question: are these neighboring glomeruli used for shared computation? We do this roughly in the way I introduced for the LC6 local computation question: we find readout neurons and we ask where they read out from. So it turns out that in the hemibrain connectome, if we set a threshold for strong connections, we find over 140 neurons that integrate from exactly two LC types. So they integrate from many individual LCs, but across exactly two glomeruli.
And then we can just look at the distribution of their connections and ask: how does this compare to an imaginary brain constructed such that all glomeruli are integrated from uniformly? And we find that the actual pairings you see in the connectome are heavily skewed towards nearby glomeruli. So here we're just plotting against the distance between glomeruli, and you find that the actual neurons in the connectome are integrating from neighboring glomeruli. Now, I showed you that close-by neurons are more similar. So for the neurons for which we have done all of this calcium imaging, we can essentially take something like the dot product between their response properties and ask: do the integrated glomeruli feature similar response properties? And indeed that's what we find: we also have this skewing towards inputs which encode very similar visual features, as compared to a uniform connectome. Now, that analysis is focused on the ten cell types for which we have a lot of imaging data; in the connectome, of course, we have all the LC types. So here it's nice to summarize everything by just looking at a matrix of all the neurons that integrate strongly from multiple glomeruli. And you find something like this: it's actually quite sparse. There are many regions where you don't find integration across glomeruli, but then you find these patches. And what's quite nice about connectomic data is that at one pass it gives you this broad overview of the organization, but it's also a lookup table. In that sense, you can just go back to the connectome and say, okay, we're kind of interested in visual looming; okay, here are some neurons which integrate, for example, from two looming-sensitive pathways. So here's a very specific cell that you could look up and maybe record from.
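The comparison against a uniform connectome can be sketched as follows. The centroids and pairings below are invented placeholders, not the hemibrain coordinates or the actual readout neurons, so only the logic of the test carries over.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Placeholder glomerulus centroids in microns (NOT hemibrain coordinates).
names = ["LC4", "LC6", "LC9", "LC11", "LC15", "LC18", "LC21", "LC25"]
centroids = {n: rng.uniform(0.0, 60.0, size=3) for n in names}

def dist(a, b):
    return float(np.linalg.norm(centroids[a] - centroids[b]))

# In this sketch each downstream neuron integrates from exactly two LC
# types; these example pairings are hypothetical.
observed_pairs = [("LC4", "LC6"), ("LC11", "LC15"), ("LC18", "LC21")]
observed_mean = np.mean([dist(a, b) for a, b in observed_pairs])

# Uniform null: every glomerulus pair equally likely to be integrated.
null_dists = [dist(a, b) for a, b in combinations(names, 2)]
null_mean = np.mean(null_dists)
# The finding in the talk is that observed_mean sits well below null_mean,
# i.e. integration is skewed toward neighboring glomeruli. With these
# random placeholder centroids the direction of the skew is not guaranteed.
```

The same template works for the feature-similarity version: replace `dist` with a dot product (or correlation) between the measured response vectors of the two glomeruli and compare against the same uniform-pairing null.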
And then if you're interested in small-object motion, here's a neuron which integrates from two different small-object pathways. And there are even some neurons over here, for example, which combine inputs from looming-sensitive pathways as well as from object-sensitive pathways. In this case it's actually kind of interesting, because the combinations you tend to find share other properties: these are pathways which both tend to be skewed towards faster stimuli. So maybe these target neurons would be looking for small things moving around which occasionally change in scale, for example. Okay, so just to bring everything together: these are the visual projection neurons in the hemibrain, and they're very beautiful. What I've shown you today is that there are dozens of parallel projections from the optic lobe, and each one encodes distinct visual features. Within these glomeruli, you can actually read out focal receptive fields, but that's going to be done at a coarser scale than what you find in the retinotopic neuropils; so there are already some downsides, let's say. And the organization is really everywhere; we basically never find any evidence for randomness anywhere. Things tend to be quite organized, so randomness is not the rule. Within glomeruli, what we find is that there appears to be a compromise between being able to read out all of the neurons at once and being able to pull out these sorts of focal receptive fields. Across glomeruli, we find efficient shared computation between projection neurons that encode very similar features, which tend to be next to each other. And the question we haven't been able to answer, and the one that I think is now the most exciting, is whether what we're looking at is a bunch of parallel visual spatial maps, which then have these different feature selectivities.
And whether they actually support spatial readout across features at the same spatial location. That would be very exciting. It's been a little bit hard with the existing data to assess that, but we're working on it. And so I think there are many open questions about vision in the central brain, but it actually seems quite approachable. The challenge for us has been that we sort of thought, in the simple fly brain, that the pathways we'd be looking for are quite short. And here we are outside of the optic lobe, so now these are fifth- or sixth-order neurons, counting from the photoreceptors, and we're still not obviously looking at anything like premotor neurons. So every time we take one step further into the brain, there's still a little bit more to learn. But I think the progress is really exciting and will be supported by new connectomic data sets in the next few years. Okay, so that's it for the biology. I was going to say a little bit about instrumentation; I'll make this super quick because I think I'm approaching the hour. For a long time we've been working on LED displays to deliver visual stimuli, and we've kept working on it. I've benefited from having really great people in the lab, Frank, Matt and Lisa and others, who've helped push the technology. We're now up to the fifth generation. With the fourth generation, one of the nice things we did is set it up so it can run with UV and green, since it has been hard to buy commercial UV LED displays. We have a website where the documentation has gotten quite a lot better, and we're now working with multiple groups who are trying to build these, so we're trying to figure out how to support the assembly. If there are people who are passionate about open hardware, I would love it if you get in touch; we're always trying to help people make them more reliably and cheaply, which remains an issue. So we'll have a methods paper on this fourth generation soon.
One of the limitations I mentioned briefly for matching the receptive fields of the neurons is that the receptive fields we measure have not really spanned the full field of view of the fly. So we've been working on that in a couple of ways. One way, here with this G4 arena, is that we can pitch it, and so we can deliver geometrically corrected stimuli, in this case an optic flow field, at different locations on the eye. So that's quite convenient, and it's still something we're continuing to work on, and we're thinking about different ways to do it. With these G5 panels, which are high-density and high-LED-count, we've been thinking about assembling this amazing shape called the rhombicuboctahedron, which is essentially the most reasonable spherical approximation you can build with squares and some overlap. So we're working towards that. And then the very last thing I want to show, because it's just too fun not to show: Frank Loesche in the lab has spent a lot of time building a highly optimized fly-on-ball setup that is very, very cheap. So that's the fly on a ball, looking at visual stimuli, which for us is a really exciting experiment. It's quite complicated, and we understand that a lot of labs cannot set this up, so Frank has made a great effort to convert, essentially part by part, a typical setup in the lab to something that other people can set up. And it's been incredibly well documented, both how to run the experiments and how to prepare the flies; there is a very nice website where it's all super detailed, and we have a manuscript up on bioRxiv and hopefully a published paper soon. But the bottom line is we're able to transition this kind of experiment from one which is super expensive and super complicated to one that hopefully is highly accessible, that many people could build and set up. And we were originally targeting lab courses, like the summer courses, or undergraduate training.
But now we realize there are a lot of fly labs who never would have thought that these experiments were an option for them, and they now seem quite excited; we've even had some interest from high school teachers. So we'll just see how far it goes. And again, if people have questions or want to help us by streamlining and documenting any of these existing systems, just get in touch. Okay, thank you very much for that. I just want to thank all the incredible people in the lab. It's been a hell of a year and a half; it's a very resilient, really exciting group of people, super fun to hang out with on Zoom, as often as possible. And we've had great collaborators and lots of help. And I'm very happy to take any questions. I'll just say, while we take questions, I'm going to leave this slide up for two more minutes. If people have a phone and want to pull it out, there's a fun demo at the QR code there, which will give you an experience of what it's like to be a fly in one of our experiments, and show people that our cell phones are actually quite good at delivering visual stimuli. All right, thank you so much. Thank you very much, Michael, for this impressive presentation. Lots of captivating stories, lots of compelling evidence for the importance of connectomics and what can be distilled when it comes to the functional investigations as well. I really like what you are hinting at, like the principles of neural design; it's nice that we have Simon with us as well. There are already more questions appearing in the chat. I would like to remind the audience that we will have a 10-minute post-talk discussion session, and then we will continue in the Zoom room link that I will be posting shortly. So I will start with the questions as they appear. I think the first one was from Tom.
He says: isn't one big argument for retinal maps that they emerge as the first step from duplicating an already systematic map, in an evolutionary sense? For example, V1 duplicating to become V2, and again for V3, each time adding a processing layer but becoming more clumpy with iterations. Could that explain why cortex still needs them even though they are huge? Sure. These kinds of evolutionary arguments, though, they're all interesting proposals, hard to prove. That's sort of what I was getting at, where we thought this map of LC neurons is actually a chance to put some of these proposals to a test, rather than just observing that they provide this sort of explanatory appeal. So I don't have any problem at all with what Tom is proposing. Similar things have been proposed in the fly visual system for duplication, for example how the on and off pathways have sprouted new pathways. So we have the lobula and the medulla; maybe there's a more ancient one and a newer, more derived structure, which again has this full complement of retinotopy. So I don't take any issue with that at all. I'm still trying to push the discussion in a different direction, which is: what happens to vision when you're not clearly endowed with massive retinotopy? And maybe that is right at the cusp of sensory-motor transformations, or where you start to have multisensory integration. At some point the brain is doing things which are vision-plus, and you expect to lose a lot of that comforting organization of the retinotopic neuropils. And so I think that's where we can start to go now in the fly, by leaving the comforts of the visual system. So yeah, I really like what Tom is getting at, and we've sort of moved beyond that; we haven't answered that question but are looking elsewhere. The next one is from Katrin Vogt.
Do you know if any of the LCs encode color information? Any hints from possible connections from the medulla to the lobula to the glomeruli? Yeah, it's a great question, Katrin. So for sure, the answer is yes, sort of. It's mainly what we know through these Tm5 neurons, which project from the medulla to the lobula, and many of them go into LC neurons. What remains unknown is whether color as you would think of it, things which are downstream of opponent pathways or that have very strong wavelength specificity, strongly influences the LC outputs. That's to be determined, and so is whether what we're looking at is really color pathways or, let's say, color-inflected object pathways: maybe what the neurons are primarily doing is encoding, say, on and off things, but that tends to be influenced by some color information. So that's all to be determined, but the signals definitely get there; we just don't know what they do. In the interest of time I will proceed with the next questions, as we already have some open-ended ones from when you were discussing the first part. The next one is from Evgenia Ciapi: how about the LP-projecting glomeruli? Are interneurons integrating across neuropils, or are the LO and LP VPNs largely parallel channels? It's a really good question. They're mostly independent, and they're actually spatially disjoint, so they're not neighbors; they're mostly independent. There is some crosstalk. From what we can tell, most of the crosstalk, if you want to get motion signals into these feature detectors, tends to actually happen in the optic lobe. But it's to be determined; I mean, if you're two layers downstream of the glomeruli, we have nothing to say, we don't know, but at the first layer they're mostly independent, with the exception of the looming ones, which are a special case. The next one is from James Stone: is the feature map innate?
If so, what proportion of this unknown information in the genome is required to make the map? Unreasonable question, I know. Right, that is an unreasonable question. I can't answer it; I really can't. I have no ability to even approach the question, and I'll look like a fool if I try. We believe the map is innate. We haven't done the critical experiments, but everything we know points that way: for example, flies that have never seen the world will still have glomeruli, and the glomeruli will be where they are. So to that extent, yes. I don't know how to answer the second question. Speaking of which, I have already posted the Zoom link of the room that we are currently sitting in. So if you want to continue discussing with Michael in a more informal tone after the talk, please make sure to join. Next is Simon Laughlin. He says, a golden oldie: why have maps? Several good ideas here, "Computational Maps in the Brain", and he cites a paper. And then the question: any hint of fast (short-latency) and slow (efficiency) channels favoring their segregation? Yeah, for sure there are. I don't think we have a continuum of speed tuning such that we should expect a strong interaction between speed tuning and feature selectivity, but at a first pass, there do seem to be LC neurons which are tuned to slower things and to faster things, so that's for sure there. Rather than having every flavor of speed, there might just be fast and slow, with just a tiny bit of overlap in the middle, but for sure that is there. Great, and if I'm not mistaken, I think this was the last question that was posted on the YouTube tab. People are already joining the room. One question I have, maybe it's not important in the end, concerns the discrepancy you mentioned when you were talking about the receptive field profiles.
Between what you see with the EM and functional estimates; you said you don't have an explanation for this, but do you have any speculation as to where it might be attributed? I'll just say two things. One is technical: maybe we didn't do as well as we thought. We did the best receptive field mapping we could at the time, but maybe we really just needed to do this with better eye coverage; that's why I come back to that idea, so it's possible. One of the things we really appreciate now, having spent time thinking about mapping the field of view of the eye, is that flies really see a lot: they see directly below them, almost directly above them, and almost directly behind them. It's both practical and maybe a bit of anthropomorphizing flies that we focus on things in front of the flies. So if you want to distinguish between these receptive fields, maybe the simplest thing to do is actually to map the full extent of the functional receptive field and compare it to the anatomical prediction, but that's just a thing we would like to do; we haven't done it yet. The other possibility is that we're just ignoring all the network stuff, so it's possible that you have some interactions within the glomerulus, not just the direct first-order connections, which again refine receptive fields. Those are my two favorite speculations; I think we had three or four more candidates in the discussion of the paper. Yeah, but they get eliminated, I assume, as time passes. At this point, Michael, I would actually ask you to stop screen sharing so we appear bigger on people's screens. I would like to thank you for honoring us with a talk in our series, and remind the audience that I will be stopping the live transmission in about five minutes. So if you wish to come along, please make sure you click the link that you see in the chat.
Thank you very much for being here. And I officially waive my moderator rights, so people can proceed freely with their own questions once they appear on the screen. So if people want to reveal themselves, by all means, please go ahead. Thank you, George. How's everyone doing? I hope I was intelligible. Great. So, while no one's asking a question, I'm just going to hit you with one. In zebrafish, obviously, we've got something very much clumpy like your projection neurons, at the level of the projection fields, the AFs, with AF7 being the famous one for prey capture. But the AFs are perhaps doing an even more complex thing than what you're proposing your projection neurons do, yet they sit at a similar level of organization. So do we conclude then that the zebrafish retina is a bit smarter than the fly one, or the other way around? I'm just being provocative. I mean, I didn't talk too much about the small-target neurons. I think the neurons we find that are detecting small targets, like the neurons that have been described in dragonflies, are these LC neurons, and they have restricted fields of view. And that says nothing about what's possible when you integrate a bunch of them, either in a synergistic or an antagonistic way. So I'm not going to define the upper limit of what these projection neurons can contribute to, because we actually don't know. That's actually the weirdest thing about this project, and it's very humbling: as someone who's really excited about behavior, we failed to use behavioral methods to explain what many of these neurons do. All we have is the takeoff, and silencing experiments have been really hard, potentially because I think they all work synergistically.
So unless you have a really tight readout of how the flies are using a particular visual feature, it's not clear that if you silence one of these pathways you're going to find something prominent. So I think that's actually the weirdest thing about where we are: we just picked the neurons, and we tried to use a kind of reasonable representation of a visual feature to predict what they do. For sure we've missed lots of interesting things; there's no way we haven't. So I don't know, Tom, but sure, let's say zebrafish are smart, and fine, the flies are cuter, you know. I wanted to push you a little bit more on this amazing analysis that the authors did in terms of the EM estimates, and that little mismatch between the unilateral receptive field based on EM estimations and the actual one based on functional estimates. If I'm recalling correctly, the EM connectivity suggests that you will have this more ventral part, but you don't find that functionally. Do you know of any particular bias in terms of connectivity, or any extra element that might be providing some sort of inhibition or something like that, that would be masking what the truly feed-forward connectivity could actually define? Yeah, it's a great idea. Honestly, we didn't look; we were really exhausted by the time we finished that analysis. So we didn't map all the connections between the target neurons, for example. We already know that two of them are for sure inhibitory; we did map one GABAergic cell, and then there's a bunch of cholinergic neurons, and we just did not do enough mapping to propagate the analysis one more level.
So that bilateral neuron is pretty cool, because it's basically doing kind of what you expect for an inhibitory interneuron, whatever you would say, gain control, pick your favorite thing for an inhibitory neuron with lots of inputs and outputs, except it also bridges to the other side of the brain. Yeah, but I wonder whether that's a common feature across all these glomeruli, because that, as you very well know, reminds us of the famous ring neurons, even in a non-glomerular circuit: big cells projecting to the two sides, potentially. So I wonder whether that's actually a maintained feature across all these. It feels common, right? Yeah, and of course, with a whole connectome, they don't need to be direct: you can imagine you functionally get the same thing with one step in between, for example, but the direct ones are very easy to observe. Yeah, I'm sorry, I don't have an answer to your question.
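The anatomy-versus-function comparison discussed above, predicting a downstream neuron's receptive field from EM connectivity and checking it against a functionally measured one, can be sketched in a toy form. This is not the authors' actual pipeline; all receptive fields and synapse counts below are hypothetical illustrations, and the comparison is reduced to a simple synapse-count-weighted sum and a Pearson correlation.

```python
# Toy sketch: predict a downstream neuron's receptive field (RF) from
# upstream connectivity, then compare to a "measured" RF.
# All numbers here are hypothetical, not data from the study.

def predict_rf(input_rfs, synapse_counts):
    """Anatomical RF prediction: synapse-count-weighted sum of upstream RFs."""
    total = sum(synapse_counts)
    n = len(input_rfs[0])
    return [
        sum(w * rf[i] for rf, w in zip(input_rfs, synapse_counts)) / total
        for i in range(n)
    ]

def correlation(a, b):
    """Pearson correlation between two equal-length RF vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Three hypothetical upstream columnar inputs, each with an RF sampled
# at 5 visual positions along a 1-D strip.
input_rfs = [
    [1.0, 0.5, 0.0, 0.0, 0.0],
    [0.2, 1.0, 0.4, 0.0, 0.0],
    [0.0, 0.3, 1.0, 0.6, 0.1],
]
synapse_counts = [30, 50, 20]          # hypothetical EM synapse counts

predicted = predict_rf(input_rfs, synapse_counts)
measured = [0.4, 0.9, 0.5, 0.1, 0.0]   # hypothetical functional RF
print(correlation(predicted, measured))
```

A mismatch like the one discussed (a predicted ventral lobe that is absent functionally) would show up here as a low correlation over that part of the field, and the "network stuff" caveat applies: a purely feed-forward weighted sum ignores inhibition and other interactions within the glomerulus.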