Okay, we're good to go, Detlef. Good morning. Welcome to today's open research webinar hosted by eLife. This series aims to give early-career researchers an online platform to continue to share their research as an alternative to in-person gatherings. I'm sure that many of you, just like myself, are now watching from home. My name is Detlef Weigel. I'm a Deputy Editor at eLife, and I've been with eLife from the very beginning, for almost 10 years now. I've been a plant geneticist and evolutionary biologist for the past 20 years, but I actually started out as a neurobiologist as an undergraduate in the 80s. I'm really, really excited about the talks today; we have three fantastic talks, and you'll see the speakers here. We'll hear from Dr. Miriam Matamales, who is a postdoc at the University of New South Wales in Sydney. She'll be speaking to us about striatal function and its role in goal-directed action: a learning update. Next, we'll have Debanjan Dasgupta from the Crick in London. He'll be speaking to us about perception and encoding of temporally fluctuating odor stimuli in mice. And finally, Alexandra Tzilivaki, a PhD student at the Charité in Berlin, Germany, will speak about interneurons' intelligent operations: it's a matter of non-linear dendrites. Each of the talks is going to be 10 minutes, and after each talk we'll have five minutes of questions for the speaker, and then we'll move on to the next talk. To ask a question, you can type it into the chat on Zoom (make sure you figure out where the chat box is on Zoom) or directly into the Google document. We're joined today by Miranda, Andrea and Naomi from eLife, who are working in the background to support all of us. They'll help me line up your questions; I'll read your questions out loud and include your name where possible. The open notes document is also a place for you to contribute shared public notes. 
We welcome you to do so and to list yourself as a contributor in the list above the speakers for today's webinar. Thank you very much for doing this. Finally (we just stay on that slide; no, the previous slide), I'd just like to remind everybody that we are recording the webinar and also live-streaming it on YouTube, and to ask you please to be respectful, honest, inclusive, accommodating, appreciative, and open to learning from everyone else, and to not attack, demean, disrupt, harass, or threaten others or encourage such behavior. Very importantly, if you feel uncomfortable or unwelcome in any of these webinars, please contact us by email at events@elifesciences.org; Miranda Nye is watching that email address. And finally, we reserve, of course, the right to ask anyone to leave, or to deny access to subsequent webinars on Zoom. If you need help, send a chat message on Zoom to the host, or directly to Miranda, Naomi or Andrea. So without any further ado: really excited that we are hearing now from Miriam, a postdoc in Sydney, on striatal function and its role in goal-directed action, a learning update. Over to you, Miriam. Can you all see me? Yep. All right, thank you, Detlef, for your introduction. Hello everyone, thanks for joining. Today I'm going to talk about our new view of striatal function and its role in goal-directed action. Please send some questions and I'll be happy to answer them at the end of the talk. So, you know that the capacity to learn and adjust behaviors that secure resources is one of the primary functions of the brain. This idea of learning from interaction with our environment underlies a lot of the learning theories that are currently around. One of the most influential is the one from Rescorla and Wagner, which postulates that learning is driven by the discrepancy between what was expected from our actions or from a stimulus and what actually happened, that is, learning occurs when errors in prediction occur. 
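The Rescorla-Wagner rule the speaker mentions can be sketched in a few lines of Python. This is an illustrative toy, not code from the talk; the learning rate and reward value are made-up numbers.

```python
# Rescorla-Wagner update: dV = alpha * (reward - V), where V is the predicted
# value of the cue and (reward - V) is the prediction error driving learning.

def rescorla_wagner(n_trials, reward=1.0, alpha=0.2, v0=0.0):
    """Return the cue value after each trial and the per-trial prediction errors."""
    v = v0
    values, errors = [], []
    for _ in range(n_trials):
        delta = reward - v   # prediction error: large when the reward is unexpected
        v += alpha * delta   # learning update
        values.append(v)
        errors.append(delta)
    return values, errors

values, errors = rescorla_wagner(50)
# Early trials: large positive errors (unexpected reward).
# Late trials: errors shrink toward zero as the reward becomes fully predicted.
```

This captures the point made next in the talk: once a cue reliably predicts reward, the error (and hence the dopamine response to the reward itself) fades away.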
So a major question in both behavioral neuroscience and computational neuroscience is how this prediction error is implemented during learning. Classic experiments from Schultz and colleagues provided some insight into the neural basis of this process. They first showed that dopamine neurons (shown here as all these spikes of individual neurons) were activated following the delivery of an unexpected reward, which was thought to signal a positive reward prediction error. Later, when these monkeys were trained with a paired CS, a tone or a light that preceded the delivery of the reward, the dopamine neurons no longer activated in response to the reward, but instead activated in response to the anticipatory stimulus. Based on that, dopamine neurons convey a reward prediction error signal, not just a signal of the reward. Later still, they showed that in trials in which the delivery of the reward was omitted following the paired CS, the activation of dopaminergic neurons dropped below baseline, indicating a negative prediction error. So all this evidence indicated that this classic phasic firing of midbrain dopaminergic neurons conveys a reward prediction error signal in the brain. The question now is: where is this signal going? As a lot of people know, the striatum, or CPu, is the main target of dopamine transmission in the brain. To illustrate that, I'm showing you here the reconstruction of the axonal arborization of a single dopamine neuron from the SNc. As you can see, this neuron projects all of its terminals to the striatum, occupying a large area of this structure. So you can imagine that if we want to understand how this dopamine signal is encoded in the brain, we have to have a spatial, large-scale view of what's going on in the striatum. 
So the striatum is composed of these spiny projection neurons. As you can see here, they are spiny because they receive massive input from cortical and thalamic areas, and they are the largest population in the striatum; they outnumber any other kind of neuron, as you can see here, with the SPNs in white and the other kinds of interneurons in red. Interestingly, this population can be subclassified into two equally large subpopulations based on the expression of two types of dopamine receptors: the D1 type and the D2 type, and correspondingly the D1 and the D2 SPNs. As you can see, these two types of neurons are not organized in a topographical manner; they are randomly distributed, intermingled with each other, filling all the tissue within the striatum. Because they express different types of dopamine receptors, they respond differently to changes in dopamine levels. Some recent studies are starting to elucidate how these two populations encode, intracellularly or at the activity level, this error signal. One example is a study from Sabatini's lab in which, by measuring the activation of cAMP signaling within the D1 or the D2 neurons, they showed that the intracellular signal was amplified in the D1 SPNs mostly following a positive reward prediction error, whereas in the D2 SPNs the cAMP activation occurred exclusively after a negative reward prediction error trial. So in the study that I'm going to present today, we set out to answer the question: how do D1 and D2 SPNs encode this dopamine signal across large areas of the striatum in order to adjust a learned behavior? To do that, I set up wide-field, high-resolution mapping of transcriptionally active SPNs. 
For that, I used a marker of phosphorylated histones that are known to be phosphorylated in response to cAMP and glutamatergic signaling. I then used D2-GFP animals, which allowed me to classify D1 and D2 neurons based on GFP content. I used spinning-disk confocal microscopy, which allowed me to obtain single-cell resolution over large areas of the striatum. As you can see here, I used very basic automated methods to detect the transcriptionally activated neurons and then classify them, based on GFP, into D1 and D2. Using these data, I could reconstruct where in the striatum this activation was happening, and I did that for different regions of the striatum. So, to study how D1 and D2 SPNs encode these different prediction error signals, we trained animals in an instrumental learning task in which they have to learn to press a lever in order to obtain a pellet. As you can see here, after 15 days of training they become experts at this task. Then, to explore how activated, and where, D1 and D2 neurons are in response to these differences in dopamine levels, we used an extinction paradigm, a fundamental form of memory updating that prompts individuals to withhold a previously learned action, to reduce an action that no longer secures an outcome. This is a very powerful change in behavior, and it occurs very rapidly: as you can see here, after 10 minutes the animals just stop pressing the lever. There's no point in pressing if there's nothing to get out of it. So we took these animals and studied how these two systems were distributed across the striatum and how the two overlap in space. 
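One simple way to think about the spatial overlap measured in the next part of the talk is to bin the mapped cell coordinates into a grid and ask in how many occupied bins both populations appear. This is an illustrative stand-in, not the paper's actual analysis pipeline; the coordinates and bin size below are made up.

```python
# Toy overlap metric for two labeled cell populations: fraction of occupied
# grid bins that contain cells of both types.

def overlap_fraction(d1_cells, d2_cells, bin_size=100.0):
    """d1_cells, d2_cells: lists of (x, y) coordinates, e.g. in microns."""
    def bins(cells):
        return {(int(x // bin_size), int(y // bin_size)) for x, y in cells}
    b1, b2 = bins(d1_cells), bins(d2_cells)
    occupied = b1 | b2
    return len(b1 & b2) / len(occupied) if occupied else 0.0

# Populations active in separate territories -> low overlap
apart = overlap_fraction([(10, 10), (20, 30)], [(500, 500), (520, 540)])
# Populations active in the same territories -> high overlap
mixed = overlap_fraction([(10, 10), (500, 500)], [(20, 30), (520, 540)])
```

With a metric like this, "expert" maps (segregated activation) would score low and "extinction" maps (co-activation in the dorsomedial striatum) would score high.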
As you can see here, in expert animals the activation of D1 and D2 occurs in different regions, and the overlap between these two populations is very low. However, in animals that had undergone extinction (I should have said that this shows the overlap across all the animals in each group), both kinds of neurons now overlap in the same region, specifically in the dorsomedial striatum, which is known to be important for encoding the association of these action-outcome contingencies. So the question is: why does this striatal region need both kinds of neurons to encode this reward prediction error? We hypothesized that D2 SPNs may modulate plasticity in neighboring D1 SPNs during extinction learning. To show that pharmacologically, we used raclopride, which transcriptionally activates D2 neurons here, or GBR, which activates D1 neurons, and we hypothesized that in the test group, in which raclopride is injected prior to GBR, this would block the activation of D1 neurons by that compound. And this is what we see here. In this quantification of D2 and D1, you see that raclopride mostly activates D2 neurons transcriptionally, while in the GBR group it is mostly D1 neurons that are activated, and you can see that illustrated here; and in the case of raclopride plus GBR, there was a major blockade of the GBR activation of the D1 neurons. So we hypothesized that activation of D2 SPNs in these discrete striatal territories could influence dopamine-dependent plasticity in D1 SPNs, and hence reshape and update the behavior. Right, to test that, we went back to this extinction learning paradigm, depleted the D2 neurons from the dorsomedial striatum using a toxicogenetic strategy, and trained animals in the same paradigm. 
So with the extinction learning, 10 minutes of extinction learning, as you can see, in both groups there was no impairment of the acquisition of these contingencies when D2 neurons were not present in the dorsomedial striatum. But then, after the first day of extinction, we could see that extinction learning was not completely encoded in those animals in which D2 neurons were absent from the dorsomedial striatum; you can see that here. This, together with other experiments that I don't have time to show, suggested that control of D1 SPN plasticity by this D2 SPN-dependent mechanism is critical for updating previously acquired goal-directed behaviors. So, to finish: I've shown that in the striatum we have this population of neurons composed of these twin subpopulations. As I showed you, in expert animals and during initial learning, these two kinds of neurons are transcriptionally active in different parts of the striatum. When this learning needs to be adjusted or refreshed, the two kinds of neurons then occupy the same territories, and we hypothesize that D2 neurons can eliminate or inhibit the learning in the D1 neurons that is outdated. If you want more details, there's a paper that we published a couple of months ago on this. With that, I would like to thank eLife for giving me the opportunity to speak to you today, and also all the collaborators that made this work possible. Thank you so much. Great, thank you very much, Miriam. Awesome, very interesting. I'll start with the first question: how can the connectivity of the local striatal network support the D2-to-D1 modulation? How does that work? So, in some experiments that I didn't have time to show, and in others that were classically obtained with paired recordings, we found that there is a bias in connectivity between the D2 and D1 neurons. 
So, using trans-synaptic viruses, we could show that D2 neurons are more connected to D1 neurons than D1 to D2. But there are probably other mechanisms, not so much at the electrophysiological or connectivity level, but more at the neuromodulatory level, that could support this transmodulation. Let me look at what the next question is here, that Naomi pasted in. On this transmodulation: you told us about negative prediction error, as with extinction learning. Does this also happen in other types of flexible learning? In another set of experiments that I didn't have time to show today, we trained animals in reversal learning, in which animals learn the contingency between an action and an outcome, and then we reverse the action-outcome contingency. So they have to change their previously learned actions, but without a negative prediction error; the prediction error should be the same. And in this case, when depleting D2 neurons from the DMS, we had the same effect: animals were unable to refresh their previously learned action. Here's one last question, a very high-level question. It's really amazing what you can do with these engineered animals and live imaging and whatnot. Does anyone connect this to any type of experiment with humans (obviously not invasive experiments, but other types of experiments in humans)? Or is there still too far a disconnect between these two fields? Well, pharmacologically, for example, D2 receptor antagonists are widely used clinically, so these compounds are available and used broadly. It would be interesting to see how they affect learning, instead of just motor output. Cool, got it. All right, awesome. Thank you very much, Miriam. Awesome. Thank you. All right, we move on to Debanjan, who is a postdoc at the Crick in London. Have we switched over to the slide? Yeah, we have. 
So Debanjan is going to talk to us about perception and encoding of temporally fluctuating odor stimuli in mice. Over to you, Debanjan. Thank you. And I'll give you a one-minute warning, Debanjan. Okay. Thank you very much. So firstly, thanks for organizing this wonderful event, even in the midst of this corona wave, and thanks for hosting us as well. So yes, I am Debanjan, in Schaefer's lab, and we are interested in understanding how odor temporal dynamics can be encoded and perceived by mammals, especially mice. If we look at odor diffusion in the real world, what we see is not the typical square pulse that we usually deliver in a laboratory setting. If you put a PID, a photoionization detector, in any of these plumes of diffusing odors, you see the odor fluctuating as fast, rapid concentration changes, something like this. This leads to a basic question: why does this happen, and if these temporal structures are generally prevalent in odor plumes, are they useful for animals, for any of these olfaction-driven animals like mice? To answer that, we took a systematic approach to this question and built a turbulent environment in the lab, where we introduced odors in different configurations (odors placed at different spots, around 50 centimeters apart, or as a mixture) and used a fan and an object to create turbulence similar to outdoors. What we observed first is that indeed the odor plumes had rich temporal dynamics. When we placed the odors as a mixture or as a single source, we observed that the plumes arising from both odors arrived in a very correlated fashion, while when they were around 50 centimeters apart, they arrived in an uncorrelated fashion. We went further and systematically studied how odor source separation might be quantified in terms of correlation. 
And what we observed was that when we change the odor source separation distance, we indeed get a monotonic decrease in the correlation structure. As you can see in this graph, the correlation is almost at its maximum of one when we have a single source or a mixture of odors, while it drops to almost zero when the sources are just around 50 centimeters apart. This suggested that correlation might indeed be an important cue for odor source separation, which could be useful for olfaction-driven animals such as mice. So we went back to the literature and saw that there have been reports, especially in the insect world, that temporal dynamics are useful, or are encoded, in the olfactory systems of these insects. In mammals, however, this is largely an open question, with really little evidence that temporal dynamics are used or encoded by mammals like mice. So we turned to the question: can mice perceive correlation structures in odors? To do that, we first needed to build an olfactometer that can reliably deliver odors with very precisely controlled timing; for example, something like this at around 50 hertz, where we can reliably see the odor fluctuating at around 50 hertz. We did that using high-speed valves, and we observed that the signal fidelity was largely similar for all the frequencies we used, even up to 100 hertz, while the amount of odor released was tightly controlled. With this, we created a basic plume structure where we have a correlated odor stimulus, in which two odors fluctuate perfectly in phase, and an anti-correlated stimulus, in which they are completely out of phase. We wanted to test whether mice can discriminate between these two stimuli. 
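The correlated versus anti-correlated stimuli described here can be sketched as two square-wave odor channels at the same frequency, either in phase or in anti-phase. This is a minimal illustration of the stimulus logic, not the actual valve-control code; the frequency and sample rate are illustrative.

```python
# Two square-wave odor channels: in phase (correlated) or anti-phase
# (anti-correlated), compared with a hand-rolled Pearson correlation.
import math

def square_train(freq_hz, duration_s=1.0, sample_rate=1000, phase=0.0):
    n = int(duration_s * sample_rate)
    return [1.0 if math.sin(2 * math.pi * freq_hz * t / sample_rate + phase) >= 0
            else 0.0 for t in range(n)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

odor_a = square_train(20)                 # odor channel A at 20 Hz
correlated = square_train(20)             # channel B in phase with A
anti = square_train(20, phase=math.pi)    # channel B in anti-phase with A
# correlated pair -> r near +1; anti-correlated pair -> r near -1
```

Crucially, as noted in the talk, both stimuli deliver the same total amount of odor and airflow; only the temporal relationship between the two channels differs.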
That's one of the basic questions for understanding whether mice can actually use odor temporal dynamics in a fruitful manner. Of course, we ensured that the flow in these two stimuli was constant, so that the animal had no other cue with which to do the task. We then plugged that odor device into this homegrown, fully automated operant conditioning system, which we call AutonoMouse. Animals can live happily in there for up to a year. They have free food and live in an enriched environment downstairs, while to get water they need to climb upstairs and enter this tunnel, where access is strictly controlled with this magnetic door. There they are first identified via an RFID chip on the neck, and once they are there, they are given a specific set of trials. The trial is to identify whether the odor stimulus that comes up is a correlated or an anti-correlated odor stimulus. This is the schema of what we are doing: half of the animals were asked to report whether the stimulus is a correlated odor stimulus, and they get a water reward if they answer correctly, while the other half do the reverse. What we see from our star animal is that it can indeed discriminate correlated from anti-correlated stimuli, and furthermore it can even do that at a very high frequency, up to 20 or 30 hertz, which is way beyond the sniff frequency. We did various controls; I don't have time to go through all of them, but one of the important ones is the switch control: in any of these tasks we use valves to create the odor stimuli, so any clicking sound can be a problem, an extra cue. 
So we changed the valve assignments in this control and found that the animal was still able to discriminate the correlated from the anti-correlated stimulus. Now, looking at individual examples, we see that most of the animals could discriminate correlated from anti-correlated stimuli as we increased the number of trials. Further, when we increased the frequency, thereby increasing the difficulty, their performance tended to decrease. Looking at the entire cohort of animals (we have done almost a million trials by now), we see that the whole cohort could perform the task very well, with a very high accuracy level of around 80% for the low frequencies, and at 10 or 20 hertz they are still way above chance, suggesting that they can do the task quite well at all these frequencies. What is surprising is that the frequency they can do this up to is around 20 or 30 hertz, way beyond the sniff frequency. We also tested what happens with a more natural plume, which is not anti-correlated but rather uncorrelated; we did that at 10 hertz and saw that the animals could perform the task in that case as well. We tested a different odor pair, acetophenone and cineole, for which it also worked quite well. Now, when we looked at the reaction times, we observed that the best-performing animals tended to increase their reaction time as we increased the difficulty of the task, suggesting that they were taking in a chunk of the stimulus rather than just the first few blips; that is, they were not just using the first square pulse to decide whether it was a single odor or a mixture of odors, but actually taking into account the temporal structure of the plume. 
When we classified performance based on reaction times, we found that the animals performing better than chance, that is, those actually taking into account the temporal structure of the stimulus, do take a lot of time, around 750 milliseconds, suggesting that the temporal structure is indeed being used by these animals to do the task. From all we have so far: correlation among incident odor plumes is an essential signature of source separation, and mice can indeed discriminate between correlated and anti-correlated odor plumes. However, what we still don't know is whether, in the olfactory bulb... yes, Detlef? Yeah, one more minute, Debanjan. Yeah. ...whether the frequency content in the odor is being used or not. So we turned our eyes to the early olfactory system, the olfactory bulb, and tried to see what happens in the tufted and mitral cells. We went in with a silicon probe and found that, in general, the rasters did not show a very distinct difference for the different frequencies we had given. However, when we put the spike timings through a linear classifier, the classifier could indeed discriminate between the different spike trains. This motivated us to do a patch-clamp experiment to find out whether there is some subthreshold membrane potential discrimination between the frequencies. We picked two frequencies, 2 and 20 hertz, recorded the membrane potential, and cross-correlated it with the PID. What we calculated is a coupling coefficient for all the cells we recorded, and we tried to find how much the coupling coefficient varies for a single cell at a given frequency. 
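The idea behind the coupling coefficient can be illustrated with a toy calculation: how strongly does a cell's membrane potential track the odor (PID) signal? The stand-in definition below, peak correlation over small lags, is our assumption for illustration and not necessarily the paper's exact metric; the signals and noise levels are synthetic.

```python
# Toy coupling measure: peak absolute Pearson correlation between a membrane
# potential trace and a 20 Hz "PID" signal, over a small range of lags.
import math
import random

def coupling(vm, stim, max_lag=25):
    def corr(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / math.sqrt(va * vb) if va and vb else 0.0
    return max(abs(corr(vm[lag:], stim[:len(stim) - lag])) for lag in range(max_lag))

random.seed(0)
stim = [math.sin(2 * math.pi * 20 * t / 1000) for t in range(1000)]  # 20 Hz signal
follower = [s + random.gauss(0, 0.3) for s in stim]    # Vm that tracks the stimulus
nonfollower = [random.gauss(0, 1.0) for _ in stim]     # Vm that does not
```

A cell whose subthreshold potential follows the odor fluctuations yields a high coupling value; a cell uncoupled from the stimulus yields a value near zero.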
And what we observe is that, for both 2 and 20 hertz, a certain number of cells had a coupling coefficient above chance levels, suggesting that they could indeed follow the frequency in the stimulus. Further, when we gave the temporal structure, that is, the correlated versus anti-correlated patterns, a large fraction of neurons followed but could not discriminate between the two stimuli, while there were a handful of neurons that encoded the stimuli differently, showing a nice difference in the subthreshold membrane potential as well as in their PSTHs. Altogether, if I can conclude: naturally spreading odors have rich dynamics; chemicals coming from the same source fluctuate in a highly correlated manner, while those from separate sources become uncorrelated, even for closely spaced sources; and mice can detect correlation structure up to a bandwidth of greater than 20 hertz. We observed that olfactory bulb neurons can follow the frequencies, both at 2 and 20 hertz, and that certain olfactory bulb neurons can encode temporal mixtures differently. I would like to thank, of course, eLife, and the Schaefer lab and Andreas himself for conceiving the project, especially Tobias and Andrew, who have been equal contributors to the project. Thank you so much. Any questions? Thank you, Debanjan. Very cool; it's really amazing that the mice can do this. So we'll start right away with a question. Benjamin Stauch is asking: can humans perform the same correlated versus anti-correlated discrimination, and what would be the associated percept? So this is a very cool question, and of course the next step. We actually don't know what happens for correlated versus anti-correlated, but people might know a recent paper from Rafi Haddad where humans have been tested with temporal structures in general, and it seems that they have the capacity to discriminate temporal structure in odors. 
However, it might not be a very straightforward stimulus for them. What was it, do you remember, the frequency that they used? I think they did not use a frequency per se; they were using different spots at different time points. Okay, so no frequency per se. Right, cool. Okay, a question from Vinod: can innately aversive odors for mice be converted into attractive odors when the odor is correlated with another odor? A classical type of learning. That's another cool question. Again, we never tested this, but it's maybe possible. What I guess Vinod is asking about is an odor identity change. This is something like what Paul Szyszka has been trying, and he has actually published some aspects of it: if you pair another odor in a correlated or anti-correlated fashion, you can change the odor identity to some extent. But again, largely untested. Maybe I'll ask a very naive outsider question: that 50 hertz really blows my mind. Did you expect that they would perform so well above 20 hertz, or not? Sure, sure. So that's actually the big surprise from what we observed in the behavioral task, especially because the sniff frequency we have observed so far does not go beyond 16 hertz, even in freely moving animals, which we have measured with completely wireless sniff recordings; that data has come up recently. And that suggests that there should be sub-sniff computation going on in the mice. We have done some preliminary tests using glomerular imaging (Tobias in the lab has done that), and we could see very fine discrimination at the glomerular level, again suggesting a sub-sniff level of computation. Yeah. So we have one last question, from Naomi. This AutonoMouse cage is a really amazing contraption. How do the outcomes in this setup compare to when you have separate chambers? Are the mice less stressed? Sorry? Are the mice less stressed? 
I mean, the researchers, I assume, are less stressed because it works by itself, but are the mice also less stressed? Of course they are less stressed, because there is very, very minimal human interaction. That's one; and two, most behavioral tests, when they are not done in this kind of automated fashion, are done in the daytime, while mice are actually nocturnal, so the animals are stressed by that as well. Here, the mice do a lot of the tasks at nighttime, which really reduces a lot of stress. Nice, nice, more natural. All right, great. Thank you, Debanjan. Thank you very much indeed. All right, great, brilliant. So we move on to our third and last speaker today, Alexandra Tzilivaki, who is a PhD student at the Charité in Berlin, and she'll be speaking to us about interneurons' intelligent operations: it's a matter of non-linear dendrites. Alexandra, over to you. Okay, thank you very much, Detlef. Can you see my screen? Okay. So thank you very much for the opportunity to present my recent modeling work, which took place in the Poirazi lab in Greece. Very, very briefly: interneurons are beautiful cells that constitute one of the main cell types in the mammalian central nervous system. Interneurons are GABAergic inhibitory neurons and can be divided into many subclasses. Each particular interneuron subclass has its own features regarding morphology and its own personality, let's say, in terms of molecular and protein markers. Multiple interneuron subclasses have a prominent role in executive function, as well as in neurodegeneration. In this particular study, we were mostly interested in the dendritic computations of fast-spiking basket cells, which belong to the parvalbumin-positive interneuron family. Fast-spiking basket cells have very thin and aspiny dendrites with calcium-permeable AMPA receptors, low levels of NMDA receptors, a high density of potassium channels, and a low density of sodium channels in the dendrites. 
Okay, but in general, and this goes for all interneuron classes, the previous hypothesis was that these neurons are simple on-off cells. That is, according to the point-neuron doctrine, interneurons were considered neurons with, let's say, not very intelligent dendrites; rather, with linear dendrites that simply sum their inputs and are not able to perform any kind of clever computational function. In contrast, however, recent experimental evidence suggests that in many brain regions, for example in CA1 or in the cerebellum, multiple interneuron classes have clever, nonlinear dendrites. So that was the goal of our modeling study: to unravel whether a linear point neuron, or a more sophisticated abstraction like the two-stage nonlinear artificial neural network that we know represents pyramidal neurons, can successfully capture the synaptic integration profile of fast-spiking basket cells in CA3 and in the medial prefrontal cortex. Toward that goal, we utilized anatomical reconstructions from rodents, from CA3 and the prefrontal cortex, with the dendritic trees shown in red and the axon in gray. The first step in answering our question was to extensively validate our 3D in silico interneurons, to make sure that our model cells have the same electrophysiological properties and respond the same way as the real thing, the in vivo or in vitro fast-spiking interneurons. So, trust me, these are very good models. The first question that we put to our models was: how do the dendrites respond to the activation of uniformly distributed synapses, from 1 up to 20 synapses? According to the point-neuron dogma, if the dendrites of fast-spiking basket cells were linear, then the actual EPSP amplitude should follow something like the dashed linear line you see here. 
But you see that this is not what happens here. Surprisingly enough, we found out that in all of our models, and across around 600 dendritic branches that we tested, dendrites can be divided into two nonlinear types. There is a supra-linear one, shown in green, where the dendrite is able to produce a dendritic spike, and the dendrite can also be sublinear, which means that this dendrite is not able to produce a dendritic spike and saturates, as you see here. I want to highlight that each particular cell has a different percentage of supra-linear and sublinear dendrites, but the two modes coexist in all eight cells tested. So the natural question that arises is: why are some dendrites able to produce spikes and be supra-linear while some others are not? To answer that question, we first focused on the active membrane mechanisms of the dendrites, and we found out that if we block the sodium channels in the dendrites, the supra-linear dendrites are no longer able to produce a dendritic spike and become sublinear; so supra-linear dendrites have dendritic sodium spikes. The next question that arises is: okay, given that we do not have different sodium conductances between supra-linear and sublinear dendrites, why can some dendrites exhibit sodium spikes while other dendrites in the same cell cannot? Maybe morphology has something to tell us, and the answer is yes. We found out that in the prefrontal cortex models and in the hippocampal models, we have statistically significant differences in the length and in the diameter of the dendritic branches that belong to the supra-linear or the sublinear category. We then tested the volume of the branch, and we found out that both in the hippocampal and in the cortical fast-spiking basket cell models, the supra-linear dendrites, which are thicker, have larger volume values and lower input resistance. On the other hand, the sublinear dendrites in our case are thinner, with low volume values and high input resistance. 
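The morphological classifier described above can be sketched as follows. Approximating each branch as a cylinder gives its volume from length and diameter; the specific branches and the volume threshold here are hypothetical, chosen only to illustrate the idea that thicker, larger-volume branches fall in the supra-linear class:

```python
import math

def branch_volume_um3(length_um: float, diameter_um: float) -> float:
    """Approximate a dendritic branch as a cylinder and return its volume."""
    radius = diameter_um / 2.0
    return math.pi * radius ** 2 * length_um

# Hypothetical branches (length, diameter in microns) and an invented
# volume threshold separating the two integration modes:
branches = {"thick_branch": (120.0, 1.0), "thin_branch": (80.0, 0.3)}
VOLUME_THRESHOLD = 20.0  # um^3, illustrative only

for name, (length, diam) in branches.items():
    vol = branch_volume_um3(length, diam)
    mode = "supra-linear" if vol > VOLUME_THRESHOLD else "sub-linear"
    print(f"{name}: volume {vol:.1f} um^3 -> predicted {mode}")
```

The study itself derives the separation statistically from the reconstructed morphologies rather than from a single fixed threshold; the sketch only captures the direction of the relationship.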
So dendritic volume can very well classify the two integration modes that I referred to earlier. And how causal is this morphological mechanism? It is causal, because thanks to modeling we are able to change the morphology of the dendrites. So here is the control, what we have in the anatomical data; then we can set the mean length and diameter of supra-linear branches to all dendrites, and you see that we can convert almost all dendrites to being supra-linear. With the same reasoning, we can set the mean length and diameter of sublinear branches to all dendrites, and then you see that most of the dendrites become sublinear. So to sum up, we have two types of non-linear modes, supra-linear and sublinear. In the supra-linear case, we can have a dendritic sodium spike, and this is achieved through specific morphological features, let's say the volume that these branches have. Okay, so we next attempted to see the effect of the non-linearity in shaping the neuronal firing of these neurons. Towards that goal, we activated excitatory synapses, up to 60, using 50 Hz Poisson spike trains, in all of our model cells; you see here two indicative cases. These synaptic inputs can be allocated with different spatial allocation patterns. The first one, in light blue, is the dispersed one, where we, for example, activated 60 synapses in a dispersed way, which means scattered throughout the whole dendritic tree. In the grouped way, which is actually the clustered way, we activated the same amount of synaptic input, but only within a few dendritic branches. And we see here that fast-spiking basket cells have the intelligence to understand not only the amount of synaptic input that is coming in, but also the spatial allocation profile of the synapses. 
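One intuition for why a cell with saturating (sub-linear) branches responds differently to dispersed versus clustered input can be sketched with a toy model. The tanh branch nonlinearity, the gain, and the branch counts here are all invented for illustration; the study uses detailed biophysical models, not this abstraction:

```python
import math

def branch_output(n_synapses: int) -> float:
    """Saturating (sub-linear) branch nonlinearity: tanh of summed input.
    Purely illustrative; the 0.2 gain is an arbitrary choice."""
    return math.tanh(0.2 * n_synapses)

def cell_response(allocation):
    """Sum the outputs of all branches for a given synapse allocation,
    where allocation[i] is the number of synapses on branch i."""
    return sum(branch_output(n) for n in allocation)

n_branches, n_synapses = 30, 60
dispersed = [n_synapses // n_branches] * n_branches   # 2 synapses per branch
clustered = [20, 20, 20] + [0] * (n_branches - 3)     # same total, 3 branches

print(cell_response(dispersed))  # larger: each branch stays unsaturated
print(cell_response(clustered))  # smaller: the three clusters saturate
```

With saturating branches, spreading the same 60 synapses across many branches yields a larger summed response than packing them onto a few, so the cell's output distinguishes the two allocation patterns even at equal total input.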
It was previously proposed that the preference of these interneurons for dispersed spatial allocation of synapses is due to a combination of the diameter of the branches and the existence of A-type potassium channels. To test this proposition, we ran a simulation where we set the diameter of all the dendrites equal to two microns and we blocked the A-type potassium channels. And as we see here, our model predictions agree, as now we don't have this preference. So, okay, in that case it seems that the linear point neuron dogma may not be a good approximation to describe these cells in a deep learning network, for example. So what is the best way, the best mathematical formalism, to describe these cells? Towards that goal, we implemented artificial neural networks that have the same parameters except for the hidden layers. One minute. You have to speed up a little bit, Alexandra. Okay, so here you see the linear point neuron that represents the point neuron dogma, and here we see the two-layer modular ANN where we have the supra-linear and sub-linear dendrites inside. And we see here, according to the regression analysis that we did for all our cells, that in all cases, in all cells, and for a large number of synapses, the two-layer nonlinear artificial neural network is superior to the linear one and can very well capture the spike rate variance that is generated by our biophysical models. Last but not least, let me very briefly tell you that we finally investigated the functional implications of our findings. My colleague, George Kastellakis, created a biophysically plausible and morphologically simplified microcircuit model that includes nonlinear dendrites in the fast-spiking basket cells. George initially trained the model to encode a single memory. 
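The two abstractions being compared, a linear point neuron versus a two-layer network with per-branch nonlinearities, can be sketched as below. The weights, inputs and branch grouping are invented for illustration; this is not the fitted model from the study, and sigmoid is used for both stages only as a generic stand-in:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def linear_point_neuron(inputs, weights):
    """Point-neuron abstraction: one weighted sum, one output nonlinearity."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)))

def two_layer_neuron(inputs, weights, branches):
    """Two-layer abstraction: each dendritic branch applies its own
    nonlinearity before the somatic sum. Illustrative only."""
    branch_outs = [
        sigmoid(sum(weights[i] * inputs[i] for i in idx)) for idx in branches
    ]
    return sigmoid(sum(branch_outs))

x = [1.0, 0.5, 0.0, 2.0]           # hypothetical synaptic drives
w = [0.3, 0.8, 0.5, 0.2]           # hypothetical weights
branch_groups = [[0, 1], [2, 3]]   # two hypothetical dendritic branches

print(linear_point_neuron(x, w))
print(two_layer_neuron(x, w, branch_groups))
```

The point of the comparison in the talk is that, across cells and synapse counts, the two-layer form tracks the biophysical models' spike-rate variance better than the single weighted sum can.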
The properties of the excitatory engrams were assessed by analyzing the activity of the excitatory neurons during recall, 24 hours after the learning event. The results indicate that the bimodal nonlinear dendrites of fast-spiking basket cells lead to smaller excitatory engrams, and that the neurons in the excitatory engrams fire less, so we have increased sparsity. The results suggest that the nonlinear dendrites of fast-spiking basket cells might help to avoid memory interference in the neuronal circuit. So this is our conclusion; the work is available for you to read, and I'm very happy that we have new experimental evidence that supports our modeling predictions. I would like to close by posing a question: maybe it's time to stop thinking of interneurons as simple on-off, dumb assistant cells, and maybe it's time to consider them as equal colleagues to their excitatory pyramidal counterparts that together enable us to learn and remember. Thank you very much, and stay safe. Thank you, Alexandra. Thank you very, very much. Brilliant. Again, really, really awesome; I enjoyed that very much. So I'm looking at the first question here, from somebody who says they are quite far outside the field: thinking about supra-linear versus sub-linear, what would that translate into in computing and learning, when you have an ANN or a neuron that behaves in a supra-linear or sub-linear fashion? Okay, okay. In very simple words, supra-linear and sub-linear are in any case kinds of non-linear functions, and we know, basically from excitatory pyramidal neurons, that when a neuron has that kind of non-linearity in the dendrites, it means that the dendrite can transform the incoming inputs in a non-linear way. It can do math: it can multiply, it can divide, it can subtract or add, etc. And this enables the cell to respond to the information that is coming in from afferent axons in many different ways. 
For example, the generation of the dendritic spike is a way to enhance the incoming signal that at the end passes through to the soma. So it's a way for a neuron, firstly, to understand the synaptic inputs, and secondly, to respond to that input in many different ways. It endows the neuron with increased computational capabilities. Yeah, yeah. Okay, cool. Okay, so that question was from Alice Miserai. So, thank you. Thank you. So one more question, this one from Romain Cazé: a two-layer neural network is a universal approximator, so it will obviously do a better job of approximating the behavior. Is there any other argument in favor of a two-layer artificial neural network? Okay. First of all, before answering, I have to say that I presented here what is the best ANN formalism for this particular type of interneuron. As I said in the beginning, interneurons are very diverse, so do not take it for granted that this ANN formalism is the most suitable for all other interneuron classes. We need to find out the computations of, for example, somatostatin-positive and other types of interneurons. Okay, so actually we don't love the two-layer non-linear ANN; we are not a priori in favor of that one or of the linear one. We have to understand and find out what the best way is to describe these cells. Because we found out that the dendrites of these cells spike, let's say, following a logistic sigmoid function or a logarithmic function, it seems that the two-layer ANN can better describe them. Okay, so that's it. Great. All right. Thank you, Alexandra. We had one more question from Vinod, but I think maybe we can take that offline because our time is almost up. So thank you, all three panelists. Great presentations. Thank you to all those who asked questions. I realize that just being the moderator has already been pretty stressful, so it must have been a lot more stressful for the three of you. 
I should mention that this seminar was presented in collaboration with Neuromatch. It's an online conference for computational neuroscience which continues in a few hours' time and runs until 10pm Eastern daylight time, so 3am British summer time. For the full agenda and instructions on how to join, you can go to neuromatch.io. I should also mention that, as deputy editor of eLife, I'm really super, super happy we are doing this series. Right from the beginning, when we founded eLife, that was actually something that was really important to us: to be more than just a place for publication. We want to change the research culture, we want to change how people interact and how people communicate, and we especially want to serve early career researchers; this was all very much in the eLife ethos. That's the whole series, and so the next installment is going to be on April 2, the day after tomorrow, at five o'clock British time, with Katrin, Meaghan Crete and Ilke Spock. I think that's about it. Let's give our speakers another round of applause, and with that we'll sign off. Thank you all. Thank you very much for the great initiative. Thank you. Bye bye.