Today's speaker for the TSVP talk is Dr. Merav Stern. So, welcome to OIST. Merav did her PhD at the Hebrew University, where she worked on the cerebellum with Professor Yosef Yarom. She also worked with Professor Larry Abbott at Columbia University on chaos in random neural networks. She then took a research position at the Swartz Center at the University of Washington, and she is currently a researcher at the Hebrew University and also affiliated with the University of Oregon. Today she's going to talk about modeling networks: how neural connectivity transforms space into time. And I forgot to mention: she was also a tutor for OCNC from 2016 to 2018 — thank you very much for your help. Okay, so, Merav, please start the lecture.

Thank you for the introduction. As you said, for me it's just great to be here again, and it's a great honor to be part of the Theoretical Sciences Visiting Program, really. I knew before coming that there are amazing, amazing scientists here, so it's really a great pleasure. And a special thanks goes to Leonis and Hoka, who really worked extremely hard to bring us here, in one case despite all the difficulties. So, yeah, let me start the talk. It came out a bit ambitious and — ah, you cannot hear me? Okay. So I hope I can cover what I've put in, but at the same time I would like to be understood, so please feel comfortable asking questions during the talk. I know what's in the talk; if something is not clear, please ask.

So, as I'd like to start: there's a lot of clutter in my head, and lots of thoughts running around in it — a big mess. I don't know about your heads, but our brains are a bit messy, and they are a very complex system. I think one of the most complex questions of our time is how they work. Now, they're complex at every possible level.
So if we look at brains and what they're constructed of: you start with molecules and proteins, and they interact with each other, and making sense of that is difficult — but it doesn't stop there. The molecules and proteins build and fill the neurons, the basic cells, and the neurons themselves interact. And if you get the hang of it, you can see what's coming: neurons by themselves create local networks, these local networks have activity, and they interact with each other, so these local circuits assemble into brain areas. And somehow, from all this clutter of things, cognitive functions emerge, right? So as babies learn to sit, walk and eat, they also learn to interact and talk and laugh, and we do all sorts of things. So there's a big question hanging here: how do we even approach that? Before we even phrase a specific question, what approaches can we take to make sense of this big clutter of things? I can think of a few ways, and I think people take all of them. One is to tweak something small — a deficiency of some protein, say — and see how it affects our ability to talk, walk and so on, one of the cognitive functions. It's a great approach for jumping over all the intermediate levels, but it doesn't give a mechanistic understanding of how things interact and how things build up. It's possible, but I think it's very challenging to build a full mechanistic understanding using this approach. What other people do is look at relationships within a level — for example, how presynaptic neurons affect postsynaptic neurons, how they interact and talk to each other. So you remain on the same level and you get a mechanistic understanding, but you lack the ability to get the bigger picture. Again, it's a great approach.
But what I try to do is really get a mechanistic understanding and, at the same time, an idea of how the bigger picture looks: what are the principles by which I can make sense of all the clutter and move on to the next level, so that eventually I can explain cognitive functions. The hope is that we can do this at every level and keep moving up. Of course, we can keep searching within the same level for all the details and keep correcting, but we can also climb one step up every time. And to give an example from outside neuroscience — something hopefully very understandable — the principle, maybe the philosophical principle, behind this approach shows up in examples that I think are obvious to all of us. Take magnets: they have strength, and we all know what magnets are. But they become magnets because there are so many electrons running around inside them, each with a spin. So, to tell you how a magnet works, I don't need to describe all the different electrons running inside. However, because we understand how these electrons move and interact, the energy in the spin system, and the fact that the net spin is non-zero, we understand the idea of a magnet, which is the bigger picture. And there's another example I just couldn't resist in Okinawa, which is heat. There are plenty of air molecules running around us, of different types, and they hit our bodies, right? That's why we feel heat. So I don't need to start describing all the different molecules hitting my body right now; I can tell you it's hot. And here I can also tell you there's a lot of water in the air, and therefore it's humid. So I have one number — two numbers, in this particular case — for making sense of many, many, many particles.
And this is the idea when we look at neurons and move up to network activity. I think Linoy was here; maybe she has mentioned similar things, because she uses somewhat the same family of approaches that I do. And before we move on, one more example, from neuroscience this time, which will also build up toward what I do: the very famous Hodgkin and Huxley model, which really does the same thing. What they did was look at the average activity of many channels to explain how neurons change their voltage. Just very quickly: neurons have this ability to hold a voltage difference between their inside and outside. They do it by having these holes in their membrane — channels — which control the ions coming in and out. There are millions of channels in every neuron, but instead of modeling every single channel and the ions moving through each one, what they did was look at the probability of a channel of a given type being open or closed — working or, in a sense, stopped — take the average over each channel type, and describe how the average of one channel type impacts the average of the others. With that they were able to explain what neurons do, which is to fire, or spike — to raise their voltage suddenly. So they took this humongous mass of channels, looked at the average, and gave us the picture at the neuronal level rather than at the molecule and protein level. Right, so that's the idea of looking at the mean. And I will use the idea of neurons spiking and interacting with each other to move to the next level. So, neurons create connections — synapses — and once they spike, they send signals. And we need to connect them, right, to build a network: which neuron is connected to which, to be able to move to the next level.
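The channel-averaging idea behind the Hodgkin–Huxley story can be sketched in a few lines. This is a minimal toy sketch, not the actual Hodgkin–Huxley equations: I assume a single two-state channel type with made-up opening and closing rates, simulate many independent stochastic channels, and compare the fraction open to the mean-field gating equation dn/dt = α(1−n) − βn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-state channel: closed <-> open, with assumed rates (per ms).
alpha, beta = 0.5, 0.2        # opening / closing rates (hypothetical values)
n_channels = 10_000           # many channels, as in a patch of membrane
dt, t_max = 0.01, 20.0        # ms
steps = int(t_max / dt)

open_state = np.zeros(n_channels, dtype=bool)   # all channels start closed
n_mean = 0.0                                    # mean-field gating variable

for _ in range(steps):
    # Stochastic channels: each closed channel opens w.p. alpha*dt, etc.
    u = rng.random(n_channels)
    opens = ~open_state & (u < alpha * dt)
    closes = open_state & (u < beta * dt)
    open_state = (open_state | opens) & ~closes
    # Mean-field ODE for the open probability (Euler step)
    n_mean += dt * (alpha * (1 - n_mean) - beta * n_mean)

frac_open = open_state.mean()
print(f"stochastic fraction open: {frac_open:.3f}")
print(f"mean-field prediction:    {n_mean:.3f}")
```

Both numbers settle near the steady state α/(α+β) ≈ 0.714: one average tracks the humongous mass of channels, which is exactly the reduction the talk describes.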
I come from a cerebellum lab, so I have to say there are also gap junctions; we'll put them aside for this particular moment. They are another way for neurons to interact, but for this talk it doesn't matter too much — you need to tweak the models a little, but still there is a presynaptic and a postsynaptic neuron, they interact, and we want to understand the network level of things. Right, so to do that, like I said, we have to understand how they're connected. And now a whole different set of stories opens up, because in order to understand network activity we have to understand how the network is organized. There are many diverse ways for networks to be connected. Many of them have been studied here, and this is just one example, of clustering. A colleague did the very hard work of reconstructing morphologies of neurons in the inferior olive, and what she found — together with a whole group of people working very hard on this — is that neurons tend to cluster together. So they form a cluster: if these are the neuron bodies in black, then in blue you see their dendrites, and they kind of hug each other in a cluster of neurons. So clustering — a few neurons strongly connected among themselves and relatively more weakly connected to the rest of the network — is a very robust and common phenomenon, not just in the inferior olive but really across the whole brain, and I will talk about the meaning of clustering from the dynamical point of view. Another example, which was studied here and which I studied myself for a while, is the idea that excitatory and inhibitory neurons come to balance each other exactly when they influence neurons downstream.
So you see that in this particular study the inhibitory and excitatory currents into a neuron exactly balance each other, and this means that when we come to connect the network, we have to make sure the connections balance each other — so this is a constraint on network connectivity. And what I'll start with, the last thing I mention here, is cell-type-dependent connectivity, which we also see across the brain. If cells belong to some particular subgroup — here in red, in layer two or three of cortex — then a different subset of cells has a better chance of being connected to them, just because they are related to this group. So the chance of being connected to other neurons depends on the subgroup that a particular neuron belongs to. There are also inhibitory neurons here that do not care about this kind of differentiation between the groups. But the idea in general is that connectivity — the chance of connection, the strength of connection — can depend on the type, or just on the particular subset of neurons that a neuron belongs to. Yes? Yes, I'm talking about the bigger picture, so preferential attachment is one example. Right, and there can be many examples: it can be because the network is feedforward, for example, so neurons are strongly connected to neurons downstream; it can be the orientation organization in the visual cortex, for example, where neurons are more connected because they're more correlated in activity — many reasons. I'm not going to talk about the reasons; I'm going to talk about what they cause. Right, so the talk is a bit ambitious. And I hear your promise to tell me when I'm getting near the end — but there's a clock, actually, so that will be good.
I'm going to show you how cell-type-dependent connectivity influences the dynamics — and the example I'm going to bring is actually from neurogenesis. Then I'm going to show you how clusters — and I promised, and I will get to that for sure — how clusters change spatial characteristics into timing capabilities, which we think is a very major, important property of clustering. And the last thing I will show you, hopefully, is that feedback inhibition is actually an antidote to the clustering, because what it does is take the timing and transform it back into spatial properties. So let's dive into the cell-type-dependent case. And the idea, like I said, is to look at a very general question: what does the division into cell types do to the network? I don't want to create any particular scenario; I really want to look at the general phenomenon of dividing networks into subgroups. To do that, I'll start with no subgroups. So typically, in order to study networks, what we do is write down the two-dimensional connectivity matrix — it's just a list of the connections between neurons. I order all the neurons here along the rows, and the same neurons along the columns, and every small element is the strength of a particular neuron being the presynaptic driver of some postsynaptic neuron. And I collect all these connections into a network; it can be sparse — that honestly does not change any of the results I'm going to show you — so sparse or not. So I list them here, but instead of writing numbers, which is very confusing, I use colors, so the different strengths are very obvious. And there's no order in this network: these are just random numbers drawn from some distribution.
I know randomness kind of gets the experimentalists on edge, but this is just to mimic a network without knowing anything about it. So I mark this matrix as J, and basically all that's written here is that I've taken these numbers from some random Gaussian distribution. Yes — and I need dynamics. This is a bit on the mathematical side, but the mathematics will actually decrease as we progress. So, I need dynamics, and basically this says that every unit in my network is driven by itself toward stability — accounting for the leakage, in a sense — and by the rest of the network: there is a contribution from all the other units, this sum here, weighted by their activity, and the activity of each unit is some nonlinear function of the input that the unit is getting. Right, that's the very basic idea of firing rates as a nonlinear function: they get some input and as a result they increase their firing rate. And I'm going to work with tanh, the hyperbolic tangent. So if there are mathematicians here, they might say, well, this is not exactly a firing rate — the zero should be here — and I say, well, I also don't care about the particular numbers; I will show you that I can extract the phenomena that interest us in this part. And so, if we have just a network: there is a connectivity matrix for everybody, and there is one kind of dynamics that describes them all, applied to this random network of connections that I have. And what's interesting about these networks — and why I say "uniform" — is that every single neuron in this network sees, on average, exactly what every other neuron sees. Because the network is big and I drew the connections randomly, and there's a humongous number of neurons, the picture that a neuron sees around it is the same everywhere. There's no difference where you are in the network; there's no difference between the neurons.
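The dynamics just described can be sketched in a few lines. This is a minimal sketch assuming the standard rate-model form dx_i/dt = −x_i + Σ_j J_ij tanh(x_j) with J_ij drawn i.i.d. from a Gaussian of variance g²/N; the network size, gain values, and the Euler integration scheme are my own illustrative choices, not from the talk.

```python
import numpy as np

def simulate(g, N=500, t_max=100.0, dt=0.1, seed=1):
    """Rate network: dx/dt = -x + J @ tanh(x), with J_ij ~ N(0, g^2/N)."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 1.0, size=N)          # random initial condition
    for _ in range(int(t_max / dt)):
        x += dt * (-x + J @ np.tanh(x))       # Euler integration step
    return x

# Below the critical gain g = 1 the activity decays to zero;
# above it the network sustains its own ongoing fluctuations.
x_weak = simulate(g=0.5)
x_strong = simulate(g=1.5)
print(f"g=0.5: final activity spread = {np.std(x_weak):.2e}")
print(f"g=1.5: final activity spread = {np.std(x_strong):.2f}")
```

The single number g — the spread of the connection strengths — decides the fate of the whole uniform network, which is the transition the talk turns to next.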
And this is a very important point for modeling these systems: just like the electron spins in the magnet, on average every electron — and on average every neuron here — sees the same thing. So actually we can replace the whole network with just one input, and we can model every neuron with just one representative neuron. So I really need only one dynamical equation for this type of network. And of course, when you do this correctly — when you have this one equation and you fit it to the properties of the whole network — it should give the same results; it should reproduce the full network, the autocorrelation and how it changes in time for the full network. What was found was very interesting: if you connect the neurons strongly enough — if your gain is strong enough — the network stops decaying and becomes active; here is the activity of neurons across time. And there's really one number that matters for the whole network, and that's the variance of the strengths of the connectivity. Basically, if the connectivity is strong enough, the network is going to be active; if it's not, the network is going to decay. And now comes the story of the cell types. What cell types do is change this picture. When you have cell types, the connectivity between two groups depends on the groups, so we modeled it as a change in the strength of connectivity — meaning the variance of the distribution we draw the numbers from is different for different pairs of groups. So the picture a neuron sees when it looks at the network and its inputs depends on which group the neuron belongs to, but neurons of the same group would, on average, see the same picture. And so now, instead of the unstructured connectivity matrix...
...we have what we call a block structure. So I order the neurons once again, now by cell type, by the subgroups they belong to, and here there will be strong connections, here weak connections. When we started working on this problem, it turned out that these matrices have properties that had not been worked out by the mathematical community. We kind of got stuck at that point, but we continued thinking about the activity of these networks. So, for the dynamics, we now need to write an equation such that every neuron has a sum over the groups coming in, and every group gives an input to that neuron with a strength that depends on that group. And actually we cannot write a single equation for the whole network anymore; rather, we need to write an equation for every cell type, for every subgroup, because every neuron in a subgroup sees, on average, the same thing. And it turns out that this number of groups — here I have three different groups, right — defines a new matrix of connectivity, and this new matrix of connectivity is the impact of every group, on average, on every other group, on average. This new matrix combines both the size of the group and the strength: the bigger the group, the more possibilities for connections, and the stronger the connections, the bigger the impact. And it turns out, mathematically, that this reduced matrix — which is very easy to deal with, because it's just the number of groups, two, three, five, rather than the original matrix with hundreds of thousands of entries — actually describes all the properties of the full matrix. And that was actually new for the mathematical community, and we found it by thinking about the activity.
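The reduced-matrix claim can be checked numerically. My sketch below assumes the result takes the form reported for block-structured random matrices: with group fractions f_b and block variances g_ab²/N, the spectral radius of the full matrix equals the square root of the largest eigenvalue of the small matrix M_ab = f_b·g_ab². The group sizes and strengths here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 1500
fracs = np.array([0.5, 0.3, 0.2])          # three cell types (arbitrary)
sizes = (fracs * N).astype(int)
# g[a, b]: std of connection strengths from group b onto group a (arbitrary)
g = np.array([[1.0, 0.4, 1.6],
              [0.8, 1.2, 0.5],
              [1.4, 0.6, 0.9]])

# Build the full block-structured random connectivity matrix J.
J = np.zeros((N, N))
starts = np.concatenate(([0], np.cumsum(sizes)))
for a in range(3):
    for b in range(3):
        ra = slice(starts[a], starts[a + 1])
        rb = slice(starts[b], starts[b + 1])
        J[ra, rb] = rng.normal(0.0, g[a, b] / np.sqrt(N), (sizes[a], sizes[b]))

# Reduced matrix: impact of group b on group a = size fraction * variance.
M = g**2 * fracs[None, :]
radius_theory = np.sqrt(np.max(np.linalg.eigvals(M).real))
radius_empirical = np.abs(np.linalg.eigvals(J)).max()
print(f"reduced-matrix prediction: {radius_theory:.3f}")
print(f"empirical spectral radius: {radius_empirical:.3f}")
```

A 3-by-3 matrix of group impacts predicts the spectrum of a 1500-by-1500 random matrix, which is the reduction the talk describes.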
The eigenvalue, which is a property of the reduced matrix, now defines what happens to the network — not the average gain, sorry, the average variance, of the initial matrix — and if this eigenvalue is bigger than one, the network with cell types will be chaotic. So what we learned from this is that when you divide a network into cell types, it's no longer the average variance, the average gain, that is responsible; what's meaningful now is a different property — the structure of the groups, captured by the reduced matrix. I'll show you an example, and I think then it will be very clear. I'll just say that once we did that, we explained the spectral properties of these large matrices — it's also a mathematical study — but let me go to the example. So, if you're confused, that's okay; here's a very specific example of why this matters. Neurogenesis — neurogenesis is actually part of the olfactory system. These are very young neurons that are born and join the network, and initially, when they're born, they are extremely hyperactive and very strongly connected. So if you think of the network without them, it's just the random network we started with: here's the connectivity of the network without these newborn neurons. But when you add them, you have this small group — I just list it here, and here it is represented: newborn neurons that are very strongly connected and hyperactive in the network. So we moved from a connectivity matrix with a uniform structure to this block-structured matrix with two groups. And because having a cell-type structure is not equivalent to having no cell types, you get the following phenomenon when you look at the network.
Without these newborn neurons, given some average strength of connectivity, the network in this example would decay. So it's not active; it doesn't have its own internal activity. If you add these newborns, the block structure becomes such that the network will be active. But what's really interesting is that if you take these strong synapses and just randomly spread them across the network, it would not be active. So the ability to divide the network into groups means that for the same amount of energy that the body puts into creating synapses, neural connections, having groups is a more efficient way to give a network dynamical activity — just because it's divided into groups. The mean of this matrix is not what defines its dynamical fate when you have groups.

Sorry, I'm a little bit confused. Newborn neurons initially do not have any strong connections, so what do you mean?

I think by their second or third week they do create them. And they're also hyperactive, that's for sure. And if you want to model this hyperactivity, this is how you would do it in this type of model; and they do create these strong connections, so you can think of some period. Yeah, some period. Anyway, these are dual ways of doing it.

Yeah, so this is the example I wanted to bring. Actually, after this study was completely done — very recently — the Mizrahi lab at the Hebrew University showed that indeed the newborns are needed to better differentiate between odors. They have a somewhat similar model, more faithful to the development of the newborns, but the idea is the same: the newborns increase the activity in the network, enabling it to better differentiate between odors. Right. So this was after our work was done.
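The "same synaptic budget, different arrangement" argument above can be illustrated with the reduced matrix. In this sketch I pick arbitrary numbers: a large, weakly coupled mature population plus a small, strongly coupled "newborn" group, versus the same average connection variance spread uniformly. The reduced-matrix eigenvalue crosses the activity threshold only in the grouped case.

```python
import numpy as np

# Two groups: 90% mature neurons (weak synapses) and 10% newborn neurons
# whose connections, in and out, are strong. Numbers are illustrative.
f = np.array([0.9, 0.1])          # group size fractions
var = np.array([[0.5, 2.5],       # variances: mature->mature, newborn->mature
                [2.5, 2.5]])      # mature->newborn, newborn->newborn

# Reduced matrix M_ab = f_b * var_ab; the network sustains activity when
# its largest eigenvalue exceeds 1, i.e. effective radius sqrt(lam) > 1.
M = var * f[None, :]
radius_grouped = np.sqrt(np.max(np.linalg.eigvals(M).real))

# Same total synaptic variance spread uniformly over the network:
# average variance over all (post, pre) pairs, radius = sqrt(mean var).
mean_var = f @ var @ f            # sum_ab f_a f_b * var_ab
radius_uniform = np.sqrt(mean_var)

print(f"grouped (newborn cluster): radius = {radius_grouped:.3f}")
print(f"same budget spread out:    radius = {radius_uniform:.3f}")
```

With these numbers the grouped network lands above threshold (radius ≈ 1.05) while the shuffled version stays below it (≈ 0.94): the same synaptic budget, grouped, buys activity that the spread-out version cannot.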
It was shown afterwards, which is always very nice — the dream of a theoretician, really. Okay, so — yes?

So, in the current model, each cell type has excitatory and inhibitory connections randomly mixed. Suppose some cell types are excitatory and some cell types are inhibitory — how does the picture change?

Right. So, initially this was challenging mathematically, but in principle you can do it. You can shift the activation function such that it is almost all positive — there are a few works that use that — and then you'll have positive firing rates with 99.9% probability. And then you need to shift the connectivity the same way, so that the Gaussian is 99.9% positive for excitatory neurons, and negative for inhibitory ones. Now, it's a bit tricky, because you get outliers in the eigenvalues, which sends you into a different set of stories about how to balance them — and this is where the critical balance comes in. So there are a few steps to be done, but the division into block structure remains throughout this whole process, and this division is what causes the eigenvalues to be defined not by the average but by the structure of the blocks. And this would remain despite all the tricks you would do to include excitatory and inhibitory neurons. So it's a long road, but the bottom-line results would hold.

I'm trying to get an intuition for why this works — why a small cluster of well-connected neurons is better than the spread-out version. Does it have something to do with a higher probability, within the cluster, of...
...I don't know, shared presynaptic neurons, or something?

You want the intuition for why this works? Okay, good question. I need to think for a moment, because I'm used to thinking in terms of matrices and eigenvalues — but why, from an activity point of view, is this true? Let's see. I think the clusters case, which is coming, highlights the same intuition. You have this small group being hyperactive: it keeps responding to the input it receives in a very strong way, because every small input to any newborn neuron gets a very strong response. And so this keeps driving the network in a much stronger, more robust way. A few inputs — one would give it a kick, and another would help silence it, and so on — so these neurons keep changing their activity in a very strong way, and that keeps the network active. Whereas if it's just a big pool of neurons where some synapses are strong, they are more easily cancelled by the inhibitory synapses and currents coming in, so there's more balance in that case. In the newborn case there is less balance — I would say it's more fragile — so it's very easy to keep them going, and what they do is eventually drag the whole network along with them, because when they are not silent, the whole network keeps going. And we're actually going to see this with the clusters: the small clusters do something similar to the network. They are small groups that are very sensitive, so — like little kids — they'll keep you awake. Okay. Yes?

So, you were talking about how you can vary the connection probability, or introduce cell types, to influence the ability to maintain activity in the network.
And then at the end you were talking about this finding in the olfactory system, that introducing these new granule cells helps function. So I was wondering if your framework has anything to say about the nature of the activity — rather than just activity versus no activity?

Yes, we can say things about whether the activity is more or less correlated between groups or within a group, for example — that sort of question. Yes. I just didn't show it here.

To what extent would there be similarity or dissimilarity between what you are describing here and something like random graphs — like Erdős–Rényi random networks that we modify, setting aside the clustering part of it for now, for instance by putting constraints on the degree distribution?

So, I'm going to talk about clustering right now. But about that — I divided the network into groups, and these were not necessarily clusters. They have different connectivity probabilities, but that doesn't mean they're necessarily more connected inside the group. The newborn ones are, but that's not the general case. The next case is a sub-case, in the sense that these are subgroups that are also all more strongly connected inside. I'll talk about clusters.

Sure, but what I'm trying to say is, for instance, in the case of an Erdős–Rényi random graph, you have a Poisson degree distribution across the network, and you can put a constraint that the in-degree and out-degree satisfy some condition — that they should be balanced, or imbalanced — and, up to saturation, the same level of network flow, so to speak, takes place. And those constraints are not really related to clustering; it may or may not happen.
For instance, if you move to something like a small-world network, a certain level of clustering starts to appear, but it is not necessarily required for those Poisson-distributed random graphs to have clusters in order to permit a certain level of network flow. That's what I'm trying to say.

Yeah — there are various ways to get different variations of the network flow, that's true. Each such variation gives a slightly different set of properties. I was talking about the main phenomenon, but these sets of properties differ in different ways as you change these networks. What I try to do is change the network to include structures that are present in the brain — that we actually see, that are common — and to understand their properties. It's true that if what you want is high correlations, there are a few ways to get there. But each way actually creates different properties of the activity: if you look at variances, or times of silence, or up-and-down activity — you can ask for higher correlations, but with every different perturbation you make, the other properties vary in a different way.

All right, okay, this is becoming more challenging, but I will give it a try. Okay — great, because I hate talking to myself. So, clusters are in a sense a particular example of the previous case, but a very specific one, and also very common in the brain, like I said, and therefore of interest for us to study. You have subgroups, but in this case these subgroups are very highly correlated in activity. This is actually a real study where they looked at neurons' sensitivity to gratings, if I remember correctly, but the main point for us is that subgroups of neurons respond in a similar way and are strongly connected within themselves.
There can be different types with different properties of connection, but specifically, every subgroup is strongly connected within itself. And so when you draw the connectivity matrix and you organize the neurons by their groups, you get these highlighted blocks on the diagonal, because that's where they're strongly connected. So, to study this, what we did — and that was the whole trick of the thing — was to reduce the network so that it is represented by clusters. Now the building blocks of the network are not single neurons but the clusters themselves. So I get, again, very nicely, instead of hundreds of thousands of entries, a reduced matrix, and this reduced matrix is a matrix of clusters. In the dynamics, to account for that particular structure, we add what we call a self-connectivity part: every cluster is driven by itself — by its own activity impacting it — which accounts for all the connections within the cluster that I threw aside; and then there's the network, now a network of clusters, impacting that cluster. And to keep your minds from worrying: we now have — this is new, you can check it on bioRxiv as of yesterday — a full biophysical spiking model of the full network, showing that the analysis I'm going to show you for the reduced network actually holds for the full picture. Right, so the reduction in this case was justified. And the reason it's important to do this is that we gain understanding: the full model gives us the phenomenon, so we know the phenomenon we see is correct, but having the reduced model will, in the next five minutes, also give us the mechanistic understanding of what happens.
All right, so if we have clusters and they're all of the same size, which is not biological, that's okay, it's already interesting, because what these clusters do is switch the activity from being chaotic, ongoing, and not very interesting to having this bistability: every cluster by itself, if you think about a strong cluster, would like to be either silent or very active. That's the intuition for you. If a cluster is very strong, a small input has trouble moving it out of whatever stable state it is in. There are plenty of neurons strongly connected: if it's relatively silent, it takes a lot to get them going, and when they're very active, it takes a lot of inhibition to get them silent. So what strong clusters do in a network is each settles into one of two states, and eventually the whole network settles into a fixed point where every cluster chooses one of the two possibilities. Weak clusters don't do much; they stay close to the random network we started with, where the whole network is the same. They do add some bistability, but not very strongly. So that's the case that was studied a while ago. And then, I actually had a debate with Lucas over lunch: we debated whether we could use this clustering to generate multiple timescales in a network, and I suggested clusters, and that lunch debate turned into this work. So we showed that instead of taking one size of clusters, we take clusters of multiple different sizes; in this example there are two of them. And every set of such clusters has its own timescale of activity. Remember that the strong clusters switch between being very silent and being very active.
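The either-silent-or-very-active intuition can be seen in a one-dimensional caricature of a single cluster. The dynamics dx/dt = -x + s*tanh(x) and the parameter values are assumed stand-ins, not the study's model; the point is that a strong enough self-coupling (s > 1) makes the unit bistable.

```python
import numpy as np

def settle(x0, s, steps=5000, dt=0.01):
    """Relax dx/dt = -x + s*tanh(x) from x0 and return the final state."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + s * np.tanh(x))
    return x

# Strong cluster (s > 1): bistable, so the initial condition decides the state
up = settle(+0.1, s=2.0)     # settles near the active ("up") fixed point
down = settle(-0.1, s=2.0)   # settles near the silent ("down") fixed point

# Weak cluster (s < 1): only the quiescent state near zero survives
weak = settle(+0.1, s=0.5)

print(round(up, 2), round(down, 2), round(weak, 3))
```

This is why a small input cannot flip a strong cluster: both nonzero fixed points are stable, and only a large perturbation crosses the barrier between them.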
And there is a typical time it takes them to switch, because again, in the network where all clusters are the same, every cluster is exactly the same and sees the same surroundings. But once the network is comprised of two sets of clusters, every set of clusters has its own timescale for how long it spends in each one of these two states. And what you can do is tweak the sizes of the clusters, some very weak and some very strong, and the timescale of activity of each set of clusters grows exponentially. This was actually very interesting, because it's very difficult to generate this exponential growth by changing single-neuron membrane time constants and so on; that only grows linearly. Here it grows exponentially, and these neurons are simply part of clusters; that's what gives them the ability to have such different timescales of activity, because inside these clusters there are just neurons. And the importance of having two sizes of clusters is that the weak clusters, like young children, keep the network active. The big ones, the heavy ones, want to settle into a fixed point, they want to be either silent or active, and the small ones keep nagging them, so the network keeps being active. In the extreme case you have just one huge cluster, and I guess this is more for the analytical understanding: one huge cluster would be in one of these two possible states, which has to do with its size, and we calculate the time it spends in each state, but for this talk that's a bit less interesting. If we take a network comprised of many cluster sizes, then we get a network whose natural internal activity includes multiple timescales.
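To sketch why dwell times grow so steeply with cluster strength, one can add noise to the same one-dimensional caricature and count how often it switches between its two states; escape times out of a well grow roughly exponentially with the barrier height. All parameters here are illustrative, not taken from the study.

```python
import numpy as np

def count_switches(s, sigma=0.8, T=2000.0, dt=0.01, seed=1):
    """Simulate dx = (-x + s*tanh(x)) dt + sigma dW; count up/down switches."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    noise = sigma * np.sqrt(dt) * rng.normal(size=n)
    x, state, switches = 1.0, +1, 0
    thresh = 0.8        # hysteresis: a switch needs a full traversal, not jitter
    for i in range(n):
        x += dt * (-x + s * np.tanh(x)) + noise[i]
        if state == +1 and x < -thresh:
            state, switches = -1, switches + 1
        elif state == -1 and x > thresh:
            state, switches = +1, switches + 1
    return switches

weaker = count_switches(s=1.5)    # shallow wells: frequent switching
stronger = count_switches(s=2.0)  # deeper wells: much longer dwell times
print(weaker, stronger)
```

The same amount of noise produces far fewer switches for the stronger self-coupling, which is the toy version of strong clusters holding their state for exponentially longer.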
And so the structural properties of the network, where the connectivity is strong, are turned into timing properties of the network. The neurons will be slow in changing their activity where they are strongly connected within a cluster, and very fast to change their activity where they are weakly connected, because there every small input changes their activity. So that was the promise in the abstract, and I feel I have fulfilled that promise. And like I said, there's a huge distribution, an exponential distribution. And actually I believe we are the first to explain these highly diverse time constants, spanning several orders of magnitude, that were recorded, I think in the cortex, and to give these several timescales of activity a mechanistic explanation. What's important to say is that all the neurons recorded here are of the same type; it is the clustered connectivity among them that we think is different. So it's not anything in the physical properties of the neurons, but rather how they are connected, that gives rise to such varied timescales of their activity. Okay. Another thing this type of clustered organization does is filter the input, because strong clusters resonate well with low frequencies, so they have a higher peak at low frequencies in the Fourier analysis, while small clusters resonate better with input at high frequencies. So they perform a kind of frequency-dependent demixing, and how they filter the input depends on the composition of the network.
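The filtering claim can be illustrated on the same noisy caricature: a strong cluster's slow switching concentrates its spectral power at low frequencies, while a weak cluster's activity fluctuates quickly and spreads its power toward higher frequencies. Again a sketch under assumed parameters, not the study's Fourier analysis.

```python
import numpy as np

def trace(s, sigma=0.8, T=500.0, dt=0.01, seed=2):
    """Noisy cluster dx = (-x + s*tanh(x)) dt + sigma dW; return the trajectory."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    noise = sigma * np.sqrt(dt) * rng.normal(size=n)
    xs = np.empty(n)
    x = 1.0
    for i in range(n):
        x += dt * (-x + s * np.tanh(x)) + noise[i]
        xs[i] = x
    return xs

def low_freq_fraction(xs, n_bins=50):
    """Fraction of spectral power in the lowest n_bins frequency bins."""
    p = np.abs(np.fft.rfft(xs - xs.mean())) ** 2
    return p[1:n_bins].sum() / p[1:].sum()

strong_frac = low_freq_fraction(trace(s=2.0))   # slow switching: low-frequency power
weak_frac = low_freq_fraction(trace(s=0.5))     # fast fluctuations: flatter spectrum
print(round(float(strong_frac), 3), round(float(weak_frac), 3))
```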
Yeah, that's the end of the story. The bottom line is that the strong clusters are slow in their activity, and they not only generate slow timescales internally, they also resonate better with slow input coming in, whereas the small, weak clusters resonate better with high-frequency input coming in. So if you think about this kind of frequency-based differentiation, with input flowing in and resonating in different places, you can steer the input to different places according to whatever the needs are. Okay, so there were lots of questions before, but I guess there is time. Yes. I was wondering what you think our ability is to predict the activity from connectivity. So let's say you had an EM connectome of a piece of brain, you have everything about the connectivity. Can you predict the activity, and if not, what's missing? You tell me everything about the connectivity, and you give me some... that's a very high-level question, I think. You say connectivity to activity, and I believe so, in principle. I'm giving you principles here, so I thought of somewhat easier questions, where in this study we predict something very specific, and we believe it will be found. But what you're telling me is that I have all the connectivity. I think eventually the answer is yes, but it's a long road. Well, if we believe our rate models are a good description of the full biophysical spiking model, then the answer should be yes. The problem where I'm stuck is actually that, mathematically, I'm bothered by the fact that this activity is chaotic, and there are lots of inputs, while I've been talking mostly about the internal activity. I think what I can give you, if you can give me the full connectivity, is the right statistics.
So, whether I can predict a particular moment, the immediate ongoing activity, I think that will be hard, although we're slowly getting there. I think what I can give you is the properties of the activity: how long the neurons would hold one specific spiking rate versus another, and so on, and which neurons will be highly active and which neurons will be relatively silent. I think, I hope, the answer is yes. You know, I'm also not going to be tested, because can you really give me a full connectome? I think we're on the same page: giving the full connectivity and giving the full activity are equally difficult, so we're at the same level there. One technical question: is there any specific reason you chose a log-normal distribution for the sizes? No, we were just interested in a long-tailed distribution that can be easily modeled, and we were interested in long-tailed distributions because they're also common in the brain. I wonder, here the distribution is skewed toward smaller sizes, but if it were skewed toward larger sizes, would that affect the stability of the activity? The network would stop being active at some point and run into a fixed point. You need enough children running around to keep the grown-ups from getting more and more relaxed; you need the small clusters to keep the network going. And we have, in the manuscript, conditions for this to happen. So yes, you need a long tail, which is common, but you also need a balance between the probabilities of the different cluster sizes, so that you can still have activity and not a fixed point. Yeah, it's a generalization of the previous study. Right, so last but not least. I don't know if I have time. Okay. So last but not least, and this one is special for me: this is the antidote to the clusters, a mechanism for changing timing back into spatial properties.
It's a bit different from the previous project, in the sense that we actually started from a specific structure in the brain that was very interesting to us. This is the piriform, or olfactory, cortex, which has a very particular structure, which for modelers is fantastic, because you can do something with it. The piriform cortex has three very distinctive layers. There is much more detail inside, but again we're looking at the bigger picture. The bigger picture is that there are three particular layers, and the first layer gets input from the olfactory bulb, and the bulb gets input from your nose. So this is actually the third station in the brain that gets input about odors, about smells. It's called olfactory cortex, or piriform cortex because of its shape. So it has three layers: the first one gets input from the bulb, which gets the input from the nose, and the second layer has excitatory neurons, and they are connected in a loop, a feedback loop, with an inhibitory layer. When we model this, it's relatively simple, and we are back to spiking neurons rather than rate units, so these neurons spike; they don't just carry a rate of spikes. What we care about in this particular study is the activity of the pyramidal neurons, the principal cells of the piriform cortex. Again, we take the connectivity, but we care about the activity of this specific group, because these are the neurons that project forward: if you understand their activity, you can see what the downstream brain areas see from piriform cortex. So we model the input coming in from the different odors, we model the feedforward inhibition between the input and the main population of cells, the pyramidal cells, we model the feedback inhibition between these cells and the interneurons projecting back onto them, and any external input.
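Here is a cartoon of that circuit with leaky integrate-and-fire units. Every size, weight, and time constant below is hypothetical (the talk gives none of them); the only point it makes is that the feedback loop suppresses pyramidal firing.

```python
import numpy as np

def run(feedback_on, T=2000, seed=5):
    """Cartoon leaky integrate-and-fire piriform loop; returns the mean
    pyramidal firing rate (Hz). All parameters are hypothetical."""
    rng = np.random.default_rng(seed)
    n_pyr, n_inh = 100, 20
    dt, tau, v_th = 0.1, 10.0, 1.0                     # ms, ms, a.u.
    w_pi = 0.3 * rng.random((n_inh, n_pyr)) / n_pyr    # pyramidal -> interneurons
    w_ip = 2.0 * rng.random((n_pyr, n_inh)) / n_inh    # interneurons -> pyramidal
    v_pyr, v_inh = np.zeros(n_pyr), np.zeros(n_inh)
    s_pyr, s_inh = np.zeros(n_pyr), np.zeros(n_inh)
    n_spikes = 0.0
    for _ in range(T):
        bulb = 0.03 * (rng.random(n_pyr) < 0.5)        # noisy bulb drive
        v_pyr += dt / tau * -v_pyr + bulb - feedback_on * (w_ip @ s_inh)
        v_inh += dt / tau * -v_inh + 10.0 * (w_pi @ s_pyr)
        s_pyr = (v_pyr >= v_th).astype(float)
        s_inh = (v_inh >= v_th).astype(float)
        v_pyr[s_pyr > 0] = 0.0                         # reset after a spike
        v_inh[s_inh > 0] = 0.0
        n_spikes += s_pyr.sum()
    return n_spikes / n_pyr / (T * dt / 1000.0)

with_fb = run(feedback_on=1.0)
without_fb = run(feedback_on=0.0)
print(round(with_fb, 1), round(without_fb, 1), "Hz")   # inhibition lowers the rate
```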
And the question was this. In the piriform cortex, the olfactory cortex, we know that there is a representation of odors by which neurons are active, and we know that because it has been recorded: people changed the strength of the odor presented, and they saw that the same cells in the piriform cortex are still active; they took a different odor, and they saw that a different set of neurons in the piriform cortex is active. So we know that in the piriform cortex, the olfactory cortex, it's the set of active neurons that represents the odor. But in the input to the piriform cortex, which comes from the olfactory bulb, it turns out that what's interesting, mostly (there is other information too), is the timing of which glomerulus is active. An odor is represented by the spike timing of particular glomeruli: which glomerulus, which small area within the bulb, is active when. That is what's important. Eventually they all saturate and they all spike, or at least a huge fraction of them, but the question is when they become active once you smell an odor. There's no connectivity there that preserves this timing representation; it's exactly the opposite of the cluster case. The timing representation has to be switched into a pattern representation, ensembles of active neurons, and the question is how this happens, and this is why we built the model. But just to convince you that the input really is coded by time: if two different odors are presented, then the timing of when each glomerulus in the bulb becomes active is different between these two odors. The same glomeruli can be active, but they start at different times. And what's more important is what happens if more odorant is coming in, so the odor strength is bigger.
Then all the activity that was present for the weaker odor presentation remains, but it starts earlier. So changing the magnitude of an odor shifts the temporal activity in the glomeruli to start earlier, but it keeps the order of the timing; that's why the timing is important. So if this pattern represents cheese, and these three particular glomeruli are active first, then if you present a lot of cheese, and it stinks, many glomeruli will be active, but these first three will remain the first three that were active for the little bit of cheese. That's why timing is the representation here: whatever comes first. That's where the joke comes in, the one you may remember, where two people go to see a horse race, and one explains to the other: all the horses are racing, and whoever comes first is the winner. After a while, the other person answers: okay, I thought about it, and I understand why the first horse to finish is racing, but why do all the others run? This is exactly how the olfactory system works: whoever gets there first represents the odor. Okay. And in the piriform cortex, like I said, there's a representation by which neurons are active, and whether the timing there also matters, we do not know; that is still an ongoing question. But how does this happen? Well, we built a model, and we can record the spikes from the model. In a model we can really see everything that goes on; we have an automatic recording of all the neurons in the model, unlike in reality, and when it comes to presenting it, you actually have to give some of that up and show a partial representation, just because we cannot grasp it all. But the input, like I said, is coded by timing: this is a particular odor, cheese, and you can see the very first glomeruli that respond.
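The latency code just described can be sketched in a few lines. The divide-by-concentration scaling is my assumption, standing in for the observed shift; what matters is that a stronger odor moves every activation earlier while preserving the rank order.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical activation latencies (ms) of 20 glomeruli for one odor
base_latency = rng.uniform(20.0, 200.0, size=20)

def latencies(concentration):
    """Assumed scaling: a stronger odor activates every glomerulus earlier
    while preserving which glomeruli respond first."""
    return base_latency / concentration

weak_odor = latencies(1.0)
strong_odor = latencies(4.0)

# The rank order (the identity code) is concentration invariant,
# even though every absolute latency has shifted earlier
assert np.array_equal(np.argsort(weak_odor), np.argsort(strong_odor))
assert np.all(strong_odor < weak_odor)
print(np.round(np.sort(weak_odor)[:3], 1), np.round(np.sort(strong_odor)[:3], 1))
```

This is the horse-race property: however much cheese you present, the same three glomeruli cross the finish line first.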
Those glomeruli are active in the bulb, and they are the input to the olfactory cortex. And what happens in the olfactory cortex? Well, the feedforward inhibitory neurons, I have to say, don't do much on a regular basis, unless the odor is very strong, and then they just moderate the strength coming in, but they do that across the whole sniff: as the odor comes in, they provide silencing if there's too much input. Then there are the pyramidal cells, the main neurons in the piriform, and they respond to the odor as it starts coming in; the first input that comes in, remember, represents the odor, and they respond to it. Then, because we have this feedback inhibition, these feedback inhibitory interneurons silence the activity of the main cells in the piriform once those start being active. So what they actually do is prohibit further input from coming in. If every odor is represented by the timing of the input, and the initial input is what matters, what this feedback inhibition does is not allow further input to come in. Then it doesn't matter how strong the odor is: all the later inputs, which keep arriving just because so many molecules keep entering the nose, are simply silenced. So this particular structure, this feedback loop, and I have a summary of this, is what suppresses the continuous input that would otherwise confuse the system. Now, this is for one odor, and we're currently working on the spiking model to understand it further, because of course it's not that the other horses were running for nothing: they tried to get there, and they carry information.
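A minimal sketch of that gating idea: feedback inhibition, triggered by the first responses, closes the window a fixed delay later, so only the earliest-arriving, identity-carrying inputs drive the pyramidal cells. All times and the delay are hypothetical.

```python
import numpy as np

# Hypothetical glomerular arrival times (ms) for one odor; in this latency
# code the earliest inputs carry the odor identity
input_times = np.array([12.0, 18.0, 25.0, 60.0, 90.0, 130.0])

inhibition_delay = 10.0   # assumed lag before feedback inhibition closes the gate

def responding_inputs(times):
    """Pyramidal cells respond to inputs until feedback inhibition, triggered
    by the first response, shuts the gate inhibition_delay ms later."""
    gate_closes = times.min() + inhibition_delay
    return times[times <= gate_closes]

print(responding_inputs(input_times))   # only the earliest inputs get through

# A stronger sniff recruits extra, later-arriving glomeruli, but the gate
# still passes the same early ones, so the piriform ensemble barely changes
stronger_odor = np.concatenate([input_times, [45.0, 75.0]])
assert np.array_equal(responding_inputs(stronger_odor), responding_inputs(input_times))
```

The gate turns a time code at the input into a concentration-invariant set code at the output, which is the switching the talk describes.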
So the input that keeps coming in is silenced relatively, but not completely, and it carries information about other parts of the odor. Those parts are secondary in their importance, let's put it that way, but they're important for feedback, and for other parts of odors when there are mixtures, and so on and so forth. But it's the general structure that allows this switching from a timing representation to a spatial one. And actually, I know that these days this model is also being used for other parts of the cortex, because you can think about this circuit as a very general idea: you have feedforward excitatory connectivity and feedback inhibition, and the feedback is responsible for silencing further input that is less important than the initial information that comes in. We learned a lot from this model, and, given the time, I will just say that many of its predictions have since been confirmed, which was very nice, but I will not go through them. I just want to thank you very much for coming and for listening to this, I have to say, quite ambitious, subject-covering talk. Thank you very much. Are there questions? I have a question about the last point. How is this timing related to the timing of perception or cognition? So in the initial 50 milliseconds, other odor inputs cannot come in. But does it mean that cognition or perception of the odor occurs within that window? What it does is to decrease significantly the correlations between odors, because different odors initiate different subsets of sensory neurons at the very beginning. I understand the mechanism, but I wonder how this mechanism is actually related to the animal's perception of the odor.
Yeah, so what it does is decorrelate. If you think about the whole time course of a sniff, the very initial binding happens for the molecules that bind best to the receptor neurons, and they kick off the activity, which goes all the way into the piriform through the bulb. This accounts for the very initial activity, and it decorrelates the representations, because it does not allow molecules that do not bind as well to the receptor neurons to impact activity down the road. So if you have two different odors impacting two different initial sets of receptor neurons, then this is what activates the neurons downstream, and you get well-separated activity in the piriform cortex. If you did account for the binding that keeps happening in the nose, it would activate more and more common glomeruli, and therefore the neurons eventually active in the piriform cortex would be more and more overlapping; with the gating, they are driven by the same initial neurons upstream. So it decorrelates different odors. The question is what you do when you have odors that are quite similar to each other, and there we come in and say, well, there is an attention mechanism and feedback connecting the piriform back to the olfactory bulb, but for that you have to wait until next year. Any more questions? Is there any question from the Zoom participants? We had a lot of questions here during your talk. Okay, thank you very much for your nice presentation and for covering all these topics. Thank you very much.