OK, so welcome to the new meeting of the seminar series of the department. Today it is my pleasure to introduce Gustavo. Gustavo has been with us for a number of years, I think around 13 years or so. Gustavo is the head of the Computational Neuroscience group and the director of the Center for Brain and Cognition. Before this, he got a PhD in physics a long time ago, in Rosario, the same city as Messi. So he's our Messi. And years later, Gustavo also got another PhD, in psychology, this time, I think, from Munich. Actually, before joining UPF, Gustavo was in Munich heading the computational neuroscience group at Siemens Research for a number of years; actually 13 years, probably, if I remember correctly, almost the same number of years he has been here. Gustavo is very well known in the area, with many distinctions. I'm not going to go over them, I'm not going to be boring. The last distinction is that he is a recipient of an ERC grant on this topic. OK, so he's going to be talking about global models of brain activity, the web in the brain. OK, Gustavo, thank you.

So thank you, Hector, for the kind introduction. And thank you also to the organizers for inviting me and giving me the opportunity to speak a little bit about my research. I will try to give a relatively general overview, including an introduction, a motivation for people who are not familiar with neuroscience, and in particular with this type of neuroscience, which is whole-brain modeling. But also, because otherwise it would be too boring for me, I will try to say a couple of things which are actually our current main interest in the group, even at a personal level. The title is very general. As I said, we will speak about the brain, at the whole-brain level. And for that, we will focus on a particular condition for most of the talk, which is called the resting state condition. But just to motivate that, let me start with a very simple slide, which is a cartoon, but includes some philosophy. The way we study the brain in neuroscience is actually standard. It's exactly the same philosophy that we apply in physics, in chemistry, and in engineering. In physics it's called the solution of the inverse problem; in engineering, the reverse engineering problem. And the idea is extremely simple. You have a black box. In our case it will be the brain or part of the brain; it could be even one neuron. In our case today, it will be the whole brain. And in order to try to identify the system, just to use exactly the same semantics that you use in engineering, to identify what the system is doing, which kind of computation, which kind of dynamics it uses to process information, we excite the system with a whole battery of different inputs that, in our case, are defined by external stimulations. So, for example, we stimulate the eyes, visual stimulation; we stimulate the ears, auditory stimulation; tactile. Or we do even more complex things: a working memory task, a decision-making task, an emotional evaluation, whatever you want. All the tasks that a human or an animal could perform. And that elicits some output. And by studying the relationship between this input and output, we can perhaps infer some aspects of the processing. The complication in neuroscience is that this battery of inputs is, of course, very rich, very general. As I said, all possible tasks that you can imagine.
And the output is also extremely rich, because nowadays we have a lot of different types of measurements at all possible time scales and all possible spatial scales, describing the output of the brain, how the brain reacts to all these different tasks. For example, we can start at the macroscopic level, just observing behavior: how the person behaves, how much time it takes to solve whatever task you decided to implement, the decision-making task, how well he is doing it. Or we can go to the mesoscopic scale and try to characterize the activity of millions of neurons across the whole brain with different techniques like EEG, electroencephalography, or MEG, magnetoencephalography, or scanning with MRI, in particular functional MRI in most cases, or optical imaging. Or we can even go to the local level and extract information with local field potentials, with microelectrodes implanted in the brain, in most cases in animals, but nowadays also in some cases in humans. And we can even infer what is going on at the level of one single neuron. So we really have a whole palette of richness in this output, and that is, of course, a complicated problem. But the philosophy is exactly the same. From that point of view, it's nothing new. It has been used in neuroscience for a very long time, I would say since the first days of experimental psychology in Leipzig more than a century ago. People decided to go that way, what I call the Galilean way of doing science: you go to the empirical side, you extract the phenomenology, you do this in a systematic way in this input-output framework, and you try to identify the system. And practically everything that we have been learning in neuroscience is related to this philosophy. This philosophy, of course, implicitly assumes that if you don't have these inputs and outputs, nothing happens. Not nothing in the physical sense, of course; this is a living brain, it's not suddenly dead. So something is going on, but it's not particularly interesting. For example, all the neurons are active at a very low level of spiking activity. And what is more important, there is no structure in time and in space, meaning that we can assume that this background state is irrelevant. And therefore it is possible to apply this philosophy. And this is the case in most physical systems. The problem is, and you can guess it from the way I am motivating this, that this is not the case. But before going to the evidence, just to give you a feeling, instead of analyzing the brain let's analyze a much, much simpler system: water. Actually, the music in the background of this video was beautiful, but I was afraid you would fall asleep if I turned it on. What you see is that the still water, the resting state of the water, was really quiet, a background state which has no structure at all. And then, when I applied an input, this falling drop of water, it had an effect, it generated an output. The output is this traveling wave. And by relating, in a mathematical framework, the falling drop of water and the generated wave, the amplitude, the traveling velocity of that wave and so on, we can learn a lot about the physics of the water. Meaning that this inverse problem, this reverse engineering problem, in the case of the water is working perfectly.
But imagine for a moment that the water is not like this water when there is no perturbation, like in this case. Imagine that the natural resting state of the water is something like this: that for some magic reason, the water is more like a wavy sea. There are a lot of interactions. It does not look like noise; there is structure, because there are waves. There is spatiotemporal structure. And that means that we have a problem if this is the case. Because if this is the case, coming back to the earlier slide, it means that the resting state condition is not a trivial background state of spatially and temporally uncorrelated noise. What we have is a dynamical system with some spatial and temporal structure. The bad news is that we then need to take this intrinsic state into account in this framework. It's possible, of course it's possible; it's a problem, of course it's a problem. Because now you have to have the information about which particular state the brain was in when you applied your particular stimulation or your particular task. And you have to describe how intrinsic and how structured that state was. As I said, possible, but a huge complication. The good news is: forget the reverse engineering problem, forget the input-output framework. Concentrate only on the resting state. Perhaps the dynamics and the structure that you see here are so powerful and so informative that we can learn a lot about the brain, in particular under this paradoxical condition, which is exactly when the brain is doing nothing. So we want to learn something about how the brain computes, about function, in the condition when the brain is doing nothing. Sounds stupid, no? But it will be the case. Okay, there are tons of evidence nowadays, but I just selected my favorite pieces of evidence. The first one that I like most is from the lab of Michael Fox. It was published in PNAS in 2005, so about 10 years ago. It's an fMRI experiment under resting state conditions, which was not usual at the time. You ask a person to go into the scanner. As you know, in the scanner you can visualize, with the technique called BOLD, just by looking at the oxygenation level of the blood, how much activation you have in the millions of neurons at a particular location. It has very good spatial resolution, approximately at the level of cubic millimeters, what we call voxels. The temporal resolution is not so good; it's around seconds, two seconds in general. In this case, a set of healthy human beings were in the scanner for 10 minutes, and they were asked, and this is how we operationally define resting state, to do nothing. We can discuss a lot afterwards what it means to do nothing, because if I ask you now to sit there and do nothing, I hope that you are doing something, which is attending to me. But if I asked you to do nothing, then you would start to think about the person that you love, the padel game this morning that some of you lost and couldn't accept, and things like that, okay? But you are not thinking about it in a focused way. You are just jumping, hopping from one thought to another thought, and this is resting state.
And under that condition, what you would expect is a background state. Of course the brain is active, you have activation in the neurons, but there shouldn't be any kind of structure. And what you see here is the whole brain and, in particular, as a function of time, the signal of activity for a particular voxel in a region called the PCC, the posterior cingulate cortex. When you see the signal, in yellow, it looks like noise. It's a strange noise, because if you attend a little bit to that signal, and you look at this for all possible subjects, but even here by eye, there is a tendency to have a peak every 10 seconds, meaning that there is some slow component. But it's consistent with the idea of noise; okay, just a remark. What is astonishing is if you now take another region on the other side of the brain. If you took a region in the neighborhood, well, they are so near, there are volume conduction effects, there are physical effects that mean they should be more or less correlated. But if you take a region in a totally different part of the brain, here in the medial prefrontal cortex, far away from the PCC, which usually has functionally nothing to do with the PCC, and you plot it in red, you see, even without any mathematics — of course we do the mathematics, calculate correlations, analyze the significance with surrogate methods to see that the significance is really reliable, and so on — but here, even by eye, you see that they go hand in hand, that they are extremely correlated. And this is unexpected, because you were expecting noise. You were expecting that this guy does not care at all about what this guy is doing. The yellow and the red curves should look absolutely uncorrelated. And if you now take this as a seed and compute this pairwise with all the voxels in the brain, you see that all these voxels and all these voxels are extremely correlated with this guy. And this is what we nowadays call a resting state network. In particular, this is a very well-known resting state network, the so-called default mode network. It's the only resting state network that has a strange name; we will discuss this in the next slide. Of course, it's not the whole brain that is correlated. There are other parts of the brain, like the IPS, the intraparietal sulcus, and these different regions which are bluish here. One example, the IPS, is plotted here, and you see that there is no correlation. In this case there is even an anti-correlation, which in fact may be an artifact. But the important thing is that it's not correlated. So, first piece of evidence: we have a very strong spatial structure in the resting state condition, which is reflected in the form of resting state networks. Of course, after that, people started to apply cleverer methodologies, not just correlation analysis. There are many, many papers and many examples. One of the most standard and applied methodologies is ICA, independent component analysis, which most of you are familiar with. And by applying this to different subjects with different scanners all over the world during the last years, what you get is a very consistent result, which is summarized in this review paper of Wagner.
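To make the analysis concrete, here is a minimal sketch of the seed-based correlation map described above. It is not the original study's pipeline; the array shapes, the minimal preprocessing, and the absence of thresholding and surrogate testing are simplifying assumptions.

```python
import numpy as np

def seed_correlation_map(bold, seed_idx):
    """Correlate one seed time series with every region/voxel.

    bold: (n_regions, n_timepoints), e.g. band-passed BOLD signals
          from a 10-minute resting-state scan (TR ~ 2 s).
    seed_idx: index of the seed (e.g. a PCC voxel or parcel).
    Returns the Pearson correlation of the seed with every region.
    """
    # z-score each time series so a dot product / T gives Pearson r
    z = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)
    zs = z[seed_idx]
    return z @ zs / bold.shape[1]

# Toy usage: 90 parcels, 300 volumes; real maps need significance testing.
rng = np.random.default_rng(0)
r_map = seed_correlation_map(rng.standard_normal((90, 300)), seed_idx=0)
```

In a real analysis one would band-pass filter the signals, regress out nuisance components, and assess significance with surrogates, as mentioned in the talk.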
And what you find is that there are different resting state networks, shown here in different colors. In the human, it depends on the methodology, but there are around seven to 12 different resting state networks. And what is more astonishing, these resting state networks, with the exception of the one that we discussed before, the default mode network, which was the first one discovered, have names. One network is called, for example, the attentional network. Another is called the saliency network. Another is called the memory network, or the language network, or the visuomotor network. Why? Because when people started to recognize the resting state networks that were appearing or emerging under resting state conditions, they said: but we have seen this network before. I have been working on attention for 20 years and I always see exactly the same network. So this is the attentional network, but under resting state conditions. This means that apparently, of course it's not a demonstration, but apparently this intrinsic activity is not only interesting because it's showing structure, spatially and, as we will see soon, also temporally, which is informative about the brain; it's also telling me something about how the brain computes information, without computing it explicitly, because I am in resting state. So this is a promise. So, the next and last piece of evidence, which is my favorite, I have to confess, not only because I am a good friend of the last author, Maurizio Corbetta. I think it was an extremely relevant paper, published in Nature in 2007, from the labs of Maurizio Corbetta and Marcus Raichle, and it was the first time that a non-human primate, an animal, was studied from the resting state point of view. Many people, of course, were criticizing what resting state means in a human: come on, you always think about something. But in this case it was a monkey. I don't know, monkeys probably also think about monkey girlfriends and monkey problems, but in this case the monkey was anesthetized. And even under anesthesia, that was resting state. This is shown in the left subpanel, and you see just one resting state network in order not to crowd the picture: the so-called visuomotor resting state network, which involves the frontal eye fields and these frontal and parietal regions. Well, this is astonishing, so that's fine. In fact, when the monkey is awake and performing a visuomotor task, what you see is exactly the same. The overlap with the activity of an awake monkey performing a visuomotor task is basically what you see here. The color differs just because here it is always activated, and therefore there is more activation on average, while here the activation is fluctuating at that 0.1 hertz level that I showed you before. The resting state networks are hopping from one to the other, they are switching, and therefore on average you will have less activation, but the peaks of activity are more or less the same. But my favorite picture is the last one, because it's the anatomy. These are tracing studies done in the same lab, relating these regions, and what you see now are the wires underlying these regions: how they are connected physically, with synapses, with fibers. So this is function, activity, sorry; and this is structure, the fibers.
And what this is telling you, just at the pictorial level, is: oops, they are basically very, very similar. I always express this in a scholastic way. Actually the sentence comes from a totally different context, from Thomas Aquinas, who took it from Aristotle. I will say it in Latin, because I cannot speak Greek and because it's known in the West through Thomas Aquinas: quidquid recipitur ad modum recipientis recipitur, the container shapes the content. A glass of water: the glass shapes the form of the water. And this is basically what this picture is telling us: the fibers, the underlying structure, are shaping the activity. And now I can understand why this has something to do with function, of course, because the wires are always there, even if I'm not using them. If I now have noise and I have the wires, then I introduce correlations through the wires, which reflect this structure, and of course this correlation is related to some particular function, because the wires are not there by chance; they are there because they implement some particular function, a visuomotor task, attention, or whatever. So, end of story; I could finish here. If that were true, that's fine, and I would be out of a job, I would have to switch subjects. So I will try to demonstrate that Aristotle was not right in this context. Of course, he never considered this context. But that sentence of Aristotle, the container shapes the content, does not apply here. It's only an apparent shaping. I mean, of course it's shaping, but not in a unique way. Of course the wires that we have, the fibers, the structure, have something to do with the function, hopefully, but they do not determine it in a purely unique way. Okay, the way we study the structure in humans is with a technique called DTI, diffusion tensor imaging. In animals we could do tracing experiments, but this is an imaging technique, and practically what we visualize are the myelinated fibers, which are the main fibers connecting distant regions. The idea is to track the direction of the movement of water molecules with the magnet, do tractography, and put all this together. At the end of the day, if you are lucky, and you really should be lucky, you get some nice picture like this, so that you know who is connected with whom, and then you can translate this into a matrix. This is what we call the structural connectome; in this case, the human structural connectome. So you translate this DTI tractography, for a given parcellation, which could even be at the voxel level, but usually we use much coarser, standard parcellations, into a matrix that describes whether two different cortical areas are connected or not. In most cases they are not connected; in some cases they are connected with a high density of fibers or with a low density of fibers, and this is what the different numbers, or in this case colors, in this map mean. And now the idea is to go back to Aristotle and say: okay, this is my container. I put in noise, and I see if the noise is shaped into the form of the resting state maps. So I put noise here, in my favorite form of realistic neuronal noise, and see if I can generate a functional connectome; a small sketch of the connectome construction follows.
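As a concrete illustration, here is a minimal sketch of how tractography streamline counts might be turned into the normalized matrix C used below. The normalization convention and the parcel-size correction are assumptions; published pipelines differ.

```python
import numpy as np

def structural_connectome(streamline_counts, region_sizes):
    """Turn raw tractography streamline counts into a normalized
    structural connectivity matrix C (the 'container').

    streamline_counts: (n, n) symmetric streamline counts between parcels.
    region_sizes: voxels per parcel, to correct for parcel size.
    """
    C = streamline_counts.astype(float)
    # bigger parcels receive more streamlines; correct for that
    C /= np.sqrt(np.outer(region_sizes, region_sizes))
    np.fill_diagonal(C, 0.0)   # no self-connections
    C /= C.max()               # scale to [0, 1]; the global coupling G carries the units
    return C
```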
By a functional connectome I mean functional activity correlated in such a way that it corresponds to the functional correlations that I observe in the scanner, at the level of BOLD correlations. If that always happens, then Aristotle was right, okay? But one thing lets you see very clearly that Aristotle was not right. There is a missing parameter here, which is good, because it's exactly what we can study. We know the fibers, we know who is connected with whom and how, but we don't know the value of what we would call in physics the conductivity of each particular fiber. One could measure it with electrophysiology for one particular fiber, but it's impossible to do that labor for the whole brain. So that is a free parameter, and of course it could be different for different fibers. We follow the standard physics approach: we assume they are all equal. So we have one parameter, like a scaling parameter; we multiply this parameter by the fibers. And then it's very clear that if you set this parameter to zero, you multiply all the fibers by zero, meaning that they are all disconnected, and because you introduce noise that is by definition uncorrelated, what you will generate is just uncorrelated activity, meaning you are not consistent with the reality. So this is a case where Aristotle was clearly not right. If you put a very big parameter here, then probably everything explodes, everything will be correlated with everything. I don't even need to do the calculations; it's absolutely intuitive. And that is also not the case, because in reality it's not that everybody is correlated with everybody; there is a very particular structure. So there are two extreme situations where I can say, without any formula or computer, that Aristotle was not right. The first approach was to use a form of neuronal noise which is very realistic. I don't have time to go into the details, but practically we simulate real neurons and real synapses; under resting conditions, they look like noise. We insert this source of noise in each node, we multiply by this coupling parameter, this general conductivity parameter, which is unknown, but at least we can parameterize it, study its effect on the global dynamics, and see what happens. Here's a cartoon of what happens, and it is really very interesting. This was the situation before: if this is my scaling parameter, my global conductivity parameter, and it is zero, I have just one attractor, one equilibrium state, which is trivial and fully uncorrelated. All the neurons are down and have nothing to do with each other. If I go to the other extreme, it's not shown here, probably everything explodes. But the nice thing is that if you start to increase this parameter, you still have one attractor, but of course correlations start to increase, as we will see later. Technically you are always in a situation where the stationary state, the fixed point of that dynamical system — we now have a dynamical system, so let's start to be a little bit mathematical — we have a dynamical system depending on one parameter, and the fixed point of that dynamical system is still just one point.
But suddenly, at what we call in mathematics a bifurcation point, many other possible states appear. I change the parameter a little bit, and instead of having one state where all the neurons — imagine that my fingers are the neurons — are spiking at low activity, which is what happens here, suddenly it can happen that some of the neurons are up and some are down, or some others are up and some down. And this is what we mean here with all these valleys. This is what happens. Of course, in order to obtain this, we do simulations and a lot of analytical work. Many people in my group worked a lot on this, for example Adrián Ponce, who is here. First we did really ab initio simulations, inserting all the differential equations describing all the neurons, and connecting the neurons with the DTI information about how they are connected in the brain. Locally, they are connected as we know they are connected from neurophysiology, and they reproduce noise in a realistic way. Of course, you can describe this with differential equations; I put them here just for cosmetic reasons, because it's always impressive to see differential equations in short talks. You solve that, and even more, you can not only run ab initio simulations but also do some analytics. We invested a couple of years in trying to get more and more analytical or semi-analytical results on that. The take-home message, which is what is important here, is what I told you before. If this is the coupling parameter, and this is the bifurcation line — actually there are two bifurcations, but let's consider just one in order to keep the story as simple as possible — and this is the correlation between what you simulate and the reality, then this is a measure of how well you fit the reality: the larger this correlation, the better the fit. You see that if this coupling is very small, nothing happens, of course, because you are practically uncorrelated by definition. But when you start to increase it and you approach this particular bifurcation where the state space splits — this is another way of seeing the bifurcation, in a cartoon — then just at the edge of the bifurcation you get the best fit to the reality. And this is a very interesting result. Now, the question is why on earth the system is working in that particular regime. For physicists, we love that: exactly at the bifurcation is where the system works. That is orgasmic, it's fantastic. And we can prove that analytically, we can simulate it, et cetera. And there is a reason. Some of you have already heard this story, but because it's a general talk, the simplest way of understanding it is the following, with tennis. This is the coupling parameter; this is where the system works. But of course, if I were God, I could have designed the system here. This could also work. At rest you show no correlations; fine, I'm not interested in your correlations. And when I need to use the wires, because I do a decision-making or memory task, I just jump from here to the right attractor. That's fine. But the energy and the time that you need to go from here to there, and this is what we are showing here for visual stimulation, is much larger than if you are just at the edge.
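The logic of that parameter sweep can be sketched in a few lines. This is not the spiking-network model from the talk; as a simplifying assumption, each node is a noisy linear (Ornstein-Uhlenbeck) unit coupled through the structural matrix, and `C` and `fc_emp` are toy stand-ins for the measured connectome and the empirical FC.

```python
import numpy as np

def simulate_fc(C, G, T=5000, dt=0.01, tau=1.0, sigma=0.5, seed=0):
    """Noisy linear nodes coupled through the structural matrix C,
    scaled by one global coupling G; returns the simulated FC matrix."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    x = np.zeros(n)
    xs = np.empty((T, n))
    for t in range(T):
        drift = -x / tau + G * (C @ x)      # leak + scaled structural input
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
        xs[t] = x
    return np.corrcoef(xs.T)

def fc_fit(fc_sim, fc_emp):
    """Fit = Pearson correlation of the upper-triangular FC entries."""
    iu = np.triu_indices_from(fc_emp, k=1)
    return np.corrcoef(fc_sim[iu], fc_emp[iu])[0, 1]

# Toy sweep over G; C and fc_emp would come from DTI and fMRI in reality.
rng = np.random.default_rng(1)
C = rng.random((30, 30)); C = (C + C.T) / 2; np.fill_diagonal(C, 0)
C /= C.sum(axis=1).max()                  # keep moderate G dynamically stable
fc_emp = np.corrcoef(rng.standard_normal((30, 500)))  # placeholder "empirical" FC
fits = [fc_fit(simulate_fc(C, G), fc_emp) for G in np.linspace(0.0, 0.9, 10)]
```

With a real connectome and real FC, the fit curve peaks near the edge of the bifurcation, which is the take-home message above.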
I describe this with tennis because the resting state in tennis is how you wait for the ball. I always play tennis at seven o'clock in the morning on Saturdays. My resting state for tennis is more or less like this. When my friends or my trainer look at me, they look at my resting state, I think, and they can start to do diagnostics — which is relevant for neuropsychiatry. They say: I don't need to see how this guy plays tennis, he's very bad. Which, by the way, is not true: I move a lot, and I play relatively well. But if that were the case, the resting state would be like here: I show no correlations. Then the ball comes — boom, too late. But if I am here, like Nadal, then probably I will get the ball. And this is the reason, or the intuition, why it's so convenient to work at the brink of the bifurcation. Okay, there is a second model, which is absolutely orthogonal to the first one. And that is a little bit disappointing, because the first assumption was: okay, of course the wires are important, and we mix the dynamics with the wires to get the global brain dynamics. Fine, the exercise worked, and the take-home message is that we work at the brink of the bifurcation. But let's assume that the dynamics is not noise. Let's assume that the dynamics is oscillation. As many of the neuroscientists here know, there is a lot of evidence that also under resting state conditions there are strong oscillations at the neuronal level. So why don't we put an oscillator instead of noise? Okay, let's do the exercise. It's a good excuse for another PhD, and that was the PhD of Joana Cabral. Basically, we decided to go for the simplest thing from the mathematical point of view: the Kuramoto oscillator. Kuramoto is a horrible name for a very simple thing. It's a sine, nothing more. But in physics we like to complicate things, so we call it Kuramoto. We put a Kuramoto oscillator in each node, we wire them according to the DTI, to the structure of the real brain, and we see if we can get exactly the same. And the answer is yes, we can get exactly the same. The pictures here are a little bit more complicated because we have two parameters: not only the coupling parameter, but, because we have oscillations, the delays in the wires also play a role. But this is a technicality. The important thing is that if you check where you fit the reality best — in this case using another technique, but of course we have also done it with fMRI, where we can describe the real resting state networks — you see that there is a region, this reddish region, where we can achieve that. So the model works. And you can even interpret where it happens: exactly in the region where the oscillators are neither fully synchronized nor fully desynchronized. We use a measure called the Kuramoto order parameter, which characterizes the level of synchronization and is plotted here in the same parameter space. And you see that this region lies exactly between full synchronization and no synchronization at all. And that is nice, meaning that somehow, in the resting state condition, the brain decides that some guys go together, they synchronize, and then they switch to another cluster of synchronization.
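Here is a hedged sketch of this second model: Kuramoto phase oscillators on the connectome, plus the order parameter R(t) used in the next step. For brevity it omits the conduction delays that the real model includes, and the frequencies and coupling are toy values.

```python
import numpy as np

def kuramoto_order_parameter(C, K, omega, T=4000, dt=0.01, seed=0):
    """Kuramoto oscillators coupled through C with global coupling K.
    Returns R(t): 0 = incoherent, 1 = fully synchronized."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    theta = rng.uniform(0, 2 * np.pi, n)
    R = np.empty(T)
    for t in range(T):
        # each node is pulled toward the phases of its structural neighbors:
        # dtheta_i/dt = omega_i + K * sum_j C_ij * sin(theta_j - theta_i)
        pull = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + (omega + K * pull) * dt
        R[t] = np.abs(np.exp(1j * theta).mean())
    return R

# Toy usage: 30 nodes with slightly heterogeneous intrinsic frequencies.
rng = np.random.default_rng(3)
C = rng.random((30, 30)); C = (C + C.T) / 2; np.fill_diagonal(C, 0)
R = kuramoto_order_parameter(C, K=0.2,
                             omega=2 * np.pi * (1 + 0.1 * rng.standard_normal(30)))
metastability = R.std()   # variability of synchronization over time
```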
And even more, the richness of the switching is maximal: if we calculate the variability, the standard deviation of the Kuramoto order parameter, which is what we technically call metastability, it is maximal there. So that is good, because we have double the papers: papers claiming that we can explain everything with noise, and papers claiming that we can explain everything with oscillations. From a scientific point of view, it is disappointing, because we have two orthogonal explanations for the same effect. So how can we solve the problem? It means that probably what we need are more measurements; the functional connectivity that we used in both cases is not informative enough. So we decided to go for another measure. These are real data from the Charité; the simulations were done by Adrián Ponce. What is shown is the functional correlation in small sliding windows. Instead of calculating the functional correlation between all areas over the whole window of investigation, which is what we call the grand average functional connectivity, we compute it in a small window of just one or two minutes, and we slide the window. And what we see here, for example for some random pairs, is that the correlation is changing. Or here, if you pay attention, the functional connectivity between the different regions is changing; and this is the principal component. Meaning that there is not only a spatial structure but probably also a structure in the temporal fluctuations. And perhaps, using these or similar measurements, we can solve the problem. So the idea is: how can we unify these two scenarios, the asynchronous or noisy scenario and the extreme oscillatory scenario, and which kind of measure do we need in order to do that? The solution is trivial. Actually, it's what we should have done from the very beginning, but that is usually not the case. We follow exactly the same philosophy, we use the DTI, we use function, but other measurements. And what we now assume for the dynamics of each node is something that is able to describe both things, noise and oscillation. This is technically a Hopf bifurcation, and we use the most general form, the normal form of a Hopf bifurcation. It's a simple equation. Just to give an idea: each node has a local parameter that we call a, and if this parameter is negative, you have noise. This is just the simulation for one isolated node. If this parameter is positive, you have a perfect oscillator; in fact, it reduces to a Kuramoto oscillator. And if you are in between, near zero, there is the Hopf bifurcation, the transition between this fixed point and an oscillator, so you have something that looks like a mixture of noise and oscillation. And by the way, just by looking at the picture, this is what the reality looks like. Meaning that probably the model that we need is something in between. Okay, we will use many different measures. I will not go into the details, but these measures include the one we have already seen, the grand average functional connectivity. On top of that, we will use the metastability, which we already defined and which is a temporal measure, and this functional connectivity dynamics: the evolution of the functional connectivity across time.
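A minimal sketch of that sliding-window functional connectivity dynamics (FCD): compute FC per window, then compare every pair of windows. Window and step sizes are assumptions; conventions vary across studies.

```python
import numpy as np

def fcd_matrix(bold, win=60, step=10):
    """Functional connectivity dynamics.

    bold: (n_regions, n_timepoints). win/step in samples (with a 2 s TR,
    win=60 is a two-minute window). Returns the (n_win, n_win) matrix of
    similarities between windowed FC patterns; block structure here means
    temporal structure in the fluctuations."""
    iu = np.triu_indices(bold.shape[0], k=1)
    starts = range(0, bold.shape[1] - win + 1, step)
    # one vectorized upper-triangular FC pattern per window
    fcs = np.array([np.corrcoef(bold[:, s:s + win])[iu] for s in starts])
    return np.corrcoef(fcs)

# Toy usage: purely static FC gives a uniformly high FCD matrix, while
# switching states appear as blocks on and off the diagonal.
rng = np.random.default_rng(2)
fcd = fcd_matrix(rng.standard_normal((90, 600)))
```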
There is a way of translating this into a quantitative measure, along the lines of the sketch above, which you can ask me about at the end, but I will not spend time on it now. This is what we call the functional connectivity dynamics: we try to express not only the spatial correlations but also the fluctuations, the structure of the temporal fluctuations. And using these three measures, the take-home message — let's concentrate now on the right part — is the following. We have the parameter as always, the global coupling parameter. And now, for each node, we assume exactly the same local parameter. This is just a trick to be able to make a 2D plot; otherwise I would need one parameter per node and an incredibly high-dimensional picture. So the first approach is: all nodes have exactly the same parameter. Either they are all negative, the noisy scenario, and we reproduce exactly the noisy results; or all positive, and we reproduce exactly Joana's thesis, the Kuramoto case; or we are in between. And then we check with these three measures, which are not the only possible ones, but they include static, spatial, and temporal measures. And we see where the best results are; I translate the colors for you. These are just cuts at different regions: the negative region, which is the asynchronous scenario; the positive region, which is the Kuramoto scenario; and exactly the bifurcation point. And exactly at the bifurcation point we get the best fit of the grand average functional connectivity — the best fit here is when this is near zero, because it's a Kolmogorov distance — the best fit of the fluctuations, and the best fit of the richness of the repertoire, which is the metastability. And this is now very constraining. If you take only this one, you see that all the other cases also do a relatively good job; that was the reason why the asynchronous and the oscillatory results were also good. But if you look at the other constraints, it is no longer the case. That's why this structure at the level of time, captured by the metastability and the functional connectivity dynamics, is so important. It's telling us something that the Latin scholastics would summarize in the sentence in medio virtus: at the bifurcation is the best. The context was different, but anyway. The best result is exactly the combination of noise and oscillation, and that is well constrained. There is also a side product which I think is extremely interesting: of all possible models, the one consistent with the reality is the one that achieves maximal metastability. And that has a very nice interpretation, because it means we need a resting state, a dynamical working point of the resting state, such that the system shows maximal richness in its repertoire. That is the meaning of the metastability. So, coming back to Nadal: you need Nadal to move as much as he can, because this is convenient for the brain when you start to use it. And this is now not a speculation or an intuition; this is a demonstration. Okay, you can do a little bit more, and this is the last technical slide, then I finish with some examples. You can not only optimize G with one identical a for all nodes; you can identify an individual a for each node. Of course, you need to do some tricks.
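For reference, here is the coupled Hopf normal form the talk describes, written as in the whole-brain modeling literature. The notation is my reconstruction, not the slide: $z_j = x_j + i y_j$ is the complex state of node $j$, $a_j$ its local bifurcation parameter, $\omega_j$ its intrinsic frequency, $G$ the global coupling, $C_{jk}$ the structural connectome, and $\beta\,\eta_j(t)$ additive noise.

$$
\frac{dz_j}{dt} \;=\; z_j\left(a_j + i\,\omega_j - |z_j|^2\right) \;+\; G\sum_{k} C_{jk}\,(z_k - z_j) \;+\; \beta\,\eta_j(t)
$$

For $a_j < 0$ the node is a noisy fixed point, for $a_j > 0$ a limit-cycle oscillator at frequency $\omega_j/2\pi$, and near $a_j \approx 0$ it sits at the bifurcation, mixing noise and oscillation.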
You need to take into account the power spectrum of the signal and solve an optimization problem, but I won't go into the details. If you do that, you do of course a fabulous job, because you can reproduce everything: the functional connectivity, the spatial correlations, the functional connectivity dynamics, the fluctuations, the metastability. And what you get — these are the different brain regions, the different nodes, now with names, for the right hemisphere and the left hemisphere — is this. This is the bifurcation parameter, which now differs across nodes, and the red line is the best optimization. So you now have different dynamics: some nodes are more on the noisy side, some nodes more on the oscillatory side, and, as we will see in the next slide, most of them are around zero, or maybe slightly off. But you now have a distribution. So let's go to the applications, just to show that this is not only for fun. Actually, it is for fun, but we can also try to apply it. And we have a fabulous application; we were very lucky because we got excellent data, which was analyzed by Victor Saenger, who is somewhere here, hopefully. He's a PhD student in our lab and he got really fabulous results. Well, it's for Parkinson's, you know. Here is a video; the patient is speaking: "The probe is placed into the brain, I understand. It comes level with the top of the nose and the ear. And it is connected to a battery in my chest and in the skin of my chest here. And this is controlled. I can control the amount of the voltage going in. I can control the length of time that the pulse is, and I can identify the number of times per second that it goes in. And I'll turn myself off now." This is real. This was a guy, actually a professor from Oxford, and he's not an actor: a real Parkinson's patient implanted with a DBS. DBS stands for deep brain stimulation; it's a very simple stimulator, implanted in this case in the subthalamic nucleus, a subcortical region, and it basically applies an oscillatory stimulation in the brain. It works, as you see. When he turned off the DBS, you saw the tremor is catastrophic; that was the state of the patient before surgery. So it's like magic. Of course, we still don't know why it works. We know that Parkinson's is related to the death of the dopaminergic neurons in the substantia nigra, and the substantia nigra is very near the subthalamic nucleus. So by exciting those regions, we are trying to compensate for what went wrong. And it works; it took, of course, many years of experimentation in monkeys, but you see the result. But we don't know the real effect of this stimulation on the brain. We were lucky because we got from UCL, from Karl Friston, Josh Cajan, and many others, 10 Parkinsonian patients implanted homogeneously with a DBS, like the one you have seen in the video. And they did 10 minutes of resting state, which is fantastic. But what is more fantastic is that they did this with the DBS on and with the DBS off. And of course, there is also another group of 10 fully healthy control subjects. We applied this Hopf model, optimizing every single node, and what Victor got is fantastic. This figure is the overlap of the other three figures, so let's concentrate on the top panels. This is the healthy group. These are the a's, the bifurcation parameters; remember, negative means noisy.
Positive means oscillatory, and in between is the in medio virtus. Most of them are here, and this is how the distribution of parameters looks in a healthy brain. When you go to DBS off, you see a disaster. And this is very striking, because people were aware of this — actually, I was speaking yesterday with a neurosurgeon — they were aware of this already in the thirties of the last century. In fact, Parkinson's at the beginning was considered not so much a motor disease as a cognitive disease, because the first thing people realized is that patients are very bad at memory, at switching, and many other things. And what you see here is that it's not just the substantia nigra that is affected; the whole brain is affected. The whole brain, which is very bad news. The good news, which is a genuine coincidence, although there are some good reasons for it in the good design of the brain, is that when you turn the DBS on and you repeat the whole procedure, you get this distribution of parameters: very similar to the healthy one. So the DBS is not only compensating for the death of neurons in the substantia nigra and thereby radically ameliorating the tremor, as you saw in the video. On top of that, it's helping the whole brain. At least from a dynamical, parametric point of view, the brain is recovering in the right direction; it's returning to that magic point that we call in medio virtus. And this is with just one simple, stupid periodic stimulation. That means that the hope in DBS is enormous. It means that we may be able to address with DBS not only particular neuropsychiatric diseases — and people are trying to use it in all possible contexts: schizophrenia, alcoholism, chronic pain, and so on — but the hope is that even for very diffuse disorders, like schizophrenia, for example, which probably affects the whole brain, perhaps with a single stimulation we have a chance to push things in the right direction. We still don't know how, but at least we have the hope that it's feasible, that it's possible. The second example — how am I doing, 10 minutes, fantastic, I astonish myself how good I am. The last two examples are also neuropsychiatric, and this one is about coma. This is something we have been doing with Adrietta Oste, Jacobo Sitt, and the people from NeuroSpin and the hospital associated with NeuroSpin, with Lionel Naccache, and it's related to consciousness. You know, when you are over 50, you are allowed to do whatever you want. Before 50, when people spoke about consciousness, I would say: that is esoteric. I like esoteric, but it is esoteric, so for science it is not good. Now I have decided that it's time to go to consciousness, but also because things have changed radically in the last two or three years around consciousness. Actually, it's not consciousness but states of consciousness, technically or clinically speaking, and it's related to coma. We had access to a fabulous group of patients in different coma states, which I will describe later. And the main idea we apply is this: if we take whatever functional connectivity matrix we can obtain, which you have already seen, and we put it in the form of a graph, we can define how well connected that graph is. Technically, these are the subcomponents of the graph.
If we look at the size of the largest subcomponent, which is called the giant component, and normalize it, that is what we call integration. What it means is that we are measuring how wide the broadcasting of information across the whole brain is. And this is related to modern theories of consciousness. Many people in neuroscience say: I don't care what consciousness is, but at least I can look at the correlates of consciousness. And the correlate of consciousness seems to be related to the idea of extremely large integration. This is a way of characterizing that integration: how well connected all these islands are, how large the largest component is. And the complement of that, which goes hand in hand but in the other direction, is segregation: how many islands do I have? That is segregation, and there are measurements in between. So, we took these patients — there were, I don't remember, 170-something — in different states of coma, which are described here. Some of them were in extreme coma, so vegetative state; some in minimally conscious states, so they have some reactions according to the Glasgow scale; conscious states, meaning coma patients that recovered consciousness; and a healthy control group. And what you see is that with these two measures, integration and segregation — and if you look at the statistics, it's very significant — you can distinguish these groups. By the way, this is not fMRI; this is high-density EEG, and it's not resting state: they are performing an oddball paradigm, hearing a tone which repeats, and suddenly another tone comes. That is a good measure of consciousness, but for our application it was irrelevant. The nice thing is that we can classify and label the patients with a quantitative measure, which did not exist before. So this is a biomarker, and it seems that this biomarker has the two desired properties of a biomarker, sensitivity and specificity. On top of that, it gives us a good interpretation of what's going on in these patients: really, the brain of that person is disintegrated from the communication point of view. And if we could restore this communication, with DBS for example, then we would be going in the right direction. Yesterday we had a meeting here with people from the US, and they are already doing DBS to try to wake up coma patients. And the way to do that follows basically our ideas: increase this quantitative measure of integration and see what happens. And of course, later on, check whether the guy is also behaviorally awake. But at least we have a quantitative way of regulating, or of looking for the region that we should stimulate. The last thing is really absolutely preliminary. And like all the things I do, I always love the last one; I am absolutely in love with it, so I wanted to spend at least two minutes on it. It's what I call the third way, and I'm not intending to paraphrase Tony Blair, because I say it in a scientific context. The first way was reverse engineering: successful, but limited. The second way was resting state: I worked on that for five years, I got bored, but it was successful. The third way is the perturbative way. And it makes a lot of sense. Think of a glass of water: I can learn something about a glass of water by drinking the water. That's it.
Or, if it's magic water, I just leave the water at rest and observe what happens, and I learn a lot. But if I have the freedom to move the glass of water as I want, that is what we call perturbation. I can really see how this exploration of the repertoire, this shape out of equilibrium, behaves under different conditions. And in particular, we started to look at that under different consciousness conditions. The idea actually comes from Marcello Massimini in Milan, who has done this experimentally. He used TMS — TMS is a stimulator that we even have here, in the lab. You apply some stimulation and then you measure, in this case very simply with EEG, what happens. And you see that something happens: you apply a stimulation and you see the signal respond. A quantitative way of describing this — and this was the clever idea of Marcello — is to use a complexity measure: Lempel-Ziv compressibility. You try to compress the signals; if the signals are simple, the complexity is very small; if the signals are very complicated, so the perturbation has some strange effect, it is very large. With that complexity measure, which they call the perturbational complexity index, they were able to show that if you do this under awake conditions, you get one thing; under non-REM sleep, another; under REM, another. Then if you give drugs to the subjects, or you anesthetize them, or you study some diseases, it is always affected. So it's a good biomarker, which is not trivial. One thing — I love Marcello because he's an extremely nice guy and a gem — but I think he committed two errors, or forgot something. First, he studied everything in signal space. Signal space, if you have quiet water, is fine, but he's forgetting the resting state. To study the influence of a drop on quiet water is fine; but on the wavy sea, at the signal level, wow, it could be nasty. Second, the stimulations are relatively short and very small, because you are doing this with humans, and for ethical reasons you cannot burn the brain of a human. Just for your curiosity: the timing is not shown here, but in reality this is a couple of hundredths of a millisecond, really short. Therefore everything is nasty, because you are looking at the effect in signal space, in amplitude space, of a very small perturbation. So our idea was to be radical. What we do is take the brain, build a model of the brain using resting state, extract the signals, and extract the phases. For each time point we define a matrix that describes the state of synchronization, and with that matrix we can calculate the integration. So for each time point we get a measure of the integration. And when you look at the measure of integration — this is a cartoon, but you will see the reality soon, in the next slide — what you see, whether you perturb or do not perturb, green or red, is the wavy sea. In fact, you cannot distinguish them. But you can now apply a trick which is very standard in electrophysiology, the ERP trick: you start to average over trials. And you can do this massively, because you are doing it offline. Sorry, I didn't say that: you fit the model to the resting state, you take the model offline, and then you apply the stimulation in the model.
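A hedged sketch of this time-resolved integration measure: instantaneous phases via the Hilbert transform, a phase-locking matrix per time point, binarize, and take the size of the giant component. The fixed threshold is an assumption; the published analyses typically scan across thresholds.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def integration_timeseries(signals, threshold=0.8):
    """Time-resolved integration.

    signals: (n_regions, n_timepoints), assumed narrow-band filtered.
    At each time point: phase-locking matrix -> binarize -> size of the
    largest connected component (the 'giant component'), normalized by n."""
    phases = np.angle(hilbert(signals, axis=1))
    n, T = phases.shape
    integ = np.empty(T)
    for t in range(T):
        # phase-locking between every pair of regions at time t
        plock = np.cos(phases[:, t][:, None] - phases[:, t][None, :])
        adj = csr_matrix(plock > threshold)
        _, labels = connected_components(adj, directed=False)
        integ[t] = np.bincount(labels).max() / n
    return integ

# Averaging this over many simulated perturbation trials and timing how long
# integration takes to decay back to baseline gives the PILI described below.
```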
Now there is no ethics committee to stop you: you can really burn the model. And that's what we do. We apply massive stimulations and we see what happens after the stimulation. What we see at the single-trial level is the wavy sea; measuring in that type of space is not so meaningful. But if we average over trials and look only at what happens after the perturbation, it becomes very informative. Green is the normal state after the onset of the perturbation but without perturbation, so just what is going on anyway. And the perturbation is an imposition of synchronization, which in the Hopf model is very easy to do, because we force one node into the positive regime for a while and then we release it; or we force it into the negative regime and then release it. So we can have noisy stimulation or oscillatory stimulation. And what we now see, very consistently, is the effect of the perturbation: the integration increases and then starts to decrease. And the time that it takes to decrease back is what we call the PILI, the perturbation integration latency index. We wanted to have more letters than Marcello. And it turns out that the PILI is extremely informative. So we tried this — these are real data, I'll tell you in a moment, on humans awake or asleep, with fMRI, resting state. And what you see is the integration; now it's not the cartoon: with perturbation and without perturbation of the model. And what you see here is what happens after the oscillatory protocol or the noisy protocol, how you recover. These are real data, for awake and asleep. And what we speculated is that the time you need to recover in one condition or another — in this case sleep, but it could be anesthesia, or a disease, or whatever — is different. And that is exactly what happens. These are data from Liège, from Melanie Boly, and what you see is that if you fit the model, which is what is shown at the top — here we excite different numbers of nodes — you really get a difference between the wake condition and the sleep condition in the PILI. And the same happens with the data from Frankfurt, from Enzo Tagliazucchi, also a sleep analysis. So, just to sum up, we come back to the water, and we have learned that it's very interesting and challenging to link function — the wavy form of the water — with the resting condition, the perturbed condition, and so on. And we know that it is shaped, though not in a unique way, by the anatomical skeleton. And what we would also like to do in the future is to see how we can interact with this function: because we do a task, because we implant a DBS, because we do TMS, or we put the guy to sleep, or exercise, and so on; and see how this changes. And because this changes, there are also plasticity effects in the anatomical skeleton, in the fibers. And we would like to see how this changes, and because this changes, this, sorry, will change again. So, to go in the direction of interaction and plasticity and change in the human brain. So, many thanks to the people in my group. They are not all here, and they look fancy wearing sunglasses — that was my birthday, that's why. It's not that they are always like that. Usually they hate me; only for my birthday they are nice. And of course, all the data are coming from all possible places.
We have excellent collaborations, as I said, with Oxford, with UCL in London, also at the theoretical level with Marseille, Ashari Tambaline; we have an NIH project together with Maurizio Corbetta, now in the U.S., and a long-lasting collaboration with McIntosh and many others. Thank you very much for your attention.

So, comments, questions; there are microphones around. Do you take questions without a microphone? If I know the answer, yes. Okay.

Question: Okay, so thank you for the wonderful talk. I have a question. When you pass from the real human brain to the model of the brain, is it one brain to one simulation of the brain, or is it an aggregation of different brains?

Yeah, very good question, and not a naive one. We can do both. Actually, the first step is to analyze at the group level, because it's less noisy. In the sleep case, for example, it was like that: we take all the subjects, or all the trials that we have, and we build one model of this whole group of people, which we assume is homogeneous, because then we can compensate for many different kinds of errors; and then we do the whole experimentation with one model representing the group. When we try to go in the direction of neuropsychiatry, we do the opposite, because I don't care about the group. Well, I do care, because I can learn something about the disease, so sometimes we also do that in neuropsychiatry. But the final goal is really to say something concrete about a single person, which is an extremely difficult task, and to be honest with you, we are not sure it is possible. In fact, Mario and some people in my lab are studying how much information we can extract. In neuropsychiatry it's even worse, because you don't even have 10 minutes of resting state; sometimes you have four minutes, seven minutes, because you cannot convince a schizophrenic or a depressive patient to stay in the scanner for a long time. So how much can we say with four minutes of resting state, measured once? I am sure that if we could do these four minutes many times, we would do a good job, and we have some preliminary results. But with one session, probably not. But that is the main interest: the single-subject level.

Question: And I'm wondering, first of all, if there is any... I mean, dopamine also affects the long-term state of the brain, the entire state of it. So I'm wondering, because I don't know the literature well, if there is any attempt to relate this three-way interaction of DBS, dopamine, and resting state. And I'm asking also because the dopamine effect is actually visible in a small region of the brain, so maybe that can be used to reveal some regional functionality affected by the resting state.

I have to go back, sorry, don't move. It's possible, and actually we are trying to go in that direction. There is now, I think, only one scanner in Europe capable of this, which is going to be in Padua — actually, Maurizio Corbetta is moving, by the way, to Padua part-time. With that scanner, they can measure perfusion and BOLD at the same time, so that we will have a whole-brain map of at least GABAergic and glutamatergic information, and probably serotonin and dopamine, in the real brain. And then we can start to think about how to incorporate that type of information into the modeling. It's a challenge, but it's possible, and we are even writing proposals in that direction. The second part of the question I forgot, sorry.
Oh yeah, that is the case I mentioned before. This is the reason why actually the first observation in Parkinson's is not the tremor; in fact, not all Parkinson's patients have tremor. The main observation is that all those functions which are somehow affected by dopamine — like prefrontal memory, task switching, even decision making — are affected. So it's not astonishing that Parkinson's, contrary to what we believed before, is a totally diffuse disease. More questions? People from my lab are also allowed to ask. Yeah, Mark. I don't know; we can try to speculate. Actually, this is an idealization, because what we see — and Katharina Glomb is working on that — is that there is co-activation, so it's not a perfect switching. This image of one resting state network activating, deactivating, and another activating is an idealization; there are interactions, and probably they are not resting state networks but different spots of the brain that are activated and deactivated in different ways.

Question: [partly inaudible]

I mean, in the different models, being very near to the bifurcation means that you are very prone to change states. That's the mechanistic explanation. And why you want to be there — what we call in a fancy way the richness of the dynamical repertoire, or in a drier, more physical way, metastability — is because that is good for your functions, for switching from resting state to a functional state. Signed speculation, of course. The question about the peculiar resting state of Saturday mornings at seven a.m. we'll discuss at lunch, okay? So I think we'll finish now. We can test it on the court, if you want. Yeah. So, thank you, Tavo, thank you.