It's a great pleasure to be here today presenting my work. I will be presenting some measures that try to quantify the structure of the brain. We'll start with a very brief introduction. As you all know, mapping and characterizing the structural brain network might help us to understand how the brain works. In the literature, we can find many different measures; for instance, we can study integration and segregation with measures like the characteristic path length, the global efficiency, the clustering coefficient, the transitivity, the modularity, and so on. However, even nowadays, comparing networks is still a big challenge. And in this work, what we try to do, by using the mutual information measure, is to quantify the structure: how structured is this brain network?

But perhaps I should start by explaining what we mean when we say structure. I guess most of you will agree when I say that the brain network is not just a collection of random connections. There is definitely an organization in this network, there is definitely a structure, and this is what we try to measure.

I will go very briefly through the main measures of information theory, because they are the basis of my work, so it's very important to recall what these measures calculate. The entropy, which I guess most of you are already familiar with, is just the uncertainty of a random variable X that has some probability distribution. Then we also have the joint entropy, which is the uncertainty of a pair of random variables X and Y. We have the conditional entropy, which is the uncertainty of a random variable Y given a random variable X. And finally, the mutual information, which is the measure we've been using in this work, and which measures the shared information between two random variables. To get a more visual idea of what these measures calculate: if we draw the entropy of one random variable and the entropy of another random variable, the overlap of information between them is the mutual information.

On the other hand, another thing we've been using is the idea of a Markov chain. A Markov chain is a stochastic process that can be described by a transition probability, and this transition probability depends only on the current state and not on the earlier ones.

I will now explain the method that builds on all of these basics. We have a brain, and we have segmented this brain at a given scale; from each region, we get a node. These nodes are connected, and we can compute the connections using DTI or fMRI. Now let's imagine we have a particle. This particle starts a random walk in one of these regions of the connectome, and it visits one node after another according to the probabilities of the connections, randomly moving through the graph, as you can see in the image.

From this concept, we have generated our model. We have the connectivity matrix, which in our case is symmetric, and whose entries are essentially the numbers of connections from node to node. If we divide the number of connections of each node by the total number of connections, we get the probability of the node. And if we divide the number of connections between two nodes by the total number of connections of one of them, we get the conditional probability of moving to the other.
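In symbols, these are the standard definitions behind those four measures (my notation, not necessarily the slides'):

```latex
H(X)   = -\sum_{x} p(x)\,\log_2 p(x)                         % entropy
H(X,Y) = -\sum_{x,y} p(x,y)\,\log_2 p(x,y)                   % joint entropy
H(Y|X) = H(X,Y) - H(X)                                       % conditional entropy
I(X;Y) = H(Y) - H(Y|X)
       = \sum_{x,y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)} % mutual information
```

And here is a minimal sketch in Python of how the random-walk probabilities and the global mutual information can be obtained from a connectivity matrix. This is my own illustration, not the speaker's code; the function name and the choice of base-2 logarithms are assumptions:

```python
import numpy as np

def random_walk_mutual_information(A):
    """Mutual information (in bits) between consecutive states of a
    stationary random walk on an undirected network with connectivity
    matrix A. Sketch only: assumes A is symmetric, non-negative,
    and has no isolated nodes."""
    A = np.asarray(A, dtype=float)
    degree = A.sum(axis=1)              # (weighted) degree of each node
    p_node = degree / degree.sum()      # stationary probability p(i)
    p_joint = A / A.sum()               # joint probability p(i, j)
    p_indep = np.outer(p_node, p_node)  # independence baseline p(i) p(j)
    mask = p_joint > 0                  # zero-probability terms contribute 0
    return np.sum(p_joint[mask] * np.log2(p_joint[mask] / p_indep[mask]))
```

For a stationary walk on an undirected network, the marginal distribution of the next node equals that of the current node, which is why a single p_node vector serves for both.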
Using this theoretical framework, you can now see that we have probabilities of nodes and conditional probabilities, which means that we can directly apply the mutual information to this matrix. And this is how we interpret this mutual information: for us, the mutual information measures the information that we gain when we know which was the previous node. For instance, we will have high values if there is a high correlation between nodes, which means that from one node I can also predict the connections of the following node. And we will have very low values when this correlation is very low; for instance, in a random graph it's very difficult to see how the nodes correlate with each other, because the connections are random, so it's very difficult to predict. The idea here is that the mutual information is the uncertainty of the state when we have no knowledge, minus the uncertainty when the past is known.

So this is the global measure; it gives a single value for each network. From this measure, we have also used two decompositions, which give a single value for each node. First, we have the mutual surprise. This measure is not new, and I haven't invented the name, but it essentially shows how surprising the connections of a node are. It has high values when the connections are very surprising, something that we don't really expect, and it has very low values when the node is connected to the most likely nodes. On the other hand, we have the mutual predictability. In this case, we get high values when the connections of a node reduce the uncertainty of the network, and low values when the connections of the node increase the uncertainty of the network. And if we take the weighted sum of these two local measures, we recover the mutual information, the global measure that we've been using as well.

Now I will show some results. First, we tested our measures with synthetic networks. We used the Brain Connectivity Toolbox, which is a great toolbox for MATLAB and very simple to use, and we decided to generate four different types of networks: random networks, and then lattice, ring lattice, and small-world networks. These last three types all have characteristics and properties that we can also find in brain networks. We generated data sets with different numbers of nodes; for instance, we generated networks with 128 nodes and with 256 nodes, increasing the number of edges. We also generated another data set preserving the density, because the density plays an important role in these measures: one set with a density of 0.2 and another with 0.4, where we increase the number of nodes and the number of edges together in order to preserve the density.

These are the results with the synthetic networks. As you can see, when we have a low number of edges there is a clear distinction, especially when we have 256 nodes: the random networks have a clearly lower mutual information.
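For concreteness, here is one way the two local measures could be computed. I am assuming they correspond to the specific-surprise and specific-information decompositions of mutual information (in the sense of DeWeese and Meister), since those match the properties described in the talk: the surprise is always non-negative, the predictability can go negative when a node's connections increase the uncertainty, and the probability-weighted sum of either one recovers the global measure. A sketch, not the speaker's implementation:

```python
import numpy as np

def local_decompositions(A):
    """Per-node 'mutual surprise' and 'mutual predictability' of a
    random walk on connectivity matrix A. Sketch: assumes these are
    the specific-surprise / specific-information decompositions, and
    that A is symmetric with no isolated nodes."""
    A = np.asarray(A, dtype=float)
    degree = A.sum(axis=1)
    p = degree / degree.sum()          # stationary probability p(i)
    P = A / degree[:, None]            # transition probabilities p(j | i)

    def entropy(q):
        q = q[q > 0]
        return -np.sum(q * np.log2(q))

    # Surprise: KL divergence of node i's transition row from p (>= 0);
    # high when i connects to nodes that are unlikely a priori.
    surprise = np.array([np.sum(r[r > 0] * np.log2(r[r > 0] / p[r > 0]))
                         for r in P])
    # Predictability: H(next) - H(next | current = i); negative when a
    # node's connections increase the uncertainty of the walk.
    predictability = np.array([entropy(p) - entropy(r) for r in P])
    return p, surprise, predictability

# Both weighted sums recover the global measure:
#   p @ surprise == p @ predictability == random_walk_mutual_information(A)
```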
And the other three types of networks, which have a bit more structure, have a larger value. Obviously, if we keep increasing the number of edges, the density of the network increases, and in the end all the graphs tend to have the same value, because the networks lose their characteristic properties. At some point, if we keep adding edges and edges to a small-world network, we end up with essentially a random network, because there is no small-world structure anymore. And if we preserve the density, we can see that either for 0.2 or for 0.4 the measures tend to have quite similar values.

For the next results, we used human networks. We have a data set with structural networks; this information comes from DTI, and these were matrices published in a quite well-known paper. It contains ten subjects, and for these ten subjects we have the networks at different numbers of partitions. Then we have another data set that we took from the Human Connectome Project. This is a massive data set, and it's really nice: they provide the data for free, including the already-computed connectivity matrices. In this case we have 461 subjects and, again, different partitions. And what I want to say here is that I didn't process any DTI or fMRI myself; I took the networks directly as provided, to avoid errors.

In the case of the structural data set, we can see that the networks are very robust. This was a property we were looking for, because what we don't want is to receive a new network and suddenly get a completely different value of mutual information; the measure would then not be useful. So we can see that for different numbers of partitions, the values are quite stable. However, as we increase the number of regions, the mutual information also increases, which means that we have to be really careful when we compare networks with different numbers of partitions. Then, with the functional data set, what we could see is that the mutual information is always very similar, and that's because the density in these networks is also very similar. These are the same two effects that we saw with the synthetic data sets.

And now, some examples of the extreme cases that we can find in the structural data set. In the first image, we can see all the connections; to visualize this network, I had to apply a threshold, because there were too many connections and it wasn't possible to see the network otherwise. For the mutual surprise, the thalamus proper is the region that gave us the lowest value, and that's because its connections are well balanced with the probability of the next node: if the probability of going to a node is very similar to the probability of being in that node, we can say that's not surprising. The other extreme, a very high value, showed up in the transverse temporal region. We could see that the connections of this node were very surprising, because whichever direction you take from it, the probability of being in the node you arrive at is very, very low. That was kind of surprising: this region has very specific connections.
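As a rough analogue of the synthetic experiment, the comparison could be reproduced with networkx generators in Python instead of the Brain Connectivity Toolbox. The generator choices and parameters below are mine, not the speaker's exact setup, and note that the densities are not matched across generators here (the talk matches them in the density-preserving data set):

```python
import networkx as nx

# Reuses random_walk_mutual_information() from the earlier sketch.
n, k = 256, 8                                  # nodes, mean degree
graphs = {
    "random":       nx.gnm_random_graph(n, n * k // 2, seed=0),
    "lattice":      nx.grid_2d_graph(16, 16),  # 16 x 16 = 256 nodes
    "ring lattice": nx.watts_strogatz_graph(n, k, p=0.0, seed=0),
    "small-world":  nx.watts_strogatz_graph(n, k, p=0.1, seed=0),
}
for name, g in graphs.items():
    A = nx.to_numpy_array(g)
    print(f"{name:12s} MI = {random_walk_mutual_information(A):.3f}")
```

The expectation, following the talk, is that the random graph shows the lowest mutual information at low edge counts, with the differences washing out as the density grows.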
And the last example: these are the results with the mutual predictability, the second local measure. We got very low values in the putamen, because, as you can see in this image, it's really difficult to guess, according to the probabilities of these connections, which node the particle will visit next. On the other hand, there is a very clear example in the temporal region, where we can see a very, very strong connection to one specific node, quite different from the others. In this case, we can say that we have a very high capacity for prediction, because it's very clear which node will have the highest probability on this path, which also means there is very low uncertainty.

And now to conclude my work: I have shown a proof of concept that the mutual information can be used to quantify the structure of different model networks. You have seen that I have interpreted the connectivity graphs as a stochastic process, where particles are modeled as if they were doing a random walk. The local measures can characterize new properties of the network nodes that hadn't been shown before. And obviously, we now need a good clinical study; the data that we have is not enough to draw very strong conclusions. But we have seen very good performance, and we have a feeling that these measures could be used to distinguish, perhaps, networks with a disease from networks of healthy subjects. I would like to thank you for your attention, and I'm happy to take questions.

[Question] If you have a different resolution of the network, you also have a different degree for each individual node, and also different degrees within the network, so some hub nodes and normal nodes. To what extent does that influence your measure?

[Answer] So yeah, the measure is influenced by the density of the network in general. I'm not sure how strong the influence of the degree is, for instance. But yes, we have to be really careful when comparing these networks; we always have to take the density into account. As I have shown, if we want to compare different subjects, we really have to be careful to generate the networks with the same density. So obviously, if we want to do a good study and see whether these measures can be useful, the networks have to be generated in the same way, following the same protocols, with the same partition, probably the same nodes, and this sort of thing. So no, it's not that easy; we are not proposing a measure that can suddenly be used for any kind of network and any number of partitions. Okay, thanks.