pleasure to introduce Dr. Isaac Neri. Isaac got his PhD from KU Leuven in 2010, I believe, on disordered systems. So he's a theoretical physicist, a mathematical physicist, working at the interface between non-equilibrium thermodynamics and stochastic processes, but he has worked on a broad range of subjects, including disordered systems and random matrix theory. After his PhD, he had several postdoctoral positions in Europe, including the prestigious ELBE postdoctoral fellowship at the Max Planck Institute for the Physics of Complex Systems in Dresden, and now he's a senior lecturer at King's College London. So Isaac, it's a great pleasure to have you here at OIST. Okay, thank you very much, Simone, and let me also take the opportunity to thank Simone and Samuel for being such great hosts at OIST, and to thank the administrative team for the efficient organization that made everything so smooth. So in this talk, I will talk about the spectra of complex networks. The main idea of this kind of research is to understand how network topology affects dynamical systems defined on graphs, examples being neural networks, ecosystems, basically any large system defined on a graph. The approach that we follow is based on statistical physics. In statistical physics we study systems with randomness. These can be small systems, or they can be complex systems, and we use probability theory as the classical theory to study systems with randomness in the natural sciences. As an example of a small system, here there is a cilium attached to a cell, and you see that its motion is stochastic. This is a typical statistical-physics problem: we want to understand the fluctuations of this cilium. What's interesting is that this motion is not thermal, because there are molecular motors inside. These are non-equilibrium situations, and we cannot apply standard thermodynamics to them; we need to develop non-equilibrium thermodynamics. So this is one topic of interest. But in this talk, I will talk about systems which are random for entirely different reasons. These are large systems, systems consisting of many components. The reason they're random is that if you look at the interactions, and this is an example of such a complex system, you don't see any clear symmetry, you don't see any clear structure. So the way we model such a system is as a random system. This is brilliantly visualized by Barrett Lyon. This is the internet. Each node is a network of computers, and the connections are data connections between these computers. We also see these hubs here: the systems are not entirely random, you can see certain structures appearing. The way we model this kind of complex system is using graphs. Graphs are fairly simple objects: a collection of nodes, these nodes are connected by edges, and that's it. To study graphs, we like to use linear algebra, so we represent these graphs in terms of matrices. This is commonly done with the adjacency matrix. It's simply a matrix whose entries are either zero or one: an entry is one if there's an edge pointing from a node i to a node j, and zero if that edge is absent. It is often interesting to add weights as well, because we want to quantify the interaction strengths. And there are two types of weights: there are weights J_ij associated with the edges, which appear as the off-diagonal entries of the matrix, and there are weights associated with the nodes, denoted by d_i, and these appear as the diagonal entries of the matrix.
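To make the matrix picture concrete, here is a minimal numerical sketch, not from the talk: the graph, weights, and numbers are invented for illustration. It builds a weighted adjacency matrix with the edge weights J_ij off the diagonal and the node weights d_i on the diagonal.

```python
import numpy as np

# Toy directed graph: edge (i, j) carries weight J_ij.
n = 4
edges = {(0, 1): 0.5, (1, 2): -1.0, (2, 0): 0.8, (3, 1): 0.3}
d = np.array([-1.0, -0.5, -2.0, -1.0])   # node weights d_i (illustrative values)

A = np.zeros((n, n))
for (i, j), w in edges.items():
    A[i, j] = w                # nonzero entry iff there is an edge from i to j
A[np.diag_indices(n)] = d      # node weights sit on the diagonal

print(A)
```

The convention A[i, j] = J_ij for an edge from i to j follows the definition just given; the meaning of the diagonal entries will reappear later in the talk, when these matrices generate linear dynamics.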
Oh, yeah, please. Sorry, I think j connecting to i will also have a one in the adjacency matrix, on both sides? Yeah, absolutely. So this is for any pair of nodes i, j: at some point you have the entry (i, j), and at some other point you have the entry (j, i). So yes, you have both, and importantly, they can be different: J_ij is not necessarily equal to J_ji. So we use these matrices to study these networks. Let me give you some examples of such networks. One example is food webs. In this case, each node represents a species, and there's a link between two species if one species feeds on the other. These links in this case carry antagonistic, sign-antisymmetric interactions, and that's because if, let's say, a gull eats a shrimp, that's bad for the shrimp and good for the gull. So these interactions are positive in one direction and negative in the other. This is an example for a bay, which is quite a rich ecosystem. It's fairly simple to get this network, but it's very difficult to get the interaction strengths in such networks; these are in general not known. Let me give you another example of such a complex system. This is a client-supplier network; it's a financial network. In this network, each node is a firm, and there's a link between two firms if firm A supplies goods to firm B. In these networks, it's fairly simple to get the interaction strengths: these are simply the amounts of goods supplied from one firm to the other. This network is large; it has about 10,000 firms, and this is just a small part of the network. However, this data is not publicly available, because such financial data are provided by private companies. That's sometimes an issue with financial data. And then let me give you a final example, which is interesting, which is neural networks. In this case, each node is basically a neuron, and these neurons are connected by synaptic interactions, in which chemical messengers are transported; that's how these neurons interact. What's interesting about these networks is that they are directed: about 90% of the interactions are directional, the others are non-directional, so these are largely directed networks. In this case, we see the neural network of C. elegans. This is a worm with a fairly simple neural network of about 400 neurons. Researchers actually obtained the entire network topology, because you can slice this worm, make images of the slices, and detect the connections between the neurons. So in this organism, the entire neural network is known. However, as you can imagine, for more complicated organisms this is not possible anymore. For example, the fruit fly has about 100,000 neurons, and mammals have billions of neurons; it's not possible to get the full network topology anymore. So these are some examples of complex systems that can be modeled in terms of graphs. Let me now go to the theoretical part. From the theoretical side, most studies have concentrated on all-to-all interactions. These are graphs in which every node interacts with every other node. This is considerably simpler to study, because the law of large numbers and the central limit theorem apply in these systems to some extent, and this simplifies their study considerably. So much is known about such systems.
What is unfortunately much less known is complex systems defined on sparse random graphs, and that's because these systems are highly heterogeneous: each node behaves differently from every other node. But in recent years there have been more and more studies of these kinds of systems, and I will discuss that in this talk. So let me first start with what is somehow the paradigmatic model in complex-systems theory, which is the spin-glass model. I will discuss what is known in the case of all-to-all interactions, and then I will discuss dynamical systems. Let me go through this in some detail. So let me start with the spin-glass problem. This is a very fruitful problem: many of the methods in complex-systems theory were developed on this model. It was also the first model I worked on in my PhD. Let me explain how it works. In this case, you have a graph, and you assign weights to the edges of the graph. These weights can be either plus one or minus one; you see here an example. Then we can assign variables to the nodes of the graph. Again, they are binary and can be either plus one or minus one. For example, this is a possible configuration of the network: you see we assign this node plus one and this node minus one, and so forth. There exist 2^N such configurations. Next, we associate a cost H to each configuration, and this goes as follows. If the weight of an edge, in this case it's one, equals the product of the variables at the endpoints of the edge, then we add minus one to H. On the other hand, if the weight of the edge is opposite to the product of the variables at the endpoints, we add plus one to H. That's how this works. For example, in this case we have seven edges which are satisfied, meaning that the product of the endpoint variables equals the weight: for example, here it's minus one and the weight is minus one, here it's one and the weight is one. So we get minus seven, and then we have two edges which are unsatisfied, which give the plus two here. If we add this up, we get that the total cost of this configuration is minus five. This, however, is not the minimal value: you can find a configuration for which the total cost is minus seven, and that's the minimal configuration. So the spin-glass problem is the following: find the minimal value of H for a given network and a given assignment of the weights J_ij. In the spin-glass problem, the weights are quenched parameters: we draw them IID from a given probability distribution, here plus one with probability one half and minus one with probability one half. The sigmas are the variables, and we want to find the assignment of the sigmas that minimizes H for a given configuration of weights. I can tell you this is a difficult problem, because, as you see, the number of configurations scales exponentially in N. There are 2^N configurations, so naively you would have to go through all of them to find the minimal value: it's a problem of exponential complexity because of the large number of configurations. You could also consider doing some gradient descent, but there are lots of local minima, so that doesn't work either. In fact, let's look at the all-to-all problem. If you have all-to-all interactions, you set the connectivity variables C_ij to one, the topology is trivial, and this becomes the cost function. In this case, it was shown by Barahona that this problem is NP-hard.
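To make the exponential search space concrete, here is a brute-force sketch; the toy graph and weights are invented, not the slide's example. It enumerates all 2^N spin configurations and returns the ground state of H = -sum over edges of J_ij * sigma_i * sigma_j.

```python
import itertools

# Toy spin glass: each tuple is (i, j, J_ij) with J_ij in {-1, +1}.
edges = [(0, 1, +1), (1, 2, -1), (2, 0, +1), (2, 3, -1)]
N = 4

best_H, best_s = float("inf"), None
for s in itertools.product([-1, +1], repeat=N):       # all 2^N configurations
    H = -sum(J * s[i] * s[j] for (i, j, J) in edges)  # -1 per satisfied edge
    if H < best_H:
        best_H, best_s = H, s

print("ground-state energy:", best_H, "configuration:", best_s)
```

This is only feasible for very small N; each added spin doubles the running time, which is exactly the exponential complexity just described.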
So this is, in effect, an exponentially hard problem in the system size, and with an ordinary computer you wouldn't be able to find the minimum. However, and this is really remarkable, around 1979-1980 Parisi considered, in the limit of an infinite-size system, the average value of the minimum of H, where the average is over the J_ij weights: you take the minimum over all possible configurations and then average it over the weights. He developed a method that gives you this minimal value. So even though the problem is NP-hard, you can determine theoretically the minimal value of this cost function, averaged over the weights. Parisi and co-workers developed these methods, the replica method and the cavity method, to study this problem, and then about 25 years later Talagrand proved this rigorously. There has been some fruitful mathematics coming from this as well. This problem gave us lots of insight into complex systems, and in fact, in 2021 Parisi was awarded the Nobel Prize in Physics for groundbreaking contributions to the theory of complex systems, based on the methods that solved this spin-glass problem. So this is for all-to-all interactions. However, what about random graphs? Well, here it's a bit disappointing, I can tell you, or maybe it's good, depending on your perspective, because for random graphs we don't have this estimate yet of the average value of the minimum. That's an open problem. This illustrates that we know much more about all-to-all interactions than about problems on random graphs. All right, so let's now go to dynamical systems; this is the second problem. If you want to study, say, a neural network or an ecosystem, in general you don't have a minimization of a cost function. Instead, what you have is a dynamical system. So you would have a differential equation of the form dx_i/dt = f_i(x_1, ..., x_N). Here x_i is a variable representing the state of node i. For example, it can be the population abundance of a species, or the firing rate or the potential of a neuron, and so forth. The equation tells you how the variable x at node i changes as a function of the network structure: on the right-hand side, you have the influence of the other nodes in the network. For example, if you study an ecosystem, this will be a set of Lotka-Volterra equations; if you study a neural network, you have a set of firing-rate equations, and so forth. This is, in general, a very difficult problem to solve, because it's a set of non-linear differential equations, and you don't have ready tools to solve that. So instead, what we typically do is look at something much simpler, try to solve that, and then hopefully come back to the more complicated problem and get some insight into it. So here, if we study dynamical systems, the simplest possible problem is a linear problem. In this case, the function f_i becomes a linear function: it's basically the weighted adjacency matrix multiplied by the vector x, so dx/dt = A x. That's a linear dynamical system. And here you as well see the meaning of these edge weights and diagonal weights: the diagonal weights basically tell you how fast the variable x_i in isolation decays to zero, whereas the edge weights tell you the influence of neighbouring nodes on the rate of change of x at node i. That's the concrete meaning of the edge weights in these dynamical systems. So why is a linear system tractable?
Well, because we have the explicit solution of a linear dynamical system. If the matrix A is diagonalizable, we can express x(t) as a sum over the right eigenvectors v_j of A, with prefactors that are exponentials of the eigenvalues of A times time: x(t) = sum_j c_j e^{lambda_j t} v_j. This is an explicit expression for x in terms of the spectral properties of the matrix A. So what we can conclude from this is the following: if you understand how network topology influences the spectrum of a matrix, then you can understand how network topology determines the dynamical system through this equation. This is an approach often followed in this field. If you look at this equation, you can notice one particular thing, which is that when time is large, one term in the sum dominates: the term whose eigenvalue lambda_j has the largest real part. That brings us to the concept of a stable matrix. If we order the eigenvalues by their real parts, from large to small, then we have two possibilities. The first is that the eigenvalue with the largest real part is negative. In this case, in the limit of large times, the state of the system converges to zero. Such a matrix we call stable: a stable matrix is a matrix for which all eigenvalues have negative real parts. The opposite case is when the largest real part is positive. In that case, in the limit of long times, the norm of this vector diverges. A matrix A with this property is called unstable. These are the two cases. As I mentioned before, we have systems with all-to-all interactions, which are well understood, and then we have networked systems, which are much less understood. So let's have a look at what that means for linear systems. In the case of a system with all-to-all interactions, what we do again is set the connectivity variables C_ij to one, because everyone interacts with everyone, and we draw the edge weights J_ij independently and identically from a given distribution p(J). That defines the linear system with all-to-all interactions. As we have seen, if we know the spectral properties of this matrix A, then we can tell what happens in the long-time limit of this linear dynamical system. So what are the spectral properties of a matrix A with all-to-all interactions? Well, the eigenvalues occupy the complex plane, because the matrices are non-symmetric, so the eigenvalues have an imaginary part as well as a real part. And what happens is that these eigenvalues uniformly occupy a disk, whose radius is the square root of the size of the matrix times the variance of the J variables, sqrt(N Var[J]). So that's the radius of the disk, and the eigenvalues are distributed uniformly on it. This problem in fact has a rich history. This was first found by Ginibre in 1965, who studied the case of Gaussian random variables. Then there was a very influential paper by May in 1972, where it was conjectured that this is true for any independent and identically distributed random variables, so that it is much more generally valid. Then there was a whole sequence of intermediate results by mathematicians, and in 2010 it was proven rigorously that these eigenvalues indeed occupy this disk uniformly. Let me illustrate this, and the stability criterion, with a small numerical sketch, and then come back to this paper by May from 1972.
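A minimal sketch of both points, assuming Gaussian IID weights and a uniform self-regulation d on the diagonal (all parameters are illustrative): the bulk of the spectrum should fill a disk of radius sqrt(N Var[J]), and the linear system dx/dt = A x is stable exactly when the largest real part of an eigenvalue is negative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma, d = 500, 0.03, 1.0
J = rng.normal(0.0, sigma, size=(N, N))   # IID edge weights, mean zero
A = J - d * np.eye(N)                     # self-regulation shifts the disk left by d

lam = np.linalg.eigvals(A)
print("predicted bulk radius:", np.sqrt(N) * sigma)   # circular-law radius
print("max Re(lambda):", lam.real.max())              # stable if < 0
# May's criterion in this setting: stability requires sqrt(N) * sigma < d.
```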
So he was studying this in the context of linear dynamical systems, and what he concluded is the following: since this leading eigenvalue increases as a function of N, if a system is large and has to be stable, then this variance has to be very small. So you can't have a very large system which is also very heterogeneous. This is known as May's stability-complexity trade-off. But that's for all-to-all interactions, so let's now have a look at what happens when we study sparse systems, random graphs. Let me introduce the main models used to study sparse systems. The paradigmatic model here is the Erdős-Rényi graph. This is a model with two parameters: the mean out-degree c and the number of nodes n, and that's it. Given these two parameters, you generate graphs as follows. For every ordered pair of nodes i and j, there is a connection from i to j with probability c/n, and this connection is absent with probability 1 - c/n. You do this for every pair of nodes in the graph. With Python, say, you can easily generate such graphs; a short sketch follows a bit further on. Here you have an example: an Erdős-Rényi graph with mean out-degree 1.5 and 100 nodes. That's an example of such a random graph. One issue with the Erdős-Rényi graph is the following. If you plot the degree distribution, the distribution of the number of links incident to a node, you get a Poisson degree distribution. However, if we remember the picture of the internet, real-world systems have these hubs: very large nodes with many connections. So we would like to change the degree distribution. This yields a different kind of ensemble: random graphs with a prescribed degree distribution. These graphs are generated as follows. You assign to each node an in-degree and an out-degree, drawn from a certain prescribed distribution, and then, given these prescribed in- and out-degrees, you randomly match the nodes. This generates the sort of random graph you can see here, for the case of a power-law random graph. This is already significantly better, because you see that hubs start to appear: in power-law random graphs you have these hubs with very large degrees, and the degrees of these hubs increase with system size. So these are the models whose spectra we will study. Let me then define the matrix A. We now also assign weights to these graphs, and we do that in the simplest way. The graph, as discussed on the previous slide, is a random directed graph with a prescribed distribution of in- and out-degrees. The edge weights are assigned independently and identically from a distribution p(J), and the diagonal node weights are likewise assigned independently from a distribution p(D). That defines a random matrix associated with a sparse random graph, and we would like to understand the eigenvalues of that matrix. If you now diagonalize such a matrix, you typically get this kind of spectrum. Each of these dots represents one eigenvalue of one matrix representing a sparse random directed graph. These eigenvalues are complex-valued because the graph is directed; on the axes you have the imaginary part and the real part. What you see is that these eigenvalues are confined within a circle, and the radius of the circle is now given by the following quantity.
It's the square root of c* times the average of J squared, sqrt(c* <J^2>). And c* here is not the mean out-degree; it's slightly more subtle. It's the mean out-degree when you randomly pick an edge in the graph: instead of the out-degree of a randomly picked node, it is the out-degree of an endpoint of a randomly picked edge. This line here is obtained in the limit where the graph is infinitely large. It is based on a cavity theory similar to the methods developed for spin glasses. You can then show that in the infinite-size limit the spectrum is continuous and is confined by the circle with this radius. There is another property you can appreciate here, which is that there is an isolated eigenvalue, an eigenvalue outlier. We also have an analytical expression for that: it's simply c* times the average of the edge weights. So that's what we get for the spectrum of such random directed graphs. You can now wonder what happens in the middle here; you see that the spectrum is no longer homogeneous. But we have as well this theory to get the spectral distribution of infinitely large graphs. Here you have an example of such a spectrum: here you have the discrete spectrum of a finite matrix, and this is the corresponding spectrum of the infinitely large random graph, which is continuous because it's an infinitely large system. So then we can as well add... oh yeah, please. So the isolated one in the middle is a zero mode, and then you have the continuous spectrum of the graph? Yes. In the middle, in fact, there's a delta peak, and the weight of the delta peak is the fraction of nodes that do not belong to the strongly connected component. So there is as well an interesting relation between the spectrum and the topology of the graph. There is indeed a delta peak in the middle; we can actually see that here. So we can as well add diagonal disorder. In this case, we add uniform diagonal disorder between zero and five. You can see this dark line here, which is reminiscent of this diagonal disorder in the graph. And as well, using this theory, you can get analytical expressions for this boundary: at least you can show that the boundary solves this simple equation here, and the outlier solves this equation. So you can get equations, in the limit of infinitely large graphs, for the boundary of the continuous spectrum and for the outlier. So let's now look at stability; let's use this to look at the stability of linear dynamical systems. We have seen that this is determined by the leading eigenvalue, the eigenvalue with the largest real part, so we have to look at the eigenvalue which is largest, which in this case is the outlier. You can then get this kind of stability diagram. This diagram tells you for which parameters the random graph gives a stable system, and when it is unstable. Here on the y-axis we have the mean out-degree; on the x-axis we have the variance of the edge weights, that is, how heterogeneous the edge weights of the graph are. What you see is that if the graph has low connectivity and the edge weights have low heterogeneity, then the system is stable. However, if you increase the connectivity or you increase the heterogeneity, then the system becomes unstable.
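Before moving on, here is a sketch that generates the weighted directed Erdős-Rényi ensemble described above and compares its spectrum with the two formulas just quoted. For the Erdős-Rényi case the edge-biased degree c* coincides with the mean out-degree c, so I take the bulk radius as sqrt(c <J^2>) and the outlier as c <J>, reading these off the talk; the weight distribution is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 2000, 4.0
mu, s = 1.0, 0.2                                   # mean and std of edge weights

C = (rng.random((n, n)) < c / n).astype(float)     # each pair linked with prob. c/n
np.fill_diagonal(C, 0.0)                           # no self-loops
A = C * rng.normal(mu, s, size=(n, n))             # weights only on existing edges

lam = np.linalg.eigvals(A)
print("mean out-degree:", C.sum(axis=1).mean())    # ~ c (Poisson out-degrees)
print("predicted bulk radius:", np.sqrt(c * (mu**2 + s**2)))
print("predicted outlier:", c * mu)
print("largest |lambda|:", np.abs(lam).max())      # should sit near the outlier
```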
You can get this analytical diagram from studying the spectral properties of these random graphs. And what's kind of remarkable is that the stability does not depend on the degree fluctuations; it only depends on the mean out-degree. That was something we found surprising. It also doesn't depend on the higher moments here. Then there is one more thing which I hope you can appreciate, which is the following: the leading eigenvalue of a random directed graph is finite. If you take the limit of N to infinity, the spectrum is confined in a finite region of the complex plane. That is in fact surprising, because the norm of the matrix diverges in the limit of infinitely large systems, yet the leading eigenvalue stays finite. So that's a bit weird. On the other hand, if you look at a non-directed system, then the leading eigenvalue diverges in the limit of infinite size. So you see that, depending on the nature of the interactions, you can have spectra confined in a compact region of the complex plane or spectra which stretch out to infinity, just by changing the directedness or the signs of the interactions. We call the first kind of system absolutely stable, because if the leading eigenvalue is negative, it will be negative for all system sizes; the system size does not play a big role in the stability of such systems. For the other kind of system we speak of size-dependent stability, because if a system is stable at a certain system size, it will become unstable if it gets large enough; eventually, systems that are large enough will get unstable. So you can now ask the question: directed graphs are absolutely stable, the leading eigenvalue is finite; non-directed graphs have a divergent leading eigenvalue. What about other sign patterns? For example, we have seen that ecosystems have antagonistic interactions. Does that affect these two scenarios? Let's therefore look at the following cases. We have these antagonistic systems, which have sign-antisymmetric interactions; these are predator-prey interactions, as in ecosystems. Just to note that we are only fixing the sign of the interactions here, not their absolute values: the weight in the positive direction can differ from the weight in the negative direction. We are only fixing the sign, so it's not the same as an antisymmetric matrix. Then we have the oriented case, the case of unidirectional interactions, as in neural networks. And then we have the case that we call a mixture. In a mixture, we have three types of interactions: the sign-antisymmetric interactions, competitive interactions, which are minus-minus, and what we call mutualistic interactions, which are plus-plus. A graph that has a combination of these we call a mixture graph. So now let's look at the leading eigenvalue in these different cases. For this, we consider the following kind of matrix. It's very similar to the matrix we discussed before; the only difference is that the graph is non-directed, but the J_ij are now correlated: J_ij is correlated with J_ji, because we draw the pair (J_ij, J_ji) from a joint distribution. This allows us to have sign-antisymmetric interactions and so forth. Apart from that, the model is the same. So then we studied this problem and we got this graph. Let me go through it.
So here we have the average real part of the leading eigenvalue, that is, of the eigenvalue with the largest real part. You see that for the mixture ensemble, which has a mixture of interactions, if you increase the size of the graph, the number of nodes, this leading eigenvalue diverges. On the other hand, if the graph has sign-antisymmetric, antagonistic interactions, then if you increase the size of the graph, the leading eigenvalue stays finite. We completely didn't expect that. When we found this, we were really awestruck, because again, the norm of this matrix diverges, but the leading eigenvalue is finite. That's really weird. So here you have a finite leading eigenvalue. And let me as well stress that the markers here are direct diagonalization, while the solid line is theory in the infinite-size limit; we have this cavity theory rooted in spin-glass methods, and you see it matches direct diagonalization very well, which confirms that this is not a finite-size effect: in the infinite-size limit, it is really finite. Oh, yes, please. Is it constant? Yeah, it's constant; I think it's maybe two or something like that, I don't remember the exact number, but it's finite and constant. So it doesn't scale with the size of the network? No, it doesn't. But still, the degree distribution is unbounded, it's Poissonian, and that makes the norm of the matrix diverge, because you get nodes of arbitrarily large degree when N becomes large. That's why the result is non-trivial; if the degrees were bounded, it would be trivial. So this is the point. Let me as well stress that this dashed line is not theory; the dashed line is just a guide to the eye. However, we can do a theory for the mixture case, and we find this divergent leading eigenvalue, so the diagonalizations are consistent with the theory in the mixture case as well. Then there is another interesting result I want to highlight. Normally, if you increase fluctuations in a system, it becomes less stable. But interestingly, in the antagonistic case, it's the opposite: in the antagonistic ensemble, if you increase the degree fluctuations, the stable region increases, as you can see here for mean out-degree four and here for mean out-degree three. Whereas in, say, a directed system, if you increase the degree fluctuations, the stable region becomes smaller. That's what we usually expect in complex systems: you increase fluctuations, systems become less stable. But remarkably, in antagonistic systems, it's the opposite. That's also quite interesting. So we can now have a look at the full spectrum. We have seen that the spectra of these antagonistic matrices are finite, confined in a compact region of the complex plane. So let's look at what the spectra fully look like. This is the spectrum of the antagonistic matrix for mean out-degree four, and this is for mean out-degree two. You see how it is confined in the complex plane. This line here is again the theory for infinitely large graphs. And you see there is this peculiar thing that at low degrees the leading eigenvalue is imaginary; there's this re-entrant behavior here, so there are some interesting things happening there too.
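The kind of finite-size experiment behind these markers can be sketched as follows; this is a rough stand-in with invented weight distributions, and the talk's exact ensembles and parameters may differ. Draw a sparse non-directed graph, sample the pair (J_ij, J_ji) either with opposite signs (antagonistic) or with independent signs (mixture), and track the largest real part of the eigenvalues as N grows.

```python
import numpy as np

def leading_eig(n, c, antagonistic, rng):
    """Largest Re(lambda) of one sampled sparse sign-constrained matrix."""
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < c / n:                  # undirected edge i <-> j
                a, b = rng.random(), rng.random()     # interaction magnitudes
                if antagonistic:
                    s = rng.choice([-1, 1])
                    A[i, j], A[j, i] = s * a, -s * b  # opposite signs only
                else:                                 # mixture: independent signs
                    A[i, j] = rng.choice([-1, 1]) * a
                    A[j, i] = rng.choice([-1, 1]) * b
    return np.linalg.eigvals(A).real.max()

rng = np.random.default_rng(2)
for n in [100, 200, 400, 800]:
    ant = np.mean([leading_eig(n, 4.0, True, rng) for _ in range(4)])
    mix = np.mean([leading_eig(n, 4.0, False, rng) for _ in range(4)])
    print(n, "antagonistic:", round(ant, 2), "mixture:", round(mix, 2))
```

At these modest sizes one should only read the trend: if the talk's result holds, the mixture value keeps creeping up with n while the antagonistic one saturates.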
Let's now look at the mixture case to understand what's actually going on. Here, again, the finite-size results are direct diagonalizations of finite-size matrices. But if we look at the theory, which is the solid line, you see that here tails start to emerge on the real axis. That's the reason that the leading eigenvalue diverges: you have these tails popping up on the real axis. So you could now be happy, because we have a theory that describes the results; it fits the direct diagonalizations. But in fact, when we got this, I wasn't really happy, because we still didn't really understand what's going on. You have a theory, it fits the diagonalizations, fine, but why is it that here there are these tails and here there are not? That's what I'd like to discuss next. For this, we dug a bit deeper into the older literature, and we discovered this very interesting concept of sign stability, which is something that was studied in the 60s and 70s by quantitative economists and quantitative ecologists. So what is that? A matrix A is sign stable if all matrices B with the same sign pattern are stable. That is, if you fix the sign pattern and arbitrarily vary the magnitudes of the weights, all these matrices are stable. Let me give you some examples. This is an example of a sign-stable matrix: these are the two eigenvalues of this matrix, and you see that no matter how large a_11 is, no matter how large a_22 is, the real parts of the eigenvalues will be negative. That's a sign-stable matrix: the sign pattern alone determines the stability of the system, and the weights don't matter. I will explain later why people were interested in that. And this is a matrix which is not sign stable, and the reason is that, as you see, if the product a_12 a_21 is large enough, then one of these two eigenvalues becomes positive. So this is not a sign-stable matrix. The reason people were interested in sign stability is that in empirical data you often don't know the weights: in economic data, in ecological data, it's very hard to determine the interaction strengths, but it's fairly easy to determine the sign pattern. That's why people were interested in sign stability. Now, what's remarkable is that this seems like a very strong constraint, but you can nevertheless derive necessary and sufficient conditions for sign stability, and these are the conditions. A matrix is sign stable if the following three conditions are satisfied. First, the diagonal elements are negative. Second, the off-diagonal weights have to be either sign-antisymmetric or unidirectional; so here you see things popping up: this corresponds to the antagonistic case, and this to the directed case. And then there is a third condition, which says that you can't have feedback cycles: no cycles of length three or longer. From this third condition, we can conclude immediately that random graphs are not sign stable, because random graphs have lots of cycles; sign stability is definitely excluded. However, you see, we still have this coincidence with sign antisymmetry and unidirectionality. So what we somehow conclude is the following: these random graphs are still locally sign stable.
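The 2x2 examples above can be probed numerically. A minimal sketch (the sign patterns are my reading of the slide's examples) fixes a sign pattern, samples random magnitudes, and records the worst-case leading eigenvalue over all samples:

```python
import numpy as np

rng = np.random.default_rng(3)

def worst_re(sign_pattern, trials=10000):
    """Max over trials of the leading Re(lambda), with random magnitudes."""
    worst = -np.inf
    for _ in range(trials):
        mags = rng.uniform(0.1, 10.0, size=(2, 2))
        worst = max(worst, np.linalg.eigvals(sign_pattern * mags).real.max())
    return worst

# Negative diagonal, sign-antisymmetric coupling: sign stable,
# so the worst case should stay negative for every sampled magnitude.
print(worst_re(np.array([[-1.0,  1.0], [-1.0, -1.0]])))
# Negative diagonal, mutualistic (+/+) coupling: not sign stable;
# large enough couplings (a12 * a21 > a11 * a22) give a positive eigenvalue.
print(worst_re(np.array([[-1.0,  1.0], [ 1.0, -1.0]])))
```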
What I mean by locally sign stable is this: if you pick a node and look at a finite neighborhood around it, then this finite neighborhood is sign stable if the graph is large enough. Sparse random graphs are locally tree-like, so with probability one, any local neighborhood becomes sign stable when the graph is large. From this we conclude, or at least conjecture, that local sign stability implies absolute stability. That's how we understand this. To check whether this indeed works, let's look at which kinds of topologies are locally sign stable. We have the unidirectional interactions; we have these antagonistic graphs, where red represents a positive interaction and green a negative one; and there are other structures as well, like graphs with feedforward cycles, that are also locally sign stable. If you construct random graphs out of these structures, they are conjectured to have a finite leading eigenvalue on average. Then we can look at structures which are not locally sign stable. We have these mixture graphs. But interestingly, you can as well have antagonistic systems with cycles: if you have predator-prey interactions but you have these cycles, then you are not locally sign stable, and such a random graph should have a divergent leading eigenvalue if our conjecture is correct. So let's test this. Here we have the three ensembles. The red one is the antagonistic random graph; this one is locally sign stable, and its leading eigenvalue converges to a finite value, as we have seen before. But here, this is the interesting one: this is the case of an antagonistic graph built out of these motifs. It's called a Husimi graph; it has these cycles of length four, but the interactions are all predator-prey interactions. And you see this seems to diverge in the limit of large N. And this is the case we had before, the mixture case, which also diverges. So that works: this numerical evidence seems to confirm the conjecture. Then we can as well look at the spectra, which might be even more convincing. Here we have the spectrum of the locally sign-stable random graph; you see it seems to be confined in a compact region of the complex plane, and note in particular that there are no tails on the real axis. This is the mixture case: here you see these tails forming on the real axis. And this is the antagonistic Husimi graph: it has these cycles of length four, but it is antagonistic. So here local sign stability is broken by the topology, not by the signs of the interactions, but if you break it by the topology, you also get a system with a divergent leading eigenvalue. So let me maybe go to this final slide. How about food webs: are they locally sign stable? That is, of course, not really an answerable question, because food webs are finite, not infinitely large. However, from this we learned that sign-antisymmetric interactions are stabilizing, and yes, food webs have predator-prey interactions, so that's satisfied. But we also learned that there should be a small number of cycles, because cycles tend to destabilize food webs. So is this really the case? Let's look at some data. This is data from Dunne et al. from 2002. What they did is look at a whole collection of networks from the literature, including social networks like co-author networks, the world-wide web, neural networks, but also food webs; here are the food webs. And they compared the clustering coefficient, as a way to quantify how many cycles a graph has.
They divide the empirical value of the clustering coefficient by the clustering coefficient of a random graph with the same degree distribution. If this ratio is of order one, it means that the graph has about the same number of cycles as a random graph; if it is larger, the graph has a much higher number of cycles than the random graph. What's interesting is that all these other networks have a very large number of cycles, up to three orders of magnitude more than the random case. However, the food webs are at the bottom here, and they are of the same order of magnitude as a random graph. So that at least seems consistent with the idea that cycles are destabilizing. So let me thank all the collaborators who have worked on the work related to this. And this is some discussion. We have seen that this kind of local sign stability determines whether complex systems defined on random graphs are absolutely stable or only size-dependently stable. What we can learn from this is that for random graphs, differently from fully connected systems, the sign pattern really matters for the stability of a system. Two types of sign patterns stand out as stabilizing: those that are sign-antisymmetric and those that are unidirectional. And there is a third property that is stabilizing, which is having a locally tree-like topology, meaning a small number of short cycles. That's it; thank you very much for your attention. Thanks a lot for the very nice talk. We have time for questions. Yes, please use a microphone. Hi, thank you very much for the presentation. I was wondering if any of the rules which you defined would hold for a system with nonlinear interactions, or does that all go down the drain as soon as we add nonlinearity? Yeah, that's of course a very interesting question. That's, I think, the question to look at next: how do local sign stability and its effects persist with nonlinear interactions? But I think there is good hope. For example, there was a paper written in the 80s where people showed that on a tree, not on a random graph but on a tree, with antagonistic interactions, if the system has a fixed point, then this is the unique equilibrium point of the system, on the condition that the graph is a tree and it is antagonistic. As far as I know, that's the only result out there. So yeah, I think it would be very interesting to investigate what the implications of local sign stability are for nonlinear systems. So, when you introduce the signs and directions on these random graphs, if I think of mapping the system to some kind of spin-glass system, can we map this to the presence of frustration in these graphs, and think of local sign stability as a frustration-free configuration or something? Yeah, it's quite difficult to map it onto a spin glass, because for a spin glass you basically need a Lyapunov function: a spin glass is a minimization of a cost function, and a dynamical system corresponds to a spin-glass system if it has what is called a Lyapunov function, which means that there is a function that decreases as a function of time in your dynamical system.
And so if you have this Lyapunov function, then you can think about how to map this onto a spin glass. But in general, dynamical systems have no Lyapunov function, so it's not really clear how to relate these two things. So, you were, I think, working mostly with square matrices. Have you ever considered looking at systems where, for example, the incidence matrix, which is a rectangular matrix, is the natural object, and looking at the SVD instead? No, I haven't. Other questions? If not, let's thank Isaac again.