Shall we start? OK, today is Friday, everybody is tired. Well, I am tired. So let's do something short and focus on simple things.

But before that: some people were interested, so if for some reason you would like to go to Mexico to do an MSc or a PhD, whether with me or just at a Mexican university in general, drop me an email and I can point you to the right place to ask for information. The only issue, as far as I am aware, is that if you want to do an MSc or a PhD, either at UNAM or UAM or any other university, you have to speak Spanish. Yeah, I know, I know. As far as I am aware this is a requirement, but I am not sure. For instance, there is somebody here who would like to do medical physics, and I know the right people there that you can ask. I hope you can read my writing; that is my email. Also, I think somebody sent me an email asking about this and maybe I did not reply; please send it again, I get a lot of emails and I think it got buried. And in principle, in Mexico, so far there is good funding for students, both at the MSc and the PhD level; this has not been an issue, and the fellowship (it is not a salary) is quite OK for living in Mexico.

OK, so what can we do today? Let us do something simple. When you do a very long derivation and arrive at some horrible formula, it is very useful to work out particular cases and check that you recover known results. For instance, there is a very famous result in random matrix theory for the spectral density, the Wigner semicircle law, which appears for the classical random matrix ensembles, GOE, GUE, et cetera. So we are going to do this particular case and try to recover Wigner's law.

Again, this law appears for standard random matrices, where the matrix is filled with entries that are, for instance, Gaussian random variables. But the matrices in the equations we derived were not like this. For Poissonian graphs we were focusing on the spectral density of the adjacency matrix, which has a lot of 0s and only a few 1s per row and per column. So remember: suppose C is a connectivity or adjacency matrix, and we were focusing on Erdős–Rényi graphs. You would go to a computer and ask it to generate one of these matrices. You have an N × N matrix, and per row or per column the number of 1s is, on average, d, the mean connectivity, and the rest are 0s. For instance, if d were 3, then per column you would have on average three 1s and the rest 0s. So this matrix is very, very sparse: if you look at row i, you have, say, a 1 here, then 0, 0, 1, and many 0s. This is not dense; these are not the classical random matrices that appear in the classical ensembles.

So how can we get the Wigner semicircle law from our equations? By doing what is called the dense limit: take d going to infinity. That is what I call the dense limit.
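As a quick illustration of how sparse these matrices are, here is a minimal sketch (my own, not part of the lecture; function names and parameters are arbitrary) that samples such an adjacency matrix and checks that the average number of 1s per row is close to d:

```python
# Minimal sketch: sample an Erdos-Renyi adjacency matrix with mean degree d
# and verify that each row contains on average about d ones.
import numpy as np

def erdos_renyi_adjacency(n, d, seed=0):
    """Symmetric n x n adjacency matrix; each pair i < j is linked with probability d/n."""
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.random((n, n)) < d / n, k=1)   # independent links, no self-loops
    return (upper | upper.T).astype(float)

c = erdos_renyi_adjacency(n=2000, d=3.0)
print(c.sum(axis=1).mean())   # close to 3: only about 3 of the 2000 entries per row are nonzero
```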
In this limit the number of nonzero entries per row or per column becomes bigger and bigger: you fill up, you densify, that row or column. And what you would expect from our equations is that in the dense limit you recover the Wigner semicircle law. So far, so good?

OK, so how do we do the derivation? It is very simple. We can do it with the replica method, using the equations we already have, so let us start from the replica equations. For reference: for the Gaussian orthogonal ensemble you have random matrices where each entry is a Gaussian random variable with zero mean and variance 1/N, or J²/N if you keep the parameter J.

[Question.] No, but when you do this dense limit, at least you are sure that a growing number of elements in a given row or column get filled, in this case with 1s; you are trying to get rid of as many 0s as possible. Still, the way the 0s and 1s are placed is random. And you have to be careful about how you take the dense limit. This is a very good question: do you take the dense limit after the thermodynamic limit? This is very important. First you take the thermodynamic limit, N going to infinity, and only then, after the derivation we have done, do you take the dense limit, d going to infinity; in such a way that d/N still goes to 0.

Sure, of course. The law says the following: if you have matrices with entries a_ij which are Gaussian random variables with zero mean and variance J²/N, then the spectral density is ρ(λ) = √(4J² − λ²)/(2πJ²), a semicircle. We are going to derive it.

[Question about the diagonal.] Yes, for the Gaussian orthogonal ensemble you have to be a bit careful about how the variance scales on the diagonal versus off the diagonal, but in the end it does not affect this result. Very good. More questions? Tell me. Yes, yes, and you still get it. We do not have to do anything special mathematically to take the limit; you will see it is very easy, just the central limit theorem once you do the derivation. The only thing you may want to do, and even this is not necessary, is to modify the connectivity matrix slightly: make it a weighted connectivity matrix, so that instead of 0s and 1s, each 1 is replaced by a value rescaled with d, in such a way that when d goes to infinity the relevant sum stays finite. But even this you do not need. Shall we do the derivation? No questions? OK.

So let us recall the final replica equations. What we have is that the spectral density, averaged over the ensemble of graphs, is

ρ(λ) = lim_{η→0⁺} (1/π) ∫ dΔ ω(Δ) Im Δ,

and we already discussed the meaning of this notation. Here ω(Δ) obeys the following self-consistency equation.
It says that

ω(Δ) = Σ_{k=0}^{∞} e^{−d} d^k/k! ∫ Π_{l=1}^{k} dΔ_l ω(Δ_l) δ(Δ − 1/(z − Σ_{l=1}^{k} Δ_l)).

All right? So far so good? Now, the only thing I am going to do is this: if instead of an adjacency matrix you have a weighted adjacency matrix, where instead of a 1 you put the parameter J/√d, the only thing that changes in that equation is the argument of the delta. I leave this as an exercise; you get

ω(Δ) = Σ_{k=0}^{∞} e^{−d} d^k/k! ∫ Π_{l=1}^{k} dΔ_l ω(Δ_l) δ(Δ − 1/(z − (J²/d) Σ_{l=1}^{k} Δ_l)).

It is simply a rescaling in front of the variance. The reason I am putting it in is that I want the 1/d here, d being the average connectivity, because I want to take the limit d going to infinity with a finite sum. But if you do not put it, it does not matter; the only thing that happens is that the support of the spectral density grows with d, actually with √d. So you can do it either way.

[Question about the matrix.] Yes, it is still a weighted connectivity matrix: if two nodes are connected, the weight on that connection is J/√d, that is it. And the only thing this does is rescale this factor. If you notice, if I were to put it on the other side, it would be like rescaling z, and z contains λ, which lives in the support of the spectral density. By doing this rescaling I make sure that the support of the spectral density remains compact; it does not grow with d. But otherwise, you do not need it, you will see. More questions?

OK, so how do I take the limit d → ∞ in this expression? What does d → ∞ mean? It means that the typical value of the Poisson number k becomes larger and larger, and this affects the number of terms in this sum: when d goes to infinity you have an infinite number of terms. And what is this? A sum of random variables. So the only thing you have to do is find a way of determining the probability distribution of this sum of random variables.

How do we do this? Very simple. I take this sum and I isolate it somewhere else with a beautiful Dirac delta; I am using the same tricks all the time. I write it as follows:

ω(Δ) = ∫ dh Σ_{k=0}^{∞} e^{−d} d^k/k! ∫ Π_{l=1}^{k} dΔ_l ω(Δ_l) δ(Δ − 1/(z − h)) δ(h − (J²/d) Σ_{l=1}^{k} Δ_l).

I have done nothing. Good. Now, forget about the rest of the universe for a moment and let me rearrange this a little. I can write it as

ω(Δ) = ∫ dh δ(Δ − 1/(z − h)) [ Σ_{k=0}^{∞} e^{−d} d^k/k! ∫ Π_{l=1}^{k} dΔ_l ω(Δ_l) δ(h − (J²/d) Σ_{l=1}^{k} Δ_l) ].
And now I am going to say a sentence which is maybe not the best way to express what I mean, but I will say it anyway: forget about the rest of the universe. Tell me. Oh, yes, of course. Forget about the rest of the universe; we are not doing random matrices, we are not doing spectral densities. What is this object in brackets, from a probabilistic point of view? It is the PDF of h. h is a sum of random variables, and these random variables are independent, because you have the product of their PDFs, and I am averaging a Dirac delta. So this bracket is the PDF of h; it is as simple as that. Do you agree? And the relation between h and the Δ_l is a sum, so this PDF is a convolution of the PDFs of these random variables. But the variables are independent, so this is very easy to evaluate: you can apply the central limit theorem directly, or do the derivation explicitly. Let us do the derivation.

How? You write the Dirac delta using its Fourier representation, as I always do, but now for a different reason. So the whole thing is equal to

∫ dh (dĥ/2π) e^{iĥh} δ(Δ − 1/(z − h)) Σ_{k=0}^{∞} e^{−d} d^k/k! ∫ Π_{l=1}^{k} dΔ_l ω(Δ_l) exp(−iĥ (J²/d) Σ_{l=1}^{k} Δ_l).

I have simply split the Fourier representation of the Dirac delta: one part goes here, and the exponential carrying the sum over l goes there. Are you with me? And since the variables are independent, this factorizes. So, continuing, this is equal to

∫ dh (dĥ/2π) e^{iĥh} δ(Δ − 1/(z − h)) Σ_{k=0}^{∞} e^{−d} d^k/k! [ ∫ dΔ ω(Δ) exp(−iĥ J² Δ/d) ]^k,

because it is the same integral k times. What I am doing is essentially proving the central limit theorem for this particular combination of random variables. Now I can resum the series, because I have this bracket to the power k together with d^k/k!, and that gives an exponential. Step by step: Σ_{k=0}^{∞} e^{−d} (1/k!) [ d ∫ dΔ ω(Δ) exp(−iĥ J² Δ/d) ]^k is e^{−d} times the exponential of the term in square brackets. So, doing the sum,

ω(Δ) = ∫ dh (dĥ/2π) e^{iĥh} δ(Δ − 1/(z − h)) exp( −d + d ∫ dΔ ω(Δ) exp(−iĥ J² Δ/d) ),

where I have put the e^{−d} into the argument of the same exponential.
And you will say: Isaac, as usual you are complicating things. But now I can take the limit d → ∞ properly, because here d multiplies something that goes to 0 when d goes to infinity, so I can Taylor expand and keep the leading contribution in d. So when d goes to infinity,

ω(Δ) = ∫ dh (dĥ/2π) e^{iĥh} δ(Δ − 1/(z − h)) exp( −d + d ∫ dΔ ω(Δ) [ 1 − iĥ J² Δ/d + O(d^{−2}) ] ).

Now, ∫ dΔ ω(Δ) · 1 = 1, because ω is a normalized density, so the −d cancels against this d · 1. In the next term the d cancels against the 1/d, and what is left is the expectation value of Δ; the remaining corrections go like d^{−1} and vanish in the limit. So this is equal to

∫ dh (dĥ/2π) e^{iĥh} δ(Δ − 1/(z − h)) exp( −iĥ J² ∫ dΔ ω(Δ) Δ + O(d^{−1}) ).

So far so good? Excellent. Now, what on earth is ∫ dΔ ω(Δ) Δ? Again, forget about the rest of the universe: it is the expectation value of Δ with respect to the density ω, the first moment. To avoid confusion let me denote it Δ̄, the bar meaning the expectation value with respect to ω. So what do I have? The expectation value of Δ coupled to ĥ, and I can put things back together. Continuing the derivation, step by step,

ω(Δ) = ∫ dh (dĥ/2π) δ(Δ − 1/(z − h)) exp( iĥ h − iĥ J² Δ̄ ).

Good? Let me take iĥ as a common factor. Now I can integrate over ĥ, and that gives back a Dirac delta:

ω(Δ) = ∫ dh δ(Δ − 1/(z − h)) δ(h − J² Δ̄).

And now I integrate once more and obtain, surprise surprise, something that is expected, because I have a bloody sum of random variables scaled with 1/d. If it were scaled with 1/√d you would get a Gaussian distribution; with the 1/d scaling you get a delta. It is a particular case of the central limit theorem, really a law of large numbers. So you integrate over h and obtain δ(Δ − 1/(z − J²Δ̄)). And now remember where we came from, the self-consistency equation for ω(Δ): what we obtain in the dense limit is that ω(Δ) is this Dirac delta.
So we obtain that, in the dense limit,

ω(Δ) = δ(Δ − 1/(z − J² Δ̄)),

where Δ̄ is, by definition, the expectation value of Δ with respect to ω. And that is essentially it, because from here it is very easy to continue. What do I do with this? I close the equation, because the equation closes on the mean value Δ̄. I take this ω back to the definition of Δ̄ and compute, as the final simple step,

Δ̄ = ∫ dΔ ω(Δ) Δ = ∫ dΔ δ(Δ − 1/(z − J²Δ̄)) Δ = 1/(z − J²Δ̄).

So Δ̄ = 1/(z − J²Δ̄), which is a very simple equation for Δ̄. And the spectral density is given by Δ̄, or rather its imaginary part, because Δ̄ is the first moment of ω. Remember that in the dense limit

ρ(λ) = lim_{η→0⁺} (1/π) Im ∫ dΔ ω(Δ) Δ = lim_{η→0⁺} (1/π) Im Δ̄.

Clear? Simple, no? Or not? Do you find the derivation simple? Notice that I am always using the same stupid tricks: Dirac deltas to isolate things. These tricks are very useful.

Now, the only thing left to do, which I will not do here because I am pretty sure I would screw it up at the board, is this: Δ̄ = 1/(z − J²Δ̄) is a quadratic equation, a second-order polynomial equation for Δ̄. You solve it, you get two solutions, and there is a reason to choose one over the other; it has to do with how the resolvent has to behave when z goes to infinity. That is how you choose one of the two solutions, and then you take the limit and recover the Wigner semicircle law.

Tell me. Yes, I will put it here, thank you; it is the imaginary part of Δ̄, divided by π. Questions?

[Why does d go to infinity?] Because you want to go from an ensemble of diluted, sparse, tree-like matrices to matrices that are dense; you are trying to bring this ensemble as close as possible to an ensemble of random matrices where most of the entries are different from 0.

[About the two solutions.] No, no, this has nothing to do with that; this is always the case. Well, not always. It is funny: here you have a quadratic equation for Δ̄, so you have two solutions. Normally people choose the solution that gives a density which is positive. But there is another way: you can do it a posteriori, but if you go back to where we started in this mapping, you can show that the solution you have to take is the one with the right behaviour in the complex plane as z goes to infinity: Δ̄ has to decay like 1/z. This picks out one of the two solutions, because one solution behaves this way as z goes to infinity and the other one does not.
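Since the algebra is skipped here, a minimal numerical sketch (my own, not from the lecture; names are arbitrary) of exactly this step: solve the quadratic J²Δ̄² − zΔ̄ + 1 = 0, keep the root with positive imaginary part, equivalently the one decaying like 1/z, and compare the resulting density with the semicircle formula.

```python
# Sketch: the closure delta_bar = 1/(z - J^2*delta_bar) is quadratic; picking the
# physical root reproduces the Wigner semicircle sqrt(4J^2 - lambda^2)/(2*pi*J^2).
import numpy as np

def dense_limit_resolvent(lam, J=1.0, eta=0.0):
    """Physical root of J^2*d^2 - z*d + 1 = 0 at z = lambda - i*eta: the root with
    the larger imaginary part (positive density), which also decays like 1/z."""
    z = lam - 1j * eta
    disc = np.sqrt(z**2 - 4 * J**2 + 0j)                 # +0j forces the complex branch
    r_plus = (z + disc) / (2 * J**2)
    r_minus = (z - disc) / (2 * J**2)
    return np.where(r_plus.imag > r_minus.imag, r_plus, r_minus)

lam = np.linspace(-3, 3, 13)
rho = dense_limit_resolvent(lam).imag / np.pi                  # (1/pi) Im delta_bar
wigner = np.sqrt(np.clip(4 - lam**2, 0, None)) / (2 * np.pi)   # semicircle for J = 1
print(np.allclose(rho, wigner))                                # True
```

Selecting the root with positive imaginary part gives zero density outside [−2J, 2J], which is exactly the branch that behaves like 1/z at infinity.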
So that is how you choose mathematically between the two signs. If not, do what everybody else does: I do not know which bloody sign to take, but I know this thing is a density, so I take the solution that gives a positive density. Very good. More questions? So what else do you want me to do? Tell me.

[Question about the rescaling.] Yes, but you do not have to put the 1/√d; you can do the same analysis without it, you simply have to be a bit more careful about identifying the leading terms. Having the 1/√d lets you see the leading terms more easily, but it is not a big deal if you leave it out.

Sure, and actually this is funny; it is an open problem. In some cases you start with an ensemble of diluted, sparse, tree-like matrices, in the sense we have discussed, and when you take the dense limit you recover the classical results for full matrices. But in other cases you do not recover the classical results, and nobody knows why. Well, a couple of people know why, but it is not published yet, so you will have to wait. This is something I wanted to discuss in this lecture, but we are not going to. So the dense limit essentially always works, but in a few cases it does not, and then you have to do something very special, a much more complicated derivation, to get the classical results. Classical here means the classical ensembles of full matrices.

More questions? Yes: for matrices that belong to what I call the classical ensembles. That would be the Gaussian orthogonal ensemble, the Gaussian unitary and symplectic ensembles, the classical Wishart ensemble, et cetera; matrices where, essentially, you do not have an extensive number of zeros in the rows or columns, extensive with the size of the matrix. More questions? Tell me.

OK, what do you mean by renormalization? Because renormalization is a very powerful technique that appears in condensed matter, but in a particular context. Well, it is kind of funny that you mention this, because the central limit theorem does have to do with RG, but it is a trivial RG. In condensed matter, what is a Hamiltonian, from a probabilistic point of view? It is a bunch of random variables that are interacting, correlated. And depending on the parameters that control the interactions between these random variables, when you want to derive some macroscopic object from them and you apply the central limit theorem, it may not work. Think of the Ising model and the magnetization: the magnetization is the sum of the spins, a sum of bloody random variables. If these random variables are weakly correlated, the central limit theorem applies. But when they are strongly correlated, close to a phase transition, the central limit theorem does not apply. The renormalization group is a way to capture how such a sum of random variables behaves, which is pretty cool. So when you say this is RG: yes, but it is RG in the lamest, tamest, simplest possible way, the weakly correlated case. No, I do not follow you.
What is the question? Yes. Well, what we did now is something you should always do after a long derivation, and it has nothing to do with this particular exercise: it is very good practice. The worst thing you can do after a long derivation is to look at it and say, OK, I am very cool, I am pretty sure this is right. It is much better to look for particular or extreme cases that you can check, to convince yourself that the long derivation is correct, or at least likely to be correct. So what we are doing here is working out particular cases to see whether we recover known results.

And actually, Wigner's semicircle law is much more general: it applies to matrices whose entries are independent, or weakly correlated, but they do not have to come from a Gaussian distribution; you only need the first two moments of the distribution of those random numbers. And you can prove this from here. For instance, you can do the following exercise. Suppose that instead of weighting the nonzero elements with the constant J/√d, each one is weighted by a random number: I have two nodes i and j that are connected, meaning c_ij = 1, and I put a random weight J_ij/√d on that link. Again, the √d is just so that you can easily see what happens when you take the dense limit. And these J's are generated from some general distribution. What is your name? Camilo. Camilo, we are going to call the distribution C(J), in honour of Camilo.

If you do the whole replica derivation again, you get a self-consistency equation, but now you also have to take into account the statistics of the J's. What you get is the following:

ω(Δ) = Σ_{k=0}^{∞} e^{−d} d^k/k! ∫ Π_{l=1}^{k} [ dΔ_l ω(Δ_l) dJ_l C(J_l) ] δ(Δ − 1/(z − (1/d) Σ_{l=1}^{k} J_l² Δ_l)),

where the J_l are the weights on the links that are present. If you want, in the dense limit this resembles the classical ensembles more closely, because now each nonzero entry carries a random value; here it is Camilo's distribution, but if it were a Gaussian distribution this would be the diluted Gaussian ensemble, and when d goes to infinity I am gradually turning entries from 0 into nonzero values.

Now, can I apply the same trick as before to take the dense limit? What do you think? Huh? Why yes? These are different distributions, but it is still a product of distributions: this random variable and this random variable are independent, so you have a sum of products of independent random variables, and you apply the bloody central limit theorem again. You do exactly the same thing.
And at the end, when you do everything we have done, you find that in the dense limit

ω(Δ) = δ(Δ − 1/(z − ⟨J²⟩_C Δ̄)),

where ⟨J²⟩_C is the expectation value of J² under Camilo's distribution and Δ̄ is again the first moment of ω. That is why, when you take the dense limit directly on the connectivity matrix, you get the Wigner semicircle law: only the variance of that random variable enters. And you also realize that recovering Wigner's semicircle law does not require the weights to be drawn from a Gaussian distribution; they just have to come from a distribution whose first two moments, or first two cumulants, are well defined, because only the second cumulant appears. Here I am actually assuming that the mean value of the J's is 0. Not Cauchy; with Cauchy distributions I think you have to be careful, but I am not sure about that. Something like this has been proven rigorously by Terence Tao, assuming you have an ensemble of random matrices whose entry distribution has well-behaved properties up to some low moments; for such distributions of the matrix entries you always get Wigner's semicircle law. What we did here is not a rigorous proof; it is a physicist's derivation. (A rough numerical check of this universality statement is sketched below, after the aside on the TAP equations.) More questions?

It is, it is. When I write J_ij, remember that from the beginning I said the connectivity matrix is symmetric: if i is connected to j, then j is connected to i, and of course the weight is the same for both. So at some point in the derivation I symmetrize and focus only on the independent elements, because the rest are fixed by symmetry. So here J_ij has to equal J_ji, and i and j have to be connected.

I was planning to do the following on Monday, and then I will stop torturing you: on Monday we are going to do non-Hermitian matrices, which, funnily enough, you can handle with these same tricks provided you know a bit of two-dimensional electrostatics. You will see why. More questions? You want to hear about the Wishart ensemble? You want to make me work very hard, huh? OK, let us talk about the Wishart ensemble, the diluted Wishart ensemble, which is very cool. This is nice for students, because when you do the Wishart ensemble you have to be careful when you do the average over the matrix entries. And actually you can also do this derivation with the cavity method, and the cavity method is nicer, because when you take the dense limit you have to think carefully about what is called the Onsager reaction term, and realize that that term is zero; I will let you think about that. If you know cavity equations and TAP equations: when you go from the cavity equations to the TAP equations here, the Onsager reaction term is zero. TAP stands for Thouless, Anderson and Palmer; the TAP equations were a reaction to the replica method, because nobody understood the replica method, and they are a way of writing down microscopic equations for the local magnetizations in spin glasses. Have you heard about the TAP equations? OK, that is material for a different school, I think.
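As mentioned above, a rough numerical check (my own construction, not from the lecture): sparse symmetric matrices with random weights ±1/√d on the links, a deliberately non-Gaussian choice with ⟨J⟩ = 0 and ⟨J²⟩ = 1, have a spectrum that settles onto the semicircle support [−2, 2] as d grows.

```python
# Universality check: Erdos-Renyi links with +/- 1/sqrt(d) weights (non-Gaussian,
# zero mean, unit second moment) approach the semicircle support [-2, 2] as d grows.
import numpy as np

def sparse_weighted_matrix(n, d, rng):
    """Symmetric matrix: each pair i < j linked with prob d/n, weight +/- 1/sqrt(d)."""
    mask = np.triu(rng.random((n, n)) < d / n, k=1)
    signs = rng.choice([-1.0, 1.0], size=(n, n))
    upper = np.where(mask, signs, 0.0) / np.sqrt(d)
    return upper + upper.T

rng = np.random.default_rng(0)
for d in (3, 30, 300):
    eigs = np.linalg.eigvalsh(sparse_weighted_matrix(2000, d, rng))
    print(d, float(np.mean(np.abs(eigs) > 2.0)))   # weight of the tails outside [-2, 2]
```

The fraction of eigenvalues outside [−2, 2] shrinks as d increases, as the dense-limit argument predicts.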
Well, we are going to be here one more week, so if you want I can tell you about the TAP equations; but let us focus on random matrices. We are doing the diluted Wishart ensemble. So what is the Wishart ensemble? The definition, to start with, is not the diluted case. I take a rectangular matrix G with entries G_iμ, i from 1 to N and μ from 1 to p. For the moment there is no randomness; it is just a rectangular matrix of size N × p. Then from this matrix I build another one, W, which is simply, let me do this properly, W = G Gᵀ. So W is an N × N matrix with entries

W_ij = Σ_{μ=1}^{p} G_iμ G_jμ.

This ensemble was introduced by Wishart in 1928 to represent covariance matrices. Good, are you with me?

Now, this is a completely general definition; it is simply a product of rectangular matrices. So let us put it into the nice context of a graph, a bipartite graph. Suppose that the matrix G_iμ represents weights. What is our graph? It is a graph with two families of nodes. Why do I say two families? Because I have two different types of indices: one that runs from 1 to N and another that runs from 1 to p. Let us represent the two families of nodes of this bipartite graph with circles and squares: I have N circles, labelled by i, and p squares, labelled by μ. And the matrix tells you: if the element G_iμ is different from 0, then circle i is connected to square μ, and the weight is precisely G_iμ. So I have a bunch of squares connected to circles, and those circles are connected to other squares, et cetera. Suppose this is node μ and this is node i: there is a link between these two because G_iμ is different from 0; and node i can also be connected to another square, with entry given by the corresponding G, et cetera. So far so good. And again, I am drawing a tree because I want to keep things simple; this does not have to be a tree, unless of course I want to apply the cavity method here.

So suppose that we have a bipartite graph whose weights are given by a diluted matrix G. The next question is: what does W look like, given this bipartite graph? Remember that W = G Gᵀ, which means W_ij = Σ_{μ=1}^{p} G_iμ G_jμ. Tell me. A bipartite graph is a graph whose nodes can be partitioned into two families: one family and another family, and there are only links between the families, square nodes connected to circles but not to other squares, and vice versa. Ah, yes, this here is wrong, sorry; that was a mistake. Now, if I give you a graph of this sort, tree-like, with these weights on the bipartite graph, how does the graph associated with W look? Do you have an idea? That is right.
What happens is that you effectively trace away the squares, keeping track of how the circles become connected through them, and what you get are cliques. For instance, if these nodes are all connected through this square, the result is that all of these nodes become connected to each other. So you get something like that, but the drawing alone does not show it; you have to check it, because all the sums are involved. Right: what happens is that if I take this pair i, j and I sum over μ, these two are connected through that μ, so they get connected; and this pair gets connected, and this one, and this one. The sum over μ just runs over the squares, and the weight is the corresponding one. If the matrix entries are just 0s and 1s, you simply get the connections with weight 1. So this part here results in something like this: you have these four nodes, if I am not mistaken, all connected to each other, something like that. And then maybe this one is connected to another clique, and maybe that one connects to a clique of, I do not know, five nodes, with all the possible links, et cetera, et cetera. Good?

Well, suppose that your future boss, or your current boss, comes to you and says: I want you to derive the spectral density of these weird objects. Why not? And you scratch your head and think: what on earth am I going to do? Shall we do it? I am not going to do the whole thing; I am going to give you the idea.

No, no: what I am doing now is, starting from this bipartite graph, which is represented by the matrix G, I know that W will have an associated graph, because it is related to the G's through this product. So given this picture, what is the corresponding picture for W, if I think of W as a weighted connectivity matrix for a new graph?

Eh? Ah, a clique is just a set of nodes in a graph that are all connected among themselves; that is a clique.

So let us do it in the simplest possible case: assume the weights are 0s and 1s; 0 means circle i and square μ are not connected, 1 means they are connected. Now I give you this tree for the bipartite graph and I want to construct the graph corresponding to W. I need to check whether i and j are connected, because if they are not, I do not draw a line. Say this is i and this is j, and they are connected via μ. When I sum over μ from 1 to p, at some point the label takes precisely this value, and you get 1 times 1, because both links are there. So the corresponding entry W_ij is 1, which means that between i and j you have to draw a link. Now I take k and go to the matrix entry W_ik: I do the sum over μ, and at some point μ is the square connecting i and k, so I have to draw a link from i to k in the graph associated with W, et cetera, et cetera. So when you sum over the μ that appears in this product G Gᵀ, what you do is link all the circles that are connected through that square node. Better? Good.
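A tiny illustration of this "squares become cliques" statement (my own construction, not from the lecture): build a 0/1 bipartite matrix G, form W = G Gᵀ, and check that the circles attached to each square are pairwise connected in W.

```python
# Each square node mu of the bipartite graph turns into a clique among the
# circle nodes attached to it, once the squares are traced out via W = G G^T.
import numpy as np

rng = np.random.default_rng(1)
n, p = 12, 4
g = (rng.random((n, p)) < 0.3).astype(int)       # 0/1 bipartite weights

w = g @ g.T                                      # W_ij = sum_mu G_imu G_jmu
for mu in range(p):
    circles = np.flatnonzero(g[:, mu])           # circles attached to square mu
    clique = all(w[i, j] > 0 for i in circles for j in circles if i != j)
    print(f"square {mu}: circles {circles.tolist()} -> clique in W: {clique}")
```

Squares with fewer than two attached circles report True trivially; the interesting cases are the larger groups, which show up in W as fully connected blocks.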
Yeah, so now you have a crazy boss, a crazy Spanish boss, who tells you: I want you to calculate the spectral density of this. And you say: I do not know how to do this. So what do you do, apart from psychotherapy? No: the first thing to realize is that the spectral density is just the spectral density. W is a matrix; it has a spectrum; suppose God gives you the spectrum, I want the spectral density of this. Would the cliques show up? Yes: the cliques will affect the shape of the spectral density, in the same way that for Erdős–Rényi graphs certain topologies of the graph affected the discrete part of the spectrum. But in this case it is not going to be so easy to disentangle; still, you can get some information about the topology. More questions? Very good.

So what you do is apply the same mapping. What do I know? I know that the empirical spectral density of this matrix W is

ρ(λ) = lim_{η→0⁺} −(2/(πN)) Im ∂_z ln Z_W(z), evaluated at z = λ − iη.

Why can I apply this directly? Because the mapping was exact for any symmetric matrix, and W is a symmetric matrix. That is it. And once I am here and I have mapped my problem onto a problem in stat mech, there are two ways to solve it, at least among the ones I showed you: the cavity method or the replica method. Both are very cool, and I am going to give you the ideas; actually, this is in the first paper I published in this area. In this case there are some subtleties.

For instance, with the cavity method you could apply it directly to the graph of W, but this is annoying, because you have to be careful about what you remove in order to decorrelate things. Remember, the trick of the cavity method is to find something in the system that you can remove such that the rest of the system becomes statistically independent. So for the cavity method, rather than working directly on these clique-tree-like graphs, it is better to work on the bipartite graph, the graph of G. You can write a Hamiltonian for this mapping with two types of thermal variables appearing explicitly: one type sitting on the squares and the other type sitting on the circles. How can you see this? Do the mapping; remember, the mapping is always the same. The Hamiltonian was

H(x) = (1/2) Σ_{i,j=1}^{N} x_i (z δ_ij − W_ij) x_j,

where W_ij is given by the product above. Combining this with the definition of the Wishart ensemble, you can rewrite this Hamiltonian so that it has two kinds of variables, one living on the square nodes and one living on the circle nodes. Then you apply the cavity method twice: you see what happens to the graph when you remove a circle node, and what happens when you remove a square node. And then you have two types of cavity marginals: one for the variables on the squares and one for the variables on the circles.
So you get two coupled equations for these two types of distributions, and you can show that the marginals are actually Gaussians, et cetera, et cetera; the same trick works. I will let you do it as an exercise. This is in an article I published some time ago, in 2008; it was a Phys. Rev. paper. Yes?

The other way to do it is the replica method. Let me start the replica derivation and stop at the crucial step where you have to realize that you must be careful about how you proceed. So suppose I want the expectation value of the empirical spectral density, which corresponds to the expectation value of the logarithm of the partition function, blah, blah. I apply the replica trick, which means computing the expectation value of the n-th power of the partition function. At some point this is equal to the same old story:

⟨Z^n⟩ = ∫ Π_{α=1}^{n} d^N x^α exp( −(z/2) Σ_{i=1}^{N} Σ_{α=1}^{n} (x_i^α)² ) ⟨ exp( (1/2) Σ_{i,j} Σ_{α=1}^{n} W_ij x_i^α x_j^α ) ⟩_G,

where the expectation value is with respect to the randomness of the graph, over G. Now, you would be tempted to do the following, and it would be wrong. I still have not given you the distribution of this randomness, but suppose it were the connectivity matrix of an Erdős–Rényi graph, as before. You would say: ah, well, I can symmetrize this sum, focus on the independent terms with i < j, and then factorize: write the sum as a product and worry only about the expectation values with respect to the W's. So let us focus on the term which I think is important for understanding why you have to be careful here. You would write something like (I am dropping some terms, but that is not the point now)

⟨ exp( Σ_{i<j} Σ_{α=1}^{n} W_ij x_i^α x_j^α ) ⟩ = ⟨ Π_{i<j} exp( W_ij Σ_{α=1}^{n} x_i^α x_j^α ) ⟩,

and then you would be tempted to say that this equals

Π_{i<j} ⟨ exp( W_ij Σ_{α=1}^{n} x_i^α x_j^α ) ⟩,

as we did before. That earlier step reflected the fact that those random variables were independent and identically distributed, so the expectation of the product was the product of expectations. But in this case this is clearly wrong. Why? Because W_ij is by definition Σ_μ G_iμ G_jμ, and if I now suppose that the matrix entries of G are independent, that automatically means the matrix entries of W are correlated: different W_ij share the same G's. So you cannot factorize; this step is wrong. Cool? And then you become desperate and start crying: how am I going to do this? Because otherwise my boss is going to fire me.
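The correlation that forbids the factorization is easy to see numerically; a quick check (my own, with arbitrary parameters): even when the entries of G are independent Bernoulli variables, W₁₂ and W₁₃ share the row G₁ and are clearly correlated.

```python
# Entries of W = G G^T that share an index are correlated even if the entries of
# G are independent, so the average cannot be split into a product over pairs (i, j).
import numpy as np

rng = np.random.default_rng(2)
n, p, samples = 5, 50, 20_000
w12 = np.empty(samples)
w13 = np.empty(samples)
for s in range(samples):
    g = (rng.random((n, p)) < 0.1).astype(float)   # independent Bernoulli(0.1) entries
    w = g @ g.T
    w12[s], w13[s] = w[0, 1], w[0, 2]
print(np.corrcoef(w12, w13)[0, 1])   # about 0.09 with these parameters: clearly nonzero
```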
But there is a very simple way to solve this, no? Using Dirac deltas. Yeah, Dirac did very cool things, and the Dirac delta is a pretty good one. How do you use it? Here? No: this was the product; the expectation value of the n-th power of the partition function is the whole thing, and I was only focusing on part of the terms of the derivation, because I want to emphasize that you have to be very careful with this step. There are various ways to fix it, but one way is, again, to use a Dirac delta. Why? Let us think about it. If I go back to this expression and insert the definition of W_ij, I have

(1/2) Σ_{i,j=1}^{N} Σ_{α=1}^{n} Σ_{μ=1}^{p} G_iμ G_jμ x_i^α x_j^α,

and I notice that I can reorder it: since this is an unrestricted sum over i and j, I can write it as

(1/2) Σ_{μ=1}^{p} Σ_{α=1}^{n} ( Σ_{i=1}^{N} G_iμ x_i^α )²,

a square. The only thing I have to do now is linearize this square, because if I linearize it, then I can factorize. There are two ways to linearize it: a Hubbard–Stratonovich transformation, or the introduction of a Dirac delta. So you introduce a Dirac delta, you isolate Σ_i G_iμ x_i^α as a new variable, and the terms that now appear are statistically independent in the G's, so you can do the average correctly. So again, Dirac delta to the rescue; or Hubbard–Stratonovich to the rescue.

What is the average over? It is over the G's, over the G_iμ. Maybe I did not define it properly from the beginning: you assume that the matrix entries of G are random, and therefore W is random. For simplicity you assume that the entries of G are independent of each other; but that does not mean the entries of W are independent, they are correlated. When you take the expectation value, it is over the statistics of the G's. Questions? And again we save the day, and the boss will be happy.

What is up? It depends on the constraint; it is not hopeless. If you have a global constraint, the global constraint can be expressed as a Dirac delta, and then you can still do the derivation. If the constraint is local, maybe you can make it soft, with a Lagrange multiplier, and still do some kind of approximation in the derivation. So there are many things you can do, but the trick is always based on this kind of factorization, this one here. More questions? I will let you work out the remaining subtleties of the derivation, over dinner.

What else? We have one more day, so what do you want me to do on Monday? Let us do the non-Hermitian case. For non-Hermitian matrices, well, there are many tricks, but you need one extra trick, which comes from electrostatics. So I need you to remind yourself of the following: if I take the Laplacian in two dimensions, ∂²/∂x² + ∂²/∂y², what is the inverse of this operator?
That is, what do you have to act on so that you get a Dirac delta? Again a Dirac delta; it seems I am obsessed with Dirac deltas. This is related to the electrostatic problem in two dimensions. If you know what this object is, then you can apply it to solve problems with non-Hermitian matrices. In 3D, you know that the inverse of the Laplacian is built from 1/r, but that is only in 3D; in a two-dimensional world the electrostatic potential has a different shape. You know what that shape is, no? That is right, the logarithm. And we love logarithms, because logarithms of eigenvalues can be related to logarithms of determinants of matrices, right? Very good, that is it. Coffee or tea? Very good, thank you.
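For reference, the two-dimensional identity the lecture ends on, written out explicitly (my reconstruction of what is being alluded to, not something derived in the lecture):

\[
  \nabla^{2}\left[\frac{1}{2\pi}\ln|\vec r - \vec r\,'|\right] = \delta^{(2)}(\vec r - \vec r\,'),
\]

so the 2D electrostatic potential is a logarithm. For a matrix A with eigenvalues \lambda_i in the complex plane, identifying z = x + iy with the point (x, y),

\[
  \ln\left|\det(z\mathbb{1} - A)\right| = \sum_{i=1}^{N}\ln|z - \lambda_i|,
  \qquad
  \rho(z) = \frac{1}{N}\sum_{i=1}^{N}\delta^{(2)}(z - \lambda_i)
          = \frac{1}{2\pi N}\,\nabla^{2}\ln\left|\det(z\mathbb{1} - A)\right|,
\]

which is why the spectral density of non-Hermitian matrices can be obtained by applying the two-dimensional Laplacian to a log-determinant.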