So, I'm going to throw you the biggest possible challenge of a derivation, all right? Then you are going to work in groups to see how you do, and I'll go around. But let us start first with the mapping, mapping number two. So the idea was the following: we have an ensemble of all N×N symmetric real matrices, and suppose we have one matrix A which belongs to this ensemble. Let us denote by λ(A) the spectrum of A, and then, if A is random, the spectrum is random, so I can introduce the following random variable: I_N(x) = Σ_{i=1}^N θ(x − λ_i(A)), where θ is the Heaviside step function. This gives you the number of eigenvalues to the left of the real number x, right? Now again, if the matrix is random, the spectrum is random, and therefore I_N(x) is a random variable. One way to fully characterize a random variable is to calculate its distribution, or its moment generating function, or its cumulant generating function, right? So, to capture, more or less fully, the statistical properties of this random variable, we introduce its moment generating function, which we denote G_x(μ) = ⟨exp(μ I_N(x))⟩, where the average is over the randomness in the ensemble of random matrices. This is what I gave you, no? And the challenge was to express this as a stat mech problem: to manipulate this expression in such a way that partition functions appear, and then you know how to deal with partition functions. Cool? Did you manage to do this? No? Shall we do it together?
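Since everything here is defined through an ensemble average, G_x(μ) can always be estimated by brute force, which is handy for checking the analytics later. A minimal sketch, assuming a GOE-like ensemble of small symmetric matrices (the ensemble, the size N, and the values of x and μ are illustrative choices, not anything fixed by the lecture): sample matrices, count eigenvalues below x, and average exp(μ I_N(x)).

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 8, 2000          # illustrative: matrix size and number of samples
x, mu = 0.0, 0.3              # illustrative: threshold and MGF argument

def index_count(A, x):
    # I_N(x): number of eigenvalues of A strictly below x
    return int(np.sum(np.linalg.eigvalsh(A) < x))

counts = []
for _ in range(samples):
    G = rng.normal(size=(N, N))
    A = (G + G.T) / np.sqrt(2 * N)   # GOE-like symmetric matrix (assumed ensemble)
    counts.append(index_count(A, x))
counts = np.array(counts)

G_mu = np.mean(np.exp(mu * counts))  # Monte Carlo estimate of G_x(mu)
```

By symmetry of this ensemble around zero, the mean of I_N(0) should be close to N/2; the Monte Carlo G_x(μ) is what the replica computation below is ultimately after.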
And then I'll give you the big challenge for today, kind of an exam without preparation, yeah? Is that okay? So here is how we do it. The only thing we have to remember is this: the idea is that I start from the definition, which presupposes I have the spectrum, and I want to get to an expression where I get rid of the spectrum and express everything in terms of the matrix itself. Then I can apply the replica method, the cavity method, et cetera. So it's a way to overcome the necessity of having the spectrum of a given matrix, right? The identity we have to use is the following one: the Heaviside function θ(−x) can be written as θ(−x) = lim_{η→0⁺} (1/(2πi)) [log(x + iη) − log(x − iη)]. If I make a mistake, you tell me, okay? This is very good. So if I use this expression here, the number of eigenvalues to the left of x can be written as I_N(x) = lim_{η→0⁺} (1/(2πi)) Σ_{i=1}^N [log(λ_i(A) − x + iη) − log(λ_i(A) − x − iη)], no? Very good. Now let me introduce the following notation: let me denote x_η = x + iη, and let us denote the complex conjugate with a star, so for instance x_η* = x − iη, good? Using this notation, to simplify everything, this is equal to lim_{η→0⁺} (1/(2πi)) Σ_{i=1}^N [log(λ_i(A) − x_η*) − log(λ_i(A) − x_η)], right?
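The branch-cut mechanics behind this identity are easy to verify numerically: the difference of the two logarithms is 2i times the argument of x + iη, which tends to 2πi for x < 0 and to 0 for x > 0. A quick sketch (the value of η and the test points are arbitrary choices):

```python
import numpy as np

def theta_of_minus_x(x, eta=1e-9):
    # θ(−x) = lim_{η→0⁺} (1/(2πi)) [log(x + iη) − log(x − iη)]
    # log(x + iη) − log(x − iη) = 2i·arg(x + iη) → 2πi for x < 0, → 0 for x > 0
    val = (np.log(x + 1j * eta) - np.log(x - 1j * eta)) / (2j * np.pi)
    return val.real
```

Evaluating this at λ − x gives θ(x − λ), which is exactly the term summed over eigenvalues in the counting function above.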
And now I do the same trick I did for the spectral density: the sum of logarithms is the logarithm of the product, and the product over eigenvalues of (eigenvalue minus something) is equal to a determinant, right? So this is equal to lim_{η→0⁺} (1/(2πi)) [log Π_{i=1}^N (λ_i(A) − x_η*) − log Π_{i=1}^N (λ_i(A) − x_η)], and this can be written as lim_{η→0⁺} (1/(2πi)) [log det(A − x_η* Id) − log det(A − x_η Id)], where Id is the identity matrix. So far so good? And now notice that I don't have the spectrum anymore, I have the matrix A. I can rewrite this as follows: I write the difference of logarithms as the logarithm of a ratio, and the 1/2 becomes a square root, so I get I_N(x) = lim_{η→0⁺} (1/(πi)) log √[det(A − x_η* Id) / det(A − x_η Id)]. So far so good? Yeah? And then I have a beautiful 1 over the square root of a determinant. Remember that 1 over the square root of the determinant of a symmetric matrix S can be written as a partition function, as a multivariate Gaussian integral: 1/√(det S) = ∫ Π_i (dx_i/√(2π)) exp(−(1/2) Σ_{i,j=1}^N x_i S_{ij} x_j).
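Here is a small numerical confirmation that the log-determinant form really counts eigenvalues. One hedge: to sidestep the multivalued-logarithm issue that comes up later, this sketch takes the logarithms eigenvalue by eigenvalue (equal to log det up to branch choices); the matrix, the threshold x, and η are all arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, x, eta = 6, 0.2, 1e-8          # illustrative size, threshold, regularizer

G = rng.normal(size=(N, N))
A = (G + G.T) / 2.0               # one symmetric matrix
lam = np.linalg.eigvalsh(A)

# I_N(x) = lim (1/(2πi)) Σ_i [log(λ_i − x_η*) − log(λ_i − x_η)],  x_η = x + iη
x_eta = x + 1j * eta
terms = np.log(lam - np.conj(x_eta)) - np.log(lam - x_eta)
count_from_logs = (terms.sum() / (2j * np.pi)).real

direct = np.sum(lam < x)          # direct eigenvalue count, for comparison
```

At small finite η the two counts agree to high precision as long as no eigenvalue sits within about η of x.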
So then I introduce the idea of partition functions, and I identify them with these inverse square roots of determinants. That is, I define Z_A(x_η) = 1/√(det(A − x_η Id)): a partition function that depends on the matrix A, with parameter x_η. Therefore what we have so far is: the number of eigenvalues to the left of x is I_N(x) = lim_{η→0⁺} (1/(πi)) log [Z_A(x_η) / Z_A(x_η*)]. So again, it seems that I complicated things, and it is true that I complicated things, but I wanted to get to an expression where I link the quantity of my desire to something I know how to calculate in stat mech using the replica method, the cavity method, et cetera. So far so good? Now, this is still a random variable, right? Given a matrix, the matrix has a spectrum, and this is the number of eigenvalues. So what we have done is actually kind of idiotic: we started with a very simple expression, the definition of this object, the number of eigenvalues to the left of x, and we went crazy and expressed it like this, OK? Now suppose I want to study the statistical properties of this object, for instance through the moment generating function. I have to plug this result in there: the moment generating function is G_x(μ) = ⟨exp(μ I_N(x))⟩, and remember that the average is over the probability distribution that gives you the randomness in this ensemble of random matrices. So this is equal to what I get when I plug this result into the definition. Of course, as usual in theoretical physics, I assume that everything can be exchanged.
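The identity behind Z_A is just the multivariate Gaussian integral, and it can be checked by Monte Carlo. A sketch for a real positive-definite S (handling the complex case z·Id − A is what the iη regularizer is for; the size, seed, and sample count here are arbitrary): sampling x from a standard normal turns the normalized integral into a plain average.

```python
import numpy as np

rng = np.random.default_rng(2)
N, samples = 3, 400_000

B = rng.normal(size=(N, N))
S = B @ B.T + N * np.eye(N)     # symmetric positive definite, for convergence

# (2π)^{-N/2} ∫ d^N x e^{-½ xᵀ S x} = E_{x~N(0,Id)}[ e^{-½ xᵀ (S − Id) x} ] = det(S)^{-1/2}
xs = rng.normal(size=(samples, N))
weights = np.exp(-0.5 * np.einsum('si,ij,sj->s', xs, S - np.eye(N), xs))
mc_estimate = weights.mean()
exact = 1.0 / np.sqrt(np.linalg.det(S))
```

The importance-sampling rewrite (dividing and multiplying by the standard normal density) is just a convenient way to estimate the integral without a grid.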
Like limits: I can take them outside of averages and so on. So, assuming that, let's do it step by step: this is equal to G_x(μ) = lim_{η→0⁺} ⟨exp[(μ/(πi)) log(Z_A(x_η)/Z_A(x_η*))]⟩, where the average is over the different realizations of the matrix A. And then I do something even crazier: I put the factor in front of the log into the exponent of the argument of the log, and since the exponential and the logarithm are inverse functions, this gives me the following: G_x(μ) = lim_{η→0⁺} ⟨Z_A(x_η)^{μ/(πi)} Z_A(x_η*)^{−μ/(πi)}⟩, with the average over the disorder. And then you think, why are you torturing us with this, right? You all agree with me that what I'm doing is crazy. Yes? Why on earth are we doing this? Very good, that is actually a very good point, and you have to be very careful at this point of the derivation: certain functions in the complex plane, like the logarithm, are multivalued functions, and so you have to do this computation with care. But you know the following: this is the representation of a random variable that, as x grows, has to increase monotonically, because it is accumulating the number of eigenvalues from left to right. So at the end of the day, this will tell you which branch to choose among the different possible representations of the logarithm as a multivalued function.
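Since the branch bookkeeping is the delicate part, it is worth checking numerically that the two complex powers really reproduce the moment generating function at small finite η, with the logarithms taken eigenvalue by eigenvalue (which is the branch choice the monotonicity argument selects). Everything below — ensemble, sizes, x, μ — is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)
N, samples, x, mu, eta = 5, 500, 0.1, 0.4, 1e-8
x_eta = x + 1j * eta

vals_replica, vals_direct = [], []
for _ in range(samples):
    G = rng.normal(size=(N, N))
    A = (G + G.T) / np.sqrt(2 * N)         # GOE-like symmetric matrix (assumed)
    lam = np.linalg.eigvalsh(A)
    # log Z(x_η) = −½ Σ_i log(λ_i − x_η), branch taken eigenvalue by eigenvalue
    logZ = -0.5 * np.sum(np.log(lam - x_eta))
    logZc = -0.5 * np.sum(np.log(lam - np.conj(x_eta)))
    n_plus = mu / (np.pi * 1j)             # replica exponents n₊ = μ/(πi), n₋ = −n₊
    vals_replica.append(np.exp(n_plus * (logZ - logZc)))
    vals_direct.append(np.exp(mu * np.sum(lam < x)))

mgf_replica = np.mean(vals_replica)        # ⟨Z^{n₊} (Z*)^{n₋}⟩ at the replica values
mgf_direct = np.mean(vals_direct)          # ⟨exp(μ I_N(x))⟩ directly
```

Per sample, the complex-power combination is exactly exp(μ I_N(x)) up to O(η) corrections, so the two averages coincide; the hard analytical step is doing the disorder average first, at integer exponents.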
So this issue is not really an issue. Okay, now, everybody agrees with me that this is crazy. Very good. But why is it crazy? Because I started with something very simple, this definition. Then of course I wanted to calculate its statistical properties, because in principle the matrix can be random. But this is a very simple definition, and I complicated it, and then I put it into the moment generating function, because this captures, in principle, fully, all the statistical properties of this random variable. And I get to an expression where I have two partition functions raised to imaginary powers, and I have to do the average over the disorder. So how do I do this? I apply the replica trick. What I do is this: I don't know how on earth I'm going to do this expectation value as it stands, because it would be very difficult. But I know that if the partition functions were raised to integer powers, this would be very easy for me to do. So I replace the object of my desire by the following: I take Z_A(x_η) to the power n₊, and I multiply it by Z_A(x_η*) to the power n₋, where n₊ and n₋ are integers. What's up? In the definition of what? No, no, the definition holds for any complex number; it's just in this last part here. But again, this limit must be understood in the sense of generalized functions, of distributions, yeah? So I take n₊ and n₋ to be two integers, and now, if I were to do the average over the quenched disorder, this is very easy to do. So first I do the average, and then I take the replica limit. But now it's a different replica limit: it's not the number of replicas going to zero, it's the two replica numbers, one going to +μ/(πi)
and the other one going to −μ/(πi). And apparently, at the end of the day, you do the derivation and it works. And here, the replica method has the same steps, right? You use the replica trick, which is this one; the replica limit is different, but at some point, to do the replica limit, you have to introduce an ansatz, a hypothesis, that will allow you to analytically continue from integers, now to imaginary numbers. Questions? So that's why I was torturing you with this, right? I rewrite everything in such a way that I feel comfortable using the techniques of spin glasses, and I say, ah, I know a crazy trick somebody told me once that might work in this case. Good. There was something that we left last week, which was how to take this replica limit, right? Now, doing this limit in this case is going to be a bit complicated. So what we're going to do is go a couple of steps back, learn how to do this limit in a particular case, and then come back to this, all right? Very good. So what is going to be the exercise for today, for the rest of the day? You can work in groups. The main goal of this exercise is to learn step two of the replica method, which is introducing an ansatz that allows you to do the replica limit, either the standard one or this one. So let's go back to the empirical spectral density of a matrix, right? Remember that given a matrix A, symmetric and so on, the empirical spectral density associated to A can be written as ρ_A(λ) = −(2/(πN)) lim_{η→0⁺} Im ∂_z log Z_A(z), evaluated at z = λ − iη, where Z_A(z) was equal to, what? The exponential of minus a Hamiltonian? No, sorry, what am I saying?
Z_A(z) was the following integral: Z_A(z) = ∫ Π_{i=1}^N (dx_i/√(2π)) exp(−H(x)), and let me put the Hamiltonian explicitly: H(x) = (1/2) Σ_{i,j=1}^N x_i (z Id − A)_{ij} x_j, yeah? We had something like this, am I correct? If I'm missing something, let me know. But this is the empirical spectral density given one matrix, all right, and we found out how to evaluate these things using the cavity method. Suppose now that instead of having one matrix, you have a bunch of matrices generated by a probabilistic recipe: you have an ensemble of matrices and a probability of picking one of them. Then what you may want to characterize is not the empirical spectral density of one matrix but its average, its typical value: the empirical spectral density averaged over the different realizations of the matrix. This means ⟨ρ_A(λ)⟩ = −(2/(πN)) lim_{η→0⁺} Im ∂_z ⟨log Z_A(z)⟩, evaluated at z = λ − iη, with the average over the matrix ensemble. Now forget about the rest of the universe, right? Here you have a beautiful, magnificent average of the logarithm of a partition function: the matrix is inside the partition function, and the logarithm is in between. So, do you know a method that allows you to do this expectation very easily? The replica method, right?
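Before averaging over the ensemble, the single-matrix formula itself can be sanity-checked: at small but finite η, the derivative of log Z_A produces a Lorentzian-smoothed version of the eigenvalue density, which should be nonnegative and integrate to about one. A sketch (the matrix, η, and grid are arbitrary; the derivative is evaluated through the eigenvalues rather than by differentiating a determinant numerically):

```python
import numpy as np

rng = np.random.default_rng(4)
N, eta = 400, 0.05

G = rng.normal(size=(N, N))
A = (G + G.T) / np.sqrt(2 * N)     # one fixed symmetric matrix (GOE-like, illustrative)
lam = np.linalg.eigvalsh(A)

def rho(l):
    # ρ_A(λ) = −(2/(πN)) Im ∂_z log Z_A(z) at z = λ − iη,
    # with ∂_z log Z_A(z) = −½ Σ_i 1/(z − λ_i), since Z_A(z) = det(z·Id − A)^(−1/2)
    z = l - 1j * eta
    dlogZ = -0.5 * np.sum(1.0 / (z - lam))
    return -(2.0 / (np.pi * N)) * dlogZ.imag

grid = np.linspace(-3.0, 3.0, 121)
density = np.array([rho(l) for l in grid])
# trapezoid rule: total mass should be close to 1 (small Lorentzian leakage at finite η)
norm = float(np.sum(0.5 * (density[1:] + density[:-1]) * np.diff(grid)))
```

Each eigenvalue contributes a Lorentzian of width η and weight 1/N, so η controls the trade-off between smoothing and resolution.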
So, using the replica method, we write the expectation value of the empirical spectral density as ⟨ρ_A(λ)⟩ = −(2/(πN)) lim_{η→0⁺} Im ∂_z lim_{n→0} (1/n) log ⟨Z_A(z)^n⟩, evaluated at z = λ − iη, where the average is over the disorder, right? So far so good? Very good. So what is the challenge for today, for the rest of the day, working in groups? It is the following. Let us take a particular type of ensemble: Poissonian graphs, Erdős–Rényi graphs, okay? Undirected, which means the connectivity matrix, the adjacency matrix, is symmetric. And what I want to do is calculate precisely the spectral density for this type of graph, averaged over the probability distribution that characterizes this ensemble of random matrices. So the first thing you have to do, of course, is take the partition function to the power n and do the average over the Poissonian graphs. We have done this, right? We did it for the Ising model. Mathematically speaking it is the same, the same steps; the only thing that changes is that in this case the dynamical variables are continuous real numbers, whereas in the Ising model they are Ising variables, okay? So at some point you have to show that the nth power of the partition function, averaged, is equal to the following. And the reason I can just write it down is that essentially all the derivations are the same, right? The only thing that changes is the type of variable that you have.
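The object you are being asked to compute analytically can also be approximated by brute force, which is useful for checking the replica result at the end. A sketch that samples Erdős–Rényi adjacency matrices and collects their spectra (the size N, mean degree c, and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
N, c, samples = 300, 4.0, 20        # c: average connectivity of the Poissonian graph

all_eigs, deg_sum = [], 0.0
for _ in range(samples):
    # symmetric 0/1 adjacency matrix: each edge present with probability c/N, no self-loops
    upper = np.triu(rng.random((N, N)) < c / N, k=1)
    A = (upper + upper.T).astype(float)
    deg_sum += A.sum()
    all_eigs.append(np.linalg.eigvalsh(A))
all_eigs = np.concatenate(all_eigs)

mean_degree = deg_sum / (samples * N)
# Tr A = 0 and Tr A² = Σ_i deg_i fix the first two moments of the spectrum exactly
```

A histogram of `all_eigs` approximates the ensemble-averaged spectral density; the first two spectral moments (zero mean, second moment equal to the mean degree) give cheap exact checks.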
So I know that this can be written in terms of a path integral over two functions p and p̂: ⟨Z^n⟩ = ∫ Dp Dp̂ exp(N S_n[p, p̂]), where the functional is the following. Remember that the underlined x is a vector in replica space, x = (x_1, …, x_n); of course each x_α is now a real number, where before it was an Ising variable. Then N S_n[p, p̂] is N times S_n[p, p̂] = i ∫ dx p̂(x) p(x) + (d/2) ∫∫ dx dy p(x) p(y) [exp(x·y) − 1] + log ∫ dx exp(−(z/2)|x|² − i p̂(x)), where d is the average connectivity of this ensemble of Poissonian graphs and x·y is the scalar product in replica space. This is the first thing you have to show. Does this expression resemble something to you? It's the same expression as in the ferromagnet. Why? Because the Hamiltonian has the same form; the only thing that changes is the type of variables, right?
Now, the first thing was to prove this; the second thing you have to do is derive the saddle-point equations for this case. That means I want to evaluate this average empirical spectral density when the size of the matrix becomes very, very large. So when N goes to infinity, the asymptotic behavior of that path integral is given by a saddle point; show, or say what you would do to arrive at this conclusion, that ⟨Z^n⟩ behaves like exp(N S_n[p₀, p̂₀]), where p₀ and p̂₀ are the functions that obey the following equations: −i p̂₀(x) = d ∫ dy p₀(y) [exp(x·y) − 1], and p₀(x) = exp(−(z/2)|x|² − i p̂₀(x)) / ∫ dy exp(−(z/2)|y|² − i p̂₀(y)). Derive this; and in this case you can really call the variation of S a variation, because it is genuinely a calculus of variations. Good. Now the third part of the problem is the following. Are you with me? If you go back to the expression for the spectral density averaged over the disorder, you can show that asymptotically, for N going to infinity, it can be written as ⟨ρ(λ)⟩ = (1/π) lim_{η→0⁺} Im lim_{n→0} (1/n) ∫ dx p₀(x) Σ_{α=1}^n x_α², evaluated at z = λ − iη.
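Whatever comes out of the replica limit can be checked against the cavity route mentioned earlier: the same ensemble-averaged density solves a distributional resolvent recursion that can be iterated numerically by population dynamics. This is a sketch of that alternative route (not the replica computation itself); the mean degree c, the regularizer η, and the population sizes are all arbitrary choices, and for a Poissonian graph the excess-degree distribution is again Poisson with mean c.

```python
import numpy as np

rng = np.random.default_rng(6)
c, eta = 4.0, 1e-3
pop_size, updates, measure = 1000, 40_000, 4000

def er_density(lam):
    """Spectral density of an Erdős–Rényi graph (mean degree c) at lam,
    via population dynamics on the cavity recursion G = 1/(z − Σ_k G_k)."""
    z = lam - 1j * eta
    pop = np.full(pop_size, 1.0 / z, dtype=complex)   # population of cavity resolvents
    for _ in range(updates):                          # equilibrate the population
        k = rng.poisson(c)                            # excess degree ~ Poisson(c)
        neigh = pop[rng.integers(0, pop_size, size=k)]
        pop[rng.integers(0, pop_size)] = 1.0 / (z - neigh.sum())
    # measurement: full (non-cavity) resolvent, degree again ~ Poisson(c)
    g = []
    for _ in range(measure):
        k = rng.poisson(c)
        g.append(1.0 / (z - pop[rng.integers(0, pop_size, size=k)].sum()))
    return float(np.mean(np.imag(g)) / np.pi)
```

The η → 0⁺ limit is only approximated here; localized states (isolated nodes, small clusters) show up as sharp Lorentzian spikes at finite η, which is itself a signature of the sparse ensemble.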
Go ahead. Big N and small n? No, no: small n is the number of replicas; big N is the size of the matrix, or, in the mapping to stat mech, the number of dynamical variables you have, the number of degrees of freedom. Normally, in this notation, small n is the number of replicas and big N is the number of variables, the number of nodes, the size of the matrix. Very good, more questions? Yeah, the y squared? It's |y|², the scalar product of y with itself; if you want it in components, it would be Σ_{α=1}^n y_α². I was just trying to compress the notation. Questions? And then, once you get here, comes the tricky part, which is step two of the replica method: the ansatz that will allow you to take this limit. You see, in this limit I have to take n going to zero of 1/n times that expression, but where is n here? n is the dimension of this vector space; it is hidden inside the number of arguments this function has. So this is crazy: how on earth are you going to take this limit? You agree that this is crazy? Yes, very good. So, we'll see that if I make a very simple observation, I can write down an ansatz, a form that this function has to obey, and this will allow me to take the limit. I'll keep the suspense for later. Questions? So, in principle, this is the kind of exercise an exam could ask. Okay, now, between us, don't tell Matteo, no? The exam is going to be a bit easier, yeah? I'm not going to put this thing in. But this derivation you should be able to do, because I have given you everything. The only thing you cannot do yet is this last part, because I have not yet motivated how to make the ansatz for this object and where it comes from.
But all this derivation is based on doing expectation values of exponential functions, essentially, okay? So, what are we going to do? You turn around, you look at each other, you form groups, and you start discussing among yourselves, and you try to come up with this whole derivation, yeah? And I'll go around.