No, you're not fine. Okay, let us finish with a bang, right? Let us finish the most recent derivation we are doing, and then I'll let you go to prepare for the exam; I'll be around this afternoon to answer questions. So what we were doing, while you circulate this attendance list, very good, is the following. We are doing the exercise, the example, or the torture, whatever you want to call it, of the empirical spectral density averaged over the ensemble of Erdős-Rényi graphs. And for this, remember where we left the derivation yesterday. The nth power of the partition function for the mapping, averaged over the disorder, was equal to what? It was equal to the integral over the product over alpha from 1 to n of d^N x^alpha of the exponential of minus z over 2 times the sum over alpha from 1 to small n and over i from 1 to capital N of (x_i^alpha) squared, plus d over 2N times the complete sum over i and j from 1 to N of the exponential of the scalar product in replica space of x_i with x_j, minus 1. So far so good. So we were here, and then we discussed the trick of how to linearize this, to get to an expression where I can apply the saddle point method. That's the trick. Okay, so let us focus on this, step by step, and actually I'm going to do it a bit more generally. Suppose I have something like this: the complete sum over i and j from 1 to N of some function, it doesn't matter which function, in this case it's the exponential minus 1, of the replica-space vectors x_i and x_j. This is what I have there. Do you agree?
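In symbols, the starting point just described reads as follows (a transcription of the blackboard expression, with overall normalization constants suppressed as in the lecture; x_i = (x_i^1, ..., x_i^n) denotes the replica-space vector at node i):

```latex
\overline{Z^{n}(z)} \;=\; \int \prod_{\alpha=1}^{n} d^{N}x^{\alpha}\;
\exp\!\Bigg[\, -\frac{z}{2}\sum_{\alpha=1}^{n}\sum_{i=1}^{N}\big(x_i^{\alpha}\big)^{2}
\;+\; \frac{d}{2N}\sum_{i,j=1}^{N}\Big(e^{\vec{x}_i\cdot\vec{x}_j}-1\Big) \Bigg]
```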
So with this I do the following. I say this is equal to the integral over a replica-space vector y of the sum over i and j from 1 to N of f of y and x_j, times the Dirac delta of y minus x_i. Do you agree that I have done nothing? Very good. And I do it again for the other variable. So now this is equal to the double integral over y and y prime of the sum over i and j from 1 to N of f of y and y prime, times the Dirac delta of y minus x_i, times the Dirac delta of y prime minus x_j. What's up? Yeah, maybe I should have been more careful with this: when you have discrete variables, it's better to use Kronecker deltas, and for continuous variables, Dirac deltas. In this case the y are continuous variables. In the Ising example, the variables were discrete; they took values plus or minus one, and you had a replica-space vector of plus and minus ones. Here they are continuous. So for this guy here, the notation is y equals (y^1, ..., y^n), with each y^alpha in R; it's a continuous variable. And that is the meaning of x: remember that when we did the mapping for the spectral density, the partition function had continuous variables, not discrete ones. Shall I write it down? It was related to a partition function that was the integral of d^N x of the exponential of minus one half of x transpose times (z times the identity minus C) times x, with C the matrix. So these are continuous variables. But again, I am a bit sloppy with Kronecker deltas and Dirac deltas; maybe I should be more careful, but the property you use is the same.
Like, for instance, here I'm using the property that if I integrate against a Dirac delta, I get back the original expression. And if these were spin variables, I would write sums instead. Let's write down the same thing for spins: I would write the sum over tau of the sum over i and j from 1 to N of f of tau and sigma_j, times a Kronecker delta of tau and sigma_i. I am using the same property, of the Kronecker delta or of the Dirac delta; which one appears is due to the mapping. Very good. More questions? All right. So then, you see, in this complete double sum I can move the sum over i here and the sum over j there. So then I have the following: the double integral over y and y prime of this function f of y and y prime, with, let me put it like this, an N squared in front, times 1 over N the sum over i from 1 to N of the Dirac delta of y minus x_i, times 1 over N the sum over j from 1 to N of the Dirac delta of y prime minus x_j. I have not actually done anything yet. Good. So this is generic, right? And now my function is this exponential minus 1, so I can apply the trick here. So I have that this is equal to the integral over the product over alpha from 1 to n of d^N x^alpha of the exponential of minus z over 2 times the sum over alpha from 1 to small n and i from 1 to capital N of (x_i^alpha) squared, plus N d over 2 times the double integral over the replica-space vectors y and y prime of the exponential of the scalar product of y with y prime minus 1, which multiplies, let me put them here, these two sums, all right?
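The rearrangement just performed, written out (same notation as above, with f generic):

```latex
\sum_{i,j=1}^{N} f(\vec{x}_i,\vec{x}_j)
\;=\; N^{2}\!\int d\vec{y}\, d\vec{y}'\; f(\vec{y},\vec{y}')
\Bigg[\frac{1}{N}\sum_{i=1}^{N}\delta(\vec{y}-\vec{x}_i)\Bigg]
\Bigg[\frac{1}{N}\sum_{j=1}^{N}\delta(\vec{y}'-\vec{x}_j)\Bigg]
```

With f(y, y') = e^{y·y'} − 1, the N² combines with the d/2N of the interaction term to give the Nd/2 prefactor that follows.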
I have 1 over N the sum over i from 1 to N of the Dirac delta of y minus x_i, times 1 over N the sum over, I wrote i again, but it's a dummy variable, so it doesn't matter, the sum over j from 1 to N of the Dirac delta of y prime minus x_j. Okay? So far so good? So this now is the equivalent of the magnetization that appeared in the example of the fully connected Ising model. What I need to do is introduce a Dirac delta for this whole object, and that is the role of the functional Dirac delta: to introduce a Dirac delta for each value of y, okay? That's right, that's right: in that model you used the magnetization because there you only need one parameter, one value. For the Ising model on random graphs, you need a whole function. Good, so this is equal to an integral, a path integral, over a set of functions p, of the integral over the product over alpha from 1 to n of d^N x^alpha, of the exponential of minus z over 2 times the sum over alpha from 1 to n and i from 1 to capital N of (x_i^alpha) squared, plus N d over 2 times the double integral over dy and dy prime of p of y times p of y prime times the exponential of the scalar product of y with y prime minus 1, times a functional Dirac delta, that means a Dirac delta for all possible values of the function p, which enforces that p of x must be equal to what you are extracting from the argument of the exponential, namely 1 over N the sum over i from 1 to N of the Dirac delta of x minus x_i. So that if I were to do the path integral using the functional Dirac delta, I would replace the p that appears here by this expression, and I would go back to the double sum over i and j. Again, the trick is the same in spirit; the object you apply it to is a bit more complicated.
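At this stage the replicated average has the form below; the functional delta enforces, point by point in replica space, the definition of the empirical density p:

```latex
\overline{Z^{n}} = \int \mathcal{D}p \int \prod_{\alpha=1}^{n} d^{N}x^{\alpha}\,
e^{-\frac{z}{2}\sum_{\alpha,i}(x_i^{\alpha})^{2}
 \,+\, \frac{Nd}{2}\int d\vec{y}\,d\vec{y}'\, p(\vec{y})\,p(\vec{y}')\,\big(e^{\vec{y}\cdot\vec{y}'}-1\big)}
\;\delta\!\Bigg[\, p(\vec{x}) - \frac{1}{N}\sum_{i=1}^{N}\delta(\vec{x}-\vec{x}_i) \Bigg]
```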
Now what I do is introduce the Fourier representation of this functional Dirac delta, that is, the Fourier representation for each value this function can take. Let me call the conjugate variable p hat. Then I have a path integral over p and p hat of the integral over the product over alpha from 1 to n of d^N x^alpha of the exponential of minus z over 2 times the sum over alpha from 1 to small n and i from 1 to capital N of (x_i^alpha) squared, plus, this stays the same, N d over 2 times the double integral over y and y prime of p of y p of y prime times the exponential of the scalar product of y with y prime minus 1, and now I put the Fourier representation of this: it would be i N times the integral over dx of p hat of x, multiplying p of x minus 1 over N the sum over i from 1 to N of the Dirac delta of x minus x_i, and this is already inside the argument of the exponential. The same trick; the object is a bit more complicated, but it's the same bloody trick. And the point of these steps is the following: you see, you had here a quadratic sum, and now you have only linear sums in the node variables; here I have a linear sum and here I have a linear sum. That means I can now factorize over the variables per node and do the trace over those variables, the same as in the example of the fully connected Ising model. So let us rearrange this expression a bit. What do I have? I'm going to move some terms around, okay, so I have the path integral over Dp and Dp hat of the exponential of what?
Of N d over 2 times the double integral over y and over y prime of p of y p of y prime, multiplying the exponential of the scalar product of y with y prime minus 1, right, this term here; plus this term here as well, i N times the integral over dx of p hat of x p of x, all right. And the others I put together with this remaining part, because of what I'm going to do next. Yes, it's fine, it is there, hidden: in this measure there are hidden factors which are not important. When you do the derivation, you should be exquisite with it, so in principle, okay, what I'm going to write is not really correct, but for discrete variables, when we wrote this for the Ising model, what Dp actually meant was the product over all possible values of sigma in the hypercube of values minus 1, plus 1, of dp of sigma; and Dp hat meant the same thing, divided by 2 pi over N, right. No, they don't cancel each other. No, no, with a path integral it's a bit annoying, because now you don't have discrete variables, you have continuous ones, so this would be something to a power infinity, and that doesn't make sense. The way you do it with continuous variables is that you regularize: you discretize the values of the functions, and at some point you take the continuum limit, without caring about those factors that are not coupled to the important functions but that give you infinity in this continuum limit. So this nastiness, which is irrelevant for what matters, I keep hidden here; what I call Dp Dp hat is the product of these two things, yeah?
If you want, you can also keep the 2 pi, but remember that in the mapping, the relationship between the spectral density and the partition function goes through the derivative of the log of the partition function, and the derivative of a constant is zero; that's why I don't write it anymore. But if you want, write it, and realize that in some cases constants are not important, even if I know which constants they are, sure? And the N? For the same reason I mentioned before: if you want, don't put the N; do the standard Fourier representation of the Dirac delta, then do the whole calculation I'm going to do, apply the saddle point method, and you'll realize that p hat must be proportional to N to have a non-trivial solution. Since all these derivations are alike, you know that the conjugate variable that appears when you introduce the Fourier representation of the Dirac delta must always be proportional to N. That's why I put it from the beginning; it's half the power of foreseeing what is going to happen. No, I mean, if you take the continuum limit you still have a problem, but it's a trivial problem: a constant that goes to infinity. So you see, if this were for continuous variables, for actual functions, what would you do? You would regularize, you would discretize the values of x, as if you were doing a Riemann integral. Then the product I wrote here is well defined, okay? You do everything with the discretization, and then you take the continuum limit. What happens is that sums go to integrals, and you have an issue with this factor, because this factor would be 2 pi over N to a power that goes to infinity. But it doesn't matter; you have the factor 2 pi either way, the N is not important for what I'm saying. The point is that you have the whole factor to a given power that goes to infinity when you take the continuum limit, yeah?
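The rescaled Fourier representation under discussion, with the conjugate field p hat carrying the factor of N from the start (the 2 pi's and the discretization constants are absorbed into the measure, as just explained):

```latex
\prod_{\vec{x}}\,\delta\!\Bigg[ p(\vec{x}) - \frac{1}{N}\sum_{i=1}^{N}\delta(\vec{x}-\vec{x}_i) \Bigg]
\;=\; \int \mathcal{D}\hat{p}\;
\exp\!\Bigg\{\, iN\!\int\! d\vec{x}\; \hat{p}(\vec{x})
\Bigg[\, p(\vec{x}) - \frac{1}{N}\sum_{i=1}^{N}\delta(\vec{x}-\vec{x}_i) \Bigg] \Bigg\}
```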
Very good, more questions? Go ahead, sorry? The D here? No, it's just notation. What I'm saying is that this symbol means this, okay? And what happened with this one? No, no, this is just to indicate that the product only affects this part, yeah? Okay, maybe I'm reusing the same symbols, so why don't we do the following: let me use this symbol here, not to be confused with that one; the only thing it means is that the product affects this part, it's not the same notation, yeah? Better? You see, I want to put these square brackets because if I did not, people might think that this product affects everything to the right of it. The square brackets are just to point out that this product only affects this expression here. That is the only thing they mean, yeah? Very good, more questions? Go ahead. Yes, but there is a procedure to make it well defined, okay? You discretize, like in Riemann integrals, then you do the whole process for discrete x, and at the end you take the continuum limit. So, repeat the derivation, what do you have in mind? Can you repeat what you have in mind? Sums of deltas, these ones here, yeah, sure. But what would happen is that you would undo the derivation we are doing. No, because the sum is still there, right? So how can you set that aside? You see, at some point inside the argument of the exponential I have the following: the double integral over y and y prime of the exponential of the scalar product of y with y prime minus 1, times 1 over N the sum over i from 1 to N of the Dirac delta of y minus x_i, times 1 over N the sum over j from 1 to N of the Dirac delta of y prime minus x_j, yeah? And this is inside the argument of the exponential. And you are saying what? So I still have the double sum; what I want is to get rid of the double sum.
So if you multiply everything out and then use the deltas, you are going to recover the expression we are trying to avoid, yeah? Very good, more questions? Come on, don't be shy. More questions? Go ahead. You don't understand the last equation, this one here? No, but I had not finished writing it down; when I was writing, somebody asked me something, so I didn't finish. Can I finish it now? Very good. So, as I was saying, what I'm doing now is reordering the terms. I put the path integral over these two functions, and what this means is what I told you it means. All right, and what I'm still missing is this term here and this term here, okay? So then this multiplies the integral over the product over alpha from 1 to n of d^N x^alpha, and then I have the exponential of minus z over 2 times the sum over alpha from 1 to n and i from 1 to capital N of (x_i^alpha) squared. And now, you see, I have this term here with the sum and the Dirac delta. So I can use the Dirac delta to do the integral, and then the 1 over N cancels this N, and this gives me minus i times the sum over i from 1 to N of p hat of x_i. And this is the same step, in spirit, mathematically speaking, as in the fully connected Ising model: now everything is linear in the node variables, in the dynamical variables, and now we can factorize and do the trace, right? Because now I can do this term as follows. I have here the path integrals over p and p hat, and then this part, let's do it step by step: N d over 2, sorry, times the double integral over y and y prime of p of y p of y prime times the exponential of y dot y prime minus 1, plus i N times this integral of p hat of x p of x. And then the rest, you see, I can rewrite as a multiple integral, but now I change the order of the differentials.
I write this as the product over i from 1 to N of d^n x_i, of the exponential of minus z over 2 times the sum over alpha from 1 to n, the number of replicas, of (x_i^alpha) squared, minus i p hat of the vector x_i. And now all of this factorizes per i, okay? Yes, it's capital N. To keep the notation? Yeah, put it like this, fantastic. Excellent. So now this factorizes per site, right? So this is equal to what? To the same thing as before: the path integral over p and p hat of the exponential of, again, this same piece, let's do it step by step: N d over 2 times the double integral over y and y prime of p of y p of y prime times the exponential of y dot y prime minus 1, plus i N times the integral over, if you want, dx of p hat of x p of x; and this now is what? This is the product over i from 1 to N of the integral over d^n x_i of the exponential of minus z over 2 times the replica-space vector x_i squared, minus i p hat of x_i. Sorry, speak up? Y prime, thank you, yeah. And this is the same integral N times, right? Therefore this is equal to that integral to the power N, which I can put inside the argument of the exponential as plus N times the log of it. Better? More or less; I mean, the way to learn the calculation is to do the bloody calculation, there is no other way, right? So I've managed to get to a situation where I'm very happy. Why? Because an extremely complicated object, which was what? Which was, okay, going back to the last one, the partition function to the power n, averaged over the disorder; at the end of the day I can write it as some weird integral, a path integral, of the exponential of N times a functional that depends on these two functions, p and p hat. And this functional is precisely the one I have there without the N, with S_n of p and p hat equal to what?
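The factorization over sites and the re-exponentiation just described are:

```latex
\prod_{i=1}^{N} \int d^{n}x_i\; e^{-\frac{z}{2}\vec{x}_i^{\,2} - i\hat{p}(\vec{x}_i)}
= \Bigg[ \int d^{n}x\; e^{-\frac{z}{2}\vec{x}^{\,2} - i\hat{p}(\vec{x})} \Bigg]^{N}
= \exp\!\Bigg[\, N \log \int d^{n}x\; e^{-\frac{z}{2}\vec{x}^{\,2} - i\hat{p}(\vec{x})} \Bigg]
```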
It's equal to i times the integral over dx of p hat of x p of x, plus d over 2 times the double integral over y and y prime of p of y p of y prime, which multiplies the exponential of the scalar product of y with y prime minus 1, plus the logarithm of the integral over dx (if you want, you can put here d^n x to keep all the notation consistent; when you do the derivation, just be careful and consistent with the notation) of the exponential of minus z over 2 times the replica-space vector x squared, minus i p hat of x. As promised. So now, why is this cool? Because from the definition, this, if you look at it, is something very, very difficult to evaluate. While here, if I'm interested in the asymptotic behavior of the partition function when the size of the matrix is very, very large, then no matter how complicated and how poorly defined the path integral is, the idea of the saddle point method still applies. So I don't have to do a path integral; that's the cool thing about it. For N large, this behaves like the exponential of N times this functional evaluated at the functions p naught and p hat naught that extremize it. So that's why, at the end of the day, I'm not being particularly careful with the factors in the path integral: if you only worry about the asymptotic behavior in N, you don't have to do those integrals. Questions? So this was the first part of the exercise. Now the second part: what are the values of p naught and p hat naught? So remember, these are the functions here: p naught of x and p hat naught of x are the solutions of the saddle point equations, and these saddle point equations are the equations that come from extremizing this functional of p and p hat.
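Collecting the pieces, the functional written on the board is:

```latex
\overline{Z^{n}} = \int \mathcal{D}p\,\mathcal{D}\hat{p}\;\, e^{\,N S_n[p,\hat{p}]},
\qquad
S_n[p,\hat{p}] = i\!\int\! d\vec{x}\; \hat{p}(\vec{x})\,p(\vec{x})
+ \frac{d}{2}\!\int\! d\vec{y}\,d\vec{y}'\; p(\vec{y})\,p(\vec{y}')\,\big(e^{\vec{y}\cdot\vec{y}'}-1\big)
+ \log \int d^{n}x\; e^{-\frac{z}{2}\vec{x}^{\,2} - i\hat{p}(\vec{x})}
```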
So that means you take the variation of S_n with respect to p equal to zero, and the variation of S_n with respect to p hat equal to zero, and you obtain a couple of equations that relate p naught with p naught hat. Good? Questions? Now, how do you do this derivation? By doing it. The only things you have to know when you do calculus of variations are the following. First, the variation has essentially the same properties as the ordinary derivative. Second, the variation of p of x with respect to p of x prime is the Dirac delta of x minus x prime. That's the only trick, the only property, you need to know for this derivation. And if you do it, let's do it, what you obtain is the following. You obtain that minus i p hat naught of x is equal to d times the integral over y of p naught of y, which multiplies the exponential of the scalar product of x with y, minus 1. This comes from doing the variation with respect to p, which acts here and here. And then, when you do the variation with respect to p hat, you obtain that p naught of x is equal to the exponential of minus z over 2 x squared minus i p hat naught of x, divided by the normalization, right? The integral over y of the exponential of minus z over 2 y squared minus i p hat naught of y. And this variation is rather straightforward. Did you manage to get, what time is it?, did you manage to get these equations? I guess so, I guess you managed, yeah? Now, let me delete all this, or maybe this part here, or actually this part below. And let us notice something that is very important as well. Remember the mapping: the mapping told us that the average spectral density was equal to what? To minus 2 over pi N times the imaginary part of what?
Minus 2 over pi N times the limit of eta going to zero plus of the imaginary part of the derivative with respect to z of the limit of n going to zero of 1 over n times the logarithm of the partition function to the power n averaged over the disorder, evaluated at z equal to lambda minus i eta, right? Good? Now, if you are interested in the asymptotic behavior of the spectral density, that means when N is very large, you know the averaged replicated partition function behaves in this way. If you plug this into here, the logarithm cancels the exponential, the N cancels this N here, and then you have to do the derivative with respect to z of this functional. If you do it and use the saddle point equations, you should get the following. You should get the limit of eta going to zero plus of 1 over pi times the imaginary part of the limit of n going to zero of 1 over n times the integral over dx of p naught of x times the sum over alpha from 1 to n of (x^alpha) squared. Why do you get this? Well, because when I do the derivative of this functional with respect to z, I deleted it, but it was in the logarithm part, the logarithm of the integral of this object here; the logarithm gives you the denominator, with this in front, and at the saddle point this object is precisely p naught of x. Well, now, tell me. Very good, very good, that's right. You see, the saddle point solutions also depend on z through the saddle point equations, right? So when you do the derivative of the functional with respect to z, you have an explicit dependence and an implicit dependence through the functions. For the implicit dependence, you would have to do the variation of the action with respect to p and p hat; but the variation of the action with respect to p and p hat is zero, because these are the solutions at the saddle point, right?
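For reference, the two saddle point equations and the resulting formula for the spectral density, exactly as derived above (only the explicit z-dependence of S_n contributes, since the variations of S_n vanish at the saddle point):

```latex
-\,i\,\hat{p}_0(\vec{x}) = d\!\int\! d\vec{y}\; p_0(\vec{y})\,\big(e^{\vec{x}\cdot\vec{y}}-1\big),
\qquad
p_0(\vec{x}) = \frac{ e^{-\frac{z}{2}\vec{x}^{\,2} - i\hat{p}_0(\vec{x})} }
                   { \int d\vec{y}\; e^{-\frac{z}{2}\vec{y}^{\,2} - i\hat{p}_0(\vec{y})} },
\qquad
\rho(\lambda) = \lim_{\eta\to 0^{+}} \frac{1}{\pi}\,\mathrm{Im}
\lim_{n\to 0}\frac{1}{n}\int d\vec{x}\; p_0(\vec{x})\sum_{\alpha=1}^{n}\big(x^{\alpha}\big)^{2}
\Bigg|_{z=\lambda-i\eta}
```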
So you only need to worry about the explicit dependence on z. Good. Now, step two of the replica method, which means: how on Earth do I take the limit of n going to zero? So let us look at this object we have here, and let's not forget about the imaginary part and the limit of eta going to zero plus. We have the limit of n going to zero of 1 over n times the integral over dx of p naught of x times the sum over alpha from 1 to n of (x^alpha) squared. Now, n is an integer before taking the limit, right? And what is n? The number of terms in the sum. So it's a bit weird taking this integer n and making it real in the sum, right? And it's also hidden here, no? p naught of the vector x is actually p naught of a vector having n components, x^1, x^2, up to x^n. So it's very difficult to see from this expression how to take the limit of the dimension of a space going to zero and evaluate it. Do you agree with me that this looks like an impossible task? Yeah, okay. So the idea is the following observation. When I want to derive the typical properties with respect to the randomness due to the graphs, I do this trick of taking the partition function Z of z and multiplying it n times, so it's Z to the power n, right? With n an integer, this is actually what it is. Let me write it simply: this is Z times Z times Z, up to n times, right? And then you call these the n replicas, or copies, of the system. You say: I'm going to call the system related to this partition function the first copy, the first replica of the system. This would be the second copy, the third copy, and the last one the nth copy. But there is no reason to call this one the first, that one the second, and this one the third, because these variables commute, right?
So this one could be the 24th, the 17th, the 1st, the 3rd, the 7th, et cetera, et cetera. That means that at the end of my calculation, I should expect that what I obtain is invariant under the exchange of replicas, because at the beginning I put the replicas in independently, right? So, at the beginning of this derivation, the replicas, the copies, are independent. And I expect, in principle, that my theory, after doing the average over the disorder, et cetera, keeps this symmetry, okay? And then you say that the resulting theory must be independent of the labeling of the replicas: if I were to relabel the alphas, nothing should change, because at the beginning I put them in independently. Or, in more fancy words, you say that your solution must be invariant under permutations of the replicas; my objects in the theory must be invariant under permutations of the replicas. Does everybody agree with this observation? Right. So that means that this object that appears at the end of this very nasty and bloody calculation should have the property that when I exchange two components of this vector in replica space, it gives me the same. This hypothesis, or ansatz, is called the replica symmetric ansatz; let me put it there, this is what is called the replica symmetric ansatz. And this is going to be the hypothesis that allows me to take this limit of n going to zero, right? But at least you have to understand the motivation: the simple observation that this was some kind of artificial construction; there is no reason why the label I put on each copy should be important, so my final theory should be invariant under this labeling, right? Now, the question is: which form does this p naught have to have to capture this replica symmetry?
This object is the solution of the saddle point, but with the invariance under permutations of the replicas imposed; let's call it p naught in the replica symmetric ansatz. And this took a while to understand; let me write it down, okay? For this particular case, for this mapping, the form this guy should have is the following: an integral over a parameter, capital Delta, of a density omega of Delta, times the product over alpha from 1 to n of the exponential of minus (x^alpha) squared over 2 Delta, divided by the square root of 2 pi Delta, right? This is the most general form you have for the replica symmetric ansatz. Question? Well, okay, very good question. Yeah, actually, this is kind of funny, because you can motivate, or argue, that this must be the correct expression if you try to relate the cavity method with the replica method; the Deltas that appear in the cavity method are related to these Deltas that appear here, okay? Then you can try other things. Like, for instance, how can I be assured that something is symmetric under the permutation of replicas? We can try some things. One possibility would be the following; let's call it our replica symmetric ansatz, why not: that this joint distribution of the vector in replica space is just a product of simple distributions. I could do something like this. Let's put now q naught of x^alpha, and let us say that this distribution depends on some parameters; and let us assume that the parameters depend on alpha, you'll see. So let us assume that the distribution for replica alpha depends on parameters with index alpha. And actually, let us write it down explicitly: this would be q naught of x^1 with parameter Delta 1, times q naught of x^2 with Delta 2, up to q naught of x^n with Delta n. So it is clear that this is not symmetric under the permutation of replicas. Why?
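The replica symmetric form just written, with omega(Delta) a normalized density over the variance parameter Delta:

```latex
p_0(\vec{x}) \;=\; \int_0^{\infty}\! d\Delta\;\, \omega(\Delta)\,
\prod_{\alpha=1}^{n} \frac{ e^{-\frac{(x^{\alpha})^{2}}{2\Delta}} }{ \sqrt{2\pi\Delta} }
```

Exchanging any two components x^alpha and x^beta leaves the integrand unchanged, so this is manifestly permutation invariant; the alpha-dependent product trial just introduced is not, which is the point being made.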
Because even though it factorizes, the parameters I put here, the ones that parameterize the distribution, for instance the variance, the mean value, et cetera, change when I permute the indices. So that means this is not replica symmetric. The only way for this to be replica symmetric is that all the parameters are the same, yeah? Do you agree with me? And now you can generalize this to say: okay, maybe these parameters have some uncertainty. So this can be lifted up to say that this product is conditioned on a given value of Delta, times an integral over all possible values of that Delta, and that gives back this, okay? So I can suppose that these parameters follow a distribution, and then I have this product conditioned on a given parameter. If Matteo were here, is Matteo here? No. If Matteo were here, he would say, ah, this is called de Finetti's theorem, right? And he would leave it there, and you would say, okay, thank you, all right? But you understand: the plain product would be a good trial, but it is still missing the possibility that this Delta can be undetermined, yeah? And the general form actually captures that case when this density is a Dirac delta, okay? So this is the most general replica symmetric order parameter. Or, you know, if you don't see it, believe me for the time being so I can continue, yeah? And I can try to motivate it later. Tell me. Yeah, yeah, something like this. And again, this can be motivated; it can be related to the cavity method, actually, but I will not have time to do that. If you're interested, we can do it in a lecture on Saturday; I'm going to be here for the rest of the school, actually, yeah? Now, okay, suppose that... Go ahead. You have to speak up. What is the order parameter?
Yeah, okay, there is a reason why I normally call these parameters order parameters. Remember that in critical phenomena, the order parameter is the quantity that is zero in one phase and different from zero in another phase, and varies continuously at the transition point, yeah? In the ferromagnet, for instance, the order parameter is the magnetization. Now, mathematically, the magnetization appears when you linearize this double sum, right, as we saw in the fully connected Ising model. So normally, in my area, you call order parameters the objects you have to introduce to linearize this double sum, yeah? Now, can this also be understood as an order parameter? Yes. Of course, it's not going to be zero or different from zero when you go from one phase to the other, because this is a density, so it cannot be zero, yeah? But its form will change, and if you look at how the form changes, you can also locate transitions. Good? Ah, definitely; how it's spelled? De Finetti; I think it's with two t's, definitely. Ah, yes, it's because I said that this is the solution for our case, for this problem in random matrices, okay? Yeah, in principle I should have here an infinite number of parameters, and I would not know that this is a Gaussian. But for our problem, I know that this distribution for each component in replica space must be a Gaussian. Here, okay, very good, this is very good, because when people originally did these derivations, they put a generic expression, right? Let me delete this. They would put an integral over an infinite set of parameters, this mu is an infinite number of parameters, of a density over that infinite set of parameters, of a product of single-variable marginals, one per replica component, q of x^alpha, each depending on the infinite number of parameters. Yeah? You apply...
You put this ansatz into the saddle point equations and then you obtain the consistency equations for this omega of mu. And you realize that for our problem only one parameter, the variance, is important; the other ones are zero. Like it happened when we wrote down the cavity equations for the cavity marginals for our problem and realized that the Gaussians are the ones which are a fixed point of those equations. But the most generic solution is a distribution over these parameters, right? Can you speak up? Can this delta be interpreted as an order parameter in this system? For this mapping of random matrices, I don't see how; for the Ising model on random graphs, yes, because this parameter — for the Ising model on Poissonian random graphs — would be, for instance, the cavity fields, okay? And the hyperbolic tangent of the cavity fields gives the cavity magnetizations, and those can be understood as order parameters. Right? Very good. Can I continue now? What I want to argue... No, can I continue? Ah, sorry, go ahead. For our case. For our case. For our problem, which is... No, no, no, no. People did not realize this at first. It's because when you write it down — you have to put, guys, you have to put this expression back into the saddle point. Remember that when we had the cavity equations for the cavity marginals in this problem, what we did was to say, okay, I realize the cavity marginals have to be Gaussians; I plug that into there, and then I get the equations for the deltas. Right? So here you have to do something similar: I have to introduce this expression into the saddle point equations and then write equations for this omega, for these densities. You write that down and then you realize that the only densities that work are those densities for the variances, when these marginals are Gaussians. And the observation is the same one we made for the cavity method.
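As a concrete check of this Gaussian closure, the cavity recursion for the variances, Δ_{i\j} = 1/(z − Σ_{k∈∂i\j} Δ_{k\i}), can be iterated numerically. The sketch below is my own illustration, not code from the course: on a tree the recursion is exact, so the resulting single-site variances can be compared against the diagonal of the exact resolvent (zI − A)⁻¹.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100

# Random recursive tree: node i attaches to a uniformly chosen earlier
# node, so there are no loops and the cavity recursion is exact.
A = np.zeros((N, N))
for i in range(1, N):
    j = int(rng.integers(0, i))
    A[i, j] = A[j, i] = 1.0
nbrs = [[int(j) for j in np.flatnonzero(A[i])] for i in range(N)]

z = 0.7 - 0.05j  # spectral parameter with a small imaginary part

# Cavity messages: msg[(i, j)] = variance at i with the edge to j removed.
msg = {(i, j): 1.0 / z for i in range(N) for j in nbrs[i]}
for _ in range(300):  # far more sweeps than the tree diameter
    msg = {(i, j): 1.0 / (z - sum(msg[(k, i)] for k in nbrs[i] if k != j))
           for (i, j) in msg}

# Single-site variances: Delta_i = 1/(z - sum over all neighbours).
delta = np.array([1.0 / (z - sum(msg[(k, i)] for k in nbrs[i]))
                  for i in range(N)])

# Exact resolvent diagonal for comparison.
G = np.linalg.inv(z * np.eye(N) - A)
err = np.max(np.abs(delta - np.diag(G)))
```

On a loopy but locally tree-like graph, such as a sparse Erdos-Renyi graph, the same iteration is only approximate and becomes exact in the large-N limit.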
You find equations, integral equations, for these objects, and you realize that these objects are Gaussians — they are a fixed point, from a functional point of view, of those equations. But you have to do the derivation to see it. What do you mean by closing under Gaussians? The probability of having what? Yes. Yeah. No — something that might not be Gaussian. This issue I mentioned, that it closes under Gaussians, is not for this guy here, it's for these guys here. So realize that if you put Gaussian densities here, the equations close for these marginals. And then the density for these parameters obeys something which is not Gaussian at all. That I will not have time to derive, but I'll send you the notes. Yeah. The idea of what? Well, as I said, you know that this obeys a saddle point equation as well, but you can combine them and have a single saddle point equation for this, right? So you plug this expression in there, and in the replica limit, n going to zero, you find closed equations for this. But the derivation is not trivial and it's not easy to — I mean, I cannot just explain it to you; we would have to do it to see how it appears. What would be what? You can use the same ansatz, if you want, of course; in this case it is not a density, so the corresponding omega hat will not be a density. Or you can combine the two saddle point equations to have only one saddle point equation for this one. It doesn't matter. So, a replica symmetric ansatz for this p hat, yeah? More questions? You got — sorry? Did you get convinced? Ah, okay, okay. Everybody got very convinced in '75, and then something odd was noted in '75, and nobody knew how to sort it out, and it took 30 years to understand. Right? So it is a very convincing argument. Fortunately, for these mappings with random matrices, the replica symmetric ansatz works. But for spin glass models it doesn't, and this is something very strange.
So it happens that replica symmetry is actually broken, and you have to break it in a specific way to capture the thermodynamic properties of spin glasses. Yeah, we do. Yes, yeah, we do — and only for some models was it proved mathematically that the way you break replica symmetry, which is the Parisi scheme, gives you the exact solution. Yeah? No, no. This was the way. Yeah, they must be the same, because if they are not the same — if I permute two replicas, then I obtain something different, right? So all the parameters must be the same. That's right, that's right. Yes. It's in the product. It's in the fact that the joint distribution of this vector in replica space is the product, over each component, of this marginal for each component, and then the parameters that parameterize it have to be the same. Yeah? And actually this expression was very difficult to understand at the beginning. The first people who wrote the replica symmetric ansatz for this sort of order parameter did it in this way, which is the incorrect way — it's a particular case. This was done in 1985 in a paper by Michael Wong and David Sherrington, when they tried to solve a combinatorial optimization problem by relating it to a spin-glass problem. Yeah? More questions? Now going back to... This argument is very convincing, right? So how can you convince yourself? The way people realized that this couldn't be correct — and then there was a lot of argument about where the source of the mistake was — is that if you use the replica symmetric ansatz for the paradigmatic spin-glass model, which is the SK model, the Sherrington-Kirkpatrick model, and you study the entropy, then when the temperature goes to zero the entropy becomes negative, right?
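To make the symptom concrete (these are the standard textbook numbers, not from today's board): within the replica-symmetric ansatz, the zero-temperature entropy of the SK model comes out as

```latex
S_{\mathrm{RS}}(T \to 0) \;=\; -\frac{1}{2\pi} \;\approx\; -0.16,
```

which is impossible: the entropy of a system of discrete spins counts configurations and cannot be negative.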
So then something was wrong. But if you look at the paper of Sherrington and Kirkpatrick, they thought that what was incorrect was the exchange of the replica limit with the thermodynamic limit, but that the replica symmetric ansatz itself was correct — because, it's weird, right, the replicas are independent, all right? And then Parisi introduced a way to break the replicas, et cetera, et cetera. More questions? Okay, now I hope I managed to convince you that this is a proper order parameter — well, the one I wrote before; let's write it again — a proper order parameter in the replica symmetric ansatz. Again, for our case this is the integral over a parameter, of a density of that parameter, of the product for alpha from 1 to n of the exponential of minus x alpha squared divided by 2 delta, divided by the square root of 2 pi delta. Now, how can I use this ansatz to take the replica limit? Well, when you plug this ansatz in here, the values of n appear explicitly and then you can do the limit very easily. Let's do it, right? So, the limit of n going to zero of 1 divided by n of the integral of dx of — now this P naught I'm going to write in the replica symmetric ansatz — the integral over delta of omega of delta, of the product for alpha from 1 to n of the exponential of minus x alpha squared divided by 2 delta, divided by the square root of 2 pi delta, and then I have the sum for beta from 1 to n of x beta squared. Yeah? So this would be equal to what? Let's do it like this. This would be equal to the integral over delta, d delta omega of delta, and then I have the limit of n going to zero of 1 over n, and then I have the integral over the x's of this, right? Let's do it like this: the sum for beta from 1 to n of the integral of dx, the product for alpha from 1 to n of the exponential of minus x alpha squared divided by 2 delta, divided by the square root of 2 pi delta, times x beta squared. So for a given value of beta I do the integrals over all the components of x which are not beta. Yeah?
And then I have the integral over dx beta of the Gaussian weight times x beta squared — this gives delta — and then this sum goes from 1 to n and you have the same value for all values of beta. So this is equal to the integral over d delta, omega of delta, the limit of n going to zero of 1 over n, and then I have n times the same thing: n times the integral over x of the exponential of minus x squared divided by 2 delta, divided by the square root of 2 pi delta, times x squared. So now n appears explicitly, right? It no longer appears as the number of components of a vector, it no longer appears as the upper limit of a sum, it doesn't appear anywhere else — it appears simply as a factor. And now I can take the limit: you have n divided by n, which is 1, and then this gives you the integral over delta of omega of delta times delta. Questions? Go ahead. Yeah — you see, for a given value of beta you do the integrals over the components of the vector which are not beta; each of these is a Gaussian distribution, so it gives you 1, and only the component beta remains. Then you have the integral over x beta of this Gaussian weight times x beta squared, but this is the same value for every term of the sum over beta from 1 to n. Don't worry, let's do one more step, one more step, and that's it. So — you agree with me that here I have a multi-dimensional integral in replica space, yeah? And I'm going to do the integrals over all components except the component beta, yeah? Not the component beta, because the integral over beta is different from the rest, eh?
Beta is one of the replicas, okay. So let's do it step by step. I have the following, right: the integral over dx1, dx2, ..., dx beta, ..., dxn of the exponential of minus x1 squared divided by 2 delta, divided by the square root of 2 pi delta — and this is not the whole expression, okay, I'm just focusing on that piece — times, and here we'll have, the exponential of minus x beta squared divided by 2 delta divided by the square root of 2 pi delta, up to the exponential of minus xn squared divided by 2 delta divided by the square root of 2 pi delta, and here I have x beta squared. Yeah? Are you with me? Now I do all the integrals except the integral over beta. Each of them is one, right, because this is a Gaussian measure — all the integrals are one except the integral over dx beta. So this would be the integral over dx beta of the exponential of minus x beta squared divided by 2 delta, divided by the square root of 2 pi delta, times x beta squared, right? And then, let me put it here, I have the sum over beta from 1 to n — this is the same result n times, and it's precisely delta. So this is equal to n times delta, and therefore this gives me the integral over delta of omega of delta, the limit of n going to zero of 1 over n times n times delta. Better?
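Collecting the steps of this computation in one line (same notation as on the board):

```latex
\lim_{n\to 0}\frac{1}{n}\int d\Delta\,\omega(\Delta)
  \sum_{\beta=1}^{n}\int\!\Big[\prod_{\alpha=1}^{n} dx_\alpha\,
  \frac{e^{-x_\alpha^{2}/2\Delta}}{\sqrt{2\pi\Delta}}\Big]\, x_\beta^{2}
\;=\;
\lim_{n\to 0}\frac{1}{n}\, n\!\int d\Delta\,\omega(\Delta)\,\Delta
\;=\;
\int d\Delta\,\omega(\Delta)\,\Delta .
```

The n − 1 Gaussian integrals with α ≠ β each give 1, the remaining one gives the second moment Δ, and the sum over β produces the explicit factor of n that cancels the 1/n in the replica limit.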
Very good. Questions? Go ahead. Yeah, so what happens is the following — remember, let us finish with a bang. Again, these derivations are not difficult, they are just annoying, and being annoying and being difficult are not the same thing. Let me erase here. Now, remember that this function and the other one, P hat, you can combine both of them to have just one saddle point equation for P naught. That saddle point equation would be something like this: P naught of x is equal to the exponential of minus z divided by 2 times x vector squared, plus — give me a second, that's right — d times the integral over y of P naught of y times the exponential of the scalar product of x and y, minus 1; and there is something in the denominator which I'm not going to write — actually it's the integral of what you have in the numerator, yeah? You have this saddle point equation. So what you have to do is plug the RS ansatz into this saddle point equation, rewrite it, take the limit n going to zero, and get the consistency equation for this omega of delta. When you do it, you realize that omega of delta must obey the following equation: omega of delta is equal to the sum of the series for k from 0 to infinity of the exponential of minus d, times d to the k divided by k factorial, times the multiple integral, for l from 1 to k, of d delta l, omega of delta l, times the Dirac delta of delta minus 1 divided by — and this is for our case, which is Poissonian graphs — z minus the sum for l from 1 to k of delta l. If you take the saddle point equation and plug in a generic replica symmetric ansatz, you get an expression which is much more horrible than this one, and you realize that it simplifies once the Gaussian integrals are worked out, and you get this. And if you are wondering how you get this: it's very simple — you plug this thing in here, you do a Taylor expansion of this exponential, then you plug this expression in here and you obtain this. I'll send you the notes, yeah? It's something like that. Does this remind you of something? Of the cavity
equations? So what the replica limit captures is this: you do the cavity equations for one graph, then you do it for another graph, and then you take the average over all graphs — you get the expectation value, the average, of the cavity equations. You can do this, and at the end of the day you get this, yeah? Very good. This part here actually has a very intuitive meaning for the problem we are dealing with: it is related to the fact that for an Erdos-Renyi graph, the probability that a node has a given degree is a Poisson distribution, all right? So this is the Poisson distribution: you take a given graph, you solve the cavity equations, you take another graph, and then for a given node you ask, what is the probability that this node has a certain degree? And naturally all of this expression appears — but it's a bit involved to do it that way. Well, in the cavity method, actually, the precise relationship is the following, right — but again, to prove it is a bit involved and there is no more time. So remember that in the cavity method, in terms of these variances, the variances are related to the other variances in this way; the cavity equations were these ones: the variance, this delta of i when j has been removed, is equal to 1 divided by — I'm going to do it directly for adjacency matrices — z minus the sum, for k belonging to the neighborhood of i without j, of delta of k when i has been removed. Now, what is the relationship between this and this? You do the following — there are several ways to do it, but you can do it even on the same graph. In the context of the cavity method, you define the following: you do the histogram, the probability distribution, of these cavity variances, right? So you take the Dirac delta of delta minus delta of i without j — then you do the sum — that's right, that's right — you do the sum over all i from 1 to N, and then you do the sum over all j
in the neighborhood of i; you divide by the number of neighbors of i, and you divide by N. So this object, which appears naturally in the cavity method, is just calculating, somehow, the probability of finding a given delta when the graph is very large, or when you average over many graphs — and you obtain precisely that it obeys this equation. This I didn't derive, but I can send you some notes on how you derive it. Questions? Why do I use the cavity method here? Simply because, as I mentioned at the beginning, the cavity method and the replica method are related. And you see, the problem with the replica method is that it's a very mathematical method, and you might think it doesn't have any kind of physical meaning. But since the two methods are somewhat related, what I was trying to argue is that this equation you obtain with the replica method can be derived with the cavity method, if you consider the expectation values over all possible graphs of the ensemble. It's simply that, right?
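In practice, the self-consistency equation for ω(Δ) above is usually solved by population dynamics: represent ω(Δ) by a pool of samples and repeatedly replace a sample by 1/(z − Σ_{l=1}^{k} Δ_l), with k drawn from the Poisson distribution. The sketch below is my own illustration (the parameter values are arbitrary choices, not from the lecture); for Poisson degree distributions, the full single-site marginal is built with the same Poisson(d) rule as the cavity one, which the last step uses.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4.0     # mean degree of the Poissonian (Erdos-Renyi) ensemble
eps = 0.01  # small imaginary regularizer: z = lam - i*eps

def spectral_density(lam, pop_size=2000, sweeps=40):
    """Population-dynamics estimate of the ensemble-averaged
    spectral density at lam."""
    z = lam - 1j * eps
    pop = np.full(pop_size, 1.0 / z, dtype=complex)
    for _ in range(sweeps):
        for i in range(pop_size):
            k = rng.poisson(d)  # number of cavity neighbours
            s = pop[rng.integers(0, pop_size, size=k)].sum()
            pop[i] = 1.0 / (z - s)  # Delta = 1/(z - sum of Delta_l)
    # For Poisson degrees, the full (non-cavity) marginal follows
    # the same update rule with k ~ Poisson(d).
    out = np.empty(pop_size, dtype=complex)
    for i in range(pop_size):
        k = rng.poisson(d)
        out[i] = 1.0 / (z - pop[rng.integers(0, pop_size, size=k)].sum())
    return out.imag.mean() / np.pi  # rho(lam) = <Im Delta> / pi

rho_bulk = spectral_density(1.0)    # inside the spectrum
rho_tail = spectral_density(10.0)   # far outside the spectrum
```

The imaginary part of z here plays the role of the small regularizer mentioned below: the density estimate only behaves well for a small but finite value of it.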
Now, which method are you going to use? It depends, you know; some problems are very easy to tackle with the replica method while with the cavity method you wouldn't have an idea how to do it, and in other cases the cavity method is better — but the two, in principle, should be equivalent. This one? No, this comes from the replica method: this comes from putting this order parameter in the replica symmetric ansatz and doing the derivations; I'll send you the notes. If you look at this and ask whether it makes any sense, I was trying to argue that it does make sense — that it's not a mathematical artifact — by relating it to the cavity method, because you see, this is very similar to the equations you get from the cavity method. So these self-consistency equations for this omega of delta correspond, in the cavity method, to building this density for a given graph and then averaging over many graphs — when you do the average, you obtain this. And this one here, this marginal, is the marginal that appears in the cavity method; this would be the cavity marginal in the cavity method. More questions? Because delta is a complex number, and this is the density of delta, so it's a density over the real and the imaginary parts, and you keep the imaginary part very small. And actually what you can do — very good, yeah — well, you assume the limits commute and then you check that it works, as usual. But what you can do is take the real and the imaginary parts of this, and in practice it's better to keep a small but finite value of the imaginary part. More questions? For practical reasons, when you solve these equations numerically, you keep a small value of that imaginary part. More questions? Maybe I'll ask this in the exam. Now, tell me. So, our exam: I need to think about it. I think it will be three questions, to derive things we have discussed, and maybe something we have not discussed. I need to think about it, but it will be three questions. One question — the total number of points, I
didn't get any instructions, but the total number of points is going to be 100. I think the first question 25 points, the second question 25, and the third question 45 — something like this. And in each question I ask you several things; the first question, I ask you several things, and the second question as well. What else do you want me to tell you about the exam? It will be printed on paper — that's another thing I can tell you. Yes — what can you use? Whatever you want. You cannot ask me, but you can do whatever you want: you can use the notes, you can look at my articles, you can work in groups. Can you work in groups? Can they work in groups, Mateo, for the exam? Yeah, you can work in groups if you want. Is that okay? Go ahead. Eh — no, on paper; on your tablet you can use the internet, whatever, right? So the point is the following: the exam is one hour and a half. If you didn't understand the derivations I did — again, they are annoying, but they are very simple tricks, right? — you are going to spend a lot of time looking at the notes, looking at the internet, and you will not have time to do the exam. So yes, look at whatever you want, but if you are not prepared, you will fail. More questions about the exam? No? Listen, the exam is one hour and a half, so think about it. For instance, think about the exercise we did — was it yesterday? No — when was it? Tuesday, when I asked you to do a derivation. It was one hour and a half, no? What happened? Did you manage to do that derivation in one hour and a half? More questions? So, okay: if you want, you can take notes, you can look at my articles; I think the only thing you cannot do is ask me, right? Good, let's go.