Okay, so if I'm not mistaken, this is our last lecture. Okay, I'll pretend I didn't hear anything. So we were computing the density of states for the Gaussian orthogonal ensemble in the quenched version, by averaging the replicated partition function; just to recall, I wanted to quickly finish the calculation. We landed on the saddle point equation for the conjugate density, which read, in $n$-dimensional coordinates: downstairs, $\exp\{-\frac{i\lambda_\epsilon}{2}\sum_a w_a^2 + i\hat\mu^\star(w)\}$, and upstairs the same thing times the dot product $(y\cdot w)^2$. Then, after a series of manipulations in the replica-symmetric limit, we ended up with an expression involving angular variables, and we obtained that $i\hat\mu^\star(y)$, in terms of the scalar variable $y$, which is basically the radius in $n$-dimensional coordinates, was given by $C(\lambda)\,y^2$, where $C(\lambda)$ satisfied a self-consistency equation; solving that equation, we got $C(\lambda) = \frac{1}{4}\big(i\lambda_\epsilon \pm \sqrt{2-\lambda_\epsilon^2}\big)$. Okay, this was just a recap. So let us recap what we need to do. By the Edwards-Jones formula we need to compute $-\frac{2}{\pi N}$, then a limit $\epsilon\to 0^+$, then the imaginary part of the derivative with respect to $\lambda$ of the average of the logarithm of $Z$. For this average of the logarithm of $Z$ we used the replica trick, so it becomes the limit $n\to 0$ of $\frac1n \ln$ of the average of the replicated partition function. But we know that the replicated partition function, averaged over the disorder, can be written as a functional integral over the density and the conjugate field of $e^{N S_n[\mu,\hat\mu;\lambda]}$, an action which depends on the replica index $n$, on $\mu$, $\hat\mu$, and on $\lambda$, the point at which we want to compute the spectrum. This means that in the large-$N$ limit the integral is dominated by the configurations that make the action stationary, and we computed the saddle point equations there. Okay, so what we can do now is plug this expression back inside the Edwards-Jones formula. You see that we will have a logarithm of an exponential, so the two cancel; then the factor of $N$ cancels the $1/N$ in front; and what basically remains is $\frac1n$ times the action $S_n$ evaluated at the saddle point. So we can simplify: it is $-\frac2\pi$, the limit $\epsilon\to 0^+$, then the imaginary part, and then, exchanging the limits and the derivative, $\lim_{n\to 0}\frac1n\,\partial_\lambda S_n$ evaluated at the saddle point. This trick is useful because now we can take the derivative of the action with respect to $\lambda$ explicitly. If you remember, the action consisted of three bits, but $\lambda$ appeared explicitly only in one of them, the third one, the one with the logarithm in front. The other two bits depend only on $\mu$ and $\hat\mu$, so their dependence on $\lambda$ is implicit: when you take the derivative of the first two bits with respect to $\lambda$, you should first differentiate the action with respect to $\mu$ and then differentiate $\mu$ with respect to $\lambda$, using the chain rule, basically. And all the derivatives of the action with respect to $\mu$ and $\hat\mu$ vanish, because we are at the stationary point.
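For reference, here is the chain we are about to use, written out as it would appear on the board; this is just a restatement of the formulas described above:

$$
\rho(\lambda) \;=\; -\frac{2}{\pi N}\,\lim_{\epsilon\to 0^+}\,\operatorname{Im}\,\frac{\partial}{\partial\lambda}\,\overline{\ln Z(\lambda)},
\qquad
\overline{\ln Z} \;=\; \lim_{n\to 0}\frac{1}{n}\ln \overline{Z^n},
\qquad
\overline{Z^n} \;=\; \int \mathcal{D}\mu\,\mathcal{D}\hat\mu\; e^{\,N S_n[\mu,\hat\mu;\lambda]} .
$$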
So actually the derivative with respect to $\lambda$ acts directly only on the third chunk of the action, and this is a massive simplification. I'm just writing it down for you. Can I erase this side? You're cracking jokes. Remember that the action I just rewrote for you was still written in terms of the angular integrals, right? We made an assumption on the behavior of $\mu$, but the action is still, in principle, a function of everything, radial coordinates and angular coordinates. So we will need to use our polar, or rather spherical, decomposition on the action as well. But luckily we don't have to do it on all the terms, just on the third of them. So we have $\mu(y)$, $\mu(w)$, and the $(y\cdot w)^2$ term; and then we have the third term, $\ln \int_{\mathbb{R}^n} dy\, \exp\{-\frac{i\lambda_\epsilon}{2}\sum_a y_a^2 + i\hat\mu(y)\}$. The $\lambda$ dependence, as I was saying, appears explicitly only here. Of course $\lambda$ is also hidden inside $\mu$ and $\hat\mu$, because once evaluated at the saddle point they satisfy an equation where $\lambda$ appears explicitly. But the point is that when we differentiate the action with respect to $\lambda$ through them, we get the derivative of the action with respect to $\mu$ times the derivative of $\mu$ with respect to $\lambda$, and the derivative of the action with respect to $\mu$ at the saddle point is zero, just by definition. So the derivative with respect to $\lambda$ of this object acts only here. We can therefore perform it explicitly, and we can say that $\rho_{N\to\infty}(\lambda) = -\frac2\pi \lim_{\epsilon\to 0^+}\operatorname{Im}\lim_{n\to 0}\frac1n$ times the derivative with respect to $\lambda$ of this object, which also gives this same object downstairs. And we can evaluate this integral directly in $n$-dimensional polar coordinates, to save one step. Downstairs you get $\int_0^\infty dy\, y^{n-1} \exp\{-\frac{i\lambda_\epsilon}{2} y^2 + C(\lambda)\, y^2\}$, because $y^2$ is the squared radius of the $n$-dimensional vector $y$, and $i\hat\mu^\star$ evaluated at the saddle point is $C(\lambda)\,y^2$, which we know; times some angular integrals, but these will cancel out between downstairs and upstairs. Note that here we don't have any scalar product anymore, so we don't have to keep one angle in the game: all the angular integrals downstairs and upstairs cancel. So if we now take the derivative with respect to $\lambda$, we get a factor $-\frac i2$ in front, and the radial integral upstairs is basically the same with an extra factor: it is $-\frac i2 \int_0^\infty dy\, y^{n-1}\, y^2\, \exp\{-\frac{i\lambda_\epsilon}{2} y^2 + C(\lambda)\, y^2\}$, where the extra $y^2$ comes from differentiating $-\frac{i\lambda_\epsilon}{2} y^2$ with respect to $\lambda$. So the angular integrals are cancelled out, and the only difference between upstairs and downstairs is this factor. So now we see that our formula starts to work. Why?
Because we have minus signs here that cancel out, and a factor of two that cancels out. So we can cancel this guy here and this guy here, and then we have a factor of $i$ in front. So we now need to take the imaginary part of $i$ times something, which we can convert into the real part of that something. So we can simplify and say that this is equal to $\frac1\pi \lim_{\epsilon\to 0^+} \operatorname{Re}$ of this object here. But now, you see, these two integrals we can perform exactly: there is nothing unknown here, it is an integral of an exponential of $-a y^2$ times a power, so these integrals are of the gamma type. I leave it to you as an exercise to compute the integral upstairs and the integral downstairs (a sketch is given below). You just simplify, and what you get is a simple function. This simple function will be little $n$ times something, so this little $n$ cancels with the $\frac1n$ and you get a perfectly defined replica limit. So all you have to do is compute these integrals, simplify them, and observe that the leading term for $n\to 0$ is proportional to $n$. If you do all of this, the imaginary part, which has been converted into a real part, is just a function of $C(\lambda)$ and $\lambda_\epsilon$, which are the only players in the game. What you get is $\frac{1}{-2C(\lambda) + i\lambda_\epsilon}$. So after this long tour de force, all we have to do is take the definition of $C(\lambda)$, which has very wisely been erased. So you take the definition of $C(\lambda)$ that we had determined, with $\lambda_\epsilon = \lambda - i\epsilon$, and you need to extract the real part of this complex number, which is one over a complex number; so you rationalize upstairs and downstairs and pick the real part. So let's see how this works. $C(\lambda) = \frac14\big(i\lambda_\epsilon \pm \sqrt{2-\lambda_\epsilon^2}\big)$ and $\lambda_\epsilon = \lambda - i\epsilon$. What we can do is write $C(\lambda)$ in terms of its real part, call it $p_\epsilon(\lambda)$, and its imaginary part, $q_\epsilon(\lambda)$. Using the usual lemma that I gave you many times, that $\sqrt{a+ib}$ can be written in Cartesian form, you can find, with $a = 2-\lambda^2+\epsilon^2$ and $b = 2\epsilon\lambda$,
$$p_\epsilon(\lambda) = \frac14\Big[\epsilon \pm \tfrac{1}{\sqrt 2}\sqrt{a + \sqrt{a^2+b^2}}\Big], \qquad q_\epsilon(\lambda) = \frac14\Big[\lambda \pm \tfrac{\operatorname{sign}(b)}{\sqrt 2}\sqrt{\sqrt{a^2+b^2} - a}\Big].$$
So now we have two real objects that make up the real and imaginary parts of $C(\lambda)$. All we have to do is plug them in here, rationalize the denominator, and extract the real part. If you do that, you obtain, in the limit of small $\epsilon$,
$$\operatorname{Re}\frac{1}{-2C(\lambda) + i\lambda_\epsilon} = \frac{-2\,p_\epsilon(\lambda)}{4\,p_\epsilon^2(\lambda) + \big(\lambda - 2\,q_\epsilon(\lambda)\big)^2}.$$
Okay, that's just a simple exercise. So what remains to be done is the limit $\epsilon\to 0^+$.
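For completeness, here is a sketch of the gamma-type exercise mentioned above. With $a \equiv \frac{i\lambda_\epsilon}{2} - C(\lambda)$, the radial integrals are standard:

$$\int_0^\infty dy\; y^{m}\, e^{-a y^2} = \frac{\Gamma\big(\tfrac{m+1}{2}\big)}{2\,a^{(m+1)/2}},
\qquad\text{so}\qquad
\frac1n\,\partial_\lambda \ln(\cdots) = \frac1n\Big(-\frac i2\Big)\frac{\int_0^\infty dy\, y^{n+1} e^{-a y^2}}{\int_0^\infty dy\, y^{n-1} e^{-a y^2}} = -\frac{i}{2n}\cdot\frac{n/2}{a} = -\frac i2\cdot\frac{1}{-2C(\lambda)+i\lambda_\epsilon},$$

and taking $-\frac{2}{\pi}\operatorname{Im}$ of this turns the $-\frac i2$ into $\frac1\pi \operatorname{Re}$, giving $\frac1\pi\operatorname{Re}\frac{1}{-2C(\lambda)+i\lambda_\epsilon}$, as claimed. Note the leading term is indeed proportional to $n$, which is what makes the replica limit well defined.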
So in the limit $\epsilon\to 0^+$, and for $-\sqrt 2 < \lambda < \sqrt 2$, we have that $p_{\epsilon\to 0}(\lambda)$ converges to $\pm\sqrt{2-\lambda^2}/4$ and $q_0(\lambda)$ converges to $\lambda/4$. So if you now plug these in, do the simplification picking the right sign, and include the $\frac1\pi$ prefactor, what you get is
$$\rho_{N\to\infty}(\lambda) = \frac1\pi\sqrt{2-\lambda^2} \quad\text{for } |\lambda| < \sqrt 2, \qquad \text{and } 0 \text{ otherwise}.$$
So this is basically an exceedingly complicated way to get the semicircle, but I did the calculation in full, top to bottom, just for pedagogical reasons. The Edwards-Jones formula indeed works in the quenched, that is, the correct version, provided we use the replica trick and analytically continue the result for the replicated partition function in the vicinity of $n = 0$. Now what I want to do is redo the same calculation in a less trivial case, meaning a case where we don't have another way to obtain the spectral density: the case of sparse random matrices. The structure of the calculation will be identical, but of course the result will be something non-trivial that we have no way to check unless we do numerical simulations. So can you erase here? Look guys, you're not happy? Look what you got. Four hours of my life. Brilliant, right? We have a full course of calculus one. Sorry? Yes, excellent. Two points more on Saturday. A full course of calculus one: limits, derivatives, multiple integrals, you know everything, and in the end it all boils down to this simple thing. I think you should be impressed. You're not? Okay. Now let's gear up, buckle up. What we are going to derive today is the so-called Bray-Rogers equation for the spectral density of sparse random matrices. This calculation should be relevant for the people working in graph theory, complex networks, this type of stuff. Thanks to the Edwards-Jones formula we can, in principle, compute the spectral density of the adjacency matrix or the Laplacian matrix of a random network. Okay? So the starting point is the usual formula. For some reason I wrote $\rho(x)$ instead of $\rho(\lambda)$; this matches the notation in the notes, and to avoid confusion: it is the same thing. You remember that $Z(x)$ has this multiple integral representation, with $\exp\{-\frac i2 \sum_{ij} y_i (x_\epsilon \delta_{ij} - H_{ij}) y_j\}$. So, is it clear to everyone why we need a complex number here, with a negative sign in front? Why can't we write this object as a proper Gaussian integral over real variables? Because if we didn't have the complex shift here, given that the spectrum of $H$ can take positive and negative values, we wouldn't be guaranteed that this integral converges. Instead, if we put $x_\epsilon = x - i\epsilon$, then $(-\frac i2)(-i\epsilon) = -\frac\epsilon2$, so the real part of the exponent goes as $-\frac\epsilon2 \sum_i y_i^2$, and this guarantees that the integral is convergent no matter what the spectrum of $H$ is. Good. So now we redo the same calculation, but this time we assume that $H$ is not a Gaussian matrix: $H_{ij}$ has the structure $H_{ij} = c_{ij} K_{ij}$, where $c_{ij}$ is the so-called connectivity matrix, with entries sampled independently from a distribution of this type.
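(As an aside, before developing the sparse calculation: here is a minimal numerical sanity check of the semicircle result just derived. The normalization is my assumption, chosen to match the formula above: off-diagonal variance $1/(2N)$, so the support is $[-\sqrt2,\sqrt2]$. A sketch, not part of the lecture.)

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed GOE normalization: off-diagonal variance 1/(2N) -> support [-sqrt(2), sqrt(2)]
N, samples = 1000, 20
rng = np.random.default_rng(0)
eigs = []
for _ in range(samples):
    A = rng.standard_normal((N, N))
    H = (A + A.T) / (2.0 * np.sqrt(N))   # symmetric GOE sample
    eigs.extend(np.linalg.eigvalsh(H))

lam = np.linspace(-np.sqrt(2), np.sqrt(2), 400)
plt.hist(eigs, bins=100, density=True, alpha=0.5, label="diagonalization")
plt.plot(lam, np.sqrt(2 - lam**2) / np.pi, "r-", label=r"$\sqrt{2-\lambda^2}/\pi$")
plt.legend(); plt.xlabel(r"$\lambda$")
plt.show()
```

The histogram should fall on top of the semicircle. Now, back to the sparse ensemble.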
So our matrix is symmetric, and each entry of the connectivity matrix is either 1 or 0 with some probability: $p(c_{ij}) = \big(1-\frac cN\big)\delta_{c_{ij},0} + \frac cN\,\delta_{c_{ij},1}$. So we have a connectivity matrix filled with 0s and 1s: if $c$ is very high, you get a lot of 1s and very few 0s; if $c$ is low, you get a lot of 0s and few 1s. So a matrix of this type is what? Guys working in complex networks: this will be the adjacency matrix of what type of graph? Sorry? Yeah, if you really use this probability distribution, that's exactly the definition of an Erdős–Rényi random graph: the graph whose adjacency matrix has links or no links between nodes drawn exactly from this probability. Okay? And on top of that, since it doesn't make the calculation harder, we put some weights: all the non-zero entries are modulated by some value $K_{ij}$, which is also a random variable. The $K_{ij}$ are independent random variables, but their distribution is unspecified at this stage. We can actually do the whole calculation to the end without specifying the distribution of the weights; only at the very end, if we want specific results, do we plug in the distribution of $K$. Good. So we use again the replica identity: all we have to do is replicate the partition function and take the average. This time the average is not taken with respect to Gaussian variables, but with respect to the joint distribution of the $c$'s and the $K$'s. Good. So we write the replicated partition function as follows: an external average over the distribution of the $K$ variables, which means we are just integrating over $dK_{11}\cdots dK_{NN}\,p(K_{11})\cdots p(K_{NN})$; we don't care, we just take it outside, and we perform the average over the connectivity matrix first. And then we are basically replicating the partition function, so the integral runs over $\mathbb{R}^{N n}$, one factor of $N$ components for each of the little $n$ replicas:
$$\overline{Z^n} = \Big\langle \int \prod_{i,a} dy_i^a\; \exp\Big\{-\frac i2 \sum_{ij}\sum_a y_i^a\,\big(x_\epsilon\,\delta_{ij} - H_{ij}\big)\,y_j^a\Big\}\Big\rangle.$$
So it's the same thing we had before, except that before, this object was a product of Gaussians for the diagonal and off-diagonal elements with different variances, while in this situation $p(c_{ij})$ has support only on zeros and ones, and we can try to perform this average. So we need to perform the average over the disorder first; remember, we need to exchange the order of the integrations. So we take the external average over $K$, and then we have the integral over the product of the auxiliary degrees of freedom $y$ of what? This diagonal term does not depend on the disorder, so we can pull it out: we can write $\exp\{-\frac i2 x_\epsilon \sum_i\sum_a (y_i^a)^2\}$, times the bit that depends on the disorder, $\prod_{i<j} \int dc_{ij}\; p(c_{ij})\,(\cdots)$, where $p(c_{ij}) = (1-\frac cN)\delta_{c_{ij},0} + \frac cN\delta_{c_{ij},1}$ is the distribution of the connectivity matrix. And then I need to average with this distribution the interaction term here; remember that this guy here is $c_{ij} K_{ij}$.
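(Another aside: since for this ensemble the only independent check of the final result will be numerics, here is a minimal exact-diagonalization baseline for the model just defined, $H_{ij} = c_{ij}K_{ij}$. The values of $N$ and $c$ are illustrative choices of mine, and the $\pm 1$ weight distribution anticipates the Bray-Rogers choice made later in the lecture; a sketch under those assumptions.)

```python
import numpy as np
import matplotlib.pyplot as plt

N, c, samples = 2000, 4.0, 10        # illustrative parameters, not from the lecture
rng = np.random.default_rng(0)
eigs = []
for _ in range(samples):
    # c_ij ~ Bernoulli(c/N) on the upper triangle, then symmetrized
    C = (rng.random((N, N)) < c / N)
    C = np.triu(C, 1).astype(float)
    # K_ij = +/-1 with equal probability (the Bray-Rogers choice, see below)
    K = rng.choice([-1.0, 1.0], size=(N, N))
    H = C * K
    H = H + H.T                       # symmetric sparse matrix H_ij = c_ij K_ij
    eigs.extend(np.linalg.eigvalsh(H))

plt.hist(eigs, bins=150, density=True)
plt.xlabel(r"$\lambda$"); plt.ylabel(r"$\rho(\lambda)$")
plt.title("Exact diagonalization, weighted Erdos-Renyi")
plt.show()
```

This is the kind of histogram the analytical result will eventually be compared against. Now let's carry out the average.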
So we need to write, well, let's move the average inside, because it's the average over $K$: $\exp\{i\sum_{i<j}\sum_a y_i^a\, c_{ij} K_{ij}\, y_j^a\}$, under the big external average over $K$. And now we need to perform the average over the connectivity matrix of this object. The things simplify considerably, right? Because $c_{ij}$ can only take the value zero or one. If $c_{ij}$ takes the value zero, which happens with probability $1-\frac cN$, this entire object becomes the exponential of zero, so it just becomes one; the whole thing crumbles down. If $c_{ij}$ is equal to one, which happens with probability $\frac cN$, we are left with the non-trivial bit, $\exp\{i\sum_a y_i^a K_{ij} y_j^a\}$. So we can get rid of the $c_{ij}$ easily; the average over the connectivity is easy, let's do it. We have a product of variables that are independent, so we can do the averages one by one:
$$\Big\langle \prod_{i<j}\Big[\Big(1-\frac cN\Big)\cdot 1 + \frac cN\,\exp\Big\{i\sum_a y_i^a K_{ij} y_j^a\Big\}\Big]\Big\rangle_K,$$
where the first term is the situation where $c_{ij}$ is zero and the second is the situation where $c_{ij}$ is one; the restriction to $i<j$ is taken care of by the product, so inside there is only a summation over $a$. So the average over the connectivity is extremely easy, almost trivial, just because $c_{ij}$ can only take values zero or one independently; the multiple integral just becomes a product of individual factors, one for each entry. Okay, so we can rewrite each factor in a slightly fancier way, as $1 + \frac cN\big(\exp\{\cdots\} - 1\big)$. And then, what one normally does with an eye towards the large-$N$ limit is to trade this $1 + x$ for an exponential: this guy here will be approximated as $e^x$. So let me write it down. The product over $i<j$ can be put inside the exponential as a sum, and instead of summing over $i<j$ we can sum over all $i$ and $j$ with a factor of one half. Okay? So, putting everything together, this object is
$$\exp\Big\{\frac{c}{2N}\sum_{ij}\Big(\big\langle e^{\,i K \sum_a y_i^a y_j^a}\big\rangle_K - 1\Big)\Big\},$$
where the $\frac{c}{2N}$, rather than $\frac cN$, carries the factor of one half from summing over all $i$ and $j$, and where this average is no longer an average over the joint set of the $K$'s: it is just an average over one representative sample, $p(K)$. The step from the previous line to this one uses independence: if you write out the average over the full set of $K_{ij}$, you will see that only one of the $K_{ij}$ actually contributes to each term, and all the others integrate to one. And this is the root of the fact that we can keep this average over $K$ until the end: we don't have to do anything now, we can keep this average over $p(K)$ until the end. Okay, so we have this object here, which has a strange structure.
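To spell out that last step, with the same example entry used in a moment ($K_{23}$): factorization over the independent $K_{ij}$ gives

$$\Big\langle e^{\,i K_{23}\sum_a y_2^a y_3^a}\Big\rangle_{\{K_{ij}\}} = \int \prod_{i<j} dK_{ij}\,p(K_{ij})\; e^{\,i K_{23}\sum_a y_2^a y_3^a} = \int dK\,p(K)\; e^{\,i K \sum_a y_2^a y_3^a} = \big\langle e^{\,i K \sum_a y_2^a y_3^a}\big\rangle_K,$$

since every factor with $(i,j)\neq(2,3)$ integrates to one.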
It is an exponential of the average of an exponential, but still, it's okay. So this is the stuff that goes in here. Just try to fill in the gaps between these two steps; if it is unclear, I will still be around. Now what remains to do is this integral, and we use the same trick as last time: we introduce a density of replicas, $\mu(y)$. And here you see that this step is crucial. In the Gaussian case we had alternatives, we could use Gaussian linearization or other tricks; here this is basically the only thing we can do, because we don't have a Gaussian structure: the structure here is more complicated, it's an exponential of an exponential. Good. So now we do the same trick as before. Yeah, that's the whole point; that's why I asked you to fill in the gaps between the two. What happens is that since you have the summation over $i,j$ in front, inside you have a situation like an average over the whole set of $K_{ij}$ of something that depends on one $K_{ij}$ only, for example $K_{23}$. So when you perform this average you have $dK_{11}\cdots dK_{NN}\,p(K_{11})\cdots p(K_{NN})$, but inside you have the exponential of something that depends on one specific $K$, because the sum is outside. So all these integrals are just one, except the one you are averaging over; and since they are all identically distributed, you can just take an average over a single pdf. Yes, the $K$'s are taken independently, but from a distribution that is arbitrary for the moment. The $c_{ij}$ are independent; the $K_{ij}$ are independent, from a different distribution. But while we need to specify the distribution of the $c$'s, we don't need to specify the distribution of the $K$'s; we keep it like this forever. Is that clear? So the integral that we have to do is
$$\int \prod_{i,a} dy_i^a\; \exp\Big\{-\frac i2 x_\epsilon \sum_i \sum_a (y_i^a)^2\Big\}\; \exp\Big\{\frac{c}{2N}\sum_{ij}\Big(\big\langle e^{\,i K\sum_a y_i^a y_j^a}\big\rangle_K - 1\Big)\Big\}.$$
To make progress we again introduce our density of replicas, which will help us represent this object in a cleaner form. Our $y$ is a vector of replicas, and we have our usual friend, the functional representation of unity, where we introduce the same conjugate field as before: $\mu(y) = \frac1N\sum_i \prod_a \delta(y^a - y_i^a)$. And then we can represent the second bit, this integral, the one that is different from the Gaussian case. We get, in analogy with what we had before, that the replicated partition function has this form. So there is the term $-iN\int dy\,\mu(y)\hat\mu(y)$; this term is always there, it comes from the functional representation of this object. Then we get a term that comes from the interaction: here we have a $\frac1N$ from each density, and introducing this functional trick the $N$ rises upstairs, as it did in the Gaussian case, so the result is $+\frac{cN}{2}$, and not $\frac{c}{2N}$, times
$$\int dy\, dy'\;\mu(y)\,\mu(y')\,\Big(\big\langle e^{\,i K\sum_a y^a y'^a}\big\rangle_K - 1\Big).$$
So this structure is exactly identical to the one we had before for the Gaussian case; the only thing that changes is this kernel here. You remember that in the Gaussian case we had just the scalar product squared here; here we have a more complicated function of the scalar product between $y$ and $y'$. That's the only difference with respect to what we had before.
It's a strong difference, but you can appreciate that we are basically conducting the calculations in parallel. Then what is missing? There is always a third term: this multiple integral has not disappeared, clearly. The integral is still there, but we can perform it; actually, we can save time because we have already done it. It is the term $\exp\{i\sum_i \int dy\,\hat\mu(y)\prod_a \delta(y^a - y_i^a)\}$; remember, this $y$ is a vector. Again we realize that since we have the exponential of a summation over $i$, where $i$ runs from 1 to $N$, this is just an $N$-fold copy of a single integral, so we can write the single integral raised to the power $N$, as we did before. So, in summary, this object here can be written as
$$\Big[\int dy\;\exp\Big\{-\frac i2 x_\epsilon \sum_a (y^a)^2 + i\hat\mu(y)\Big\}\Big]^N,$$
where this is a little-$n$-fold integral. This calculation is identical to what we did in the Gaussian case, so I'm keeping it short. And this object is nice because we can write it as the exponential of $N$ times the log of whatever is inside. So again, if you think about it, the action will still contain three terms: this term here, which is identical to the GOE case; this term, which has the same form but is different in the details; and this term, which is identical. So if we understood what we did for the GOE case, it is clear what we have to do. Everything in this exponential is now proportional to $N$; we just have a different action, corresponding to the sparse matrix case. And the good thing is that the average over $K$ is still there, so we don't have to specify it. Good. So what is the action? We can write it down in a neater form. In summary, $\overline{Z^n}(x) = \int \mathcal D\mu\,\mathcal D\hat\mu\; e^{N S_n[\mu,\hat\mu;x]}$, where $S_n$ has three terms:
$$S_n[\mu,\hat\mu;x] = \frac c2 \int dy\,dy'\,\mu(y)\,\mu(y')\Big(\big\langle e^{\,iK\sum_a y^a y'^a}\big\rangle_K - 1\Big) \;-\; i\int dy\,\mu(y)\hat\mu(y) \;+\; \ln \int dy\, e^{-\frac i2 x_\epsilon \sum_a (y^a)^2 + i\hat\mu(y)},$$
where $c$ is the connectivity, remember. So now we can write the saddle point equations. Actually, if you look at your notes for the GOE, you can probably already guess what the final saddle point equation will be. We will have two saddle point equations, as before, and we can combine them together as we did before, so I'll just write down the final equation for you. Let's change notation a bit to make things lighter: we define $i\hat\mu^\star(y) \equiv c\,g(y)$; we take this as the definition of the new function $g(y)$. The final saddle point equation is, as usual, $g(y)$ equal to a ratio of integrals, as we had before. Downstairs we have $\int dy'\,\exp\{-\frac i2 x_\epsilon \sum_a (y'^a)^2 + c\,g(y')\}$; this bit is exactly identical to the one we had before for the GOE, nothing has changed. Upstairs, you can now predict what the result will be: it will be the same integral as here, times, where before we had the scalar product to the power 2, here we will have something different.
The something different is just this object here, right? We are doing exactly the same thing, differentiating. So what we get upstairs is $\int dy'\, f(y\cdot y')\,\exp\{-\frac i2 x_\epsilon \sum_a (y'^a)^2 + c\,g(y')\}$, exactly the same exponential, times a certain function of the dot product between $y$ and $y'$. And what is this function? $f(z) = \langle e^{\,iKz}\rangle_K - 1$, and this $z$ is the dot product between $y$ and $y'$. So this is an integral equation for $g(y)$ in the $n$-dimensional replica space, which is exactly the analogue of the GOE equation, where here we just had $(y\cdot y')^2$. And note that the average over $K$ is still, in principle, unspecified: we don't need to know what the distribution of $K$ is in order to get to this point. So now we take a five-minute break, and then we move on from here by choosing a specific distribution for $K$. Just shout. We have some victims. Okay. Okay. Just a couple of historical notes. I asked Erika to upload three papers to today's folder. This first one is the 1988 paper by Rogers and Bray; this is the paper where the formula was first derived. So in essence, this object here we can call the Bray-Rogers equation, even though they derived it only for a specific choice of the distribution of $K$; we will get exactly to that point. You can read this paper, it's a nice, interesting document; I was in primary school, hating math, while they were producing this beautiful piece of work. Then there is another, more recent paper by my dear friend Reimer Kühn, on the spectra of sparse random matrices. He basically redid the calculation by Bray and Rogers and got exactly to this equation, but then he solved it using another method, which is probably more efficient. In other words, I just wanted to point out that equation 13 of his paper is basically the one we derive here. So at this point you have two choices, basically. Well, first you need to specify which distribution for $K$ you want; this is as far as you can get without specifying the distribution of $K$. But once you have specified it, you have two choices: either you follow Rogers and Bray, or you follow Kühn. I cannot do both, so I chose to do the Rogers and Bray method. The two are equivalent, but they lead to completely different expressions and completely different numerical methods. In the end, you can compare the results with numerical simulations and find good agreement. For example, in the paper by Kühn he does numerical simulations; you can find them on pages 13 and 14. The choice of colors is not the best, but basically you have a numerical diagonalization of adjacency matrices, so the histogram of eigenvalues, and on top of that the numerical solution of the analytical result, which is the Bray-Rogers equation for the specific choice of the bond distribution, okay? So this formalism works. We can go a bit further by specifying the distribution of $K$, and then in the end you get some sort of integral equation that you need to solve numerically; the numerical solution of the analytical result matches exact diagonalization, okay?
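So, to have everything in one place before specializing the weight distribution, the saddle point equation derived above reads:

$$g(y) = \frac{\displaystyle\int dy'\; f(y\cdot y')\;\exp\Big\{-\frac i2 x_\epsilon \sum_a (y'^a)^2 + c\,g(y')\Big\}}{\displaystyle\int dy'\;\exp\Big\{-\frac i2 x_\epsilon \sum_a (y'^a)^2 + c\,g(y')\Big\}},
\qquad f(z) = \big\langle e^{\,iKz}\big\rangle_K - 1,
\qquad i\hat\mu^\star(y) \equiv c\,g(y).$$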
So those are the two comparisons. Okay, so we can go on from here. First of all, now you know how to proceed, right? We change to $n$-dimensional polar coordinates, and we assume a replica-symmetric solution; we do exactly the same thing as we did before. So in spherical $n$-dimensional coordinates, and assuming replica symmetry, we have that $g(y)$, where $y$ is now the radius, is equal to
$$g(y) = \frac{\displaystyle\int_0^\infty dr\, r^{n-1} \int_0^\pi d\phi\, (\sin\phi)^{n-2}\; e^{-\frac i2 x_\epsilon r^2 + c\,g(r)}\; f(y\,r\cos\phi)}{\displaystyle\int_0^\infty dr\, r^{n-1} \int_0^\pi d\phi\, (\sin\phi)^{n-2}\; e^{-\frac i2 x_\epsilon r^2 + c\,g(r)}},$$
where one angular integral survives, as before: all the angular integrals except one have cancelled out between upstairs and downstairs, and here we have this function $f$ that depends on the average over the distribution of the bonds. So at this stage we cannot proceed any further unless we specify what this $f$ is, okay? We are somehow stuck, because we simply cannot perform the angular integrals. Now, I told you that Bray and Rogers chose a very specific distribution for the bonds and then carried out the integration. The distribution they chose for the $H_{ij}$, so for the full matrix, was a matrix of this type: with some probability the entries are zero, and with symmetric probabilities the entries are equal to plus one or minus one, okay? So this is a sort of sparse matrix with $\pm 1$ entries which, statistically, on average, has a symmetric distribution: you have zero, plus one, and minus one, okay? So given this distribution, and the distribution of the connectivity as before, you should find that the distribution of the $K_{ij}$ is just $p(K_{ij}) = \frac12\delta_{K_{ij},1} + \frac12\delta_{K_{ij},-1}$, right? If you use the definition that I gave you before, $H_{ij} = c_{ij} K_{ij}$, and you pick the $K$'s from this distribution, then the definition I gave you is exactly identical to this one: all your entries are zeros, and the entries that are non-zero are plus one or minus one with equal probability. So with this choice we can compute our function $f(z)$: it is minus one plus the average over this distribution of $K$, that is,
$$f(z) = -1 + \frac12\sum_{K=\pm 1} e^{iKz} = -1 + \frac12\big(e^{iz} + e^{-iz}\big),$$
which I wrote as minus one plus the hyperbolic cosine of $z$. So now we have a specific form for our $f(z)$, where we have taken this average over the distribution of $K$. Now we plug this expression inside here and perform the integrals. Should it be a normal cosine? Yes. Yes, you're right: $\frac12(e^{iz} + e^{-iz}) = \cos z$, so $f(z) = \cos z - 1$. I think Rogers and Bray took the initial integral without the $i$ in front, which is where a hyperbolic cosine would come from; but I think I used the correct expression, with the ordinary cosine, in performing these integrals. Okay, thank you. Let me check. So what we have here is $g(y)$: the integral downstairs we can do easily, we have the result from the GOE calculation; it's a ratio of gamma functions, right? Upstairs, instead, we will need to perform the integration of basically this.
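The angular integral that appears upstairs is the classical Poisson representation of the Bessel function; this standard identity (it can be found, for instance, in Abramowitz and Stegun, 9.1.20) is what does the work in the next step:

$$\int_0^\pi d\phi\,(\sin\phi)^{n-2}\cos(z\cos\phi) = \sqrt{\pi}\;\Gamma\Big(\frac{n-1}{2}\Big)\Big(\frac 2z\Big)^{\frac{n-2}{2}}\, J_{\frac{n-2}{2}}(z),$$

which is why, once the cosine convention is fixed, ordinary Bessel functions $J_k$ (rather than modified ones) appear in what follows.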
Upstairs you will have $-1 + \cos(y\,r\cos\phi)$. Yeah, I think it's fine. Yes. So upstairs you use an expression for the angular integral in terms of Bessel functions, and downstairs an expression in terms of gamma functions. So there is a long expression here, with some powers; well, the exact expression is not particularly appealing, you are just performing this integral, and what comes out is a certain combination of Bessel functions, where $J_k$ denotes the Bessel function of order $k$. You stick this expression in here, you compute this integral, which we had in the notes, and you simplify some terms in front. And then what you get in the end is this result. Downstairs you do the same trick as before: an integration by parts, to avoid the singularity of the $1/r$ type. So you integrate by parts and you get the derivative of the object we call $G(r)$, with the same definition as last time; upstairs you have the same thing, $r^{n-1} G(r)$, times a certain combination of Bessel functions. If I'm not mistaken this should be right; well, the result came out right, so it should be okay. All I did is perform this integral here and this integral here, simplify a few constants, and take an integration by parts in the denominator. Then you know what we have to do: we take the replica limit, $n\to 0$. And in the limit $n\to 0$ something nice happens, because this object has a finite limit. This piece disappears, and what remains downstairs is just a total derivative, so it is $G(\infty) - G(0)$, which, if you look at the definition, goes to minus one. And the good thing is that this entire prefactor also has a nice limit; it's a complicated limit, but you can get it: $n\,\Gamma(n/2)$ tends to 2 as $n\to 0$. And all the rest, the $r^{n-1}$ times this Bessel beast, goes to $y\,J_1(r y)$. So, collecting all the terms, the final result for $g(y)$ is this integral equation:
$$g(y) = y \int_0^\infty dr\; J_1(r y)\; \exp\Big\{-\frac i2\, x_\epsilon\, r^2 + c\, g(r)\Big\},$$
which is basically the equation derived by Rogers and Bray. They use a different convention for the normalization of the integrals, but essentially this is equation 18 of Bray and Rogers. So this is an integral equation for the auxiliary function $g$: this function appears both inside the integral and outside, and you need to solve the equation for each value of $x$, at each point on the real axis where you want to compute the density. The density is then computed using the Edwards-Jones formula, which requires this function $g$. So clearly, this integral equation does not, so far, have an exact, explicit solution; we don't know an explicit solution for it. We can only solve it numerically, plug the numerical solution into the Edwards-Jones formula, and extract the density.
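Since writing such a solver is the natural next step (and the lecturer suggests exactly this below), here is a minimal sketch of a damped fixed-point iteration for this equation on a radial grid. Everything here is an assumption of convenience: the function name, the cutoff `R`, grid size, damping, and the naive rectangle-rule quadrature (the integrand oscillates, so a serious solver needs a better treatment of the tail), and convergence of the plain iteration is not guaranteed.

```python
import numpy as np
from scipy.special import j1  # ordinary Bessel function J_1

def bray_rogers_g(x, c=4.0, eps=5e-3, R=15.0, M=1200, iters=300, damp=0.3):
    """Damped fixed-point iteration for
        g(y) = y * int_0^inf dr J_1(r y) exp(-(i/2) x_eps r^2 + c g(r)),
    with x_eps = x - i*eps, discretized by a rectangle rule on (0, R].
    A sketch under the stated assumptions, not production code."""
    r = np.linspace(R / M, R, M)      # radial grid (avoid r = 0)
    dr = r[1] - r[0]
    x_eps = x - 1j * eps
    kernel = j1(np.outer(r, r))       # kernel[i, j] = J_1(r_i * r_j)
    g = np.zeros(M, dtype=complex)    # initial guess g = 0
    for _ in range(iters):
        w = np.exp(-0.5j * x_eps * r**2 + c * g) * dr
        g = damp * (r * (kernel @ w)) + (1 - damp) * g
    return r, g

# example: solve at one point of the spectrum
r, g = bray_rogers_g(x=1.0)
```

The converged $g$ then feeds into the Edwards-Jones formula exactly as in the GOE case; note that each value of $x$ on the real axis requires its own solve, which is what makes the comparison with the exact-diagonalization baseline above computationally non-trivial.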
So the situation is much more complicated than the Gaussian case, where we could crack the problem completely. If we followed the other route, the one by Kühn, we would still get a final expression that has to be evaluated numerically, in that case with the population dynamics algorithm. So neither of the two routes gives you a fully explicit result; you still need to work out numerically what this object is. So actually, not exactly for this model, characterized by this distribution of $K$, but for a similar model, the numerical study was undertaken by Broderix and the group of Annette Zippelius. Of course, you need to know where to look, because the paper is called "Stress relaxation of near-critical gels"; you would never guess that there is a numerical solution of the Bray-Rogers equation in there. So I uploaded this one as well. You see, on page nine or so, there is basically a longer version of the Bray-Rogers equation for a specific distribution of $K$, and above it a numerical comparison between exact diagonalization and the numerical solution of the integral equation; and they found a perfect matching. In the paper by Kühn, instead, you find the comparison between numerical diagonalization and the result of the population dynamics algorithm; but the saddle point equation is the same for both. So in principle, by following this course, you should be able to read and understand both papers and reproduce all the calculations. Then you should sit down and try to write a code to solve this integral equation numerically, which is perhaps not the most exciting task, but at least, you know, having an equation is better than having none. Right. Okay, so to be honest, I'm done with what I wanted to tell you. I think this completes the picture. You should now be able to do all the calculations by yourself, complete the steps, and be able to read and appreciate all the papers on the subject. So once again, it was my privilege to have you here. I'll still be around until Saturday, so if you have any questions or any doubts, please come and talk to me. And, well, good luck with the exam.