Good morning, everybody, and thanks for coming. Today I want to build on the Edwards-Jones formula we introduced yesterday. Let me first recall it: the Edwards-Jones formula connects the average density of eigenvalues of a generic random matrix model with real eigenvalues to an average over the disorder. It is a long expression, but in essence it amounts to the Sokhotski-Plemelj formula together with a suitable representation of a sum of complex logarithms in terms of an N-fold integral. Let me write it in its full splendor. We have a random matrix x, which we assume to be real symmetric; a vector of dynamical variables y; and lambda_epsilon = lambda - i epsilon, where lambda is the location at which we want the spectral density. The angular brackets denote the average over the disorder, meaning the average over the joint pdf of the entries of our matrix x, and this joint pdf is the only input of the formula. Yesterday we performed this calculation for the GOE, the ensemble of Gaussian real symmetric matrices, in the annealed approximation. The annealed approximation means, in essence, that we move the logarithm outside of the average, so that we take the logarithm of a single multiple integral with the y and x variables on the same footing. This is clearly an approximation, but we showed yesterday that, perhaps surprisingly, we landed on the correct solution: in the limit N to infinity we recovered the semicircle law. Now I want to show you one possible way to tackle this problem in the quenched version, that is, using exactly this formula without dirty tricks.
Well, with a series of dirty tricks, but less dirty than the ones we saw yesterday. So we do the calculation again for the GOE, and we expect to retrieve the semicircle again, but this time with a full-fledged quenched calculation. Recall that the main issue is the following: looking at this formula, what we are after is a multiple integral over the joint pdf of the entries times the logarithm of another multiple integral. If we want to perform this integration, we need to find a way, this time a legitimate way so to speak, to get rid of the logarithm, which sits right in the way. If we carried out first the integration over y, then the logarithm, and then the integration over x, we would just be running the Edwards-Jones formula backwards, and we would obtain the trivial identity rho_N(lambda) = rho_N(lambda). So the only way forward is to try to exchange the order of integration and perform the average over the disorder first. Clearly we cannot do that, or at least not in an obvious way, because there is a logarithm in the way. Is the setting clear? For the GOE, I recall that the joint pdf of the entries is a product of Gaussians, with different variances for the diagonal and off-diagonal entries; I hope you all recall why we need different variances there. On top of that, as we did yesterday, I rescaled the variances of the diagonal and off-diagonal elements with N, because we expect a good large-N limit: we do not want the edges of the semicircle to grow with N. So, what is the idea to get rid of the logarithm? It is a beautiful and elegant construction, full of mathematical problems. Can I remove this?
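Before diving into the quenched calculation, here is a quick numerical sanity check (a sketch, not part of the lecture) that the N-rescaled variances indeed keep the spectrum on a fixed support. The normalization below, off-diagonal variance 1/N and diagonal variance 2/N, is one standard GOE convention and puts the semicircle on [-2, 2]; the lecture's exact constants may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000

# Build a GOE matrix with N-rescaled variances: symmetrizing A + A^T
# gives off-diagonal variance 1/N and diagonal variance 2/N.
A = rng.normal(0.0, np.sqrt(1.0 / (2 * N)), size=(N, N))
X = A + A.T

eigs = np.linalg.eigvalsh(X)

# Semicircle law on [-2, 2]: second moment is 1, and the spectral
# edges stay at +/- 2 instead of growing with N -- which is exactly
# the point of rescaling the variances.
print("largest |eigenvalue|:", np.max(np.abs(eigs)))   # close to 2
print("mean of lambda^2    :", np.mean(eigs**2))       # close to 1
```

Increasing N tightens both numbers toward their limiting values, which is a useful way to convince yourself of the scaling before doing any analytics.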
It's too late anyway. Okay, so the idea to get rid of the logarithm in between is the so-called replica identity, sometimes referred to as the replica trick. It is based on the following identity. What we want is the average of log z(lambda), and we write this object as the limit n to zero of (1/n) log of the average of z^n(lambda). How do we prove this? You write z^n(lambda) as the exponential of n log z(lambda), expand this exponential as 1 + n log z(lambda) + O(n^2), and average term by term. You get that the average of z^n(lambda) is 1 plus n times the average of log z(lambda), plus O(n^2). Then you take the logarithm on both sides: the logarithm of the average of z^n(lambda) is the log of one plus something small, and log(1 + something small) is, asymptotically for n to zero, that same something small. So this object is n times the average of log z(lambda); dividing by n and sending n to zero, which is the limit in which we kill the extra terms, you get the identity. So far everything is sort of okay; we are still gearing up, and don't worry, the dirt will come, but for now we are on clean ground. Now stare carefully at this identity: you will encounter it many times, maybe even this afternoon or in future courses. What is the advantage of this formula? On the left-hand side we have the average of a logarithm, which we don't know how to perform. On the right-hand side, the average has been moved inside the argument of the logarithm.
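The replica identity is easy to illustrate numerically on a toy partition function (a sketch, not the matrix model itself): take Z = exp(G) with G Gaussian, so that the average of log Z is known exactly, and watch (1/n) log of the average of Z^n approach it as n goes to zero.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.5, 0.3              # toy model: Z = exp(G), G ~ N(mu, sigma^2)
Z = np.exp(rng.normal(mu, sigma, size=200_000))

lhs = np.log(Z).mean()            # <log Z>, exactly mu for this toy model

# For this toy model (1/n) log <Z^n> = mu + n sigma^2 / 2, so it
# converges linearly in n to <log Z> as n -> 0.
for n in (1.0, 0.5, 0.1, 0.01):
    rhs = np.log((Z**n).mean()) / n
    print(f"n = {n:5.2f}   (1/n) log <Z^n> = {rhs:.4f}")
print(f"<log Z> = {lhs:.4f}")
```

The gap closes like n sigma^2 / 2, which is precisely the O(n^2) term in the expansion above divided by n.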
So the logarithm is somehow out of the way: it is not inside the average anymore. And on top of that, z(lambda) is raised to some power n. Now, we want to evaluate this expression in the vicinity of n to zero, for real values of n close to zero. But suppose for a moment that this limit were not there. Then we could interpret this object, for small integer n, as z(lambda) replicated n times. That is the reason why this trick is called the replica identity: we are replicating z(lambda) n times. And if little n (not capital N, which is the size of the matrix) is an integer, then we are replicating a multiple integral little n times. But a multiple integral replicated little n times has a very nice property: it is still a multiple integral, just a larger one. The reason is that if you have (integral dx phi(x)) squared, you can write it as the double integral dx dy phi(x) phi(y). So an integral raised to some power is another integral, just in a larger number of variables. And this is the core of the method: we got rid of the logarithm, and we get a replicated version of our multiple integral. Let's see how this works in practice. Of course, here the mathematical problems start to appear, because if we promote little n to be an integer, which is what we need to do, then we will need to make sure that this limit is well defined: the smallest integer n we have is one, which is very far from zero. So we are sitting at n = 1, but somehow we need to analytically continue the result to the vicinity of n = 0 at the end of the calculation. If this can be done, then we might be on safe ground. How does this work in practice? All we have to do is replicate z(lambda), which is this object here, little n times. Let's do it.
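The statement that a power of an integral is itself a bigger integral can be checked in a couple of lines (a sketch, with an arbitrary test function phi):

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
phi = np.exp(-x**2)                 # any integrable test function

I = phi.sum() * dx                  # integral dx phi(x)

# (integral)^2 rewritten as a genuine double integral dx dy phi(x) phi(y):
# the outer product gives phi(x_i) * phi(y_j) on the 2d grid.
I2 = np.outer(phi, phi).sum() * dx * dx

print(I**2, I2)                     # the two agree to rounding error
```

The same factorization with n factors instead of two is exactly what turns the replicated partition function into a single integral over N times n variables.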
So, what is the average of z^n(lambda)? We have the average over the disorder, that is, over the entries of our random matrix, which are Gaussian random variables; that is the averaging part. And now we need to include z(lambda) replicated little n times. Over what does this integral run? If the original integral runs over R^N, the replicated one runs over R^(N times n), because it is the integral we had before replicated little n times. The vector y becomes a replicated vector: let's call the replica indices a, so we have a vector y which now carries an additional replica index a. All I am doing is applying the rule above, except that instead of two copies I have n: we have d-xi_1 through d-xi_n and phi(xi_1) through phi(xi_n), where I call the variables xi so we don't get into trouble with names. Then I need to replicate the quadratic form and write it explicitly: it reads exp(-i/2 sum over i, j from 1 to N, sum over replica indices a from 1 to n, of y_i^a (lambda_epsilon delta_ij - x_ij) y_j^a). This object is just the extended version of the product matrix times vector times vector, replicated little n times. So the logarithm has disappeared; our multiple integral, which initially ran over capital N variables, now runs over N times n variables; and at this stage little n is an integer. Do we all agree? Now, what is the advantage of this expression with respect to the initial one? That is the whole point of why we are doing this.
Yes, we took the logarithm out, but this was just instrumental to doing what? We wanted to take the logarithm out because we wanted to swap the order of the integrations and do the integral over the disorder, the average over the disorder, first. And now we can do it: we can swap this integral and this integral and do the integral over x first, which we couldn't do before. If you understand this, then everything goes smoothly; it is important that it is absolutely crystal clear to you why we are doing this. The logarithm is out of the way, so now we can swap the order of integration and do the integral over x first. If we kept doing the y integral first, that would be exactly identical to what we had before, and we would gain nothing from the replica trick; but now we can swap the integrals because the logarithm is no longer in the way. Clear? Well, I hope it is. So what we do now is exchange the order of integrations and do the average over the disorder first. The replicated partition function becomes: the outer integral, product over a from 1 to n of dy^a, where y^a is a vector which carries the additional index a; then a piece that does not depend on the disorder, the diagonal bit, which we can pull out: exp(-i/2 lambda_epsilon sum over i from 1 to N, sum over a from 1 to n, of (y_i^a)^2). This is just the diagonal bit. And the remaining bit depends on the disorder.
So we need first to include the integral over x. We have an integral over the diagonal entries with their Gaussian distribution, together with the diagonal term corresponding to this piece, which is exp(+i/2 sum over i of x_ii sum over a of (y_i^a)^2): the minus times the minus gives a plus i/2, and x_ii multiplies y_i^a times y_i^a, which is (y_i^a)^2. Then we have the off-diagonal bit: the integral over i < j of dx_ij times sqrt(N/pi), times exp(-N sum over i < j of x_ij^2), which is this term here, times the remaining off-diagonal piece, exp(i sum over i < j, sum over a from 1 to n, of y_i^a x_ij y_j^a). So now we are making progress. Why? Because what type of integrals do we have here? Gaussian integrals: they don't have mean zero, but they are still Gaussian, both here and here. All we have to do now is perform these Gaussian integrals; the results will depend on the vectors y, and then we will have to perform the integration over y. Good. So we can perform the Gaussian integrals using the identity I showed you yesterday: the integral from minus infinity to plus infinity of dq of exp(-alpha q^2 + i gamma q) is proportional to exp(-gamma^2 / (4 alpha)). From now on I will not keep track of all the proportionality constants.
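The Gaussian identity being used, integral dq exp(-alpha q^2 + i gamma q) = sqrt(pi/alpha) exp(-gamma^2 / (4 alpha)), is easy to verify numerically (a sketch; the integrand is complex, but the oscillations integrate out and leave a real answer):

```python
import numpy as np

alpha, gam = 1.5, 0.7
q = np.linspace(-30.0, 30.0, 200_001)
dq = q[1] - q[0]

# complex Gaussian integral: int dq exp(-alpha q^2 + i gamma q)
numeric = np.sum(np.exp(-alpha * q**2 + 1j * gam * q)) * dq

# closed form: sqrt(pi/alpha) * exp(-gamma^2 / (4 alpha))
exact = np.sqrt(np.pi / alpha) * np.exp(-gam**2 / (4 * alpha))

print(numeric)   # imaginary part is ~0 by symmetry of the integrand
print(exact)
```

The key structural point for the lecture is visible in the closed form: the source gamma enters only through exp(-gamma^2 / (4 alpha)), i.e. "exponential of minus something squared", which is where the quartic terms in the y's will come from.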
Anyway, we will eventually send N to infinity, so these proportionality constants are not important. This type of structure is exactly the object we have here and here, and we just have multiple copies of Gaussian integrals. So we can use this identity repeatedly. With what alpha? It is alpha = N/2 for the diagonal integrals, or alpha = N if we are using the identity to kill the off-diagonal ones. And what is gamma? For the diagonal integrals, gamma = (1/2) sum over a of (y_i^a)^2; for the off-diagonal ones, gamma = sum over a of y_i^a y_j^a. I am just reading off the values of alpha and gamma so that each of these integrals can be performed; these are just capital N copies (or N(N-1)/2 copies) of the same Gaussian integral. Do we agree? Looking at the identity, what do we expect to see? The result of these integrations will involve terms of the type exponential of minus a sum over replicas, raised to the power two. So, applying these two formulas and just giving you the result: the replicated partition function, the average of z^n(lambda), equals an integral over R^(N times n) of the product over a of dy^a, times exp(-i lambda_epsilon / 2 sum over i from 1 to N, sum over a from 1 to n, of (y_i^a)^2), which was the term we pulled out before, times the result of the N-fold Gaussian integrals. This result has a first, diagonal term, exp(-1/(8N) sum over i of (sum over a of (y_i^a)^2)^2), where the 8N comes from the 4 alpha = 2N and the (1/2)^2 in gamma squared, and then an off-diagonal term, exp(-1/(4N) sum over i < j of
(sum over a of y_i^a y_j^a)^2). So this is the gamma for the first type of integrals, and this is the gamma for the second type, both raised to the square; we have a summation over i in one case and over i < j in the other, just because these are different copies of the same Gaussian integral. Now, what we would like to do next: well, this is not really an equality but only a rough one, because I am discarding all the constants in front. If you look at this expression, we would now like to perform the integration over the y variables, but these integrations are nasty: due to these squares, we are coupling integration variables that belong to different sites i and j. So we need to find a way to decouple the sites, so that we can carry out this integration. First of all, let me simplify: these two terms can be grouped into a single term, -1/(8N) sum over all pairs i, j of (sum over a of y_i^a y_j^a)^2, since the i = j terms reproduce the diagonal piece and each i < j pair is counted twice, which matches the 1/(4N). So the two terms are lumped together into a single one. To proceed further I will introduce a trick. This is not the standard way people do the decoupling, but I chose this route because it is somehow more modern, and it has applications beyond this case.
So I thought it would be good to show it to you. We introduce the following normalized density, which we call mu. mu is a function of a vector y-arrow, where y-arrow stands for the vector (y^1, y^2, ..., y^n): this vector has size equal to the number of replicas, little n, and I will denote vectors of this form with an arrow, while vectors of size capital N are in boldface. Let me first give the definition and then try to explain why this object is useful: mu(y-arrow) = (1/N) sum over i from 1 to N of the product over a from 1 to n of delta(y^a - y_i^a). This structure closely resembles the standard structure of a density, for example the density of eigenvalues: (1/N) sum over i, and then the delta terms. But this time the delta is replicated little n times. Why am I doing this? Because if we introduce this definition, then, let me write it here, the coupling term can be written in a quite convenient form: -1/(8N) sum over i, j of (sum over a of y_i^a y_j^a)^2, the term to be processed, can be written as -N/8 times the double integral d y-arrow d w-arrow of mu(y-arrow) mu(w-arrow) times (sum over a of y^a w^a)^2. Here y^a is an object with only the replica index, corresponding to the entries of the integration vector, while on the left the same object is attached to a site i over which we sum: over a we take the product, over i we sum. To see why this definition does the job, I ask you to check it yourself: plug the definition in here.
That is, substitute mu(y-arrow) and mu(w-arrow) in here, and use the property of the delta function to show that the right-hand side comes out exactly as the left-hand side, which is the object we have in the exponential. So we can trade this summation for an integral over these densities. You can already see it working: each mu carries a factor 1/N, so you get 1/N from here and 1/N from here, hence 1/N^2, which multiplied by N reproduces the factor 1/N that you have in front of the sum. [Answering a question from the audience:] No, it is actually not shorter, it is longer the way I'm doing it. The point is that the Hubbard-Stratonovich transformation is specific to the GOE, or more generally to a Gaussian measure, whereas this trick is more general: you can use it also for ensembles where you don't have a Gaussian integral to play with. For example, we will use this trick in the next lecture, when we deal with random graphs and sparse matrices. So I prefer to do it in one go; but if you have a Gaussian measure, it is indeed quicker to do the Hubbard-Stratonovich. Are we doing okay? You have probably seen this type of trick before, so you probably know where I'm heading. What I want to do is replace this finite summation in the exponential with integrals, and replace the multiple integral with a functional integral over densities. We have seen this trick before: what I need to do is enforce this definition using a proper delta function, so that I can trade the multiple integral for a functional integration, and I will get a term that resembles an entropy. We saw exactly the same trick in the context of the Coulomb gas analogy, so let's do it step by step. We have this identity; now I want to keep this one.
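The "check this yourself" step can also be done numerically. A sketch: draw arbitrary numbers y_i^a, build the Gram matrix of replica overlaps, and confirm that the site sum and the double integral against the empirical density mu coincide (the deltas in mu collapse the integrals back into sums over sites, which is the whole content of the identity).

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 50, 3                        # N sites, n replicas
y = rng.normal(size=(N, n))         # y[i, a] = y_i^a, arbitrary numbers

G = y @ y.T                         # Gram matrix: G[i, j] = sum_a y_i^a y_j^a

# left-hand side: -(1/(8N)) sum_{i,j} (sum_a y_i^a y_j^a)^2
lhs = -(G**2).sum() / (8 * N)

# right-hand side: -(N/8) int dy dw mu(y) mu(w) (sum_a y^a w^a)^2.
# With mu the empirical measure (1/N) sum_i prod_a delta(y^a - y_i^a),
# each mu contributes a 1/N and the deltas collapse the double integral
# to (1/N^2) sum_{i,j} (sum_a y_i^a y_j^a)^2.
rhs = -(N / 8) * (G**2).sum() / N**2

print(lhs, rhs)                     # identical
```

You can see the factor counting from the lecture directly in the code: 1/N from each mu gives 1/N^2, times the overall N/8, reproduces the 1/(8N).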
I would also like to keep that one. Do we have another blackboard? No, so copy it again; good, you have everything in your notes. So we want to enforce the definition of our mu. I won't rewrite what mu is, it's too long; you know what it is. Now we introduce the same representation of the identity that we used in the past for the Coulomb gas analogy. We are basically enforcing the definition of the density by inserting a functional delta of N mu(y-arrow) minus sum over i from 1 to N of the product over a from 1 to n of delta(y^a - y_i^a), written as a functional integral over a conjugate field mu-hat. This object is basically a functional analog of the standard Fourier representation of the delta function: normally, for a standard delta function delta(x), you can write delta(x) = integral dk/(2 pi) exp(i k x), and this object is the functional analog of that representation, with the conjugate field mu-hat playing exactly the role of k. So what I am doing is enforcing the definition of mu(y-arrow) using this representation. If we do that, we can insert this representation of unity inside the integral, and z^n(lambda) can now be represented as a functional integral over the density mu and the conjugate density mu-hat. We get a first term, which is -iN integral d y-arrow mu(y-arrow) mu-hat(y-arrow); then the coupling term, which we represent through the density as -N/8 integral d y-arrow d w-arrow mu(y-arrow) mu(w-arrow) (sum over a of y^a w^a)^2; and then what remains is the bits we haven't used yet.
Namely: the integral over the dy's that is still there, this bit and this bit. These are the two pieces we haven't used yet. So we have the integral D-mu D-mu-hat, times the two terms above, times the integral over the product over i, a of dy_i^a of exp(-i lambda_epsilon / 2 sum over i from 1 to N, sum over a from 1 to n, of (y_i^a)^2 + i sum over i from 1 to N integral d y-arrow mu-hat(y-arrow) product over a of delta(y^a - y_i^a)). So my replicated partition function is now expressed in terms of functional integrals over the fields mu and mu-hat, which are defined over vectors of size little n, the number of replicas. The first term comes from the delta-function representation; the second term comes from rewriting the coupling in terms of the densities; and then we have the leftovers, this object here and this object here. Oh yes, thanks, I had put the N here. So, what's next? First of all, I want to draw your attention to the fact that, thanks to this trick, a capital N is popping up in front of these two integrals, and we know that a functional integral of exp(N times something) smells good. Why? Because what we are heading towards is the possibility of applying a saddle-point evaluation to this functional integral: the fact that N shows up in front of an action, if you want, is good news. So what's next? We can compute this multiple integral first, and then we will have an action that we can evaluate in the saddle-point approximation. So, what remains to be done: let's compute the N-fold integral below, and then we take a break. What we have is the integral over R^(N times n) of the product over i, a of dy_i^a, which is this object here. Is everything all right?
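For the scalar analog of the functional delta just used: truncating the Fourier representation at |k| <= K gives the kernel sin(Kx)/(pi x), and smearing it against a smooth test function recovers the function's value at zero as K grows. A small sketch:

```python
import numpy as np

# delta(x) = int dk/(2 pi) e^{i k x}; with cutoff |k| <= K the kernel is
# sin(K x) / (pi x), which acts like a delta function for large K.
x = np.linspace(-10.0, 10.0, 40_001)
dx = x[1] - x[0]
K = 100.0

# np.sinc(u) = sin(pi u)/(pi u), so this is sin(K x)/(pi x), safe at x = 0
kernel = (K / np.pi) * np.sinc(K * x / np.pi)

f = np.exp(-x**2)                   # smooth test function, f(0) = 1
val = (kernel * f).sum() * dx
print(val)                          # close to f(0) = 1
```

The functional version inserted in the lecture works the same way, just with one such representation for each value of the vector argument y-arrow, and mu-hat playing the role of k.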
Okay, so you see. Now, help me out: in this multiple integral we have, in the exponential, a sum over i from 1 to N in both terms. So this smells good: a multiple integral of the exponential of a sum of single-site terms factorizes, because we can turn the sum in the exponent into a product of exponentials. This is just capital N copies of a single integral, "single" meaning that it does not depend on capital N anymore, although it still depends on the little n replicas. So this object is something raised to the power capital N, namely: an integral over R^n, with integration variable a vector y-arrow_1 of size little n, of exp(-i/2 lambda_epsilon sum over a from 1 to n of (y_1^a)^2 + i integral d y-arrow mu-hat(y-arrow) product over a of delta(y^a - y_1^a)). All I am doing is picking one representative out of the product, which I call y_1; the whole thing is raised to the power capital N because we have capital N copies of the same integral, and this kills the sums over i. And then, even better: in this integral over R^n d y-arrow_1, I can use the delta functions to kill the inner integral, because I have exactly little n delta functions killing that multiple integral. What remains is exp(-i/2 lambda_epsilon sum over a of (y_1^a)^2 + i mu-hat(y-arrow_1)). Once this is done, I can also rename y-arrow_1 as simply y-arrow, and we are done.
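Since everything is being massaged into the form of an integral of exp(N times something), it may help to recall, on a scalar toy example (a sketch, not the lecture's functional case), how such integrals behave under the saddle-point, or Laplace, method: the integral localizes around the maximizer x*, with the familiar Gaussian prefactor sqrt(2 pi / (N |f''(x*)|)).

```python
import numpy as np

# Laplace method: int dx e^{N f(x)} ~ e^{N f(x*)} sqrt(2 pi / (N |f''(x*)|))
# Toy example: f(x) = -cosh(x), maximized at x* = 0, f(0) = -1, f''(0) = -1.
N = 50
x = np.linspace(-5.0, 5.0, 200_001)
dx = x[1] - x[0]

exact = np.sum(np.exp(-N * np.cosh(x))) * dx
laplace = np.exp(-N) * np.sqrt(2 * np.pi / N)

print(exact / laplace)      # ratio tends to 1 as N grows (here within ~1/N)
```

The functional integral over mu and mu-hat will be treated in exactly this spirit, with the critical points of the action playing the role of x*.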
So we have solved this integral, sort of. Clearly we still have a dependence on the conjugate field mu-hat, but this is good, because we can write this object as the exponential of N times the logarithm of something, of this integral, and an exponential of capital N times something is exactly what we want. Why? Because we can now bring this exp(N times something) inside, so that we end up with exp(N times an action), and this is very good; this is what we want. Okay, so I think we can make a good six-minute-and-48-second break. Everyone start to go. ... Okay, the fun is over. So: we computed, at least in terms of this conjugate function, this multiple integral, and we showed that we could write it as the exponential of N times the logarithm of this integral here. So we can plug this result back in: since we have exp(N times something), I can just raise this bracket here and write plus N times the logarithm of this integral. So now we are in business, because this object can be written as a functional integral over the density mu and the conjugate density mu-hat of exp(N times an action), an action which depends on the replica index little n, and which is a functional of mu and mu-hat and depends parametrically on lambda, or rather lambda_epsilon. So what is this action S?
S_n is: minus i integral d y-arrow mu(y-arrow) mu-hat(y-arrow), then this term here, minus 1/8 integral d y-arrow d w-arrow mu(y-arrow) mu(w-arrow) (sum over a of y^a w^a)^2, plus the logarithm of the integral over R^n d y-arrow of exp(-i/2 lambda_epsilon sum over a of (y^a)^2 + i mu-hat(y-arrow)). Can you read this? Yes or no? So our replicated partition function can be written in terms of a functional integral of the exponential of N times an action. This action depends on the replica index, which at this stage is still an integer; remember, we replicated the partition function little n integer times, and little n is hidden in the fact that the arguments of these fields are n-dimensional vectors. The action is formed of three pieces, three chunks: one is a function of mu and mu-hat; one is a function of mu alone, but couples different vectors; and the other one is a function of mu-hat alone. Now, this type of expression clearly lends itself to a nice saddle-point approximation. I have to warn you, though, that at this stage the math becomes a bit esoteric. What I mean is this: if you recall how the Edwards-Jones formula with the replica trick built in was defined, we had the derivative with respect to lambda, the imaginary part, and so on, and we replaced the logarithm of z(lambda) with what? With the limit
n to zero of (1/n) log of the average of z^n(lambda). Okay. So technically speaking, what we should do at this point is take the replica limit n to zero first. That is what we should do; the problem is that we cannot do it at this point, before taking capital N to infinity, because we have no way to evaluate this functional integral for N not equal to infinity, so to speak. So what we are effectively doing is exchanging the order of limits: we send capital N to infinity before sending little n to zero. And at this point we just need to close our eyes and hope for the best. Luckily, the best will come, otherwise I wouldn't be wasting my time here; but strictly speaking we are on pretty shaky ground at this stage. Good, let's close our eyes. So now you tell me: what do I have to do here for capital N to infinity? No excuses, you had your coffee. The saddle point, exactly. This object is a functional of mu and mu-hat, so what I should do is find the critical points of this action: the points at which the functional derivative of S_n with respect to mu and the functional derivative of S_n with respect to mu-hat are both set to zero. If we differentiate S_n with respect to mu first: from the first term I get -i mu-hat, and let's call it mu-hat-star, because this is the optimal solution of this set of equations. mu also appears here, in the coupling term, but not in the log term; and in the coupling term it appears twice, so I need to multiply this object by two and change sign, which with the 1/8 gives 1/4. So my first equation is: -i mu-hat-star(y-arrow) = (1/4) integral d w-arrow mu-star(w-arrow) (sum over a of y^a w^a)^2; that is, minus i mu-hat-star is equal to an integral over mu-star. The second condition, differentiating with respect to mu-hat, gives
-i mu-star from the first term, and then we need to differentiate the log term with respect to mu-hat, because mu-hat appears inside it. Differentiating the logarithm, this object goes downstairs: we get the denominator, the integral over R^n d y-arrow-prime of exp(-i/2 lambda_epsilon sum over a of (y'^a)^2 + i mu-hat-star(y-arrow-prime)). Upstairs, I still need to do the differentiation, so I get exp(-i/2 lambda_epsilon sum over a of (y^a)^2 + i mu-hat-star(y-arrow)) multiplied by i, which, brought to the right-hand side, lets me erase this minus i and this minus i: so mu-star(y-arrow) equals that exponential divided by its normalization. And so we have a set of coupled integral equations for the densities mu-star and mu-hat-star. These are the saddle-point equations. Now, the point is: how do we solve this system of two coupled integral equations for scalar functions of n-dimensional vectors? Good luck. The good thing is that we can now plug the second equation into the first, so that we at least reduce the two equations to a single equation for one of the two objects. So, combining the two, and I'll try to write it as big as I can: we get -i mu-hat-star(y-arrow), which is this object here, equals a ratio. The denominator is the integral over d w-arrow of exp(-i/2 lambda_epsilon sum over a of (w^a)^2 + i mu-hat-star(w-arrow)); I am just renaming y-prime into w. That is downstairs. So this single equation involves only mu-hat-star: we have combined the two equations into a single one. Upstairs,
We have a factor one over one over four and then we have The integral over the w Exponential of minus i lambda epsilon over two summation over a w a square Plus i mu hat star of w Times times what times this bit here And this bit here I can rewrite it as a scalar product as a dot product of y and w So I have now the dot product of y and w square So I have combined so here You have y the vector of little n variables and y appears here You see and then w is the integration is the integration variable if I only had another color. Yes, I do and w Well upstairs and downstairs are the integration the integration variables so now I I should solve This integral equation for the function mu hat star Which appears here and Inside an integration in n-dimensional coordinates So these are n-dimensional Integrals So how do we how do we solve it? well, first of all we We have to make some assumption on the behavior of mu hat Of y so the The assumption that is made at this at this point is that that mu hat and mu star of y will be just a function the modulus of y So this is the so-called replica Symmetric and high temperature So this is an assumption or or An answer that needs to be verified Afterwards it means it but it cannot be verified. We can only just Do this use it and then try to land on the correct on the correct answer The idea behind behind it is that well replica symmetric You know the replicas y1 y2 yn Were introduced as a mathematical trick to be able to exchange the order of of integrations. 
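Collected in one place (this is my own transcription of the blackboard formulas, with λ_ε = λ − iε, so the overall constants should be checked against the board), the two saddle-point conditions and their combination read:

```latex
% Saddle-point equations of the quenched Edwards-Jones calculation (GOE),
% as transcribed from the lecture; constants to be checked against the board.
\begin{align}
  -i\hat\mu^{\star}(\vec y) &= \frac{1}{4}\int \mathrm{d}\vec w\;
      \mu^{\star}(\vec w)\,\Big(\sum_{a=1}^{n} y_a w_a\Big)^{2},\\[4pt]
  \mu^{\star}(\vec y) &= \frac{\exp\!\big(-\tfrac{i}{2}\lambda_\epsilon
      \sum_a y_a^2 + i\hat\mu^{\star}(\vec y)\big)}
      {\int_{\mathbb{R}^n} \mathrm{d}\vec y'\;
       \exp\!\big(-\tfrac{i}{2}\lambda_\epsilon \sum_a y_a'^{\,2}
       + i\hat\mu^{\star}(\vec y')\big)}.
\end{align}
Plugging the second equation into the first gives a single equation
for $\hat\mu^{\star}$ alone:
\begin{equation}
  -i\hat\mu^{\star}(\vec y) \;=\; \frac{1}{4}\,
  \frac{\displaystyle\int \mathrm{d}\vec w\;
        e^{-\frac{i}{2}\lambda_\epsilon \sum_a w_a^2
        + i\hat\mu^{\star}(\vec w)}\,(\vec y\cdot\vec w)^{2}}
       {\displaystyle\int \mathrm{d}\vec w\;
        e^{-\frac{i}{2}\lambda_\epsilon \sum_a w_a^2
        + i\hat\mu^{\star}(\vec w)}}.
\end{equation}
```

This is the single n-dimensional integral equation that the replica-symmetric ansatz will reduce to a radial one.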
Okay, so when we did that, there was at the beginning no reason to assume that any replica would be, or should be, different from any other. The idea that y₁, the first component of the vector, and y₁₇ should play different roles in what follows does not come from anywhere in this derivation: all the copies, all the replicas, should be treated as equal. So it is quite natural, or intuitive, to assume that this object should at least have some symmetry under permutations of replicas, under exchange of replica labels if you want. Clearly, as Federico will probably discuss at length, the situation is not so simple: we now know that this simple-minded argument might not work, or at least might not work everywhere and for all models, and this gives rise to the whole business of replica symmetry breaking, which some of you might have heard of, and which I think Federico will cover in his lectures. So for now let's treat this as an ansatz: we assume that μ̂*(y) is only a function of the modulus of the vector y, because this simplifies the analysis considerably.

So what we do here is assume that this object is now a function of a scalar variable, which is just y; let's call y the modulus of the vector y in replica space. But still, on the right-hand side, we have n-dimensional integrals. So now you tell me what I should do with these integrals, if we assume that the integrand is only a function of the modulus. What would you do? If we were in two dimensions, what would you do? Yes: you would go to polar coordinates, or in this situation to spherical n-dimensional coordinates, because the integrand only depends on the modulus. Now, have you ever used spherical n-dimensional coordinates?

They are not very nice, but they are just the generalization of polar coordinates to n dimensions. They were discovered by Mr. Wikipedia; well, at least that is where I took them from. Maybe I should erase this... no, we are being recorded. So: this is the list of n-dimensional spherical coordinates, in terms of a certain number of angles and the modulus of the vector, which in this situation we call y.

Okay, so who knows the volume element in n-dimensional spherical coordinates, the Jacobian of the change of variables if you want? In two dimensions we know what it is, right? We have variables r and θ; what is the Jacobian? "r squared." Any other options? Let's open a poll... I knew it was r. Right? So what is the generalization of this to n-dimensional coordinates? Me neither, before Mr. Wikipedia helped me. It is r^(n−1), times sin^(n−2)(φ₁), times sin^(n−3)(φ₂), and so on, down to sin(φ_{n−2}). And if n = 2, then you get just the r here and all the sines cancel.

Good. So what I need to do now is go to spherical coordinates upstairs, go to spherical coordinates downstairs, and then possibly cancel out the terms that are common to numerator and denominator. Of course you will have a lot of angular integrals in the numerator and the denominator. The integrand downstairs does not depend on the angles at all; the one upstairs, here...
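The Jacobian just quoted is easy to sanity-check numerically (this check is mine, not from the lecture): integrating r^(n−1) sin^(n−2)(φ₁) ⋯ sin(φ_{n−2}) over the unit ball must reproduce the known volume π^(n/2)/Γ(n/2 + 1), and the angular factor ∫₀^π sin^(n−2)φ dφ has a closed form in Gamma functions. A minimal sketch, using only the standard library:

```python
import math

def integrate(f, a, b, steps=20000):
    """Midpoint-rule quadrature, standard library only."""
    h = (b - a) / steps
    return h * sum(f(a + (k + 0.5) * h) for k in range(steps))

def sin_power_integral(m):
    """Integral of sin(phi)**m over [0, pi]; in closed form this equals
    sqrt(pi) * Gamma((m + 1) / 2) / Gamma(m / 2 + 1)."""
    return integrate(lambda p: math.sin(p) ** m, 0.0, math.pi)

def ball_volume(n):
    """Volume of the unit n-ball, assembled from the spherical Jacobian
    r**(n-1) * sin(phi_1)**(n-2) * ... * sin(phi_{n-2}):
    the integrand is separable, so each factor integrates on its own."""
    radial = 1.0 / n                 # integral of r**(n-1) over [0, 1]
    angular = 2.0 * math.pi          # the last angle runs over [0, 2*pi)
    for m in range(1, n - 1):        # sin powers 1 .. n-2
        angular *= sin_power_integral(m)
    return radial * angular

# Compare against the closed form V_n = pi**(n/2) / Gamma(n/2 + 1)
for n in (2, 3, 4, 5):
    exact = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
    print(n, round(ball_volume(n), 6), round(exact, 6))
```

Because the integrand is separable, each angle integrates independently; that same separability is what lets almost all the angular factors cancel between numerator and denominator in the formula on the board.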
It depends on how many angles? Just one: the angle between y and w. So a lot of angular integrals will cancel out between numerator and denominator; only one angular integral will remain upstairs and downstairs, because there is one remaining angle, which we take as the angle between the vectors y and w. This simplifies things a lot. Now, if we do that, what remains (I will just write the result) is the following. The final result will be −iμ̂*(y) equal to:

Downstairs, let's start downstairs: we have an integral over the radius, from zero to infinity, dω; then the radius to the n − 1, that is ω^(n−1); times exp(−(i/2)λ_ε ω² + iμ̂*(ω)); times the one remaining angular integral, ∫₀^π dφ sin^(n−2)(φ), which is the one that has not cancelled between upstairs and downstairs. This bit should be clear: the radial coordinate comes with its Jacobian ω^(n−1) and runs from zero to infinity, and the integrand depends only on the radial coordinate, so it is ω² in the exponent and ω is the argument of μ̂*.

Upstairs we have the same thing, plus some extra pieces. First of all a 1/4; then a y²; then dω ω^(n−1) exp(−(i/2)λ_ε ω² + iμ̂*(ω)); then an extra ω², which comes from here; and the angular integral ∫₀^π dφ sin^(n−2)(φ) cos²(φ). (Why the y² and the ω²? Because this object here is y·w, which is y times ω times the cosine of the angle between them; if you square it, you get y², ω², and cosine squared. The y² goes in front, with the factor 1/4, and ω^(n−1) is just the radial part of the Jacobian.)

Maybe you are not appreciating it at this moment, but look at what is happening here. Very swiftly, we have turned our little n, which was an integer, into a real variable. At this point this expression no longer knows that little n is an integer. Which is a very good thing, because in the end we do not want little n to be an integer: we want to take the limit little n → 0. So, very swiftly, by using this ansatz, little n now appears as a parameter inside the equation; it is no longer an integer. Do you agree?

(A question from the audience.) Well, the only way to check this is that it gives the correct result in the end. In the case of the GOE we know the result: applying the Edwards–Jones formula with all these tricks, we know that we should land on a semicircle. For other types of random matrix models, you can check with numerical simulations, for example, and it also works. If you are asking for a theorem... well, then you would need to ask a mathematician with, how shall I say... a mathematician would be horrified by all I have done, and would have stopped listening to me much earlier than this stage. Although, to be fair, there are nowadays a lot of mathematicians trying to make replicas rigorous, and then they will claim that they have done it.

(A question about the choice of angle.) No, you are just picking one of the angles; the important thing is that you pick the same angle upstairs and downstairs. In principle there is no difference: this particular choice just makes the calculation a little bit easier, not even by much. The important thing is that you pick the same angle upstairs and downstairs; you cannot pick a different power of n in the two.
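The numerical check just mentioned is straightforward to set up. Here is a minimal sketch (my own illustration, not from the lecture), using the standard convention Var(x_ij) = 1/N off-diagonal and Var(x_ii) = 2/N on the diagonal, for which the semicircle lives on [−2, 2] with density ρ(λ) = √(4 − λ²)/(2π):

```python
import numpy as np

rng = np.random.default_rng(0)

def goe_eigenvalues(N, samples):
    """Eigenvalues of GOE matrices with Var(x_ij) = 1/N off-diagonal and
    Var(x_ii) = 2/N on the diagonal, so the spectrum converges to the
    semicircle on [-2, 2]."""
    eigs = []
    for _ in range(samples):
        a = rng.normal(size=(N, N))
        x = (a + a.T) / np.sqrt(2 * N)  # symmetrization gives the right variances
        eigs.append(np.linalg.eigvalsh(x))
    return np.concatenate(eigs)

def semicircle(lam):
    """Semicircle density sqrt(4 - lam^2) / (2 pi), zero outside [-2, 2]."""
    lam = np.asarray(lam, dtype=float)
    return np.sqrt(np.clip(4.0 - lam ** 2, 0.0, None)) / (2.0 * np.pi)

eigs = goe_eigenvalues(N=400, samples=20)
hist, edges = np.histogram(eigs, bins=40, range=(-2.2, 2.2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("mean abs deviation from semicircle:",
      np.mean(np.abs(hist - semicircle(centers))))
```

With N = 400 and 20 samples, the histogram already hugs the semicircle to within a few percent per bin; the same comparison can be run for other ensembles where no analytic result is available.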
Otherwise you are picking two different angles. In the end it does not matter much: the common contribution will cancel out between upstairs and downstairs anyway.

Actually, we can now compute these angular integrals explicitly, because their expressions are known: there are formulas for ∫₀^π sin^(n−2)(φ) cos²(φ) dφ, and for the one downstairs, in terms of Gamma functions. So we can compute these two integrals explicitly. And there is another good thing, which maybe you have not noticed: just for the Gaussian ensemble, an incredible simplification has happened, which is that this function becomes just y² times a constant. You will not see it from here, but you will see it in a minute. The only dependence on y is here, in front: whatever μ̂* is, once you plug it in and solve the integrals, these will be just numbers. So the only dependence on y is the y² in front. This is a feature of the Gaussian ensemble, not a general feature, but it is very nice: we know that μ̂* will be exactly a constant, whatever it is, times y².

Good. So, to simplify notation, let's call this object G: G(w) = exp(−(i/2)λ_ε w² + iμ̂*(w)). Then we can rewrite this, pulling out a minus sign: iμ̂*(y) is equal to, simplifying between upstairs and downstairs, a ratio of Gamma functions coming from the angular integrals, Γ(n/2)/(2Γ(1 + n/2)), times y²/4, which is this object here; times, upstairs, ∫₀^∞ dω ω^(n+1) G(ω), with n + 1 because we had n − 1 plus 2; and, downstairs, ∫₀^∞ dω ω^(n−1) G(ω). So whatever μ̂* is, these two objects are just numbers, and we know that the only dependence that μ̂* can have on y is of the type y² times a constant.

There is a second observation to be made here: the prefactor has a good limit, a finite limit, as little n goes to zero. (Note that Γ(n/2)/(2Γ(1 + n/2)) is just 1/n; the integration by parts I am about to do produces a compensating factor of n, so the combination actually goes to one as n → 0.) So at this stage the integer nature of little n has completely faded out. Also, there should be a minus sign in front here, but I will trade it away with a trick: doing an integration by parts downstairs, I can rewrite ∫₀^∞ dω ω^(n−1) G(ω) as −(1/n) ∫₀^∞ dω ω^n G′(ω), and this minus sign cancels the one in front. So this, with the prime, is the correct final formula. Can you see why it is convenient to do an integration by parts and lift the power to n, instead of having n − 1? Because in the end we are interested in the limit little n → 0: if I had an ω^(n−1) here, taking the limit directly inside would cause a problem, a possible non-integrable divergence of the type 1/ω at zero. Instead, with the integration by parts, I am on safer ground, and I can in principle send little n to zero without thinking too much.

So, in essence, what we have here is that we can write iμ̂*(y) as a certain constant depending on λ, times y²: iμ̂*(y) = c(λ) y². And this constant c(λ) we can determine self-consistently. What is the equation satisfied by c(λ)? Taking the limit n → 0, c(λ), which is this object here, should be equal to whatever multiplies y² on the other side. We know the prefactor goes to one, so c(λ) should be equal to (1/4) times ∫₀^∞ dω ω G(ω) divided by ∫₀^∞ dω G′(ω). Upstairs, remember what G(ω) was: exp(−(i/2)λ_ε ω² + iμ̂*(ω)); but we know that iμ̂*(ω) is equal to c(λ) ω². Downstairs, if you want, you can treat G′ as a total derivative, or carry out the differentiation explicitly. The point is that you have c(λ) on the left, and c(λ) inside the integrals on the right, and these integrals you can perform explicitly, because they are of the form exp(−something × ω²), possibly multiplied by an ω. So: we plug iμ̂*(ω) = c(λ) ω² into the definition of G, and require that the object multiplying y², in the limit n → 0, be equal to c(λ) itself. This gives the self-consistency equation for c(λ): c(λ) must equal 1/4 times the ratio of these two integrals, where in the integrands we use the fact that iμ̂* is quadratic in its argument. And this is effectively an equation for c(λ): you get explicit expressions, a number on the left-hand side and a number depending on c(λ) on the right-hand side, and you can just solve it. If you solve this equation for c(λ), which I leave to you, what you obtain is iλ_ε plus a square root, up to constants; and what you are witnessing here is, once again, the birth of a semicircle. The semicircle law is basically being born at this moment, when you realize that this conjugate density must be quadratic in its argument, and you use that fact, along with the limit n → 0, to determine a self-consistency equation for the constant. So now all we have to do is put all the pieces together, but in essence you will see that the semicircle is already in here.

I think I only have... well, maybe seven minutes to finish; we can finish next time. Sorry? Meaning that I should never finish, or that I should finish now? It is hard to interpret, but either way: can I leave it there? It was an imperative, sorry. Coffee, yes. So you want me to stop now? That is what I wanted to do; you have just wasted five more minutes. Okay, see you whenever; I am always upstairs.
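The last algebraic step is left to the audience, so rather than guess the constants on the board, here is a sketch of the same mechanism in its most familiar form (my own convention, with the edges at ±2): the quadratic self-consistency for the resolvent, g = 1/(λ_ε − g), solved by fixed-point iteration at small ε, with the density recovered through the Sokhotski–Plemelj formula mentioned at the start.

```python
import math

def resolvent(lam, eps=1e-2, iters=3000):
    """Fixed-point iteration for the quadratic self-consistency
    g = 1 / (lambda_eps - g), with lambda_eps = lam - i*eps.
    The iteration converges to the attracting root of the quadratic,
    which is the physical branch of the resolvent."""
    z = complex(lam, -eps)
    g = 0j
    for _ in range(iters):
        g = 1.0 / (z - g)
    return g

def density(lam, eps=1e-2):
    """Sokhotski-Plemelj inversion: rho(lam) = Im g(lam - i*eps) / pi."""
    return resolvent(lam, eps).imag / math.pi

def semicircle(lam):
    """Semicircle law sqrt(4 - lam^2) / (2 pi) on [-2, 2]."""
    return math.sqrt(max(4.0 - lam * lam, 0.0)) / (2.0 * math.pi)

for lam in (0.0, 0.5, 1.0, 1.5, 1.9, 2.5):
    print(f"lambda={lam:4.1f}  rho={density(lam):.4f}  "
          f"semicircle={semicircle(lam):.4f}")
```

The quadratic equation g² − λ_ε g + 1 = 0 behind this fixed point plays the same role as the quadratic self-consistency for c(λ): the square root in its solution is exactly where the semicircle, and its edges, come from.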