Good morning. How are you guys? So, where were we? I left you a simple exercise, which was the following. We have an ensemble of N×N real symmetric matrices. I take one matrix A of this ensemble, with spectrum λ⃗_A. We saw that the empirical spectral density ρ_A(λ) can be written as

ρ_A(λ) = -(2/(πN)) lim_{η→0⁺} Im ∂/∂z log Z_A(z),    z = λ - iη,

that is, the limit η → 0⁺ of the imaginary part of the derivative with respect to z of the logarithm of Z_A(z). And the exercise I left was: suppose you take a particular ensemble, with a random recipe for picking matrices. Actually, let me finish this first. For any real symmetric matrix, Z_A(z) can be written as the integral

Z_A(z) = ∫ ∏_{i=1}^{N} (dx_i/√(2π)) exp(-H_A(x)),

where

H_A(x) = (1/2) Σ_{i,j=1}^{N} x_i (z·I - A)_{ij} x_j,

with I the identity matrix. And we said this mapping is exact for any real symmetric matrix. So far so good? Okay, so what was the idea of the exercise? We take a particular ensemble: suppose it is the ensemble of Erdős–Rényi random graphs, where we denote a given matrix by C. This C is what is called the connectivity, or adjacency, matrix. We are going to assume that these graphs are undirected, which means this matrix is symmetric.
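As an aside, the resolvent formula above is easy to check numerically: Z_A(z) is a Gaussian integral, so log Z_A(z) = -(1/2) Σ_k log(z - λ_k) up to a constant, and the formula reduces to a sum of Lorentzians of width η centered at the eigenvalues. Below is a minimal sketch; the test matrix (a GOE-like rescaled symmetric Gaussian matrix) and all parameter values are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

# Smoothed empirical spectral density from the resolvent formula:
# rho_eta(l) = -(2/(pi*N)) Im d/dz log Z_A(z) at z = l - i*eta
#            = (1/(pi*N)) * sum_k eta / ((l - lam_k)^2 + eta^2).
rng = np.random.default_rng(0)
N = 300
A = rng.normal(size=(N, N))
A = (A + A.T) / np.sqrt(2 * N)      # real symmetric test matrix (illustrative choice)

lam = np.linalg.eigvalsh(A)         # exact spectrum
eta = 0.05
grid = np.linspace(-3.0, 3.0, 601)

# Sum of Lorentzians of width eta, one per eigenvalue
rho = (eta / np.pi) * np.mean(
    1.0 / ((grid[:, None] - lam[None, :]) ** 2 + eta ** 2), axis=1
)

mass = rho.sum() * (grid[1] - grid[0])   # total mass, close to 1 for small eta
```

For small η the smoothed density is non-negative and integrates to approximately 1, which is the sanity check that the Gaussian-integral mapping is exact.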
That means c_ij = c_ji. We are also going to assume that there are no self-loops, and the probabilistic rule is the following: the probability that c_ij = 1, meaning that I take two nodes i and j and they are connected by a link, is d/N. The entries take values 0 or 1, so the probability that c_ij = 0 is 1 - d/N. One can show that this d is the average connectivity of the graph. And then I give you this ensemble, and I ask you to calculate the following. For a given C, we know there is an exact map that tells you the empirical spectral density ρ_C(λ) is the formula I put above. What I wanted you to do is calculate the expectation value of ρ_C(λ) with respect to the random rule for the ensemble, denoted with an overline. This bar means the following: I have a rule that tells me the probability of drawing a given connectivity matrix at random from the ensemble, so

ρ(λ)‾ = Σ_C P(C) ρ_C(λ),

the sum running over all possible values of the matrix entries, of course taking into account the constraints. So this is just the definition of an expectation value. The only difference is that the random variables take values 0 and 1, so instead of an integral we put a sum over all possible values of the entries of the connectivity matrix, taking into account that the matrix has to be symmetric, and for simplicity I have removed the diagonal terms.
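A quick way to build intuition for this ensemble is to sample from it. The sketch below assumes nothing beyond the rule just stated (P(c_ij = 1) = d/N for i < j, symmetric entries, zero diagonal); the sizes and the seed are arbitrary choices.

```python
import numpy as np

# Sample one connectivity matrix C from the Erdos-Renyi ensemble:
# c_ij = c_ji in {0, 1}, c_ii = 0, P(c_ij = 1) = d/N for i < j.
rng = np.random.default_rng(1)
N, d = 2000, 4.0

upper = np.triu(rng.random((N, N)) < d / N, k=1)  # independent Bernoulli(d/N), i < j only
C = (upper | upper.T).astype(int)                 # symmetrize; diagonal stays 0

mean_degree = C.sum(axis=1).mean()                # concentrates around d for large N
```

The last line checks the statement that d is the average connectivity: the mean degree concentrates around d as N grows.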
I put them to zero. So far so good? But of course I want to do this expectation value using the formula above. That means that at the end of the day, the important quantity I have to do the quenched average over is precisely the logarithm of the partition function associated to this problem. So this is the important piece I need to calculate. Using the replica trick, I know I can evaluate it as follows:

log Z‾ = lim_{n→0} (1/n) log( Z^n‾ ),

the limit n → 0 of 1/n times the logarithm of the average of the nth power of the partition function, the average being over the probabilistic rule for the Erdős–Rényi ensemble. So far so good? Very good. So this means that now I have to focus on this object, Z^n, and we consider first n an integer. So what is it? In the formula for Z there is a 1/√(2π), but this is a constant that plays no role, so I am not going to write it anymore. We have

Z^n = [ ∫ d^N x exp(-H_C(x)) ]^n,

where I use the notation d^N x = ∏_i dx_i, and H_C(x) is given by the mapping we have:

H_C(x) = (1/2) xᵀ (z·I - C) x,

with I the identity matrix. So far so good? [Question from the audience.] Yes, I just said that since the 1/√(2π) is an irrelevant constant I am going to forget it. You can forget it because here you have the logarithm and then this derivative, so the constant will disappear. But in principle you cannot forget it: you have to do the whole bloody derivation with all the constants, and then realize which ones are important and which ones are not.
Good. So this is equal to what? Let us do it step by step. Bear with me for a second, because I have been discussing with some of you, and I think doing these steps explicitly is very important: there are certain things some of you did not do correctly. This object to the power n is just the integral multiplied by itself n times. Now, since the x's are dummy variables, because they are integration variables, I could keep calling all of them x, or call them y, z, and so on, but that would not be very efficient notation. So I am going to call them x¹, x², x³, etc. This gives

Z^n = ∫ d^N x¹ e^{-H_C(x¹)} ∫ d^N x² e^{-H_C(x²)} ⋯ ∫ d^N xⁿ e^{-H_C(xⁿ)},

n copies of the system. But notice that in these integrals there is something which is free, not dummy: the matrix C. I am integrating over the x's; I am not integrating, so far, over the matrix. All these integrals share the same matrix. That is why I cannot put an index 1, 2, … on C: that would not make any sense, because C is a free object, not a dummy one.
So I write this compactly as

Z^n = ∫ ∏_{α=1}^{n} d^N x⃗^α exp( -Σ_{α=1}^{n} H_C(x⃗^α) ),

and again C does not carry any α, because it is shared by all the copies. So far so good? Very good. And notice how I do derivations: piece by piece, step by step, which is a very healthy thing to do. So I have arrived at the fact that the nth power of the partition function is equal to this integral. Now I have to do the average over the different realizations of C, denoted with the overline, and remember that this average means what I wrote before. Let me set the integration measure aside for a moment and focus on the average of the exponential. By definition,

exp(-Σ_α H_C(x⃗^α))‾ = Σ_C P(C) exp( -Σ_{α=1}^{n} H_C(x⃗^α) ).

Now I just start inserting definitions. This is equal to

Σ_C P(C) exp( -(1/2) Σ_{α=1}^{n} Σ_{i,j=1}^{N} x_i^α (z·δ_ij - c_ij) x_j^α ).

C sits here, and I have to average over the matrix entries. So I am going to focus on that part, and then I will gather all the results.
So let us focus on that part. This is not an equality; I am isolating the piece of the argument of the exponential which contains C, which is

(1/2) Σ_{α=1}^{n} Σ_{i,j=1}^{N} x_i^α c_ij x_j^α.

Good. Now remember we have the conditions c_ij = c_ji and c_ii = 0. These are enforced by the definition of the probability, and you can impose them directly in this expression. Since the diagonal entries are zero — our choice, no self-loops, to simplify the derivation — this is equal to

(1/2) Σ_{α=1}^{n} Σ_{i≠j} x_i^α c_ij x_j^α,

where I am imposing c_ii = 0. And since the matrix is symmetric, this is equal to

Σ_{α=1}^{n} Σ_{i<j} x_i^α c_ij x_j^α,

where I am imposing c_ij = c_ji. Do you see this step, or shall I do it with a couple more steps? It's okay? Very good. So that means I now have to do the expectation value

Σ_C P(C) exp( Σ_{α=1}^{n} Σ_{i<j} x_i^α c_ij x_j^α ) = Σ_C P(C) ∏_{i<j} exp( c_ij Σ_{α=1}^{n} x_i^α x_j^α ).

I have not done anything yet: all I did was write the sum over pairs as a product in front of the exponential.
But now, since the entries c_ij with i < j are independent random variables, everything factorizes, and I only have to do the expectation value for one matrix entry at a time. So this is equal to

∏_{i<j} Σ_{c_ij ∈ {0,1}} [ (d/N) δ_{c_ij,1} + (1 - d/N) δ_{c_ij,0} ] exp( c_ij Σ_{α=1}^{n} x_i^α x_j^α ),

with Kronecker deltas implementing the probabilities. Then I do the sum, because c_ij takes only two values: when c_ij = 0 the exponential is 1 and I pick up the factor (1 - d/N); when c_ij = 1 I pick up (d/N) times the exponential. So this is equal to

∏_{i<j} [ 1 - d/N + (d/N) exp( Σ_{α=1}^{n} x_i^α x_j^α ) ].

So far so good? Very good. And this I can write as follows, with a modest amount of foresight, because I am thinking ahead to the next step of the derivation:

∏_{i<j} [ 1 + (d/N)( e^{x⃗_i·x⃗_j} - 1 ) ],

where I simplify the notation using vectors in replica space, x⃗_i·x⃗_j = Σ_{α=1}^{n} x_i^α x_j^α. Here we have introduced the replica-space notation: an arrow on x, possibly with a site index below, denotes the vector of its n replica components.
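The single-entry average just computed is the only probabilistic input of the whole derivation, so it is worth checking in isolation. In the sketch below, u stands for Σ_α x_i^α x_j^α, frozen to an arbitrary number; all parameter values are illustrative.

```python
import numpy as np

# For c ~ Bernoulli(p) with p = d/N:
#   E[exp(c*u)] = (1 - p)*1 + p*exp(u) = 1 + p*(exp(u) - 1).
rng = np.random.default_rng(2)
N, d = 100, 3.0          # so p = d/N
u = -0.7                 # stands in for sum_alpha x_i^alpha x_j^alpha
p = d / N

c = rng.random(500_000) < p            # Bernoulli(p) samples
empirical = np.exp(c * u).mean()       # Monte Carlo estimate of E[exp(c*u)]
exact = 1 + p * (np.exp(u) - 1)
```

The Monte Carlo estimate agrees with the closed form 1 + (d/N)(e^u - 1), which is the factor appearing under the product over pairs.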
That is, x⃗ stands for the vector (x¹, …, xⁿ) of replica components. Good. And this product is equal to the exponential of its logarithm, and the log of a product is the sum of the logs:

exp( Σ_{i<j} log[ 1 + (d/N)( e^{x⃗_i·x⃗_j} - 1 ) ] ).

Since at some point I am interested in the limit N → ∞, very large matrices, I can do a Taylor expansion of the logarithm — its argument is 1 plus something of order 1/N — and keep the leading terms. This gives

exp( (d/N) Σ_{i<j} ( e^{x⃗_i·x⃗_j} - 1 ) + O(N⁰) ).

Good. Now, this object in the sum is symmetric under the exchange of i and j, so I can resum. Maybe I am being too explicit, but the only step I am doing is the following. Suppose you have Σ_{i<j} s_ij, where s_ij is symmetric under the exchange of i and j. I write it as one half of the same thing plus one half of the same thing; I have not done anything. Then in the second piece I interchange i and j, because they are dummy variables, getting (1/2) Σ_{i<j} s_ij + (1/2) Σ_{i>j} s_ji. But s is a symmetric object.
So s_ji = s_ij, and what I have is one half of the sum over all i and j with i ≠ j, because the diagonal is missing:

Σ_{i<j} s_ij = (1/2) Σ_{i≠j} s_ij.

Just that. So I do this here, and the average becomes

exp( (d/2N) Σ_{i≠j} ( e^{x⃗_i·x⃗_j} - 1 ) + terms of order N⁰ ).

[Question.] Why order N⁰? Because this is a double sum, i from 1 to N and j from 1 to N, so the sum over i < j produces N(N-1)/2 terms, which for large N is proportional to N². When you do the Taylor expansion of the logarithm, the first term carries a 1/N, so the sum with this 1/N gives something proportional to N. The next term in the Taylor expansion carries a 1/N², and 1/N² times a sum with N² terms is of order N⁰. Good. More questions? Now, remember this is not the complete double sum: it is missing the diagonal. But I can add and subtract the diagonal, and the diagonal is also of order N⁰. So this is equal to

exp( (d/2N) Σ_{i,j=1}^{N} ( e^{x⃗_i·x⃗_j} - 1 ) + O(N⁰) ).

Questions so far? Very good. Now, if I put everything together — well, up to a given point — what do I have? I have that the partition function to the power n, averaged over this ensemble of graphs, is equal to what?
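The resummation step above is pure bookkeeping, and it can be checked on any symmetric array. A small sketch with an arbitrary random S:

```python
import numpy as np

# For symmetric s_ij:  sum_{i<j} s_ij = (1/2) * sum_{i != j} s_ij
#                                     = (1/2) * (sum_{i,j} s_ij - sum_i s_ii).
rng = np.random.default_rng(3)
n = 7
S = rng.normal(size=(n, n))
S = S + S.T                                    # make s_ij symmetric

upper = np.triu(S, k=1).sum()                  # sum over i < j
half_off_diag = 0.5 * (S.sum() - np.trace(S))  # (1/2) * sum over i != j
```

The two quantities agree to machine precision, which is all the lecture's "complete the sum, divide by 2, add and subtract the diagonal" manipulation uses.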
It is equal to a bunch of integrals, over all variables and all replicas, of an exponential. In the Hamiltonian there was also the diagonal part, which I now put back:

Z^n‾ = ∫ ∏_{α=1}^{n} d^N x⃗^α exp( -(z/2) Σ_{α=1}^{n} Σ_{i=1}^{N} (x_i^α)² + (d/2N) Σ_{i,j=1}^{N} ( e^{x⃗_i·x⃗_j} - 1 ) + O(N⁰) ),

the second term being the result of the average over the statistics of the Erdős–Rényi graphs, and I am not going to keep writing the O(N⁰) terms. Good. [Question: how do you go from the restricted sum to the complete one?] Let me do it step by step. I have (d/2N) Σ_{i≠j} ( e^{x⃗_i·x⃗_j} - 1 ), and I am going to add a zero to this expression, but in a smart way, to achieve something. I add

+ (d/2N) Σ_{i=1}^{N} ( e^{x⃗_i·x⃗_i} - 1 ) - (d/2N) Σ_{i=1}^{N} ( e^{x⃗_i·x⃗_i} - 1 ),

remembering that I already have terms of order N⁰. So far so good? The term I added completes the double sum. And the piece I subtracted is itself of order N⁰: you have a sum of N terms divided by N, so it is sub-leading when N goes to infinity. Of course, if you were going to do perturbation theory, all these sub-leading pieces would be very important, but we are not going to do perturbation theory. Good, so that's it. [Why can we drop them?] Because to get an expression for the typical behavior of the spectral density for very large matrices,
you only need the leading terms in the system size. The sub-leading terms give you the corrections when you want to take into account that the matrix may be finite, not infinitely large. More questions? Go ahead. [About the notation.] This notation, x⃗, means a generic vector in replica space, x⃗ = (x¹, …, xⁿ), where n is the number of replicas. If I put a site index below, x⃗_i, this means the vector in replica space at node, or site, i: x⃗_i = (x_i¹, …, x_iⁿ). I know notation can be a bit annoying, but you have to get used to it, because if you wanted to write all this out explicitly it would be even more annoying. [Another question.] Yes, we have a sum: this scalar product came from Σ_{α=1}^{n} x_i^α x_j^α. Since I am a bit lazy, at least today, I wanted to get rid of the sum, and I use the scalar-product notation. More questions? You will see further on in the derivation why I need to have the complete sum. The only reason I am adding this term — again, this notation means the sum over all i and j with i ≠ j — is that I want the complete double sum, so I add the diagonal. Very good, more questions, guys? Very good. So now I have this, and the next step has nothing to do with random matrices or spin glasses. Actually, let me spend some time on this. [Question: in the expansion of the logarithm, you have d/N inside, but there are very many terms in the sum.]
Okay, this is a fair point: the argument of each logarithm is close to 1, but there are a lot of them. You do the expansion of each logarithm, and the sum stays in front of the expansion; at the end, the first term is something proportional to N, considering the 1/N inside the argument of the log and the N² terms of the double sum. You have to be a bit careful sometimes with this: you have something like Σ_{i<j} log(1 + x/N), and the expansion is done log by log, each one controlled, even though the number of terms grows. If instead the sum were inside the logarithm and you took the limit, you would get the limit of an exponential, which at the end is what you get here anyway. More questions? Okay, so the next trick — sorry — has nothing to do with spin glasses or random matrices or whatever. It is a trick one should learn in statistical mechanics or condensed matter, which is the following. Forget about what we have done so far. I am going to write down the same letter H for a Hamiltonian, but this is a different Hamiltonian; this is just a general comment. Suppose I have a Hamiltonian, with discrete variables for simplicity. Okay, I am going to explain a very important trick. Given this Hamiltonian, I want to calculate the partition function,

Z = Σ_σ exp(-βH(σ)),

the sum over all configurations. So, forget what we did before; this is about statistical mechanics tricks. Now, if I have a Hamiltonian which is linear in the thermal variables, the partition function is — sorry? [Question: can we put this in the spin context?]
Yes, but for now it is simply a Hamiltonian that, depending on the context, can mean different things; it does not matter. If you want to put it in the context where the σ's are spin variables and therefore the θ's are external local magnetic fields, that is fine by me, but this has nothing to do with what I want to explain; it is just to put down some formulas. Go ahead. [Why did that term appear in the earlier expression?] Because if you go back to the different pieces I analyzed and you put everything together, you end up there. So remember: when you do difficult derivations, it is better to isolate certain pieces — I had to analyze this piece, and then this one, and then this one — and then gather everything in the average of the replicated partition function. Remember that in the Hamiltonian there was a diagonal term that did not contain C, the connectivity matrix. When I was doing the average over the c's I said, okay, I am going to set that part aside and focus only on the part where C appears. I did that piece, and then you go back, gather all the pieces, and at the end you get the expression. [And the order-N⁰ terms?] I said I was not going to keep writing that term; if you want, you can put "plus terms of order N⁰". More questions? I know this derivation is annoying and difficult, but it is okay; ask as many questions as you want. So, can I continue? Right, okay. So suppose you have a Hamiltonian — it does not matter in which context — which is linear, in the sense I will now make precise.
It is linear in the variables over which I have to trace: say H(σ) = -Σ_i θ_i σ_i with σ_i = ±1. Then the partition function is very easy to evaluate. Why? Because I can write it as

Z = Σ_{σ₁} Σ_{σ₂} ⋯ Σ_{σ_N} exp( βθ₁σ₁ + βθ₂σ₂ + ⋯ + βθ_N σ_N ),

say we have N variables, the minus in exp(-βH) cancelling the minus in H. The exponential of the sum is the product of exponentials, so I can distribute each factor in front of its own sum:

Z = [ Σ_{σ₁} e^{βθ₁σ₁} ] [ Σ_{σ₂} e^{βθ₂σ₂} ] ⋯ [ Σ_{σ_N} e^{βθ_N σ_N} ] = ∏_{i=1}^{N} Σ_{σ_i} e^{βθ_i σ_i}.

And each of these sums is very easy to do: it is two times the hyperbolic cosine of βθ_i. So

Z = ∏_{i=1}^{N} 2 cosh(βθ_i),

and this is the result for the partition function. Good. Now suppose I have a different Hamiltonian: a linear term plus something quadratic — again, it does not depend on the context —

H(σ) = -Σ_i θ_i σ_i - Σ_{i,j} J_ij σ_i σ_j.

Good. So now I cannot use this trick.
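Before moving on to the quadratic case, the factorized result for the linear Hamiltonian can be verified by brute force on a small system; the sizes, fields, and temperature below are arbitrary choices.

```python
import numpy as np
from itertools import product

# Linear Hamiltonian H = -sum_i theta_i * sigma_i, sigma_i = +-1:
#   Z = sum_sigma exp(-beta*H) = prod_i 2*cosh(beta*theta_i).
rng = np.random.default_rng(4)
n, beta = 8, 1.3
theta = rng.normal(size=n)

# Brute-force trace over all 2^n spin configurations
Z_brute = sum(np.exp(beta * np.dot(theta, s))
              for s in product([-1, 1], repeat=n))
Z_factored = np.prod(2.0 * np.cosh(beta * theta))
```

The enumeration over 2⁸ configurations reproduces the product formula exactly, which is the factorization the lecture describes.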
I cannot factorize, because this double sum would become a double product, and that is very difficult; you cannot apply the trick directly. You all agree with me? So many of the tricks used when analyzing, deriving, or evaluating partition functions of such models amount to finding ways of converting this quadratic form into a linear form. And when I say quadratic, it could equally be cubic or higher, because instead of pairwise interactions you may have three-body interactions, four-body interactions, and so on. For instance, mean-field theory — have you heard about mean-field theory? Mean-field theory is an approximation you do here; it has some kind of physical interpretation, some physical intuition, and it is a manipulation that converts this double sum into a single sum. In some cases you can do this manipulation exactly; in other cases you resort to an approximation. For the model we are discussing, you can do it exactly; you do not have to approximate. Okay. So I know this beast is very scary for you, but let us analyze a model where you can use this trick of linearizing the double sum exactly, and the idea translates directly to our problem. Good. [Question.] Good, good. Okay, thanks for the question; it is very important. When I say linear, I mean linear in the sums of the dynamical variables that appear in the Hamiltonian. Of course this term is quadratic in the variables, but I do not mean the variables themselves: what matters is a linear dependence on a single sum of some function of the variables; that is quadratic in the variables, right?
If you were to have here some function of the variable — whatever the function is — this is still linear in the sense I am describing. And of course you can understand that if, for instance, instead of Ising variables you had what are called Potts variables, which take discrete values, say 0, 1, 2, 3, 4, 5, then I could put here a given power — power two, power three, power four, it does not matter — and the whole trick applies. Why? Because what is important is that the Hamiltonian is linear in sums of functions of the thermal variables. Very good question. More questions? Right, okay. So let me explain the trick in a simple model, and this model is going to be, again, the Ising model, because everybody is obsessed with the Ising model. We are going to consider the fully connected Ising model. The fully connected Ising model is an Ising model where all spins are connected to each other; it is as simple as that. So the Hamiltonian of the fully connected Ising model is the following:

H(σ) = -(J/N) Σ_{i<j} σ_i σ_j - h Σ_{i=1}^{N} σ_i,

where we may also have an external magnetic field h. So far so good? And again, if you want, this notation means the double sum over i and j with i < j. Are you with me? Have we seen these signs before?
Yes, they appeared before. So the tricks I am going to do are tricks I learned when I was a kid, when I was very young, in kindergarten. I need to evaluate the partition function of this model, which is the sum over all possible spin configurations:

Z = Σ_σ exp(-βH(σ)) = Σ_σ exp( (βJ/N) Σ_{i<j} σ_i σ_j + βh Σ_{i=1}^{N} σ_i ).

So far so good? Tell me. [Question: why divide by N?] Very good. You have to divide by N because the expectation value of this quantity is related to what is called the internal energy in thermodynamics, and you want the internal energy to be extensive in the system size, with non-trivial behavior. Here you have of the order of N² terms in the double sum, while here you have N terms: if you do not rescale by 1/N, this term will dominate over this one. You want both terms to be proportional to N; that is why you divide by N. Very good. More questions? Right. So now I am going to do exactly the same derivation as before to complete this double sum. Let us do it here, carefully. I focus on this piece inside the argument.
I have, as the first term in the argument of the exponential,

(βJ/N) Σ_{i<j} σ_i σ_j = (βJ/2N) Σ_{i≠j} σ_i σ_j.

Good. I have not done anything: this is a symmetric object with respect to i and j, so I complete the lower triangular part of the sum and divide by 2. Then I add and subtract the diagonal term:

(βJ/2N) Σ_{i≠j} σ_i σ_j = (βJ/2N) Σ_{i,j=1}^{N} σ_i σ_j - (βJ/2N) Σ_{i=1}^{N} σ_i².

Now, σ_i² = 1, so the subtracted piece is a trivial constant. But even if it were not 1 — for instance, if the Ising variables took the values 0 and 1 — regarding the behavior with the system size it is of order N⁰, a sum of N terms divided by N. So I am going to neglect this part, and I put the added diagonal here to complete the double sum. I now have

(βJ/2N) Σ_{i,j=1}^{N} σ_i σ_j + terms of order N⁰.

It is the same thing I did before, yes? Very good. Now let me do one more step. In the complete double sum, one index i goes with σ_i and the other index j goes with σ_j, so I can write it as the product of two independent sums:

(βJ/2N) ( Σ_{i=1}^{N} σ_i ) ( Σ_{j=1}^{N} σ_j ).

You agree? This sets up the manipulation I do next.
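The completed-sum identity for Ising spins can be checked directly; since σ_i² = 1, the subtracted diagonal is exactly N. A sketch with an arbitrary random spin configuration:

```python
import numpy as np

# For sigma_i = +-1:
#   sum_{i<j} sigma_i sigma_j = (1/2) * ((sum_i sigma_i)**2 - N),
# the -N being the subtracted diagonal sum_i sigma_i**2.
rng = np.random.default_rng(5)
N = 50
sigma = rng.choice([-1, 1], size=N)

lhs = sum(sigma[i] * sigma[j] for i in range(N) for j in range(i + 1, N))
rhs = (sigma.sum() ** 2 - N) / 2
```

This is exactly the bookkeeping that turns the pair sum into the square of the total magnetization.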
— let me divide here by N, divide here by N, and multiply by N² to compensate. Of course this is the same object squared, so this is equal to (βJ/2) · N · ( (1/N) Σ_{i=1}^N σ_i )². So far so good? The double sum is still here — I've just written it as a single sum squared. Notice that if instead of a two-body interaction I had a three-body interaction, a four-body interaction, a p-body interaction, for a fully connected model what I would have at the end of the day is this object to the power p. So if I follow the derivation, the partition function equals the trace over all configurations of exp( (βJ/2) N ( (1/N) Σ_{i=1}^N σ_i )² + βh Σ_{i=1}^N σ_i ). Still I have not done anything: the double sum is still there, so I still cannot factorize. Let us spell it out, very good. This notation means the complete double sum, Σ_{i=1}^N Σ_{j=1}^N σ_i σ_j; if I plug this in, I can put σ_i with its sum and σ_j with its sum, so at the end of the day I have the same sum squared. So this is again (βJ/2) · N · ( (1/N) Σ_{i=1}^N σ_i )², with a beautiful 1/N inside the square for convenience later on. Good, more questions? Can I continue?
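The single-sum-squared identity, and the p-spin generalization just mentioned, can be verified in a few lines (a sketch of mine; the variable names are arbitrary):

```python
import random

random.seed(1)
N = 50
s = [random.choice((-1, 1)) for _ in range(N)]

# the complete double sum equals the single sum squared
double = sum(si * sj for si in s for sj in s)
total = sum(s)
assert double == total ** 2

# with m = (1/N) sum_i sigma_i, (1/N) * double = N * m**2,
# which is the N * ((1/N) sum_i sigma_i)**2 structure in the exponent
m = total / N
assert abs(double / N - N * m ** 2) < 1e-9

# a p-body fully connected interaction gives the object to the power p;
# here p = 3: the triple sum is the single sum cubed
triple = sum(si * sj * sk for si in s for sj in s for sk in s)
assert triple == total ** 3
print(total, m)
```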
Yes. Now, I have not done anything yet, and apparently with this derivation you only complicate your life, you torture yourself. But you have a goal, and the goal is this: once I have this form, I can linearize it very easily. How? You introduce a Dirac delta to take this beautiful object outside of the square, and then you introduce the Fourier representation of the Dirac delta. As simple as that. So let us do it step by step, because at the end of the day all these derivations rely on the same observations; the objects to which you apply the tricks get a bit more involved, but the idea is always the same. So I continue: this is equal to the sum over all spin configurations, and I introduce an integral over m, of exp( (βJN/2) m² + βh Σ_{i=1}^N σ_i ), and I put here a beautiful Dirac delta that tells me that m must be equal to what I had there: δ( m − (1/N) Σ_{i=1}^N σ_i ). So again, it appears that I'm complicating my life, because the derivation now has many more terms. And indeed, if I were to use the defining property of the Dirac delta, I would replace m by this term and recover what I had — I have not done anything. But the beauty now is the following: I can introduce the Fourier representation of the Dirac delta. So this is equal to the sum over all configurations, the integral over m, and now the integral over m̂ with measure dm̂/(2π/N), of the exponential of what? Of (βJN/2) m² + βh Σ_{i=1}^N σ_i + iN m̂ ( m − (1/N) Σ_{i=1}^N σ_i ). And — yes, what's up?
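Since the Fourier representation of the Dirac delta is doing the real work here, a quick numerical sanity check may help. This is a sketch of mine, not part of the lecture: damping the Fourier integral ∫ dk/(2π) exp(ikx) with a Gaussian cutoff exp(−εk²/2) gives a nascent delta of width √ε, and the sifting property ∫ dx f(x) δ(x − a) = f(a) is recovered as ε → 0.

```python
import numpy as np

def nascent_delta(x, eps):
    """Gaussian regularization of the Dirac delta, obtained by damping the
    Fourier representation int dk/(2*pi) exp(i*k*x) with exp(-eps*k**2/2);
    the k-integral then gives a Gaussian of width sqrt(eps)."""
    return np.exp(-x ** 2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

# sifting property: int dx f(x) delta(x - a) -> f(a) as eps -> 0
a = 0.3
x = np.linspace(-5.0, 5.0, 20001)
dx = x[1] - x[0]
f = np.cos                      # any smooth test function will do
for eps in (1e-2, 1e-4):
    approx = float(np.sum(f(x) * nascent_delta(x - a, eps)) * dx)
    print(eps, approx, float(f(a)))
```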
Ah, why the N in front? Okay: I could introduce the Fourier representation of the Dirac delta in the standard way, without the N. But if you carry on the derivation and then apply the saddle point method — which is where we are going — you realize that, to have a non-trivial solution, m̂ has to be proportional to the system size. It's just a change of variables: m̂ is an integration variable, so I introduce an integration variable that is already proportional to N. Yes, you can do it the other way, and actually I leave it as an exercise: do it with the normal conjugate variable, m̂ without the N, carry out the whole derivation, and you get saddle point equations whose solution tells you — wait a second, for all terms to be proportional to the system size, m̂ must be proportional to N, the number of variables. Since I know this in advance — I've done this calculation a gazillion times — I always put the N there from the beginning; but if it's the first time you do it, please do it carefully. That's the reason behind it. Very good, more questions? Go ahead. — Yes, sure, but you see, this is the beauty of exponentials versus things outside the exponential. Inside the exponential this term is proportional to N, this one is proportional to N, this one is proportional to N, and you ask what happens with the prefactor. Well, I can put it inside the exponential, but then it appears as log N, which is not linear in N, so in the thermodynamic limit the terms proportional to N win over it. Very good. But you see, all this you have to do very carefully, and realize only after the painful derivations that certain constants are not important — and other times they are. Can I continue? Where was I?
Ah, okay. So apparently we have complicated the derivation for no reason whatsoever. But look at what happened. We started with a Hamiltonian with one term linear in the spin variables and one term quadratic in them, and by these manipulations we now have linear, linear: this piece is linear in the spin variables and this piece is linear in the spin variables. We paid the price of converting the quadratic sum into a linear one by introducing an integral. But now, since I am able to factorize and do the trace over the spins, at the end of the day I'm winning. Why am I winning? Because the original definition of the partition function is a sum over 2^N terms — and when N goes to infinity, that's a lot of terms. I'm trading that for two integrals, and on top of that these two integrals can be evaluated by the saddle point method. Clear? So it appears that I have complicated matters, but that is not true in the thermodynamic limit. That's why this trick is so cool. So, can I continue? Go ahead. — From this one to this one? Very good. These are very good questions, because these tricks are standard, but sometimes you don't see them, and they are very powerful, so please ask as many questions as you need. The trick I'm using is the following: I have a function whose argument is something I desire. It doesn't matter which function — in our case it's the exponential, which has beautiful properties — but I have a function of something I desire, and this m(σ) is a function of σ.
The only trick I'm using is that I can write this as an integral over a variable — call it x — of f(x) times the Dirac delta of x minus m(σ): f(m(σ)) = ∫ dx f(x) δ( x − m(σ) ). So from the step before to here, what I've done is to use this identity. Good, more questions? — Linear in the sums of the dynamical variables: you see, here I have a linear sum over σ_i, and here I have a linear sum over σ_i; I no longer have the double sum Σ_{i,j} σ_i σ_j over the variables. We had it before, because if I go back I have a linear sum squared — that's a double sum — and the point is that I cannot factorize it, so I cannot calculate the partition function in a straightforward manner. I mean, if I go back and undo all the derivations to the original Hamiltonian, I cannot directly factorize the argument of the exponential under the trace over the states of the system. You can try: suppose I have Σ_σ exp( β Σ_{i<j} σ_i σ_j ); writing this as a product of single-site sums is simply not true, and I don't know how to continue. So I want to linearize this in order to be able to factorize the sum over all possible configurations. Very good, more questions? Go ahead — can you speak up? Ah, well, if it helps I can do one more step. Let me do it there.
Okay, so what we have is the following. I'm going to use this property of the Dirac delta, in the sense of distributions: f(x) δ(x − a) = f(a) δ(x − a), so that under an integral the delta picks out f(a). What I do is write down the Dirac delta δ( m − (1/N) Σ_{i=1}^N σ_i ) as ∫ dm̂/(2π/N) exp( iN m̂ ( m − (1/N) Σ_{i=1}^N σ_i ) ). This is what you were asking? The only thing I've done is that, instead of my conjugate variable being m̂, it is m̂ times N, with the corresponding 2π/N in the measure; if you prefer you can put the N in the numerator, it doesn't matter. N is the number of variables; that's all I have done. Again, if you don't like it, do the standard thing, carry on the derivation I'm about to do, and at the end of the day you realize that in order to have a non-trivial saddle point, m̂ must be proportional to N. Since I know this in advance — it's always like this in this family of derivations — I put it in from the beginning. More questions? Okay, so I put this expression in, and what do I have: the sum over all configurations of the integral over m and m̂, with measure dm dm̂/(2π/N), of exp( (βJN/2) m² + βh Σ_{i=1}^N σ_i + iN m̂ ( m − (1/N) Σ_{i=1}^N σ_i ) ). So far everybody is cool with this? One, two, three — very good. Next step: I simply rearrange terms, because now I want to do the trace over σ.
The σ's are here and here, and this is now linear, so I can factorize. This is equal to ∫ dm dm̂/(2π/N) of the exponential — let me put first the terms that depend on m and m̂, which are (βJN/2) m² + iN m̂ m — times the trace, the sum over all configurations, of exp( βh Σ_{i=1}^N σ_i − i m̂ Σ_{i=1}^N σ_i ). So far so good? And this is now very easy to do. Shall I do it step by step? Maybe yes. The trace equals the product over i from 1 to N of the sum over σ_i = ±1 of exp( (βh − i m̂) σ_i ). And — you were going to ask, and it's also a very good question — there is an imaginary unit here; this is weird. What happens is that the saddle point tells you that i m̂ must be real. We'll see this; let me leave it as an exercise. Now this is very easy to write, because each factor is the same: the trace equals [ 2 cosh( βh − i m̂ ) ]^N, and I can write that as the exponential of N times its logarithm and put it inside the argument. So at the end of the day this equals ∫ dm dm̂/(2π/N) exp( (βJN/2) m² + iN m̂ m + N log( 2 cosh( βh − i m̂ ) ) ) — I was missing an N on the m² term. Now let us recapitulate, because it's easy to get lost in what we are trying to achieve. I deleted that one.
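The factorized trace is the one genuinely new computation in this step, so here is a tiny brute-force check (my own sketch; trace_exp is a hypothetical helper, and the coupling is taken real, standing in for βh − i m̂): the sum over all 2^N configurations of exp( c Σ_i σ_i ) equals (2 cosh c)^N.

```python
import itertools
import math

def trace_exp(N, c):
    """Brute-force sum over all 2^N Ising configurations of exp(c * sum_i s_i).
    Because the exponent is a sum of single-site terms, the trace factorizes
    into N identical single-spin sums, each equal to 2*cosh(c)."""
    return sum(math.exp(c * sum(s))
               for s in itertools.product((-1, 1), repeat=N))

N, c = 10, 0.37          # c plays the role of beta*h - i*m_hat, taken real here
lhs = trace_exp(N, c)
rhs = (2 * math.cosh(c)) ** N
print(lhs, rhs)
```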
It doesn't matter. So we managed to write down, exactly, in the thermodynamic limit, the partition function for this case — which by definition is the sum over all configurations of exp(−βH(σ)) for the fully connected Ising model, a sum with 2^N terms, which when N grows is a lot of terms. I managed to trade a very complicated sum for just a double integral: Z = ∫ dm dm̂ exp( N f(m, m̂) ), where f(m, m̂) = (βJ/2) m² + i m̂ m + log( 2 cosh( βh − i m̂ ) ). So why do we do what we do? It's not that we want to torture ourselves. I managed to transform an object that is very complicated to evaluate into a double integral — and not only that: it is the integral of the exponential of something that grows with the system size, so I don't even have to do the integral. In the limit N → ∞ the leading behavior is exp( N f ) evaluated at the saddle point. That's why we do what we do. All these manipulations, where we appear to complicate things, are there to achieve exactly this: writing the partition function as an integral over a few objects, with an exponent that grows with the system size. Very good questions. Do we have an interpretation of m̂? It is a conjugate variable — complex, in general — that forces the magnetization to take a given value; it comes from the Fourier representation of the Dirac delta. — Sorry? Ah, the 2π/N in the measure, thank you, very good. You cannot let me miss anything right now.
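Extremizing f(m, m̂) over both variables gives i m̂ = −βJ m from the m-derivative, and substituting into the m̂-derivative yields the self-consistency equation m = tanh( β(Jm + h) ). A short sketch of mine (assuming Python; the function names are illustrative) compares this saddle point prediction with the exact finite-N magnetization, which can be computed even for large N by grouping configurations by their total spin M = 2k − N.

```python
import math

def saddle_m(beta, J, h, m0=0.5, iters=200):
    """Solve the Curie-Weiss saddle point equation m = tanh(beta*(J*m + h))
    by fixed-point iteration."""
    m = m0
    for _ in range(iters):
        m = math.tanh(beta * (J * m + h))
    return m

def exact_m(N, beta, J, h):
    """Exact magnetization per spin at finite N: configurations are grouped
    by total spin M = 2k - N with binomial multiplicity, and log-weights
    keep large N numerically safe."""
    logw, mag = [], []
    for k in range(N + 1):
        M = 2 * k - N
        lw = (math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)
              + beta * J * M * M / (2 * N) + beta * h * M)
        logw.append(lw)
        mag.append(M / N)
    top = max(logw)                      # shift for numerical stability
    Z = sum(math.exp(lw - top) for lw in logw)
    return sum(math.exp(lw - top) * mg for lw, mg in zip(logw, mag)) / Z

beta, J, h = 2.0, 1.0, 0.1
ms = saddle_m(beta, J, h)
for N in (10, 100, 1000):
    print(N, exact_m(N, beta, J, h), ms)   # finite-N value approaches the saddle
```

The finite-N corrections are of order 1/N, which is the point made below about terms one throws away in the thermodynamic limit.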
Okay — for you, all terms are potentially relevant. Very good, what else? — And then you have log N divided by N, which goes to zero when N goes to infinity. Let us do it. First of all, it's a constant: it's just log N, it does not depend on the integration variables, so it's not going to change the result of the saddle point. It could change the value of the free energy, although even that doesn't matter much, because absolute values of the free energy are not important — differences of free energy are. But say you want to do the derivation carefully: you can include it, and then, when N goes to infinity, (log N)/N goes to zero. This is what you were asking. But — what was your name? Sorry — Daniela: the point is that keeping track of these terms is important, because in the second mapping we discussed, the one with the moment-generating function of the number of eigenvalues to the left of x, the replica limit, instead of going to zero, goes to an imaginary number, and in some cases the terms you thought were irrelevant become relevant. So it's very important to keep track of all these terms, which in this case happen to be irrelevant. More questions? What time is it? — Well, if you don't want to take N to infinity, then you have to re-examine the derivation, because you threw away terms of order N⁰. Yes, in some cases you want to take into account corrections to the saddle point due to the finiteness of the system, and then you have to keep all those terms and do perturbation theory. Now, before we leave: is this trick understood?
So you understand now the spirit of this trick, and it is always like this, whatever the object. So let us go back to the expression we had for the average spectral density of Poissonian graphs, the Erdős–Rényi graphs. At some point, after raising the partition function to the power n and averaging over the Erdős–Rényi ensemble, I have the integral over the replica vectors — the product over i from 1 to N of dx⃗_i, where each x⃗_i has n replica components — of the exponential of −(z/2) Σ_{α=1}^n Σ_{i=1}^N (x_i^α)² + (d/2N) Σ_{i,j=1}^N ( exp( x⃗_i · x⃗_j ) − 1 ), where the dot product is taken in replica space. I'm a bit tired already, but I think the expression is fine. So what I have here is a double sum over the site variables, and I'm trying to take the thermodynamic limit. If I manage to write this double sum as a single sum, I can factorize per node, and I will get an expression where I can apply the saddle point method. And for this trick it doesn't matter that you are now in replica space, that you have something weird here — an exponential inside an exponential. It doesn't matter at all: the trick is the same in spirit, in its soul. It's the same trick.
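Before trusting the replica formula, it may help to see the object it is supposed to reproduce. This is a plain Monte Carlo sketch of mine, not the replica calculation: sample adjacency matrices with P(C_ij = 1) = d/N, diagonalize, and histogram the eigenvalues to estimate the ensemble-averaged spectral density (the function names are illustrative).

```python
import numpy as np

def er_adjacency(N, d, rng):
    """One sample of the Erdos-Renyi ensemble: a symmetric 0/1 matrix with
    zero diagonal, where each pair i < j is linked with probability d/N."""
    upper = np.triu(rng.random((N, N)) < d / N, k=1)
    return (upper + upper.T).astype(float)

def mean_spectral_density(N, d, samples, bins, rng):
    """Monte Carlo estimate of the ensemble-averaged empirical spectral
    density, obtained by diagonalizing `samples` random adjacency matrices
    and pooling their eigenvalues into a normalized histogram."""
    eigs = np.concatenate([np.linalg.eigvalsh(er_adjacency(N, d, rng))
                           for _ in range(samples)])
    return np.histogram(eigs, bins=bins, density=True)

rng = np.random.default_rng(0)
hist, edges = mean_spectral_density(N=200, d=4.0, samples=20, bins=60, rng=rng)
print(edges[0], edges[-1], hist.max())
```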
The only thing you have to learn is what you have to introduce to linearize this. Cool. In the same way that in the fully connected Ising model I had to introduce the magnetization — which, by the way, is the order parameter that signals the transition at the critical point — here the object I have to introduce is this one: p(x⃗) = (1/N) Σ_{i=1}^N δ( x⃗ − x⃗_i ). I can write the expression in terms of this object, and then I use a Dirac delta to enforce that the object I have introduced is equal to this. Go ahead. — Ah, it's kind of funny, okay. Do I have time still? Five more minutes? It's kind of funny, so let us redo the earlier trick a bit differently, to see the connection. Very good, this is excellent. I have the double sum Σ_{i,j=1}^N σ_i σ_j, and before, the way to linearize it was to introduce m. But let's do it a bit differently, which is the trick you are going to use here. I can write it as a sum over an auxiliary spin variable τ: Σ_{τ=±1} Σ_{i,j=1}^N τ σ_j δ_{τ,σ_i}, where now these are Kronecker deltas, because if I do the sum over τ, I substitute τ by σ_i. Now I can do the same thing for the other variable: this is equal to Σ_{τ=±1} Σ_{τ'=±1} Σ_{i,j=1}^N τ τ' δ_{τ,σ_i} δ_{τ',σ_j}. So before, what I did was introduce the magnetization; now I'm going to linearize with a more complicated object — but this object is the one you will need here. You will see it appear naturally.
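The auxiliary-spin rewriting can be checked directly. A sketch of mine, assuming Python: for any configuration, the double sum Σ_{ij} σ_i σ_j equals Σ_{τ,τ'} τ τ' n(τ) n(τ'), where n(τ) = Σ_i δ_{τ,σ_i} counts the sites in state τ.

```python
import random

random.seed(2)
N = 30
s = [random.choice((-1, 1)) for _ in range(N)]

# left-hand side: the complete double sum over pairs of sites
double = sum(si * sj for si in s for sj in s)

# occupation numbers n(tau) = sum_i KroneckerDelta(tau, sigma_i)
n = {tau: sum(1 for si in s if si == tau) for tau in (-1, 1)}

# right-hand side: sum over the auxiliary spins tau, tau'
linearized = sum(t * tp * n[t] * n[tp] for t in (-1, 1) for tp in (-1, 1))
assert double == linearized
print(double, n)
```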
So now, what is this equal to? I can do the following: I have the sum over τ and τ' of τ τ', and then this is a complete double sum, so the sum over i goes with this guy and the sum over j goes with the other: Σ_{τ,τ'} τ τ' ( Σ_{i=1}^N δ_{τ,σ_i} )( Σ_{j=1}^N δ_{τ',σ_j} ). And if I divide each sum by N and put an N² in front — what is the meaning of this object? It's a distribution: it goes over the nodes and tells you how many of them are in the state τ. So you can call it p(τ) = (1/N) Σ_{i=1}^N δ_{τ,σ_i}, and the same function for τ' is p(τ'), and the double sum becomes N² Σ_{τ,τ'} τ τ' p(τ) p(τ'). Then you introduce a Dirac delta for this object, instead of for the magnetization — at the end of the day it is the same trick. In the Ising case introducing this whole object is not really necessary, because what you need to linearize involves only its expectation value; but in the replica case you have to introduce this object, the distribution, directly. More questions? More questions? Okay. Shall we continue tomorrow? No, no, there's a coffee break — I'm tired, you know. By the way, tomorrow — I still need to confirm — I'm going to switch the lectures with my colleague: one of us will go first and I'll go second. Okay, thank you.