because I have here like 7 or 8 exercises, ok, and a couple of challenges. Yeah, ok, so let us go back to the idea of the cavity method that some of you have been asking about. I left an exercise to derive the cavity equations, the cavity equations for the cavity fields. Have you done this? Have you managed to do it? All of you? Ok, so the idea was the following. Remember that to present the cavity method we used the Ising Hamiltonian on a graph. I think I introduced something like this: H = -Σ_{(i,j)∈G} J_ij σ_i σ_j - Σ_{i=1}^n H_i σ_i, where the double sum runs over the edges of the graph. And then I motivated the cavity method by saying: suppose you want to focus on the magnetization, etc., etc., remember, right? The point of that motivation was to emphasize that I was obsessed with calculating single-site marginals of the Gibbs-Boltzmann distribution. And what we found in the derivation is that the probability of finding the spin at node i in configuration σ_i can be written as p_i(σ_i) = (1/Z_i) exp(β H_i σ_i) times — ok, let me do one more step — the sum over all possible configurations of the neighborhood of i, ∂i, of exp(β σ_i Σ_{l∈∂i} J_il σ_l) times the joint distribution of the neighborhood of i when i has been removed. And then here we said: on a graph, if I remove i, this joint distribution becomes a product of single-site distributions. If this is not exact, it is the so-called Bethe approximation, right? So the joint distribution p^{(i)}(σ_∂i) equals the product Π_{l∈∂i} p_l^{(i)}(σ_l); this is exact when G is a tree, and if the graph is not a tree, it's an approximation. And from there we get these cavity equations. Well, p_i(σ_i) as written is still not the cavity equation.
p_i(σ_i) is equal to (1/Z_i) exp(β H_i σ_i) Π_{l∈∂i} Σ_{σ_l} exp(β σ_i J_il σ_l) p_l^{(i)}(σ_l), where p_l^{(i)} is the marginal at node l when i has been removed. And the way to close these equations — I left this as an exercise, maybe I'll show it later. These are not closed equations, because this object on the right, the cavity marginal, is not the same as the object on the left. The way to close them: you do the same exercise, you remove a node, and you obtain p_i^{(j)}(σ_i) = (1/Z_i^{(j)}) exp(β H_i σ_i) Π_{l∈∂i\j} Σ_{σ_l} exp(β σ_i J_il σ_l) p_l^{(i)}(σ_l). These are the ones we call the cavity equations for the cavity marginals. Now, I said that you can parameterize these cavity marginals using cavity fields. I can write, for instance, p_i^{(j)}(σ_i) = exp(β h̃_i^{(j)} σ_i) / (2 cosh(β h̃_i^{(j)})). And I said: prove that if you put this parameterization into the cavity equations, you get the cavity equations for the cavity fields. Have you tried to do this? Did you manage the derivation? Who tried to work it out and didn't manage? Don't be shy — the first time I tried, I couldn't manage. I spent days on this derivation. Shall we do it? So how do we do it? Well, the idea is very simple: I have to put this parameterization into the cavity equations and work out the expression somehow to get the cavity equations for the cavity fields. But there are smart ways to do this. So what is the smart, or simpler, way? Well, first of all, to realize the following from this definition, this parameterization, of the cavity marginal.
I realize that the cavity field h̃_i^{(j)} is equal to (1/2β) Σ_{σ_i} σ_i log p_i^{(j)}(σ_i). Right? Yes? Why is that? Because if I take this definition, the logarithm of the exponential gives me the argument; the argument gets multiplied by σ_i, so I have σ_i², and σ_i takes values ±1. The sum over σ_i gives me a 2, which cancels the 2 in the 1/2β. And then I have the logarithm of the denominator: the denominator is a constant, and I have the trace of σ_i times a constant, which is 0, because σ_i takes values ±1. What's up? Sorry? Yeah, sigma is sigma — remember I'm a poor human being and I make mistakes when I rush. Yeah? Sorry about that. More questions? Go ahead. Ok, let us do it. Is that ok? This comes just from the parameterization. So, let us say this is true. This is equal to (1/2β) Σ_{σ_i} σ_i log[ exp(β h̃_i^{(j)} σ_i) / (2 cosh(β h̃_i^{(j)})) ]. Now, what do I have from the numerator? I have (1/2β) Σ_{σ_i} σ_i · β h̃_i^{(j)} σ_i, and then I have minus (1/2β) Σ_{σ_i} σ_i log[2 cosh(β h̃_i^{(j)})]. Now remember, it works in this case because the σ_i are Ising variables taking values ±1. So in the first term you have σ_i², which is 1; then the sum over σ_i takes two values, giving a factor 2; and the 2β cancels. So this first term is directly h̃_i^{(j)}, yes?
And the second one — apparently I'm missing a lot of things today — here you have the sum over σ_i of σ_i times the log of a constant, something that does not depend on σ_i, and Σ_{σ_i} σ_i = 0 because it takes values ±1, so this term is 0. Better? Good. Questions? No? So then we use this in the cavity equations for the cavity marginals. So what I have — let's try to do it here — is that the cavity field h̃_i^{(j)} equals (1/2β) times, and now, ok, let me put the whole piece and do it step by step, the sum over σ_i of σ_i times the logarithm of all of it, right? (1/Z_i^{(j)}) exp(β H_i σ_i), the product over l ∈ ∂i\j of the sum over σ_l, and I put in the definition of the cavity marginal in terms of the cavity field. So this would be exp(β σ_l (J_il σ_i + h̃_l^{(i)})) / (2 cosh(β h̃_l^{(i)})). Help me — have I missed something? I don't think so. The σ_l factor, where? Here: I have β σ_i J_il σ_l, and then the field term with σ_l — ah, yeah, yeah, so it's the other way around, thank you, like this. Yeah? Excellent. This equation here — we did the derivation, I think during the second day, right? But we actually did the derivation for the one above, the one where I have the joint distribution; I just erased it. And I left as an exercise, which we are also going to do today, how I know that I can write the cavity marginal in this parameterized way. Ah, well — because σ_i takes two values, right? Plus one and minus one. Ok?
So, since this is a Boolean variable, or Ising variable — a variable that takes two values — to fully characterize this probability you need one number. Yeah? I put it like this: the probability that the cavity spin takes the value +1, p_i^{(j)}(σ_i = +1), is some parameter; let's call it a_i^{(j)}. Then I might need the probability that σ_i takes the value -1, but since the distribution is normalized, that is 1 - a_i^{(j)}, no? So the only thing I am emphasizing here is that I need just one real number to fully characterize this distribution, because the random variable takes two values. Are you with me? Right? Just one parameter. So the cavity field is a smart way of expressing this parameter. More questions? This is not an approximation; this is not an ansatz. It is a way to parameterize the distribution — it has this form by definition. And the reason I realize it is convenient is that the cavity field can be extracted from this distribution by doing this expectation value. It's just that: a definition, and a consequence of the definition. More questions? Come on, don't be shy. That's it? Can I continue? Very good. So now, notice the following: here I have the logarithm of a product, with things in the numerator and things in the denominator, okay? In the denominator what I have are constants — the normalization and the 2 cosh factors. When I apply the logarithm, since I then do the sum over σ_i, those give me zero, like before. So I'm not going to worry any more about those. Then I have the log of a product, okay?
That is the sum of the logs, alright? So I forget about the denominators, and what I have is the following. This is equal to (1/2β) Σ_{σ_i} σ_i log exp(β H_i σ_i) — this is like before — and now the log of the product gives plus the sum over l ∈ ∂i\j of (1/2β) Σ_{σ_i} σ_i times the log of this trace here, no? The log of Σ_{σ_l} exp(β σ_l (J_il σ_i + h̃_l^{(i)})), the cavity field at l where i has been removed. Yeah? Sorry? This is a constant — why? Ok, so when I say it's a constant, it is with respect to certain variables: the dynamical variables. Of course it is not a constant in the sense that it depends on the temperature, yeah? But I'm focusing on the dependence on σ. Since this normalization is a ratio of two partition functions, right — the partition function of the original system and the partition function where I remove something — it cannot depend on σ. That's why I say it's a constant. Well, sure. Remember that the 1/Z_i in the relationship between p_i(σ_i) and the cavity marginals — this was in the derivation — was Z divided by Z^{(i)}. And what is Z? The partition function: the sum over all possible configurations of exp(-β H(σ)). Doing the trace over the values of the spins, this cannot depend on the spins. Yeah? Of course it depends on the temperature, and it can depend on other control parameters, but when I say it's a constant, I'm referring to the spins,
the dynamical variables. Better? Good, excellent, very good. More questions, guys? Don't be shy. No? That's it? Very good. So this is trivial now, right? Because the first piece is the same expression as before — same trick — so it equals H_i. And what is the rest? Well, it's a bit of a nightmare, but you do it, you cross your fingers, and at the end of the day everything comes out right. So this will be equal to the sum over l ∈ ∂i\j of (1/2β) Σ_{σ_i} σ_i times the logarithm — and this trace is trivial, no? This trace is 2 cosh of the argument of the exponential, right? So this is 2 cosh(β(J_il σ_i + h̃_l^{(i)})). Yeah? And then I do the other sum: I have σ_i times the log, with σ_i = ±1. And what does this give? This gives H_i plus the sum over l ∈ ∂i\j of (1/2β) times the logarithm of 2 cosh(β(J_il + h̃_l^{(i)})) divided by 2 cosh(β(h̃_l^{(i)} - J_il)). Right? Because I have one logarithm minus the other one, so it's the logarithm of the ratio: in the numerator, σ_i = +1; in the denominator, σ_i = -1. Good? Very good. And I was missing the 1/2β — sorry. Now we have to remember that the hyperbolic cosine of a + b is something, no? There is a formula. You have to speak up. Why is this a hyperbolic cosine? Because this is a real exponential, right? So remember that — what's up? Ah, the beta. Yeah, sorry. Ok, today I'm like this. Very good, thanks, I need to pay more attention to that. Right, why is this a hyperbolic cosine? Well, because —
Because exp(x) + exp(-x) is, by definition, the hyperbolic cosine times two. Right? Sorry — not divided by two here; the trace gives the factor 2 explicitly. It's just that. Well, yeah — this sum: remember, I'm not being explicit, because otherwise it's a nightmare to write everything down, so some things I leave implicit. The sum over σ_l is over the values of σ_l, which are +1 and -1, for this case. Yeah? Very good. So now you have to remember that cosh(a + b) = sinh(a) sinh(b) + cosh(a) cosh(b). And you have to remember that the hyperbolic cosine is an even function and the hyperbolic sine is an odd function. Yes? So now I apply this formula here, and what do I get? It is equal to — I continue here — H_i plus the sum over l ∈ ∂i\j of (1/2β) times the logarithm of... ok, the 2 cancels with the 2, so I don't have to write it anymore. And instead of writing the hyperbolic sine in full, let me write it as sh(x), and the hyperbolic cosine as ch(x). So in the numerator I have sh(β J_il) sh(β h̃_l^{(i)}) + ch(β J_il) ch(β h̃_l^{(i)}), and the same in the denominator with a minus sign in the right place: minus — no, the other way around — minus sh(β J_il) sh(β h̃_l^{(i)}) plus ch(β J_il) ch(β h̃_l^{(i)}). Right? Are you with me? I hope so. Sh sh, ch ch; sh sh, yes. Ok, today I'm missing too many letters, sorry. Thank you. And then I take this piece — which piece do I want to take? I take the product of the hyperbolic cosines and divide everything by it, in such a way that here I have hyperbolic tangents.
Right? This is equal to H_i plus the sum over l ∈ ∂i\j of (1/2β) times the logarithm of [1 + th(β J_il) th(β h̃_l^{(i)})] divided by [1 - th(β J_il) th(β h̃_l^{(i)})], writing th for the hyperbolic tangent. Right? Are you with me? And then I need to remember one more formula, for the inverse hyperbolic tangent, which I'm going to denote ath: ath(x) = (1/2) log[(1 + x)/(1 - x)]. So here, what do I have? One half of the log of 1 plus something, divided by 1 minus that something. Therefore, that's the inverse hyperbolic tangent. So therefore, finally, this is equal to h̃_i^{(j)} = H_i + Σ_{l∈∂i\j} (1/β) ath( th(β J_il) th(β h̃_l^{(i)}) ). And that's it. This is what we call the message, right? This was the function we introduced as u(J_il, h̃_l^{(i)}). Clear? So the derivation is not difficult, it's just annoying. Difficult it is not. So then, last week I also left an exercise — now we go to the first mapping we did. In that mapping we had symmetric matrices, and we were worried about the spectral density of those matrices, and we found out that this can be attacked as a spin-glass-like problem. Right? So at some point we wrote down what? We wrote that the spectral density is the limit as η → 0+ of (1/πn) times the imaginary part of the sum for i from 1 to n of — or better still, sorry, let me use the intermediate step — the expectation value of x_i², evaluated at z = λ - iη, right?
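The closed cavity-field update just derived can be sketched as a small message-passing loop in Python. This is a minimal illustration, not code from the lecture; the graph (a 3-spin chain) and the values of β, J, H are made-up toy choices. On a tree the iteration is exact, which makes the result easy to check against brute-force enumeration.

```python
import numpy as np

# Cavity-field ("message") update derived above, for Ising spins on a graph:
#   h_cav[i -> j] = H + sum over l in N(i)\{j} of u(J, h_cav[l -> i]),
# with the message function u(J, h) = (1/beta) * atanh(tanh(beta*J) * tanh(beta*h)).
# Toy example (made-up values): a 3-spin chain 0 - 1 - 2, uniform J and field H.

beta, J, H = 0.5, 1.0, 0.2

def u(Jil, h):
    """Message function u(J_il, h~_l^{(i)}) from the derivation."""
    return np.arctanh(np.tanh(beta * Jil) * np.tanh(beta * h)) / beta

neighbors = {0: [1], 1: [0, 2], 2: [1]}
# h[(i, j)] is the cavity field at node i when neighbor j has been removed
h = {(i, j): 0.0 for i in neighbors for j in neighbors[i]}
for _ in range(100):  # iterate the closed cavity equations to a fixed point
    h = {(i, j): H + sum(u(J, h[(l, i)]) for l in neighbors[i] if l != j)
         for (i, j) in h}

# Full local field and magnetization at the middle spin
h1 = H + sum(u(J, h[(l, 1)]) for l in neighbors[1])
m1 = np.tanh(beta * h1)
print(m1)
```

Since the chain is a tree, m1 agrees with the exact magnetization obtained by summing over all 2³ spin configurations; on a loopy graph the same iteration would be the Bethe approximation mentioned earlier.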
And then here we realized, we discussed, that we can use the cavity method, etc., etc., and we got the cavity equations for the measure associated to this expectation value. We have that the Gibbs distribution P(x) is equal to (1/Z) exp(-H(x)), where the Hamiltonian is H(x) = (1/2) Σ_{i,j=1}^n x_i (z·1 - A)_{ij} x_j, with 1 the identity matrix. Yeah? Something like this? Again, today for some reason I'm missing letters, ok? So just let me know if you notice that I've missed something. And then again, the idea is to realize that for this object I can apply the cavity method, and I obtain the cavity marginals associated to the single-site marginals I need, to solve this problem in a different way. So the cavity equations we got for the cavity marginals were these ones, no? p_i^{(j)}(x_i) = (1/Z_i^{(j)}) exp(-(z - a_ii) x_i²/2) times the product over the neighborhood of i without j of the integral ∫ dx_l exp(x_i a_il x_l) — with a plus sign — times the marginal at node l when i has been removed, p_l^{(i)}(x_l). Is that correct? I think so. Somebody last week told me that I forgot the a_ii term: if you don't assume that the diagonal elements of the matrix are zero, it should be there. And then we discussed that now the dynamical variables are continuous, so in principle you would need an infinite number of parameters to fully characterize this distribution; but you realize that if this has a Gaussian form, then the equations are closed under the Gaussian form. So then I left the following exercise, which we are going to do right now: take a parameterization for these cavity marginals that captures this observation, the fact that these must be Gaussian distributions, right?
So we introduce the following parameterization: p_i^{(j)}(x_i) = (1/√(2π Δ_i^{(j)})) exp(-x_i² / (2Δ_i^{(j)})). And then, the same as I did for the cavity fields: in this case I have to plug this into the cavity equations and express them as equations for the Δ's. So the only thing — or the first thing — I have to do is this integral. So let us do it. I have the integral over dx_l of the exponential, and now I put in the parameterization, no? Be careful: by definition, the indices here are reversed — it is the marginal at node l when i has been removed, so the corresponding parameter is Δ_l^{(i)}. So I have ∫ dx_l (1/√(2π Δ_l^{(i)})) exp(-x_l²/(2Δ_l^{(i)})), and inside the argument of the exponential I put this part here: plus x_i a_il x_l, yeah? So far so good? And you know how to do this integral, no? There are many ways: either you remember the formula, or you notice that the way to solve it is to complete the square, do a change of variables, etc. The formula is ∫ (dx/√(2π)) exp(-x²/2 + ax) = exp(a²/2). So this integral is equal to what? If you want, you can do a change of variables to move the Δ's to the other side, in such a way that you have the same measure, and then you apply the formula. At the end of the day you will obtain — I'm integrating over x_l; this should be l, sorry — exp(x_i² Δ_l^{(i)} a_il² / 2). I'm making a lot of mistakes today, apologies. Is this okay? I think so, yeah?
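The Gaussian formula used in this step can be checked numerically. The sketch below uses a real coefficient a (an arbitrary test value of my own); in the cavity derivation the coefficient involves the complex z, but the analytic identity is the same.

```python
import numpy as np

# Check of the identity  int dx/sqrt(2*pi) exp(-x^2/2 + a*x) = exp(a^2/2)
# by a simple Riemann sum on a wide, fine grid (a = 0.8 is an arbitrary value).
a = 0.8
x = np.linspace(-12.0, 12.0, 200001)
dx = x[1] - x[0]
lhs = np.sum(np.exp(-x**2 / 2 + a * x)) * dx / np.sqrt(2 * np.pi)
rhs = np.exp(a**2 / 2)
print(lhs, rhs)
```

Analytically this is exactly the completion of the square mentioned above: -x²/2 + ax = -(x - a)²/2 + a²/2, and shifting x → x + a leaves the Gaussian measure normalized.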
Now I plug that back in. On the right-hand side of the cavity equations for the cavity marginals I'll have: p_i^{(j)}(x_i) = (1/Z_i^{(j)}) times the exponential of — let me put it like this now — -(z - a_ii) x_i²/2, and then I'll have the product, which I put inside the argument of the exponential as a sum: plus (x_i²/2) Σ_{l∈∂i\j} a_il² Δ_l^{(i)}. Now let me rearrange this a bit. I can write it as (1/Z_i^{(j)}) exp( -(1/2) [ z - a_ii - Σ_{l∈∂i\j} a_il² Δ_l^{(i)} ] x_i² ). Is that ok? I think so. And this must be equal — because this is the cavity equation for the cavity marginal — to p_i^{(j)}(x_i) in its parameterized form: exp(-x_i²/(2Δ_i^{(j)})) / √(2π Δ_i^{(j)}). So you see what I told you before: you put a Gaussian in, you get back a Gaussian, and on the other side you have a Gaussian, so the system is closed when the functions have the Gaussian form. And from here you can see what the cavity equations become: since this is a Gaussian and this is a Gaussian, the variance that appears here must be the variance that appears there. That means Δ_i^{(j)} must be equal to the inverse of this bracket. So, comparing, you get that the parameter Δ_i^{(j)}, the inverse of this, must be
precisely Δ. So Δ_i^{(j)} must be equal to 1 / ( z - a_ii - Σ_{l∈∂i\j} a_il² Δ_l^{(i)} ). Clear? A physical interpretation? Physical interpretation — no, there is a mathematical interpretation. A physical one, I'd have to think about, because we are coming from a mathematical problem mapped onto a spin-glass-like problem — even though it's not really a spin glass, because, as I mentioned before, this z is a complex number, so the Hamiltonian is not a Hamiltonian and the Boltzmann measure is not a measure. It's difficult to give an explanation of the physical meaning of this; there is a mathematical meaning, but not really a physical meaning, sorry. More questions? So, how do you find this exercise? Simple, do you agree with me? Yeah, excellent, very good. Sorry? What do you mean, we take the specific value of the Gaussian? You mean this mapping, right? So suppose you want to calculate, or estimate, or approximate, the spectral density of a symmetric real matrix, ok, and you don't want to diagonalize the matrix — because if you diagonalize the matrix, then getting the empirical spectral density is very simple — or maybe it's not possible, because the matrix is very, very large. Right? So the recipe — this mapping — is actually exact. The empirical spectral density for a matrix A, ρ_A(λ), we noticed, can be written like this: (1/πn) times the imaginary part of Σ_{i=1}^n ⟨x_i²⟩, evaluated at z = λ - iη — and I'm not writing that η has to go to 0+, right? So your question is — no, it's not an average over a Gaussian. Well, yeah, they are Gaussian measures, yes, but the point is that you use the cavity method
to do the expectation value directly. So you have already calculated the expectation values: they are given by these variances, and these variances obey these closed equations. Better? More questions? Ah, well, ok — it's not that the normalization factor disappears. It doesn't disappear, but you have to realize that it is the normalization factor of the marginal: if I were to integrate this, it would give me precisely √(2π) times the square root of the inverse of this bracket. And when you compare the left-hand side and the right-hand side, you get the same information from the normalization factor as from comparing the arguments of the exponentials. Because, you see, the numerator here has an exponential form, so the variance captured by this expression must be equal to the variance captured by that expression — that gives you the equation. And you say: what happens with the normalization factor? Well, it gives you the same information. Since the cavity equations are for cavity marginals, the marginal has to be normalized; after normalization it tells you that Z_i^{(j)} is equal to the normalization of this Gaussian, and on the other side you know that the normalizations also have to agree. So you get that the variance appearing in the normalization has to be related to the variance appearing here, and you get the same equation. Yeah — they are automatically matched. Good, more questions? Well, now, I left one more exercise, and then we are going to do a challenge. The exercise was the following. Suppose now we consider a particular type of matrix — remember, this was for n×n real symmetric matrices A. Suppose that I can draw the matrix as a graph, where the number of nodes is the dimension of the matrix, and
the links between the nodes are the entries of the matrix. Yeah? So suppose that the graph associated to this matrix is what I call a homogeneous random regular graph with degree K. What this means is the following. A random regular graph is a regular graph where each node has the same number of neighbors, which in this case is K. For instance, suppose K is 3: I have one node, whatever, and it is connected to 3 other nodes, and these nodes are connected to other nodes in such a way that the degree is 3, etc., etc. So each node has the same number of neighbors — 3 in this example. Now, we assume the graph is homogeneous. What does that mean? You have to speak up. Sorry? No, ok: in principle, for a regular graph, they are trees, but the problem with a tree is that if you keep growing it, the boundary becomes larger than the bulk of the graph. This is what is called a Cayley tree, and sometimes dealing with Cayley trees can be a bit annoying. So, to get rid of the boundary growing exponentially, what you do is grow the tree to a certain extent, right? And then at some point you take the boundary and you start connecting the boundary nodes, trying to keep the degree as well. And of course there are many, many ways of connecting the nodes of the boundary; for each different configuration you get a different regular graph. That is the meaning of the randomness, of "random" in random regular graph: the different ways you have to connect the nodes on the boundary between themselves. Yeah? More questions? Now, the homogeneity part — what does homogeneity mean? It means the following. Suppose I sit at this node — I'm here — and I look around, and I see something, right? If the graph is homogeneous, this means that if I go to another node, sit at that node, and look around, I see the same. Ok? So in this case, I'm going to take, for simplicity — I think I left a more general
exercise — I'm going to take the diagonal matrix entries to be zero, for simplicity. I think in the exercise I left the diagonal elements with a given value a_0, and the off-diagonal elements of the matrix a_ij equal to a_1 for i ≠ j; here, take a_0 = 0 and a_1 = 1, sorry. Yeah — homogeneity means that these values are always equal, because otherwise it wouldn't be homogeneous: I would go to a node and, looking around, I would see something different than at a different node. So that's what I take. And yeah, you're right — I don't want to use the word isotropic for a graph, but it has to be homogeneous and isotropic, because it could well be that, for instance, I have a tree where this link has a given value — let's invent numbers: π, e, and √2, right? — and the other nodes also have links with values π, √2, e; e, √2. This graph would be homogeneous, because as I move from node to node I see the same thing, yeah? But it would not be isotropic. So when I say homogeneous here, I mean homogeneous and isotropic — all the links have the same value. Better?
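Before specializing to the regular graph, the variance recursion Δ_i^{(j)} = 1/(z - a_ii - Σ_{l∈∂i\j} a_il² Δ_l^{(i)}) derived above can be sketched as code. This is a minimal illustration of my own, exact when the graph of the matrix is a tree; the small test matrix (a 3-node path) is a made-up example, not from the lecture.

```python
import numpy as np

def cavity_spectral_density(A, lam, eta=1e-3, iters=200):
    """Approximate rho(lam) for a real symmetric matrix A via the cavity
    variance recursion  Delta_i^{(j)} = 1/(z - a_ii - sum_l a_il^2 Delta_l^{(i)}),
    with z = lam - i*eta (exact when the graph of A is a tree)."""
    n = A.shape[0]
    z = lam - 1j * eta
    # neighbors of i: off-diagonal nonzero entries of row i
    nbrs = [np.flatnonzero((A[i] != 0) & (np.arange(n) != i)) for i in range(n)]
    delta = {(i, j): 1.0 / z for i in range(n) for j in nbrs[i]}
    for _ in range(iters):
        delta = {(i, j): 1.0 / (z - A[i, i]
                 - sum(A[i, l] ** 2 * delta[(l, i)] for l in nbrs[i] if l != j))
                 for (i, j) in delta}
    # full (non-cavity) variances use all neighbors of each node
    full = [1.0 / (z - A[i, i] - sum(A[i, l] ** 2 * delta[(l, i)] for l in nbrs[i]))
            for i in range(n)]
    return sum(d.imag for d in full) / (np.pi * n)

# Made-up example: the adjacency matrix of a small tree (a 3-node path),
# whose eigenvalues are 0 and +/- sqrt(2)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
print(cavity_spectral_density(A, 0.5, eta=0.05))
```

Because this A is a tree, the result coincides with the exact η-smoothed density (1/πn) Σ_k η/((λ-μ_k)² + η²) built from the eigenvalues μ_k at the same η.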
Go ahead. In the sense I mentioned before: imagine that you keep growing this tree; you realize that the boundary grows exponentially, yeah? And I don't want to be in a situation where the boundary dominates the behavior of my system because the boundary has more nodes than the rest, the interior of the graph. So I go to the boundary and connect the boundary nodes between themselves, and there is an exponential number of ways of making these connections. So the "random" part of the random regular graph is the different ways you have to connect the boundary to itself. Yeah? So it's a tree up to a point, and then of course you are going to have loops; the larger the tree, the longer the loops. More questions? So now what I want to do is apply this recipe to this particular case. So let's leave this equation here. Now, since the graph is homogeneous — meaning homogeneous and isotropic — look at these parameters I have here. For each node i, I have as many parameters as there are neighbors, because I have to remove one neighbor per node. So I come here and say: here I have this cavity variance, if you allow me to call it that, whenever I remove this node, right? And then I have to go to another node and do the same thing. But since the graph is homogeneous and isotropic, this must be the same for all nodes and for everything I have removed, because when I go to a node I see the same, and when I remove something I see the same, right? So for a homogeneous random regular graph, all these cavity variances must be equal: let's call them Δ_cav, for all i from 1 to n and for all j belonging to the neighborhood of i. Right? Because the graph is homogeneous, I see the same at each node. Now, instead of having a bunch of equations, I just have one.
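Under this homogeneity argument, the recursion collapses to a single scalar equation: with a_0 = 0 and a_1 = 1, Δ_cav = 1/(z - (K-1) Δ_cav), since each cavity node has K-1 remaining neighbors, while the full variance at a node uses all K neighbors. A hedged sketch follows; the comparison against the Kesten-McKay density (the known spectral density of large random regular graphs) is my own addition, not something derived in the lecture so far.

```python
import numpy as np

# Scalar cavity equation for a homogeneous random regular graph of degree K,
# with a_0 = 0 (zero diagonal) and a_1 = 1 (all links equal):
#   Delta_cav = 1 / (z - (K - 1) * Delta_cav),   z = lam - i*eta,
# and the full variance at a node uses all K neighbors:
#   Delta = 1 / (z - K * Delta_cav),   rho(lam) = Im(Delta) / pi.

def rho_rrg(lam, K=3, eta=1e-3, iters=30000):
    z = lam - 1j * eta
    d = 1.0 / z                      # start from the isolated-node variance
    for _ in range(iters):           # fixed-point iteration of the scalar equation
        d = 1.0 / (z - (K - 1) * d)
    return (1.0 / (z - K * d)).imag / np.pi

print(rho_rrg(1.0, K=3))
```

For η → 0+ this reproduces the Kesten-McKay density ρ(λ) = K√(4(K-1) - λ²) / (2π(K² - λ²)) on |λ| ≤ 2√(K-1); the many plain iterations are used because the fixed-point map becomes only marginally contracting as η → 0.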
Tell me. Yeah, you can also do it; I was trying to do the simplest case. Suppose a₀ is different from zero: then at each node you have a loop. No, the spectral density just gets shifted; that is the only impact that number has, the form stays the same. I will leave this as an exercise; thank you for proposing an exercise, excellent. The loops appear in the equations through a_ii: in the graph picture this is an edge that starts at node i and ends at node i, so the corresponding graph has this loop at each node. The weight of the loop and the weight of the links to the neighbors can be different, and the graph is still homogeneous. Careful with vocabulary, though: in graph theory there is a difference between a loop and a circuit, and the usage is mixed. In graph theory, a loop is a link from a node to itself; a closed path through different nodes is a circuit. Physicists do not use that vocabulary: they call any path that starts at a node and comes back to it a loop. So, for instance, if I start at this node, go to this other node and come back, in stat mech they call it a loop, but strictly it should be called a circuit. Whatever. Now, in this equality — ok, help me: in which part do you get lost? We are not in the situation of a messy, non-homogeneous graph; we are in a situation where the graph is homogeneous. What that means is the following: I am at a given node i, and when I look around I see something; I move to a different node, and when I look around I see the same thing as before; and that happens for all nodes. And there are no boundaries, because these are random regular graphs: I remove the
boundaries by folding the graph into itself; that is important. One second, let me finish the argument. Since the graph is homogeneous, I have uniformity in the graph: this quantity, which in principle should depend on the node and on the neighbor I remove, must therefore be the same everywhere for homogeneous random regular graphs. Go ahead. Sorry, I did not catch that. Yes: what I mean by folding the graph into itself is that I create links between the nodes at the boundary. Look: I have one node here, connected to its neighbors; let me put those neighbors on a circle, so here I have the first neighbors, on the next circle the second neighbors, and so on. At some point I have a bunch of nodes, and this is the boundary that grows exponentially. When I say I fold the graph into itself, I take the boundary nodes and connect them among themselves, this one to that one, for instance, trying to keep the same degree; sometimes you cannot do it exactly, that is true, but you do it at random. And the idea is that the resulting graph has no boundary: when you get there, you do not see a boundary. Yes. No: for the cavity equations you start with the original graph, and you perform this fictitious operation of removing, for each node of the graph — actually for each neighbor of each node — and then you iterate on the cavity graph, where you have done this removal. The issue of homogeneity applies to that equation as well: something happens there with the number of neighbors. You see, my original problem is that I have a homogeneous graph with degree K, and for it I want to calculate the spectral density; so I have to go to a fictitious world where I
have a cavity graph, whose degree is different, one less; and once I solve those equations, I come back to the original graph. But I do not see what is so problematic about removing something: removing a node does not change the fact that the graph is homogeneous; it just means that the degree minus one appears there, and there is nothing wrong with that. Can I continue? So the connectivity is here, in the sum over l belonging to the neighborhood of i without j: it is a graph from which I have removed one neighbor, so on the cavity graph the connectivity is not K but K − 1, and still the equations simplify to a single variable. More questions? For simplicity? No, no: this is for the pairs of nodes (i, j) for which there is a connection — I cannot write down everything, right? What I said is that A is the weighted connectivity matrix of a random regular graph, so when I write a_ij it is for those nodes which are connected, and I said that the weight equals a₁. More questions? Yes, it is like when I introduced the cavity method for the Ising model: if I want the magnetization as a function of temperature, I fix the temperature, solve the cavity equations, and calculate the magnetization; then I change the temperature, and from that I get the curve of magnetization versus temperature. Here it is the same thing: I fix λ, solve the cavity equations, and get the spectral density for that value of λ; so I have to run over different values of λ — unless, as in this case, you can solve the equations exactly, and then you do not have to do anything, because you get the expression for the spectral density explicitly. Good, this is very good. More questions, guys? That's it, can I continue?
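The recipe just described — fix λ, solve the cavity equations, read off ρ(λ), then sweep λ — can be sketched numerically. Here is a minimal Python sketch for the homogeneous case, anticipating the equations Δ_cav = 1/(z − (K−1)Δ_cav) and Δ = 1/(z − K Δ_cav) with z = λ − iη that are derived next; the function name and the branch-selection shortcut are mine, and the quadratic is solved directly instead of iterated.

```python
import numpy as np

def cavity_density(lam, K, eta=1e-6):
    """rho(lam) for a K-regular graph with unit edge weights and a_ii = 0.

    Solves the homogeneous cavity equation D_cav = 1/(z - (K-1)*D_cav),
    a quadratic in D_cav, then D = 1/(z - K*D_cav) and rho = Im(D)/pi.
    """
    z = lam - 1j * eta
    disc = np.sqrt(z * z - 4.0 * (K - 1.0))  # complex square root
    for d_cav in ((z - disc) / (2.0 * (K - 1.0)),
                  (z + disc) / (2.0 * (K - 1.0))):
        d = 1.0 / (z - K * d_cav)
        if d.imag > 0:  # the physical branch gives a positive density
            return d.imag / np.pi
    return 0.0

# sweep lambda, exactly like sweeping the temperature in the Ising model
K = 3
lams = np.linspace(-3.0, 3.0, 601)
rho = np.array([cavity_density(l, K) for l in lams])
```

Sweeping λ traces out the whole curve; its integral over the band is 1, and outside the band the density vanishes as η → 0⁺.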
Ok, now: since the graph is a homogeneous random regular graph, these variables, which in principle could be different for each node and each neighbor I remove, are all the same; let me call this common value Δ_cav, "cav" for cavity. And of course, since in the original graph the number of neighbors of a given node i is the same for all nodes, we have |∂i| = K; remember that ∂i is the set of nodes which are neighbors of i, and since it is a set, the absolute value denotes its cardinality. Therefore |∂i ∖ j| = K − 1: in the sum I have K − 1 terms, which are now all equal because the graph is homogeneous. So if you particularize the cavity equations to this homogeneous random regular graph, you obtain

Δ_cav = 1 / (z − (K − 1) Δ_cav),

where I put a_ii = 0 for simplicity, and the connected pairs all carry the same weight, which I take equal to 1, also for simplicity. Then remember that once I have solved these equations, I have to plug their solution into the expression for Δ_i in terms of the cavity variances; but again the graph is homogeneous, everything simplifies a lot, and that tells you

Δ = 1 / (z − K Δ_cav).

And then, from the original expression, the spectral density for this homogeneous random regular graph was simply

ρ(λ) = lim_{η→0⁺} (1/π) Im (1/n) Σ_{i=1}^{n} Δ_i(λ − iη);

since the graph is homogeneous, all the Δ_i are the same, so the n cancels against the 1/n, and for this particular case

ρ(λ) = lim_{η→0⁺} (1/π) Im Δ(λ − iη).

So all you have to do now is the following. You
solve this equation, which is a quadratic equation for Δ_cav; you put the solution here, then you put it into the expression for the spectral density, you set z = λ − iη, and you take, very carefully, the limit η → 0⁺. Good. Now, I am not going to do every step, because I am tired and I want to give you a challenge. When you solve the quadratic equation and plug the result in, you get

Δ(z) = [(K − 2) z ∓ K √(z² − 4(K − 1))] / (2 (K² − z²)).

Now I have to plug this in here and here, and you have to be careful: you need to know a few results from distribution theory to take this limit. Let us focus on the first piece, which is z divided by K² − z²: I have to take

lim_{η→0⁺} (λ − iη) / (K² − (λ − iη)²),

and to do this limit you have to generalize a result we discussed on the first day of the first week, namely

lim_{η→0⁺} 1 / (x − iη),

which I left as an exercise to prove. Notice that here you have something similar: you have to work out what the analogous limit gives for these ratios. Once you work that out and take the limit properly, you should end up with the following final result: the spectral density of homogeneous random regular graphs is

ρ(λ) = K √(4(K − 1) − λ²) / (2π (K² − λ²)) for |λ| ≤ 2√(K − 1),

and zero otherwise. This has a name: it is called the Kesten–McKay distribution. I let you do this final part. Questions? How are we supposed to get this expression from here? Very good: you understand that if you put the denominator on the other side, this is a quadratic polynomial in Δ_cav; you solve it, plug the solution in here, and it will give you this. You have to massage it a bit, but in the end, if I made no mistake, you should get this. More questions? Remember the following: there are two kinds of marginals in all these games, the single-site marginals on one side and the cavity marginals, which obey closed equations. You have the p_i(x_i), which depend on the cavity marginals, and then you have closed equations for the cavity marginals. The spectral density is given in terms of expectation values of this object, and this object is parameterized by Δ_i: the marginal is a Gaussian with variance Δ_i, proportional to exp(−x_i² / (2Δ_i)). It is the same thing as in the Ising model: there you have the cavity fields and the effective, or physical, fields; you solve the equations for the cavity fields, and once you have that solution, you plug it in to get the physical fields, and from the physical fields you get the magnetization. Here it is the same thing. Why does Δ appear? Because, remember, we had that equation as well, although I did not write it. No matter: we have two sets of equations; one is a closed set, and the other relates, in this case, the variances to the cavity variances. So I have

Δ_i^{(j)} = 1 / (z − a_ii − Σ_{l ∈ ∂i∖j} a_il² Δ_l^{(i)}),

and these are again the cavity
equations; and then you have

Δ_i = 1 / (z − a_ii − Σ_{l ∈ ∂i} a_il² Δ_l^{(i)}).

The first ones are closed equations for the cavity marginals, or in this case the cavity variances, if you want to call them that. Once you have the solution of the closed equations, you plug it into this expression to get the variances Δ_i, and those variances are the ones you actually need to calculate the spectral density. In a homogeneous random regular graph, when you apply these equations to that case, you obtain precisely the equations from before, because again the graph is homogeneous, and here you are summing over all K neighbors. Better? Very good, go ahead. I think you have to prove it, or maybe there is another trick, sure. Ok, so let me give you the idea. There is a first piece, which was z / (K² − z²); but I can write this as z / ((K − z)(K + z)), and by partial fractions

z / (K² − z²) = (1/2) [1/(K − z) − 1/(K + z)].

Shall I continue? Ok. So I need to understand the behavior of a variable, shifted a bit into the complex plane, divided by something minus that variable: it is what I wrote before. I am not telling you exactly the derivation you have to do; I am telling you the spirit of the derivation. In this case you have to understand the limit η → 0⁺ of these terms to understand their contribution; and of course here you get a real part and an imaginary part. Better? More questions? Ok, what time is it? Oh, we are good. Ok, so let me give you a challenge for tomorrow: a mapping, and the idea of this mapping is the following.
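For reference, the distribution-theory result being invoked above is, I believe, the Sokhotski–Plemelj formula; writing it out together with the partial-fraction piece just discussed (standard material, not derived in the lecture):

```latex
% Sokhotski–Plemelj formula (P denotes the Cauchy principal value):
\lim_{\eta\to 0^{+}} \frac{1}{x - i\eta}
  \;=\; \mathrm{P}\,\frac{1}{x} \;+\; i\pi\,\delta(x).
% Applied to the partial fractions, with z = \lambda - i\eta:
\frac{z}{K^{2}-z^{2}}
  = \frac12\left[\frac{1}{K-z}-\frac{1}{K+z}\right]
  \;\xrightarrow[\;\eta\to 0^{+}\;]{}\;
  \frac12\,\mathrm{P}\!\left[\frac{1}{K-\lambda}-\frac{1}{K+\lambda}\right]
  \;-\; \frac{i\pi}{2}\left[\delta(\lambda-K)+\delta(\lambda+K)\right].
```

Since 2√(K−1) ≤ K, the delta terms sit outside the band (or at its edge for K = 2), so inside the support this piece is purely real and the imaginary part of Δ, hence the density, comes entirely from the square-root branch.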
So now, again, for some reason I am interested in a problem in random matrices, and I want to map it to a problem in stat mech. Suppose again that A is an n × n real symmetric matrix, and let us denote by λ(A) — I called it this before — the spectrum of A. Now suppose that A comes from an ensemble of random matrices; I pick one matrix from the ensemble and diagonalize it, and of course the eigenvalues are going to be real. So if this is the real line, for a given matrix A the eigenvalues sit somewhere on it, and I can order them: λ₁, λ₂, λ₃, …, λ_{n−1}, λ_n. Suppose I now take a point x on the real line, this one here, and, for some motivation I can explain later, I want to know the number of eigenvalues to the left of x, which of course I can express as follows. Let me denote this number of eigenvalues to the left of x by I_n(x); then, by definition,

I_n(x) = Σ_{i=1}^{n} θ(x − λ_i(A)),

where θ is the Heaviside step function. Yes, it looks like the cumulative distribution, but now it is not really a distribution: it is a random variable. If A is a random matrix, then for each draw the positions of the eigenvalues on the real line are random, and therefore this object I_n(x) is random. Very good. Now I want to characterize this random variable, and a way to do it is to calculate its moment generating function. So let us
define G_x(μ), the moment generating function of I_n(x), as the expectation value with respect to this ensemble of random matrices:

G_x(μ) = ⟨ exp(μ I_n(x)) ⟩.

Do you understand? More or less. My claim is that this can be written as the expectation value of partition functions, like this:

G_x(μ) = lim_{η→0⁺} ⟨ Z(x − iη)^{μ/(iπ)} Z(x + iη)^{−μ/(iπ)} ⟩,

where the partition function Z is again something you have to find. Suppose I compute the spectrum, or the empirical spectral density, as we defined it for a given matrix — no, because we are going to look at the fluctuations of this number, and not only its typical value; it is like going a bit beyond the previous derivation. It does not matter that I introduced the spectrum to state it. This is like the question from the other day: why do I want to do all these derivations if I know the spectrum? That is not the point. The point is that I need the spectrum to define quantities, in this case the number of eigenvalues to the left of x, and then I want to recast this problem into a different problem for which I do not need the spectrum, so I can work directly with the matrix. More questions? Good. So then I leave this as an exercise: find this expression. And the other part is this: you now have something which is equivalent to a partition function, and you have to do the quenched average over the quenched disorder, which is the matrices. But now this is very weird, because the partition function is raised to the power of an imaginary number. So how are you going to do this? You have to find a way, using the replica method, to perform this expectation value. That's it, thank you.
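As a small numerical illustration of the claim that I_n(x) is a random variable (my own sketch, not part of the exercise): sample real symmetric Gaussian matrices, a GOE-like ensemble, and count the eigenvalues below x. The 1/√n scaling and the sample sizes are arbitrary choices.

```python
import numpy as np

def index_count(a, x):
    """I_n(x): number of eigenvalues of the real symmetric matrix `a` below x."""
    return int(np.sum(np.linalg.eigvalsh(a) < x))

def random_symmetric(n, rng):
    """One draw from a Gaussian ensemble of n x n real symmetric matrices."""
    g = rng.normal(size=(n, n)) / np.sqrt(n)
    return (g + g.T) / 2.0

rng = np.random.default_rng(0)
n = 50
samples = [index_count(random_symmetric(n, rng), 0.0) for _ in range(300)]
# the eigenvalue distribution is symmetric about 0 in law, so I_n(0)
# fluctuates around n/2 = 25 with a small sample-to-sample variance
```

The histogram of `samples` is exactly the object whose moment generating function the challenge asks you to compute via the two partition functions.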