Yes, but does it work? Yes. OK, great. So I will continue on the Dyson-Schwinger equation. And please, again, do not hesitate to ask questions. My plan is, as I said, to tell you about five applications of the Dyson-Schwinger equation, with the idea that by repeating the same techniques five times, at the end you will understand them. But maybe it doesn't work like this, and you would prefer to understand one application better. So you should just ask questions; I will go more slowly, and maybe I will do one application less. OK? So today what I want to discuss is the generalization of what I told you about yesterday to matrix models, in a perturbative setting. So we consider the following probability measure. I will take several matrices — it will be the only result with several matrices, and it is connected with the last talk yesterday. You look at 1 over Z, times the exponential of minus n times the trace of V(X1, ..., Xm), times dX1...dXm, where dX is the Lebesgue measure on Hermitian matrices — this is the notation that Sasha also used. V is going to be a polynomial, and we will think of the model as independent Gaussian matrices plus a small interacting polynomial. An important point is that you want this polynomial to be self-adjoint, V = V*: if you plug self-adjoint variables into V, then you get a self-adjoint matrix, so that the trace is real. That's the important point — you want to deal with probability measures. And what I will assume is that V goes to infinity well enough. In fact, I assume even more: I want the trace of V to be strictly convex, meaning that the Hessian of the trace of V is bounded below by c times the identity. So V is a sum, say of coefficients alpha_{i1...ip} times monomials X_{i1}...X_{ip}. These coefficients are complex — a classical polynomial with complex coefficients, but the whole expression is self-adjoint. You would have liked the coefficients to be matrices? No, no, here they are just scalars.
So I could add fixed matrices which would come into these words — like what you did — and this is doable. You could have an extra family of matrices, but then the answer would be even more complicated, because it would depend on the joint law of these matrices, and this law would have to come into the formulas. So this is doable, but not today. OK, and so the theorem I want to state is that if c is positive, then for every integer k we can find epsilon, which depends on c and k, positive, so that we have the analogue of what we proved yesterday: the expansion in the dimension of the expectation of the trace, up to order n to the minus 2k. This means that the expectation under P^n_V of 1 over n times the trace of a word X_{i1}...X_{im} — here the indices are just integers between 1 and m — expands as the sum from p = 0 to k of 1 over n to the 2p, times some coefficient tau_p^epsilon(X_{i1}, ..., X_{im}), which depends on epsilon, plus a small o of 1 over n to the 2k. And also, if you look at the free energy — the partition function Z is the normalizing constant of this probability measure — you have the same type of expansion for its logarithm. A bonus is that you have an expression for these coefficients, which are analytic functions of epsilon. Maybe I should have written g instead of p; it would have been more topological. So write V in short as the sum over monomials q of alpha_q times q. Then tau_g^epsilon(X_{i1}, ..., X_{ik}) is the sum over families of integers (k_q) of the product over q of (minus epsilon alpha_q) to the k_q, over k_q factorial, times M_g((q, k_q); X_{i1}, ..., X_{ik}). And what are these numbers? It is like yesterday: if I have a family of, say, p monomials with multiplicities, this is an integer which counts maps, OK? To define it, I first need to associate a vertex to each monomial.
So maybe I should go over there, because otherwise I am going to end up in the dark corner. To a monomial q, say q = X_{i1}...X_{ip}, you associate what we call a star of type q. It is a vertex with colored half edges, which depend on your letters, and it is rooted: the first half edge has color i1 and is the root, the second has color i2, the third i3, then i4, i5, and so on until ip. So you can think of it as a vertex of degree p, but with half edges which are colored according to which letters you have one after the other. If you have only one letter, it is just a vertex of valence p. OK, and then M_g((q1, k1), ..., (qp, kp)) is the number of maps — remember that maps are connected graphs that you can draw on a surface — of genus g, which are built on k_i stars of type q_i. For instance, imagine that you have only two colors, and q = X1 X2 X1 X2, and q prime = X1 to the fourth. Then the first corresponds to a vertex whose half edges alternate the two colors, and the second to a vertex whose half edges all carry the same color — say red, which on this board is not very red. OK, so you have these two types of vertices, and you draw them, say three like this and two like this. And now, all you want to do is to match the half edges which have the same color, so that the map is connected and the total genus of the map is g. So you do some matching, something like this. And you see, I just did one matching, and apparently this matching does not live on the sphere, because you have this crossing, but apparently you can draw it on the torus. So this would be genus one. No, I didn't see that one. OK, that's terrible. Let's keep it as genus one, because otherwise I have to count the genus.
I mean, with the other matching, it could be genus two or genus one; I would have to count the number of faces. So OK, this is genus one. And if I had done something different — I don't know what exactly — it could have been genus zero, for instance. So you imagine counting these numbers, and this defines the M_g. All right, so some comments on this theorem. First, on the literature: this type of expansion was first proposed by 't Hooft, and then in a physics paper by Brézin, Itzykson, Parisi and Zuber. In mathematics, there was a paper by Ercolani and Ken McLaughlin for the case where you have only one matrix. The case with several matrices is due to me and Édouard Maurel-Segala. And the point is that with several matrices there are actually not so many results; it is much more difficult than the one-matrix case. In this generality, we do not know how to get something which is not perturbative. Next time I will show you that in the one-matrix case you can take epsilon large, but here we need epsilon small. There is still one generalization, which is to assume only convexity. Then you can take epsilon large, but you only get the limit, that is, the case k equal to 0; this is recent work by Jekel and Dabrowski. And there is also something which works better, again only for k equal to 0, when the interaction is an AB interaction: in this case you can get limits, but you cannot get a full expansion like this. And of course, when you can get the full expansion, you can also easily get the central limit theorem. So what I want to show you today is how to prove this kind of result. I will not do the several-matrix case on the board, because it is just more notation, but in the lecture notes it is done with several matrices.
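Returning to the genus counting sketched above: for a map with a single star, the genus can be computed mechanically from Euler's formula V − E + F = 2 − 2g, where the faces are the cycles of the permutation "glue across the matched edge, then rotate to the next half edge around the vertex". Here is a small sketch, not from the lecture (all names are mine); planar gluings should come out counted by the Catalan numbers, as for GUE moments:

```python
from math import comb

def pairings(pts):
    """Yield all pairings (fixed-point-free involutions) of the points in pts."""
    if not pts:
        yield []
        return
    first, rest = pts[0], pts[1:]
    for i, partner in enumerate(rest):
        for sub in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def genus(k, pairing):
    """Genus of the one-vertex map on 2k half edges glued by `pairing`.

    Faces are cycles of (glue, then rotate around the vertex); Euler's
    formula with V = 1, E = k gives 1 - k + F = 2 - 2g.
    """
    glue = {}
    for a, b in pairing:
        glue[a], glue[b] = b, a
    seen, faces = set(), 0
    for start in range(2 * k):
        if start in seen:
            continue
        faces += 1
        i = start
        while i not in seen:
            seen.add(i)
            i = (glue[i] + 1) % (2 * k)  # cross the edge, then turn around the vertex
    return (k + 1 - faces) // 2

def genus_counts(k):
    """Histogram of the genus over all gluings of a single 2k-valent star."""
    counts = {}
    for p in pairings(list(range(2 * k))):
        g = genus(k, p)
        counts[g] = counts.get(g, 0) + 1
    return counts

# Planar (genus-0) gluings are counted by the Catalan numbers.
for k in range(1, 5):
    assert genus_counts(k)[0] == comb(2 * k, k) // (k + 1)
```

For instance `genus_counts(2)` finds the three pairings of four half edges: two non-crossing ones of genus zero and the crossing one, which only lives on the torus.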
So I will stick with one matrix, which is a bit weird, because tomorrow I will show you how to do it in a much more clever and more general way with one matrix. But at least I think it will give you all the ideas for the several-matrix case, and you can look at it in the notes. The ideas are really like yesterday: we want to use these Dyson-Schwinger equations. Is there any question on the result? Ah, sorry, I forgot to state the hypothesis. I am just saying that the Hessian of the trace of V is bounded below by c, with c positive — so the trace of V is strictly convex. Then for all k you can find epsilon(c, k), going to zero with k, so that if epsilon is smaller than this positive number, you have the expansion. It is the epsilon here in the potential: V has to be a small perturbation of the Gaussian case. That is really the point. And maybe a remark: it is not very clear how convex functions of several matrices look, and there is a lemma which helps you get some idea. If you take a function f which is convex from R to R, then you can define f of a matrix X, for X in the set of Hermitian matrices, and the trace of f(X), as a function of the entries of the matrix, is convex. This gives examples: if the trace of q is convex, the Hessian of the trace of V is non-negative from q plus the positive term from the Gaussian part. So I can take q(x1, ..., xm) to be any sum over L of p_L of (sum over i of alpha_{L,i} x_i), where the alpha_{L,i} are real and the p_L are convex. I am just saying that if I take any such combination of convex functions of linear combinations of my matrices, the trace of this will be convex, so the hypothesis will be fulfilled. In general, somehow, you want the potential q to go to infinity in the right direction.
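The lemma just mentioned — often called Klein's lemma, that X ↦ Tr f(X) is convex on Hermitian matrices when f is convex on R — can be illustrated numerically via the midpoint inequality. A minimal sketch, not from the lecture (the sampling and the choice f(t) = t⁴ are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_f(x, f):
    """Tr f(X) for Hermitian X: apply f to the eigenvalues and sum."""
    return float(np.sum(f(np.linalg.eigvalsh(x))))

def random_hermitian(n, rng):
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (g + g.conj().T) / 2

f = lambda t: t ** 4  # a convex function on the real line

# Midpoint convexity: Tr f((A + B) / 2) <= (Tr f(A) + Tr f(B)) / 2.
for _ in range(100):
    a, b = random_hermitian(5, rng), random_hermitian(5, rng)
    assert trace_f((a + b) / 2, f) <= (trace_f(a, f) + trace_f(b, f)) / 2 + 1e-9
```

Of course a numerical check on random pairs is no proof; the actual lemma follows from the variational characterization of eigenvalues.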
Because of course, if q went to minus infinity, then eventually the partition function would be infinite and the measure would not be well defined. So you need some hypothesis, and this kind of example satisfies it. Other questions before I go to the proof? No? OK, so the proof is to use the Dyson-Schwinger equations. Now, as I said, I will take m equal to 1; again, this is really only for notational purposes. And to simplify further, I will take V symmetric, writing V as x squared over 2 plus epsilon q. Then I write the Dyson-Schwinger equation. Looking at the same kind of formula as yesterday, it reads: the expectation of 1 over n times the trace of (x plus epsilon q'(x)) times x to the k minus 1, times the product of the Y_{k_i} — I remind you that Y_k is the trace of x to the k minus its expectation — and I take the same expectation as yesterday, but with this extra term epsilon q', which is going to come into the game. This equals the same right-hand side as yesterday: the expectation of the sum over l from 0 to k minus 2 of 1 over n trace of x to the l, times 1 over n trace of x to the k minus 2 minus l, times the product of the Y_{k_i}; plus the term from differentiating the Y's, namely the sum over i from 1 to p of k_i over n squared, times the expectation of the trace of x to the k plus k_i minus 2, times the product over j different from i of the Y_{k_j}. So if epsilon equals 0, I have the same formula as yesterday; today I have this extra term. The proof is again by integration by parts — I am not going to do it again, and I will not develop it, but you can check that it works: you differentiate the entries of x to the k minus 1 times the product of the Y_{k_i} times the density, you sum over the entries ij, and then you will check that you get exactly this formula.
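As a numerical sanity check of this equation in the simplest case — epsilon = 0, no Y factors, one matrix — the relation reduces to E[1/n Tr X^k] ≈ Σ_{l=0}^{k-2} E[1/n Tr X^l] E[1/n Tr X^{k-2-l}] up to O(1/n²) corrections, which one can test on a single sampled GUE matrix. A sketch with my own normalization and tolerances, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# GUE normalized so that E[(1/n) Tr X^2] -> 1 (semicircle law on [-2, 2]).
g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = (g + g.conj().T) / (2 * np.sqrt(n))

def m(k):
    """Empirical normalized moment (1/n) Tr X^k."""
    return float(np.trace(np.linalg.matrix_power(x, k)).real) / n

# eps = 0 Dyson-Schwinger relation: m_k ~ sum_{l=0}^{k-2} m_l m_{k-2-l}.
for k in (2, 4, 6):
    lhs = m(k)
    rhs = sum(m(l) * m(k - 2 - l) for l in range(k - 1))
    assert abs(lhs - rhs) < 0.1
```

The empirical moments here come out close to the Catalan numbers 1, 2, 5, exactly the planar one-vertex gluings discussed earlier.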
OK, so the problem here is that we cannot proceed as yesterday, because our relation is no longer an inductive relation: here you have this polynomial q' which has higher degree. You could say, OK, the polynomials on the right are still of lower degree, but you cannot create all polynomials just by multiplying by x. So you have to do something else. And what we are going to do is to prove compactness and concentration of measure — which was the first step yesterday — by some external results, namely coercive functional inequalities, based on the fact that the Hessian of the trace of V is bounded below. These two results will be due entirely to this assumption, which is crucial, and which we cannot really deduce from the equation itself. And then we will solve the equation asymptotically. Solving this equation is, again, a bit more complicated than before: we will have to show that we can invert, as we will see, some operator. This is what is called inversion of the master operator. It is still mysterious at this point, but this is somehow the way we solve the equations in all these problems. OK, so let us start, unless you have some questions. So what are these coercive functional inequalities? They are actually very nice results, used a lot in probability. They say that as soon as you have a measure with a density which is strictly log-concave, it looks like a Gaussian measure in many ways, and these inequalities provide quantitative estimates. So it is very useful. The first one follows from Herbst's argument: it gives concentration, namely that under P^n_V, if we take any function f of the entries, the probability that f minus its expectation is greater than delta is bounded by the exponential of minus n c delta squared divided by the squared norm of grad f.
And here, the squared norm of grad f is the sum of the squares of the derivatives with respect to the entries, and you take the L-infinity norm of that. I could have taken any probability measure whose log-density has Hessian bounded above by minus n times c — remember, c was a lower bound on the Hessian of the trace of V, but then I multiply by n — and I would get this result, regardless of any other assumption. And then I have the Brascamp-Lieb inequalities. What the Brascamp-Lieb inequalities tell me is that if I take g convex — again, a convex function of the entries — then the expectation of g of the entries recentered by M_ij, where M_ij is the expectation of X_ij under P^n_V, is bounded by the expectation of g under the Gaussian law on all the entries — the measure where the potential is just the quadratic part, n times the trace of x squared over 2. That is absolutely not obvious: you could say, well, it is easy because I just use a bound on the Hessian, but the point is that you get the right normalization. You really compare with the probability measure of the Gaussian case. And this is very nice because, as I said yesterday, in the Gaussian case we know how to compute many things, so we can a priori transfer all these bounds from the Gaussian case. So how are we going to use this in our context? We take our favorite traces, and we want to prove a lemma. If we look at the expectation of 1 over n trace of x to the 2k, we want to show that this is bounded independently of n — and actually we are going to prove this even when k goes to infinity with n, by using results on the Gaussian case. So I will take a bound like this: for all k smaller than, say, the square root of n — I just need something which goes to infinity fast enough; it could have been log n — this expectation is bounded. So this is the compactness. And then the other thing I am going to get is concentration inequalities.
For any choices of the k_i, I can bound the expectation of the product of the Y_{k_i} by some constant D to the power the sum of the k_i. Here, if I used only what we did yesterday, I would have this for fixed k; but for later on, I will let k go to infinity, and I will abstractly take some results on the GUE, which are known. So how do we deduce this kind of result from the inequalities? Well, for the first result, we just take g to be the trace of x to the 2k, and by Klein's lemma we know this is convex. Ah, yes — and here I use the simplification that V is symmetric: this is where it makes my life a bit easier, because the recentering terms M_ij disappear. Otherwise I can still do estimates, but it takes more time, so let me simplify my life; in the notes I think I did not simplify, so you can see there how to do it. And so, with this choice, I deduce that the expectation of 1 over n trace of x to the 2k under P^n_V is bounded by the same quantity under the GUE, OK? And by what we saw yesterday, this is bounded — and in fact you can see that it does not grow too fast, even when k goes to infinity more slowly than n to the two-thirds. Such bounds are due to Füredi and Komlós, or Soshnikov, who gave several proofs of this kind of estimate. So this comes for free once you have the Brascamp-Lieb inequality. For the second result, it is also more or less this. The only thing we have to be careful about is that our functions are not going to have a bounded gradient norm, because they are polynomial functions, so they go to infinity at infinity. However, because of the first result — and this is why I need some input — I know that my eigenvalues are not going to go too far. So in fact I can proceed by approximation to deduce this result. So, for the concentration: you take first f which is C1, and you deduce.
So you take g which is C1, and you take f, your function of the entries, to be the trace of g(x). Then you want to compute the gradient norm to see how your bound will behave. And if you compute the derivative with respect to x_ij, you see that it is g'(x)_ji — we saw that yesterday when g is a polynomial, and you can imagine that if g is a general Lipschitz function, this kind of formula generalizes. So then the squared gradient norm is the L-infinity norm of the sum over i, j of the square of g'(x)_ji, which is the trace of g'(x) squared. And this is nice, because this sum is not of order n squared, but only of order n: you can bound it by n times the infinity norm of g' squared. All right, and then when you plug this into Herbst's inequality, the n cancels, and this implies that the probability that the trace of g(x) minus its expectation is greater than delta is smaller than the exponential of minus c delta squared divided by the infinity norm of g' squared. And this is true for any delta, so you can integrate it to get moment bounds: the 2k-th moment is smaller than the infinity norm of g' to the 2k over c to the k, times the Gaussian moments. So to conclude, to get the bound on the polynomial traces, you only need, as I said, to use this control on the eigenvalues, and then to proceed by approximation. I am not going to do that; you just approximate your polynomial by a function with bounded derivative. So you take g which is x to the l on the set where x is smaller than 3 divided by the square root of c, and then smooth, going to 0 afterwards. And you can deduce from here that you also have a bound on the trace of x to the l minus its expectation, and this gives you the estimate on the product of the Y's by using Hölder's inequality. All right. Here it was just because in Brascamp-Lieb you have to subtract the mean of each entry. And you still get the same Dyson-Schwinger equation.
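To illustrate the scale of this concentration — fluctuations of the trace of g(X) itself, not of 1 over n times the trace, are of order one, uniformly in n — one can do a quick Monte Carlo under the Gaussian measure. The choice g(x) = x², the sample sizes, and the target value 2 (exact for this normalization of the GUE) are my assumptions, not from the lecture:

```python
import numpy as np

def var_trace_square(n, samples, seed=0):
    """Monte Carlo variance of Tr X^2 under the normalized GUE."""
    rng = np.random.default_rng(seed)
    vals = np.empty(samples)
    for s in range(samples):
        g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        x = (g + g.conj().T) / (2 * np.sqrt(n))
        vals[s] = np.sum(np.abs(x) ** 2)  # = Tr X^2 for Hermitian X
    return float(vals.var())

# The variance of the *unnormalized* trace stays O(1) as n grows
# (here it is ~2 for every n), so (1/n) Tr concentrates at scale 1/n.
assert abs(var_trace_square(20, 2000) - 2.0) < 0.5
assert abs(var_trace_square(100, 2000) - 2.0) < 0.5
```

This is exactly the unusual scale of concentration in random matrix theory: the trace of n eigenvalues fluctuates like a single bounded random variable.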
And the idea is that somehow your potential is very confining, and so you can show, by the Dyson-Schwinger equation, that this recentering term is actually bounded. Yeah — also, you can see that the expectation of this matrix is diagonal, by invariance under conjugation by unitary matrices; so it is in fact a constant times the identity, and you can check that the constant never blows up. OK, so after all these technical things, let me show you how to use these inequalities to prove at least the convergence. And again, this convergence, in such generality, is only known in this kind of perturbative setting. For the convergence, I will forget about the extra recentering terms — so I don't take this, I don't take this, and I don't take this. I am sure I am going to regret it, but that's life. OK, so in the equation, because you know the covariances are bounded, when you divide by n, the left-hand side is going to be approximately the sum of the products of the expectations, as before. And you see that all these quantities, because of the compactness, are well bounded, so you can take limit points. Along subsequences, the expectation of 1 over n trace of x to the k converges to some m_k. And if you look at the equation that the limit points satisfy — let me expand q', writing q as the sum of alpha_i x to the i — then what I get is: m_k plus epsilon times the sum over i of i alpha_i m_{k+i-2} — this is coming from the term with q' — equals the sum over l from 0 to k minus 2 of m_l times m_{k-l-2}. And the point now is to show that this system has a unique bounded solution. So the question is uniqueness. And you know something more: trivially, m_0 is 1 and m_1 is 0.
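The limiting system just written can also be solved numerically by a fixed-point iteration, which mirrors the contraction argument that follows. Here is a sketch for the quartic case q = x to the fourth; the truncation level, iteration count, and the choice of q are mine, not from the lecture:

```python
from math import comb

def solve_moments(eps, K=30, iters=200):
    """Fixed-point iteration for the limiting Dyson-Schwinger system
    m_k + eps * 4 * m_{k+2} = sum_{l=0}^{k-2} m_l m_{k-2-l},
    i.e. V = x^2/2 + eps * x^4, truncated (m_k = 0 beyond index K)."""
    m = [0.0] * (K + 3)
    m[0] = 1.0
    for _ in range(iters):
        new = m[:]
        for k in range(2, K + 1):
            conv = sum(m[l] * m[k - 2 - l] for l in range(k - 1))
            new[k] = conv - eps * 4.0 * m[k + 2]
        m = new
        m[0], m[1] = 1.0, 0.0  # boundary conditions
    return m[:K + 1]

# eps = 0 recovers the Catalan recursion: m_{2k} = Catalan(k).
m_free = solve_moments(0.0)
for k in range(1, 8):
    assert abs(m_free[2 * k] - comb(2 * k, k) / (k + 1)) < 1e-9

# Small eps > 0: the quartic term confines more, so m_2 drops below 1.
m_pert = solve_moments(0.01)
assert 0.0 < m_pert[2] < 1.0
```

Note the equation is not an induction — the epsilon term couples m_k to the higher moment m_{k+2} — which is exactly why a contraction/inversion argument is needed rather than a direct recursion.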
And you also know that a priori you have the bound m_k smaller than (3 over the square root of c) to the k, from the estimates over there. And so, to prove uniqueness — you know there is a unique solution when epsilon is 0, so it is really a perturbative argument — the idea is just to assume that you have two solutions and show that this is impossible, provided epsilon is small enough. And to do that, you get a closed inequality. So you take two solutions, m and m tilde, and you look at the difference: put delta m_k equal to m_k minus m tilde_k. Then, putting the epsilon term on the other side, delta m_k is bounded by epsilon times the sum over i of i alpha_i times delta m_{k+i-2}, plus the difference of the quadratic terms: m times m minus m tilde times m tilde gives (m plus m tilde) times the difference, so I can bound it by the sum over l of delta m_l times twice the a priori bound, that is, twice (3 over the square root of c) to the k minus l minus 2. And now, to show that there is a unique solution, I take a generating series: let L(delta) be the sum over k of delta to the k times delta m_k. In the cases k equal to 0 and 1, delta m_k is 0, because the boundary conditions are the same. And what you get, multiplying the inequality by delta to the k and summing over k, is the following. For the first term, I multiply and divide by delta to the k plus i minus 2, so I get epsilon times the sum over i of i alpha_i delta to the power 2 minus i, times L(delta). And for the second term, I exchange the sums: I get delta to the l delta m_l inside, and the sum over the remaining index of (3 over the square root of c) times delta, to the power k minus l minus 2, with a delta squared left over.
So this is the first term when I sum over k; and the other one gives plus delta squared times L(delta) times this geometric sum, which converges provided delta is small enough, so it is delta squared over 1 minus 3 delta over the square root of c, times L(delta). So you see that if epsilon is small enough, you can find delta positive so that the total factor in front of L(delta) is smaller than 1, and then there exists a unique solution. And so what you have proved, at this point, is the convergence — this is true, of course, if epsilon is small enough, and here I also use that L(delta) was finite, by my a priori bound. So what you conclude is that the expectation of 1 over n trace of x to the k converges to m_k, the solution of this equation, which one can check is the moment of some probability measure. OK, so that is the first result. What you can check also is that this equation is satisfied by the generating series of the enumeration of maps that I was telling you about. And because you have a unique solution, you can conclude that the limit is given by this generating function for the enumeration of maps. And so, I will not have the time to show you how to prove the next order today. But just in one minute — since I anyway started a bit late — I can show you how to deduce from there the convergence of the free energy. To do that, there is an old trick: look at the logarithm of the ratio of the partition function for the potential x squared over 2 plus epsilon q and the partition function for x squared over 2 alone. You can write this as the integral of its derivative in epsilon; and when you differentiate, you have to remember that the density is the exponential of minus n times the trace, so the trace of q comes out. So what you actually get is minus the integral from 0 to epsilon of the expectation, under the interpolated potential x squared over 2 plus t q, of 1 over n times the trace of q, dt. I think there is a minus, yes.
OK, and now you see that you are in good shape, because this interpolated potential satisfies all your hypotheses: it is a convex combination of x squared over 2 plus epsilon q and x squared over 2, so it is still convex, and t is even smaller than epsilon. So now you can go to the limit and deduce the convergence of the free energy. That is how you can prove such a convergence; of course, each time it can become more and more complicated as the models get more complicated — it is not always easy to find a good interpolation. And also, here you are happy because you know the Gaussian partition function; it is well known, by Selberg's formula. But in other cases, you do not know how to compute it. For instance, in the analogous problems in dimension 2, it is not known how to compute any of these partition functions, so in that case you cannot deduce from there an asymptotic formula for the model with potential. But in our case it is possible. So next time, I will show you how to deduce, with the same ideas, the second order — the CLT and the second-order expansion. And then I will go to more complicated cases where we do not want to assume convexity, so that all these a priori tools that I showed you — the Brascamp-Lieb inequalities and Herbst's argument — are not available. Thank you, and do tell me if you have questions.