OK, so now we start our first lecture of the day. I would like first to thank Tamara Grava for putting in all the work of organizing this event and making it possible. My course will be about Dyson-Schwinger equations and random matrices. The point is that random matrices provide a great example of a complicated system where the eigenvalues are in very strong interaction, and somehow there was a need for new tools to study them. The Dyson-Schwinger equations are one such tool, and one which I hope to convince you is very useful. Today I will show you how it works, how you can use this set of equations to analyze the so-called Gaussian ensembles, in particular the GUE. This is somehow the simplest example we can consider. A GUE matrix $X^n$ is just a Hermitian matrix with independent entries above the diagonal: the $X^n_{ij}$ for $i<j$ are independent complex Gaussians with variance $1/n$ (you can think of each as a sum of two independent real Gaussians), and on the diagonal they are real Gaussians with variance $1/n$. So that's the basic example of random matrices we would like to study, and what we would like to understand is the spectrum of these matrices. What is nice with the GUE ensemble is that you have an explicit joint law: if you let $\lambda_1,\dots,\lambda_n$ be the eigenvalues, their distribution is quite explicit and is given by what is called a Coulomb gas law,
$$dP(\lambda_1,\dots,\lambda_n)=\frac{1}{Z_n}\prod_{i<j}|\lambda_i-\lambda_j|^2\,e^{-\frac{n}{2}\sum_i\lambda_i^2}\,\prod_i d\lambda_i,$$
where $Z_n$ is a normalization constant. If you had chosen real entries instead of complex ones, you would get the same type of expression, but with the exponent $2$ replaced by $\beta$ and the $\frac{n}{2}$ in the exponential replaced by $\frac{\beta n}{4}$: $\beta=1$ if the entries are real, and $\beta=2$ otherwise. But the point is that this kind of distribution is quite complicated to study, typically because the interaction coming from this Coulomb gas term is very strong. This Coulomb gas is actually also interesting on its own and can be generalized, for instance to higher dimension by taking complex numbers here. What I want to show you is how to analyze this kind of distribution by using the Dyson-Schwinger equations. The type of questions I will address here are mainly about macroscopic asymptotics. So what we would like to show is that there is convergence of the empirical distribution of the eigenvalues: if you take a test function $f$, can you show that $\frac{1}{n}\sum_i f(\lambda_i)$ converges, almost surely, as $n\to\infty$? So that is one of the goals. And then the next question is about the fluctuations.
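(A minimal numerical sketch of the ensemble just defined, assuming numpy — the helper name `sample_gue` and all sizes are my own arbitrary choices. It samples a GUE matrix with exactly this normalization and checks a few spectral moments against the limits that will appear below.)

```python
import numpy as np

def sample_gue(n, rng):
    """n x n GUE: Hermitian, E|X_ij|^2 = 1/n off-diagonal, Var(X_ii) = 1/n."""
    a = rng.normal(scale=np.sqrt(1.0 / (2 * n)), size=(n, n))
    b = rng.normal(scale=np.sqrt(1.0 / (2 * n)), size=(n, n))
    x = a + 1j * b                        # i.i.d. complex Gaussian entries
    return (x + x.conj().T) / np.sqrt(2)  # Hermitization keeps variance 1/n

rng = np.random.default_rng(0)
eigs = np.linalg.eigvalsh(sample_gue(1000, rng))
for k in (2, 4, 6):
    # (1/n) Tr X^k should be close to 1, 2, 5 (the Catalan numbers)
    print(k, np.mean(eigs ** k).round(3))
```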
So if you remove the $1/n$ normalization, can you show that $\sum_i f(\lambda_i)-E[\sum_i f(\lambda_i)]$ converges in distribution toward some Gaussian variable? So yes, $n$ is going to infinity, and this is under this distribution. Usually what you do is use the Borel-Cantelli lemma to show almost sure convergence. Of course I'm not going to construct for you the sequence which converges almost surely, but the Borel-Cantelli lemma gives you a canonical way to extend your probability measures so as to give sense to almost sure convergence; that is really the meaning of the convergence here. OK, so that's the goal. And the point is that, as I said, it's not easy to prove this kind of theorem, because you have this strong interaction. So Johansson — I think it's '98 — was the first to use a Dyson-Schwinger equation to prove this kind of convergence, in a more general context: instead of $\lambda^2$ you have a potential $V$, and this is what is called a matrix model. What I want to show you today is how to use it for the GUE, and what I will do in the next lectures is first show you how to prove it for matrix models in perturbative situations. Matrix models are typically this kind of model, but where you have replaced the $\lambda^2$ by a more general potential; eventually this can be several matrices. And then I will show you how to go beyond that. In this situation there are two settings which are quite different. There is the so-called one-cut situation — here you have to think that the limiting measure $\sigma$ is eventually replaced by a measure $\sigma_V$ which depends on $V$ — which is the perturbative situation: $V(x)$ is something like $\frac{x^2}{2}$ plus a small perturbation, so you are not far from the case of the GUE. And then you have the non-perturbative situation where you take a general $V$. In this case you have two types of difficulty: the case where the limiting measure $\sigma_V$, which we will show to exist, has connected support, and the case where the support is disconnected — this is the several-cut situation. The last thing I will discuss in this class is an extension to a discrete setting: discrete measures and random tilings. OK, so that's somehow the plan of what I want to discuss here. There are lecture notes, and I will not always do all the details, but I will try to give you at least the main ideas. And of course you should not hesitate to stop me if you don't understand, because my understanding is that there are people from very different backgrounds, so you should really feel comfortable asking questions. OK, so that's what we want to do: understand these kinds of questions. And to do that, as I said, we will use these Dyson-Schwinger equations. The idea is that we will get equations for this type of observables under this type of distribution, and we will then solve these equations to eventually prove this kind of theorem. Let me tell you more precisely the result we want to prove for the GUE. So what I want to prove for the GUE is the following: the expectation $E[\frac{1}{n}\mathrm{Tr}((X^n)^k)]$ can be expanded as a function of the dimension,
$$E\Big[\frac{1}{n}\mathrm{Tr}\big((X^n)^k\big)\Big]=\sum_{g\ge 0}\frac{1}{n^{2g}}\,m_g(k).$$
And what I want to prove is that if I look at any family of centered traces $Y^n(k_i)=\mathrm{Tr}((X^n)^{k_i})-E[\mathrm{Tr}((X^n)^{k_i})]$, where the $k_i$ are integers, then $(Y^n(k_1),\dots,Y^n(k_p))$ converges towards $p$ jointly Gaussian variables, with covariances which I will denote $C_{k_i,k_j}$. So this is what I'm going to prove for you, and this will answer the question, at least for functions which are polynomial. So $m_g(k)$ is the number of maps of genus $g$ built over one vertex of degree $k$. What is a map? It's a connected graph that you can properly embed onto a surface, and I say that the map has genus $g$ when the minimal genus of such a surface is $g$. So you can do a drawing: here I have a vertex of degree $k$, with $k=6$, and then you count the number of ways to match the half-edges so that you can draw the result on a surface of genus $g$. So here you see that this one has genus zero — you can draw it on the plane — but of course, if I do this instead, we get genus one. And $C_{k,l}$, the covariance of the Gaussian process, is going to be the number of planar maps — this means genus zero — built over one vertex of degree $k$ and one of degree $l$. So that's what we want to prove. Here you have to be careful that when you do the counting, you can think about the half-edges as labelled, because you have drawn them on the surface; and actually, here I rooted — I should have said rooted maps. And this gives you a full expression for the expansion; this is the so-called Harer-Zagier formula. And you can reconcile this with the question I asked at the beginning by the following exercise: $m_0(k)$ is the integral of $x^k$ under the semicircle law,
$$m_0(k)=\int x^k\,d\sigma(x),\qquad d\sigma(x)=\frac{1}{2\pi}\sqrt{4-x^2}\,\mathbf 1_{[-2,2]}(x)\,dx.$$
(Why this normalization? Oops, nobody said anything — with the wrong normalization this would have gone to zero; you should normalize so as to get something non-trivial.) So you can show that this is true. And what this theorem implies is that $E[\frac{1}{n}\sum_i f(\lambda_i)]$ converges to $\int f(x)\,d\sigma(x)$ if you take $f$ to be a polynomial; and by density of polynomials in the set of continuous functions, you can extend this to continuous functions. So here this is only $L^1$ convergence, but in fact you can get almost sure convergence by estimating the covariance — we said to use the Borel-Cantelli lemma. Maybe I should call these statements (A) and (B). So (B) implies the first point: we get
$$\mathrm{Var}\Big(\frac{1}{n}\mathrm{Tr}\big((X^n)^k\big)\Big)\le \frac{C_k}{n^2},$$
because the centered trace converges towards a Gaussian variable — so it stays of order one, and once I divide by $n$ the variance is of order $1/n^2$. And so, by the Borel-Cantelli lemma, this implies that $\frac{1}{n}\mathrm{Tr}((X^n)^k)-E[\frac{1}{n}\mathrm{Tr}((X^n)^k)]$ goes to zero almost surely. Then you can again extend this to continuous functions, and this proves (A). And for the second question: if you take any polynomial function, you just have to sum terms like these. So (B) implies that $\mathrm{Tr}(P(X^n))-E[\mathrm{Tr}(P(X^n))]$ converges towards some Gaussian variable, with some covariance which depends on $P$. I should have said that this Gaussian is centered. In fact, if you look at GOE matrices — if you take real entries — you will not get something centered; but when you take GUE matrices, you get a centered Gaussian process. OK, so is there any question?
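(To see the map-counting statement in action, here is a small enumeration sketch — my own illustration, under the labelled-half-edge convention just mentioned. It lists all matchings of the $2k$ half-edges of one vertex, computes the genus of each gluing from Euler's formula $2-2g=V-E+F$, and checks that the planar count is the Catalan number.)

```python
from math import comb

def matchings(darts):
    """All perfect matchings of a list of half-edges (darts)."""
    if not darts:
        yield []
        return
    first, rest = darts[0], darts[1:]
    for j in range(len(rest)):
        for m in matchings(rest[:j] + rest[j + 1:]):
            yield [(first, rest[j])] + m

def genus_counts(k):
    """Count matchings of the 2k half-edges of one vertex by genus,
    using 2 - 2g = V - E + F with V = 1, E = k, and F = number of
    cycles of (rotation around the vertex) composed with the matching."""
    counts = {}
    for m in matchings(list(range(2 * k))):
        alpha = {}
        for a, b in m:
            alpha[a], alpha[b] = b, a
        seen, faces = set(), 0
        for d in range(2 * k):
            if d not in seen:
                faces += 1
                while d not in seen:
                    seen.add(d)
                    d = (alpha[d] + 1) % (2 * k)   # follow the face
        g = (1 + k - faces) // 2
        counts[g] = counts.get(g, 0) + 1
    return counts

for k in range(1, 5):
    c = genus_counts(k)
    assert c[0] == comb(2 * k, k) // (k + 1)   # m_0(2k) = k-th Catalan number
    print(2 * k, c)   # e.g. degree 4 gives {0: 2, 1: 1}
```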
Or can I go on to the proof of this kind of thing using the Dyson-Schwinger equations? Yes? Here? Yeah, OK. [A question about the embedding.] So the faces here — well, I'm very bad at drawing; you should ask somebody doing geometry. But you can think that I draw this vertex on this surface, and when I do that, it's properly embedded in the sense that the edges do not cross anymore. Clearly, when I embed it in the plane, it's not properly embedded, because the edges are going to cross. But the point is that you can always draw such a graph on a surface in such a way that the edges do not cross. And then of course you could imagine drawing it on a much more complicated surface, eventually with something like this, but this would be stupid, because you have no reason to do that: the genus of the graph is really the minimal genus of a surface in which you can draw it. So you should not do that. [Another question.] Yes, these are the Catalan numbers — I do not remember the formula offhand, I'm sure that lots of people here do: the $k$-th one is $\frac{1}{k+1}\binom{2k}{k}$. And you can compute the genus by Euler's formula,
$$2-2g=\#\{\text{vertices}\}-\#\{\text{edges}\}+\#\{\text{faces}\}.$$
So here there is only one vertex; there, there are two. The faces are the parts that you get when you cut the surface along the edges of the embedding. And here you had $6$ half-edges, but after the matching you have only $3$ edges. [A question about $C_{k,l}$.] So for $C_{k,l}$ — maybe I can draw this again — you have, let's say, two vertices, of degree $k$ and $l$; the total number of half-edges needs to be even so that this is non-zero. And I have these labelled half-edges, and I am going to match them so that I get a connected graph — for instance, something like this. And the matching has to be planar to contribute to this number: I am counting all the possible matchings such that the result is planar.

OK, so now let me discuss how to prove that. The original way was simply to express all of this in terms of the Gaussian entries, just by using the formula for the trace. You could also do that here and compute the moments — actually, the idea will be to compute the moments of this guy and show they converge to the moments of that guy; and again, since the limit is a Gaussian variable, convergence of moments implies convergence in law. So you could do all this by just expanding everything in terms of the entries. But as I said, I want to generalize this strategy to more complicated systems, and then there is no a priori description in terms of independent entries. So I will show you how to prove this by using the Dyson-Schwinger equations. OK, so first I have to say what the Dyson-Schwinger equations are in this case.

Lemma. If I denote my moments by $m_n(k)=E[\frac{1}{n}\mathrm{Tr}((X^n)^k)]$, then
$$m_n(k)=\sum_{l=0}^{k-2}E\Big[\frac{1}{n}\mathrm{Tr}\big((X^n)^{l}\big)\,\frac{1}{n}\mathrm{Tr}\big((X^n)^{k-l-2}\big)\Big].$$
So that's the first one. For the more general one, I will set $Y^n(l)=\mathrm{Tr}((X^n)^l)-E[\mathrm{Tr}((X^n)^l)]$.
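(A quick Monte Carlo sanity check of this first equation — my own sketch, reusing `sample_gue` from above. For the GUE the identity is exact in $n$, so both sides should match within sampling error.)

```python
import numpy as np

def ds_sides(n, k, trials, rng):
    """Monte Carlo both sides of
    m_n(k) = sum_{l=0}^{k-2} E[(1/n)Tr X^l * (1/n)Tr X^{k-l-2}]."""
    lhs = rhs = 0.0
    for _ in range(trials):
        x = sample_gue(n, rng)
        t = [np.trace(np.linalg.matrix_power(x, l)).real / n
             for l in range(k + 1)]
        lhs += t[k]
        rhs += sum(t[l] * t[k - l - 2] for l in range(k - 1))
    return lhs / trials, rhs / trials

print(ds_sides(200, 4, 200, np.random.default_rng(1)))  # both close to 2
```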
Then if I look at the expectation of $\frac{1}{n}\mathrm{Tr}((X^n)^{k_1})$ times the product of the $Y^n(k_i)$, $i=2,\dots,p$ — of course I will not have room to write it here, so let me go over there, otherwise you will not be able to read it — so for any choice of integers $k_1,\dots,k_p$:
$$E\Big[\frac{1}{n}\mathrm{Tr}\big((X^n)^{k_1}\big)\prod_{i=2}^p Y^n(k_i)\Big]=\sum_{l=0}^{k_1-2}E\Big[\frac{1}{n}\mathrm{Tr}\big((X^n)^{l}\big)\,\frac{1}{n}\mathrm{Tr}\big((X^n)^{k_1-l-2}\big)\prod_{i=2}^p Y^n(k_i)\Big]+\frac{1}{n}\sum_{i=2}^p k_i\,E\Big[\frac{1}{n}\mathrm{Tr}\big((X^n)^{k_1+k_i-2}\big)\prod_{j\neq i}Y^n(k_j)\Big]$$
(the factor $k_i$ comes from the $k_i$ places where you can differentiate the trace, and the extra $1/n$ makes this term lower order). OK, so what are these equations? I have two sets of equations, which are equations on the expectations of products of my observables — all these traces of powers of $X$ — and what I'm giving you with these equations are relations involving only the expectations of these observables. The point which makes the GUE much easier than any other system is that this is somehow a recursive equation: if I look at the total degree in $X^n$, here I have total degree $\sum k_i$, and on the right-hand side I have lower degree — if I sum all the degrees, I get $\sum k_i-2$, and here as well. So you see it's actually an inductive equation on the expectations of the moments, and in fact it defines all these moments uniquely, just because it's inductive. So it's very natural that I am able to analyze it in the end to show you this type of result. OK, however, what I'm going to do now is to show you how to analyze it asymptotically, because this is somehow the way I will proceed in other, more complicated, situations.

So now the proof of these Dyson-Schwinger equations: it's integration by parts. And to do that, you just have to notice that when you have GUE entries — so if you take the entry $X_{ij}$ and any kind of smooth function $f$ of all the entries — then
$$E\big[X_{ij}\,f(X)\big]=\frac{1}{n}\,E\Big[\frac{\partial f}{\partial X_{ji}}\Big].$$
OK, so you have to remember that this $X_{ij}$ is a complex Gaussian with variance $1/n$. If you have just one real Gaussian variable, it's well known that Gaussian variables are characterized by the integration by parts formula $E[xf(x)]=\sigma^2E[f'(x)]$ — that's the usual thing. Here you just have to be careful that on the left you have $ij$ and on the right $ji$, just because the entry is complex. And when it's real, on the diagonal, it's still true because then $ij$ and $ji$ coincide — and no, actually you don't get a factor one half in this case. OK, so that's the formula of integration by parts. So how can I use it to prove, for instance, my first formula? I want to get an equation for this guy. So I take a function $f$ which is smooth — say with compact support; any kind of nice smooth function. If you don't like the formula written like this, you just should think that, because here you have the derivative of the Gaussian density, one-dimensional integration by parts gives
$$\int f'(x)\,e^{-\frac{x^2}{2\sigma^2}}\,dx=\frac{1}{\sigma^2}\int x\,f(x)\,e^{-\frac{x^2}{2\sigma^2}}\,dx;$$
the only part you might be worried about is the boundary term, and when you go from $-\infty$ to $+\infty$, it's just zero.
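(One can convince oneself of the scalar identity numerically; a throwaway sketch of mine, with an arbitrary test function.)

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 0.5
x = rng.normal(scale=np.sqrt(sigma2), size=1_000_000)

f  = lambda t: np.sin(t) + t ** 3       # any smooth test function
fp = lambda t: np.cos(t) + 3 * t ** 2   # its derivative

# E[x f(x)] = sigma^2 E[f'(x)]: both sides agree up to Monte Carlo error
print(np.mean(x * f(x)), sigma2 * np.mean(fp(x)))
```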
So the matrix formula is just this, written in terms of the entries — and that's important, because actually we will do a lot of integrations by parts. So now, if I want to apply it to get a formula for this, what I do is just write
$$E\Big[\frac{1}{n}\mathrm{Tr}\big((X^n)^{k}\big)\prod_l Y^n(k_l)\Big]=\frac{1}{n}\sum_{i,j}E\Big[X_{ij}\,\big((X^n)^{k-1}\big)_{ji}\prod_l Y^n(k_l)\Big],$$
and now I am going to do integration by parts with respect to this entry $X_{ij}$. So what I get is — I put the sum outside, and I have the $1/n$ —
$$\frac{1}{n^2}\sum_{i,j}E\Big[\frac{\partial}{\partial X_{ji}}\Big(\big((X^n)^{k-1}\big)_{ji}\prod_l Y^n(k_l)\Big)\Big].$$
So now, to conclude, I need to compute these derivatives. OK, so it's not very complicated to compute these derivatives, because you're just dealing with polynomials in your random variables. If you try to differentiate a power of the matrix, what you will get is just a sum over the places in the polynomial where you differentiate:
$$\frac{\partial\,(X^{l})_{st}}{\partial X_{ji}}=\sum_{m=0}^{l-1}(X^{m})_{sj}\,(X^{l-m-1})_{it}.$$
So we should use that to compute the derivative of this guy and the derivative of that guy. For the first one, applied with $s=j$ and $t=i$,
$$\frac{\partial\,(X^{k-1})_{ji}}{\partial X_{ji}}=\sum_{m=0}^{k-2}(X^{m})_{jj}\,(X^{k-m-2})_{ii},$$
and when you sum over $i$ and $j$, this is just a product of traces. And when you differentiate a trace — you have to think that you're taking the sum of the derivatives over the diagonal, so maybe I should write it — you use the same formula, sum over the diagonal index, and you can carry out the summation; what you get is just
$$\frac{\partial\,\mathrm{Tr}(X^{l})}{\partial X_{ji}}=l\,(X^{l-1})_{ij}.$$
And so, when you put these two formulas inside here, you get exactly the formula of the lemma. The first term is coming from the differential of $(X^{k-1})_{ji}$, because you are going to sum over $j$ and $i$; and the second term is going to come from differentiating the $Y^n(k_i)$'s, because you have to remember that you are multiplying by this guy, $(X^{k-1})_{ji}$, and the sum over $i,j$ then produces $k_i\,\mathrm{Tr}(X^{k+k_i-2})$. OK, so it's just differentiation. And actually, another interesting thing is to do this when you have several different independent random matrices, and you take, instead of $X^l$, words in these several matrices; you can do exactly the same kind of computation in this case — the several-matrix case is exactly the same.
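(The two differentiation formulas are easy to verify by finite differences on a generic matrix — non-Hermitian, so the entries really are independent. A small sketch of my own, with arbitrary sizes and indices.)

```python
import numpy as np

def dpow(x, l, j, i):
    """Entrywise d(X^l)/dX_{ji}: sum_m outer((X^m)[:, j], (X^{l-m-1})[i, :])."""
    out = np.zeros_like(x)
    for m in range(l):
        out += np.outer(np.linalg.matrix_power(x, m)[:, j],
                        np.linalg.matrix_power(x, l - m - 1)[i, :])
    return out

rng = np.random.default_rng(3)
n, l, j, i, eps = 5, 4, 1, 3, 1e-6
x = rng.normal(size=(n, n))
e = np.zeros((n, n)); e[j, i] = 1.0
fd = (np.linalg.matrix_power(x + eps * e, l)
      - np.linalg.matrix_power(x - eps * e, l)) / (2 * eps)
print(np.abs(fd - dpow(x, l, j, i)).max())         # ~ 1e-9: formula checks out
print(np.trace(dpow(x, l, j, i)),                  # d Tr(X^l)/dX_{ji} ...
      l * np.linalg.matrix_power(x, l - 1)[i, j])  # ... equals l (X^{l-1})_{ij}
```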
OK, so now I still have twenty minutes, and I want to show you how to analyze these equations. Everybody understood the derivation? It's OK? So how do you analyze these equations? Well, the first thing is to see that, as I said, in this case the Dyson-Schwinger equations really give you an equation with a unique solution. In particular, you can derive two a priori estimates, which are crucial. Lemma (concentration and compactness): for all $k$ there exist finite constants $C_k$ and $D_k$, independent of $n$, such that you can bound your moments, $|m_n(k)|\le C_k$, and you can also bound the centered moments, $|E[\prod_i Y^n(k_i)]|\le D_{\sum k_i}$. The first bound is somehow the compactness, and the second is a concentration of measure, because it shows you that the trace is very close to its expectation in moments. And the proof is very easy: it's just by induction over $k$, respectively over $\sum k_i$. So how do you do that? You look first at the case where this is zero or one. If it's zero or one, it's very easy: $C_0=1$, $C_1=0$, and $D_0=D_1=0$. And then you assume you have shown these bounds for all $k$, resp. $\sum k_i$, smaller than $K$, and you put them back into your equations. And what you see is that you can recenter, so what you get is a covariance. OK, so maybe I can show that. From the first equation, writing $\frac{1}{n}\mathrm{Tr}((X^n)^l)=m_n(l)+\frac{1}{n}Y^n(l)$, you get
$$m_n(k)=\sum_{l=0}^{k-2}\Big(m_n(l)\,m_n(k-l-2)+\frac{1}{n^2}\,E\big[Y^n(l)\,Y^n(k-l-2)\big]\Big).$$
And again, you see that you are doing the induction on the sum of the degrees, which here is reduced by two. So if you have proven these estimates previously, this covariance is smaller than $D_{k-2}$, and the other term is bounded by $\sum_l C_l\,C_{k-l-2}$. So by induction, $m_n(k)$ is bounded: it is smaller than $\sum_{l}C_l\,C_{k-l-2}+(k-1)D_{k-2}$, which I put to be $C_k$. And similarly for the centered moments: you write $E[\prod_{i\ge 1}Y^n(k_i)]$ using the second equation — it is the right-hand side of that equation, with the product of the $Y^n(k_i)$, minus $m_n(k_1)$ times the expectation of the product, plus the extra term with the $\frac{k_i}{n}$ factor — and now you just write on the right-hand side that each $\frac{1}{n}\mathrm{Tr}((X^n)^k)$ is just $m_n(k)+\frac{1}{n}Y^n(k)$. And you can see that the right-hand side only depends on quantities that you have already bounded, so by induction this is also bounded.

All right, so that's good. In the next examples I will discuss, it will not be possible to deduce this kind of bound from the Dyson-Schwinger equations themselves, but we will have other tools to do it. But once we have these bounds, the idea is that we can analyze the equations asymptotically. OK, so how do we do that? Well, if I come back here: once I have this uniform bound on the covariances, I know that this term is going to be small. So — OK, I have no more blackboards, it's terrible — I know that asymptotically I will have an equation for $m_n$:
$$m_n(k)=\sum_{l=0}^{k-2}m_n(l)\,m_n(k-l-2)+O\Big(\frac{1}{n^2}\Big).$$
So now I can prove by induction that this is going to converge. I know that $m_n(0)=1$ and $m_n(1)=0$, and by induction, $m_n(k)$ is going to converge to $m_k$, the solution of
$$m_k=\sum_{l=0}^{k-2}m_l\,m_{k-l-2},\qquad m_0=1,\ m_1=0.$$
And there exists a unique solution — that's the point too, that you have a unique solution. And now, if you want, you can find out that this is the number of planar maps, because it satisfies the same induction relation.
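(The limiting recursion is trivial to iterate; a few lines of my own confirming that the limits are the Catalan numbers, i.e. the moments of the semicircle law.)

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def m(k):
    """Limiting moments: m_0 = 1, m_1 = 0, m_k = sum_l m_l m_{k-l-2}."""
    if k <= 1:
        return 1 if k == 0 else 0
    return sum(m(l) * m(k - l - 2) for l in range(k - 1))

print([m(k) for k in range(9)])   # [1, 0, 1, 0, 2, 0, 5, 0, 14]
assert all(m(2 * k) == comb(2 * k, k) // (k + 1) for k in range(1, 8))
```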
So this $m_k$ is $m_0(k)$, and the way you can see that $m_0(k)$ satisfies the same induction relation is by looking at your description of $m_0(k)$: you want to count the number of matchings of the half-edges of one vertex of degree $k$ such that the result can be drawn on the sphere. So what you do is look at the place where the root half-edge is matched, and then what you see is that once you have matched the root, you obtain two planar maps: one with a vertex of degree $l$, and one with the remaining half-edges, a vertex of degree $k-l-2$. So $m_0(k)$ has to satisfy
$$m_0(k)=\sum_{l=0}^{k-2}m_0(l)\,m_0(k-l-2).$$
OK, so this is just to say that this limiting equation has a unique solution, and that it is also the equation satisfied by these numbers $m_0(k)$. OK, now we can do the same thing for the covariance, for instance. So here I started to write the equation — I think I put an extra $1/n$, sorry. From the general equation with $p=2$, multiplying through by $n$,
$$E\big[Y^n(k)\,Y^n(l)\big]=\sum_{m=0}^{k-2}\frac{1}{n}\,E\big[\mathrm{Tr}\big((X^n)^{m}\big)\,\mathrm{Tr}\big((X^n)^{k-m-2}\big)\,Y^n(l)\big]+\frac{l}{n}\,E\big[\mathrm{Tr}\big((X^n)^{k+l-2}\big)\big]$$
— and here is the $l$ I forgot; note the last term is just $l\,m_n(k+l-2)$. There is no term with the product of the two expectations, because $Y^n(l)$ is centered, so that is zero. And then what you do is recenter these guys, writing $\mathrm{Tr}((X^n)^{m})=Y^n(m)+n\,m_n(m)$, to get something which depends only on the $Y$'s and the $m_n$'s:
$$E\big[Y^n(k)Y^n(l)\big]=\sum_{m=0}^{k-2}\Big(m_n(m)\,E\big[Y^n(k-m-2)Y^n(l)\big]+m_n(k-m-2)\,E\big[Y^n(m)Y^n(l)\big]+\frac{1}{n}E\big[Y^n(m)Y^n(k-m-2)Y^n(l)\big]\Big)+l\,m_n(k+l-2).$$
The term with the two expectations times $E[Y^n(l)]$ is zero, because this is centered; and the third-order term is $O(1/n)$ by the a priori bounds. So what I see is that I have the covariance again, but with a smaller exponent, and so by induction I can again show the convergence. What we get is that $E[Y^n(k)Y^n(l)]$ converges to some quantity $C_{k,l}$ which satisfies — these two terms are actually the same, because you can symmetrize the sum —
$$C_{k,l}=2\sum_{m=0}^{k-2}m_m\,C_{k-m-2,l}+l\,m_{k+l-2}.$$
It's not very nice to write, but now again you can play the same game and see that the number of planar maps that you can build over two vertices of degree $k$ and $l$ satisfies this very same equation. So here again, when $k+l$ is zero or one, you just get zero, OK? So that's it — and now you can continue. So maybe — I think my time is more or less up; oh no, I still have five minutes — you can continue, for instance, if you want to compute $m_n(k)$ minus its limit $m_k$. Maybe I'm not going to do it, but imagine you want to do that: you take $m_n(k)$, you subtract the limit, you recenter with respect to the limit, and again by induction you prove that this difference, once you normalize it by $n^2$ — which is the size of the error term — converges; and the limit satisfies an induction relation, which is just the induction relation also satisfied by maps, this time of genus one.
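(The covariance recursion can be iterated the same way — my own sketch, reusing `m` from the previous one. The first values, e.g. $C_{1,1}=1$ and $C_{2,2}=2$, can be cross-checked by hand from the Gaussian entries.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def cov(k, l):
    """C_{k,l} = 2 sum_s m_s C_{k-s-2,l} + l m_{k+l-2}; zero if k or l is 0."""
    if k == 0 or l == 0:
        return 0
    return (2 * sum(m(s) * cov(k - s - 2, l) for s in range(k - 1))
            + l * m(k + l - 2))

print(cov(1, 1), cov(2, 2), cov(3, 3), cov(2, 4))  # 1 2 12 8
```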
And now you can continue like this — it's done in the notes — to show that if you look at general polynomials in the $Y^n(k)$, they will converge towards the corresponding moments of Gaussian variables. OK, so I think you got the idea. It's just a bit painful to do all these computations on the blackboard, but what I wanted to show you is that it's very easy in this case — it's just induction. And still, it allows you to get quite nice results in terms of enumeration of maps, and to see why the topological recursion appears in this context: the topological recursion is, in a very basic form, this type of induction relation, where the enumeration of maps of any genus is driven by this kind of equations. And so next time I want to show you how the very same ideas generalize to much more complicated questions — namely, next time I think I will do the perturbative case, where you don't have Gaussian matrices, but something which is not too far from them, and where we can use the same kind of ideas. Thank you very much.