Did you like it? Yes? OK, so today I will continue my last class on the limit of matrix models in the perturbative regime. Let me recall what we proved yesterday. We considered a matrix model of this form, and what we proved yesterday are the first-order asymptotics of this model. We assume that the Hessian of V — if I look at this quantity — is bounded below by c times the identity; this is what allows us to use all the coercivity inequalities. And we assume that V can be written as the Gaussian potential plus a perturbation. OK. And then what we proved is that — maybe I should write something like this — this implies that there exists epsilon_c, depending on c and positive, such that if epsilon is smaller than epsilon_c and these two conditions are satisfied, then you have convergence of all the moments towards limits which, as I showed you, satisfy a recursion relation. Again, I showed this when you have only one matrix, but it is really the same when you have several, and it is done in the notes. And then you have this representation: if q is a sum of monomials — so these are monomials — the limit is the sum, over all families (q_k) for which the alpha_k are non-zero, of the product over k of (-alpha_k)^{q_k} divided by q_k factorial, times the number of planar maps with one star of type x_{i_1} ... x_{i_k} and q_k stars of type k. Okay, and I also noted that this implies convergence of the free energy, just by differentiation: if I denote it F(epsilon), it will be the integral from zero to epsilon of tau_t(q) dt, as N goes to infinity.
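Schematically — this is my own compact transcription of the board, with one matrix for readability and the precise constants as in the notes — the recalled result is:

```latex
% matrix model and hypotheses
Z_N^{V_\epsilon} = \int e^{-N \operatorname{Tr} V_\epsilon(X)}\, dX,
\qquad
V_\epsilon(x) = \frac{x^2}{2} + \epsilon\, q(x),
\quad q = \sum_k \alpha_k x^k,
\quad \operatorname{Hess} V_\epsilon \ge c\, \mathrm{Id}.

% first-order asymptotics, for epsilon < epsilon_c(c):
\lim_{N \to \infty} \mathbb{E}\!\left[ \tfrac{1}{N} \operatorname{Tr} P(X) \right]
  = \tau_\epsilon(P)
  = \sum_{(q_k)} \prod_k \frac{(-\epsilon \alpha_k)^{q_k}}{q_k!}
    \,\#\{\text{planar maps: one star of type } P,\ q_k \text{ stars of type } x^k\},

% free energy, by integrating in epsilon (sign depends on conventions):
\frac{1}{N^2} \log \frac{Z_N^{V_\epsilon}}{Z_N^{V_0}}
  \;\longrightarrow\; F(\epsilon) = -\int_0^\epsilon \tau_t(q)\, dt .
```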
Okay, so what I just want to emphasize is that this result was really quite important in physics, but also in operator algebras, because it makes a relation between the combinatorics of these maps, which is really nontrivial, and random matrices. And so it was used for several purposes. For instance, people were interested in the combinatorics of these maps, for instance to do the Ising model on random graphs. The Ising model on a random graph would correspond to taking vertices which are white and vertices which are blue, having some relations between them, and then creating planar maps based on these two ingredients. So a planar map, again, is something like this — you can imagine what goes here; this is one. So the Ising model would be to compute the number of maps of this type where you are given a number of white vertices, a number of blue vertices, and a number of connections between them. It is called the Ising model because you have two different states, and you are interested in the size of the interface between these two states. This corresponds, if you remember what we said yesterday, to Q which will be X1 to the fourth plus X2 to the fourth — these are the quartic stars for the two colors — and then you have some interaction, which will be X1 X2. And this quantity, which can also be seen as the sum of the product of minus alpha_Q times these numbers — these quantities could be computed, I think by Mehta, by using this relation and by being able to compute the partition function of the matrix model. Nowadays there are other ways to compute these kinds of numbers, but I think that was the first way it was done. Another application of this kind of relation, which I used for instance, is that sometimes you have this kind of enumeration question; what I was interested in was to define a tracial state based on this kind of combinatorics.
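As a quick numerical illustration of this map–matrix dictionary (my own toy example, not from the lecture): for the pure Gaussian model the limiting moments count planar maps with a single star; for instance (1/N) E Tr X^4 tends to 2, the number of planar pairings of a single 4-valent star (the Catalan number C_2).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300
# GUE matrix normalized so that E|X_ij|^2 = 1/N
A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
X = (A + A.conj().T) / np.sqrt(2) / np.sqrt(N)

m2 = np.trace(X @ X).real / N          # limit: Catalan C_1 = 1 (one planar pairing)
m4 = np.trace(X @ X @ X @ X).real / N  # limit: Catalan C_2 = 2 (two planar pairings)
print(m2, m4)
```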
Okay, so this was some work with Dima Shlyakhtenko and Vaughan Jones. We wanted to define a trace. So what is a trace? A trace is just a linear functional on polynomials with certain properties. One is positivity: the trace of P P* is non-negative, where P* is just the adjoint — imagine that you plug in self-adjoint variables; this defines P*. And you need tau of 1 equal to 1, and tau of PQ equal to tau of QP. This is what is called a tracial state. We were interested to know whether this sum — the sum we had, with the specific choice we were interested in — was a tracial state, and by seeing it as a limit of matrix models it was easy to see that it was. But when you are given just this kind of sum, it is highly nontrivial. Okay, so that was one of the uses; there are many. Of course there is also, in the work of Eynard, understanding the topological recursion — computing interesting partition functions; and showing that sums over maps define a tracial state. Those are a few of the applications I wanted to mention, but there are many more, and I hope I convinced you that the proof of the convergence was not very sophisticated — there are not so many changes to it if you look at colored or many-matrix models. Okay, so today I would like to continue by showing you the next order. I will not prove the central limit theorem — I think it is done in the notes. What I would like to prove now — then we will go to non-perturbative results — is a theorem under the same hypotheses: epsilon smaller than some epsilon_c. What we want to prove this time is that if we look at this quantity minus its limit, rescaled by N, it converges as N goes to infinity to some tau_1 of x_{i_1} ... x_{i_k}. And as a corollary, as yesterday, we can see that if we look at the log of the partition function minus N squared times F_0(epsilon), this will converge towards F_1(epsilon), which is going to be this:
namely, minus the integral from 0 to epsilon of tau_1^t(q) dt. Why? Because it has to be said — and also I forgot this, and it is important: you cannot deduce this kind of estimate directly. When you differentiate the log of Z_N^{V_epsilon} with respect to epsilon, what you find is N squared times the integral of the expectation, under the model V_t, of q. So here it is clear that if you can do the expansion in this model, where you have the factor t in front of q, then you can see that this is bounded, because we have this good control, so there is no problem in exchanging the limit and the integral. Okay — yes, there is no choice of the i_j in 1, ..., M. But again, I will show you the proof when M is equal to 1, because otherwise it is a bit more tedious to have all these indices floating around. Okay, so is there a question about the result? Yes. So the point is that there is a small epsilon. What I mean is that if you look at one matrix, I assume this condition, so you know that under this condition the support of the equilibrium measure of the eigenvalues will be connected. So for one matrix we are in that situation, and I will show you that you can do it in more generality; and for several matrices it is the right, natural condition. As I said, this result was extended recently by Jacqueline Dabrowski without this condition on epsilon, but keeping the convexity — but only this result: with several matrices, we do not know the next order under the convexity condition alone. So certainly for one matrix this is a very small subset of the hypotheses under which we know we can take the limit of the expansion, which I will discuss afterwards. Okay, so how did we prove this? I remind you that what we proved were two companion concentration estimates.
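Parenthetically, the epsilon-differentiation identity from a moment ago, in formulas (my reconstruction, with the convention Z_N^{V_t} = ∫ exp(−N Tr V_t) dX and V_t = V_0 + t q):

```latex
\partial_t \log Z_N^{V_t}
  = -N\, \mathbb{E}_{V_t}\big[ \operatorname{Tr} q(X) \big]
  = -N^2\, \mathbb{E}_{V_t}\Big[ \tfrac{1}{N} \operatorname{Tr} q(X) \Big],
\qquad\text{hence}\qquad
\log \frac{Z_N^{V_\epsilon}}{Z_N^{V_0}}
  = -N^2 \int_0^\epsilon \mathbb{E}_{V_t}\Big[ \tfrac{1}{N} \operatorname{Tr} q(X) \Big]\, dt ,
```

and the uniform bound on the integrand is what justifies exchanging the large-N limit with the integral in t.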
First, we can bound this guy, where Y_k is just the trace of x^k minus its expectation: we had the a priori bound, and we also had a bound on the moments — it was smaller than, I think, something like 4^k, for k smaller than some power of N. And what we used to obtain the limit was a Dyson–Schwinger equation, which we will use again; so take M equal to 1. The Dyson–Schwinger equation was for the expectation of the trace of x^{k-1} times (x + epsilon q'(x)), times a product of the Y_l; here I will take only one factor Y_l in this equation. Ah, this is not a very good choice — let me add an l. So the relation was this one. To prove the previous result I did not need this product; to prove the next order I will use it, with one factor. And what I am going to do is show you the convergence of the covariance. It is always the same strategy: to get additional information on the moments where you have, let us say, p traces, you must first have information when you have p+1 traces, and then plug it into your equation — because in your Dyson–Schwinger equation, from this term, you always get one more trace. If you want to proceed by recursion you must first prove the convergence of the correlations: the claim is that if I take any k and l, this converges to some C_{kl}. So I look at this equation, and what I want is an equation on the moments of Y. I recenter, and if I do that it becomes an equation on covariances; and on the right-hand side — maybe I should put the tildes — I just recenter as well: I substitute the limit for all these guys, and then I have the expectation of the trace of x^{k+l-2} recentered, so I get Y_{k+l-2}. Ah, of course, I took l — yes, only finitely many.
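At epsilon = 0 (the pure Gaussian model) the limiting Dyson–Schwinger equation closes into the simple recursion m_k = sum over s of m_s m_{k-2-s} for the limiting moments m_k = tau(x^k), whose solution is the Catalan numbers. A toy check of this recursion (my own, not from the lecture):

```python
from math import comb

K = 12
m = [0] * (K + 1)
m[0] = 1  # tau(1) = 1
for k in range(1, K + 1):
    # limiting Dyson-Schwinger recursion at epsilon = 0:
    # tau(x^k) = sum_{s=0}^{k-2} tau(x^s) * tau(x^{k-2-s})
    m[k] = sum(m[s] * m[k - 2 - s] for s in range(k - 1))

# even moments are Catalan numbers (counting planar pairings of a 2p-valent star)
catalan = [comb(2 * p, p) // (p + 1) for p in range(K // 2 + 1)]
print(m[::2], catalan)
```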
Ah yes, I always use the same letters. Once, when I was entering the country, the officer — when I told him I was a mathematician — asked me why we do not use Chinese letters; and it is true that if you use only Latin and Greek, at some point you run out of letters. Okay, so when I recenter I get this guy; and when I recenter I get the same thing with this one, but it is symmetric, so I have two of them. And then what I have is the sum, and this time I have the product of the Y's, but with an extra 1/N in front, and then this guy. So this is a small term, because I know the moments are bounded. And this, as I have already shown, goes to what I denoted tau_epsilon — or, since I have only one matrix, mu_epsilon of x^{k+l-2}. If I look at the right-hand side, I can replace this, and again I get something small: I can replace it by the limit, plus a delta, plus 2 times the sum over s of this variance term, and that goes to 0. So what I see now are quantities that look good — covariances. The problem I would eventually have, compared with what I had when epsilon goes to 0, is that this equation on the covariances is not recursive anymore. Okay, so I put everything on the same side. What I have is the expectation of the trace of x^{k-1} times (x + epsilon q'(x)), times Y_l; and then I put this term here: it will be minus 2 times the sum — m, n is not very good notation, let me change it to s — minus 2 times the sum over s of mu_epsilon(x^s) times the trace of x^{k-s-2}, times Y_l. And what I have just shown is that this converges — I forgot the l just there — towards mu_epsilon of x^{k+l-2}. Okay, so what I would like to say is that I can put any polynomial here; this shows convergence for these particular combinations of polynomials, but not really for every polynomial. So what I want to do is view this as an operator applied to x^{k-1}.
In fact I will multiply both sides by k, and in fact I will even introduce a notation — let me give you the formula directly: Psi_epsilon P will be P'(x) times (x + epsilon q'(x)), minus 2 times the integral of (P'(x) - P'(y))/(x - y) against d mu_epsilon(y). Okay, so there is a small miracle here. I want to understand this guy as a function of a polynomial P, and what you can check is that if I take the polynomial P which is x^k, then the integral of (P'(x) - P'(y))/(x - y) against d mu_epsilon(y) is exactly k times the sum over s of mu_epsilon(x^s) x^{k-s-2}; it is just writing it in a more general way. And so what you see I have proven for you is that if I put here the trace of Psi_epsilon(P), times Y_l — and actually I can now also put the trace of some R minus its expectation — then this converges to... what? These are just the derivatives of my monomials: the integral of P'(x) R'(x) d mu_epsilon(x); the formula over there is exactly that, for P which is x^k and R which is x^l. Okay, so why did I write it like this? Because what you see now is that what you really want is to have any polynomial, so you want to invert this operator. What you would like to say is that this implies that if I look at any covariance, it will go to the integral of (Psi_epsilon^{-1} P)'(x) times R'(x) d mu_epsilon(x). That is really the goal, and it is always this step that is crucial in solving the Dyson–Schwinger equations: inverting this operator, which we call the master operator. Once we have done that — once we have shown that all the terms which go to 0 still go to 0 after applying this inverse — then you are good. Okay, I will not do that, but you can imagine why it works: the idea is that epsilon is small.
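The divided-difference identity used above can be checked numerically at epsilon = 0, where mu_0 is the semicircle law. This is a sanity check of my own; the evaluation points x are taken outside the support [-2, 2] so the integrand is smooth.

```python
import numpy as np

# semicircle density on [-2, 2] (the equilibrium measure at epsilon = 0)
y = np.linspace(-2.0, 2.0, 20001)
dy = y[1] - y[0]
rho = np.sqrt(np.maximum(4.0 - y**2, 0.0)) / (2.0 * np.pi)
moments = [float(np.sum(y**s * rho) * dy) for s in range(8)]  # m_0 ~ 1, m_2 ~ 1, m_4 ~ 2

k = 5
errs = []
for x in (2.5, 3.0, -2.7):
    # left side: int (P'(x) - P'(y)) / (x - y) dmu(y),  for P = x^k
    lhs = float(np.sum((k * x**(k - 1) - k * y**(k - 1)) / (x - y) * rho) * dy)
    # right side: k * sum_{s=0}^{k-2} m_s x^{k-2-s}
    rhs = k * sum(moments[s] * x**(k - 2 - s) for s in range(k - 1))
    errs.append(abs(lhs - rhs))
print(errs)
```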
This is also what one does in the several-matrices case. When epsilon is equal to 0, this operator is clearly invertible, because you can obtain essentially any polynomial in this form: this term has lower degree, and from x P'(x) you can obtain any polynomial which vanishes at 0 — so you have a triangular matrix, somehow. Then you have to do something with this term, because it spoils your inversion, and what you are going to do is just say: well, it will not be invertible in the world of polynomials because of this, but you can invert it on a space of analytic functions. So the idea is to define the norm of P as the sum of |alpha_q| times A to the degree of q, where P is the sum of alpha_q q; meaning that if this norm of P is finite, then, viewing P as an analytic series, its radius of convergence is greater than A^{-1}. And the theorem will be that there exists epsilon_0 positive such that for all epsilon smaller than epsilon_0 there exists A — A is typically greater than 2 — such that Psi_epsilon is invertible on the closure of the polynomials under this norm. I am not going to do this computation, because it is done in the notes in the more general case of several matrices, and for the case of one matrix I will show you a much more elegant proof tomorrow. But the idea is really that when epsilon is equal to 0 this is clearly invertible, and when you add the epsilon-term, the corresponding operator is bounded on this space; because of the epsilon you can then invert the sum of the two just by iteration. Okay, so this is done in the notes, but once you have that, I think you will believe me that you have this result. Can I continue? Okay, so let me go one step further: how to deduce the next order from this convergence. You need to go back to the Dyson–Schwinger equation.
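Before moving on: the "invert the sum by iteration" step above is just a Neumann series — if Psi_0 is invertible and the epsilon-part B is bounded, then (Psi_0 + eps B)^{-1} = sum over k of (-eps Psi_0^{-1} B)^k Psi_0^{-1} for eps small. A finite-dimensional toy version (stand-in matrices of my own, not the actual master operator):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
# D: stand-in for the epsilon = 0 operator -- triangular with nonzero
# diagonal (here 1, 2, ..., d), hence invertible
D = np.tril(rng.standard_normal((d, d)), k=-1)
D += np.diag(np.arange(1.0, d + 1))
B = rng.standard_normal((d, d))           # stand-in for the bounded perturbation

Dinv = np.linalg.inv(D)
eps = 0.4 / np.linalg.norm(Dinv @ B, 2)   # makes the series ratio 0.4 < 1
M = -eps * Dinv @ B

# Neumann series: (D + eps B)^{-1} = sum_k M^k D^{-1}
approx = np.zeros((d, d))
term = np.eye(d)
for _ in range(80):
    approx += term @ Dinv
    term = term @ M
err = np.abs(approx - np.linalg.inv(D + eps * B)).max()
print(err)
```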
So you have the trace of x^k — this time you look at this, which was the expectation of the sum — and you have to add the expectation of the trace of x^{k-1} q'(x), maybe with a minus epsilon. And what you do, again, is subtract the limit. If you do that — you subtract the limit here — what you see is that you get the expectation of the trace of Psi_epsilon of the polynomial x^k, which is going to be 1 over N times the sum of the expectations of the recentered quantities. Okay, so you put this on the other side and you recenter this guy. Here I cheated just a little bit: you should also have the difference between the limit and the expectation. Okay, let me just write it like this: if you use the equation that we had for the limit, and you do some algebra, you will just find this equality. And then you can multiply by N squared — yes, by N squared; suddenly I hesitate: should it be N or N squared? I divided by 1 over N, so it was 1 over N squared — okay, so that is the right formula. And again, what you can see is that on the right-hand side, you can show by induction that this goes to 0, and this goes to your covariance. So again you are going to invert this operator to get the formula for the limit: this will go to the sum over l of C_{lk}, and then you invert your operator to get the formula for tau_1. But to invert it, you need to express this in terms of your operator: so this is your covariance kernel, applied again to P'(x) and to (P(x) - P(y))/(x - y). So this is a covariance kernel applied to P and R, and what you can see, using this formula over there, is that tau_1 — which was the limit of your rescaled difference — is going to converge to C applied to (P(x) - P(y))/(x - y). Sorry, I am getting a bit lost in the notation, but I think the ideas are quite clear.
You just use your equation again, take away all the terms which were small thanks to your a priori estimates, and invert your operator; to do that you need, somehow, to identify everything in terms of polynomials. And yes — here is the subtraction: I suppressed the limit, so this is the limit of this guy; I already showed that this was the integral of any polynomial against the equilibrium measure. So I suppress the limit, and when I do that this expectation is actually non-zero, but you can see — it takes some algebra, but you can see — that when you subtract everything, the main term on the right is going to be given by the covariance. It is again the same trick: this other contribution is of lower order. Okay, so I admit I have been a bit fast. So the covariance C — where did I write C? The covariance is defined as a bilinear functional on polynomials. Now, on the right-hand side here — actually I forgot this on the right-hand side — I have to express how this right-hand side depends on the polynomial P, and the point is this: the sum over s of mu(x^s) x^{k-s-2}, when I multiply by k — I forgot to multiply by k, that is why it was a bit off — is exactly the formula I showed before, with (P(x) - P(y))/(x - y). Okay — sorry, in y. Before, when I apply C, I think of C as applied to a polynomial in x times a polynomial in y, and a compact way to see this is to express this ratio as such a polynomial in x and y. Now, what I proved was that tau_1^epsilon of P converges to this C applied to (P'(x) - P'(y))/(x - y). It is always a problem to do this kind of computation on the blackboard, but I hope it is more or less clear — at least the strategy: you cannot conclude simply by induction;
you have to invert your operator, so you have to rephrase everything in terms of polynomials. Here it is not very elegant, but tomorrow, when I do it for the matrix model, it will be clearer, because here it is not always completely easy to identify everything. Yes, it was a bit messy at the end. Do you understand, more or less, the strategy — the computation? Who wants more explanation? Maybe that is the best way to put it. Nobody — that is what I was expecting. Okay, but again, the strategy is always the same: you linearize everything around your limit; what you get are terms which are negligible because of your a priori controls; and at the end you always arrive at equations that depend on some operator. To be able to conclude that you have your result for all polynomials, you need to invert this operator — this is the master operator, and in all these questions it is always central. What we will see tomorrow is that, in the case where we do not have a small perturbation, inverting this operator actually requires off-criticality: the density has to vanish like a square root. That is why I was asking Ken about this, because it is always a central question in matrix models whether the density of the equilibrium measure decays like a square root, and this is where it comes in. Okay, so if there are no more questions, I will go to the one-dimensional matrix models — because, as I said, for several matrices we only have this kind of approach, which is a bit painful but effective. Okay: matrix beta ensembles. So now I look at the joint law of the eigenvalues, and eventually I can put some beta, and the question is to do the same but now with fewer assumptions on V. Of course, I remind you that in the case where beta is equal to 2, this is the joint law of the eigenvalues of the GUE when V is x squared — and you can add a potential, so it is the previous case.
When beta is equal to 1, this is the joint law of the eigenvalues of the GOE, where you have real entries; and in this case what I told you about before could also be done, but it is always a bit more complicated, because you have extra terms: you do not have an expansion in 1 over N squared but in 1 over N. So it becomes a mess after a while, and in the case beta equal to 1 it is always even more of a mess. But now, if V is general, we still have convergence to the equilibrium measure — this is the first result — but we will not be able to use the Dyson–Schwinger equation to prove it; we have a stronger tool, which is large deviations. The first theorem: I assume that V is continuous and that V goes to infinity fast enough — I think the condition is that V(x) over 2 log|x| is eventually greater than 1; this is to ensure that, somehow, your partition function is finite. Then there exists an equilibrium measure, a probability measure mu_V, such that for all f bounded continuous you have convergence towards the integral of f d mu_V; and furthermore mu_V is actually compactly supported. And the third theorem, which we will discuss: if V is sufficiently smooth — C^p for some p big enough — and mu_V has connected support and is off-critical, so that its density is a square root vanishing at the edges times some positive function h(x), then you have the central limit theorem — beta should be positive, of course: you look at the centered linear statistics, multiply by N, and they converge towards a Gaussian whose mean involves some measure mu_1. So, as I say, you can only take the order N for the first correction; and the log of Z_N^V over N squared will be F_0 plus 1 over N times F_1, and you can go further — I will discuss this — you can do an expansion of this guy, but this time it will be in powers of 1 over N. In fact you could also do the expansion to higher order, but I did not write it. Okay, and I will discuss the non-connected case later.
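For numerical experiments with general beta, the tridiagonal model of Dumitriu and Edelman is convenient. This is my own illustration, not from the lecture; the normalization below is the one I believe rescales the spectrum so the empirical measure approaches the semicircle law on [-2, 2] for the Gaussian potential.

```python
import numpy as np

def beta_hermite_eigs(n, beta, rng):
    # tridiagonal beta-Hermite model (Dumitriu-Edelman; normalization assumed):
    # diagonal N(0, 1); off-diagonal chi_{beta*(n-1)}, ..., chi_beta over sqrt(2)
    diag = rng.standard_normal(n)
    off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1)) / 2.0)
    T = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    # rescale so the limiting support should be [-2, 2]
    return np.linalg.eigvalsh(T) / np.sqrt(beta * n / 2.0)

rng = np.random.default_rng(0)
m2, edge = {}, {}
for beta in (1.0, 2.0, 4.0):  # GOE, GUE, GSE statistics
    lam = beta_hermite_eigs(500, beta, rng)
    m2[beta] = float((lam**2).mean())     # second moment of the semicircle: 1
    edge[beta] = float(np.abs(lam).max())  # edge of the support: about 2
print(m2, edge)
```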
But we will see that the central limit theorem is not true in the non-connected case — or not under this form: F_1 is something else, some other linear functional. Okay, so maybe in the five minutes left I will just say how we prove the first result, and this is a new concept we have not used so far: large deviations. The theorem is that, under the same assumptions as over there, the distribution of the empirical measure satisfies a large deviation principle with a good rate function, which is I of mu — I write it as the double integral of V(x)/2 plus V(y)/2 minus beta over 2 times log|x - y|, against d mu(x) d mu(y), minus the infimum of this quantity over all probability measures, so that I is non-negative. And what does this mean? I look at this object as a probability measure on the space of probability measures, equipped with the weak topology, and the statement is that the limsup, as N goes to infinity, of 1 over N squared times the log of the P_N^V-probability that the empirical measure belongs to some closed set F, is bounded by minus the infimum of I over F; and if O is open, you have the same thing with the liminf, bounded below by minus the infimum of I over O. And how do you get this type of conclusion? The point is that I is a good rate function — I should say that this means that the level sets are compact — and that I achieves its minimal value at a unique mu_V. That is the way we see the convergence, in probability. There are many other ways to see this convergence — I imagine that in Riemann–Hilbert techniques you have other ways to see it — but in probability we call this a large deviation principle. And why do you deduce this type of result from it? The point is that you take F — so the proof is to take F to be the complement of a ball around mu_V. In this case you see that the infimum of I over F will be strictly positive, because I has to achieve its infimum on this closed set, which does not contain its minimizer.
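The unique minimizer mu_V can be approximated numerically by minimizing a discretized version of the energy over weights on a grid. This is my own toy sketch, for beta = 2 and V(x) = x^2/2, where the minimizer is the semicircle law on [-2, 2]; the diagonal of the log-kernel is a crude regularization by the grid step.

```python
import numpy as np

n = 201
x = np.linspace(-3.0, 3.0, n)
dx = x[1] - x[0]
V = x**2 / 2
# log-interaction kernel; |x_i - x_j| regularized by dx on the diagonal
K = np.log(np.abs(x[:, None] - x[None, :]) + dx * np.eye(n))

w = np.full(n, 1.0 / n)                # initial guess: uniform weights
for _ in range(5000):                  # exponentiated (mirror) descent on the simplex
    grad = V - 2.0 * K @ w             # gradient of sum_i V_i w_i - sum_ij w_i w_j K_ij
    w = w * np.exp(-0.1 * grad)
    w /= w.sum()

m2 = float(w @ x**2)                   # second moment of the semicircle is 1
mass_outside = float(w[np.abs(x) > 2.4].sum())  # mass outside the support decays
print(m2, mass_outside)
```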
Indeed, because you just subtract the infimum, I is zero at its minimizer and strictly positive away from it. So then what you see is that for any such ball, this probability decays like the exponential of minus N squared times a constant: the probability that the distance between the empirical measure and mu_V is greater than delta will be like the exponential of minus N squared times c(delta), and you conclude by the Borel–Cantelli lemma, since these probabilities are summable. Maybe next time I will start by giving the arguments to prove this; it is not far from the Laplace method, actually — it is not very complicated. But I do not know — Tamara, do you think I should give the proof, or is it okay for most people to get just the heuristics? Because you told me not to go too fast, and I do not know whether this is well known in this community. Well, I will give the heuristics, and if there is a demand I will give the details — you can tell me before tomorrow. It is always the problem with details that you get lots of epsilons and, as you saw at some point today, it can become quite unclear. [Question from the audience.] If we have the convergence — what I mean is that if you just write the equation, in general it does not have a unique solution, so you cannot use it to get the convergence; you only know the solution is unique when you know that the support is connected. Thank you very much.