So roughly speaking, there are two methods in random matrix theory, at least in the way we look at these problems. Of course there are many other ways to approach random matrices, but what we have been doing rests on basically two objects: one is Green functions, and the other is dynamical ideas. The question is always that you want to handle some local property, and if you combine these two objects you get a quite effective method for many problems about random matrices. Since the work on random d-regular graphs already involved a lot of these Green function estimates and also part of the dynamical method, I thought one idea is to collect the dynamical ideas used in various parts of random matrix theory, and then look at the problems we tried to solve using them.

So here is the plan, or at least a few of the items. One is the matrix Brownian motion and its speed of convergence to local equilibrium. Another is that sometimes you can combine dynamical ideas with Green function estimates. I also find it amusing that you can use dynamical ideas inside the Green function comparison theorem, and I will give one example of this. Then, for eigenvectors, you can look at the eigenvector from a dynamical point of view. And the last thing is some related results on other dynamics. Many of the results in this lecture also hold under the four moment assumption of Tao and Vu; I will try to point this out as we go, but sometimes I might forget.

All right, so the matrix Brownian motion. You take a matrix H(t), where H is a real symmetric N x N matrix, and run the matrix Ornstein-Uhlenbeck process

    dH(t) = dB(t)/sqrt(N) - (1/2) H(t) dt,

where each component B_ij is an independent Brownian motion, subject to the symmetry constraint. One can solve this process explicitly, and you find that H(t) is exactly a linear combination of the initial data H_0 and a GOE piece: in distribution,

    H(t) = e^{-t/2} H_0 + sqrt(1 - e^{-t}) GOE.

So anything you can say about the eigenvalues of this flow tells you about the spectral behavior of H(t). If you write down the eigenvalue equations, you get the standard Dyson Brownian motion,

    d lambda_k = sqrt(2/(beta N)) dB_k + (1/N) sum_{l != k} dt/(lambda_k - lambda_l) - (lambda_k/2) dt,

with beta = 1 for the real symmetric case. The eigenvectors also satisfy an equation, and it is very complicated; you can already see a (lambda_k - lambda_l)^{-2} term in it, and attached to it there is another family of independent Brownian motions. Schematically,

    du_k = (1/sqrt(N)) sum_{l != k} dB_{kl}/(lambda_k - lambda_l) u_l - (1/(2N)) sum_{l != k} dt/(lambda_k - lambda_l)^2 u_k.

So the dynamics look very complicated, but there are two key facts. First, the Dyson Brownian motion has the GOE as equilibrium: here I take the GOE, so real symmetric matrices, though of course there are more general results, and the GOE eigenvalue distribution is the invariant measure for this dynamics. Second, if you look at the eigenvector equations, you find that the Haar measure on the orthogonal group is invariant under that dynamics. These are the two things we keep in mind.
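Since the solution of the flow is just this explicit interpolation, it is easy to play with numerically. Here is a minimal sketch; the function names, the normalization, the seed, and the diagonal toy initial data are my illustrative choices, not something from the talk:

```python
import numpy as np

def goe(n, rng):
    """GOE in the convention used here: off-diagonal variance 1/n,
    so the spectrum asymptotically fills [-2, 2]."""
    a = rng.standard_normal((n, n)) / np.sqrt(n)
    return (a + a.T) / np.sqrt(2.0)

def matrix_ou(h0, t, rng):
    """Sample H(t) for the matrix Ornstein-Uhlenbeck flow
    dH = dB/sqrt(n) - (1/2) H dt, using the explicit solution:
    in distribution, H(t) = e^{-t/2} H0 + sqrt(1 - e^{-t}) * GOE."""
    n = h0.shape[0]
    return np.exp(-t / 2.0) * h0 + np.sqrt(1.0 - np.exp(-t)) * goe(n, rng)

rng = np.random.default_rng(0)
h0 = np.diag(rng.uniform(-2.0, 2.0, size=500))  # some non-GOE initial data
ht = matrix_ou(h0, t=0.1, rng=rng)              # short time: small GOE component
```

For t large the GOE component dominates, which is just the invariance statement above; the interesting regime is the short times, where only local statistics have had a chance to equilibrate.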
And then everything we are trying to do is really just to study these two dynamics, and in particular the time to local equilibrium. You do not want to study the time to global equilibrium: that is easy to understand, but it is not very useful, because it typically takes a long time to reach global equilibrium. So you only study the time to local equilibrium. Now, if you understand this time to local equilibrium well, and you combine it with the Green function comparison method, then you can prove the universality of random matrices in many cases. So this is the general strategy.

Now let's start with the eigenvalue case. Since we only care about the eigenvalues, you can say the initial data is V_i; this is just a different name for the initial eigenvalues. For the DBM the only thing that matters is the probability distribution of these V_i, and the standard object is the Stieltjes transform of V: you put in a complex parameter z = E + i eta and set

    m_V(z) = (1/N) sum_i 1/(V_i - z).

The thing we are especially interested in is the imaginary part of this, because it tells you about the distribution of the V_i. Then the result is the following. By the way, I should mention my convention: I always normalize so the eigenvalues run from -2 to 2, so a typical eigenvalue spacing is of order 1/N; it is good to keep this in mind for comparison. You fix one energy E_0 and a scale eta_star, and you assume that Im m_V(z) is bounded between two positive constants for z in a neighborhood of E_0 of some fixed size, for all eta >= eta_star. Then you take a time t = N^epsilon eta_star, just slightly bigger than eta_star, and you find the following: compute the correlation functions of the eigenvalue distribution, where I use the standard notation p_t^(k) for the k-point correlation function of the eigenvalues at time t; look at one fixed energy E, rescale by 1/N, integrate against a test function, and subtract the same quantity for the GOE. The difference is less than N^{-c} for some small constant c > 0. So this gives you fixed-energy universality.

And this is under the assumption that the Stieltjes transform of the initial data is bounded only for a certain range of energies, and only for eta >= eta_star. It means you do not assume anything about the fine structure of the eigenvalues initially; you only assume something at scales above eta_star. On the other hand, this is good enough: as long as you know the information down to the scale eta_star, you take a time slightly bigger than eta_star and you reach local equilibrium.
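To make the hypothesis concrete, here is how one would compute this Stieltjes transform and check the two-constants condition at the scale eta_star; the toy initial data and the choice eta_star = N^{-0.9} are mine, purely for illustration:

```python
import numpy as np

def stieltjes(v, e, eta):
    """m_V(E + i*eta) = (1/N) sum_i 1/(v_i - E - i*eta)."""
    return np.mean(1.0 / (v - (e + 1j * eta)))

n = 2000
rng = np.random.default_rng(1)
v = np.sort(rng.uniform(-2.0, 2.0, size=n))  # toy initial eigenvalues in [-2, 2]
e0, eta_star = 0.0, n ** (-0.9)              # a scale above the spacing 1/n

# Hypothesis of the fixed-energy theorem: Im m_V(E + i*eta) stays between
# two positive constants for E near e0 and all eta >= eta_star.
for eta in (eta_star, 10 * eta_star, 100 * eta_star):
    print(f"eta = {eta:.2e}, Im m_V = {stieltjes(v, e0, eta).imag:.3f}")
```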
And this energy E has to be in this interval, I forgot to say. Actually it has to be in a slightly smaller interval: you assume something on one interval, and the universality holds on a slightly smaller one. All right.

So this is fixed-energy universality. For quite some time it was considered a difficult question: you can ask about universality in many different senses, and this is one of the hardest cases. For Wigner matrices it was done by some of my coworkers, and in the Hermitian case by Tao and Vu; there are actually quite a few people involved, and I have not listed all of them. Now, the result we have here is some sort of local convergence to equilibrium. This idea of local convergence to equilibrium was first tried by Erdős, Schlein, and myself, but over there we assumed that the eigenvalues satisfy some rigidity. There is also another sense of eigenvalue universality, called gap universality; I will not talk much about that, but the local version of gap universality was also proved. So that was some heuristics and previous results.

Because what I want to do today is something a bit different, I will not give a detailed proof of this theorem; I will just mention some elements of the ideas, and later on I will show you one or two pieces, because what I am trying to do is show you that dynamical ideas are useful in many places. So, the elements of the ideas: we use coupling and homogenization; in some places we use the maximum principle and finite speed of propagation, in some sort of particle-system method; and we also use estimates for linear eigenvalue statistics, but for non-compactly supported test functions.

All right. So this was fixed energy in the bulk. If you look at the same question at the edge, you have to assume something slightly stronger, because you do not really know where the edge is. You fix an edge point, and you assume that near this fixed edge the imaginary part of the Stieltjes transform of the initial data behaves the same way as that of the semicircle law. Here m_sc is the Stieltjes transform of the semicircle distribution, and it satisfies the equation

    m_sc(z)^2 + z m_sc(z) + 1 = 0.

Because we need the square-root behavior, the assumption is just that Im m_V is bounded below and above by this semicircle behavior. Then, for the time to local equilibrium, you need eta_star >= N^{-2/3}; this is the smallest scale you can deal with. And if you take a time slightly bigger than the square root of eta_star, you can show that the edge statistics converge to the GOE statistics. It is the same as the previous theorem, just for the eigenvalues at the edge. So the time to local equilibrium is sqrt(eta_star) near the edge and eta_star in the bulk; I will explain where this square root of eta_star comes from later. Okay, so you may ask why we bother to prove all these theorems after spending so many years.
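For reference, here is the semicircle Stieltjes transform reconstructed from that quadratic relation, together with the square-root edge behavior that the initial data is being asked to match; this is the standard computation, not something spelled out in the talk:

```latex
\[
  m_{\mathrm{sc}}(z)^2 + z\,m_{\mathrm{sc}}(z) + 1 = 0
  \quad\Longrightarrow\quad
  m_{\mathrm{sc}}(z) = \frac{-z + \sqrt{z^2 - 4}}{2},
\]
with the branch chosen so that $m_{\mathrm{sc}}(z) \sim -1/z$ as $z \to \infty$.
On the spectrum this gives
\[
  \operatorname{Im} m_{\mathrm{sc}}(E + \mathrm{i}0) = \frac{\sqrt{4 - E^2}}{2}
  \;\asymp\; \sqrt{|E \mp 2|} \qquad \text{near the edges } E = \pm 2,
\]
and it is this square-root vanishing that the assumption
$\operatorname{Im} m_V \asymp \operatorname{Im} m_{\mathrm{sc}}$ encodes,
down to the scale $\eta_* \ge N^{-2/3}$.
```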
There are many theorems already proved, so why do we go back and study all these local versions? We study the local versions for several reasons. One is the so-called rigidity of eigenvalues. For those of you who know it, this is the concept that the eigenvalues have to be quite close to their classical locations. But this is not an easy result; proving that an eigenvalue is close to its classical location is genuinely hard. So this is not easy, and here it is not assumed. The second thing is an estimate called level repulsion. Level repulsion was always assumed in the previous works; here there is no level repulsion assumption. The next point is that the assumption is purely local. In the beginning of random matrix theory, if you wanted to prove bulk universality you also had to understand the edge, because the edge influences the bulk. Now these are completely separated: if you want to know the edge, you study the edge; if you want to know the bulk, you study the bulk. Everything is local. Also, the assumption depends only on the Stieltjes transform of the initial data, and only at scales bigger than eta_star; you do not have to go down to the smallest scale. Then it works. Well, I was cheating a little bit: there is a tiny technical assumption on the V_i. Besides the assumption I stated, the V_i cannot be too big. You can imagine some V_i very far away, exponentially large, and that would change many things, so you need some a priori bound on that. But otherwise, these are all the assumptions.

All right. Now I want to use random graphs to see an application of this method; it can be the Erdős–Rényi graph or the random d-regular graph. So in this case, the idea is to see what this method can tell us. You can look at the Erdős–Rényi graph, where each edge is selected with probability p. The typical number of edges per vertex is Np, and we write Np = q^2; this q turns out to be quite a good parameter. Now if you look at the corresponding sparse random matrix, and here we use the centered one, though you can also remove the centering condition, then you find the mean is zero and the variance is 1/N. By the way, the variance 1/N is the convention we always follow, which is different from the normalization in Terry's talk. But now look at the higher moments: the k-th moment does not decay like powers of 1/sqrt(N); it decays like 1/(N q^{k-2}). So the higher moments decay much more slowly. You can choose q as big as sqrt(N), and then this becomes the Bernoulli random matrix ensemble; but the point is when q is very small, the very sparse random matrices. We proved that for q >= N^{1/3}, which means the typical number of edges per vertex is at least N^{2/3}, the edge universality holds. And for bulk universality we showed later that q >= N^epsilon is enough. This proof really needs the previous result I just mentioned about the local convergence speed, and the reason is that for sparse random matrices this rigidity of eigenvalues is actually false.
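Here is a small sketch of this sparse ensemble, to make the normalization and the moment bookkeeping explicit; the construction is the standard centered adjacency matrix, and the helper name is mine:

```python
import numpy as np

def sparse_wigner(n, q, rng):
    """Centered, normalized Erdos-Renyi adjacency matrix with Np = q^2.

    Each off-diagonal entry has mean 0 and variance 1/n, matching the
    Wigner convention; but the k-th moment is of order 1/(n q^{k-2}),
    so for small q the higher moments decay much more slowly."""
    p = q * q / n
    upper = np.triu(rng.random((n, n)) < p, 1).astype(float)
    b = upper + upper.T                   # symmetric 0/1 adjacency, zero diagonal
    mean = p * (1.0 - np.eye(n))          # E b_ij = p off the diagonal
    return (b - mean) / np.sqrt(n * p * (1.0 - p))

h = sparse_wigner(2000, q=2000 ** (1.0 / 3.0), rng=np.random.default_rng(2))
```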
It is not just that you cannot prove it; it is actually wrong. So that is a lot of the motivation: to prove things without this rigidity assumption. Our edge result was q >= N^{1/3}, and of course one can improve this. Then Lee and Kevin Schnelli were able to show that at the edge one can find a deterministic L so that, with respect to this deterministic shift, lambda_1 - L is Tracy–Widom. And the way they did it used none of the dynamical methods I am talking about: they just computed the Green function, did a higher-order expansion of the Green function, and found a self-consistent equation to higher order in q. In this way they were able to identify the edge eigenvalue behavior for q >= N^{1/6}. It is quite a spectacular computation, because it expands to all the needed orders in q. (Question from the audience: what does the lower bound on q mean?) Sorry, yes: you need some edges, so it really means q is at least of order one, it is not zero.

All right, so this was the sparse result. Now we combined their method with this edge universality, the local convergence speed of the Dyson Brownian motion, and what we can do is slightly better. There is a quantity chi = sum_{ij} (h_ij^2 - 1/N), which is the fluctuation of the total number of edges. If you take your eigenvalue, subtract this quantity L, and in addition subtract chi, then it converges to Tracy–Widom for q >= N^{1/9}. What this means is that at the point q = N^{1/6}, the edge eigenvalue distribution is a sum of a Tracy–Widom piece plus a Gaussian piece chi. You can check that for q above N^{1/6} the Tracy–Widom fluctuation wins over the Gaussian, and for q below that the Gaussian wins over the Tracy–Widom. But once q is below N^{1/9} our result becomes weaker, and we did not identify the distribution of lambda_1 completely below N^{1/9}; for that you would have to do the expansion to even higher order in q. Well, maybe some brave young people will be able to do it.

So now let me go back to Green functions. Just to remind you, the Green function is written in this way: you take the eigenvectors u_k and the eigenvalues lambda_k, and

    G(z) = (H - z)^{-1} = sum_k u_k u_k^* / (lambda_k - z),

so the imaginary part typically looks like

    Im G_ii(E + i eta) = sum_k eta |u_k(i)|^2 / ((lambda_k - E)^2 + eta^2).

Now, about Green function estimates: if you look at everything we have done, and actually at what essentially everyone does, whenever you look at the Green function there is always some self-consistent equation, and you analyze this self-consistent equation in various ways. What I am trying to point out is that sometimes you can integrate a dynamical idea into problems of this kind and make them doable. And this is the case of the d-regular graph.
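Under the variance-1/N convention, this correction chi is directly computable from the matrix entries. Here is my reading of that quantity as a sketch; the exact normalization (for instance, whether the sum runs over all pairs or only the independent entries) may differ from the paper, so treat this as schematic:

```python
import numpy as np

def edge_correction(h):
    """chi = sum over independent entries of (h_ij^2 - 1/N): the fluctuation
    of the total edge count around its mean, which shifts the edge eigenvalue."""
    n = h.shape[0]
    i, j = np.triu_indices(n)
    return float(np.sum(h[i, j] ** 2 - 1.0 / n))

# Schematic use: lambda_1 - L - edge_correction(h) should be Tracy-Widom
# (after the proper rescaling) for q >= N^{1/9}.
```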
So for the random d-regular graph, as long as the degree d is bigger than some power of log N, we prove that the local law holds: the Green function is well approximated down to imaginary part almost N^{-1}, and the eigenvectors are delocalized. Then, for the fixed-d case, the theorem becomes much more complicated, because the limiting law becomes the Kesten–McKay law; we have convergence to it, and we also prove that the eigenvectors are delocalized, but here we only do this in the bulk. There are still some problems at the edge.

Now, the idea here is the following. For the random d-regular graph, besides the self-consistent equation ideas, the only additional element we play with (of course the real implementation is quite complicated, with various multi-scale estimates) is roughly this: you look at a vertex and its neighborhood, and you see that as long as the radius is not too big, the neighborhood is basically a tree with a few loops. The idea is then that you look at the edges near the boundary of this neighborhood and you re-sample those boundary edges, and this actually provides enough new randomness to analyze the d-regular graph. This is a bit hard to explain precisely, but roughly speaking, the random d-regular graph has lots of constraints, so your matrix loses the usual independence. The typical way we deal with such problems always assumes the matrix elements are independent, or maybe dependent only on a neighborhood; but for the d-regular graph the dependence is quite complicated, so whenever you write down the Schur complement formula it gives you nothing, because everything is completely dependent. So you rely on this re-sampling idea to provide some additional randomness in the problem. All right, here is a list of some of the previous works; there are quite a lot, and I do not have time to go over all of them and how they were proved. Some of these comparisons are really unfair: whenever I put a star, the result is for a deterministic ensemble, so it is not really a fair comparison.

All right, now the next thing I want to say is that one can actually implement these switching ideas to obtain a comparison result, and I find this part quite interesting. The switching is: you take two edges of the graph like this and switch them, and then you see that the d-regular condition is completely invariant under this move. So this is invariant, and we call Q the generator of this switching dynamics. On the other hand, you look at the Dyson Brownian motion, constrained because the number of edges at each vertex is fixed, and denote its generator by L. Then you find the following: compute the action of the two different dynamics, one the switching dynamics and the other the constrained Dyson Brownian motion, on a function F of A/sqrt(d-1), where A is the adjacency matrix and F is either an eigenvalue, or an eigenfunction, or, typically what we took, a Green function. You find that these two dynamics actually follow each other closely, with some error terms, and there are error estimates in terms of the regularity of F.
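Here is what a single switching move looks like on the adjacency matrix; this is a standard double-edge switch, written in my own notation, and it makes the invariance of the d-regular condition visible, since every vertex keeps its degree:

```python
import numpy as np

def try_switch(adj, rng):
    """One step of the switching dynamics on a simple d-regular graph.

    Pick two edges {a,b} and {c,d} uniformly and replace them by {a,c}, {b,d},
    rejecting the move if it would create a self-loop or a double edge.
    Every vertex keeps its degree, so the d-regular ensemble is invariant."""
    edges = np.argwhere(np.triu(adj, 1) == 1)
    (a, b), (c, d) = edges[rng.choice(len(edges), size=2, replace=False)]
    if len({int(a), int(b), int(c), int(d)}) < 4 or adj[a, c] or adj[b, d]:
        return False                      # rejected: graph would not stay simple
    adj[a, b] = adj[b, a] = adj[c, d] = adj[d, c] = 0
    adj[a, c] = adj[c, a] = adj[b, d] = adj[d, b] = 1
    return True
```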
So roughly speaking, what this says is that the eigenvalue distributions under the Dyson Brownian motion and under the switching dynamics follow each other closely. Our typical way to study universality is to study the flow under the Dyson Brownian motion and then show that the eigenvalues do not change very much under this flow. But here we did it differently. Here, you look at the Dyson Brownian motion and you look at the switching dynamics, and they actually follow each other very closely over a certain time interval. In this interval, the Dyson Brownian motion has already reached local equilibrium, and on the other hand you still know that these two dynamics are not far from each other. And because the switching dynamics keeps the eigenvalue distribution of the d-regular graph invariant, you conclude that the d-regular graph satisfies bulk universality. So that is the idea. Here you see a different way of playing with the dynamics: it is not just studying the time to local equilibrium. You actually compare two dynamics, and you arrange that one of them is an invariant dynamics for your probability measure, and the other is the Dyson Brownian motion.

All right, so next I want to talk about the eigenvectors. (Question from the audience about what exactly is being computed along the dynamics.) Yes: what you are doing is, for example, you just compute a Green function; think of F as a Green function. A typical object we compute is the trace of (H - z)^{-1}, and this kind of quantity can tell you where the eigenvalues are located. So we always play directly with the Green function. If you really wrote down the eigenvalues, then of course you could still differentiate, that is not a problem, but the derivatives of the eigenvalues would require level repulsion to control the dynamics. Once you use the Green function, this problem is avoided.

All right, so the next thing is the eigenvectors. For eigenvectors there are two kinds of statements. One is delocalization, and this came about because we all started from Schrödinger operators, so the key concept we were trying to understand is that the eigenvector is fairly flat. The other is a stronger notion called quantum unique ergodicity, which many people have worked on. Roughly speaking it also says the eigenvector is basically flat, but when we say flat here, it is really, really flat: in terms of the eigenvector it means that if you average the eigenvector over a set, it is completely flat, and with the right normalization the average just becomes one. We call this probabilistic QUE, because we are dealing with random matrices: you take a set that is large enough, you take the average, and you want to say it is completely flat. All right.
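To see the difference between the two notions concretely: delocalization only says each |u(i)|^2 is roughly bounded by 1/N, while probabilistic QUE says the average over any large enough deterministic set is already flat. A minimal sketch of the QUE test statistic, with my own naming:

```python
import numpy as np

def que_average(u, subset):
    """QUE test statistic (N/|S|) * sum_{i in S} |u_i|^2 for a unit eigenvector u.

    Delocalization only bounds each |u_i|^2 by roughly 1/N; probabilistic QUE
    says this averaged quantity is close to 1 once the deterministic set S
    is large enough."""
    n = len(u)
    s = np.asarray(subset)
    return (n / len(s)) * float(np.sum(np.abs(u[s]) ** 2))

# Example: for a bulk eigenvector u of a large GOE matrix,
# que_average(u, range(0, len(u), 2)) should be close to 1.
```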