So this concept is called probabilistic QUE (quantum unique ergodicity). And then you can actually ask even more of the eigenvector: you take its projection onto a fixed vector, and you ask, what is its probability distribution? You can guess what it is, because if you just think about choosing a random vector from the sphere, then you will see that this projection converges to a normal distribution. So we will call this eigenvector universality, if you want a name for it. And it is not just one eigenvector; you can also ask about quite a few of them jointly. So this was for eigenvectors.

Now the theorem is, in a way, parallel to the eigenvalue problem we talked about in the beginning. You assume that the Stieltjes transform stays between two numbers; but here we also assume that if you take the Green function and project it onto the direction you are interested in, it is also bounded above and bounded below. And again, you assume this in a certain interval. Then, once again, you wait for some time t bigger than your eta star, and then, for the eigenvectors with energy inside the interval of your assumptions, both the probabilistic QUE and the eigenvector universality hold in the bulk.

So this is a theorem parallel to the eigenvalue result. The previous eigenvalue result says that if you wait for some time, you reach local equilibrium, namely the GOE ensemble. And here we say that if you wait for some time, the eigenvectors basically reach the statistics of the orthogonal group, which is the same thing. So it means that if you measure the eigenvector's convergence to equilibrium by these two properties — one is the probabilistic QUE, and the other is the probability distribution of the projection onto a fixed vector — then you reach equilibrium. Of course, you can ask more: you have many vectors in a very high dimension, so you can project onto more than one direction and ask many, many more complicated questions. So we did not really prove here that everything reaches equilibrium, but you can say quite a lot about this; projecting onto finitely many directions is also OK. Some of these results were proved previously, under four-moment assumptions, by Tao and Vu and by Knowles and Yin. Let me write the two statements schematically below.
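Schematically — and this is my paraphrase, with the standard Wigner normalizations assumed rather than quoted from the theorem — the two statements read:

% u_k : a bulk eigenvector of the N x N matrix, A : a deterministic index set.
% (i) Probabilistic QUE: the mass of u_k equidistributes,
\sum_{a \in A} |u_k(a)|^2 \;=\; \frac{|A|}{N}\,\bigl(1 + o(1)\bigr)
\qquad \text{with high probability.}

% (ii) Eigenvector universality: for any fixed unit vector q, the projection
%      is asymptotically normal, exactly as for a uniform vector on the sphere,
\sqrt{N}\,\langle q, u_k\rangle \;\xrightarrow{\ d\ }\; \mathcal{N}(0,1).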
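And a quick numerical illustration of statement (ii) — my own sketch, not from the talk. For the GOE the claim is exact by rotation invariance, so this only checks the normalization; for general Wigner matrices it is the content of the theorem.

# Check that sqrt(N) * <q, u_k> looks like a standard Gaussian when u_k is
# a bulk eigenvector of a GOE matrix and q is a fixed unit vector.
import numpy as np

rng = np.random.default_rng(0)
N, trials = 200, 300
q = np.zeros(N)
q[0] = 1.0                            # fixed unit vector: first coordinate
samples = []
for _ in range(trials):
    A = rng.standard_normal((N, N))
    H = (A + A.T) / np.sqrt(2 * N)    # GOE normalization: spectrum in [-2, 2]
    _, vecs = np.linalg.eigh(H)
    u = vecs[:, N // 2]               # an eigenvector from the middle of the bulk
    samples.append(np.sqrt(N) * (q @ u))
samples = np.array(samples)
print("mean (should be ~ 0):", samples.mean())
print("variance (should be ~ 1):", samples.var())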
All right, so what is the idea here? Let me try to explain why we study this eigenvector flow. Let me see how much time I have. If you look at Green functions, a Green function carries the eigenvectors and the eigenvalues together. Now, for eta bigger than N^(-1) we understand the Green function very well; that part was the so-called local law. And now we want to understand it for eta less than N^(-1).

If you want to understand this for eta less than N^(-1), then you immediately face the problem that this is essentially as hard as understanding the probability distribution of the lambda_i themselves, because inside the Green function the eigenvalues have structure below the scale 1/N. So it is almost impossible to understand this object directly: the eigenvalues have fluctuations, and the eigenvectors have fluctuations too. On the other hand, if you want to talk about a single eigenvector, you have to look at eta really below 1/N; otherwise you do not see it — you only see an average (see the formula below).

So at some point we asked the following question. Understanding the eigenvalues below the scale 1/N was done in a very complicated way, and there is just no way to understand the full object. So maybe we should just take out that piece. We decided to take out the eigenvalue piece, and then what remains is just the eigenvector again. But then you think about the eigenvector: its invariant measure is Haar on the orthogonal group, and the distribution of a projection is actually just a chi-square. And you feel that the chi-square may be easier to understand than the Wigner–Dyson law. So that is the motivation for writing down the eigenvector equations. And what we show, roughly speaking, is that whatever you can say about the average of Im G_aa is also correct for Im G_aa at a single point a, without the summation over a.

So what is the idea — how do we study these eigenvector equations? Let me remind you; sorry, it took us a long time to come back to this. We look at this eigenvector equation, and it is a horrible mess. In particular there is this (lambda_k - lambda_l)^2 in the denominator, and if you look at that term, you almost just want to give up. It was by some coincidence that we felt this was possible to study. So what we do is the following. We take the moments and take their expectations. Then we find that the expectation of the eigenvector moments satisfies an equation which is a random walk in a random environment: the jump rates are given by 1/(N (lambda_j - lambda_k)^2), and these rates are random. This equation is still pretty bad, because the singularity at lambda_j = lambda_k can be terribly bad. But fortunately we had spent quite some time on this kind of dynamics in the eigenvalue problem, so we found that you can actually study this question: you study this random walk in a random environment and how fast its solution converges to the equilibrium solution. The method is by some finite speed of propagation, a maximum principle, and also some local semicircle laws. And it actually was possible to study this object, despite the singularity. Now, of course, you cannot study just one moment: you have to take products over many different indices j, higher moments, and so on. This can all be done, and this is how we identify the object. So that is how the dynamics is understood in the case of eigenvectors.
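Here is, schematically, the moment equation itself — the eigenvector moment flow. I am writing it from memory for the symmetric (GOE-type) case and suppressing the combinatorial normalization of the moments, so take the constants as indicative rather than exact.

% For a fixed unit vector q set z_k = \sqrt{N}\,\langle q, u_k \rangle, and for
% a particle configuration \eta (with \eta_k particles at site k) let f_t(\eta)
% be the suitably normalized conditional moment E[ \prod_k z_k^{2\eta_k} \mid \lambda ].
% Then
\partial_t f_t(\eta) \;=\; \sum_{j \neq k} c_{jk}(t)\, 2\eta_j\,(1 + 2\eta_k)\,
\bigl( f_t(\eta^{jk}) - f_t(\eta) \bigr),
\qquad
c_{jk}(t) \;=\; \frac{1}{N\,\bigl(\lambda_j(t) - \lambda_k(t)\bigr)^{2}},
% where \eta^{jk} is \eta with one particle moved from j to k. This is the
% random walk in a random environment: the environment is the eigenvalue
% trajectory, and the rates carry the singularity (\lambda_j - \lambda_k)^{-2}.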
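And stepping back to the Green function point from a moment ago: the scale separation at eta ~ 1/N, and the statement about Im G_aa at a single point, are easiest to see from the standard spectral decomposition.

% Spectral decomposition of G(z) = (H - z)^{-1} at z = E + i\eta:
\operatorname{Im} G_{aa}(E + i\eta) \;=\; \sum_{k} \frac{\eta\,|u_k(a)|^{2}}{(\lambda_k - E)^{2} + \eta^{2}} .
% For \eta \gg 1/N this sum averages over roughly N\eta eigenvectors: the
% local law regime. To resolve a single eigenvector u_k you need \eta \ll 1/N,
% where the k-th term dominates and Im G_{aa} \approx |u_k(a)|^2 / \eta.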
So now, I think I still have a bit of time, and I will try to explain two things. One is how, in many of these problems, we study the convergence of the eigenvalues. So this part is really the method: how do we actually prove this fixed energy universality? How do we do that?

The idea is the following. You look at the equation for the eigenvalues lambda_i, which is Dyson Brownian motion; let me remind you that the lambda_i follow DBM. So you take the lambda_i, your DBM flow, and then you take another copy: you take mu_i, which is another DBM flow. You write the same equation with d mu_i, where of course the interaction term is mu_i - mu_j. The only difference is that the second flow is taken with initial data from the GOE, while the first one is whatever you want to study. Then we look at the difference, with the same driving Brownian motions. Because the driving noise is the same, it cancels out, and you find that the difference u_i satisfies an equation with coefficients b_ij (spelled out below). And if you look at it, the two families of eigenvalues are actually quite close to each other.

So that is the coupling idea, and then you try to study this equation with random coefficients. The key reason we are able to say quite a lot about all this is the idea of homogenization; both the coupling and the homogenization come from the paper which proved fixed energy universality. Homogenization says that even though the coefficients are very random, the kernel of this evolution at time t, once t is large enough — and "large" here is still quite small, just larger than 1/N — can really be computed without any problem. This is something a bit strange: typically for homogenization you need some sort of averaging. In the homogenization theory of PDEs you always prove convergence in a certain norm, which requires some average. But here we are able to do the homogenization pointwise, in an L-infinity norm, let us put it this way. And that is why we can do fixed energy universality. This concept was used in many places: all of this work where we proved local results is based on these ideas of coupling and homogenization, together with cutting off the dynamics in various ways.

And finally, let me explain why at the edge the speed is the square root of eta star. Here is a rough explanation. You look at this coupling coefficient b_ij, and you cut off the indices j bigger than about N times eta star — I say N times eta star because the index j is an integer, while eta star always carries the rescaling by 1/N. So you cut off the outside dynamics and just look at what happens inside: the inside part gives you the diffusion, and for the outside piece you estimate the effect of the external influence on u_i. You compute b_ij by plugging in the typical values of the eigenvalues, and you use the fact that the eigenvalue density behaves like the square root of x at the edge. What comes out is eta star to the minus one half, and this eta star^(-1/2) is precisely the inverse of the time to local equilibrium. So this is the idea of how you determine the speed at the edge.
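Going back to the coupling: here it is spelled out, with the standard DBM normalization; the algebra for the difference is elementary.

% Two DBM flows driven by the SAME Brownian motions B_i:
d\lambda_i \;=\; \sqrt{\tfrac{2}{\beta N}}\, dB_i + \frac{1}{N}\sum_{j \neq i}\frac{dt}{\lambda_i - \lambda_j},
\qquad
d\mu_i \;=\; \sqrt{\tfrac{2}{\beta N}}\, dB_i + \frac{1}{N}\sum_{j \neq i}\frac{dt}{\mu_i - \mu_j},
% with \mu(0) given by a GOE matrix and \lambda(0) whatever you want to study.
% The noise cancels in the difference u_i = \lambda_i - \mu_i, leaving
\frac{du_i}{dt} \;=\; \sum_{j \neq i} b_{ij}\,\bigl(u_j - u_i\bigr),
\qquad
b_{ij} \;=\; \frac{1}{N\,(\lambda_i - \lambda_j)(\mu_i - \mu_j)} \;>\; 0,
% a discrete parabolic equation with positive random coefficients.
% Homogenization says its kernel becomes effectively deterministic once t
% is larger than 1/N.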
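And here is a back-of-envelope version of the edge computation. The cutoff convention was only indicated verbally; I read it as cutting at spectral distance eta star from the edge, which by the square-root density corresponds to roughly N eta_star^(3/2) indices, so treat the bookkeeping as my reconstruction.

% Near the edge the density is \rho(x) \sim \sqrt{x} (x = distance to the
% edge), so about N\sqrt{x}\,dx eigenvalues sit at distance x. For a particle
% i at the edge, the total jump rate coming from beyond distance \eta_* is
\sum_{j\,:\,\mathrm{dist} > \eta_*} b_{ij}
\;\approx\; \int_{\eta_*}^{O(1)} \frac{1}{N x^{2}}\; N\sqrt{x}\; dx
\;=\; \int_{\eta_*}^{O(1)} x^{-3/2}\, dx
\;\sim\; \eta_*^{-1/2}.
% Its inverse, \sqrt{\eta_*}, is the time to local equilibrium at the edge.
% (In the bulk the density is order one, the same integral gives
% \eta_*^{-1}, and the relaxation time is \eta_*, matching the bulk theorems.)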
All right. So now I want to mention a few other related results, and I will describe two.

One is this: most of the things we deal with here are eigenvalues, but you can also play the same game with singular values. The singular value equation is not a Dyson Brownian motion, but it is almost one: besides the terms with s_k - s_j, there are also terms with s_k + s_j. So it looks very similar, and in fact, if you symmetrize — if you just artificially make another copy by reflection, which only changes your index set — then you find that it is almost exactly a Dyson Brownian motion, except that each particle does not interact with its own mirror image. So in the summation there is no index 0, and the index j cannot be plus or minus k, because you do not interact with your mirror image (a schematic form is written out below). These dynamics and the Dyson Brownian motion are fairly close to each other, so many of the results we discussed for eigenvalues carry over to singular values as well. So this is one other example of a dynamics.

And there is another one, with which I will close this lecture: the free convolution problem. You take H = V* X_N V + Y_N, and you want to understand the eigenvalues of this, where V is distributed according to the Haar measure. Kevin talked about the local law for this problem last week: the eigenvalue distribution converges to the free convolution of those of X and Y. And then there is the theorem of Che — unless you are Chinese, you cannot pronounce this last name; it actually means "car" — and Benjamin Landon. They prove that bulk universality of the eigenvalue statistics holds for this model. I was very happy that this turned out to be possible, because all the previous results were always based on some sort of matrix Brownian motion, and here, with the Haar measure, you still can do it.

The idea is, again, dynamical: you look at an evolution. You consider the time evolution of this problem, but now, instead of keeping this rotation fixed, you start to rotate it by U_t, where U_t is given by an SDE: dU = dW U - (1/2) A U dt, with dW a matrix Brownian motion. The A is a correction term — if this were not a Brownian motion you would not need it — chosen so that this is an evolution on O(N): because W is a Brownian motion you need an Itô correction, and you can check that with this term the flow really stays on the group (one concrete convention is sketched below). And now, because you can move this U through and conjugate, it becomes U times V, and U times V is again a rotation on the orthogonal group. The Haar measure is invariant under the group action, so the eigenvalue distribution of H_t is actually the same for all times t. So you introduce a dynamics and find that it is invariant — the eigenvalues did not change at all. And you may just say this is totally stupid: the whole thing did not change.
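Returning for a moment to the singular-value flow: here is its schematic form and the symmetrization. I am suppressing the extra 1/s_k-type term that appears for genuinely rectangular matrices, so take this as indicative for the square case.

% Singular value flow (schematic): besides s_k - s_j there are s_k + s_j terms,
ds_k \;=\; \frac{dB_k}{\sqrt{N}}
  + \frac{1}{2N}\sum_{j \neq k}\Bigl(\frac{1}{s_k - s_j} + \frac{1}{s_k + s_j}\Bigr) dt .
% Symmetrize by reflection: extend the indices to j = \pm 1, ..., \pm n with
% s_{-j} := -s_j (there is no index 0). Since 1/(s_k + s_j) = 1/(s_k - s_{-j}),
ds_k \;=\; \frac{dB_k}{\sqrt{N}}
  + \frac{1}{2N}\sum_{j \neq \pm k}\frac{dt}{s_k - s_j},
% i.e. a Dyson Brownian motion in which particle k interacts with everything
% except its mirror image -k.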
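And for the rotation flow just described, here is one concrete convention — the covariance normalization and the resulting A are my assumptions; the point is only that the Itô correction keeps the flow on the group, and that Haar invariance then freezes the spectrum.

% Let W_t be an antisymmetric matrix Brownian motion with covariance
% E[dW_{ij}\,dW_{kl}] = (\sigma^2/N)(\delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk})\,dt,
% and set
dU_t \;=\; dW_t\, U_t \;-\; \tfrac{1}{2}\, A\, U_t\, dt .
% On \{U U^{*} = I\}, Ito's formula gives
% d(U U^{*}) = (dW + dW^{*}) + dW\, dW^{*} - A\, dt .
% The martingale part vanishes since dW^{*} = -dW, and the quadratic term is
% dW\, dW^{*} = \sigma^{2}\tfrac{N-1}{N}\, I\, dt, so the choice
% A = \sigma^{2}\tfrac{N-1}{N}\,\mathrm{Id} keeps U_t on O(N).
% Then H_t = (U_t V)^{*} X_N (U_t V) + Y_N, and since U_t V is again
% Haar-distributed, H_t has the same eigenvalue distribution for every t.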
But then you look at the equation for the eigenvalues under this dynamics, and it is almost a Dyson Brownian motion. It is almost a Dyson Brownian motion, but here there are various choices: the quadratic variation involves a choice of sigma squared, and this A also has various choices, so you have a lot of freedom in choosing your dynamics. And there is a good choice. Actually, I think this is the kind of problem that is good for a graduate student: you look at the problem for half a year, you keep trying all kinds of choices, and you find out that there is a good one. So they find a good choice for which the eigenvalue equation is almost exactly a DBM, up to correction quantities which are small: there is the standard martingale term, and there is also some small correction term in the dt part — sorry, I forgot the dt there. And then you find that this gamma_ij, the correction in the interaction, is also small. For all of this you have to use the result of Bao and Erdős presented yesterday, proving that the eigenvectors are completely delocalized.

So you find that this is the dynamics, and the dynamics is so close to a Dyson Brownian motion that you take a Dyson Brownian motion and couple them together. Once you couple them together, you show that the coupled dynamics converge to each other, in fact very fast, using the same ideas as before. And then you show that the eigenvalues mu_i and lambda_i are close to each other; but on the other hand the mu_i reach the Wigner–Dyson statistics, and in this way you prove that the lambda_i are also universal. So this is another way of using the dynamics.

So this is the end of my talk, and let me just summarize with a few words. I was trying to show you that one studies this random matrix problem, and the random matrix itself has no time in it. But because we were all trained from day one that everything has to have a time, we just artificially put the time in. So: just try any possible way to put time into your problems. And that is the end of my talk. Thank you.