So I'm going to take this vector v, which is just the vector (1, a^{-1}, ..., a^{-k}, 0, ..., 0), for some fixed k. You can see where it comes from — it comes from this picture here. That is roughly what the eigenvector of this operator corresponding to the top eigenvalue should be. So we check: it's not quite an eigenvector, but it's going to be almost an eigenvector. We look at Jv — let's say J/√n, so we normalize. Remember, the top corner of G/√n almost looks like the matrix which has ones on the off-diagonal and zeros on the diagonal; we saw that last time. The top corner of our matrix over √n is very close to that — except for this a in the corner, of course, which you see. Let me call the matrix G_a, to distinguish it from the unperturbed J.

So take G_a/√n and plug in this vector v. Again, G_a/√n is very close to this particular operator: it has the ones on the off-diagonal, and it has this a-loop. So v is almost an eigenvector, and we can say roughly what the error is. Subtract (a + 1/a)v from (G_a/√n)v and look at the norm of that. There is some error coming from the noise in the χ's, but that's small — it goes to zero, since the χ's are very close to one when you divide them by √n. There is some error coming from the normals, but that again goes to zero; remember, k is fixed, we're only looking down to a finite depth. But there is also an error coming from the end, from the fact that we stopped at depth k and put zeros afterwards. The eigenvalue equation is not quite satisfied there, and I can tell you how big the error is: it's roughly a^{-k} times some constant, because that's the size of the entries of the vector at that point. So when you compare the matrix applied to the vector with the candidate eigenvalue times the vector, you get an error of this size.

Now, if you have an approximate eigenvalue equation like this — ‖Mv − θv‖ ≤ ε with a v of norm one — it tells you that the matrix must have an eigenvalue within ε of θ. You have seen this kind of approximate eigenvalue equation before; I'm not going to prove it, but you can do it as an exercise. Let me just write down the conclusion: G_a/√n has an eigenvalue — I leave some dots here — close to a + 1/a. How close? Well, since v as written is not of length one, we would have to normalize, so the bound would be divided by the length of v; but v is never too short, so we are fine — in fact we can just take v of length one and skip this bound.

So now we have shown — we were pretty good — that there is some eigenvalue: by taking k sufficiently large, we can show that there is some eigenvalue that converges to a + 1/a. That is what we have proved so far. We just have to make sure that this is the top eigenvalue, and that there are no other eigenvalues above it.
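A minimal numerical sketch of this step, assuming a β = 1 flavor of the tridiagonal model — normals on the diagonal, χ's on the off-diagonal, as described above; the variable names and the parameters n = 4000, a = 2, k = 25 are illustrative choices of mine, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(42)
n, a, k = 4000, 2.0, 25

# Tridiagonal model (beta = 1 flavor): normals on the diagonal,
# chi_{n-1}, ..., chi_1 on the off-diagonal; near the top corner,
# the off-diagonal entries divided by sqrt(n) are close to 1.
diag = rng.normal(0.0, np.sqrt(2.0), n)
off = np.sqrt(rng.chisquare(np.arange(n - 1, 0, -1)))
GA = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
GA[0, 0] += a * np.sqrt(n)  # the "a-loop": corner entry of GA/sqrt(n) is ~ a

# trial vector v = (1, a^{-1}, ..., a^{-k}, 0, ..., 0), normalized
v = np.zeros(n)
v[: k + 1] = a ** -np.arange(k + 1)
v /= np.linalg.norm(v)

theta = a + 1.0 / a
# residual of the approximate eigenvalue equation: noise terms of
# order 1/sqrt(n) plus the a^{-k} truncation error at depth k
resid = np.linalg.norm(GA @ v / np.sqrt(n) - theta * v)
lam1 = np.linalg.eigvalsh(GA)[-1] / np.sqrt(n)
print(f"residual = {resid:.3f}")
print(f"lambda_1 / sqrt(n) = {lam1:.3f}  vs  a + 1/a = {theta:.3f}")
```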
This actually follows from interlacing. Remember: here are the eigenvalues of J, say, and G_a is a positive rank-one perturbation of J. It is a standard fact — one you have probably seen or heard — that the eigenvalues of the perturbed matrix are interlaced with the old ones: all of them move up somewhat, but none of them overtakes the next one. So this is where the new eigenvalues can be. Now, I have identified that G_a has an eigenvalue near a + 1/a, higher than the top eigenvalue of J; and because it's higher, it has to be the top one, since all the others are at most the corresponding eigenvalues of J. So λ2(G_a), the second-largest eigenvalue of G_a, is at most λ1(J), which we know is √n(2 + o(1)) — we have seen that before. So the eigenvalue we found has to be λ1. And that's the end of the proof.

Any questions? Yes — I gave an informal argument for why the top eigenvalue should be at least two, and the claim is that this just follows from the Wigner semicircle law: if the empirical eigenvalue distribution converges to the semicircle law, then there has to be at least one eigenvalue near the top of its support, so the top eigenvalue is at least two, asymptotically. It doesn't give you anything from above — the top eigenvalue could be anywhere above two, and in many cases it is. Note that for any a, these G_a's will still satisfy the semicircle law. That also follows from interlacing: all the other eigenvalues are interlaced with the previous ones, and the one outlier doesn't contribute, because it only has weight 1/n in the empirical distribution. So the semicircle law is still fine for G_a.

Excuse me? Well, I think the issue is: how do you tell that the eigenvalue we found is not, say, the third one? There could be an eigenvalue twice as large — how do you rule that out? The point is that I don't have a direct upper bound on the top eigenvalue from anywhere else; this interlacing argument is the only bound.

Which theorem? No, it doesn't give you a good enough upper bound — that's the problem. It's a good question, though; let's see what it gives you. It gives you 1 + a as an upper bound, I think, while the true value is a + 1/a. That's the truth, and this is the bound, so it's clearly not good enough.

That's a great question, and the answer is that this is exactly what these kinds of models are good for: you can separate what comes from randomness and what comes from essentially looking like a semicircle distribution. In this case, for example, the result has very little to do with randomness. The only way we used randomness is that the normals are of order one, and the χ's are their deterministic values up to something of order one — and even that is not crucial. Remember the beta ensembles: they looked like GOE or GUE, except that the repulsion between the eigenvalues was stronger.
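The interlacing fact used here is easy to check numerically. A minimal sketch with a generic symmetric matrix — nothing specific to G_a, and the sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                  # generic symmetric matrix
v = rng.normal(size=n)
B = A + np.outer(v, v)             # positive rank-one perturbation

ea = np.linalg.eigvalsh(A)         # eigenvalues in ascending order
eb = np.linalg.eigvalsh(B)

# every eigenvalue moves up, but never overtakes the next one:
# lambda_i(A) <= lambda_i(B) <= lambda_{i+1}(A)
assert np.all(ea <= eb + 1e-10)
assert np.all(eb[:-1] <= ea[1:] + 1e-10)
print("interlacing holds")
```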
So when you send β to infinity, the repulsion gets stronger and stronger and the eigenvalues become rigid. In fact, in the limit there is, for each finite n, a limiting eigenvalue distribution with no randomness in it: when you take this β → ∞ limit, you get the zeros of the Hermite polynomial as the eigenvalues, and the Jacobi matrix becomes just the recursion matrix for the Hermite polynomials. And even in that case this kind of argument works; the argument is unchanged.

Okay. So that was Baik-Ben Arous-Péché, and now I want to tell you about beta ensembles. I'll be able to start this today, but maybe not finish. What we're going to prove is the Dumitriu-Edelman theorem.

So let's go back. You remember we had this general abstract argument for Jacobi matrices. We said two matrices are equivalent if their spectral measures are the same. This is equivalent to saying that they are conjugate to each other — as long as we are talking about the cyclic-vector case — by some orthogonal matrix which has a one in the top corner. And there is also a unique representative of each equivalence class, namely a Jacobi matrix; you proved this uniqueness in your problem session. So, as you see, there is some kind of correspondence with spectral measures.

The data that describe the spectral measure are λ1, ..., λn — so n points — together with spectral weights q1, ..., qn, whose sum is one because it's a probability measure; the measure is σ = Σ_i q_i δ_{λ_i}. So the spectral measure is described by 2n − 1 pieces of data. On the other side there are the matrix entries: a1, ..., an and b1, ..., b_{n−1} — also 2n − 1 numbers.

So what have we shown about this? If you have a matrix with a given spectral measure, then there exists a Jacobi matrix with that spectral measure — a unique one. And of course, if you have a Jacobi matrix, it has a spectral measure. This almost gives you a one-to-one correspondence, but there is one thing missing. What's missing? What's missing is that you don't yet know that for a given spectral measure there is a matrix with that spectral measure. Certainly there is a matrix with those eigenvalues, but can you also arrange the spectral weights to be these q's? The answer is yes, and I leave it as an exercise: you just have to find some symmetric matrix with that spectral measure, because then we know that there is also a Jacobi matrix like that. So for every measure σ supported on n distinct points, there exists a Jacobi matrix J. That's a nice exercise.

And this is nice, because it's a one-to-one correspondence. We don't exactly know how this correspondence works explicitly, but we can certainly find these objects — if you have the matrix, finding the eigenvalues is just solving algebraic equations, so I suppose the correspondence is even algebraic. But we're probabilists, so we want to understand what happens if you pick one side at random according to some law. What is the law of the other side going to be?
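A sketch of that exercise in code — the construction is mine, not from the lecture: a Householder reflection gives an orthogonal U whose first row is √q, so M = U diag(λ) Uᵀ has spectral measure σ at e1; running Lanczos from e1 then recovers the unique Jacobi matrix in the class:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
lam = np.sort(rng.normal(size=n))   # n distinct eigenvalues
q = rng.dirichlet(np.ones(n))       # spectral weights, sum to one

# Symmetric matrix with spectral measure sum_i q_i delta_{lam_i} at e_1:
# Householder reflection U sending e_1 to sqrt(q); U is symmetric and
# orthogonal, so its first row is sqrt(q). (Assumes q_1 != 1.)
w = np.sqrt(q)
u = np.eye(n)[0] - w
u /= np.linalg.norm(u)
U = np.eye(n) - 2.0 * np.outer(u, u)
M = U @ np.diag(lam) @ U.T

# Lanczos from e_1 tridiagonalizes M while fixing e_1,
# producing the Jacobi entries (a_1..a_n, b_1..b_{n-1}).
def lanczos(M, v0):
    m = len(v0)
    a, b = np.zeros(m), np.zeros(m - 1)
    V = np.zeros((m, m))
    V[:, 0] = v0
    for j in range(m):
        r = M @ V[:, j]
        a[j] = V[:, j] @ r
        r -= a[j] * V[:, j]
        if j > 0:
            r -= b[j - 1] * V[:, j - 1]
        if j < m - 1:
            b[j] = np.linalg.norm(r)
            V[:, j + 1] = r / b[j]
    return a, b

ad, bd = lanczos(M, np.eye(n)[0])
J = np.diag(ad) + np.diag(bd, 1) + np.diag(bd, -1)

# check: J has the same eigenvalues and the same spectral weights
evals, evecs = np.linalg.eigh(J)
print(np.allclose(evals, lam), np.allclose(evecs[0] ** 2, q))
```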
As opposed to the case of full random matrices, this is in principle a very simple problem: since it's a one-to-one correspondence, you just have to compute the Jacobian of the transformation. Except that you don't know the transformation explicitly — it's a bit complicated. So here is the trick, the trick you always use in random matrix theory when you know the matrix and want to know something about the eigenvalues: more often than not, you go through the moments, because those are connected to both sides in a simple way.

So write m_k for the k-th moment of the spectral measure, m_k = ∫ x^k dσ = Σ_i q_i λ_i^k, and take m_1, ..., m_{2n−1}. It's a good idea to go up to 2n − 1 because you'd like to have 2n − 1 data points here, matching the 2n − 1 parameters on each side. And both sides are easily connected to the moments. If you have the measure, it's very easy to compute the moments — that's easy. If you have the matrix, it's again very easy to compute the moments: you just do path counting, m_k = (J^k)_{11}, a sum over returning paths of the products of the weights along the path. So this is also easy (see the little check below). Another nice exercise is to show that these moments actually determine the Jacobi matrix — in fact we'll see that in a second; it's very simple. So the moments determine the matrix, the matrix determines the measure, the moments determine the measure: we get the correspondence again, and it's simple in this case.

So the way to understand probabilistically what happens is straightforward. There's a transformation from one side to the other through the moments, and it's simple: m_k is a polynomial in the λ's and q's, and it's also a polynomial in the a's and b's. So we just write down the Jacobian matrices and hope we can compute their determinants. That's it.

So let me give you the conclusion — two more minutes, so I'll just state it. Here is a theorem, which is partly the Dumitriu-Edelman theorem, with a slight generalization due to Krishnapur, Rider, and myself. It says the following. Suppose the a's and b's are chosen from the distribution with density proportional to

exp( −Tr V(J) + Σ_{k=1}^{n−1} (kβ − 1) log b_{n−k} ).

What is this? V is some potential — in the simplest case just a polynomial — so you take this polynomial of your matrix, take the trace, and exponentiate. And then there is an extra term with the b's in it; it's just a logarithmic term, so it comes out as a product of powers of the b's multiplying the density. Then the eigenvalues have distribution proportional to

exp( −Σ_i V(λ_i) ) · ∏_{i<j} |λ_i − λ_j|^β,

that is, a potential term in the λ's times a Vandermonde factor raised to the power β. And finally, the spectral weights q are independent of the λ's and have the Dirichlet(β/2, ..., β/2) distribution. Okay, my time is up, so let me just say what this means.
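As referenced above, a quick numerical check of the moment bridge — the two ways of computing m_k agree; the Jacobi entries here are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
a = rng.normal(size=n)               # Jacobi diagonal
b = np.abs(rng.normal(size=n - 1))   # Jacobi off-diagonal, positive
J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

# spectral measure of J at e_1: eigenvalues lam_i, weights q_i
lam, U = np.linalg.eigh(J)
q = U[0] ** 2

# m_k two ways: from the measure, and as (J^k)_{11} (path counting)
for k in range(1, 2 * n):
    m_measure = np.sum(q * lam ** k)
    m_matrix = np.linalg.matrix_power(J, k)[0, 0]
    assert np.isclose(m_measure, m_matrix)
print("moments m_1, ..., m_{2n-1} agree both ways")
```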
So when you plug in V(x) = x², these entries become independent, because Tr J² is just a sum of the squares of the entries. The a's will be Gaussian, because you just have exp(−a²)-type factors in the density. The b's will have a density which is an exponential of −b² times the power of b coming from that logarithmic term — so the b's are actually going to be χ's. And so one of the corollaries is that when V is just x², the so-called Gaussian beta ensemble has a representation as a tridiagonal matrix with independent entries. That theorem is due to Dumitriu and Edelman. When V is more complicated, the entries are not independent, but the dependence is not very strong, so there are still things you can do. Maybe I'll stop now, and we'll continue tomorrow.
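A minimal sampler for this corollary, under one common normalization convention (my assumption here): N(0, 2) on the diagonal, χ_{(n−k)β} on the off-diagonal, everything divided by √β; after dividing the eigenvalues by √n, they fill out the semicircle on [−2, 2]:

```python
import numpy as np

def gaussian_beta_tridiag(n, beta, rng):
    # Dumitriu-Edelman tridiagonal model (one common normalization):
    # N(0, 2) diagonal, chi_{(n-k)*beta} off-diagonal, divided by sqrt(beta)
    diag = rng.normal(0.0, np.sqrt(2.0), n)
    off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1)))
    J = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return J / np.sqrt(beta)

rng = np.random.default_rng(3)
n = 4000
evals = np.linalg.eigvalsh(gaussian_beta_tridiag(n, beta=2.0, rng=rng))
evals /= np.sqrt(n)               # rescale to the semicircle scale
print(evals.min(), evals.max())   # close to -2 and +2 respectively
```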