Okay. Hi everybody. We're going to continue the series of lectures on beta ensembles, and the goal today is to continue the study of the stochastic Airy operator. This chalk doesn't work, let me get some bigger chalk. Okay, so what we discussed last time is that when we look at the tridiagonal ensemble for the beta ensembles, it looks like some random operator, and it actually converges, in a certain sense, to a limiting random operator, which is a differential operator called the stochastic Airy operator:

A_β f = −f'' + x f + (2/√β) b' f.

So first of all there is minus the second derivative, plus multiplication by x, plus 2/√β times multiplication by b', the derivative of Brownian motion, which is white noise. We also had the Dirichlet boundary condition, f(0) = 0. I stated the precise theorem of what the convergence means. We're not going to prove that theorem, it's a little bit technical; what I'd rather focus on today is how you work with an object like this. Because again, this is not just some abstract beautiful thing that you converge to and then you're happy; you can actually deduce a lot of properties of the limit from it, and how you do that is what I'm going to talk about today. So first of all, our goal was to define a quadratic form. We defined the space L*, the functions f in L², with f(0) = 0, such that ⟨f, A f⟩ is finite, where A = −d²/dx² + x is the ordinary, deterministic Airy operator. For every f in L* we would like to define the quadratic form of A_β. The quadratic form of A is of course unproblematic, and A_β is just A plus (2/√β) times multiplication by b', so we only have to make sense of that multiplication by b'. And not only that, I promised you that there is some domination.
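As a sanity check on the limiting object, here is a minimal numerical sketch, not from the lecture (the interval length, grid size, and function name are my choices): discretizing the deterministic Airy operator A = −d²/dx² + x with a Dirichlet condition at 0 by finite differences recovers its lowest eigenvalues, which are the absolute values of the zeros of the Airy function Ai.

```python
import numpy as np

def airy_eigs(L=20.0, n=2000, k=3):
    """Finite-difference discretization of A = -d^2/dx^2 + x on [0, L]
    with Dirichlet boundary conditions; returns the k lowest eigenvalues."""
    h = L / (n + 1)
    x = h * np.arange(1, n + 1)          # interior grid points
    main = 2.0 / h**2 + x                # -f'' stencil diagonal plus potential x
    off = -np.ones(n - 1) / h**2         # off-diagonals of the -f'' stencil
    M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(M)[:k]

# The lowest eigenvalues should approximate the absolute values of the
# zeros of Ai: 2.3381, 4.0879, 5.5206, ...
print(airy_eigs())
```

The stochastic operator A_β is the same object with the white-noise potential added on top of x.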
So this multiplication by b', in the operator sense, is dominated by ε times A plus some random constant C_ε, depending on ε, times the identity:

±b' ≤ ε A + C_ε I.

Okay, and when I write these things in terms of operators: I wrote this last time and some of you complained that I didn't tell you what it meant, so let me tell you precisely. For two operators, A ≤ B means that for any f in L*, the quadratic forms satisfy ⟨f, A f⟩ ≤ ⟨f, B f⟩. And if you have this inequality, then it implies inequalities for the corresponding eigenvalues: for every k, the k-th eigenvalue of A (ordered from the bottom, say) is less than or equal to the k-th eigenvalue of B. That's a nice exercise that I recommend you do, using the Courant–Fischer characterization; it's really very simple. Okay, so that's what we mean by this. All right, so the trick was to take this b', or the Brownian motion if you like, apart into two pieces: an averaged version plus the rest. The averaged version is

b̄(t) = ∫ from t to t+1 of b(s) ds.

This averaged version has a derivative, b̄'(t) = b(t+1) − b(t), which is a perfectly fine function, as opposed to b', which is not a function but a distribution. (You can also make sense of this whole story in terms of distributions, but I'll put that aside for these lectures.) So that's b̄(t), and the rest is b̃ = b − b̄.
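The recommended exercise can be checked numerically in finite dimensions: if B − A is positive semidefinite, so that ⟨f, A f⟩ ≤ ⟨f, B f⟩ for all f, then Courant–Fischer gives λ_k(A) ≤ λ_k(B) for every k. A quick sketch (the matrix size and random seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # a random symmetric matrix A
R = rng.standard_normal((n, n))
B = A + R @ R.T                        # B = A + (positive semidefinite), so A <= B

lam_A = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
lam_B = np.linalg.eigvalsh(B)
print(np.all(lam_A <= lam_B + 1e-9))   # k-th eigenvalue monotonicity, every k
```

The proof via Courant–Fischer is the same in infinite dimensions, which is what gets used for A_β below.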
And now I'm just going to define the inner product ⟨f, b' f⟩ in terms of this decomposition:

⟨f, b' f⟩ := ∫₀^∞ f² b̄' dx − 2 ∫₀^∞ f f' b̃ dx.

The first integral is perfectly fine; the only issue is whether it converges, that is, whether b̄' is not too large, and that's the only thing we have to look at there. The second part is the b̃ piece written by integration by parts: instead of integrating f² against the derivative of b̃, we integrate (f²)' = 2 f f' against b̃ itself. We can just take this identity to be the definition, and I only have to check that it is not too large. So let's see why that is. It comes from the following bounds. Both b̄' and b̃ are perfectly nice functions, and both are stationary Gaussian processes as functions of t. In fact their correlation decays very fast; if you go from x to x + 1, they already become independent. So basically these things, as functions of time, look like i.i.d. sequences of normals, except made continuous in some sense. And just as for i.i.d. normals, it's not so hard to prove, I'll leave it as an exercise, that both satisfy, even in absolute value,

|b̄'(x)|, |b̃(x)| ≤ C √(log(2 + x))

with some random constant C. That's how i.i.d. normals grow. You can of course prove more precise bounds, but I think what we really need from this is two things.
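To make the decomposition concrete, here is a small simulation sketch (the step size, horizon, and inspection point are my choices): on a grid, the averaged path b̄(t) = ∫_t^{t+1} b(s) ds becomes a moving sum, and its finite-difference derivative is exactly the unit increment b(t+1) − b(t), by telescoping.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3
n = 10_000                                   # horizon T = 10
# Brownian path on the grid, via cumulative sums of N(0, dt) increments
b = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])

m = int(round(1.0 / dt))                     # grid steps per unit window

def b_bar(i):
    """Riemann-sum version of the averaged path at t_i."""
    return dt * b[i:i + m].sum()

i = 2500                                     # inspect at t = 2.5
deriv = (b_bar(i + 1) - b_bar(i)) / dt       # discrete derivative of b_bar
increment = b[i + m] - b[i]                  # b(t + 1) - b(t)
print(deriv, increment)                      # these agree (telescoping sum)
```

The point is that b̄' is an honest function built from unit increments of b, so it is stationary Gaussian with fast-decaying correlations, as the lecture uses.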
So first of all, we'll need that b̄' satisfies, and this just follows from the log bound,

b̄'(x) ≤ C_ε + ε x,

with a random constant C_ε depending on ε; you can do this for every ε. And the other thing: the same works for b̃², say

b̃(x)² ≤ C_ε + ε² x.

(You'll see in a moment why I'd better take ε² here, so that things stay small.) Okay, so that's what we'll use, and now we just use these to bound the inner product; let's call it (★). So, the absolute value of (★): in the first integral we can replace b̄' by its upper bound, which gives C_ε times the L² norm of f squared, plus ε ∫ x f² dx, and this last term is upper bounded by the Airy norm, ε ⟨f, A f⟩. So that's the first term. For the second term, you split the product into a sum of squares, but you have to do it in an uneven way: f' should get a little bit less weight, so that you can make that part arbitrarily small. So I write

2 |∫ f f' b̃ dx| ≤ ε ∫ (f')² dx + (1/ε) ∫ f² b̃² dx ≤ ε ∫ (f')² dx + (1/ε) ∫ f² (C_ε + ε² x) dx,

using the upper bound on b̃² in the last step.
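The uneven splitting here is just Young's inequality, 2uv ≤ ε u² + v²/ε, applied pointwise with u = |f'| and v = |f b̃|:

```latex
2\,|f f' \tilde b| \;\le\; \varepsilon (f')^2 + \frac{1}{\varepsilon}\, f^2 \tilde b^2,
\qquad\text{since}\qquad
0 \;\le\; \Bigl(\sqrt{\varepsilon}\,|f'| - \tfrac{1}{\sqrt{\varepsilon}}\,|f \tilde b|\Bigr)^{2}.
```

Integrating over x gives the displayed bound on the second term.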
With the ε² in place I'm good, so now I just have to bound this. As you can see, the (1/ε)·ε² x = ε x part goes into the Airy norm again, the ε ∫ (f')² also goes into the Airy norm, and the (1/ε) C_ε part gives another constant, still depending on ε, times the L² norm of f squared. So maybe I end up with two or three ε's in front of the Airy norm, but that's fine. Okay, so we proved the claim, and by adding these things together we get the inequality we claimed last time, namely

(1 − ε) A − C_ε I ≤ A_β ≤ (1 + ε) A + C_ε I,

an upper bound and a lower bound of the same kind. And the same then holds for the k-th eigenvalue. Now, as you know, λ_k(A) is asymptotic to an explicit constant (which I don't remember offhand) times k^(2/3), and the inequality implies that the eigenvalues of A_β have the same asymptotics. I like to call this the Wigner semicircle law for the stochastic Airy operator, because if you think about it, it tells you the following: if you draw the density of states, the empirical eigenvalue distribution, then the fact that the k-th eigenvalue sits at order k^(2/3) shows that the histogram essentially looks like a square-root function, which is exactly the edge of the semicircle. So that's what you expect to see: when you just focus on the edge, you see the edge of the semicircle.
Okay, so that's already one thing that we proved about general beta ensembles, just using this representation, and it wasn't very difficult. I'm going to go on and prove some other things, unless you have questions now. Okay, no questions. So, what else can you do? Well, another nice thing that you can prove relatively easily is tail bounds.

[Audience question about the constant.] Yes, exactly. This inequality is true like this, with the same constant; you just replace the operators by their eigenvalues, and that's perfectly okay to do. And of course, if for every ε there exists some random C_ε such that this holds, then it implies the asymptotics.

[Question from Arjun.] Yes, we actually proved the semicircle law only for the GOE, but there's no difference in the proof; both proofs work for all beta. What we have here is some very refined version of it: you zoom in near the edge, you take a limit, and it's still kind of semicircular in this weak sense.

All right. So first of all, here is now a definition: the Tracy–Widom beta distribution is, as a distribution,

TW_β = −λ₁(A_β).

For betas that are not classical, this defines the Tracy–Widom beta distribution; and for the classical betas, it is a theorem, which we have essentially proved now, that you can represent the Tracy–Widom distribution this way. As you know, you might have seen this before: the Tracy–Widom density just looks like anything else, the snake that ate the elephant, and it's very hard to distinguish the various betas by eye, or to distinguish them from anything that looks like this in general.
But they have this nice property: this is about the top eigenvalue, and it's asymmetric. It's much harder for the top eigenvalue to go back into the bulk, because it has to push everybody else down, than to go outside. So the tail going outside is fatter than Gaussian:

P(TW_β > a) ≈ exp(−(2/3) β a^(3/2)),

that's the probability of being in that tail; and the tail going into the bulk is thinner than Gaussian:

P(TW_β < −a) ≈ exp(−(β/24) a³).

Okay? All of these are not too difficult to prove, just using what you already know, and what I'm going to do in the rest of the lecture is prove the second one. So what precisely is the theorem? We write it like this:

P(TW_β < −a) = exp(−(1/24) β a³ (1 + o(1))) as a → ∞.

That's the theorem. And really the upper bound here is the one that's easiest, and it's actually real fun. I'll tell you why: part of it is that it's a proof in which you get to be a physicist without punishment. You know, you always go to the physicists and tell them, well, what you do is not rigorous, and the physicists say, well, yeah, but we get the right answer. In this proof you can do the same thing. And in fact the proof I give you will actually be rigorous, even though you do get to do some guesswork. So here is the nice thing: λ₁ is minus the Tracy–Widom variable, remember, so the event {TW_β < −a} is exactly the event {λ₁ > a}. So you want to look at this event.
So if λ₁ > a, then the Rayleigh quotient formula gives you that for any f, the quadratic form ⟨f, A_β f⟩ has to be greater than a times the L² norm of f squared, because λ₁ is just the infimum of the ratio of this and that over all f. So the probability of {λ₁ > a} is upper bounded by the probability of {⟨f, A_β f⟩ ≥ a ‖f‖²}, for any fixed f. If you actually wanted the exact λ₁, you would have to optimize over all f, including random f that depend on the noise. But at this level of crudeness, this level of asymptotics, you don't have to take a random f; you can just pick some deterministic f. And no matter what f you pick (this is the physicist part) you get a bound, and the bound is correct. But of course you want to pick a good f. Now, how you picked a good f, you don't have to tell anyone; as long as it works, it's fine. I'm going to try to tell you, but I won't tell you precisely, because I want to enjoy being a physicist here. Okay, so let's write out ⟨f, A_β f⟩. You have the Airy part: ‖f'‖² plus ‖√x f‖², the L² norms squared. Plus you have the noise part:

(2/√β) ∫ f² b' dx.

And this whole thing has to be greater than or equal to a ‖f‖². So we're interested in the probability of this event, and again, any bound on this probability for any fixed f gives an upper bound on P(λ₁ > a). So let's see what's happening here. Well, for a fixed f, the two Airy terms are perfectly fine deterministic quantities.
All right, let's just compute them. What we're going to focus on is: what is ∫ f² b' dx? You're integrating a deterministic function, f², against white noise. So it's just a Wiener integral, which is just a normal random variable; there is nothing special here. What's the variance of that normal random variable? It's

∫ f⁴ dx = ‖f‖₄⁴,

the L² norm of f², which is the fourth power of the L⁴ norm of f. So I can replace the noise term by ‖f‖₄² times a standard normal N. So what is our event? It's a tail event for a normal random variable, the simplest thing you have seen in your life; anything like this you can easily study. You rearrange everything so that on one side you have the standard normal and on the other side you have everything else:

P(⟨f, A_β f⟩ ≥ a ‖f‖²) = P( N ≥ (√β / 2) (a ‖f‖₂² − ‖f'‖₂² − ‖√x f‖₂²) / ‖f‖₄² ),

and then you use the standard normal tail bound, exp(−x²/2), to get

≤ exp( −(β/8) (a ‖f‖₂² − ‖f'‖₂² − ‖√x f‖₂²)² / ‖f‖₄⁴ ).

So downstairs you have the fourth power of the L⁴ norm of f, and upstairs you pack in all these deterministic terms and square.
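That the noise term is a centered Gaussian with variance ‖f‖₄⁴ can be checked by simulation. A sketch (the test function e^(−x), the grid, and the sample count are my choices): approximate ∫ f² db by summing f² against independent Brownian increments, and compare the empirical variance with the Riemann sum for ∫ f⁴ dx.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.01
x = np.arange(0.0, 5.0, dt)
f = np.exp(-x)                         # test function (an arbitrary choice)
g = f**2                               # integrand against white noise

n_samples = 20_000
db = np.sqrt(dt) * rng.standard_normal((n_samples, x.size))  # Brownian increments
samples = db @ g                       # Monte Carlo draws of  int f^2 db

emp_var = samples.var()
theory = dt * np.sum(f**4)             # ||f||_4^4 = int f^4 dx  (about 1/4 here)
print(emp_var, theory)                 # agree up to Monte Carlo error
```

Each draw is a finite Gaussian sum, so the Gaussianity is exact here; the content being checked is that the variance is the L⁴ norm to the fourth power.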
You know what, there is no point in making you watch me chase the sign; sorted out, the bound is exp(−(β/8)(a ‖f‖₂² − ‖f'‖₂² − ‖√x f‖₂²)² / ‖f‖₄⁴), as it should be. Okay. So now comes the argument. What do you want to do? You want to maximize this exponent over f; that's your job. What's in there? You can see it's homogeneous of degree zero in f: the squared numerator and ‖f‖₄⁴ both scale like the fourth power, so multiplying f by a constant changes nothing. So it comes with a variational problem, and you'd like to solve it with Lagrange multipliers, except you can't, because it's too complicated. So what you try to do in all these cases is to say: well, one of these terms will probably not matter. And here, as you will see from the answer, the term that doesn't matter is the derivative term: you expect the optimizing f to be kind of flat, so ‖f'‖² is not going to be important. So you just take that term out, and then you optimize the rest. Now that you can solve, and I'll tell you what the solution is. The actual solution of the reduced problem is

f(x) = √((a − x)₊).

Let me draw this function: it's just a square-root curve coming down to zero at x = a. This is essentially what you want f to be, but it won't quite work as is, because first of all it doesn't satisfy the boundary condition f(0) = 0, and also you get too much Dirichlet norm from the steep part near x = a. So what you do is you cut it off: near x = a you put in the line of slope one, that is, you take the minimum with (a − x)₊, and near zero you also cut it with the steep line through the origin, the minimum with x √a. That's a steep one, but it's fine to make it too steep. So again: you solve the variational problem without the derivative term, and this cut-off square root is what it says.
So you solve the variational problem and you get this function; of course the variational problem ignored the derivative term, so the optimizer won't necessarily be nice for that term, but the small modification we just made is. So that's what we do. Okay, and now I'll just give you the answers, which you can check in your head while I'm writing. The L² norm squared is a²/2, so the term a ‖f‖₂² is a³/2. The ‖f'‖₂² norm is O(a); it's lower order, so it's not important. The ‖√x f‖₂² norm is a³/6. And the fourth power of the L⁴ norm is a³/3. So you put these together and plug them in:

(β/8) (a³/2 − a³/6 − O(a))² / (a³/3) = (β/24) a³ (1 + o(1)),

which is exactly the claimed bound. Okay, so that's the end of the upper bound proof. Any questions?

[Question.] Yes, because it's squared up there, you see: everything upstairs is of order (a³)², and then you divide by a³ again, so you're left with a³. Ah, no, no, that's not a dumb question.
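The leading-order values of these norms, and the resulting β a³/24, can be verified numerically for the cut-off trial function. A sketch (the value a = 200, the grid, and β = 2 are my choices; the f' term is dropped, as in the lecture, since it is O(a)):

```python
import numpy as np

a = 200.0
dx = 1e-3
x = np.arange(dx / 2, a, dx)                       # midpoint grid on [0, a]
# trial function: sqrt((a - x)_+), cut by the steep line x*sqrt(a) near 0
# and by the slope-one line (a - x) near a
f = np.minimum.reduce([x * np.sqrt(a), np.sqrt(a - x), a - x])

l2 = dx * np.sum(f**2)             # ~ a^2 / 2,  so  a * l2 ~ a^3 / 2
xf = dx * np.sum(x * f**2)         # ~ a^3 / 6
l4 = dx * np.sum(f**4)             # ~ a^3 / 3

beta = 2.0                         # beta only scales the exponent
exponent = (beta / 8) * (a * l2 - xf) ** 2 / l4
print(exponent / (beta * a**3 / 24))   # ratio tends to 1 as a grows
```

The O(1/a) corrections come from the two cutoffs, which is exactly the o(1) in the theorem's exponent.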