I think maybe I left out the beta in this normal here. So in fact you should put the beta here, or this should be multiplied by the variance, which is 2/β. Yes, thanks; it's just because I ignored the variance of the normal. Any other questions? Okay, so let me get started: I promised you a proof. I gave you an upper bound, so I still owe you something, and since we're towards the end of the lectures, I'm going to be brave and use more serious tools than we did before. The tool for the lower bound is called the Riccati transform. The Riccati transform is the word, in this world of Schrödinger operators, for taking the logarithmic derivative. It just shows that no matter what great things you do in your life, you may end up having the log derivative named after you; in other places it's called the Hopf-Cole transform, and it has various other names. So suppose you have this differential equation. What is the eigenvalue equation? You have an operator, minus ∂_xx plus some potential V(x); this is your Schrödinger operator. The eigenvalue equation for it is λf = -∂_xx f + V(x)f. This is a second-order linear differential equation, and the Riccati transform is a way to try to understand it. If this equation is satisfied by f, and the boundary conditions are satisfied as well, then f is an eigenfunction with eigenvalue λ. So of course one thing you can try, even though it has essentially zero chance of working, is to just pick a λ and try solving for f.
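In symbols, the log-derivative substitution that the Riccati transform performs, using the eigenvalue equation just stated, is the following one-line computation:

```latex
p := \frac{f'}{f} = (\log f)'
\quad\Longrightarrow\quad
p' = \frac{f''}{f} - \Big(\frac{f'}{f}\Big)^{2}
   = \big(V(x) - \lambda\big) - p^{2},
```

since the eigenvalue equation rearranges to \(f'' = (V - \lambda)\,f\).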
The left boundary condition is given to you, so you have no choice there; but the right boundary condition, which essentially just says that f doesn't blow up, or that f stays in L², may or may not be satisfied. Pretty much all the time it won't be satisfied. Nevertheless, you can still get information out of solving this equation for a given fixed λ, and I'll tell you what that information is; this is what the Riccati transform is for. If you take p to be f'/f, then you get a first-order differential equation which looks like this: p' = V(x) - λ - p². That's what you get from making this change of variables. Here V(x) is your potential, which for us is x plus this derivative of white noise, the derivative of the Brownian motion. So you can solve this first-order equation instead of the eigenvalue equation, and here is the information you get. The following fact is true in general for Schrödinger operators, and if I have time at the end maybe I'll explain it, but for now I'm just going to state it as a fact, because it's subtle. We have λ < λ₁ (again, I pick a λ and solve this equation) if and only if the solution does not blow up. I'll tell you what this means. There is this -p² term in the differential equation, and such a term can be bad: it can pull you down very fast. So you see this is perfect for tail bounds: you want the probability that λ₁ is less than or equal to some given value, and I can
just solve the equation for a fixed λ and get bounds on the probability that λ₁ is below that fixed λ. Okay, so what is the picture for this p? For us V(x) is x plus this Brownian motion, times some constant, differentiated; let's forget about the Brownian motion for now and just look at the Airy operator. How does the drift look? The drift is zero where x = p², which is a parabola: here is the x-axis, here is p, and on this parabola the drift vanishes. Everywhere outside the parabola the drift is downward, and inside, in the middle, it is upward; so when you solve this differential equation, think of a particle moving in this drift field, trying to go down out here and up in the middle. The vertical axis is p, and I'm drawing the drift as arrows; really this is the direction field of the differential equation I'm drawing. So what's going to happen? Let's take the Airy case, with λ equal to zero. I have to start at +∞, because p is f'/f and f starts at zero, so p actually has to start at +∞; that's a technical point, and it's not so hard to figure out what it means, because the -p² term pulls the solution down very fast. So your particle comes down here, gets into this valley along the parabola, and it won't blow up, so
that means that the first eigenvalue of the Airy operator is greater than zero, which is true. Now the other thing to notice is that we still have translation invariance here. In general I should draw a different drift field for every λ, but here the potential is just x, so you can absorb λ into x: instead of making λ larger, I can look at the same picture and just start the solution earlier, further to the left. And if I start early enough, the solution just comes down and blows down like this; and I am telling you that the λ corresponding to this starting point is then an upper bound for λ₁, the bottom eigenvalue of the Airy operator, which corresponds to the top eigenvalue on the matrix side. So this is the deterministic picture; but now let's put in this B'. When we put in the B' and solve this ODE, it becomes an SDE, a stochastic differential equation: what I drew is the drift of the SDE, and what I didn't draw is the noise term that it also has. And this is really nice, it's literally a story: you have this particle moving in the field, you're rooting for it to blow up or not to blow up, and you want the probability of those things happening. Everything I've drawn here can still happen, except there is also randomness now. So it's possible that the particle starts here, and it wiggles, and it wiggles, and instead of staying in this groove it actually fights its way through the drift, gets over here, and then blows down. That kind of thing can happen, it has some probability, and you can actually estimate it. Okay, so
let's actually do the computation now. Here's what I have to do to get the bound I had up there: I have to give a lower bound on the probability of the following event. I start at -a, I have my stochastic particle, and I have to make sure that it doesn't blow down; what I hope it does is come down here and end up in this groove. So let me write it like this: the probability that λ₁ > a is equal to a probability with a starting point, namely I start at -a in time and at +∞ in space, and the event is that my particle p does not blow up. This is actually an equality; there is no inequality yet. Now, two things can happen to this particle, if you look at the picture: one is what I've drawn, it comes down, hangs around for a while, and ends up sticking in this groove; the other is that it goes to -∞, which is what I call blowing up. Now notice that there is some monotonicity here: if one particle starts below another, then they keep that order forever. So, since I need a lower bound, let me use this: the probability is at least the probability of the same event started at (-a, 1) instead of (-a, +∞), because it's easier for the particle that starts below to blow up. And then I'm going to describe one particular scenario for the event to happen, and it turns out that this scenario is good enough to match our upper bound. The scenario is this: I look at this strip here, from 0 to 2, I start my particle at 1, and I want it to stay in the strip, okay, so
that's one particular way to go: I want it to stay in the strip until time 0, and after that I ask it not to blow up. So let's write it like this: it's at least the probability that p, started from (-a, 1), stays in [0, 2] until time 0, and then, from wherever it got to, doesn't blow up. Now of course that continuation depends on where I ended up; but if I stayed in [0, 2], then I certainly ended up above 0, and it's easiest to blow up when you start from 0. So if I just want an inequality, the following is correct: I can write it as a product, where the second factor is the probability, starting from (0, 0), that p doesn't blow up. Notice that this factor doesn't have a in it anymore; it's just some constant, so we can basically forget about it, and we just have to compute the other factor, the one about staying in the strip. And here comes the part that I feel a little bit guilty about, but maybe you won't kill me: we're going to compute it using Girsanov's formula. So what is Girsanov's formula?
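As an aside (my own illustration, not part of the lecture): the scenario we just set up is easy to simulate. Below is a minimal Euler-Maruyama sketch of the Riccati SDE dp = (x - p²)dx + (2/√β)dB, with the noise chosen so its variance is 4/β, which matches the β/8 in the Girsanov density computed next; the function name, β = 2, step size, and path count are all my choices. It estimates the probability that the particle started at p(-a) = 1 stays in the strip [0, 2] up to time 0:

```python
import numpy as np

def stay_in_strip_prob(a, beta=2.0, dx=1e-3, n_paths=2000, seed=0):
    """Euler-Maruyama for the Riccati SDE
        dp = (x - p**2) dx + (2/sqrt(beta)) dB,
    started at p(-a) = 1.  Returns the fraction of paths that stay
    inside the strip [0, 2] for all times x in [-a, 0]."""
    rng = np.random.default_rng(seed)
    sigma = 2.0 / np.sqrt(beta)       # noise coefficient, variance 4/beta
    p = np.full(n_paths, 1.0)
    inside = np.ones(n_paths, dtype=bool)
    for x in np.arange(-a, 0.0, dx):
        act = inside                  # only evolve paths still in the strip
        dB = rng.normal(0.0, np.sqrt(dx), act.sum())
        p[act] += (x - p[act] ** 2) * dx + sigma * dB
        inside = inside & (p >= 0.0) & (p <= 2.0)
    return inside.mean()

# the staying probability decays rapidly as a grows
for a in (1.0, 2.0, 3.0):
    print(a, stay_in_strip_prob(a))
```

Paths that leave the strip are frozen, since the event has already failed for them; this also avoids the numerical blow-down of the -p² drift. The rapid decay in a is consistent with the exp(-βa³/24) answer that the Girsanov computation gives.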
p is some diffusion with a funny drift, and we know that such things are absolutely continuous with respect to ordinary Brownian motion, so there is a density; and when you want to compute probabilities for p, you can compute them as an expectation of this density over the event you're interested in. Here is what this gives us, and I'm just going to give you the answer. The probability equals the expectation of the exponential of (β/4) ∫ from -a to 0 of (x - b²) db, where b is now ordinary Brownian motion, minus (β/8) ∫ from -a to 0 of (x - b²)² dx. I forgot to put in the exponential at first; this always comes in exponential form, and what I'm writing is just the density between the two processes. And this expectation has to be taken on the event I want, namely that b stays in [0, 2] for all x in [-a, 0]; x is time here. So this expectation equals that probability. So what am I doing? I want a Brownian motion to start here, at say 1, I want it to stay in this interval, and I'm integrating this density over that event. And as it turns out, this is the nicest possible setup. Look at this quadratic term, because this is what gives you the answer: on this event b is between 0 and 2, so in (x - b²)² you can basically forget about b, and you're just integrating x² from -a to 0. So what do you get?
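Spelling this out (the only input is that b stays in [0, 2] on our event):

```latex
-\frac{\beta}{8}\int_{-a}^{0}\big(x-b^{2}\big)^{2}\,dx
 = -\frac{\beta}{8}\int_{-a}^{0} x^{2}\,dx \;+\; O(a^{2})
 = -\frac{\beta}{8}\cdot\frac{a^{3}}{3} + O(a^{2})
 = -\frac{\beta a^{3}}{24} + O(a^{2}),
```

since the cross term \(-2b^{2}x\) integrates to \(O(a^{2})\) and the \(b^{4}\) term to \(O(a)\) when \(0 \le b \le 2\).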
So this term gives you -βa³/24, the one-third coming from the integral of x², which is a³/3. That's what we were after; this is the density term. The other term, the db term, you can show by integration by parts to be O(a), so it doesn't contribute to the leading asymptotics. And then you also have to check the probability that b stays in this interval for the time from -a to 0, but that's just exponential in a: any reasonable Markov process stays in a reasonable set for time a with probability only exponentially small in a. So again that's unimportant next to a³, and that's the end. Okay, so I hope this was somewhat useful. I've shown you how to think about random matrices in terms of random operators, by actually keeping the structure of the operator and using it to answer the questions you're interested in, even in the limit. You have seen, I think, three or four kinds of operator limits of matrices, and all of them gave us theorems about laws of eigenvalues and other things. As you know, this area is huge, so there is a lot to do. You have seen Elliot's talk, for example, about trying to understand how these tridiagonal operators move under Dyson's Brownian motion; there is some nice progress there, but there is still a whole lot to do in the area, and many, many open questions, including analyticity in β of various statistics, which is not known. So maybe I'll stop now, and if you have any questions, let me know. More questions? Yes, absolutely: there is a very nice paper of Brian Rider and Michel Ledoux where they use exactly these techniques to give large deviation bounds for finite matrices. They do almost the same as what we did; it's just harder to do with the finite process.
Well, yeah, I guess the precise term is intermediate deviations. And thank you for that question, because yes, that's another thing: this is also a nice way to derive the Tracy-Widom formula, that is, the Painlevé formula. Look at this equality: p doesn't blow up. p is an SDE, and the probability that an SDE doesn't blow up can always be expressed as the solution of some PDE; this is always the case. So you can write a PDE for the Tracy-Widom distribution using this, with some extra work. The catch is that the values of that PDE are not all a priori meaningful: these quantities have an interpretation, namely the Tracy-Widom tail, only when you start at +∞; the difficulty is that you don't have an a priori interpretation of what it means to put a different number here. But you can write a PDE for this probability in which a and this starting value vary. To interpret the rest, you turn to the theory of rank-one perturbations. This is done in the thesis of Alex Bloemendal, my former student: it turns out that if you take rank-one perturbations of these β-ensembles and push them through this limit, they just become boundary conditions. So for the rank-one-perturbed Tracy-Widom laws you have the same thing, except you put some w here. Now I have an interpretation of that number for every a and every w, and I also have formulas for it in terms of Painlevé equations; you can just plug those formulas in and check that they satisfy the PDEs, and it's easy to check that the PDEs have a unique solution. That's how you prove the Tracy-Widom laws: it gives you Painlevé directly, without Fredholm determinants. Yes, that's a good question; I don't know a good answer to that. Probably for two you could
work something out, but if you want it for a thousand, it would maybe be possible, though you'd have to give up some precision or something. Well, so the optimizing functions are the eigenfunctions, right, and I do have some idea of what they look like; they're similar to the Airy eigenfunctions... no, sorry: the optimizing functions here are the eigenfunctions under this strange large-deviation tilt, so they're not similar to the Airy ones, because this is the large deviation regime; sorry about that. Well, the answer to that is we haven't looked at it. You expect it to be close to what we guessed for the optimizing functions, but I don't know how close. I think you could probably get something out of it: if you study this diffusion, you can study this much more precisely, and there are various papers studying these things more precisely and getting further terms in the expansion of the tail; from those papers you could probably extract information about the optimizing functions, if you wanted. More questions? If not, let us thank Bálint again with a big round of applause.