into, let's say, L2 of R with piecewise constant functions. So with e_i, the i-th coordinate vector goes to the indicator function of the interval I_i, where M is n to the one third and I_i is the interval from (i-1)/M to i/M; let's make it half open or something. And for this to be an L2 isometry, you have to multiply by root M. So this is a way of embedding R^n. And if you do this embedding, then of course you've also embedded the operator: it now acts on these piecewise constant functions. So now at least the two operators live in the same space, but there's still the problem that they act on completely different functions, okay? The same space, but different functions. This one acts on functions that are piecewise constant, and that one acts on functions that have second derivatives. The intersection is basically zero, I think. Constant functions, that's the intersection. But it turns out that this is not a problem for actual operator convergence, because both of these operators, nasty as they are, have inverses. Assume there is no zero eigenvalue; if there is a zero eigenvalue, you take the resolvent instead: you subtract some constant, possibly complex, it doesn't matter, and take the inverse. It turns out that the resolvent, or the inverse, of this operator is a compact operator, so it's actually defined on the whole of L2 of R. The inverse of the other one you can also define as a compact operator on the whole of L2 of R. And the statement is that these can be coupled in such a way that their difference goes to zero in norm. Okay? That's the precise statement. So let's call this one H_n, and I'm going to call this one H_infinity, and I'm going to write it like this, somewhat schematically.
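This embedding can be sketched numerically. The toy version below uses M = n^(1/3) rounded to an integer and an arbitrary test vector (both assumptions for illustration), and checks that mapping e_i to root-M times the indicator of [(i-1)/M, i/M) preserves the L2 norm, which is the isometry property described above.

```python
import math

def embed(v, M):
    # e_i  ->  sqrt(M) * indicator of [(i-1)/M, i/M),
    # so a vector v becomes the step function
    #   f(x) = sqrt(M) * v[i]   for x in [i/M, (i+1)/M)  (0-based index).
    def f(x):
        i = int(x * M)          # index of the interval containing x
        if 0 <= i < len(v):
            return math.sqrt(M) * v[i]
        return 0.0
    return f

def l2_norm_sq_step(f, n, M):
    # Exact L2 norm squared of a function constant on intervals of length 1/M:
    # sample each interval at its midpoint and weight by the interval length.
    return sum(f((i + 0.5) / M) ** 2 / M for i in range(n))

n = 8
M = round(n ** (1 / 3))        # mesh from the lecture: M = n^(1/3)
v = [1.0, -2.0, 0.5, 3.0, 0.0, 1.5, -1.0, 2.0]
f = embed(v, M)
print(sum(x * x for x in v), l2_norm_sq_step(f, n, M))  # the two norms agree
```

The factor root M is exactly what makes the square of the step height, times the interval length 1/M, equal to v_i squared.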
So H_n inverse minus H_infinity inverse goes to zero in norm, and this can be the operator norm, say the 2-to-2 norm. Okay? This notion is called norm resolvent convergence for operators, and it implies a lot of things; that's the nice thing about it. It implies that the eigenvalues converge. Not only the eigenvalues: if the eigenvalues are distinct in the limit, then the eigenvectors of the approximating operators converge in norm to the limiting eigenvectors. So everything beautiful that you want to know is in it, and it's a simple statement. And often this kind of statement has a simple proof. In this particular case the proof is actually not that simple, but in other cases the proof is very simple, and then you get a huge amount of interesting information out of a simple proof just by working at this slightly more abstract level. The 2-to-2 norm: it's the maximum L2 norm of the image of the L2 ball of radius one. This is also called the operator norm. [In answer to a question:] So let me tell you, yeah, that's a good question. You can imagine what the eigenvalues of the Airy operator are going to look like, because they're the limits of the eigenvalues of the random matrix; those are the eigenvalues at the edge of the spectrum. Okay, so the tradition in this field, these are random Schrödinger operators, this is a kind of continuous random Schrödinger operator, the tradition here, and that's why we took a minus sign, is to make them almost positive definite, in the sense that they're bounded from below. So this is not positive definite, but if you erase the noise, it is positive definite: that's just the Airy operator, because multiplication by x is positive, and minus the second derivative is the square of the first derivative. Am I doing it right? Yeah, minus the second derivative is the square of the first. Okay, you figure this out.
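In finite dimensions, the statement that norm convergence forces eigenvalue convergence is Weyl's perturbation inequality: for symmetric A and B, |lambda_k(A) - lambda_k(B)| <= ||A - B|| in the 2-to-2 operator norm. A minimal sketch with made-up 2x2 matrices:

```python
import math

def eigs2(a, b, c):
    # Eigenvalues (ascending) of the symmetric 2x2 matrix [[a, b], [b, c]].
    tr, disc = a + c, math.sqrt((a - c) ** 2 + 4 * b * b)
    return (tr - disc) / 2, (tr + disc) / 2

# Weyl's inequality: |lambda_k(A) - lambda_k(B)| <= ||A - B||, so if a sequence
# of symmetric operators converges in operator norm, each eigenvalue converges.
A = (1.0, 0.5, 2.0)                      # [[1, .5], [.5, 2]]
B = (1.1, 0.4, 2.05)                     # a nearby matrix
D = tuple(x - y for x, y in zip(A, B))   # A - B, also symmetric
gap = max(abs(t) for t in eigs2(*D))     # ||A - B|| for a symmetric matrix

for lam_a, lam_b in zip(eigs2(*A), eigs2(*B)):
    assert abs(lam_a - lam_b) <= gap + 1e-12
print(gap)
```

For the resolvents in the lecture the same inequality is applied to the inverses, which is why compactness and invertibility are set up first.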
So that's the precise statement. Okay. And let me see. In the remaining time, I want to tell you first about how we think about this. You could say: well, I have this operator, and this convergence is very nice, but what is it good for? The point is that this is extremely useful. It's a very nice representation of the top eigenvalue, off of which you can read a lot of things in a very simple way, okay? So let's start with the Airy operator. One thing I didn't tell you is that I have to set a boundary condition here, and the boundary condition is Dirichlet. Okay, so what is the Airy operator? It's just minus the second derivative plus x, right? So what do we know about this operator? You have seen the Airy function, right? Whenever you do any kind of random matrix theory, I think you always see the Airy function, so you've probably seen its Wikipedia page. There are the Airy Ai and Bi functions, and the Bi function blows up near infinity. So if you want a solution of this: take the operator, apply it to some f, and say you just want the result to be zero. Well, you can solve it: there are two linearly independent solutions, and those are exactly these functions; every solution is a linear combination of the two. But say you put lambda f on the right-hand side, because you want to find an eigenvalue of this operator. Well, you can absorb the lambda into the x: move it over, x minus lambda. So what you see is that for all these lambdas, the solutions are just shifts of the Ai and Bi functions, okay? So if this operator has any eigenvalues whatsoever, the eigenfunctions cannot have a Bi component, because Bi goes to infinity, actually super-exponentially. So an eigenfunction has to be a shift of Ai, okay? And the Ai function decays super-exponentially.
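That Ai solves the equation can be checked numerically. The sketch below integrates the Airy equation y'' = x*y with a standard RK4 stepper, starting from the tabulated values Ai(0) = 1/(3^(2/3) Gamma(2/3)) and Ai'(0) = -1/(3^(1/3) Gamma(1/3)), and lands on the tabulated value Ai(1) ≈ 0.135292:

```python
import math

def airy_rk4(x_end, h=1e-3):
    # Integrate y'' = x*y, i.e. (-d^2/dx^2 + x) y = 0, as a first-order system
    # (y, v) with y' = v, v' = x*y, starting from the known values of Ai at 0.
    y = 1.0 / (3 ** (2 / 3) * math.gamma(2 / 3))    # Ai(0)
    v = -1.0 / (3 ** (1 / 3) * math.gamma(1 / 3))   # Ai'(0)
    x = 0.0
    rhs = lambda x, y, v: (v, x * y)
    for _ in range(int(round(x_end / h))):
        k1 = rhs(x, y, v)
        k2 = rhs(x + h / 2, y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = rhs(x + h / 2, y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = rhs(x + h, y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return y

print(airy_rk4(1.0))  # ≈ 0.135292, the tabulated value of Ai(1)
```

Starting instead from Bi's initial values, the same integrator blows up going right, which is the reason eigenfunctions cannot contain a Bi component.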
So in fact, as long as the shift of Ai satisfies the boundary condition, it's going to be an eigenfunction, okay? The Ai function, you've probably seen this thing, looks something like this: it decays on the right, and on the left it oscillates faster and faster, with amplitude decaying like a fourth root. In fact, if you want to know where the zeros are: the k-th zero a_k is approximately minus some constant times k to the two thirds, okay? So going left, the zeros get denser and denser. So what are the eigenfunctions going to be? Very simple: you just take this Ai function and shift it, okay? And the eigenvalue, because I erased this thing, is just the amount by which you have shifted. So the first eigenvalue is minus the first zero, the second eigenvalue is minus the second zero, and so on, okay? So we already know that lambda_k is approximately c times k to the two thirds. Now, there are general theorems saying that if you have a Schrödinger operator which looks like this, minus the second derivative plus some potential, on the half line, and the potential grows the right way, then it has discrete spectrum, okay? From those theorems you can just say: this operator also has discrete spectrum, and these are the eigenvalues, and they behave like this. Not exactly, just approximately; in fact, the proper answer is that the eigenvalues are exactly minus the Airy Ai zeros, and the eigenfunctions are just these Ai's, shifted and normalized, okay? Very simple. Now let's say you want asymptotics not for the Airy eigenvalues, but for the Airy-beta eigenvalues. Okay? I shouldn't be doing this, but, right? So you'd like to see that this operator also has discrete spectrum and behaves somewhat like the Airy operator, okay? And here's what you can do. Here is a nice lemma that you can prove.
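The k^(2/3) growth of the zeros can be seen from the standard first-order asymptotic a_k ≈ -(3*pi*(4k - 1)/8)^(2/3), compared here against tabulated values of the first few Ai zeros:

```python
import math

# Tabulated first zeros of the Airy Ai function (standard reference values).
AI_ZEROS = [-2.33811, -4.08795, -5.52056]

def ai_zero_approx(k):
    # First-order asymptotic for the k-th (negative) zero of Ai:
    #   a_k ≈ -(3*pi*(4k - 1)/8)**(2/3),
    # which already exhibits lambda_k = -a_k ~ c * k^(2/3), the growth rate
    # quoted above for the Airy eigenvalues.
    return -(3 * math.pi * (4 * k - 1) / 8) ** (2 / 3)

for k, exact in enumerate(AI_ZEROS, start=1):
    print(k, round(ai_zero_approx(k), 4), exact)
```

Already at k = 3 the asymptotic is accurate to three decimal places, which is why the lecture treats lambda_k ≈ c*k^(2/3) as essentially exact.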
We may do it today or tomorrow. It's the following. You look at the Airy-beta operator; let me just denote it A_beta, okay? Then it can be upper bounded and lower bounded. For every epsilon, the following is true: you can upper bound it by (1 + epsilon) times the Airy operator plus the identity times some random constant, and lower bound it the same way, by (1 - epsilon) times the Airy operator minus the identity times some random constant. A_beta is the stochastic Airy operator, okay? So this is a random operator, and you have this bound. So what does this mean? This means inequality in the positive definite order: if you take the difference of the two operators, it is positive definite. The constant C depends on the noise, exactly; that's the only place the noise comes in. Okay, so this tells you everything you want to know. We'll prove this, it's not hard. In fact, it follows from the fact that the noise term B prime satisfies, as a quadratic form, plus or minus B prime is at most epsilon times A plus some random constant times the identity, okay? So multiplication by noise behaves nicely this way; we will prove this. If you use this, the difference between the Airy-beta operator and the Airy operator is just B prime, just this multiplication by B prime. So as long as that is small, you automatically get a domination of this kind. Well, that's exactly the question, and the answer is yes. For example, it holds for the lowest eigenvalue. Why? Well, you can use the variational, Rayleigh quotient, formula: the lowest eigenvalue is the infimum, over all functions of norm one, of the quadratic form you make out of the operator, evaluated at that function, right? You can push that definition through these inequalities, and you get the claim. And if you use Courant-Fischer, which characterizes the second eigenvalue the same way, you do the same and automatically get it there too. So in fact, this implies the following.
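The step from the operator inequality to eigenvalue inequalities is Courant-Fischer monotonicity: if B - A is positive semidefinite (A dominated by B in the quadratic-form order), then lambda_k(A) <= lambda_k(B) for every k. A finite-dimensional sketch with made-up 2x2 matrices:

```python
import math

def eigs2(a, b, c):
    # Eigenvalues (ascending) of the symmetric 2x2 matrix [[a, b], [b, c]].
    tr, disc = a + c, math.sqrt((a - c) ** 2 + 4 * b * b)
    return (tr - disc) / 2, (tr + disc) / 2

# Build B = A + P with P positive definite, so A is dominated by B in the
# positive definite order; Courant-Fischer then orders every eigenvalue pair.
A = (1.0, 0.5, 2.0)                       # [[1, .5], [.5, 2]]
P = (1.0, 0.0, 0.5)                       # diag(1, 0.5), positive definite
B = tuple(x + y for x, y in zip(A, P))    # B = A + P

for lam_a, lam_b in zip(eigs2(*A), eigs2(*B)):
    assert lam_a <= lam_b
print(eigs2(*A), eigs2(*B))
```

The lemma's sandwich (1 - epsilon)A - C*I <= A_beta <= (1 + epsilon)A + C*I then traps every Airy-beta eigenvalue between shifted, rescaled Airy eigenvalues.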
So this implies that the same inequality holds for the k-th eigenvalue, for every k: exactly the inequality that holds for the operators holds for the k-th eigenvalues. So just this simple fact gives you these asymptotics, for every beta. Okay? This is a fact about the Airy process which, you know, you can prove with determinants for beta equals two, you can prove with Pfaffians for beta equals one, or you can prove it like this. Well, I'll show you a proof and then we'll figure it out. So, okay. I have to tell you what functions you're going to apply this operator to, and for what we did here, I will also have to tell you what the quadratic forms are, right? The quadratic forms have to be figured out. So we define a norm; I call it the L-star norm. Okay, so the star norm squared of f: it's the L2 norm squared of f plus the Airy form of f, which is just ⟨f, Af⟩, okay? And you know how to define this: by integration by parts, it is the integral of f prime squared plus the integral of x times f squared. So maybe write it like that: the L-star norm squared of f is the L2 norm squared of f, plus the L2 norm squared of f prime, plus the L2 norm squared of f times root x. Okay, so that's the L-star norm of f. You can define it for every function; if the derivative doesn't exist, let's say the L-star norm is infinite. Okay, so that's the function space we're going to work in: the ones that have finite L-star norm, okay? And already, in order to work with this operator, we have to be able to compute ⟨f, A_beta f⟩ for f in L-star. How did I put it down? ⟨f, Af⟩, yeah, where A is just the ordinary Airy operator. And you don't take the norm of Af, you take ⟨f, Af⟩; that's why you have this square root, right? Okay, so this is already not a trivial thing, right?
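The L-star norm is easy to compute numerically for a concrete function. The sketch below uses the test function f(x) = x*exp(-x) (an arbitrary choice, but one satisfying the Dirichlet condition f(0) = 0), for which the exact value ||f||_*^2 = 1/4 + 1/4 + 3/8 = 7/8 can be checked by hand:

```python
import math

def lstar_norm_sq(f, df, h=1e-3, x_max=40.0):
    # ||f||_*^2 = ||f||_2^2 + ||f'||_2^2 + ||sqrt(x) f||_2^2 on [0, inf),
    # approximated by a midpoint Riemann sum on [0, x_max]; the tail beyond
    # x_max is negligible for rapidly decaying f.
    total = 0.0
    for i in range(int(x_max / h)):
        x = (i + 0.5) * h
        total += (f(x) ** 2 + df(x) ** 2 + x * f(x) ** 2) * h
    return total

# Test function with f(0) = 0 (Dirichlet) and its derivative.
f = lambda x: x * math.exp(-x)
df = lambda x: (1 - x) * math.exp(-x)

print(lstar_norm_sq(f, df))  # exact value is 7/8 = 0.875
```

The three terms of the sum are exactly the three pieces of the norm: ||f||_2^2 = 1/4, ||f'||_2^2 = 1/4, and ||sqrt(x) f||_2^2 = 3/8.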
Because, I mean, let's see. Sorry, what is your question? Where does L-star live in the history of math? Okay, I can't tell you that now; there are much better people to tell you that. Okay, so the only thing we still need to define, of course, is ⟨f, B′f⟩, right? Because ⟨f, A_beta f⟩ is just ⟨f, Af⟩ plus ⟨f, B′f⟩, okay? So, you know, at this point, you could just say that this is the Paley-Wiener integral of f squared with respect to Brownian motion. But you should also be able to define this for f's that depend on B′. So there should be a way to do it without actually using any stochastic integration; stochastic integration is messy. There is a way to define it, and here is how it goes, okay? You want to understand this as the integral of f squared times B′ dx; that's what it should be equal to. So you could try integration by parts: write it as minus the integral of (f squared) prime times B dx, that is, minus the integral of 2 f f′ B dx. Excuse me? Minus sign, thanks. Okay, so this is now a perfectly fine integral, because B is just a function, f′ exists and f exists, so everything is nice, except that it may not converge. Okay, so you would like to control this somehow, right? In particular, you'd like to control it in terms of the form A. Okay, so that's an exciting thing to do, which we'll do next time. But the main idea, just before I stop, let me show you: you write B as B-bar plus B-tilde, okay? So what is B-bar? The average: B-bar of t is the integral from t to t plus one of B(s) ds. So just the average over an interval of length one. This is going to be a very nice function, because its derivative is B(t+1) minus B(t), which is itself a function, okay? And the other one, what remains after subtracting the average, is going to be small.
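The decomposition B = B-bar + B-tilde can be sketched on a sampled Brownian path; the mesh, horizon, and seed below are arbitrary choices for illustration:

```python
import random

def brownian_decomposition(t_max=5.0, h=0.001, seed=1):
    # Sample a Brownian path B on a grid of mesh h over [0, t_max + 1]
    # (the extra unit is needed for the length-one averaging window).
    rng = random.Random(seed)
    n_total = int((t_max + 1.0) / h)
    B = [0.0]
    for _ in range(n_total):
        B.append(B[-1] + rng.gauss(0.0, 1.0) * h ** 0.5)

    w = int(1.0 / h)              # grid points per unit-length window
    n = int(t_max / h)
    prefix = [0.0]                # prefix sums for fast window integrals
    for x in B:
        prefix.append(prefix[-1] + x)
    # B_bar(t_i) ≈ integral of B over [t_i, t_i + 1] (Riemann sum), and
    # B_tilde = B - B_bar, so the decomposition is exact by construction.
    B_bar = [(prefix[i + w] - prefix[i]) * h for i in range(n)]
    B_tilde = [B[i] - B_bar[i] for i in range(n)]
    return B[:n], B_bar, B_tilde

B, bar, tilde = brownian_decomposition()
print(max(abs(x) for x in bar), max(abs(x) for x in tilde))
```

Printing the two maxima shows the point of the split: B-bar carries the macroscopic size of the path while staying differentiable, and B-tilde, the fluctuation around a unit-window average, stays comparatively small.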
So there is a smooth part and there is a small part, and we will be able to use this separation, together with growth bounds on these pieces, to show that inequality over there. Okay, so that's what I'll do next. Thanks. [In answer to a question:] Yeah, so basically the simplest thing is: you can check that if you have an f which is an eigenfunction of the finite operator, then it has to be small at the beginning, and that comes from how the boundary works for the finite operator. So remember that it actually comes from this very top entry here: if you look at the tridiagonal matrix, you have this chi, and a zero here, okay? Well, not zero, you have this entry, but it's small relative to the others. This is big, big, big, small. If you put a root n here instead, you'll have Neumann conditions. So in the discrete case, and this is a general fact, by the way, about difference equations approximating differential equations, you never have boundary conditions as such, but the way you set the entries at the boundary will actually give you boundary conditions in the limit.