Good, so, before we go into the new material, let me just summarize on two slides what we did yesterday; it is a short summary. First I gave you a little motivation: I tried to worry you with random Schrödinger operators, disordered quantum systems, the Anderson transition. This was an indication that random matrix theory is useful not just in probability theory but also has some relevance in physics. Then let me remind you that the main goal is to prove the local law, and we are using the resolvent method, which means the following, written here for Wigner-type matrices. The local law means that you want to understand the resolvent $G = (H - z)^{-1}$ of the random matrix $H$, where $G_{ii}$ refers to its $i$-th diagonal matrix element. We would like to understand this random quantity, and it turns out to be close to a deterministic number called $m_i$, where closeness means an explicit bound in terms of $N$ and the imaginary part of $z$:

$$ |G_{ii}(z) - m_i(z)| \lesssim \frac{1}{\sqrt{N \operatorname{Im} z}}. $$

Everything is set up in such a way that the relevant regime is when the imaginary part of $z$ is very small; that is the regime where the Stieltjes transform, or the resolvent, carries the really valuable information about the local distribution of the eigenvalues. So $\operatorname{Im} z$ is always very small; $N$, the size of the matrix, is of course always very big; and we also know that $\operatorname{Im} z$ cannot be too small, it has to stay above $1/N$, otherwise the resolvent becomes genuinely fluctuating and you cannot hope for such law-of-large-numbers type results. This bound reflects exactly that, because the control is in terms of $1/\sqrt{N \operatorname{Im} z}$: as long as $\operatorname{Im} z$ is much bigger than $1/N$ the bound is useful, the error term is $o(1)$.

So this is the goal, and now of course we have to figure out what $m_i$ is. It is the solution of our quadratic vector equation, so let me just write it up here:

$$ -\frac{1}{m_i(z)} = z + (S m(z))_i, \qquad i = 1, \dots, N, $$

so it is a vector equation: the unknown is a big vector $m$ with $N$ components, with the side condition that $\operatorname{Im} m_i$ is always positive; and of course $z$, which we usually write as $z = E + \mathrm{i}\eta$, is also in the upper half plane. This is the starting point, and we will see today that this equation has a unique solution under this additional constraint. That unique solution defines the $m_i$, and $G_{ii}$ is close to that $m_i$; that is what the local law says. Once you have that, once you know how the diagonal elements behave, then of course you know how the trace behaves, since it is just the sum of the diagonal elements, normalized. So if you take the average, which I denote by this bracket, $\langle m \rangle = \frac{1}{N}\sum_i m_i$, then this is the Stieltjes transform of the so-called self-consistent density of states, the measure whose pictures you have seen on the various slides; its density is read off from the average $\langle \operatorname{Im} m \rangle$. For example, in the case of Wigner matrices this is just the semicircle, but in general it is something else.
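As a quick illustration (my own numerical sketch, not part of the lecture; the values of $N$ and $\eta$ are arbitrary test choices), one can sample a Wigner matrix and check that the diagonal resolvent entries are close to the semicircle solution of the QVE at the predicted scale $1/\sqrt{N\eta}$:

```python
import numpy as np

# Illustrative sketch: local law for a Wigner matrix. Here S_ij ~ 1/N, so the
# QVE solution is the Stieltjes transform of the semicircle law, the root of
# -1/m = z + m in the upper half plane. N and eta are arbitrary test values.
N = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)          # real symmetric Wigner, Var(H_ij) ~ 1/N

E, eta = 0.5, 0.05                      # z = E + i*eta with 1/N << eta << 1
z = E + 1j * eta
G = np.linalg.inv(H - z * np.eye(N))    # resolvent G = (H - z)^{-1}

m = (-z + np.sqrt(z * z - 4)) / 2       # semicircle solution of -1/m = z + m
if m.imag < 0:                          # select the branch with Im m > 0
    m = (-z - np.sqrt(z * z - 4)) / 2

err = np.max(np.abs(np.diag(G) - m))
print(f"max_i |G_ii - m| = {err:.2e}; scale 1/sqrt(N*eta) = {1/np.sqrt(N*eta):.2e}")
```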
Okay, so this is the connection between the resolvent and the solution of the self-consistent equation, the Dyson equation. The message was that even if you are interested only in the density, in other words only in the average of this imaginary part of $m$, which is just a scalar function, there is no simple closed equation for this quantity. You have to go through this equation of $N$ variables, which we call the vector equation because the unknown object is a vector; you have to solve this vector equation, and only from the solution do you compute its average, which gives you the Stieltjes transform of the measure. But of course the local law carries much more information than just the density itself, and this information is also needed for universality.

Okay, and then, this is still yesterday: we mentioned that there are two steps in the proof of the local law. One of them is what we call the derivation of the corresponding equation, to see how such an equation emerges if you start from the resolvent. Here let me just write little $g_i$ for simplicity for the diagonal matrix element of the resolvent; then we have to prove, and this we discussed last time, that this $g_i$ satisfies an equation reminiscent of our QVE, our quadratic vector equation, but with an error term $d_i$:

$$ -\frac{1}{g_i} = z + (S g)_i + d_i . $$

This error term turns out to be small. It consists of many different terms, the most important of which is controlled by a large deviation estimate, a fluctuation estimate on a quadratic functional of an independent vector of random variables. This we discussed last time, and in the exercise session we discussed how to estimate it; it turns out that this $d_i$, with very high probability, behaves like $1/\sqrt{N\eta}$, exactly the same precision you want to reach here. As I should have mentioned, I said it yesterday, all these bounds are understood with a grain of salt. First of all, these bounds always hold with very high probability, they are not almost sure bounds, because with very small probability the resolvent can behave completely crazily. Also there is a little tolerance here: typically instead of this bound there should be a factor which grows slowly with $N$; for simplicity, in the notes you have $N^\epsilon$ here for any positive $\epsilon$, but you could also have some power of $\log N$. I neglect these subtleties, and I compress all these things into this notation, the little wavy inequality sign, which also includes the high-probability sense. Okay, so back here: the $d$ is this error term coming from the large deviation bound, and it is also understood in that sense, with high probability and with an additional factor of $N^\epsilon$.
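To make step one concrete (again a sketch of my own, not the lecture's code): for a Wigner matrix one can compute the defect $d_i = -1/g_i - z - (Sg)_i$ directly and see that it indeed lives on the scale $1/\sqrt{N\eta}$:

```python
import numpy as np

# Sketch: empirical size of the error term d_i in the perturbed QVE
#   -1/g_i = z + (S g)_i + d_i ,
# for a Wigner matrix, where (S g)_i = <g> since s_ij ~ 1/N.
N = 2000
rng = np.random.default_rng(1)
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)

E, eta = 0.5, 0.05
z = E + 1j * eta
g = np.diag(np.linalg.inv(H - z * np.eye(N)))   # diagonal resolvent entries

d = -1.0 / g - z - g.mean()                     # defect d_i in the equation
print(f"max_i |d_i| = {np.max(np.abs(d)):.2e}; 1/sqrt(N*eta) = {1/np.sqrt(N*eta):.2e}")
```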
Okay, so we have this equation, and it looks very close to our QVE, the deterministic equation. The second step is that you want to compare the solutions of these two equations: you think of that equation as a perturbation of the unperturbed deterministic equation for $m_i$, and then you would like to understand what happens to the solution if you perturb the equation by some small object. It is a random object, but for this second part of the analysis it does not matter whether $d$ is random or not; what matters is that it is a small quantity, a small vector actually. You want to conclude that the equation is stable under small perturbations; in other words, that the difference $g - m$, in some appropriate norm, which in this case is just the maximum norm, is bounded by $d$, by the perturbation. Okay, so that is the goal. Step number one we discussed last time; it relied on the Schur complement formula and also on some large deviation bounds. Step two comes today, when we discuss the stability and various properties of this QVE. So that is more or less what we had last time. Is there any question early in the morning? Yes.

Well, high probability means, for example, let me write up what I mean by this $d_i$. What I write here really means the following in my language: you take the probability that this $d_i$ is bigger than, now you have the $\sqrt{N\eta}$, but as I said I want to afford a big factor here, not too big, but a factor which goes to infinity, so let me just put in $N^\epsilon$. Then the probability that this guy is bigger than $N^\epsilon/\sqrt{N\eta}$ is very small, typically $1/N^D$:

$$ \mathbb{P}\left( |d_i| > \frac{N^\epsilon}{\sqrt{N\eta}} \right) \le \frac{C(\epsilon, D)}{N^D} . $$

This is understood in such a way that it holds for every positive $\epsilon$ and $D$, and there is a constant here if you want to be absolutely precise: for every $\epsilon$ and $D$ there exists a constant $C$, depending on $\epsilon$ and $D$, so that this holds for every $N$. That is what I mean by a high-probability bound. And here of course you should think of $\epsilon$ as tiny, an arbitrarily small number, and $D$ as an arbitrarily big number; that is when the estimate is useful. So this tells you that this probability has an arbitrary polynomial tail. The precise strength of this kind of bound depends on how strong moment conditions you assume on the random variables, on the initial matrix elements; if you assume a bit more, then you can also get some sub-exponential decay here. But this is not so important for us. The $1/N^D$, as a probability bound, is already very, very strong, so everything works with that. Okay, any other question? Okay, good.

So now let me come to the analysis of this QVE. Today we will work with that equation; there will be no randomness today. We already motivated that the main issue is to understand various properties of that equation. Now here is just a remark, which I am not going to pursue here: the equation can actually have an additional term, a vector $a$, which I could insert into it. This is also part of the data, and we can do most of the analysis also if you have an $a$ here. We call it an external source.
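For completeness, the equation with the external source would read as follows (my reconstruction, not written out in the lecture; the sign convention for $a$ is an assumption on my part):

```latex
% QVE with external source a (reconstruction; sign convention assumed):
\[
  -\frac{1}{m_i(z)} \;=\; z - a_i + (S m(z))_i , \qquad i = 1, \dots, N,
\]
% where the vector a is part of the data, coming from the non-centered
% expectations of the matrix entries (e.g. a_i = E h_{ii}).
```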
The role of the $a$ comes in if your original random matrix does not have centered entries. I said that for now we always discuss the case when the matrix elements have zero expectation; but if you do not have zero expectation, you can also include that in our analysis, and this is how you include it. I am not going to discuss it today; I just put it in once. OK. Now the important data is the matrix $S$. Remember that $S$ was given by the variances of the matrix elements, $s_{ij} = \mathbb{E}|h_{ij}|^2$. By its nature, because $H$ is a Hermitian matrix, this $s_{ij}$ is naturally symmetric, so $S = S^t$, of course; this is a standing assumption. The other standing assumption is that the matrix elements $s_{ij}$ are non-negative. Both are obvious in our application, but from now on we will just look at this equation coming out of nowhere; we do not have to remember that it comes from a random matrix, as long as these conditions are satisfied. So we think of $S$ as a symmetric matrix with non-negative entries, and we assume some upper bound on the entries, which is equivalent to an upper bound on the variances.

OK. So now let me introduce a notation here, which you may find confusing, but it is very useful. I do not want to write too many indices, so I will use vector notation. In particular, I would like to write this equation as $-\frac{1}{m} = z + Sm$, and since I cannot reproduce boldface on the blackboard, these are understood as boldface vectors; let me underline them, the underline refers to boldface. So I will write the QVE in this compact form, but of course the compact form should be understood componentwise, as on that side. You know what $Sm$ means, because $m$ is a vector and a matrix acting on a vector is clear. But $1/m$ looks a little bit strange, because $m$ is a vector and taking the reciprocal of a vector is not quite allowed, except that here I define the reciprocal of a vector entrywise. This is just for simplicity. Similarly, more generally, if I write a function of a vector, then I understand the outcome as a vector, defined entrywise; and I will use a similar convention for products, so a few times you will see a multiplication of two vectors, $uv$. This is an ad hoc notation; it is not a dot product, the outcome is a vector, obtained by multiplying the vectors entrywise. And similarly, if I write somewhere that $u \le v$, then it is understood entrywise. So it is just a shorthand notation, and the equation up here is already written in this simplified, shortened form. Now, let me start with discussing a little bit the existence and the uniqueness of the solution of this equation; I will not do it in full detail, I will just, because, yeah? No, no, there is no summation hidden there; I mean this exactly in that way. So in particular, you may not be able to compare any two vectors. This is exactly what it says: every $u_i$ has to be smaller than $v_i$, and if $u_1$ is smaller than $v_1$ but $u_2$ is bigger than $v_2$, then I cannot compare them. No, these are for real numbers, of course. I mean, sorry, you are right, it is a bit misleading: the first conventions, the entrywise reciprocal, functions and products, are for any complex entries; the entrywise inequality of course only makes sense for real vectors. Okay, any other questions? Good. So let me discuss a little the existence and uniqueness for this equation.
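As an aside (my illustration, not from the lecture), this entrywise convention is exactly what numpy's array operations implement, so you can keep it in mind as "array arithmetic":

```python
import numpy as np

# The lecture's vector conventions, in numpy (illustrative example):
# 1/m, functions of vectors, and products u*v are all entrywise.
m = np.array([1 + 2j, -0.5 + 1j, 3 + 0.1j])  # a vector in the upper half plane

recip = 1 / m          # entrywise reciprocal (1/m_i), not a matrix inverse
square = m**2          # entrywise function of a vector
u = np.array([1.0, 2.0, 3.0])
v = np.array([2.0, 1.0, 4.0])
prod = u * v           # entrywise product, NOT the dot product u @ v

# "u <= v" entrywise is only a partial order: here it fails overall,
# because u_1 <= v_1 but u_2 > v_2.
print(np.all(u <= v))  # -> False
```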
This is sort of a folklore theorem, and I am not sure one could find the first mention of it, but let me just say how it goes. The statement concerns this equation, what I called the QVE, always understood with the side condition that $m$ is in the upper half plane componentwise. Sometimes you think of $z$ as a parameter, and of course the solution $m$ depends on this parameter; and sometimes you think of $m$ as a function of $z$, so you solve it not just for a single $z$ but as a function of $z$. Once you establish that there is a unique solution, it makes sense to talk about $m(z)$ as a function of $z$. Okay, the statement is that this equation has a unique solution, and moreover the solution has a representation. It is no wonder that everything is called $m$: it is the Stieltjes transform of somebody. In other words, for every $i$ there is a unique probability measure $\nu_i$ on the real line so that $m_i$ is the Stieltjes transform of that measure:

$$ m_i(z) = \int_{\mathbb{R}} \frac{\nu_i(\mathrm{d}x)}{x - z}, \qquad z \in \mathbb{H}. $$

I mean, eventually we know that this $m_i$ is supposed to approximate a resolvent, and the resolvent itself is a Stieltjes transform of somebody, so it is natural that $m_i$ is also a Stieltjes transform of somebody. We also have a few properties of this measure, which I did not put on this slide; you can find them in the notes. Okay, so how does one prove something like that? A fixed point argument is the easiest, and basically you do it in the most brutal, most natural way: you write this as a fixed point equation, so you take the reciprocal and write the equation in the form

$$ m = -\frac{1}{z + Sm} . $$

Notice that the notation convention is used all the time: this $z + Sm$ is a vector. To be absolutely precise, $Sm$ is a vector and $z$ is just a scalar, but by this notation I mean $z$ times the $(1, 1, \dots, 1)$ vector, I add $z$ to every component; and then I take the reciprocal of that vector, as we discussed, in an entrywise form. Okay, so this object here is an element of $\mathbb{H}^N$, the $N$-fold product of the upper half plane $\mathbb{H}$; this was somewhere, but let me just put it here. So you write this fixed point equation, then you set up the usual fixed point procedure: you define the map $\Phi$, which is exactly the map on the right-hand side, and then the fixed point of that map is the solution. Okay, so now I would like to set up a fixed point argument, and typically there are two things to establish. One of them is that you have to find a compact set which is mapped into itself; that is one ingredient, which I did not write up explicitly, maybe on the next slide: it is a big compact set, appropriately chosen. The more important thing is that you have to find a good metric in which this map is a contraction; that is how a fixed point argument goes. And here is the right metric; do not get scared, it is a fairly natural metric. First of all, you go back to the hyperbolic metric on $\mathbb{H}$, which is written like that somewhere. Anyway, here is the formula: for two points $\zeta$ and $\omega$ in the upper half plane, the distance of $\zeta$ and $\omega$ is given by

$$ D(\zeta, \omega) = \frac{|\zeta - \omega|^2}{(\operatorname{Im}\zeta)(\operatorname{Im}\omega)} . $$

That is the distance.
This is related to the hyperbolic metric, for those who like that. So it is capital $D$ here; something was missing on the slide: $D$ equals two times the hyperbolic cosine of the hyperbolic metric, minus two, that is, $D(\zeta, \omega) = 2\cosh\bigl(d_{\mathrm{hyp}}(\zeta, \omega)\bigr) - 2$, where $d_{\mathrm{hyp}}$ is the standard hyperbolic metric on the upper half plane. So this capital $D$ is basically the hyperbolic metric. This is our metric; one has to check that it really is a nice metric and so on. This is a metric between two elements of the upper half plane, and then you take the maximum over the coordinates, because eventually you want to compare vectors: you have to set up something in $\mathbb{H}^N$, so you want a metric on vectors, and then you take just the supremum of the coordinatewise distances. And that is it. And then here is the set: there is a big set, basically a set of the following type. You want to be a little bit away from the real axis, and you do not want to go out to infinity, so you take a big compact set like that and then you work on it. OK, the precise form is not that important; what matters is that there is such a set and, especially, that there is such a metric. And now you would like to check that this map, which was called $\Phi$ on the previous transparency, is a contraction. This map $\Phi$ is a composition of some simple, standard operations: you act on $u$ by a matrix multiplication, a linear transformation, plus an affine shift, and then you take the reciprocal. You can follow all these operations through on the metric, you can see how the contraction emerges, and then you can prove a contraction of the type $D(\Phi(u), \Phi(v)) < D(u, v)$, in fact with a contraction factor strictly less than one on this set. OK, so this requires a calculation; $\Phi$ is this map here, this guy is $\Phi(m)$. It is not completely trivial, using various properties of the upper half plane, but the operations used here, linear transformations, taking the reciprocal and so on, are the typical transformations which behave very well under the hyperbolic metric. So it is believable, I do not claim that you should see through it immediately, but it is believable that such a contraction works for a map of this type. OK, and then once you establish the contraction you can start from anything, say the constant vector which is $\mathrm{i}$ everywhere, $(\mathrm{i}, \mathrm{i}, \dots, \mathrm{i})$, in the upper half plane, and then you run the iteration, and it ends up at the solution.
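Here is what this iteration looks like in practice (a minimal sketch of my own; the flat random choice of $S$ and the values of $N$ and $z$ are illustrative assumptions, and the distance printed is the lecture's metric $D$, sup'd over coordinates):

```python
import numpy as np

# Sketch: solve the QVE  -1/m = z + S m  by iterating Phi(m) = -1/(z + S m)
# from the constant vector (i, i, ..., i), monitoring the metric
# D(a, b) = |a - b|^2 / (Im a * Im b), taken entrywise and then sup'd over i.
def D_sup(u, v):
    return np.max(np.abs(u - v) ** 2 / (u.imag * v.imag))

N = 500
rng = np.random.default_rng(2)
S = rng.uniform(0.5, 1.5, (N, N)) / N    # symmetric, non-negative, s_ij ~ 1/N
S = (S + S.T) / 2

z = 0.3 + 0.1j                           # eta not too small: fast contraction
m = 1j * np.ones(N)                      # start at (i, ..., i)
for k in range(1000):
    m_new = -1.0 / (z + S @ m)           # reciprocal taken entrywise
    if k % 10 == 0:
        print(k, "D_sup(m_k, m_{k+1}) =", D_sup(m, m_new))
    if np.max(np.abs(m_new - m)) < 1e-12:
        m = m_new
        break
    m = m_new

print("residual:", np.max(np.abs(-1.0 / m - z - S @ m)))
print("min_i Im m_i:", m.imag.min())     # stays positive, as it should
```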
OK, so in that way you get a solution, but you can also think of it, and this $z$ here indicates it a little bit secretly, that you solve it not just for a fixed $z$ in the space $\mathbb{H}^N$ but as a function of $z$: instead of $m$ you think of $m(z)$, so $m$ is a vector and $m(z)$ is now a vector-valued function. You can set up a space and a metric not just for vectors but for vector-valued functions on the upper half plane, and then you have to use a little bit of analytic function theory. You find that these operations preserve analyticity, so when you do the iteration you are not just getting the solution, you also get that this solution is analytic in $z$. It turns out that every iterate is analytic, and there is a standard compactness result for analytic functions which tells you that a sufficiently strong pointwise limit of analytic functions is analytic; so you get an analytic function on the upper half plane. Remember, we want to check not just the uniqueness but also that $m$ is the Stieltjes transform of somebody, and remember that there was a characterization of Stieltjes transforms: something is a Stieltjes transform if it is an analytic function on the upper half plane with values in the upper half plane. That is why we would like to check analyticity. And then there was another condition, a normalization condition, the asymptotics as you go out to infinity along the imaginary axis: any Stieltjes transform has definite behavior in that direction. We will not go through it. In particular, you also get a bonus estimate for the solution: once you establish that it is a Stieltjes transform, you also get that in supremum norm it is bounded by $1/\eta$, since any Stieltjes transform of a probability measure satisfies $|m_i(z)| = \bigl|\int \nu_i(\mathrm{d}x)/(x - z)\bigr| \le 1/\operatorname{Im} z$. So at least we have some a priori upper bound on the solution. That is what you get basically from nowhere, just from standard nonsense. Now the goal is to improve this estimate, because remember that we are interested in the situation where $\eta$ is small, it actually goes to $0$, so an estimate of order $1/\eta$ is not particularly useful for the analysis. We would like to improve it: the goal, possibly under some additional conditions in terms of $S$ of course, is to get bounds which tell you that the solution $m$ is an order-one object. This is what we will work on next; that is the next chapter, that we would like to get bounds on the solution. And immediately, let me introduce the two norms that we will work with. There is an $L^2$ norm and there is the $L^\infty$ norm. The $L^\infty$ norm is clear, it is just the supremum, or maximum, over the entries; and the $L^2$ norm is also the usual $L^2$ norm, except that I normalize everything by $1/N$:

$$ \|u\|_\infty = \max_i |u_i|, \qquad \|u\|_2^2 = \frac{1}{N} \sum_{i=1}^N |u_i|^2 . $$

I like to think of the summation over the indices as a probability measure; in this way the various Hölder inequalities and so on become simpler. OK, so this is the $L^2$ norm, and then we will have three types of bounds. The first bound will be an $L^2$ bound, with some additional condition on $S$, which will be useful in the bulk, meaning the regime where you are strictly away from the edge, well inside the support of $\rho$.
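Since these normalized norms are used throughout, here is the convention in code (my own illustration, not from the lecture):

```python
import numpy as np

# The two norms from the lecture: the sup norm, and the L^2 norm normalized
# by 1/N, i.e. treating the index set {1, ..., N} as a probability space.
def norm_inf(u):
    return np.max(np.abs(u))

def norm_2(u):
    return np.sqrt(np.mean(np.abs(u) ** 2))   # sqrt of (1/N) sum_i |u_i|^2

u = np.array([3.0, -4.0, 0.0, 0.0])
print(norm_inf(u))   # 4.0
print(norm_2(u))     # sqrt((9 + 16) / 4) = 2.5
# With this normalization norm_2(u) <= norm_inf(u) always holds, exactly as
# on a probability space, so Hoelder/Jensen inequalities take their usual form.
```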
OK, so there will be a bound of that type, an $L^2$ bound on the solution. A bound always means that it will not be as bad as $1/\eta$; it will be an order-one bound. In the simplest case this will require a condition on the $s_{ij}$; remember, $s_{ij}$ is the variance of the matrix element. If you are in a mean-field situation, then the typical size of the variance is of order $1/N$, and here we will assume that, but also as a lower bound, not just an upper bound. One can work with weaker conditions as well, but for simplicity I will do that. There will be another type of bound, which is an unconditional $L^2$ bound; unconditional means that it does not use anything about the $S$ matrix apart from the two standing properties, that it is symmetric with non-negative entries. This bound is very useful because it is robust, but it will not work at zero; somehow zero is a special point, zero meaning the spectral parameter $z = 0$. And then there will be an $L^\infty$ bound, which will really be the improvement of the bound in supremum norm, an $L^\infty$ bound of order one, but for that we will need an additional condition, some kind of regularity condition on $S$. So these are the three types of bounds, and on the next slide I will discuss one of them.

So here is a slightly more precise form of this bound, the $L^2$ bound, which is useful in the bulk, or actually, as we will see, also when you are far away from the spectrum altogether. Here I assume that the matrix elements of $S$ are of order $1/N$, with matching lower and upper bounds; let me just write it up once, it really means

$$ \frac{c}{N} \le s_{ij} \le \frac{C}{N} $$

for some fixed positive constants $c, C$. Of course $N$ is our big parameter, so all these constants must be independent of $N$. In particular, if you have such a property, then you can easily see how the operator $S$ acts on positive vectors. Since $S$ has non-negative elements, if $v$ is entrywise positive or non-negative then $Sv$ is also a non-negative vector, and under this condition this property basically tells you that $(Sv)_i$ is comparable with the average of $v$: it is bounded above by $C \langle v \rangle$, with a similar lower bound $c \langle v \rangle$ with the other constant. So $S$ acts like a very nice averaging operator; see the snippet after this paragraph. OK, so this is a useful property, quite a strong property; we can relax it quite substantially, but I do not want to discuss that here. And then, under these conditions, you get an $L^2$ bound: you can improve the bound, there is no $1/\eta$, it is an order-one $L^2$ bound. Moreover, you can get a bound in the other norm; the unmarked norm is always the $L^\infty$ bound, the supremum bound. You get a supremum bound of order one as long as you are either in the bulk, $\rho$ being this density, or far away from the spectrum, meaning far away from the support of $\rho$; the support of $\rho$ is a compact set, which is one of the results I did not state. So there is a regime where you are inside the bulk; that is good, because then you can estimate things in terms of $1/\rho$.
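Coming back to the averaging property just mentioned, it is easy to check numerically as well (an illustrative snippet of my own; the constants $c$, $C$ are arbitrary):

```python
import numpy as np

# Check of the averaging property: if c/N <= s_ij <= C/N, then for any
# non-negative vector v,  c * <v> <= (S v)_i <= C * <v>  for every i.
N, c, C = 1000, 0.5, 1.5
rng = np.random.default_rng(3)
S = rng.uniform(c, C, (N, N)) / N
S = (S + S.T) / 2                      # symmetric, entries still in [c/N, C/N]

v = rng.exponential(size=N)            # an arbitrary non-negative test vector
Sv = S @ v
avg = v.mean()                         # <v> = (1/N) sum_j v_j

print(c * avg <= Sv.min(), Sv.max() <= C * avg)   # -> True True
```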
Then there is the regime where you are far away from the bulk, the outside regime, which is also good: there you can use the distance to the support. And then there is always the most critical regime, what happens near the edge, and this regime I am not going to discuss; that is a long story. It is actually the analysis you need to do if you really want to get those nice pictures; I said that we have an analysis, an understanding of what kind of singularities the $\rho$ can have, and for that one has to analyze everything near the edge. Here, everything will basically be in the bulk. In that situation you will have an $L^\infty$ bound on the solution, and you will also find that the imaginary part of $m$ is an important quantity, the quantity which gives you the density, and that it is comparable with $\rho$; here $\rho$ is the average of the imaginary part of $m$, and the comparability is entrywise. As a lower bound this is very good: $\operatorname{Im} m_i \gtrsim \rho$. There is also an upper bound, but it needs an additional factor, which is the $L^\infty$ norm of $m$. So the lower bound is very good, and the upper bound is also good as long as you are in the bulk, as long as you have $m$ bounded by order one; but once you start going near the edge, all these things start deteriorating, especially this bound, and then you have to…
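To indicate where such a comparison comes from (a sketch of the mechanism, my reconstruction under the flatness condition $s_{ij} \asymp 1/N$; the lecture did not spell this out), take imaginary parts in the QVE:

```latex
% Taking the imaginary part of  -1/m_i = z + (Sm)_i  and using
% Im(-1/m_i) = Im m_i / |m_i|^2  gives the exact identity
\[
  \operatorname{Im} m_i \;=\; |m_i|^2 \,\bigl( \eta + (S \operatorname{Im} m)_i \bigr).
\]
% If s_{ij} is comparable to 1/N, then (S Im m)_i is comparable to the average
% <Im m>, i.e. to rho; so Im m_i is comparable to |m_i|^2 (eta + rho), which
% yields a lower bound by rho once |m_i| is bounded below, and an upper bound
% carrying extra factors of the L-infinity norm of m.
```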