The role of the notation rho: it is used for two slightly different things, which are actually the same object. On the real line, rho is just a probability measure, the one I draw in the pictures. But sometimes rho carries an argument z in the upper half plane, and then it means the harmonic extension of that measure (up to the factor pi, this is the average <Im m(z)>). So maybe sometimes you get confused about rho: sometimes it is a real object, a measure on the real line, and sometimes, with the argument up there, it is just the harmonic extension. Okay, so this is the statement, and let me just show how one proves it. The proofs are very simple; it is more about setting up the right conditions. So let us write it up. The key equation is just to look at the original equation, -1/m = z + Sm, and take its imaginary part; it turns out that the imaginary part of this equation carries the useful information. For a complex number m, the imaginary part of -1/m is Im m/|m|^2, so the equation becomes

Im m_i / |m_i|^2 = eta + (S Im m)_i.

Every term here is real and nonnegative: eta = Im z is positive, the matrix elements of S are nonnegative, and Im m is positive because each m_i is a Stieltjes transform. So both sides are positive, and you can read this identity in both directions, to get bounds from below and from above.
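To make the key equation concrete, here is a small numerical sketch (my own illustration, not part of the lecture): I pick a random symmetric variance profile S with positive entries, solve the equation -1/m = z + Sm by a damped fixed-point iteration, and check the imaginary-part identity Im m/|m|^2 = eta + S Im m componentwise. The size, the profile, and the iteration scheme are all arbitrary choices of mine.

```python
import numpy as np

# Hypothetical setup (my choice): symmetric variance profile with entries in [0.5, 1.5].
N = 200
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 1.5, (N, N))
S = (A + A.T) / 2                  # symmetric, strictly positive entries

z = 2.0 + 1.0j                     # spectral parameter in the upper half-plane
eta = z.imag

def S_op(v):
    # Normalized action (S v)_i = (1/N) sum_j s_ij v_j, matching normalized averages.
    return S @ v / N

# Damped fixed-point iteration for the equation  -1/m = z + S m.
m = 1j * np.ones(N)
for _ in range(2000):
    m = 0.5 * m + 0.5 * (-1.0 / (z + S_op(m)))

# Key identity: Im(-1/m) = Im m/|m|^2 must equal eta + (S Im m), all terms real and >= 0.
lhs = m.imag / np.abs(m) ** 2
rhs = eta + S_op(m.imag)
resid = np.max(np.abs(lhs - rhs))
```

Once the iteration has converged, the residual of the identity is at machine-precision level, and every component of Im m stays positive, as the Stieltjes-transform picture predicts.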
The point of taking the imaginary part is exactly this positivity: every quantity in the identity is nonnegative, so each estimate you read off goes in a definite direction. Now recall the lower bound on the variance profile, that the entries s_ij are bounded below by some constant c > 0. Here I am neglecting constants; all the constants, if you wish, are included in this notation. So you have that, and now you multiply through with |m|^2. From this you get that Im m is bigger than |m|^2 times S Im m, componentwise, and by the lower bound on S, (S Im m)_i is at least c times the average <Im m>. So Im m_i >= c |m_i|^2 <Im m>. Now average over the components: <Im m> >= c <|m|^2> <Im m>, and the average of |m|^2 is exactly the normalized 2-norm squared. So you get that; now you simplify with <Im m>, and you get an upper bound on the 2-norm of the solution, ||m||_2^2 <= 1/c. So this is the first thing: this is how you improve the original 1/eta bound to an order-one bound, but for the moment not in the L-infinity norm, which is a stronger norm; for the moment it is only in the L2 norm. Okay, so that is the first thing you get. Now how to get the other bound, which is an L-infinity bound, but with 1/rho in it. This you also get from the same equation, basically, but you read it in the other direction: the imaginary part of m is of course always smaller than |m|, since the imaginary part of any complex number is smaller than its absolute value.
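Before going on to the L-infinity bound, here is a quick numerical check of the L2 bound just derived (again my own sketch, with an arbitrary profile): with entries s_ij >= c, both the componentwise step Im m_i >= c |m_i|^2 <Im m> and the averaged consequence ||m||_2^2 <= 1/c can be verified directly on a numerical solution.

```python
import numpy as np

# My illustrative setup: entries of S lie in [c, 1.5] with lower bound c = 0.5.
N, c = 200, 0.5
rng = np.random.default_rng(0)
A = rng.uniform(c, 1.5, (N, N))
S = (A + A.T) / 2

z = 2.0 + 1.0j
m = 1j * np.ones(N)
for _ in range(2000):                       # damped fixed-point iteration for -1/m = z + Sm
    m = 0.5 * m + 0.5 * (-1.0 / (z + S @ m / N))

avg_im = np.mean(m.imag)                    # <Im m>, the normalized average
norm2_sq = np.mean(np.abs(m) ** 2)          # normalized L2 norm squared

# Componentwise step:  Im m_i >= c |m_i|^2 <Im m>   (uses only s_ij >= c)
comp_ok = np.all(m.imag >= c * np.abs(m) ** 2 * avg_im - 1e-12)
```

The averaged bound ||m||_2^2 <= 1/c then holds with room to spare at this spectral parameter.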
So now you can cancel one factor of |m|, and you get that |m_i| is bounded by 1/(c <Im m>); that is the other bound. So this is the bound I advertised, that you can have a bound in terms of 1/rho. But you can also have an alternative bound, that m is always bounded by 1 over the distance to the support, because m_i is a Stieltjes transform, and it is a basic property of Stieltjes transforms, following directly from the integral representation, that away from the support of the measure the size of the Stieltjes transform is at most 1 over the distance to the support. Okay, so this is the alternative bound. And finally you can also get a bound on 1/m. So far we have upper bounds; now, if you wish, a lower bound on m, which you formulate as an upper bound on 1/|m|, just by using the equation and taking absolute values: 1/|m_i| = |z + (Sm)_i|. For simplicity we do this only when z is in a bounded set. Then you estimate the size of Sm, but using the upper bound on the entries of S, what you pick up is first the L1 norm of m; this is what comes out. But then, by the Hölder inequality, you can estimate the L1 norm by the L2 norm, and the L2 norm was already bounded. Okay, and once you have an upper bound on 1/|m|, you again go back to this basic equation, and you see that now you have a bound on 1/|m|, even an upper bound on 1/|m|^2, so you can read the inequality from the other side: the average <Im m> is bounded from above by Im m_i times 1/(c |m_i|^2), and since 1/|m_i| is already bounded you can forget about it.
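These two further bounds, the L-infinity bound |m_i| <= 1/(c <Im m>) and the lower bound on m in the form 1/|m_i| <= |z| + C ||m||_1 <= |z| + C ||m||_2, can also be checked numerically. This is my own sketch with arbitrary parameters, not a computation from the lecture.

```python
import numpy as np

# My illustrative setup: profile entries in [c, C]; z in a bounded set.
N, c, C = 200, 0.5, 1.5
rng = np.random.default_rng(0)
A = rng.uniform(c, C, (N, N))
S = (A + A.T) / 2

z = 2.0 + 1.0j
m = 1j * np.ones(N)
for _ in range(2000):                       # damped iteration for -1/m = z + Sm
    m = 0.5 * m + 0.5 * (-1.0 / (z + S @ m / N))

avg_im = np.mean(m.imag)                    # harmonic extension of rho, up to the factor pi
norm1 = np.mean(np.abs(m))                  # normalized L1 norm
norm2 = np.sqrt(np.mean(np.abs(m) ** 2))    # normalized L2 norm

# L-infinity bound in terms of 1/rho:  |m_i| <= 1/(c <Im m>)
linf_ok = np.max(np.abs(m)) <= 1 / (c * avg_im) + 1e-9
# Lower bound on m as an upper bound on 1/|m|:  1/|m_i| = |z + (Sm)_i| <= |z| + C ||m||_1
inv_ok = np.max(1 / np.abs(m)) <= abs(z) + C * norm1 + 1e-9
```

The last assertion below is the Hölder (Cauchy-Schwarz) step: the normalized L1 norm is dominated by the normalized L2 norm.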
Okay, so this gives a lower bound on the components of Im m in terms of the average. The upper bound is similarly easy: you read the same equation in the other direction. You want to estimate Im m_i from above, so you estimate the right-hand side from above: S applied to Im m is basically the average, and then you get the upper bound. Okay, you don't have to follow every detail; this just shows that these are relatively simple operations, and everything goes back to the imaginary part of the original equation. Okay, so now let me introduce something more interesting, which is actually a fundamental object in our analysis; we call it the saturated self-energy operator F. It is an operator, a matrix. In order to see where it comes from, let me rewrite the imaginary-part equation in a slightly different form; I am just rewriting it, but it is a useful rewriting. Take Im m/|m|^2 = eta + S Im m and pull one factor of |m| to the other side:

Im m_i / |m_i| = eta |m_i| + |m_i| (S Im m)_i.

The eta term is not important. And now, inside the argument of S, I want to smuggle in an |m|: I write (S Im m)_i as S acting on the vector with components |m_j| times Im m_j/|m_j|. So now I would like to view this as an equation for the quantity Im m/|m|. On the left-hand side you see that quantity directly; this eta term doesn't matter, because remember that I am interested in the tiny-eta regime, and I wish I could forget about it. But here you see that there is a new linear operator acting on the vector Im m/|m|: the equation reads u = eta |m| + F u, where u_i = Im m_i/|m_i|, and if you wish, this is the definition of F. So, written out more explicitly, F is a
linear map which acts on any vector in the following way: you first multiply the vector entrywise by |m|, then you let S act on it, and then you multiply the result by |m| again. So this is the more formal definition of the operator F, the matrix F; written out in coordinates it is F_ij = |m_i| s_ij |m_j| (and actually the 1-over written there on the slide is wrong). So that is what we have. Now the point is that we would like to play with this F operator, this F matrix, and it has a few very nice properties. First of all, it is a symmetric matrix. That is easy to see: if you take the usual scalar product, the quadratic form looks symmetric, S is symmetric, and you cannot distinguish between the two factors of |m| on the two sides. So F is symmetric. It also has nonnegative matrix elements, because S has this property and |m| has this property. So then you can use the Perron-Frobenius theorem, which says the following: in such a situation there is an eigenvector, call it f, which picks up the maximal eigenvalue in absolute value, so the eigenvalue corresponding to this eigenvector is just the norm of F. And under some irreducibility condition, which in our case actually follows from the lower bound on S, this eigenvector is unique. Let me just take the simpler situation where F_ij is strictly positive; then there is a unique such eigenvector, it has positive components, and Ff = ||F|| f. So that is what Perron-Frobenius gives us. Now why is this good? Go back to our basic equation, rather to this rewritten equation u = eta |m| + Fu, and scalar multiply it by f. Let me do this calculation. Everything here is positive, so I don't have to worry about complex conjugates. Taking the scalar product with f, I get <f, u> = eta <f, |m|> + <f, Fu>. But now F
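The definition of F and the rewritten equation u = eta |m| + Fu can be verified numerically. This is my own sketch (arbitrary profile and parameters): build F_ij = |m_i| s_ij |m_j| from a numerical solution and check symmetry, positivity, and the identity.

```python
import numpy as np

# My illustrative setup; F_ij = |m_i| s_ij |m_j|, with the 1/N normalization of S included.
N = 200
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 1.5, (N, N))
S = (A + A.T) / 2

z = 2.0 + 1.0j
eta = z.imag
m = 1j * np.ones(N)
for _ in range(2000):                       # damped iteration for -1/m = z + Sm
    m = 0.5 * m + 0.5 * (-1.0 / (z + S @ m / N))

absm = np.abs(m)
F = absm[:, None] * S * absm[None, :] / N   # multiply by |m|, apply S, multiply by |m|
u = m.imag / absm                           # the vector u_i = Im m_i / |m_i|

# The rewritten imaginary-part equation:  u = eta |m| + F u
resid = np.max(np.abs(u - (eta * absm + F @ u)))
```

By construction F inherits symmetry from S and strict positivity of its entries, which is exactly what the Perron-Frobenius argument below needs.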
is positive, F is symmetric, so I can put it on the other side, and then F hits its own eigenvector: <f, Fu> = <Ff, u> = ||F|| <f, u>. So this term turns out to be just the norm of F times <f, u>, a simple formula. But now you see that the same quantity appears on the two sides, so from this identity you can express the norm of F; we go to the next step. All these things are positive numbers, so all the divisions are fine, and you express

||F|| = 1 - eta <f, |m|> / <f, u>,

one minus something positive; let me not write it out more precisely. Most importantly, this is smaller than one. So using this Perron-Frobenius argument you get a bound on the F operator, and this bound is one. The S operator had some constants in its assumptions, but those constants play no role here; the only thing I used was the irreducibility, that the s_ij are not zero, and even that is easy to remove. The bound is independent of the input, and this is what makes it a very useful bound. You see, the F operator is a funny thing: it consists of two parts. There is the matrix S, which is our original input, and there is m, which is a complicated object; to get it you have to solve the equation. S could be very large or very small, and similarly the solution m could be very large or very small; everything can happen. But this particular combination of m and S, if you write it in this form, is bounded; it is always bounded by one. So that is a useful bound, and once you have this norm bound, you can easily prove an unconditional bound on the L2 norm. For that you go back to the equation and simply express m from it, which is not very hard; let us write it up
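The Perron-Frobenius step can be tested directly (my own numerical sketch): diagonalize the symmetric matrix F, take the top eigenvector, and compare the top eigenvalue with the exact expression 1 - eta <f, |m|> / <f, u> obtained by pairing u = eta|m| + Fu with f.

```python
import numpy as np

# My illustrative check of the Perron-Frobenius bound ||F|| < 1.
N = 200
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 1.5, (N, N))
S = (A + A.T) / 2

z = 2.0 + 1.0j
eta = z.imag
m = 1j * np.ones(N)
for _ in range(2000):                       # damped iteration for -1/m = z + Sm
    m = 0.5 * m + 0.5 * (-1.0 / (z + S @ m / N))

absm = np.abs(m)
F = absm[:, None] * S * absm[None, :] / N
u = m.imag / absm

evals, evecs = np.linalg.eigh(F)            # F symmetric: ||F|| is the top eigenvalue
lam = evals[-1]
f = np.abs(evecs[:, -1])                    # Perron-Frobenius eigenvector, entries > 0

# Pairing u = eta|m| + Fu with f and using Ff = ||F|| f gives ||F|| = 1 - eta <f,|m|>/<f,u>.
lam_from_formula = 1 - eta * np.dot(f, absm) / np.dot(f, u)
```

Since F has strictly positive entries, the top eigenvector has entries of one sign, so taking absolute values recovers the positive Perron-Frobenius eigenvector.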
here. So from the QVE you can express m: just rearrange the equation. Multiply it through by m, so that you get -1 = zm + m(Sm), and then divide through by z and express m:

m = -(1/z) (1 + m (Sm)).

Again I am writing everything in shorthand here, but you have to read it properly: these are vectors, so the vector m is acted upon by the matrix S, you get a new vector, and then you multiply that vector entrywise by m again. And I want to estimate the L2 norm of this. By the triangle inequality, the L2 norm of m is bounded by 1/|z| times the L2 norm of the constant vector 1 plus the L2 norm of m(Sm). The vector 1 has L2 norm 1, because remember that my norms are normalized; that is why it was good to put the 1/N into the L2 norm. And for the m(Sm) term I smuggle in absolute values. Now what is this m(Sm)? It is a vector again, but it is very reminiscent of the F operator: entrywise, |m(Sm)| is at most |m| S |m|, and the vector |m| S |m| is exactly what you get from F if you plug in the constant vector (1, 1, ..., 1). So this term here is just F acting on the vector 1, in L2 norm; I forgot the subscript 2 here, this is an L2 norm. And this is smaller than 1, because F was bounded by 1 in the L2 operator norm, the usual matrix norm, and the vector 1 has L2 norm 1. So in this way you get the uniform bound 2/|z| for the 2-norm. And again, the point here is that this bound did not use any quantitative properties of the S operator, no upper bound; it is just 2/|z|, all the time. And of course it is a good bound as long as
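The rearrangement and the resulting unconditional bound ||m||_2 <= 2/|z| can be checked on the same kind of numerical solution (my own sketch, arbitrary profile):

```python
import numpy as np

# My illustrative check of the unconditional bound ||m||_2 <= 2/|z|.
N = 200
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 1.5, (N, N))
S = (A + A.T) / 2

z = 2.0 + 1.0j
m = 1j * np.ones(N)
for _ in range(2000):                       # damped iteration for -1/m = z + Sm
    m = 0.5 * m + 0.5 * (-1.0 / (z + S @ m / N))

# Rearranged QVE:  m = -(1/z)(1 + m*(Sm)), entrywise product with the vector Sm.
rearranged = -(1 + m * (S @ m / N)) / z

absm = np.abs(m)
F = absm[:, None] * S * absm[None, :] / N
# |m(Sm)| <= |m| S |m| = F applied to the constant vector 1; its normalized L2 norm is < 1.
F_one_norm = np.sqrt(np.mean((F @ np.ones(N)) ** 2))
norm2 = np.sqrt(np.mean(np.abs(m) ** 2))
```

The three assertions below mirror the three steps of the argument: the rearranged equation holds, F applied to 1 has norm at most 1, and the 2-norm obeys 2/|z|.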
you are away from 0. Exactly at 0 this bound is not very useful, but there is a good reason for that: if you don't assume anything about S, as here, then you cannot hope that the solution remains bounded at 0. For example, if S is not there at all, if S is identically 0, then the solution is of course m = -1/z, and that blows up. So there is some truth in this blow-up. Okay, so this was the second thing; let me just quickly do the third type of bound. Remember I said there were three bounds: one of them was an L2 bound, useful in the bulk, estimated in terms of 1/rho; then there was this unconditional L2 bound, which required nothing about S but blows up at 0. These are all L2 bounds, but at the end of the day I need an L-infinity bound; I want to improve these bounds to L-infinity, and on the right-hand side I would like to see something of order 1. In order to do that you need some extra condition, and again the condition is not optimal, but some kind of condition of this type is needed: I assume that S is a piecewise Hölder-1/2 continuous function. The simplest way to express this (you don't strictly need it, but it is the easiest way to present it) is to assume that the s_ij come from a profile. So S is a big N by N matrix, but you can imagine that there is a function S(x,y) of the continuum variables on the unit square [0,1] x [0,1], and s_ij is just the value of this function at the grid points of the 1/N grid inside the square. You can imagine it that way, and if you require that, then you know very precisely what Hölder continuity means: the profile function should be 1/2-Hölder continuous, piecewise. More precisely, it is enough for me to go back to the pictures of that type: the S matrix was represented by blocks, squares and rectangles, and there
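To make the profile picture concrete, here is a minimal construction sketch (my own, with arbitrary block values): a block-constant profile function on the unit square, sampled on the 1/N grid to produce the matrix of variances.

```python
import numpy as np

# My illustrative 2x2 block profile on the unit square; the block values are arbitrary.
# The matrix of block values is symmetric so that the sampled S is symmetric.
block_values = np.array([[1.0, 3.0],
                         [3.0, 0.5]])

def profile(x, y):
    # Piecewise-constant profile: constant on each quarter of the unit square,
    # with jumps across x = 1/2 and y = 1/2 (allowed for piecewise Holder continuity).
    return block_values[int(x >= 0.5), int(y >= 0.5)]

N = 100
S = np.array([[profile(i / N, j / N) for j in range(N)] for i in range(N)])
```

Within each block the profile is constant, hence trivially 1/2-Hölder continuous; the only irregularity is the jump at the block boundary, which is exactly what "piecewise" allows.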
were certain numbers in the blocks, and that is the typical situation of a piecewise Hölder continuous variance profile: it is constant on these blocks, but then you have a big jump, and this is all allowed here. Now under this condition you have the bound I wanted: an L-infinity bound of order 1. Let me not go through the proof in full detail; it is done in the following way. You already have an L2 bound, and if you want an L-infinity bound on top of an L2 bound, you want to exclude the catastrophe in which some coordinates (not too many, but some) are very big; that would be the situation where the L-infinity norm is big while the L2 norm is not. To exclude that, you basically want to prove that the components of the vector m do not differ too much from each other. So you write up the equation for two different indices i and j, and look at the difference between m_i and m_j; you do a little calculation, expressing 1/m from the equation and subtracting the i and j versions, and then you get something where the difference of the S matrix at two different index pairs, s_ik and s_jk, appears explicitly, and then you can start using the Hölder condition. So let me not go through it fully; in that way we get an L-infinity bound as well, under this additional condition. Now let me come to the next topic, where you will see the real use of this F operator: the stability of the equation. So far what we have achieved are various useful bounds on the solution, and now we want to study stability. Stability is needed for many different things: of course, as I explained, it is needed for the local law, when you compare the equation for the resolvent elements with the solution, but it is actually
also needed to establish these pictures. I mean, these pictures are established in such a way that you study this equation as you move z, the spectral parameter, a little bit; the various Hölder continuity statements and so on are proved this way, and this kind of analysis is also a stability analysis. You can view it like this: I write up the same equation for z prime, where z prime is close to z, and then I view the new equation as a small perturbation of the old one. That is stability. The key point is that everything has to be uniform in eta as eta goes to zero. All the things I am discussing here would be trivial if we allowed bounds with powers of 1/eta, but we cannot, because we want to prove a local law, and for that we have to go down with eta to zero. Okay, so here is the explicit theorem about regularity, as I mentioned already before. The more precise statement is this: so far, from the general theory, we have seen that each solution component m_i is the Stieltjes transform of some measure. But now you can actually get much more: you can prove that this generating measure is not just a measure but is absolutely continuous with respect to the Lebesgue measure, and its density is even Hölder-1/3 continuous. And we know much more: we know that the support of all these measures is independent of i (i is the component index), and that it consists of finitely many intervals. So it really looks the way I have been drawing it; it is not a completely crazy thing. And we also know real analyticity: the curves you see here, for example the semicircle and all the other curves, are all real analytic, of course away from the points where the density goes down to zero. Okay, and let me just show how one proves this using the stability operator; this is what introduces the stability operator of the equation. The easiest way: this is the equation, and I want to see how it changes with z, so I differentiate the equation with respect to z. That is easy, and you get
this guy, and now I try to solve this equation for the derivative of m, and that is also easy, because everything is linear here. Differentiating -1/m = z + Sm gives (d_z m)/m^2 = 1 + S d_z m, so the derivative of m with respect to z is m^2 acted upon by the inverse of the operator 1 - m^2 S:

d_z m = (1 - m^2 S)^{-1} m^2.

In other words, you can compute the derivative (and if you want regularity, you should control the derivative); you can estimate it if you know m^2 and if you can invert this operator, this matrix 1 - m^2 S. Later on, probably not today, we will show that the inverse of this operator is bounded, and the bound is expressed in terms of 1 over the imaginary part of m, which is the density, or rather the harmonic extension of the density; actually it comes with a square. But this shows that the stability operator, that the equation, is stable as long as you are not at the edge: once you are away from the edge, the imaginary part of m is strictly separated away from 0; once you start approaching the edge, this bound blows up and then you have to do something else, but this I am not going to discuss. Once you know that, let me just do everything on the back of the envelope: because m is analytic, using this bound on the inverse operator and using the equation, you can easily get a differential inequality for the derivative of the imaginary part of m in terms of 1 over the imaginary part of m squared; so a simple ODE of the type f prime bounded by 1/f^2, and then you know that its solution is Hölder-1/3 continuous. Okay, let me jump over the rest, because I want to get further. Let me check; yes, for what I wrote here I still need a definition, so let me just recall what a spectral gap is. The spectral gap of a Hermitian matrix, in our context, is the distance between the largest eigenvalue and the rest of the spectrum. Here is the picture: if these are the eigenvalues
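As a sanity check on this derivative formula (my own numerical sketch, arbitrary profile and parameters), one can compare d_z m = (1 - m^2 S)^{-1} m^2 with a central finite difference of numerically computed solutions at nearby spectral parameters:

```python
import numpy as np

# My illustrative finite-difference check of  dm/dz = (1 - m^2 S)^{-1} m^2.
N = 100
rng = np.random.default_rng(1)
A = rng.uniform(0.5, 1.5, (N, N))
S = (A + A.T) / 2

def solve_qve(z, iters=3000):
    # Damped fixed-point iteration for -1/m = z + Sm (normalized S action).
    m = 1j * np.ones(N)
    for _ in range(iters):
        m = 0.5 * m + 0.5 * (-1.0 / (z + S @ m / N))
    return m

z = 2.0 + 1.0j
m = solve_qve(z)

# Stability operator 1 - m^2 S as a matrix (m^2 acts by entrywise multiplication).
B = np.eye(N) - (m ** 2)[:, None] * S / N
dm_formula = np.linalg.solve(B, m ** 2)

h = 1e-5                                    # central finite difference in z
dm_fd = (solve_qve(z + h) - solve_qve(z - h)) / (2 * h)
rel_err = np.max(np.abs(dm_formula - dm_fd)) / np.max(np.abs(dm_formula))
```

At this z, well inside the upper half-plane, the stability operator is comfortably invertible, so the linear-response formula matches the finite difference to high accuracy.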
of the Hermitian matrix T, and this is the largest one, then you look at the gap: the gap is the distance between the largest eigenvalue and the next largest, where largest is understood in terms of absolute value. There is a more precise definition; let me jump through it. So then one can prove that this F operator has a spectral gap. I will do this more carefully next time; let me just flash up this picture, and then we will continue from there. The key point is that the stability operator is 1 - m^2 S, which we have to estimate from below. This you can rewrite in terms of the F operator; F is our good operator, which is symmetric, but here it appears twisted, rotated: the operator 1 - m^2 S turns out to be, up to conjugation, 1 minus a phase factor times F. And then the bound on this operator follows basically from this picture. F, as a symmetric operator, has a spectrum of this type: it has a largest eigenvalue, coming from Perron-Frobenius, and then there is a gap between this largest eigenvalue and the rest of the spectrum of F. But then you would like to understand 1 minus F times a rotation, and this rotation, if you wish, in this naive picture, rotates the spectrum of F out of the real line; then the distance between 1 and the rotated spectrum of F becomes large, determined by this rotation angle, and the rotation angle is directly related to the imaginary part of m. So this is how we get the result, but I will review it next time. Okay.
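The algebra behind "1 - m^2 S is 1 minus a phase factor times F, up to conjugation" can be checked directly; this is my own sketch of that identity. Writing m_i = |m_i| e^{i phi_i}, the matrix of m^2 S is conjugate, via the diagonal matrix diag(|m|), to diag(e^{2 i phi}) F, so the two have the same spectrum, and the invertibility of 1 - m^2 S is governed by the distance from 1 to the rotated spectrum of F.

```python
import numpy as np

# My illustrative check that m^2 S is similar to (phase) * F via conjugation by diag(|m|).
N = 100
rng = np.random.default_rng(2)
A = rng.uniform(0.5, 1.5, (N, N))
S = (A + A.T) / 2

z = 1.0 + 0.5j
m = 1j * np.ones(N)
for _ in range(3000):                       # damped iteration for -1/m = z + Sm
    m = 0.5 * m + 0.5 * (-1.0 / (z + S @ m / N))

absm = np.abs(m)
phase = (m / absm) ** 2                     # e^{2 i phi}  where  m = |m| e^{i phi}
F = absm[:, None] * S * absm[None, :] / N   # symmetric saturated self-energy matrix
M2S = (m ** 2)[:, None] * S / N             # matrix of the operator m^2 S

# Entrywise:  (1/|m_i|) (m^2 S)_ij |m_j| = e^{2 i phi_i} |m_i| s_ij |m_j| / N
conjugated = (1 / absm)[:, None] * M2S * absm[None, :]
rotated = phase[:, None] * F
```

Since conjugation by a diagonal matrix preserves eigenvalues, the naive picture of "the spectrum of F rotated by the phases" is exact at the level of the spectrum, even though diag(e^{2 i phi}) F is no longer symmetric.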