So, this was the basic equation from last time: −1/m(z) = z + (Sm)(z), where of course this m, as we discussed last time, is really a whole vector, an n-vector of complex numbers, all of them in the upper half-plane; H was the upper half-plane. The spectral parameter z is a complex number, also in the upper half-plane. So this was the basic equation, and for the solution of that equation, everything was under the condition that the imaginary part of m is positive. The reason why we studied it, the relation to random matrices, is that G_ii, the diagonal matrix elements of the resolvent, are supposed to be close to the i-th component of the solution of this equation. So this is the reason why we studied that equation.
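The equation just described can be sketched numerically. The following is a minimal illustration with made-up data and our own names (not the lecture's): a damped fixed-point iteration for −1/m_i(z) = z + (Sm)_i, started in the upper half-plane.

```python
import numpy as np

# A minimal sketch (illustrative data, our names): solve the quadratic vector
# equation  -1/m_i(z) = z + (S m)_i  by a damped fixed-point iteration.
# S is a symmetric matrix with nonnegative entries.

def solve_qve(S, z, tol=1e-13, max_iter=10_000):
    m = 1j * np.ones(S.shape[0])             # start in the upper half-plane
    for _ in range(max_iter):
        m_new = 0.5 * m - 0.5 / (z + S @ m)  # damped iteration step
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = m_new
    return m

n = 50
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 1.5, (n, n))
S = (A + A.T) / (2 * n)                      # symmetric, nonnegative, row sums O(1)
z = 0.3 + 0.05j
m = solve_qve(S, z)
print(np.all(m.imag > 0))                    # the solution stays in H
```

The damping (averaging the old and the new iterate) keeps the iteration inside the upper half-plane, since both terms lie in H and H is convex, and it helps convergence when eta is small.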
Okay, and what we did last time: I showed you that this equation has a unique solution, and the solution is analytic in the parameter z. Then there were various bounds; we wanted to bound the solution, especially in the regime where eta, the imaginary part of z, goes to zero. You should think of eta as positive but quite small; that is the critical regime, and all the bounds we discussed last time become really meaningful when eta is very small. We found that there are l2 bounds on the solution; the solution m is a big vector, there are two natural norms on this space, the l2 norm and the l-infinity norm, and we had bounds in both senses under certain conditions on S. A key ingredient in these proofs was to introduce the so-called self-energy operator F, right up here. I call it an operator, though here it is actually a matrix; I always call it an operator because we have the whole theory not just in the discrete situation but also in the continuum situation. In our case F is the matrix |m| S |m|, where m is the solution, but it has to be read properly: F acts on a vector, so let me put a dot here. You take the input vector and multiply it entrywise by |m| (remember m is a whole vector, and we multiply vectors entrywise, that was the convention last time), then you let S act on it, which is a matrix multiplication, and you get a new vector; then you multiply entrywise by |m| again. That is how you should read this line.
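The entrywise reading of F can be spelled out in a few lines; here m is an arbitrary illustrative vector (not the actual solution), just to show that "multiply entrywise by |m|, apply S, multiply by |m| again" is the same as the matrix with entries F_ij = |m_i| s_ij |m_j|.

```python
import numpy as np

# The self-energy acts as  F(w) = |m| * (S (|m| * w))  with entrywise products;
# equivalently it is the matrix with entries F_ij = |m_i| s_ij |m_j|.
# Here m is an arbitrary illustrative vector, not the actual QVE solution.

rng = np.random.default_rng(1)
n = 6
A = rng.uniform(0.0, 1.0, (n, n))
S = (A + A.T) / 2                          # symmetric, nonnegative entries
m = rng.standard_normal(n) + 1j * rng.uniform(0.1, 1.0, n)
absm = np.abs(m)

F = S * np.outer(absm, absm)               # F_ij = |m_i| s_ij |m_j|

w = rng.standard_normal(n)
entrywise = absm * (S @ (absm * w))        # multiply by |m|, apply S, multiply by |m|
print(np.allclose(F @ w, entrywise))       # -> True
```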
So that was the self-energy operator, and it had various properties. Most importantly, it is a symmetric operator, a symmetric matrix, which is obvious. It also has nonnegative entries, because everything in it has nonnegative entries. More importantly, it is also bounded, with a uniform bound: its norm is bounded by one. I gave the proof; it went through a Perron–Frobenius argument. And that is actually very good information, because you see, we started with an equation where apparently there was no restriction on the size of the input: S could be very big, and when you solve the equation you get some m, so in principle this combination |m| S |m| could be big or small. But it is not: there is a cancellation, a compensation between the size of S and the size of m, in such a way that this combination becomes an order-one object, exactly bounded by one. Okay, and then it has another property, which I did not explain too much last time but which can also be proved: this operator has a spectral gap. It is a symmetric operator, so let me draw its spectrum. The operator has a norm, and the norm is also an eigenvalue, because it is a Perron–Frobenius operator; the norm is bounded by one, but this bound is not effective, so it can be arbitrarily close to one. Let me just draw the picture as if it were very close to one.
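A quick numerical check of these properties, now with the actual numerical solution of the equation (illustrative data; the helper names are ours): F = |m| S |m| is symmetric, has nonnegative entries, and has spectral norm strictly below one.

```python
import numpy as np

# Numerical check (a sketch with made-up data): for the solution m of
# -1/m = z + Sm, the saturated self-energy F_ij = |m_i| s_ij |m_j| is
# symmetric, has nonnegative entries, and has spectral norm below 1.

def solve_qve(S, z, tol=1e-13, max_iter=10_000):
    m = 1j * np.ones(S.shape[0])
    for _ in range(max_iter):
        m_new = 0.5 * m - 0.5 / (z + S @ m)
        if np.max(np.abs(m_new - m)) < tol:
            break
        m = m_new
    return m_new

n = 50
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 1.5, (n, n))
S = (A + A.T) / (2 * n)
m = solve_qve(S, 0.3 + 0.05j)

absm = np.abs(m)
F = S * np.outer(absm, absm)
evals = np.linalg.eigvalsh(F)              # real spectrum: F is symmetric
print(np.all(F >= 0))
print(evals[-1] < 1.0)                     # Perron-Frobenius eigenvalue below 1
```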
Okay, so that is the largest, in absolute value, eigenvalue of F. But then it also has a gap, and the gap means that the spectrum looks roughly like that: between the largest eigenvalue and the next eigenvalue there is a distance of order one. The gap was also defined in such a way that the lowest part of the spectrum stays away from minus the norm of F: there is a gap between the lowest part of the spectrum and −‖F‖ as well. So the spectrum looks like that: the part inside is separated away from the two endpoints, and there is a single, simple eigenvalue at the norm of F. This is what I am drawing here, the spectrum of F. Of course, F is a real symmetric matrix, so these are just eigenvalues, but the important thing is the support. Okay, and then we figured out that there is this stability operator, 1 − m²S. This stability operator is the basic operator of this equation, in the sense that if you perturb the equation, then how much the solution changes under the perturbation is determined by the inverse of this operator. And we have seen this, for example, when I perturb z, or just differentiate the equation in z: if I want to see how m changes as a function of z, then the derivative of m with respect to z can be expressed in terms of the inverse of 1 − m²S. So 1 − m²S is the basic operator, and we would like to understand its inverse. Now, this I have not done yet; this is what I will do now. But what we discussed last time is that if this stability operator has a bounded inverse, then lots of good properties follow; for example, real analyticity of the self-consistent density of states.
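The statement about the derivative can be made concrete: differentiating −1/m = z + Sm in z gives m′/m² = 1 + Sm′, that is, (1 − m²S) ∂_z m = m², so ∂_z m = (1 − m²S)^{−1} m². A finite-difference sanity check of this identity (illustrative data; the helper names are ours):

```python
import numpy as np

# Differentiating  -1/m = z + Sm  in z gives  (1 - m^2 S) dm/dz = m^2 ,
# so the derivative is controlled by the inverse of the stability operator.
# Below: compare (1 - m^2 S)^{-1} m^2 with a central finite difference.

def solve_qve(S, z, tol=1e-13, max_iter=10_000):
    m = 1j * np.ones(S.shape[0])
    for _ in range(max_iter):
        m_new = 0.5 * m - 0.5 / (z + S @ m)
        if np.max(np.abs(m_new - m)) < tol:
            break
        m = m_new
    return m_new

n = 50
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 1.5, (n, n))
S = (A + A.T) / (2 * n)
z = 0.3 + 0.05j

m = solve_qve(S, z)
B = np.eye(n) - np.diag(m**2) @ S            # stability operator 1 - m^2 S
dm_dz = np.linalg.solve(B, m**2)             # (1 - m^2 S)^{-1} m^2

d = 1e-5                                     # finite-difference step
dm_fd = (solve_qve(S, z + d) - solve_qve(S, z - d)) / (2 * d)
print(np.max(np.abs(dm_dz - dm_fd)))         # small
```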
It would also follow, not just real analyticity, but also control around the edges. There is this picture, the picture of ρ, the self-consistent density of states. Real analyticity means that everywhere away from the edges it is a real analytic function; on the other hand, it is also 1/3-Hölder continuous, which means it extends 1/3-Hölder continuously up to the edges. Of course, if something is real analytic it is also Hölder continuous, but it is real analytic only inside: the first statement gives no control on the derivatives near the boundary, while the second statement gives control on the regularity that extends even to the boundary. So that is the 1/3-Hölder continuity. Eventually, the stability operator will also be used in the final issue of how to prove that the resolvent is close to the solution of the self-consistent equation; that argument also uses a stability step, and the same stability operator emerges there. Okay, so that is how far we got last time. What is missing from this picture is the proof that this stability operator has a bounded inverse, and we will do it now; I flashed it up last time, but very quickly, so let me go through it again. We will do it with the help of the operator F, so let me do this little calculation over there. We would like to understand the operator 1 − m²S, and I would like to rewrite it in terms of the operator F, because F has good properties: F is self-adjoint, and I understand its spectrum. It is not very hard to rewrite: acting on a vector w, the w in the first part stays as it is, and then I would like to smuggle in these absolute values of m. Let us just see how I do it.
So I have an m², and I divide and multiply by |m|: m²Sw = (m²/|m|) F(w/|m|), because F itself has the |m|'s in it, and they cancel. This is just an identity, because F was defined as |m| S |m|·. Okay, that is the first step. Now I would like to bring out the operator F, so I write a polar decomposition: m, which is a complex number, or rather a whole vector of complex numbers, I write in polar form as m = e^{iφ}|m|, and everything here is a vector: m is a vector, φ is a vector. With the help of that, you can rewrite the whole thing the way it is written there; let me just copy it:

(1 − m²S)w = e^{2iφ} |m| (e^{−2iφ} − F)(w/|m|),

where e^{±2iφ} and |m| act by entrywise multiplication. Let us check that it is correct; you see what happens: m² is just e^{2iφ}|m|², the |m|² against the one |m| in the denominator gives you one |m| upstairs and the e^{2iφ} prefactor, so all these things give rise to the prefactor here, and the w I wrote in such a way that eventually the operator I am interested in appears inside. So it is just an identity, and now I notice two things. I want to invert this operator, but inverting it is of course the same as inverting the middle factor: e^{2iφ} does not do anything, it is just a unitary, entrywise phase, and the |m| and 1/|m| are also harmless, because we already knew from last time that |m| is an order-one object, so multiplying by it changes neither the norm nor the norm of the inverse by more than a constant. So eventually everything boils down to the invertibility of the operator e^{−2iφ} − F.
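The identity above can be verified numerically to machine precision (same illustrative data as before; the names are ours):

```python
import numpy as np

# Check of the factorization, with the numerical solution m:
#   (1 - m^2 S) w  =  e^{2i phi} |m| ( e^{-2i phi} - F ) (w / |m|),
# where m = e^{i phi}|m|, F_ij = |m_i| s_ij |m_j|, and the phases and |m|
# act by entrywise multiplication.

def solve_qve(S, z, tol=1e-13, max_iter=10_000):
    m = 1j * np.ones(S.shape[0])
    for _ in range(max_iter):
        m_new = 0.5 * m - 0.5 / (z + S @ m)
        if np.max(np.abs(m_new - m)) < tol:
            break
        m = m_new
    return m_new

n = 50
rng = np.random.default_rng(0)
A = rng.uniform(0.5, 1.5, (n, n))
S = (A + A.T) / (2 * n)
m = solve_qve(S, 0.3 + 0.05j)

absm, phi = np.abs(m), np.angle(m)
F = S * np.outer(absm, absm)

w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
lhs = w - m**2 * (S @ w)
u = w / absm
rhs = np.exp(2j * phi) * absm * (np.exp(-2j * phi) * u - F @ u)
print(np.allclose(lhs, rhs))                 # the identity holds exactly
```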
Or, if you wish, you can multiply through by e^{2iφ}; then it is the same as the invertibility of 1 − e^{2iφ}F. So everything boils down to that. But now, you see, F itself has a real spectrum, here is the spectrum of F, while this operator here typically does not have a real spectrum, especially if φ is nonzero. What is φ? φ is the phase of m, and sin φ essentially measures the imaginary part of m; in fact, |m| sin φ is exactly the imaginary part of m. Now, because F has a real spectrum, as long as e^{−2iφ} is not real this operator will be invertible; in other words, the operator is invertible as long as sin 2φ is nonzero. Actually, since F may have spectrum close to one but not close to minus one (this is the point minus one), the case when e^{−2iφ} happens to be minus one is not dangerous; the only dangerous case is when e^{−2iφ} is close to one. So there is just one bad case, when sin φ is close to zero. Okay, so this is the intuition. Now let me first show the simple case, where one can actually do it easily. The simple case is the Wigner case, not the generalized Wigner or Wigner-type case, but the Wigner case, which is when the solution of the self-consistent equation, the quadratic vector equation, is just a constant vector: the solution is (m, m, …, m), with m the Stieltjes transform of the semicircle law; that is the very old situation. In that case you know various properties of this m, but that is not so important; I basically do the same calculation as before, and in that case it is even easier, because there is no need for vector notation.
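In this scalar Wigner case everything is explicit, and one can check the claims directly. Taking imaginary parts of −1/m = z + m gives the exact relation 1 − |m|² = η|m|²/Im m, and since here F = (|m|²/n)·(all-ones matrix), so that ‖F‖ = |m|², the distance of the top eigenvalue of F from one is exactly of order η in the bulk. A short check (the branch-selection trick below is ours):

```python
import numpy as np

# Wigner case s_ij = 1/n: the solution is the constant vector m_sc(z), the
# Stieltjes transform of the semicircle law, solving m^2 + z m + 1 = 0.
# Then ||F|| = |m|^2, and taking imaginary parts of -1/m = z + m gives
#   1 - |m|^2 = eta * |m|^2 / Im m   exactly.

z = 0.3 + 0.05j
eta = z.imag
s = np.sqrt(z * z - 4)
if ((-z + s) / 2).imag < 0:                    # pick the branch with Im m > 0
    s = -s
m = (-z + s) / 2                               # semicircle Stieltjes transform

assert abs(1.0 / m + z + m) < 1e-12            # it solves -1/m = z + m
gap_to_one = 1 - abs(m) ** 2
print(gap_to_one, eta * abs(m) ** 2 / m.imag)  # the two agree
```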
F is symmetric and its spectrum lies between minus one and one; here is the spectrum of F. There is a little gap between the rest of the spectrum and the largest eigenvalue, and the largest eigenvalue, the red dot, may be very close to one; actually, in the semicircle case you know exactly that the distance between the largest eigenvalue and one is of order eta. So it would not be very useful to invert 1 − F directly; that inverse would blow up like one over eta. But that is not what you need to do. What you need to invert, looking at the other form, is 1 − e^{2iφ}F. Now e^{2iφ}F is not a self-adjoint operator, but its spectrum is rotated out of the real line: here is the spectrum of the rotated operator, rotated by an angle 2φ. You want to invert one minus that operator, so the question is how far the point one, the green dot, is from this rotated segment, and you see that typically it is far away. The only danger is when the angle 2φ is zero, when I did not rotate at all; even rotating by 180 degrees would not be a problem, because the other endpoint falls short, it does not go all the way up to one. So that is roughly the idea, and analyzing this you get the bound you expect: the norm of the inverse of the stability operator is basically the same as the norm of the inverse of 1 − e^{2iφ}F, and this can be bounded basically by one over φ, and this angle is comparable to the imaginary part of the solution, which is the density ρ. And this already indicates, first of all, that this argument will not work in general, at least not in this simple form: here I used commutativity all the time, I used the fact that e^{iφ} is a constant. The picture suggests that 2φ is not really
a vector but just a constant. In reality, in the Wigner-type case, φ is a vector, so the picture is not completely correct: it would suggest that in different spectral subspaces I am rotating by different angles. This I will discuss on the next slide. The picture also indicates what to do if you want to do an edge analysis, which is a much more complicated thing; I am not going to do it here, but we can do it, and we did do it. An edge situation means you are somewhere here, where the density is very small, so this kind of bound, one over the density, is useless; there φ is essentially zero, and you still want a proper analysis. In that case you have to isolate the direction corresponding to the largest Perron–Frobenius eigenvalue, the eigenspace of that eigenvector, and the rest can again be treated using the gap in the spectrum; but this one direction has to be followed through very carefully in the whole analysis. I am not going to do that. So this was the situation when the rotation angle is constant. If the rotation angle is not constant, the statement is morally still true, but I cannot draw it, and the proof does not go just by looking at the rotated spectrum. Instead there is a lemma, which is the key part of our proof; it is called the stability lemma, and the situation is the following. Take a Hermitian matrix T (in our application, F plays the role of T, but the lemma is stated in full generality), and suppose that its norm is at most one. Now take a unitary matrix U; in the application this unitary matrix will be the diagonal matrix e^{−2iφ}, but it is not a
constant matrix in general, just a diagonal matrix; the important thing for the lemma is only that it is unitary. So you want to invert U − T. T has the same spectrum as F (I should perhaps not have called it T): this is the spectrum, and the unitary matrix has its spectrum on the unit circle. I want to invert U − T; since these do not commute, the picture is a little bit misleading, but the claim is that this is invertible, and the bound on the inverse depends on two things. First, it depends on the gap of the operator, the distance between the largest eigenvalue and the rest of the spectrum; this is understandable. Second, it depends on what happens to the top eigenvector: there is the Perron–Frobenius eigenvector f, and you ask how f behaves when you act on it by the unitary U. The bound is roughly ‖(U − T)^{−1}‖ ≤ C / (Gap(T) · |1 − ‖T‖⟨f, Uf⟩|), where ‖T‖ is essentially one; this second factor in the denominator, with ⟨f, Uf⟩, keeps track of what happens to the single top eigenvalue. So that is a lemma which one can prove; the proof itself looks short, but it is actually one of the harder exercises, you have to separate various cases; it is not terribly hard, but not completely trivial. This is the basic lemma, and once you use it and plug in (let me not go through the details), you evaluate especially the second factor in our situation, and you get basically the same effect. What matters here is that ⟨f, Uf⟩ has a nontrivial imaginary part, because f, in our case, is the Perron–Frobenius eigenvector, so it has real, positive entries, and the U in our case is e to the
minus 2iφ; this plays the role of U. It multiplies entrywise, so when you compute ⟨f, Uf⟩, this e^{−2iφ} multiplies the entries of f by nontrivial complex phases. The calculation is done to exploit that, and altogether you get that the inverse is bounded by one over the imaginary part of m, squared. Previously, in the constant situation, we got one over the imaginary part of m itself, to the first power; that is what the picture indicates, but in reality, in the general situation where φ is not constant, you only get the square. This is because the picture suggests rotating in one direction, but if φ is a vector rather than a constant, then sometimes you rotate in one direction and sometimes in the other, and then you lose an order. Okay, so that is the Wigner-type case. Now let me just summarize what we know about the quadratic vector equation: existence, uniqueness, boundedness, Hölder continuity, and the fact that the stability operator has a bounded inverse, at least in the bulk. The bulk means that ρ, which is built from the imaginary part of m, is of order one; then the bound on the inverse is an order-one bound. And actually, though I did not explain it, there is a similar bound when you are away from the spectrum, so the only critical situation is when you are exactly at the edge; that requires a more careful analysis. So now you know about the quadratic vector equation; let me just stop here for a minute for any questions, because now we will go to the matrices, and you are supposed to know all these things in order to go to the matrices. Okay, so let me go to the matrices. If we do the same thing with the matrix Dyson equation, the basic equation looks formally the same, but of course the meaning of the objects is different: the unknown object is now a matrix, an n by n matrix M, and the condition that the
imaginary part is positive means something different now. For a vector, the imaginary part being positive meant entrywise positive; here M is a matrix, not a self-adjoint matrix, but you can take its imaginary part, defined as Im M = (M − M*)/(2i), exactly as you define the imaginary part of a complex number; this is then a self-adjoint, Hermitian matrix. So the sign condition Im M > 0 is meant as a quadratic form: Im M is a positive Hermitian operator. That is the side condition; z is still in the upper half-plane as before, and you would like to solve the equation. Now what is this S here? In the application, when we do it for correlated random matrices, S acting on any matrix R was just the expectation E[H R H], where H is the random matrix. You should think of this S as an operator which acts on matrices: S is a linear map from the space of matrices to the space of matrices, a super-operator if you wish. If you feed in any n by n matrix R, then you compute H R H, an n by n matrix, and take its expectation with respect to H (R is just a dummy parameter); you get a matrix, and that is what you call S[R]. Of course this matrix encodes all the correlations, the second moments, of this correlated random field, the random field being the matrix elements; one could write it out explicitly in terms of the second moments of H and get a nice formulation. So that is how we should think of S, but for most of the analysis we do not need this specific form; it is only our motivation. We need two properties of S. First of all, S is self-adjoint. Now, self-adjoint with respect to what? Self-adjointness
lives in a Hilbert space, and now we talk about matrices, so we need a Hilbert space structure on the space of matrices: the Hilbert–Schmidt scalar product. If you have two n by n matrices A and B, the scalar product is defined as ⟨A, B⟩ = (1/n) Tr(A* B); for convenience I take a normalized trace here, which is typically not part of the definition, but now I put it in. You can easily check that this is a scalar product on the space of n by n matrices; it also generates a norm in the usual way, the Hilbert–Schmidt norm. And it is easy to check that with respect to this scalar product, S is self-adjoint: for two arbitrary matrices R and T, ⟨R, S[T]⟩ = ⟨S[R], T⟩. So this is one property that I need; this self-adjointness is the analogue, on the matrix level, of the fact that on the QVE level s_ij was a symmetric matrix. The other property is that S is positivity preserving, the analogue of the fact that the s_ij are entrywise nonnegative; the super-operator does not really have entries, so positivity preserving means that if you take S[R] where R is a positive matrix (positive meaning, of course, self-adjoint and positive semidefinite), then the outcome is also a positive matrix. These are the two properties of S that we need for the rest of the analysis, and it is very easy to check that in our application, the expectation E[H R H] satisfies both. Now you can ask the same questions as for the vector equation. The first is whether this equation has a solution at all, and this was answered well before us; actually the best
reference is the paper of Helton, Far, and Speicher, but there are some remnants of it in our earlier work, and this equation is an old equation which, in one way or another, showed up in many different contexts. The claim is that it has a unique solution under this constraint, and moreover the solution is the Stieltjes transform of something. What is this something? It is a measure, but because the outcome is a matrix, it is not a measure with real values but a matrix-valued measure; there is an analogous extension of the concept of Stieltjes transform to matrices, and then it looks like that. Let me not spend much time on it; the proof is basically a fixed-point argument, you just have to raise everything to the matrix level. Similarly, you have the corresponding density of states, where again you formally do the same thing: you take this measure, and where earlier the average summed up the components divided by n, now instead of summing the components you take the normalized trace; that is the analogous concept, but it is basically the same thing. That will be the density of states, and this formula will give you this picture of the density. And now the key operator is the stability operator; let me write it up. Formally it is the same as before: the stability operator, which used to be 1 − m²S, in this case looks as follows: you take the identity minus M S[·] M. Again, this is an operator which acts on matrices: you give me a matrix, I plug it into the argument of S, I get another matrix, I sandwich it between M and M, and then there is the identity outside. So formally it looks the same, except that now it is important on which side the M's are; these are matrices, so I have to be careful about the order. And this is the operator that comes up when you do the usual stability analysis, as we did
before: there we got the stability operator by differentiating with respect to z, and here you can also do it; if you differentiate with respect to z and look at the linear stability, this is the operator you get. You can also write it in a slightly fancier way: the operation of sandwiching between M's can be written as a super-operator C_M, where C_M acting on any matrix R just means that you sandwich R by M on both sides, C_M[R] = M R M; then the stability operator is 1 − C_M S. So this is our stability operator. And now we would like to get basically the same type of results as we had for the vector equation, raised to the matrix level. The results sound very similar, analogous to the vector equation case, but the proofs are much harder, and you have to use the whole matrix structure. So let me assume flatness
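The matrix-level objects introduced above can be sketched numerically as well. The following toy example is our own construction: a super-operator S[R] = Σ_a A_a R A_a built from a few Hermitian matrices A_a (the covariance map R ↦ E[H R H] is of this general flavor), which is automatically self-adjoint for the normalized Hilbert–Schmidt scalar product and positivity preserving; a damped fixed-point iteration then solves −M(z)^{−1} = z + S[M(z)], and Im M comes out positive definite. The value of Im z is chosen large enough that the iteration contracts; this is an illustration, not the actual proof's scheme.

```python
import numpy as np

# Toy matrix Dyson equation  -M(z)^{-1} = z + S[M(z)]  with a super-operator
# S[R] = sum_a A_a R A_a built from made-up Hermitian matrices A_a.
# Such an S is self-adjoint for <A,B> = Tr(A* B)/n and positivity preserving.

rng = np.random.default_rng(2)
n, K = 6, 3
G = rng.standard_normal((K, n, n)) + 1j * rng.standard_normal((K, n, n))
As = [(g + g.conj().T) / 2 for g in G]
As = [a / (np.linalg.norm(a, 2) * np.sqrt(K)) for a in As]   # keep ||S|| <= 1

def S_op(R):
    return sum(a @ R @ a for a in As)

def hs(A, B):
    return np.trace(A.conj().T @ B) / n      # normalized Hilbert-Schmidt product

# self-adjointness and positivity preservation of S
R = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
assert np.isclose(hs(R, S_op(T)), hs(S_op(R), T))
P = R @ R.conj().T                           # a positive matrix
assert np.min(np.linalg.eigvalsh(S_op(P))) > -1e-12

# damped fixed-point iteration for M(z); Im z large enough to contract
z = 0.3 + 2.0j
M = -np.eye(n) / z                           # starting point with Im M > 0
for _ in range(500):
    M = 0.5 * M - 0.5 * np.linalg.inv(z * np.eye(n) + S_op(M))

ImM = (M - M.conj().T) / 2j
print(np.min(np.linalg.eigvalsh(ImM)) > 0)   # Im M > 0 as a quadratic form
```

The damping again exploits that matrices with positive imaginary part form a convex cone, so every iterate keeps Im M positive definite.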