As long as the energy stays away from the edge, you can again get bounds on m and on its inverse. The precise forms of the bounds are not terribly important; these are the natural analogs of the similar bounds we had in the vector case. Most importantly, we know that the stability operator — this was the stability operator here — has a bounded inverse in the natural norm. This is an operator defined on the space of matrices, and on the space of matrices, as we discussed, the natural norm is the Hilbert-Schmidt norm. So the norm here is the operator norm acting on matrices equipped with the Hilbert-Schmidt norm. It is a bit complicated, but once the space of matrices is equipped with a natural norm, that norm naturally induces a norm on the space of operators acting on that space, and this is what I call the spectral norm here. And in this norm the inverse is again bounded by something that stays bounded as long as you are in the bulk — as long as the density is bounded away from zero, or you are far away from the spectrum. The power 1000 here is not completely serious; any constant in place of 1000 would do. The optimal power would actually be 2, as in the previous case, but for that you again need further conditions.
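The induced "spectral norm" described here can be made concrete: equip the n×n matrices with the Hilbert-Schmidt inner product, flatten a super-operator into an n²×n² matrix, and take its largest singular value. A minimal numerical sketch (the particular S below is my own toy choice, not the lecture's self-energy operator):

```python
import numpy as np

# Sketch: the Hilbert-Schmidt inner product <A, B> = tr(A* B) makes the
# n x n matrices a Hilbert space, and a linear super-operator S acting on
# matrices then has an induced norm  ||S|| = sup ||S[R]||_HS / ||R||_HS.
# Flattening matrices to vectors turns S into an n^2 x n^2 matrix whose
# largest singular value is exactly that induced norm.

rng = np.random.default_rng(0)
n = 4

# A hypothetical super-operator for illustration: S[R] = A R A* + tr(R)/n * I
A = rng.standard_normal((n, n))

def S(R):
    return A @ R @ A.conj().T + np.trace(R) / n * np.eye(n)

# Matricize: column j of the big matrix is S applied to the j-th basis matrix.
big = np.zeros((n * n, n * n), dtype=complex)
for j in range(n * n):
    E = np.zeros((n, n))
    E[np.unravel_index(j, (n, n))] = 1.0
    big[:, j] = S(E).ravel()

spectral_norm = np.linalg.norm(big, 2)  # induced norm of S w.r.t. HS norm

# Sanity check against the definition on random matrices:
for _ in range(100):
    R = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    assert np.linalg.norm(S(R), 'fro') <= spectral_norm * np.linalg.norm(R, 'fro') + 1e-9
```

This is exactly the "one level higher" bookkeeping of the lecture: matrices play the role of vectors, and super-operators like S play the role of matrices.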
And this we didn't do yet. Okay, so let me try to indicate some proofs. Of course the main difficulty is that everything is non-commutative, unlike in the previous case, but nevertheless I will try to mimic some of the ideas from the earlier proof. Remember, the key equation — also in the vector case — was the one you get by taking the imaginary part of the equation itself; that later led to an equation which revealed what this F operator looks like. Here you have to do the same thing, but already when you take the imaginary part of my equation — it is now a matrix equation — you have to watch out for non-commutativity. The imaginary part of 1/m is just minus the imaginary part of m divided by |m| squared, as long as m is a complex number; that is a triviality. But if m is a matrix, then you have to be careful about the order in which you do things, and the right thing to do is this one. You see, I am even trying to avoid taking the absolute value of m — I should have said that typically m here is not even a normal matrix. It is not self-adjoint, not even normal. And if a matrix is not normal, then even the absolute value is a little bit fishy: there are two ways to define it, the square root of m m* or the square root of m* m, and if m is not normal these two are not the same. You can decide for one, and if you look it up in any book that defines the absolute value of a matrix, usually they choose one of them, but they don't tell you that there is another choice, and there is always this ambiguity. So you try to avoid taking the absolute value. Okay, so this is what you do; this is one way of doing it.
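The warning about the two absolute values is easy to exhibit numerically: for a non-normal m, sqrt(m m*) and sqrt(m* m) genuinely differ. A small sketch with a toy Jordan-type matrix of my own choosing:

```python
import numpy as np

def herm_sqrt(H):
    """Square root of a Hermitian positive semidefinite matrix via eigh."""
    w, V = np.linalg.eigh(H)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

m = np.array([[1.0, 5.0],
              [0.0, 1.0]])          # Jordan-type block: badly non-normal

left  = herm_sqrt(m @ m.conj().T)   # one candidate: sqrt(m m*)
right = herm_sqrt(m.conj().T @ m)   # the other:     sqrt(m* m)

print(np.allclose(left, right))     # False: the two "absolute values" differ
```

For a normal matrix the two coincide; the discrepancy above is precisely the ambiguity the lecture wants to avoid by never taking an absolute value of m at all.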
Then this is the imaginary part on that side, and on the other side it is easier, because taking the imaginary part goes through the S operator — S is a linear operator. Okay, and if you start studying this equation, you can try to deduce similar bounds as before. For example, the cheapest bound, which you get immediately, is this: S is a positivity-preserving operator and the imaginary part of m is positive, so this term is positive. You can even neglect it as a lower bound, and then you get that the imaginary part of 1/m is at least the imaginary part of z; inverting that relation yields at least the trivial bound that m is bounded by 1 over eta. We had the same bound in the vector case as well, but of course in general you have to get something much better. Now, let me not go through each of these estimates; let me just focus on one thing. The key question is: what is the proper analog of the operator F? Remember, a huge part of the proof in the vector case relied on the fact that I created this F operator, and the F operator had much better properties than the S operator: it was bounded, it was self-adjoint, and so on. So how to do that — what is the right operator here? And here is the goal. The goal is, with the help of this F operator, to get a bound on the inverse of the stability operator, and the result — I think it was already on a previous slide — is the claim that the stability operator is invertible, at least as long as you are in the bulk. The key is to find the right symmetrization. Here I just flash up again what was done before in the vector case.
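The trivial bound ||m|| ≤ 1/η can be checked by iterating the matrix Dyson equation m = (−z − S[m])⁻¹ to its fixed point. A sketch under a simplifying assumption: I use the flat Wigner-type self-energy S[R] = tr(R)/n · I, which is my own illustrative choice, not the lecture's general S:

```python
import numpy as np

# Numerical check of the a priori bound ||m|| <= 1/eta with eta = Im z,
# obtained in the lecture by dropping the positive term S[Im m] and
# inverting Im(1/m) >= eta.  Assumed toy self-energy: S[R] = tr(R)/n * I
# (the flat Wigner-type choice), so the fixed point is explicit enough
# for the iteration to converge.

n, z = 3, 0.5 + 0.2j
eta = z.imag

def S(R):
    return np.trace(R) / n * np.eye(n)

m = np.linalg.inv(-z * np.eye(n))           # starting point with Im m > 0
for _ in range(2000):
    m = np.linalg.inv(-z * np.eye(n) - S(m))

print(np.linalg.norm(m, 2) <= 1 / eta)      # True
```

The iteration preserves positive imaginary part (if Im m > 0 then Im(−z − S[m]) < 0, so the inverse again has positive imaginary part), which is why the trivial bound applies at every step, not only in the limit.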
In the vector case we had m, the solution, which is a complex vector; we wrote up its polar decomposition, in particular the absolute value |m|, and out of this |m| we created the F operator — S sandwiched between |m| on both sides. And then we saw that 1 minus m squared is basically the same as this F operator minus e to the minus 2 alpha — this summarizes what I did on the blackboard here. Now, what did we need about F? F had two good properties. One is that it was symmetric: at the end of the day you want to control the spectrum, and it was very important that F had a real spectrum. So we needed F symmetric — this was used in the spectral analysis. But we also needed it to be positivity preserving, because positivity preservation was needed for the Perron-Frobenius argument. So F had these two good properties — it is even written here — and now we would like to mimic this: to find a corresponding F in the matrix case. But in the matrix case this F will live one level higher, because now I am talking about an operator that acts on matrices; my new F will be on the same level as this S operator, this calligraphic S — it has to act on matrices. So it has to be self-adjoint with respect to the Hilbert-Schmidt scalar product — that is the analog of the symmetry — and it has to be positivity preserving in the sense that if you feed in a positive matrix, the outcome is a positive matrix. Okay, and here, just to remind you of the formula: when I took the imaginary part of this equation and appropriately reorganized it, this |m| S |m| operator sort of popped up. So now let me do the same in the matrix case — let me do it on the blackboard, because there you see it slower, at least at the beginning.
So I take the imaginary part of that equation, as before. When you take the imaginary part, you get this here: somewhere there is the imaginary part of m, sandwiched between 1/m* and 1/m, but I can multiply through by m* from one side and m from the other side, so I write the equation in this form: eta m* m plus m* S[Im m] m. That is the imaginary part of the equation. Let me forget about this first term: it is a helpful positive term, but eta is tiny and it does not help much. Now I would like to do something like a polar decomposition here, because in order to build the F operator I need to smuggle in the absolute value of m. The problem is that I don't know what the absolute value is, and even if I knew it, I would have to be careful about which side to put it on, since everything is non-commutative. So what I do is take a matrix Q, which I don't know yet; I just assume there is such a matrix Q and play with it. The intuition is that 1/Q will be something like 1 over the square root of m — this doesn't quite make sense as it stands, but that is what you should keep in mind. Okay? So now I multiply the equation from both sides by 1/Q and 1/Q*. So again 1/Q plays the role of 1 over the square root of m: this side altogether becomes something like the imaginary part of m divided by |m|, except that dividing by |m| has to be split into two parts, the square root of it on the left and the square root of it on the right — and this is what Q mimics in the end. Okay? So now let me rewrite this term here; that term I neglect, it is not so important.
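The rearranged imaginary-part identity, Im m = η m* m + m* S[Im m] m, can be verified at a numerically computed fixed point. A sketch with a hypothetical self-energy S[R] = A R A, A Hermitian — my own choice, made only because it is positivity preserving and self-adjoint with respect to the Hilbert-Schmidt scalar product:

```python
import numpy as np

# Verify  Im m = eta * m* m + m* S[Im m] m  at a fixed point of
# m = (-z - S[m])^{-1}.  The identity follows by taking imaginary parts
# of m^{-1} = -z - S[m] and multiplying through by m* and m.
# Assumed toy self-energy: S[R] = A R A with A Hermitian.

rng = np.random.default_rng(1)
n = 3
B = rng.standard_normal((n, n))
A = (B + B.T) / 4                     # Hermitian, modest norm
z = 0.3 + 2.0j                        # large eta, so the iteration contracts
eta = z.imag

S = lambda R: A @ R @ A

m = np.linalg.inv(-z * np.eye(n))
for _ in range(500):
    m = np.linalg.inv(-z * np.eye(n) - S(m))

Im = lambda X: (X - X.conj().T) / 2j
lhs = Im(m)
rhs = eta * m.conj().T @ m + m.conj().T @ S(Im(m)) @ m
assert np.allclose(lhs, rhs, atol=1e-10)
```

Note the ordering: m* on the left, m on the right; with matrices you cannot reshuffle these factors, which is exactly the non-commutativity the lecture warns about.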
So now I have the 1/Q — I multiplied the equation by 1/Q and 1/Q* from the two sides — and then I have the m*, multiplied by 1/Q from the left. But I don't like something that looks like 1/Q times m*; I want everything in a symmetric form. The right way to make it symmetric is to smuggle in a 1/Q* at that point, and then compensate for it — it becomes a Q*. Okay? Now I have this S, and in its argument I have the imaginary part of m. Remember, the trick before was also that the argument of S was the imaginary part of m, but I wanted to see the imaginary part of m divided by |m| — the same object in the argument of S as on the left hand side. On the left hand side I have this object, but originally I only have S acting on the imaginary part of m. So I smuggle in a 1/Q and a 1/Q* on both sides of the imaginary part of m, and the price I pay is that I have to compensate — again I divide and multiply. It looks like I am not doing anything. Then on the other side I multiply the whole thing by 1/Q*, which hits the m; but again I don't like anything that looks asymmetric, so I smuggle in the 1/Q and the Q. Yes, I didn't do anything — it is just an identity. But I claim it looks much nicer than before. You may not agree, but that is a fact. This one looks nicer, because now you should think of it the following way: there is an operator here, this part, which looks like Q* S Q acting on something in the middle, and then Q* and Q. So what are these somethings? Let me call this one X, just so you can see a little better what happens, and let me call this one Y — no, sorry, this was the Y. This is the Y.
So this is the Y, and this is the Y*: I just defined m sandwiched between 1/Q and 1/Q* to be the Y — that is here — and on the other side I see the star of Y. Okay, so with these notations the equation schematically reads: X equals Y* times F[X] times Y. Here you have this operator, Q* S Q followed by the Q* and Q — this operator will be our F operator; I will write it up in a minute. This F operator acts on the object in the middle, and then you have the Y's outside. F is defined exactly by this formula: F applied to any matrix R is Q* S[Q R Q*] Q. Read it the right way: take a matrix R; first sandwich it between Q and Q*, which gives another matrix; let the super operator S act on that matrix, which gives a matrix; then sandwich this matrix again by Q* and Q, in the opposite order. Okay, so this is the definition of F, and with this definition we got this equation. Now you can see immediately that this F is a good operator: because the stars are put in the right places, it is a symmetric operator with respect to the Hilbert-Schmidt scalar product, and it is also positivity preserving. Once I tell you the formula, it is not hard to check these properties — the key was to find the formula. Okay, but what do I do with the rest? My equation now looks like this, but it is still not exactly in the form I want. Remember, I would like a Perron-Frobenius type equation: an equation saying that F acting on X gives back X — an eigenvalue equation, with an eigenmatrix in this case. If these Y's were not there, I would be happier; then it would be exactly a Perron-Frobenius-like eigenvalue equation. So I have to get rid of them, and fortunately I can do it if Y has some additional property.
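Both advertised properties of F[R] = Q* S[Q R Q*] Q — self-adjointness with respect to the Hilbert-Schmidt scalar product and positivity preservation — can be checked in a few lines for a random Q. The stand-in S[R] = A R A with A Hermitian is my own toy choice for a self-energy with the same two properties:

```python
import numpy as np

# Check the two properties of F[R] = Q* S[Q R Q*] Q for a random Q.
# Assumed toy self-energy: S[R] = A R A with A Hermitian (itself
# HS-self-adjoint and positivity preserving).

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
Q = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

S = lambda R: A @ R @ A
F = lambda R: Q.conj().T @ S(Q @ R @ Q.conj().T) @ Q
hs = lambda X, Y: np.trace(X.conj().T @ Y)     # Hilbert-Schmidt scalar product

X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# 1) self-adjointness with respect to the HS scalar product
assert np.isclose(hs(X, F(Y)), hs(F(X), Y))

# 2) positivity preserving: a PSD input gives a PSD output
P = X @ X.conj().T                              # generic PSD matrix
assert np.linalg.eigvalsh(F(P)).min() > -1e-10
```

Both checks work for any Q: the sandwiching in "opposite order" is precisely what makes the stars pair up in the Hilbert-Schmidt pairing.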
First, notice that X and Y are of course not unrelated. This is X, this is Y, and the only difference is that X carries an imaginary part: X is actually the imaginary part of Y, just by the formula. Now, I don't want this Y and Y* on this side. Of course I can put them on the other side — a Y*-inverse and a Y-inverse over there — that does not hurt much. But if Y were unitary, then you could feel that this Y and Y* should cancel each other; not on that side, rather on the other side. They would cancel if they commuted with X. In general that is not true, but it is true in this situation: if Y is unitary, then in particular it commutes with its own imaginary part — that is an easy check — so X and Y commute. Then you can put the Y's on the other side, commute them through X, and the Y's disappear. The equation is then exactly what I want: a Perron-Frobenius type equation, from which you would conclude, among many other things, that the norm of F in this natural norm is bounded by 1 — just as before — and then the whole Perron-Frobenius machinery works. So that is the goal. This will be my F operator, but I still have to deliver this "if", and of course I also haven't told you what Q is — so far Q is just a dummy object in this whole calculation. But before that, let me just finish how the proof goes, and at the end I will show you the Q. Once you believe there is such a good Q, you can rewrite things: you can do the same calculation as before and rewrite the stability operator, which is basically built from the m S[·] m operator — this one here — in terms of the F operator. Let me not go through the calculation in detail.
It is just an identity — the same type of trick, you keep smuggling these Q's in and out. This calculation here is the analog of the earlier calculation, which was much more visible, where you wrote the stability operator as F minus a unitary, sandwiched by something harmless on both sides. The same calculation works here. Here is the stability operator, and it becomes something involving the F operator on that side — or maybe the next line is easier to see. Altogether you can write the stability operator with some decoration on the two sides — this operator K of Q, which acts on the space of matrices by sandwiching; those two sides are not important. The important part is the middle one, and the middle one is exactly the identity minus a unitary times F. This C sub Y is the sandwiching operator with Y, and if Y is unitary as a matrix, then sandwiching with Y is also a unitary operation one level higher. So altogether you get stability if this middle factor is stable, and studying it basically goes back to the previous stability lemma: a self-adjoint operator times a unitary operator — invert one minus that. So all we need is to find the Q. What was assumed here? We assumed, or rather used — reading this line in a different way — that m can be written as Q Y Q*, where the factor Y in the middle is unitary. So what is that? It looks like a polar decomposition.
A typical polar decomposition takes any matrix and writes it as an absolute value times a unitary. As I already said, for a non-normal matrix the absolute value is not unique: you can have a polar decomposition as absolute value times unitary, or the other way around, unitary times absolute value, and the two absolute values are not the same. So the standard polar decomposition has some arbitrariness in it; it is not symmetric. This one here is much more symmetric: the unitary is in the middle, and Q should be thought of as a kind of square root of the absolute value, if that existed. Okay, so this is what we are after, and once you know what you are after, you can do it fairly easily. We don't do it for every matrix m — our m has a good property: it has positive imaginary part. So what we do — let me just write one more line — is write m as a plus ib, where a is the real part and b is the imaginary part. Now I continue by pulling out the square root of b. B, the imaginary part of m, is self-adjoint and positive, so its square root exists; there is no ambiguity about the square root of a positive operator. So I take the square root of b and compensate: b to the minus one half, times a, times b to the minus one half, plus i, sandwiched between square roots of b. This is just an identity — smuggling things in and out. Now this middle factor is a good guy. What is it? Since a and b are Hermitian, this b^{-1/2} a b^{-1/2} is a Hermitian matrix, plus i. So the whole middle factor is a normal matrix: a self-adjoint matrix plus i — a very simple object.
So I would like to write m as something, times a unitary, times something else. This middle factor looks like a good candidate for Y, except that it is not unitary yet. I can make it unitary by dividing by its length, and that is what I do next. So I define W from the length of that operator — or rather from its square: (b^{-1/2} a b^{-1/2}) squared plus 1 — and I take the one-quarter power of it, so that W squared is exactly the length of the middle factor. Now I divide through by that: this becomes (b^{-1/2} a b^{-1/2} + i) divided by W squared — this will be my Y, and it is unitary. Whatever is left I collect on both sides: a W and then a square root of b on each side. So this will be the Y, and this will be the Q. Okay, in this way you can check the properties; this gives the symmetrized polar decomposition. And if you know that b, the imaginary part, is bounded away from 0 and from above, and a is also bounded, then you can check from this explicit formula that Q is an order-one operator, in the sense that its norm and the norm of its inverse are both bounded. This makes sure that all the sandwiching operations with Q and 1/Q hanging around are harmless as far as invertibility is concerned. Everything boils down to the invertibility of this middle factor, as before, and that can be handled exactly the same way as in the stability lemma I mentioned previously, because the lemma was formulated in a Hilbert space context. This Hilbert space does not have to be my original n-dimensional C^n space.
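The whole symmetrized polar decomposition can be run numerically: with h = b^{-1/2} a b^{-1/2} and W = (h² + 1)^{1/4}, the matrix Y = W^{-1}(h + i)W^{-1} is unitary, and m = Q Y Q* with Q = b^{1/2} W. A sketch on a random m with positive imaginary part (the particular m is my own toy instance):

```python
import numpy as np

# Symmetrized polar decomposition m = Q Y Q* as on the blackboard:
#   h = b^{-1/2} a b^{-1/2},  W = (h^2 + 1)^{1/4},
#   Y = W^{-1} (h + i) W^{-1}  (unitary),   Q = b^{1/2} W.
# Works because h is Hermitian and W, a function of h, commutes with it.

rng = np.random.default_rng(3)
n = 3
a = rng.standard_normal((n, n)); a = (a + a.T) / 2         # real part, Hermitian
c = rng.standard_normal((n, n)); b = c @ c.T + np.eye(n)   # imaginary part > 0
m = a + 1j * b

def herm_pow(H, p):
    """Power of a Hermitian positive definite matrix via eigh."""
    w, V = np.linalg.eigh(H)
    return (V * w**p) @ V.conj().T

b_half = herm_pow(b, 0.5)
b_ihalf = np.linalg.inv(b_half)
h = b_ihalf @ a @ b_ihalf                                  # Hermitian
W = herm_pow(h @ h + np.eye(n), 0.25)
Winv = np.linalg.inv(W)
Y = Winv @ (h + 1j * np.eye(n)) @ Winv
Q = b_half @ W

assert np.allclose(Y @ Y.conj().T, np.eye(n))              # Y is unitary
assert np.allclose(Q @ Y @ Q.conj().T, m)                  # m = Q Y Q*
```

Unitarity is immediate once you notice (h + i)(h − i) = h² + 1 and that W⁴ = h² + 1, with everything commuting; and Q Y Q* telescopes back to b^{1/2}(h + i)b^{1/2} = a + ib.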
It can be the space of matrices, equipped with the Hilbert-Schmidt norm, and in that setup the lemma looks exactly the same as before. Of course one has to check that the super operator also has a gap, and one has to check what happens to the one critical eigenvector — which here is actually not an eigenvector but an eigenmatrix, since the super operator acts on matrices — and you have to check that it is separated away from zero. Okay, so that is roughly what you do, and once you have that, you can basically go back to the previous scheme. So here, finally, is our statement. Going back to correlated random matrices — I think I already put this theorem up before — the theorem is that the resolvent is approximated by m, the solution of the matrix Dyson equation, both in the entrywise sense, so G_xy is close to m_xy, and in the trace sense. Typically m is not diagonal, because the setup is general, and G also has non-trivial off-diagonal components — this is the general situation where the resolvent does not reduce to the diagonal case. Let me not discuss the proof; basically, as I already explained, once you have the original self-consistent equation, you write up an equation for the resolvent, which is a small perturbation of it, and then using stability you get a good bound on G minus m in terms of the size of D. So let me just summarize in this little table. We have studied models of various generality, and in this table you see them. There were four models: the original Wigner case, where S_ij was 1/N; the generalized Wigner case, where the row sums were 1; the Wigner-type case, where S_ij was arbitrary but the matrix elements were still independent; and then the most general case, the correlated situation.
Here you see the Dyson equations. They all look the same, but the meaning of the terms gets more and more complicated: here m is a scalar, here m is a vector, and here m is a matrix. So there is a scalar equation, a vector equation, and a matrix equation — and correspondingly the stability operators. Everything looks the same, but each line is one level higher than the previous one. What I would like to close the session with is this: it looks like the correlated Wigner matrix is the only one that needs the matrix equation — so far one could get away with vector equations — but this is not really the right point of view. The right point of view is that the matrix equation is the true object. This last one is the true object, because eventually you really want to understand the resolvent, and the resolvent itself is a matrix. In some simpler models it can happen that the matrix equation restricts to diagonal matrices — everything becomes diagonal — and then of course you don't want to describe a diagonal matrix as a matrix; you just describe its diagonal as a vector. And if that diagonal matrix is a constant matrix, you can forget even the vector and go back to the scalar level. But the basic object is always a matrix; there is a matrix hanging around behind all of these. So I'm not going to do any recapitulation; let me just come to the summary. We reviewed the local laws for various random matrix ensembles in increasing generality, as I discussed before. I gave a quantitative analysis of the solution of the matrix Dyson equation, especially its stability. Let me emphasize again that the matrix Dyson equation is the correct equation in random matrix theory; it has many, many applications.
I just showed the application to the correlated case, but there are many other situations: for example Gram matrices; or the inhomogeneous circular law, where you take non-Hermitian matrices, use Girko's formula — Girko's trick — and Hermitize; or structured matrices, block matrices, and so on. All these models can be cast into the framework of the matrix equation, and if you do that, you can see that this nice structure suddenly emerges much more easily. So it sometimes pays to rewrite a simpler problem as a more complicated matrix problem, because the structure reveals itself. And for the concrete case of correlated random matrices with slow correlation decay, one eventually proves the optimal local law and all its consequences, up to Wigner-Dyson-Mehta universality. Okay, so thanks for the attention; everything else is in the notes. Is there maybe one quick question? — Yes, everything is non-centered here. Actually, I'm not sure if I put it somewhere; I think at the very beginning I still had an A in it, but then I removed it. There was an A here, in the vector case — that A stands for the expectation. The same thing can be done at the matrix level; I didn't put it in here. So if you have z plus A plus S[m], with A the expectation, you can do that — actually this is explained better in the previous talk. So you can include the expectation; certain answers will differ a little — some of the bounds don't look exactly the same — but the final results hold: we have the local law, universality, everything, with a general expectation. I just didn't put it into this discussion explicitly. — Okay, maybe we should leave further questions for lunch. Let's thank László for his nice lecture.