We are going to prove it for a very general family of random matrices. We assume that the entries of the matrix are independent, with the only possible exception that the entry a_{ij} may depend on a_{ji}. This includes fully independent ensembles like the Ginibre ensemble, as well as Hermitian and other ensembles. The second assumption is that the real parts of the entries are random while the imaginary parts are deterministic. Although this sounds not completely natural, it is intended to include the two most natural classes: real matrices, where the imaginary part is just zero, and complex matrices with the imaginary part independent of the real part — in that case we condition on the imaginary part.

So what we are striving to establish is the delocalization property: for any unit eigenvector and for any set of coordinates of size εn, the ℓ₂ norm of the eigenvector restricted to these coordinates is at least polynomial in ε. The power 6 we get here is not optimal by any means, but we want to get a polynomial dependence. We will prove that if the entries have a uniformly bounded density, then this delocalization event is likely: the probability of failure is exponentially small in εn, plus an additional term which is small if the entries have, say, a bounded fourth moment. This can be established for sets of as few as eight coordinates.

This is what we are going to prove, but there is a much more general analog of this result: the same statement is true without almost any assumption on the random entries. Namely, if we only assume that the entries are not deterministic — not concentrated in small discs — then the same conclusion holds, and delocalization is still likely. This is a much more complicated result, so I will not talk much about it; we will discuss the previous one. The approach to both results, however, is the same.

First of all, we want to get rid of the notions of eigenvector and eigenvalue. Instead of the eigenvalue, which is random, we can consider any deterministic complex number λ, and we can also consider any set I of cardinality εn. If we manage to prove that the minimal singular value of the matrix A − λ, reduced to the columns in the complement of this set, is smaller than δ√n only with small probability p₀, then the delocalization event has probability at least 1 minus p₀ times a harmless coefficient times the binomial coefficient $\binom{n}{\varepsilon n}$. This binomial coefficient causes trouble, because it is super-exponential. The first time we tried to attack this problem, with an ε-net argument, it resulted in a complete failure: whichever route we took we found an obstacle we could not get past — the dependence between the entries, an entropy cost which is too high, or this super-exponential bound.

So what can we do if we don't know how to solve the problem? The usual approach is to relax the problem, solve something easier, and then at least we will be partially happy. So instead of the minimal singular value, let us bound an average of the singular values. Now, which kind of average should we choose?
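Before turning to the choice of average, let me record why the restricted smallest singular value controls delocalization. This is my reconstruction of the standard reduction, not spelled out in the talk: the bound ‖A − λ‖ ≤ C√n is an extra assumption (this is where the fourth-moment condition and the "harmless coefficient" enter), and for simplicity I take λ to be the exact eigenvalue, whereas the actual proof runs λ over a net of deterministic values. Write (A − λ)_{I^c} and (A − λ)_I for the submatrices of columns indexed by I^c and I. If v is a unit eigenvector with eigenvalue λ, then

$$
0=(A-\lambda)v=(A-\lambda)_{I^c}\,v_{I^c}+(A-\lambda)_{I}\,v_{I}
\quad\Longrightarrow\quad
s_{\min}\!\big((A-\lambda)_{I^c}\big)\,\|v_{I^c}\|\;\le\;\|(A-\lambda)_{I}\,v_{I}\|\;\le\;\|A-\lambda\|\,\|v_{I}\| .
$$

Hence, on the event that s_min((A − λ)_{I^c}) ≥ δ√n and ‖A − λ‖ ≤ C√n, either ‖v_I‖ ≥ 1/2 already, or ‖v_{I^c}‖ ≥ 1/2 and then ‖v_I‖ ≥ δ/(2C). A union bound over the $\binom{n}{\varepsilon n}$ choices of I, and over the net of values of λ, produces the super-exponential factor mentioned above.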
The arithmetic mean would be immediate, but it is not too interesting: if you take the arithmetic mean of the singular values, most of the contribution comes from the largest singular values, so we don't feel the small ones. The geometric mean is useless for the same reason. The harmonic mean sounds more exciting, because it puts the maximum weight on the smallest singular values, but we don't know how to handle the harmonic mean. Fortunately, there is a mean which we do know how to handle — a second-order harmonic mean — and this is thanks to the negative second moment identity.

So what is this identity? It already appeared in Terence Tao's talks, and proving it was one of the questions during the problem session. Let me remind you what it is. Suppose I have a matrix B; in our case B will be A − λ reduced to the set I^c, so it is an n × (1 − ε)n matrix. Then we have a linear-algebraic identity:

$$
\sum_{j=1}^{(1-\varepsilon)n} s_j(B)^{-2} \;=\; \sum_{j=1}^{(1-\varepsilon)n} \operatorname{dist}(B_j, H_j)^{-2},
$$

where the s_j(B) are the singular values of B, the B_j, for j = 1, …, (1 − ε)n, are the columns of B, and H_j is the linear span of all columns except the j-th. This identity is very easy to establish: it is just a way of writing the Hilbert–Schmidt norm of the inverse of B acting from its image. You can write this Hilbert–Schmidt norm in two ways: one is the sum of the squares of the singular values of the inverse, which gives the left-hand side, and the other is the sum of the squared norms of its rows — the row dual to the column B_j has norm exactly dist(B_j, H_j)^{-1} — and this yields the right-hand side.

So now we have these distances, the distance between one column and the span of the rest, and I can write each distance as the Euclidean norm of a projection: dist(B_j, H_j) = ‖P_j B_j‖, where P_j is the orthogonal projection with kernel the subspace H_j. We want to estimate the negative square of the distance, that is, of the norm of this projection, which means that we want to establish a small ball probability for the norm of the projection.

Let's look at this vector B_j. It is a column of the matrix, so its entries are independent and have a bounded density. This is precisely what we prepared last time: if we have a vector whose coordinates are independent and have a uniformly bounded density, then for any orthogonal projection of rank d, the density of the projected vector is bounded by (a constant times) the density bound of the coordinates, raised to the power d — just as for a coordinate projection. We are going to use this in an equivalent form: we will consider the Lévy concentration function of the projection, which is the supremum of the probabilities of small balls — balls of radius t√d — and this Lévy concentration function is bounded by (CKt)^d, where K is the bound on the densities. One important feature of this estimate is that when you translate densities into small ball probabilities, you gain a factor √d, which comes from the volume of the Euclidean ball.

So it seems that we are ready. We have to bound the probability that ‖P_J B_J‖ — the norm of the projection of B_J onto the orthogonal complement of the linear span of the other columns — is less than t times the square root of the dimension; and since the kernel has dimension (1 − ε)n − 1, the dimension of the image will be εn, give or take one. I'll be a little inaccurate here.
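Since the negative second moment identity carries the whole argument, here is a small numerical sanity check of it — not part of the talk, just a sketch in numpy with an arbitrary choice of dimensions and seed — comparing the sum of inverse squared singular values with the sum of inverse squared column-to-span distances, the latter computed through the orthogonal projections discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 30   # a tall n x m matrix, with m playing the role of (1 - eps) * n
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

# Left-hand side: sum over j of s_j(B)^{-2}.
s = np.linalg.svd(B, compute_uv=False)
lhs = np.sum(s ** -2.0)

# Right-hand side: sum over j of dist(B_j, H_j)^{-2}, where H_j is the span of
# the other columns; the distance is the norm of the component of B_j
# orthogonal to H_j.
rhs = 0.0
for j in range(m):
    others = np.delete(B, j, axis=1)
    Q, _ = np.linalg.qr(others)                  # orthonormal basis of H_j
    dist = np.linalg.norm(B[:, j] - Q @ (Q.conj().T @ B[:, j]))
    rhs += dist ** -2.0

print(lhs, rhs)
assert np.isclose(lhs, rhs)   # the identity holds whenever B has full column rank
```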
Coming back to the dimension count: I will drop this plus-or-minus one just so as not to carry it around. This is harmless if εn is greater than, say, eight — we can compensate by putting a constant one half in front. So this probability is bounded by what we are supposed to get.

This is indeed how we will proceed, but a direct application of the previous theorem does not work. Let's see why. First, that theorem pertains to a deterministic projection, while the projection P_J is random: it is determined by the linear span of the other columns, and the column B_J and that span are dependent, because the entries of the matrix are not independent. But this dependence is very easy to get rid of.

Let's draw the matrix. This was our A − λ. I took a set I of cardinality εn and threw it away; I consider only the columns belonging to the complement of I. Here is the diagonal. Then I take a column, the J-th column B_J, and look at the span of the others. Our dependence condition is set up in such a way that only entries symmetric with respect to the diagonal can depend on each other, so the only entries dependent on this column lie in the J-th row. Let's throw the J-th row away: I will consider the vectors B'_J in C^{n−1}, the columns of the matrix with the J-th row erased. If I erase the J-th row both from the column B_J and from all the other columns, I can write that the distance between B_J and H_J is greater than or equal to the distance between B'_J and H'_J. Now we are in good shape, because the vector B'_J is independent of the other vectors that span H'_J.

So can we use the result now? There is another caveat: we proved the result over R, and our vectors are complex. This is not a big deal. We can write, say, B'_J = X_J + iY_J and consider a real realization of this vector, the vector (X_J, Y_J), which lives in R^{2(n−1)}. In the same way we create a real subspace H̃'_J. But it does not go through, because the vectors X_J are random while the vectors Y_J are deterministic.

A natural attempt to fix this is to erase the vectors Y_J, that is, to project out the imaginary coordinates. But that does not work either: the real analog of H'_J has dimension 2(1 − ε)n − 2, and since ε is small this is more or less 2n. If I project out the imaginary coordinates, I have a projection whose range is (n − 1)-dimensional; I would be projecting a subspace of dimension almost 2n onto an (n − 1)-dimensional space, and I could well get the whole space, so I would not be able to prove any small ball estimate for the projection.

However, this is not too serious a problem. We can apply an easy symmetrization trick to be able to use the theorem (the resulting chain of inequalities is displayed after this derivation). So let's see how we can do it. We are going to operate on the level of Lévy concentration functions. I am going to consider the small ball probability that ‖P'_J (B'_J − z)‖ ≤ t√(εn), where P'_J is the orthogonal projection whose kernel is H'_J, B'_J = X_J + iY_J, and z is any point — any complex point of the respective space, C^{n−1}. The projection is linear, and z is a deterministic vector.
So I can write it as the probability that ‖P'_J X_J − u‖ ≤ t√(εn), where I hide under u the deterministic projection of iY_J together with the deterministic point z.

Now let's square it — or rather, instead of squaring, I'll multiply it by the same thing, the probability that ‖P'_J X_J − u‖ ≤ t√(εn); but in this second factor I can replace X_J by an independent copy, say X̂_J. What else can I do? The space H'_J is a complex subspace: it is invariant under multiplication by complex numbers. So if I multiply X̂_J by i, I can pull the i out of the projection — multiplying u by i as well, of course — and then, by the triangle inequality, the product is at most the probability that ‖P'_J(X_J + iX̂_J) − (u + iu)‖ ≤ 2t√(εn).

Let's look at this vector. If I write it in real form, I get a vector, say X̃_J = (X_J, X̂_J): a vector with independent coordinates, and all these coordinates have uniformly bounded density. So I can now operate in the space R^{2(n−1)} instead of the space C^{n−1}, and I can finally apply the bound on the Lévy concentration function: this is at most (Ct) to the power of the real dimension of the image of P'_J. The complex image of P'_J had dimension εn, so the real dimension will be 2εn. It may seem strange that we got a 2 here — our small ball probability looks better than we would expect — but this 2 will disappear in a second, because what we bounded was the squared probability.

So let's summarize what we proved; let me formulate it as a proposition: the probability that the distance between the original vector B_J and the original subspace H_J is at most t√(εn) is bounded by (Ct)^{εn} — we have taken the square root, which is where the 2 in the exponent went. And this is the reason why the bounded density case is incomparably easier than the general case: we established this small ball probability bound without any information about the subspace H_J. We do not know anything about H_J, and the bound is uniform over all positions of such subspaces. This is impossible if the entries are discrete: then the small ball probability bound depends on the arithmetic structure of the subspace H_J, and determining that arithmetic structure and showing that a typical realization of H_J does not have any arithmetic structure takes 80% of the work in the proof of the general result.
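Before moving on, here is the symmetrization step above written as a single chain (my reconstruction; C denotes a constant depending only on the density bound):

$$
\mathbb{P}\big(\|P'_J X_J-u\|\le t\sqrt{\varepsilon n}\big)^2
=\mathbb{P}\big(\|P'_J X_J-u\|\le t\sqrt{\varepsilon n},\ \|P'_J(i\hat X_J)-iu\|\le t\sqrt{\varepsilon n}\big)
$$
$$
\le\ \mathbb{P}\big(\|P'_J(X_J+i\hat X_J)-(u+iu)\|\le 2t\sqrt{\varepsilon n}\big)\ \le\ (Ct)^{2\varepsilon n},
$$

using the independence of X_J and X̂_J, the fact that multiplication by i is an isometry preserving the complex subspace H'_J (so it commutes with P'_J), the triangle inequality, and finally the Lévy concentration bound applied to the vector (X_J, X̂_J) ∈ R^{2(n−1)}, whose coordinates are independent with uniformly bounded density, with a projection of real rank about 2εn. Taking square roots gives the proposition above.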
Fortunately, we will not touch that arithmetic-structure analysis at all. So we now have this bound, and if I denote by Y_J the quantity that appears in the right-hand side of the identity — the negative second power of the distance, Y_J = dist(B_J, H_J)^{−2} — then I can conclude that for any τ > 0, the probability that Y_J is greater than τ/(εn) is at most (C/τ)^{εn/2}. Here I used τ = t^{−2}.

Okay, so we have the tail probability for one of the Y_J's. Of course, once we have the tail probability for one variable, we want to establish the tail probability for the sum, and the problem is that the Y_J are, of course, not independent. Here we can use some basic functional analysis to get this tail probability. First, let's write the inequality in a different language. If I have a random variable Y, I can define its weak L^p norm as the supremum over τ > 0 of τ times the probability that |Y| is greater than τ, raised to the power 1/p. This is a standard notion in functional analysis and harmonic analysis — the weak L^p norm. In this language we can say that the weak L^p norm of Y_J is at most a constant over εn, for p equal to εn/2. And then, in principle, we can use the triangle inequality.
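For the record, here is the computation behind the last two statements, written out as my own summary of the step just sketched, with the convention that C may change from one occurrence to the next. From the proposition, substituting t = τ^{−1/2},

$$
\mathbb{P}\Big(Y_J \ge \frac{\tau}{\varepsilon n}\Big)
= \mathbb{P}\Big(\operatorname{dist}(B_J, H_J) \le \tau^{-1/2}\sqrt{\varepsilon n}\Big)
\le \big(C\tau^{-1/2}\big)^{\varepsilon n}
= \Big(\frac{C}{\tau}\Big)^{\varepsilon n/2},
$$

and therefore, for p = εn/2 and any s > 0 (take τ = s·εn in the bound above),

$$
\|Y_J\|_{L^{p,\infty}}
= \sup_{s>0}\, s\,\mathbb{P}\big(Y_J \ge s\big)^{1/p}
\le \sup_{s>0}\, s \cdot \frac{C}{s\,\varepsilon n}
= \frac{C}{\varepsilon n}.
$$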