Let me begin by answering the question one should pose at the very beginning of any lecture series. We proved something — we proved no-gap delocalization — and the question is why we care. We know that the eigenvectors are delocalized; so what? We are going to discuss a few applications to random graphs, more precisely to Erdős–Rényi G(n,p) random graphs. But before that, let me flash this transparency again. We are considering no-gap delocalization, which means: for any unit eigenvector x and for any set I of coordinates of cardinality at least εn, the mass falling on these coordinates, ‖x_I‖, is at least polynomial in ε, and this event occurs with high probability.

We are going to use this for random graphs, so I am going to cheat: I proved the theorem under the bounded density assumption, and I am going to use it without the bounded density assumption. That is, I will use a more general result which I did not prove, but which can be proved along the same lines, using the same ideas, although the proof is about five times longer. That result gives the same conclusion for any eigenvector and for any matrix of a reasonable norm, with almost no assumptions on the entries. Moreover, even these two conditions — that v is an eigenvector and that the matrix has a rather small norm — can be significantly relaxed. That is the beauty of the geometric approach: you can basically tweak everything if you are ready to pay a small price.

Let us recall how we approached this theorem. We reduced it to a statement about the smallest singular value, in the following way. We took the matrix A − λ, where λ is an eigenvalue, so that (A − λ)x = 0 for the corresponding unit eigenvector x. Then we assumed that the norm ‖x_I‖ is small and wrote ‖(A − λ)x_{I^c}‖ = ‖(A − λ)x_I‖ ≤ 2‖A‖·‖x_I‖, which is small. Since we are estimating the probability of an inequality rather than an equality, I do not have to require that (A − λ)x is precisely zero: I can instead require only that the norm of this difference is small — appropriately small, so that it does not affect the bound above. So our delocalization theorem is actually a result not about eigenvectors but about approximate eigenvectors.

Second, the previous statement contains the condition that ‖A‖ should not be too large, and sometimes that is not convenient. For example, if I take a matrix with i.i.d. entries whose expectation is non-zero, then the typical norm is about n, not about √n, and such a matrix does not fall under these conditions. But this is also very easy to manage. Suppose that instead of A I consider the matrix A + B, where B has rank one and ‖B‖ ≤ n^10, say. Then I can retell the same story: x is an eigenvector of A + B, so I write that ‖(A + B − λ)x‖ is small, and then ‖(A − λ)x_{I^c} + Bx‖ is at most this small quantity plus ‖(A − λ)x_I‖, which is small as well. So instead of (A − λ)x_{I^c} we have the additional term Bx. But since B has rank one, I can write Bx as θb, where b is a fixed unit vector and θ is a random parameter about which I know nothing except that |θ| ≤ n^10.
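To make the delocalization statement concrete, here is a minimal numerical sketch — purely illustrative, not part of any proof. It draws a symmetric random matrix, takes its unit eigenvectors, and for a few values of ε measures the mass carried by the εn coordinates on which each eigenvector is smallest, which is the worst coordinate set of that size. The matrix size, the ±1 entries, and the values of ε are my own choices, not the ones from the theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Symmetric matrix with independent +/-1 entries (an illustrative choice of distribution).
M = rng.choice([-1.0, 1.0], size=(n, n))
A = np.triu(M, 1)
A = A + A.T + np.diag(rng.choice([-1.0, 1.0], size=n))

# Columns of V are the unit eigenvectors of A.
_, V = np.linalg.eigh(A)

for eps in (0.1, 0.01, 0.002):
    k = max(1, int(eps * n))
    # For each eigenvector, sort |coordinates| and keep the k smallest:
    # this is the hardest coordinate set of cardinality eps*n for that vector.
    smallest = np.sort(np.abs(V), axis=0)[:k, :]
    worst_mass = np.sqrt((smallest ** 2).sum(axis=0)).min()
    print(f"eps = {eps:5.3f}   min over eigenvectors of ||x_I|| = {worst_mass:.2e}")
```

In this picture, no-gap delocalization says that the printed minima stay bounded below by a power of ε, uniformly over all eigenvectors, with probability exponentially close to one.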
So instead of proving that with high probability this norm cannot be small, I will prove that with high probability this norm is not small for any θ. To do this I have to take a union bound over θ. The number of possible values of θ is infinite, but we know what to do: we discretize θ, use the ε-net argument, and pay the price, which is the entropy cost. What is the entropy cost here? θ is a real number, so this is a one-dimensional problem: I discretize the interval [−n^10, n^10] with step 1/n, which is definitely enough because the norms involved are of order √n. This gives an entropy cost of about n^11, and I have to multiply this n^11 by the failure probability, which is a constant to the power εn. If ε is not too small, this factor n^11 is easily absorbed. So we can afford rank-one perturbations as well — and of course rank one plays no special role here; I could have rank 10 or whatever.

So why did I tell this story, besides generalization for the sake of generalization? Let's consider Erdős–Rényi graphs. I have a graph G = (V, E), where V is an n-element set, and any pair of vertices v, w in V is connected by an edge with probability p, independently for each pair; I do not allow self-loops. This is what is called an Erdős–Rényi, or G(n,p), graph. Now consider the adjacency matrix of this graph: an n × n matrix A with a_ij = 1 if and only if i is connected to j. This is a random matrix; it is symmetric, with independent Bernoulli(p) entries above the diagonal. There are two problems with it: the diagonal of A is zero, because I did not allow self-loops, and the expectations of the other entries are p, not zero. But since we know how to handle perturbations of small rank and approximate eigenvectors, this is not a problem for us. I write A = Ã + p·𝟙 + Δ, where Ã is the symmetric matrix whose entries above the diagonal are independent centered Bernoulli(p) variables — shifted so that E ã_ij = 0 — 𝟙 is the matrix of all ones, p is the edge probability, and Δ is the diagonal matrix that takes care of the diagonal entries; its norm is at most one.

So I have decomposed my matrix into a part of manageable norm plus a manageable perturbation. The entries of Ã are centered and bounded, and for bounded centered entries the norm of Ã is of order √n except with probability at most e^{−cn}, so we are fine on this count; this term will not bother us. As for the perturbation: p·𝟙 has rank one, and Δ has small norm, so Δ goes into the error term, and any eigenvector of A is an approximate eigenvector of Ã + p·𝟙. The conclusion is that the delocalization theorem applies to the adjacency matrix of the graph.
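Here is a small sketch of that decomposition, under one concrete bookkeeping convention (Ã with zero diagonal and centered off-diagonal entries, Δ = −pI); other equivalent choices are possible. It builds the adjacency matrix of a G(n,p) graph and compares the norms of the three pieces, so one can see that the centered part has norm of order √n while the order-n norm sits entirely in the rank-one part.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 1000, 0.3

# Adjacency matrix of G(n, p): symmetric, Bernoulli(p) off the diagonal, zero diagonal.
U = (rng.random((n, n)) < p).astype(float)
A = np.triu(U, 1)
A = A + A.T

J = np.ones((n, n))            # rank-one all-ones matrix
Delta = -p * np.eye(n)         # diagonal correction, norm p <= 1
A_tilde = A - p * J - Delta    # centered part, so that A = A_tilde + p*J + Delta

print("||A_tilde||     ~", np.linalg.norm(A_tilde, 2))   # spectral norm, about 2*sqrt(np(1-p))
print("2*sqrt(np(1-p)) =", 2 * np.sqrt(n * p * (1 - p)))
print("||p*J|| = p*n   =", p * n)                         # the rank-one perturbation
print("||Delta|| = p   =", p)                             # absorbed into the error term
```

Only the √n-sized part has to be handled by the delocalization machinery; the rank-one part is absorbed by the θ-net argument, and Δ by passing to approximate eigenvectors.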
So what can we do with this? Let's first consider the question of nodal domains of a random graph. What is a nodal domain? Take, say, the Dirichlet problem for the Laplacian: you compute the eigenfunctions, and then you ask how many connected components the sets where an eigenfunction is positive or negative have. Consider the one-dimensional problem: I have a string, and the first eigenfunction is just one arch of a sine. The second eigenfunction changes sign once, the third one twice, and so on, and I count the connected components on which the eigenfunction keeps the same sign: this is one connected component, this is another. We get 2 for the second eigenfunction, 3 for the third one, etc. The same picture can be seen in higher dimensions, for example for the full Laplacian on a compact manifold without boundary: as the index of the eigenvalue increases, the number of nodal domains increases. This is a classical subject in analysis and differential geometry, going back to Courant and even beyond.

About 12 years ago, Nati Linial proposed a program of studying nodal domains of random graphs. Of course a random graph is not a manifold, so we want to see whether there is a difference between the behavior of nodal domains for graphs and for manifolds, and he came up with a surprising answer: there is a huge difference. Dekel, Lee and Linial proved, around 2008, that the number of nodal domains of every eigenvector of an Erdős–Rényi graph, except the first one, is bounded by a universal constant. Later, Arora and Bhaskara showed that the number of nodal domains is actually exactly two for all eigenvectors except the first one. The first eigenvector is special because for the adjacency matrix of a random graph the entries have non-zero expectation, so the first eigenvalue is separated from the rest: the first eigenvalue is of order np, while the rest are of order √n — or rather √(np). Let me flash for a moment the properties of random graphs; we will talk about them a bit later. Our aim now is to prove this Dekel–Lee–Linial and Arora–Bhaskara theorem, that the number of nodal domains is exactly two. To complement the picture, there is a more recent result of Tao and Vu showing that with high probability an eigenvector of an Erdős–Rényi graph has no zero coordinates, which means that the nodal domains may be described as the connected components of the sets of strictly positive and strictly negative coordinates.

OK, so now let's see how to prove that the number of nodal domains is exactly two. First of all, the first eigenvector has all positive coordinates — it can essentially be written explicitly — and every other eigenvector must be orthogonal to it, so it must have both positive and negative coordinates. Hence there is at least one positive and one negative nodal domain. Let P be the largest positive nodal domain and N the largest negative nodal domain, and decompose the vertex set as the disjoint union of P, N, and the rest, W. Our aim is to show that W is empty, but we will start with a more moderate task: we will show that, with probability 1 − o(1), W is small — here one can put an explicit bound, |W| ≤ C log²n / p². To simplify the argument, I will assume for the moment that p is a constant.
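Before going into the proof, here is a quick empirical sketch, again purely for illustration: it takes the adjacency matrix of a G(n,p) graph with constant p and, for each eigenvector, counts the connected components of the subgraphs induced by its positive and by its negative coordinates. According to the theorem, one expects the count to be 2 for every eigenvector except the top one; the parameters are my own choices.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(2)
n, p = 500, 0.3

# Adjacency matrix of G(n, p).
U = (rng.random((n, n)) < p).astype(float)
A = np.triu(U, 1)
A = A + A.T

_, V = np.linalg.eigh(A)   # columns are eigenvectors, eigenvalues in increasing order

def nodal_domains(adj, x):
    """Connected components of the subgraphs induced by {x > 0} and by {x < 0}."""
    total = 0
    for mask in (x > 0, x < 0):
        idx = np.where(mask)[0]
        if idx.size:
            ncomp, _ = connected_components(adj[np.ix_(idx, idx)], directed=False)
            total += ncomp
    return total

counts = [nodal_domains(A, V[:, k]) for k in range(n)]
print("top (Perron) eigenvector:", counts[-1])          # all coordinates of one sign
print("all other eigenvectors:  ", set(counts[:-1]))    # expected to be {2}
```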
So how would we prove this? It is elementary and does not require any random matrix theory. Consider the set W. It is a disjoint union of nodal domains — say positive nodal domains P_1, …, P_K and negative nodal domains N_1, …, N_L — and let's take one vertex from each of these small positive nodal domains. These vertices cannot be connected to each other: P_1, P_2, P_3, … are distinct connected components of the positive set, so there cannot be an edge between two of them. In other words, picking one vertex from each nodal domain P_k gives an independent set — a set of vertices with no edges between them. And it is an elementary, probability-101 exercise that, with high probability, every independent set in G(n,p) has cardinality at most C log n / p. So both K and L are at most C log n / p.

Now suppose W is non-empty and take one connected component of it, one nodal domain, and suppose for a moment that this nodal domain is rather big: its size is at least C log n / p. But this nodal domain fell into W for a reason — it is not the largest one — and this means that the size of P is also at least C log n / p. So we have two sets of large cardinality which, for the same reason as before, cannot be connected by any edge: otherwise they would merge into the same connected component. And again it is a probability-101 fact that two large disjoint sets with no edge between them are unlikely, which means our assumption was wrong: the cardinality of each P_k is bounded by C log n / p. The number of these domains is also bounded by C log n / p, and combining the two bounds we get |W| ≤ C log²n / p². Great — so the exceptional set is small.

Now let's prove the Dekel–Lee–Linial and Arora–Bhaskara theorem. Assume to the contrary that W is non-empty, and pick a vertex v in W. I have the eigenvector x, and I will assume, say, that x_v, the coordinate corresponding to this vertex, is negative. Very good. Now let's write the eigenvalue–eigenvector equation, Ax = λx, and read its v-th row: λx_v = (Ax)_v, and since the entries of A are zeros and ones, this is the sum of x_u over all u in Γ(v), where Γ(v) is the set of all vertices connected to v — only the connected vertices contribute ones.

Now let me estimate the ℓ1 norm — not the ℓ2 but the ℓ1 norm — of x restricted to Γ(v): this is the sum over u in Γ(v) of |x_u|. What is Γ(v)? The coordinate x_v is negative and v belongs to the exceptional set, which means that v cannot be connected to the largest negative nodal domain N; so Γ(v) is disjoint from N. By exclusion, the ℓ1 norm equals the sum over u in Γ(v) ∩ P of x_u plus the sum over u in Γ(v) ∩ W of |x_u| — for vertices from the largest positive nodal domain the coordinates are positive, so no absolute values are needed there. Now let's complete the first sum to the sum over the whole set Γ(v); to compensate for the newly added terms, it is enough to add at most the same W-sum once more. So the ℓ1 norm is at most the sum over u in Γ(v) of x_u plus twice the sum over u in Γ(v) ∩ W of |x_u|, and I know what the first sum is by the eigenvalue equation: it equals λx_v, so the whole thing is at most |λ x_v| plus twice the sum over Γ(v) ∩ W of |x_u|.
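The two probability-101 facts used above come from the same union-bound computation; here is a sketch of the first one, with an unoptimized constant. For a fixed set S of k vertices, the probability that S is independent in G(n,p) is (1 − p)^{k(k−1)/2}, so

```latex
\Pr\bigl[\exists\ \text{an independent set of size } k\bigr]
  \;\le\; \binom{n}{k}\,(1-p)^{\binom{k}{2}}
  \;\le\; n^{k}\, e^{-p\,k(k-1)/2}
  \;=\; \exp\!\Bigl(k\bigl(\log n - \tfrac{p(k-1)}{2}\bigr)\Bigr),
```

which tends to zero once k is, say, at least 3 log n / p. The same computation, applied to the k² potential edges between two disjoint sets of k vertices each, shows that with high probability every two disjoint sets of C log n / p vertices are joined by at least one edge — the second fact used above.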