of lambda times the absolute value of x_v, plus twice the cardinality of Gamma(v) intersected with W, times the l-infinity norm of x. And here we are going to use delocalization, and in fact we are going to use two types of delocalization at the same time, l-infinity and no-gaps. First let's use the l-infinity delocalization, which tells us that with high probability the l-infinity norm of x (let's use just one coordinate) is bounded by a constant times log^c n over the square root of n. This was proved by Erdős, Knowles, Yau, and Yin. There are different exponents c for the bulk and for the edge, but we are not going into these details; just one absolute constant will be enough for us. So this is, roughly speaking, of order 1 over the square root of n. And lambda is a non-top eigenvalue, so it is bounded by C times the square root of n; the coordinate is bounded by the l-infinity norm, so by C log^c n over the square root of n; plus twice the second term. Here I have already prepared the bound for this set: the size of W, if p is a constant, is at most C log^2 n, times log^c n over the square root of n. So this is a small correction; the dominating term is the first one, and the whole expression is bounded by C log^c n. Okay, so we have bounded the l1 norm; now let's pass to the l2 norm. Hence the l2 norm of x restricted to Gamma(v) is at most the square root of the product of the l-infinity norm of x and the l1 norm of x restricted to Gamma(v). Again I use the l-infinity delocalization: the first factor is C log^c n over the square root of n, the second is log^c n, so the product is at most a constant times n to the negative one-quarter times log^c n. This number is tiny. At the same time, the cardinality of Gamma(v), where Gamma(v) is the set of all vertices connected to v, is the degree of v; the average degree is np, and the degree is concentrated about np, so with high probability the cardinality of Gamma(v) is greater than np over 2.
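The l-infinity delocalization bound can be illustrated numerically. The following is a minimal sketch, not part of the lecture: the graph size, the seed, and the choice of the second-largest eigenvector are my illustrative assumptions. It samples an Erdős-Rényi adjacency matrix and checks that the largest coordinate of a unit eigenvector stays on the polylog-over-square-root-of-n scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 0.5  # illustrative parameters

# symmetric 0/1 adjacency matrix of an Erdos-Renyi graph G(n, p)
upper = np.triu(rng.random((n, n)) < p, k=1).astype(float)
A = upper + upper.T

# eigendecomposition; take a non-top unit eigenvector
vals, vecs = np.linalg.eigh(A)
x = vecs[:, -2]

# largest coordinate, measured against the 1/sqrt(n) scale;
# delocalization says this ratio is only polylogarithmic in n
ratio = np.max(np.abs(x)) * np.sqrt(n)
print(ratio)
```

For a unit vector the ratio is always at least 1; the point is that it does not grow like a power of n.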
And I have a set of coordinates of large cardinality which carries a very small l2 norm; this contradicts no-gaps delocalization. So the assumption that we have a vertex in the exceptional set is incorrect, and we have proved the Arora-Bhaskara theorem. This is a typical situation: the two types of delocalization, l-infinity and no-gaps, work hand in hand. Let me describe another such situation, the Braess paradox. The Braess paradox was observed by civil engineers in the 1960s, and it says that if you add a highway to an existing highway network, you may increase congestion. This is counterintuitive, but it happens in practice. There were attempts to explain the Braess paradox mathematically, and the most popular model was suggested by Fan Chung, who formulated it as follows. We have to model the highway network, and we have to model the congestion. For the highway network, she took an Erdős-Rényi graph; this may feel a little far-fetched, but at least it is a first step. So we will consider an Erdős-Rényi graph. And what is the congestion? To formalize it, recall that if you run a random walk on a graph, the walk mixes, and its relaxation time (not the mixing time, but the relaxation time) is the reciprocal of the second eigenvalue of the Laplacian; the first eigenvalue of the Laplacian is zero, and the second is what is called the spectral gap, so the relaxation time is the reciprocal of the spectral gap. Since the mixing of the random walk is related to the traffic capacity of the network, we will say that the congestion is measured in terms of the spectral gap. The spectral gap of what? We will consider the normalized Laplacian. So take the adjacency matrix of the Erdős-Rényi graph, let D_G be the diagonal matrix with the degrees on the diagonal, and normalize the adjacency matrix by this degree matrix; the result is what is called the normalized Laplacian.
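As a concrete sketch of this definition, here is the normalized Laplacian and its spectral gap computed with numpy. I am assuming the standard form L = I - D^{-1/2} A D^{-1/2}, and that every degree is positive (true with high probability for constant p); the parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 0.5  # illustrative parameters

# adjacency matrix of G(n, p)
upper = np.triu(rng.random((n, n)) < p, k=1).astype(float)
A = upper + upper.T

deg = A.sum(axis=1)                 # vertex degrees, concentrated around n*p
D_inv_sqrt = np.diag(deg ** -0.5)
L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian

eigvals = np.sort(np.linalg.eigvalsh(L))
spectral_gap = eigvals[1]           # smallest eigenvalue is 0; the second one is the gap
print(spectral_gap)
```

The smallest eigenvalue comes out as zero (up to rounding), with eigenvector proportional to D^{1/2} applied to the all-ones vector.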
So the conjecture of Chung was that if you take an Erdős-Rényi graph, independently pick a uniformly random pair of vertices conditioned on their not being connected by an edge, and add this edge to the graph, then with positive probability the spectral gap will decrease, where "positive" is independent of n. We have two probability spaces here, the ambient space of the graph and the space in which we add the edges; in order not to mix them up, I will talk about the proportion of edges whose addition decreases the spectral gap. So let a_minus(G) be the proportion of non-edges such that adding one of them decreases the spectral gap. The conjecture of Chung was then that the probability that a_minus(G) is greater than some constant c is at least an absolute constant c prime. This seems counterintuitive, because adding an edge to a graph brings it closer to the complete graph, and the complete graph has the maximal spectral gap. Nevertheless, this conjecture was proved very recently by Eldan, Rácz, and Schramm. We are going to prove a slightly stronger version of this conjecture: we will prove it with c equal to one-half. So, surprisingly, if you add a random non-edge to the graph, the spectral gap decreases with probability at least one-half. To do this, we have to analyze when the spectral gap decreases, and this job was done for us by Eldan, Rácz, and Schramm. Theirs is a deterministic statement; it is linear algebra, or rather tedious linear algebra. It says that if you have a graph whose vertex degrees are commensurate, and you take the second eigenvector of the Laplacian, then whenever you have a non-edge whose coordinates in this eigenvector satisfy this inequality, the addition of that non-edge to the graph decreases the spectral gap. The proof is a rather tedious calculation, but it is straightforward.
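The strengthened conjecture lends itself to a quick empirical sanity check. The sketch below, with illustrative (and small, to keep the run fast) graph size, sample size, and seed of my own choosing, samples random non-edges of G(n, p), adds each one, and records how often the spectral gap of the normalized Laplacian goes down.

```python
import numpy as np

rng = np.random.default_rng(2)

def spectral_gap(A):
    """Second-smallest eigenvalue of the normalized Laplacian of A."""
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(deg ** -0.5)
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    return np.sort(np.linalg.eigvalsh(L))[1]

n, p = 120, 0.5  # small illustrative graph
upper = np.triu(rng.random((n, n)) < p, k=1).astype(float)
A = upper + upper.T

gap0 = spectral_gap(A)
non_edges = [(u, v) for u in range(n) for v in range(u + 1, n) if A[u, v] == 0]

sample = rng.choice(len(non_edges), size=200, replace=False)
decreases = 0
for idx in sample:
    u, v = non_edges[idx]
    A[u, v] = A[v, u] = 1.0      # add the non-edge
    if spectral_gap(A) < gap0:
        decreases += 1
    A[u, v] = A[v, u] = 0.0      # and remove it again

fraction = decreases / len(sample)
print(fraction)   # fraction of sampled non-edges that decreased the gap
```

The theorem predicts that, with high probability in the graph, this fraction is at least about one-half for large n; at this small size the number is only suggestive.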
You just write the definition of the second eigenvalue using the Rayleigh quotient; you know the difference between the adjacency matrix of the graph and that of the graph with an added edge, and you know the first eigenvector of the graph and the first eigenvector of the graph with the added edge. You write out the Rayleigh quotient, cancel what you can cancel, and you arrive at the theorem of Eldan, Rácz, and Schramm. So let's prove the result. As you see, this additional term is very small: say p is constant, then it is 1 over n squared, and the coordinates are typically 1 over the square root of n, so this is 1 over n, which makes the term 1 over n to the three-halves. So this term cannot do much harm, and what we have to compare is x_u squared plus x_v squared over x_u x_v; this is the interesting quantity. So what can we say about the vector x? The vector x is an eigenvector of the Laplacian, and we can apply the delocalization. There is a small caveat: we proved the delocalization results for the adjacency matrix, and this is not the adjacency matrix. But the identity matrix doesn't play much of a role, it just shifts the spectrum, and the D_G's are the degree matrices. If p is constant, then the degrees are highly concentrated; each degree is almost np, up to a small error, so the matrix D to the negative one-half is almost a multiple of the identity, with entries close to (np) to the negative one-half on the diagonal. This means that an eigenvector of the Laplacian will not be a precise eigenvector of the adjacency matrix, but it will be an approximate eigenvector of the adjacency matrix, and you can estimate the error using the concentration of the degrees. We can handle approximate eigenvectors, so they are delocalized in both senses. This means that the l-infinity norm of x is bounded, if you do the accurate calculation (you pick up an error because the degrees are non-constant), not by 1 over the square root of n but by C n to the negative one-quarter times log^c n; the calculation can be found in the notes. And also we have no-gaps delocalization,
which means the following: there is a set W of vertices such that the cardinality of V minus W is small, at most n to the power one minus something (one over 48, say), and such that for every w in this set W the absolute value of x_w is at least, well, we would wish to have n to the negative one-half, but again, since we are dropping a set of cardinality little o of n, we incur some error: n to the negative five-eighths. The reason is the no-gaps delocalization: if I look at the complement of W, on the complement I have the opposite inequality, and if I have the opposite inequality coordinate-wise, I can estimate the l2 norm which falls on the set V minus W. This l2 norm will be small, and if the norm is small, the set cannot be large, according to the no-gaps delocalization. So typically the coordinates are large with high probability, and all coordinates are also small with high probability. Okay, so let's look now at this ratio, and let's take the maximum of it over, say, u and v in the positive set. Not the positive nodal domain (we didn't talk about nodal domains of the normalized Laplacian), but simply the positive set, so that x_u and x_v are positive. If I cancel, say, x_u, this is at most twice the maximum over u, v in V of x_u over x_v, and this is at most C n to the three-eighths times log^c n: it is a maximum over a minimum, so n to the negative one-quarter divided by n to the negative five-eighths. If I divide by the square root of n, this is much less than one, which means that the additional term in the Eldan-Rácz-Schramm theorem doesn't matter, and so the inequality holds. It means that if I have a non-edge u, v such that both coordinates x_u and x_v are positive, then adding this non-edge decreases the spectral gap. In the same way, if I have a non-edge in the negative set, then both x_u and x_v are negative, this fraction is positive, and the same
argument shows that if I take a non-edge inside the positive set, or a non-edge inside the negative set, and add it, the spectral gap decreases. Again, this is based on the combination of l-infinity delocalization and no-gaps delocalization. The rest is bookkeeping: let's count the number of non-edges in the positive and negative sets. So a_minus(G), the proportion of non-edges whose addition decreases the spectral gap, is greater than or equal to the number of non-edges inside P, plus the number of non-edges inside N, over the total number of non-edges; greater than or equal, because some non-edges running between the two sets may also decrease the spectral gap. Now we have to estimate this fraction. The number of non-edges can be computed for an Erdős-Rényi graph, and it is highly concentrated: the average number of non-edges is the number of possible edges times 1 minus p, because we fail to connect a given pair with probability 1 minus p. So the fraction is roughly (1 minus p) times (cardinality of P choose 2), plus (1 minus p) times (cardinality of N choose 2), over (1 minus p) times (cardinality of V choose 2), where V is the union of P and N; here I work with the positive and negative sets, and I claim nothing about nodal domains. Since we want a lower estimate, we subtract the error term 2 n to the three-halves in the numerator and add it in the denominator. The cardinality of V is n, so this n to the three-halves is negligible. It remains to bound the main ratio below. The cardinality of P choose 2 is the cardinality of P squared over 2, plus a linear term, and I treat the other binomial coefficient the same way. Then I use the convexity of the square function: the cardinality of P squared plus the cardinality of N squared is at least one-half of (the cardinality of P plus the cardinality of N) squared. The denominator is the cardinality of V squared over 2, up to lower-order terms, and the cardinality of P plus the cardinality of N is n, so I get one-half plus little o of 1, as claimed. This way we proved the conjecture of Fan Chung, and moreover we proved a stronger result, with c equal to one-half. This would be a good point to stop. Any questions? Yes, let me swipe back to the delocalization theorem. Everything is based on the delocalization, and the delocalization requires that the entries of A are not concentrated in small disks. Let me write what this means precisely: for any complex z, the probability that the absolute value of a_ij minus z is less than c_1 is at most c_2, where c_2 is a constant less than one. What happens if we consider Erdős-Rényi graphs with a small p? An entry of the adjacency matrix is zero with probability 1 minus p, so for small p this concentration probability approaches one. We therefore have to trace the dependence of the constants in the no-gaps delocalization on the constant c_2, and the proof shows clearly that the dependence is polynomial, which means that I can go down to p of order n to the negative alpha, where alpha is some constant in (0, 1). Can we get alpha close to one, and can we get to the sub-polynomial level? I don't know. Okay, so let's all thank Mark again.
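The convexity step in the bookkeeping above can be written out in one short derivation. This is a sketch in the notation of the lecture, writing P and N for the positive and negative sets with |P| + |N| = n, and using the concentration error term 2 n^{3/2} mentioned there.

```latex
% Convexity of t -> t^2 gives a^2 + b^2 >= (a + b)^2 / 2, applied with a = |P|, b = |N|:
\[
\binom{|P|}{2} + \binom{|N|}{2}
  = \frac{|P|^2 + |N|^2}{2} + O(n)
  \ge \frac{(|P| + |N|)^2}{4} + O(n)
  = \frac{n^2}{4} + O(n),
\]
\[
\binom{n}{2} = \frac{n^2}{2} + O(n),
\qquad\text{hence}\qquad
a_-(G) \ \ge\ \frac{(1-p)\bigl(n^2/4 + O(n)\bigr) - 2n^{3/2}}
                   {(1-p)\bigl(n^2/2 + O(n)\bigr) + 2n^{3/2}}
  = \frac{1}{2} + o(1).
\]
```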