OK, so today, for this last lecture, I want to address a question a little different from the ones that were planned; it is nevertheless a refinement of what we have seen so far. So imagine that you have a sequence of finite graphs which converges, in the local sense, to a limit; for concreteness, take the uniform distribution on random d-regular graphs. Then, by the weak convergence theorem, the empirical distribution of the eigenvalues converges to the spectral measure of the limit object. A consequence of this is that, if S is the support of the limiting spectral measure, the proportion of eigenvalues that lie in any epsilon-neighborhood of S goes to 1. So the basic question I want to discuss today is: are there other eigenvalues? And if so, where are they? That is to say: there may be a vanishing proportion of eigenvalues, a vanishing proportion only, which sit outside a neighborhood of the limiting support. OK, so a first remark. We have seen that the map which sends a rooted graph to its spectral measure is continuous for the Benjamini-Schramm topology. So in fact this implies that the typical eigenvalues, the ones the limit describes, depend only on the typical local structure of the graph. Hence the eigenvalues which lie outside these intervals can be of two types only: they can be caused by the local structure at an atypical point of your graph, or they can be produced by a global geometric structure of your graph. Let us give a very basic example.
If G is a d-regular graph, and you order its eigenvalues: the adjacency matrix has no negative entries, so the first eigenvalue is d, with the Perron eigenvector. But if the second eigenvalue is also equal to d, it means that the graph has two connected components. Similarly, the fact that lambda_n is minus d says that the graph is bipartite. So you see that eigenvalues outside the limiting support — these outliers — can be caused by a global geometric property of the graph. OK. And there is a famous theorem, already mentioned, a theorem by Friedman, from 2008, which says the following: if G is uniform among all d-regular graphs with n vertices — so d is fixed, and nd is even — then, with high probability, for any epsilon > 0, the probability that the second-largest eigenvalue, or the smallest one in absolute value — that is, max(lambda_2, |lambda_n|) — is larger than 2 sqrt(d-1) + epsilon goes to 0 as n goes to infinity. OK. So this is exactly the situation above: when G_n is a uniform d-regular graph, the local limit is the Dirac mass at the infinite d-regular tree, and we have seen that the Kesten-McKay measure, the spectral measure of the infinite d-regular tree, is supported on the interval [-2 sqrt(d-1), 2 sqrt(d-1)]. So Friedman's theorem says that, apart from the Perron eigenvalue lambda_1 = d, all the other eigenvalues are, with high probability, within epsilon of the limiting support. So I would like to try to explain to you how this phenomenon has some kind of generality. And for that I will study a particular example. So this behavior should be present quite generally.
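Friedman's theorem can be observed numerically. The sketch below (helper names are mine) samples a random d-regular multigraph as a sum of d uniform perfect matchings — the matching model that appears later in the lecture — and compares lambda_2 with 2 sqrt(d-1):

```python
import numpy as np

def random_matching_matrix(n, rng):
    """Permutation matrix of a uniform perfect matching on n points (n even):
    a fixed-point-free involution sigma, with sigma(sigma(x)) = x."""
    p = rng.permutation(n)
    S = np.zeros((n, n))
    S[p[0::2], p[1::2]] = 1.0
    S[p[1::2], p[0::2]] = 1.0
    return S

rng = np.random.default_rng(0)
n, d = 400, 4
A = sum(random_matching_matrix(n, rng) for _ in range(d))  # d-regular multigraph

eigs = np.sort(np.linalg.eigvalsh(A))
lam1, lam2 = eigs[-1], eigs[-2]
# lam1 = d exactly (Perron eigenvalue, constant eigenvector);
# lam2 is typically close to the Alon-Boppana value 2*sqrt(d-1)
print(lam1, lam2, 2 * np.sqrt(d - 1))
```

This is only an illustration at one fixed n, of course, not a verification of the asymptotic statement.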
It should be present as soon as the limiting local operator is reasonable, even when there is no standard random graph ensemble. OK? So here is the model. The sigma_i are matchings, which means that for every x, sigma_i(x) is different from x, and sigma_i squared of x is equal to x. S_i is the permutation matrix of sigma_i, and we set A = a_1 S_1 + ... + a_d S_d. So what does this describe? If you look at the graph of this matrix, at each vertex you have d edges — in the picture d = 3 — you can color each edge by its index i, and you put the weight a_i on it: a_1, a_2, a_3. And all the vertices have exactly the same local environment. When the a_i are not all equal this is called anisotropic; I think when all the a_i are equal to 1 it is the isotropic case — it is called anisotropic in the book. I forgot the name for a moment — ah, there is a book on operators of this type, the book of Figà-Talamanca and Steger. What we are going to do: we will be interested in the model where each sigma_i is a matching, sampled uniformly among all possible matchings, independently. So the sigma_i are uniformly sampled. If all the a_i are equal to 1, this is essentially the random d-regular graph again. OK, so that is the model. [Question from the audience: what is the best scale you expect — by how much do you need to enlarge the support so that, with high probability, it captures all the other eigenvalues? Is the conjecture n to the minus one-third, or n to the minus two-thirds?] So, what I proved is that enlarging by a (log log n over log n) squared, with some constant, is enough — that is what I proved. But if I had to guess, the truth should be n to the minus two-thirds. It should be true, but it is a big conjecture — no, it is not proved, but it would be consistent with Tracy-Widom scaling. OK?
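The anisotropic model just described is easy to set up. A minimal sketch (the weights and helper names are my choices, not the lecture's): every vertex sees the same multiset of weights, so the all-ones vector is an eigenvector with eigenvalue a_1 + ... + a_d, which is the Perron eigenvalue.

```python
import numpy as np

def matching(n, rng):
    """A uniform perfect matching sigma on {0,...,n-1}, as an index array:
    sigma[sigma[x]] = x and sigma[x] != x."""
    p = rng.permutation(n)
    s = np.empty(n, dtype=int)
    s[p[0::2]] = p[1::2]
    s[p[1::2]] = p[0::2]
    return s

rng = np.random.default_rng(1)
n, weights = 200, [0.5, 0.3, 0.2]            # d = 3, anisotropic weights a_i
sigmas = [matching(n, rng) for _ in weights]

A = np.zeros((n, n))
for a, s in zip(weights, sigmas):
    A[np.arange(n), s] += a                   # A = sum_i a_i S_i, symmetric

row_sums = A.sum(axis=1)                      # every row sums to a_1 + ... + a_d
lam = np.sort(np.linalg.eigvalsh(A))
print(lam[-1])                                # Perron eigenvalue = sum of the a_i
```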
So, that is the model. For the first part — first, a remark. I said that most of the eigenvalues will be in an epsilon-neighborhood of the limiting support. In fact, when the a_i are positive, there is a very good bound, called the Alon-Boppana bound, which you can think of as a quantitative and more general way to give lower bounds on lambda_2 or on lambda_n. So what is it? First we have to define the same operator, but on the infinite tree. You remember: the d-regular tree is the Cayley graph of the free product of d copies of Z/2Z, with generators g_1, ..., g_d. So you have three colors in my picture: here you have the root of the tree, and when you multiply by g_1 you move along the edge colored 1, and so on. And you define the operator on l^2 of the tree, which you can write in exactly the same way, A* = a_1 lambda(g_1) + ... + a_d lambda(g_d): you replace the matchings by multiplication by the g_i, where lambda is the left regular representation. OK? We have already seen this representation. So this operator is the same model, but on the infinite tree. And the book of Figà-Talamanca and Steger is precisely about these operators on homogeneous trees. We will only be interested in the operator norm of this operator. So it has a spectrum; for this operator there are densities that you can compute, which generalize the Kesten-McKay distribution; it is symmetric, contrary to my picture, and here are plus and minus the operator norm of this object. And there is a formula for this norm, proved by Akemann and Ostrand in nineteen-seventy-something.
It says that the operator norm of A* is an infimum, over positive parameters, of an explicit expression in the a_i — I will not need the exact formula. So the Alon-Boppana bound will say that, for this operator, lambda_2 is at least this norm of A*, up to a vanishing error term. — Yes? [Question: is it essential for you to assume the weights are positive?] So, for this formula, no. I mean, you could even say they are complex and you would have a modulus squared — no, they cannot be complex in this setting, so a_i squared, say. So no: for this formula they need not be positive, and since it is an operator on a tree, the norm does not depend on the signs of the weights. But for the Alon-Boppana bound, the positivity of the entries will play a role, as we will use. So, what is this Alon-Boppana bound? I will give you the main argument — the same argument that we have seen already a few times. So, I come back to this model. Oh, there is a graph, of course: we call G the graph which depends on sigma_1, ..., sigma_d. OK. I found it more convenient to define the adjacency matrix directly, but there is a graph underlying it. So, we can find two vertices whose distance is equal to the diameter of the graph. OK. It means that if you take h less than the diameter of the graph divided by 2, you will find two points such that the balls of radius h around each of these points are non-intersecting. Then the second eigenvalue of A — the eigenvalues of this matrix A — is, by the Courant-Fischer variational formula, at least the second eigenvalue of the matrix where I keep only those two balls: I put to 0 all the entries outside them.
So, let us call these balls B(x, h) and B(y, h). OK. But these two balls are disjoint, so the second eigenvalue of the restricted matrix is just the minimum of the largest eigenvalue of the operator restricted to this ball and of the operator restricted to that ball. OK. But my graph is completely homogeneous, so these two numbers are equal, and the minimum is lambda_1 of A restricted to B(x, h). OK. Now, this is where positivity is used. Suppose you count moments of A, or of A restricted to the ball, in my graph G: take <e_x, A^k e_x> — you take k even; oh, you do not even need that for this argument. OK. So, I claim that this is at least the same quantity on the tree, <e_o, (A*)^k e_o>. What is that? You remember, this is the sum over closed paths from x to x of length k of the product of the weights that I have gathered along the path. OK. Now suppose I make a sequence of decisions on the tree which brings me back to the root — I choose, for example, a step using sigma_1, then a step using sigma_2, and so on, and the corresponding word reduces to the identity. If I make exactly the same set of decisions in the graph, then when I am back at the root there, I am also back at x here. So every closed path of length k at the root of the tree gives a closed path of length k at x. What I am saying is that, as we have seen yesterday, the infinite d-regular tree is a cover of the graph G: there are more closed walks in the graph than in the cover. OK. So, a consequence of this, if you think about it, is that lambda_1 of A restricted to B(x, h) is larger than the largest eigenvalue of the operator A* when you just cut it at depth h: you look at the operator on the tree, and you cut it at depth h. OK.
And this cut operator acts on a finite graph, and its largest eigenvalue is a lower bound for the largest eigenvalue of A on the ball. Now, as h goes to infinity, this largest eigenvalue converges to the norm of A*: you can write it as the norm of A* minus some function which goes to 0 as h goes to infinity. And so you will get the Alon-Boppana bound. You see that here you have used the fact that the a_i are positive: there are more terms on the graph side, and they are all positive. So the Alon-Boppana bound says: lambda_2 of A is larger than the norm of A* minus some function epsilon(h), which depends on the a_i and goes to 0, and you take h of order the diameter of G. OK. And for example, if the a_i are all equal to 1 — the usual case of regular graphs — you can take epsilon(h) to be c over h squared. The constant c will depend on d, I think; I am not sure. OK. So, in fact, the consequence of this is that not only for graphs which are locally tree-like: for any graph, the second-largest eigenvalue is essentially at least the right edge of the spectrum of the operator on the infinite tree, on the limit object. The tree gives a universal lower bound for the second-largest eigenvalue. OK. So now, there is an analog of the theorem of Friedman, which says that, in this case, if the sigma_i are uniform matchings, this bound is sharp. So lambda_2 of A converges in probability to the upper bound: with high probability, for any epsilon, the probability that lambda_2 is larger than the norm of A* plus epsilon tends to 0. OK. So it is a slight generalization of the theorem of Friedman, but I have chosen this statement because its proof requires ingredients which were not present in the original strategy of Friedman. And when you see the strategy of proof of this result, you will see that a general picture emerges. OK.
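Both steps of the Alon-Boppana argument can be checked numerically in the isotropic case a_i = 1. The sketch below (assumptions: d = 3, my own helper names) verifies the covering claim — every diagonal entry of A^k dominates the number of closed walks at the root of the tree — and that lambda_1 of the depth-h tree increases towards 2 sqrt(d-1):

```python
import numpy as np

def matching_matrix(n, rng):
    """Permutation matrix of a uniform perfect matching (fixed-point-free involution)."""
    p = rng.permutation(n)
    S = np.zeros((n, n))
    S[p[0::2], p[1::2]] = 1.0
    S[p[1::2], p[0::2]] = 1.0
    return S

def truncated_tree(d, h):
    """Adjacency matrix of the d-regular tree cut at depth h:
    the root has d children, every other internal vertex has d-1 children."""
    edges, frontier, nxt = [], [0], 1
    for depth in range(h):
        new_frontier = []
        for v in frontier:
            for _ in range(d if depth == 0 else d - 1):
                edges.append((v, nxt))
                new_frontier.append(nxt)
                nxt += 1
        frontier = new_frontier
    T = np.zeros((nxt, nxt))
    for u, v in edges:
        T[u, v] = T[v, u] = 1.0
    return T

rng = np.random.default_rng(2)
n, d, k = 100, 3, 8
A = sum(matching_matrix(n, rng) for _ in range(d))

# The tree covers G: closed walks at the tree root project to closed walks at any x,
# so (A^k)_{xx} >= (T^k)_{oo} for every vertex x.
T = truncated_tree(d, k // 2 + 1)           # depth k/2 suffices for length-k walks
tree_walks = np.linalg.matrix_power(T, k)[0, 0]
graph_walks = np.diag(np.linalg.matrix_power(A, k))
print(tree_walks, graph_walks.min())

# lambda_1 of the depth-h tree increases towards ||A*|| = 2*sqrt(d-1) ~ 2.828
lam = [np.linalg.eigvalsh(truncated_tree(d, h))[-1] for h in (2, 4, 8)]
print(lam, 2 * np.sqrt(d - 1))
```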
So, what are the hidden difficulties in this statement? Since the model is quite simple, the only method that comes to mind, at least at first, in order to control large eigenvalues, is the trace method: you take k even and you bound the trace of A^k. So — did I say that the sum of the a_i is equal to 1? I normalize my a_i to sum to 1, so A is the transition matrix of a Markov chain, the constant vector is the Perron eigenvector, and lambda_1 = 1. So trace(A^k), which is the sum of the lambda_i^k, is at least lambda_1^k + lambda_2^k — the other terms are bounded below by 0 for k even — and lambda_1^k is equal to 1, because I normalized. So you would like to prove that this is of order 1 plus n times rho^k, where rho is the norm of A*. Why is there an n? First, because there is a generalization of the Alon-Boppana bound, due to Serre, which says that in fact there is a positive proportion of eigenvalues which are epsilon-close to the norm of A*. So in any case there are of order n eigenvalues which are close to that, and you have no hope of removing this n. And you can also see it on the picture here, without invoking Serre's result: the limiting measure puts positive mass near the edge, so for any epsilon there is a positive proportion of eigenvalues there. OK, so you aim at that: you want to prove that trace(A^k) minus 1 is bounded by n times (rho + epsilon)^k. OK. Now, this bothersome factor n — which is what you lose because this bound is very rough — implies that you must take k much larger than log n. You write that lambda_2 is bounded by n^{1/k} times (rho + epsilon): if k is much larger than log n, then n^{1/k} is 1 + o(1) and you get a matching upper bound; otherwise this factor is large and you do not.
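The two inequalities driving this discussion — trace(A^k) - 1 >= lambda_2^k for k even, and trace(A^k) - 1 <= n * max_i|lambda_i|^k over the non-Perron eigenvalues, which is where the n^{1/k} factor comes from — can be seen on a small instance of the normalized model (parameters are my choice):

```python
import numpy as np

def matching(n, rng):
    """Uniform perfect matching as an involution array."""
    p = rng.permutation(n)
    s = np.empty(n, dtype=int)
    s[p[0::2]] = p[1::2]
    s[p[1::2]] = p[0::2]
    return s

rng = np.random.default_rng(3)
n, weights, k = 300, [0.4, 0.35, 0.25], 20      # sum a_i = 1, k even
A = np.zeros((n, n))
for a in weights:
    s = matching(n, rng)
    A[np.arange(n), s] += a                      # symmetric, row sums 1: lambda_1 = 1

eigs = np.sort(np.linalg.eigvalsh(A))
lam2 = eigs[-2]
lam_max = max(abs(eigs[0]), abs(eigs[-2]))       # largest non-Perron modulus
tr = np.sum(eigs ** k)                           # = trace(A^k)
upper = (tr - 1.0) ** (1.0 / k)                  # the trace bound on lambda_2
print(lam2, upper, n ** (1.0 / k) * lam_max)     # upper <= n^(1/k) * lam_max
```

With k = 20 and n = 300, n^{1/k} is about 1.33, which illustrates why one needs k much larger than log n before the trace bound matches.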
So you are doomed, if you follow this strategy — which is the only one that comes to mind — to find fine asymptotics for the counting of closed walks whose length is much larger than the typical diameter of the graph. The diameter of the graph is of order log n over log(d - 1), as we have seen in Eyal's lecture, but here you have to go much beyond that. So the first difficulty is that the first moment method fails. What do I mean by that? If you are a probabilist, you would want to prove that the expectation of trace(A^k) - 1 is bounded by n times (rho + epsilon)^k. If this were true for k much larger than log n, you would apply the Markov inequality and your statement would be proved. OK, but this is wrong — it is an exercise: somewhere in your graph, with polynomial probability, you will find the complete graph on d + 1 vertices as a connected component. This event has probability like n^{-c} for some c depending on d, like d squared. But on this event you have a copy of K_{d+1} somewhere and the rest of the graph elsewhere, so the graph is disconnected, so lambda_2 is equal to lambda_1, which is 1. It implies that the expectation of lambda_2^k is at least the probability of this event, which is n^{-c}, for any k. But you would like to prove that this expectation decays, for k larger than log n, faster than any polynomial: indeed, if k is much larger than log n, then (rho + epsilon)^k is much smaller than n^{-c} for any constant c. OK? So the first moment method fails. The second difficulty is of combinatorial type. So there is a combinatorial problem — I mean, not a problem, but: if you want to compute the trace of A^k — what do I mean by that? Yes, let us write what it is.
So you write it as a sum over all closed paths gamma = (gamma_1, ..., gamma_k), where — I will not use the index i, I will use t — gamma_t is of the form (x_t, i_t), a vertex and a generator index; so gamma lives in (V x [d])^k. OK? Then there is the product of the weights seen along the path, the product of the a_{i_t}. OK? And then there is the product of the random variables S_{i_t}(x_t, x_{t+1}): you have chosen the i_t-th matching to go from x_t to x_{t+1}. Right? So I have done nothing, I have just expanded the trace. And, as in this picture, the path can come back a lot to the same position. OK? But exactly as in Simone's first lecture, these variables are equal to 1 or 0 with probability roughly 1/n and 1 - 1/n: either they are exactly 0, or they are very large compared to their expectation. So if you spend a lot of time on an edge, it will not cost you much — just the probability of order 1/n for the edge to be present — but it will have a huge influence on this sum. OK? So, whether you use probabilistic or combinatorial methods, you have to deal with a kind of heavy-tailed variables. OK? The paths that come back a lot to edges already visited are the problem, and we will see that we use non-backtracking exactly to avoid that. And the third and last difficulty — this one is a small one — is that the entries are not centered. OK? From a linear algebra point of view, we are interested in trace(A^k) - 1; from a probabilistic point of view, in a product of variables which we have not normalized: the expectation of S_i(x, y) is 1/(n-1), so it is not zero. So the strategy is to solve these three problems, in a very specific order. First — what did I write?
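The expansion of the trace as a sum over closed paths can be checked by brute force on a tiny instance (n = 6, k = 4, my parameters): enumerate all sequences of generator indices, multiply the weights, and keep the sequences whose composition of matchings fixes the starting vertex.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
n, weights, k = 6, [0.5, 0.3, 0.2], 4
sigmas = []
for _ in weights:                     # three fixed-point-free involutions on 6 points
    p = rng.permutation(n)
    s = np.empty(n, dtype=int)
    s[p[0::2]] = p[1::2]
    s[p[1::2]] = p[0::2]
    sigmas.append(s)

A = np.zeros((n, n))
for a, s in zip(weights, sigmas):
    A[np.arange(n), s] += a

# trace(A^k) = sum over closed paths gamma_t = (x_t, i_t) of the product of
# the weights a_{i_t} times the product of the indicators S_{i_t}(x_t, x_{t+1}).
total = 0.0
for idx in product(range(len(weights)), repeat=k):
    w = np.prod([weights[i] for i in idx])
    for x in range(n):
        y = x
        for i in idx:
            y = sigmas[i][y]          # follow the i-th matching
        if y == x:                    # the path closes: it contributes its weight
            total += w

print(total, np.trace(np.linalg.matrix_power(A, k)))  # the two agree
```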
Yes, I think so. First we will solve the second problem — this will be the introduction of the non-backtracking walk; then we will deal with the problem that the first moment method fails; and then we will center the variables and use the trace method. And the order in which you solve the three main difficulties hidden behind this problem is very important. OK, so the solution of the second problem — Joel Friedman found it a long time ago, and we have already mentioned it a few times — is to look at non-backtracking operators. So what is that? We have already seen it a few times, so maybe I will go quickly. Instead of defining an operator on the vertices, we define an operator on the set E of pairs e = (x, i) of a vertex and a generator index: there are d outgoing edges from each vertex, so you can think of such a pair as a half-edge, or as a directed edge — I prefer to think of it as a half-edge, but it is just the way you think about it. OK. Then you define the non-backtracking operator B — which was introduced by Hashimoto in a different context. It is a matrix on E x E which you can write, in matrix form, for e = (x, i) and f = (y, j), as B(e, f) = a_j times the indicator that sigma_i(x) = y, times the indicator that j is different from i. So you have x, you have y; you want x to be mapped to y through the i-th generator, but you want j different from i — that is where the non-backtracking part is: you follow an edge of your graph, but you do not backtrack. And what do we do with the weight? Let us put the weight of the outgoing edge: I put a_j, which is the weight of f. OK. So you can write it — if I didn't make a mistake — as B equal to the sum over all i different from j of a_j S_i tensor E_{ij}, where the E_{ij} are the usual elementary matrices, equal to 1 at the entry (i, j) and 0 elsewhere. I write it like that because you can read off the structure from this form.
All the S_i are permutation matrices, so you see that the constant vector 1 is an eigenvector of B. There is an orthogonal decomposition for the non-backtracking operator: on one side, the vectors which take a constant value on each set {(x, i) : x in V} — one constant for each i, not depending on x; this space contains the constant vector and has dimension d. On the other side, the vectors orthogonal to that, the f such that the sum over x of f(x, i) is 0 for each i; this has the remaining dimension. I will call the latter space H_0. OK. And there are formulas, probably already present in the Hashimoto paper, which usually go under the name of the Ihara-Bass formula, and which are traditional when the a_i are all equal to 1. They say the following: the spectrum of B is equal to {+1, -1} union the set of lambda such that lambda^2 - mu lambda + d - 1 = 0, where mu runs over the eigenvalues of A. OK. So when the a_i are constant there is a kind of dictionary: you give me an eigenvalue of A, and I can tell you where the eigenvalues of B are. When the a_i are not constant, there are connections which in this example you could make explicit, but the point is that a single non-backtracking operator no longer characterizes the spectrum of A: you have to consider not this one non-backtracking operator, but a whole family of non-backtracking operators. And there is a beautiful formula by Nalini Anantharaman — let us call it Anantharaman's formula — which is very general; in this example we could have done it by hand, but it is very general. So take mu in the spectrum of A, but not in the spectrum of A* — I recall that A* is the operator on the tree. Since mu is not in the spectrum of the tree operator, you can define the resolvent of the operator A* at the point mu, and you consider the non-backtracking operator B_mu with weights —
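The tensor form B = sum_{i != j} a_j S_i (x) E_{ij} and the Ihara-Bass dictionary can both be checked numerically in the case a_i = 1. A sketch, with two assumptions of mine: I resample the matchings until their union is a simple graph (the multigraph case needs more care), and I use a loose tolerance because B is not a normal matrix.

```python
import numpy as np

def matching_matrix(n, rng):
    p = rng.permutation(n)
    S = np.zeros((n, n))
    S[p[0::2], p[1::2]] = 1.0
    S[p[1::2], p[0::2]] = 1.0
    return S

rng = np.random.default_rng(5)
n, d = 12, 3
while True:   # resample until the union of the d matchings is a simple graph
    Ss = [matching_matrix(n, rng) for _ in range(d)]
    A = sum(Ss)
    if A.max() == 1.0:
        break

# B = sum_{i != j} a_j S_i (x) E_ij on half-edges (x, i), here with a_j = 1
B = np.zeros((n * d, n * d))
for i in range(d):
    for j in range(d):
        if j != i:
            Eij = np.zeros((d, d))
            Eij[i, j] = 1.0
            B += np.kron(Ss[i], Eij)

specB = np.linalg.eigvals(B)
muA = np.linalg.eigvalsh(A)

# Ihara-Bass: every root of z^2 - mu*z + (d-1) = 0, mu in spec(A), lies in spec(B)
roots = []
for mu in muA:
    disc = np.sqrt(complex(mu * mu - 4 * (d - 1)))
    roots += [(mu + disc) / 2, (mu - disc) / 2]

gap = max(np.min(np.abs(specB - z)) for z in roots)
print(gap)   # small: each predicted root is (numerically) an eigenvalue of B
```

The constant vector check B 1 = (d-1) 1 below reflects that, with a_i = 1, each half-edge has exactly d - 1 non-backtracking successors.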
— which I will denote a_j(mu). They are equal to a ratio of resolvent entries on the tree: you look at the inverse of the diagonal coefficient of the resolvent at the root of the tree, times the coefficient of the resolvent between the root of the infinite tree and its j-th neighbor, the j-th generator. OK? So you take the same formula as for B, but you twist the weights by this ratio between the resolvent at the root of the tree and the resolvent between two neighboring points of the tree. Anantharaman's formula then says the following: 1 is in the spectrum of B_mu if and only if mu is in the spectrum of A. I think she proved one implication — she did not state it as an equivalence. And the same is true on H_0: if you take an eigenvector of A which is orthogonal to the constant vector, then there will be a corresponding eigenvector of B_mu which is in H_0. [Question] No: R is the resolvent on the infinite tree. OK? So R is well defined at mu, because mu is not in the spectrum of A*, and then these weights are just good old-fashioned numbers. Ah, sorry — I wrote it the wrong way; the correct way, sorry. OK. In fact it is true even if you are in the spectrum of A*, as long as you know that these numbers are well defined, but we do not need this refinement. OK, so the proof is very easy once you realize that this is going to be the formula. You can rewrite a_i(mu) — so we prove this implication — as a_i(mu) = a_i gamma_i, where gamma_i is a resolvent on the tree where you have removed the root: you have the root, you remove it, and you consider the subtree G_i hanging from the i-th neighbor; gamma_i is the diagonal resolvent entry of the operator on G_i at its root. You can rewrite this by the Schur complement formula — I think we have already seen this formula somewhere. OK, and there is a recursion for these gamma_i: gamma_i is
equal to (mu minus the sum over all j different from i of a_j^2 gamma_j), to the power minus 1. This is the usual Schur recursion on the tree; I put a minus sign because I have inverted the sign convention in the definition of the resolvent. OK. So, knowing this — and that is the only ingredient you have to know prior to the proof — you take an eigenvector u of A, with A u = mu u, and you form a vector v: u lives in R^V and v will live in R^E. So imagine you have a point x, and y = sigma_i(x). You set v(x, i) = (1/gamma_i) u(y) - a_i u(x). OK, so for each half-edge you take a well-chosen linear combination, a kind of discrete derivative of u, where the weights are well chosen. And then we have to check that B_mu v = v — right? — which will imply that part of the claim. So you just write, for e = (x, i), with y = sigma_i(x): (B_mu v)(e) is, by definition, the sum over j different from i of a_j(mu) v(y, j). You plug in a_j(mu) = a_j gamma_j and the definition of v: (B_mu v)(e) = sum over j != i of a_j gamma_j [ (1/gamma_j) u(sigma_j(y)) - a_j u(y) ], which simplifies to: sum over j != i of a_j u(sigma_j(y)), minus sum over j != i of a_j^2 gamma_j u(y). Yes? OK. But now you recognize u is an eigenvector of A: if you write the eigenvalue equation at y, mu u(y) is the sum over all j of a_j u(sigma_j(y)). So here — sorry, sorry — you recognize the eigenvalue equation minus the i-th term; and since sigma_i is an involution, sigma_i(y) = x, so the i-th term is a_i u(x). Hence (B_mu v)(e) = mu u(y) - a_i u(x) - sum over j != i of a_j^2 gamma_j u(y). So I have just replaced that, and here you find the recursion: you can factorize by u(y), and mu minus the sum over j != i of a_j^2 gamma_j is 1/gamma_i, so you
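This computation can be tested numerically in the one case where everything is explicit: take u the constant vector, so mu = lambda_1 = sum a_i, which lies above the norm of A* (the tree is non-amenable). Two assumptions of this sketch, not part of the formula: I solve the tree recursion by naive fixed-point iteration, which converges for these weights, and I order half-edges as x*d + i.

```python
import numpy as np

rng = np.random.default_rng(6)
n, weights = 50, np.array([0.5, 0.3, 0.2])
d, mu = len(weights), weights.sum()          # mu = lambda_1 is an eigenvalue of A

sigmas = []
for _ in range(d):
    p = rng.permutation(n)
    s = np.empty(n, dtype=int)
    s[p[0::2]] = p[1::2]
    s[p[1::2]] = p[0::2]
    sigmas.append(s)

# Schur recursion on the tree: gamma_i = 1 / (mu - sum_{j != i} a_j^2 gamma_j),
# solved by fixed-point iteration starting from gamma = 0.
gamma = np.zeros(d)
for _ in range(500):
    gamma = 1.0 / (mu - (np.sum(weights**2 * gamma) - weights**2 * gamma))

# Non-backtracking operator B_mu with twisted weights a_j(mu) = a_j * gamma_j
B = np.zeros((n * d, n * d))
for i in range(d):
    for j in range(d):
        if j != i:
            for x in range(n):
                B[x * d + i, sigmas[i][x] * d + j] = weights[j] * gamma[j]

# Candidate eigenvector: v(x, i) = u(sigma_i(x))/gamma_i - a_i u(x), with u = 1
v = np.tile(1.0 / gamma - weights, n)
print(np.max(np.abs(B @ v - v)))   # B_mu v = v, as the lemma predicts
```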
recognize the recursion equation, and it is exactly v(e). So that is very neat. The consequence for us — I think I have lost the top of the board — the consequence for us is that you can "non-backtrack" our first theorem: instead of proving a statement for the matrix A, you prove a statement for the matrices B_mu, but for all of them. OK, so I will skip some steps, now that you know that there is, at least in principle, a way to transfer information from one world to the other. For example, you can get a formula for the norm of A* in terms of non-backtracking operators on the tree: applying this on the tree, you will find that the norm of A* is the supremum of the mu such that rho(B*_mu) = 1, where B*_mu is the non-backtracking operator on the tree. That could be one consequence, more or less, of this lemma — here I applied it to a finite matrix, and you cannot do that directly on the tree, but you can deduce something like that: you can characterize the norm of the operator, and retrieve the Akemann-Ostrand formula, if you know how to compute the spectral radii of all the non-backtracking operators. OK. So another consequence is that if, for any mu larger than the target value, rho(B_mu) is bounded by rho of the same object on the tree plus an error term, then, as soon as mu is larger than the norm of A*, this is strictly less than 1 by the formula above. In particular, since the spectral radius is less than 1, the number 1 cannot be an eigenvalue of B_mu, and so mu cannot be an eigenvalue of A — you then have to play a little bit with the epsilons. So that is the criterion: if you can bound the spectral radius of a family of non-backtracking operators, you can discard the possibility of eigenvalues of A. So what we have to prove is that, with high probability, for any mu, rho(B_mu) is less than rho(B*_mu) plus epsilon. OK, so that is the non-backtracking reduction — there is nothing random here, that is the non-backtracking
reduction. So the non-backtracking reduction tells you what remains to prove, and it solved the second problem: we now want to count paths, but paths which do not visit the same edges — sorry, not vertices — too many times. Maybe I should keep this definition a little bit longer; this, we do not care about anymore. OK. [Question] No, it is not the case — in fact the a_i were positive just to present the Alon-Boppana bound; here it is true whatever the a_i are: you can apply it to minus a_i, for instance. So, to solve the first and third problems — the first one was the fact that the first moment method fails — in fact, you have already seen most of what is coming next, because we have already seen, in Eyal's talk, that random regular graphs tend to have few cycles: you will not find two cycles in the same neighborhood of the graph, even if this neighborhood is quite large. OK. So maybe, before doing that, let me state what we prove. From now on, I forget about this construction: I just take a non-backtracking matrix B with some weights a_i, and I forget about the mu — by a net argument, which I will not present, I may fix a given non-backtracking matrix. I want to prove that rho(B) is less than rho(B*) plus epsilon; that is what I want to prove. The bound that I have is Gelfand's formula: if you look at the operator norm of B^L, to the power 1/L, it converges to the spectral radius as L goes to infinity, and it is an upper bound for every L. So it is enough to prove the bound for ‖B^L‖ with L large enough. And I forgot something important: this is true when you take the operator restricted to H_0, because otherwise there is a trivial eigenspace — remember, we want to speak about the second eigenvalue, not the first one, so we have to restrict ourselves to this space. So we have to bound the supremum of ‖B^L f‖ over all f in H_0, the f which
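Gelfand's formula is worth seeing in action on this very operator: ‖B^L‖^{1/L} is always an upper bound on the spectral radius, and it tightens as L grows. A sketch on a small instance of the weighted non-backtracking matrix (parameters mine):

```python
import numpy as np

def matching(n, rng):
    p = rng.permutation(n)
    s = np.empty(n, dtype=int)
    s[p[0::2]] = p[1::2]
    s[p[1::2]] = p[0::2]
    return s

rng = np.random.default_rng(7)
n, weights = 40, [0.5, 0.3, 0.2]
d = len(weights)
sigmas = [matching(n, rng) for _ in range(d)]

# B(e, f) = a_j 1{sigma_i(x) = y} 1{j != i}, half-edges ordered as x*d + i
B = np.zeros((n * d, n * d))
for i in range(d):
    for j in range(d):
        if j != i:
            for x in range(n):
                B[x * d + i, sigmas[i][x] * d + j] = weights[j]

rho = np.max(np.abs(np.linalg.eigvals(B)))         # spectral radius of B
norms = {}
for L in (1, 4, 16, 64):
    BL = np.linalg.matrix_power(B, L)
    norms[L] = np.linalg.norm(BL, 2) ** (1.0 / L)  # ||B^L||^(1/L)
print(rho, norms)   # each value is an upper bound, approaching rho as L grows
```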
are orthogonal to this vector space. That is what you want to check. So at this point — in previous papers, Laurent Massoulié and Marc Lelarge proposed a method to bound this quantity on a related model, and Laurent Massoulié already had a paper on a similar kind of question; so I will try to explain this method to you. What do you do? You expand, exactly as we did for the adjacency matrix (which I have erased): B^L(e, f) is a sum over non-backtracking paths gamma = (gamma_1, ..., gamma_{L+1}), with gamma_1 = e and gamma_{L+1} = f, where gamma_t = (x_t, i_t) and non-backtracking means i_{t+1} is different from i_t. OK. You have the product of the deterministic weights that you see along the path, the product of the a_{i_t} for t from 2 — probably that is more correct than 1 — to L + 1, times the product of the random variables S_{i_t}(x_t, x_{t+1}). OK, so I have done nothing: I have just done the same expansion that I already wrote for the adjacency matrix. Now, in this set there are still too many paths, and we want to reduce it. This is the deterministic set of paths — it does not depend on the randomness; the randomness is hidden in the product of the S's, and these are deterministic weights. So we want to reduce this cardinality, and that is where we use exactly the same trick that Eyal and Allan used to prove the cutoff: the fact that random graphs are tangle-free. Maybe I will just use Eyal's notation — do you remember the tree excess? The tree excess of the ball of radius h around x in G: we saw this morning that it is at most 1, for any x, with high probability, if you take h less than a small constant (1/5, 1/4, whatever) times log n. OK. So it implies that B^L is equal to another operator, B^(L), where I do the same thing — maybe I will just use this notation — but B^(L) carries the extra constraint that gamma visits at most one cycle. Maybe "at most one cycle" is not clear, so:
Maybe "at most one cycle" is not clear: gamma can look like this — that would be fine — but gamma cannot look like that, or like that. Whatever: this is not allowed, but this is OK. And these two matrices are exactly the same on this event, because every time you take a gamma which is not in this set, you know that this product has to be 0 — the product is exactly checking whether or not each edge of the path is present in the graph you started with. Mathematics is a lot about good definitions, and this one is fantastic for probability on random graphs; as you have seen, it is not difficult to check. So this simple point solves problem 1, the fact that the first-moment method fails: with this, the first moment will work. But now we have a problem: we are working with a matrix which is no longer a power of a matrix, and we now have to recenter the entries. What would a recentered matrix be? Maybe I will use a third colour. A recentered matrix is a matrix B underscore L where I do the same thing, but I normalize: S^i_{xy} is replaced by S-bar^i_{xy} = S^i_{xy} - 1/n — recall S^i_{xy} is the indicator that sigma_i(x) = y, and I subtract 1/n. So in this matrix the entries are recentered; I could have put M - 1 as well. Now I want to say that on the orthogonal complement — on H0, the orthogonal of this constant vector space — these two matrices are essentially the same. This might not be quite true, because we have first removed some terms in this sum; it would be true if we had not done that first step. The way you go from the matrix B^{(L)} to the recentered one is a telescopic decomposition: you write each product of entries (with the convention that an empty product is 1) as the fully centered product, plus a sum over positions k of the product of centered terms before k, times S at position k minus S-bar at position k, times the product of uncentered terms after k. Apply that formula, and you get that B^{(L)}_{ef} is B underscore L at (e,f) plus a sum of products — maybe I will just denote the weight of a path gamma by A_gamma, if you are OK with that.
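The tangle-free condition invoked above is about the tree excess of balls; here is a minimal sketch (the helper name and the toy graph are mine) of the quantity being computed:

```python
def tree_excess_ball(adj, x, h):
    """Tree excess (#edges - #vertices + 1) of the ball of radius h around x.

    Tree excess 0 means the ball is a tree; tree excess <= 1 means it
    contains at most one cycle -- the tangle-free condition.
    """
    seen = {x}
    frontier = [x]
    for _ in range(h):
        nxt = []
        for v in frontier:
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    # count edges with both endpoints inside the ball
    m = sum(1 for v in seen for w in adj[v] if w in seen) // 2
    return m - (len(seen) - 1)


# Toy example: a 4-cycle plus a chord, so two cycles meet at vertex 0
# and already the radius-1 ball around 0 is not tangle-free.
adj = {0: [1, 3, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 0]}
assert tree_excess_ball(adj, 0, 0) == 0  # a single vertex is a tree
assert tree_excess_ball(adj, 0, 1) == 2  # two independent cycles inside
```

For a random d-regular graph the claim quoted from the morning talk is that, with high probability, every such ball of the relevant radius has tree excess at most 1.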
And the bar — do I want an expectation here? No, maybe not; there is no expectation. In the product there will be terms a_{i_{t+1}} S-bar^{i_t}_{x_t x_{t+1}}, then the difference S minus S-bar, which is just 1/n, times the same thing but with the uncentered S^{i_t}. So I have done nothing, I have just expanded. Now, graphically, what is going on? The first block is almost like B underscore applied k-1 times, and the last block is essentially B^{(L-k)}. So I would like to write that B^{(L)} is B underscore^{(L)} — so I am doing some kind of orthogonal projection — plus 1/n times the sum over k of B underscore^{(k-1)} times B-tilde times B^{(L-k)}, where B-tilde is the non-backtracking operator on the complete graph. But if you look at this sum, it counts paths which are non-backtracking during k-1 steps, then do one non-backtracking step, and then continue with L-k extra non-backtracking steps; and you see that such a path can have 2 cycles even if each of the two halves has just one cycle. So this sum is larger than that sum, and I have to remove some terms, which count exactly configurations like that: R_L counts configurations like this one, or configurations like, I don't know, that one, et cetera. So on this event — the event that there is no subgraph with a large tree excess — you find that B^{(L)} is B underscore^{(L)} plus 1/n times the sum over k of B underscore^{(k-1)} B-tilde B^{(L-k)}, minus R_L. But then I want to test this against a vector g which is in H0. And it is easy to see that the kernel of B-tilde contains H0, because in B-tilde you just sum over all vertices: if you sum, over all vertices, over all edges which have the same endpoint, you get 0. So the kernel of B-tilde contains H0; and we have also seen that B(H0) is included in H0. So in particular, if g is in H0, then B^{(L)} g is B underscore^{(L)} g minus R_L g, because each middle term involves B-tilde applied to an element of H0.
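Written out, the telescopic decomposition just described has the following shape — my reconstruction from the definitions above, so take the index conventions with a grain of salt:

```latex
% Telescoping each product of entries along a path (empty product = 1),
% with \bar S = S - 1/n:
\[
  \prod_{t=1}^{L} S_t
  = \prod_{t=1}^{L} \bar S_t
  + \sum_{k=1}^{L}
      \Big(\prod_{t=1}^{k-1} \bar S_t\Big)
      \underbrace{\big(S_k - \bar S_k\big)}_{=\,1/n}
      \Big(\prod_{t=k+1}^{L} S_t\Big).
\]
% Summing over tangle-free paths, and collecting in R_L the concatenated
% paths whose two halves together carry two cycles:
\[
  B^{(L)}
  = \underline B^{(L)}
  + \frac{1}{n} \sum_{k=1}^{L}
      \underline B^{(k-1)}\, \widetilde B\, B^{(L-k)}
  - R_L ,
\]
% where \widetilde B is the non-backtracking operator on the complete
% graph.  Since \ker\widetilde B \supseteq H_0 and B(H_0) \subseteq H_0,
% the middle terms vanish on g \in H_0, leaving
% B^{(L)} g = \underline B^{(L)} g - R_L g.
```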
And everything in the middle is 0, because g is in H0 and H0 is in the kernel of B-tilde. So all this painful computation has done is the orthogonal projection — but after we have removed some paths, which has created some error terms, fortunately not too bad. So you deduce: let us call E_L this event. If E_L holds, then the norm of B^L g, for a unit vector g in H0, is bounded by the operator norm of B underscore L plus the norm of R_L — these are operator norms. And now the proof continues, but we must stop soon, so maybe I will just state the first line. Using the trace method, we prove two statements: if you take rho which is larger than rho(B*) plus epsilon, then with high probability — the probability is tending to 1 — the operator norm of B underscore L is bounded by some (log n) to some power, I don't know, 50, times rho^L; and the norm of R_L is also bounded by some (log n) factor times some constant C^L times rho^L. You prove that, and it allows you to conclude, because you take an L which is, for example, much smaller than log n but much larger than log log n, and you deduce that, on the intersection of those 3 events, the spectral radius will be bounded by rho, plus some minor corrections, maybe. And in the trace method, you take a high power of this matrix B underscore L and you expand into paths. There is a combinatorial part, which is counting the number of paths that arise — but since each bit of length L is tangle-free, the paths cannot easily visit a single vertex many times — and there is a probabilistic part, which is to estimate, in expectation, expressions like that. And what happens — this is where the fact that the entries are centered plays a role — is that you bound that expectation by some constant times a negative power of n.
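The way the two norm bounds combine into the final estimate can be sketched as follows (the exponent 50 is the illustrative value from the lecture):

```latex
% On the good event E_L, for a unit vector g in H_0:
\[
  \| B^{L} g \|
  \le \| \underline B_{L} \| + \| R_{L} \|
  \le (\log n)^{50}\, \rho^{L}\,(1+o(1)),
  \qquad \rho := \rho(B_\star) + \varepsilon .
\]
% Taking L-th roots and using Gelfand's formula on H_0:
\[
  \rho\big(B|_{H_0}\big)
  \le \big\| B^{L}|_{H_0} \big\|^{1/L}
  \le (\log n)^{50/L}\, \rho \,(1+o(1))
  \longrightarrow \rho
  \quad\text{when } \log\log n \ll L \ll \log n ,
\]
% since (log n)^{1/L} = exp((log log n)/L) -> 1 as soon as L >> log log n.
```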
The power is minus the number of visited edges: you have a path — the steps x_t x_{t+1}, matched with the indices i_t — and this defines a graph which has a certain number of distinct vertices and edges; and you have a discount factor, something like k^4 divided by n, maybe, raised to the number of edges which are visited only once. When you visit an edge only once, you feel the fact that the expectation is close to 0, because this factor is small: in the regime where k is much less than the square root of n, this is a term which goes to 0. So the purely probabilistic part of the proof is to prove this lemma, and you have to prove it for k quite large, which is not completely trivial, but I have no time to explain this proof in detail. So, to conclude. With Benoit Collins, when I came up with this argument, we tried to generalize it to other models, and you see there is a general picture which emerges: for any Hermitian matrix defined locally on your graph, you can non-backtrack the problem and hope for the best in following this path. What I have presented so far is just works with which I have been more or less connected, but there have been a lot of recent works on the spectrum of sparse graphs, just within the last few years, and I just wanted to cite some very nice ones in the minutes I have left, to give you some reading advice for the summer holidays. So this was, more or less, the extension with Benoit Collins. In the last few years there has been much more on the spectrum of sparse graphs than what I have presented, of course. For example, the existence of infinite sequences of bipartite Ramanujan graphs, by Marcus, Spielman and Srivastava — I think this relies on a probabilistic construction related to models of random graphs which are closely related to the one I have presented. There are very nice works of Nalini Anantharaman and her PhD students.
These works, on quantum ergodicity for finite graphs, are related to — and imply — some delocalization properties of eigenvectors of graphs. There are also very nice works of Ágnes Backhausz and Balázs Szegedy on typical eigenvectors in regular graphs. All these papers came out within the last 3 or 4 years, and they use methods which have an almost empty intersection with everything I have told you so far, which I think is quite amazing for a field. I also wanted to mention two more works. We have seen some results saying that, for the weak topology, the spectral measure is continuous, but you can say much more: there are very sharp rates of convergence, for example to the Kesten-McKay measure. There are local laws by Bauerschmidt, Knowles and Yau on regular graphs — I think it is a very impressive paper. And finally there are nice works on invertibility, which I have never talked about: the invertibility of adjacency matrices, and of the related models of directed random graphs, where the question, for the invertibility of adjacency matrices, is to know whether or not — for example, for a regular graph — zero is in the spectrum. There are works by Nicholas Cook, and by Pierre Youssef and coauthors. So I think these are very nice papers from the last few years, and it shows that there are many things to do in this field. So thanks.