I am very happy to be here, so thank you to the organizers, and I am very happy to be back at CISA. I will be talking about joint work with Gábor Lugosi and Nikita Zhivotovskiy.

The title mentions noise sensitivity, so maybe I should start with some very general background on noise sensitivity. I think it started with the seminal work of Kahn, Kalai and Linial in 1988, and it was developed during the 1990s, notably by Talagrand, Bourgain, Kahn, Kalai and Linial. So what is it? The original set-up is the hypercube: you take a product space such as the hypercube {0,1}^n, and you take a function f on it, taking values in {0,1}, say. Strictly speaking it should be a sequence of functions, but I will not write the index, because the property we are after is an asymptotic property as n goes to infinity; so f depends on n, but I drop it from the notation. I put the uniform measure on the hypercube, so I take X a uniform point of the hypercube, and my function gives me an output f(X).

Now I introduce the following dynamics, which is a Markov chain. At each step of the chain, I pick one of the coordinates and I resample it: I toss a coin and it becomes 0 or 1, independently of the rest of the configuration. After k steps I obtain X^(k), which is again uniform on the hypercube, but in which k coordinates have been resampled. The function is then said to be noise sensitive if the following happens. Look at the conditional expectation of the output after the k resamplings, given that you know the initial vector X, that is E[f(X^(k)) | X], and look at the variance of this quantity, with X as the random variable. You say that f (again, really a sequence of functions) is noise sensitive if this variance goes to 0 as n goes to infinity, for some k depending on n which is a little o of the number of coordinates. It means that the output of the function essentially no longer depends on the initial vector once you have resampled only a microscopic, very small, number of coordinates.

So that was the beginning of the story, and there is by now a large theory of noise sensitive functions; there is a monograph by Chatterjee from 2014 which uses this, under a different name.
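To make the definition concrete, here is a small Monte Carlo sketch of the quantity Var(E[f(X^(k)) | X]) for two toy Boolean functions. All names and parameters are mine, purely for illustration; note that the estimator is biased upward by the inner sampling noise, of order 1/n_inner.

```python
import numpy as np

rng = np.random.default_rng(0)

def dictator(x):
    # f(x) = first coordinate: not noise sensitive
    return x[..., 0]

def parity(x):
    # f(x) = XOR of all coordinates: noise sensitive already for k = 1
    return x.sum(axis=-1) % 2

def sensitivity_proxy(f, n, k, n_outer=2000, n_inner=200):
    """Monte Carlo estimate of Var_X( E[ f(X^(k)) | X ] ), where X is uniform
    on {0,1}^n and X^(k) resamples k uniformly chosen coordinates of X."""
    cond_means = np.empty(n_outer)
    for i in range(n_outer):
        x = rng.integers(0, 2, size=n)
        xs = np.tile(x, (n_inner, 1))
        for row in xs:                                  # one independent resampling per row
            idx = rng.choice(n, size=k, replace=False)
            row[idx] = rng.integers(0, 2, size=k)
        cond_means[i] = f(xs).mean()                    # estimates E[f(X^(k)) | X = x]
    return cond_means.var()

print(sensitivity_proxy(dictator, n=200, k=5))   # stays close to 1/4
print(sensitivity_proxy(parity,   n=200, k=5))   # essentially 0, up to Monte Carlo noise
```

For the dictator function the variance stays near 1/4 as long as k is much smaller than n, whereas for parity it is essentially 0 as soon as a single coordinate is resampled; parity is the extreme example of the noise-sensitive behaviour defined above.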
You can, for example, ask a similar question in a different setting: instead of the uniform measure on the hypercube, take R^n with the Gaussian measure, and instead of this coordinate-resampling dynamics, run the Ornstein-Uhlenbeck process. It is the same idea: the process is at equilibrium at every time, and you ask whether the value of your function at time t decorrelates from its value at time 0. There is a remark at the end of Sourav Chatterjee's book that being noise sensitive for the Ornstein-Uhlenbeck process is closely related — in fact equivalent — to improving on the Poincaré inequality for the Gaussian measure. More precisely, what do I mean by that? You have the Poincaré inequality: if my vector X is a standard Gaussian in R^n and you take the variance of f(X), it is bounded by the expectation of the squared L^2 norm of the gradient (maybe there is a constant, I have not checked). And a sequence of functions is noise sensitive for the Ornstein-Uhlenbeck process if and only if it improves on the Poincaré inequality by some factor epsilon, with epsilon depending on n and going to 0: Var f(X) <= epsilon_n E[|grad f(X)|^2]. So when you have a function which improves on the Poincaré inequality by a vanishing factor, your function is noise sensitive for the Ornstein-Uhlenbeck process. And it is more general than that: you can do the same for a general Markov process, with respect to its invariant measure.

Why do I mention this? Because if you want a quick heuristic for when you should expect noise sensitivity, it should be for functions like this: the variance is much smaller than the expectation of the squared gradient.

OK, so now let us come to the setting of the talk; let me define what I need. It will be the top eigenvector of a Wigner matrix. So we take a Wigner matrix: it will be a real Wigner matrix, but it could be complex if you like. The entries x_{ij}, for i >= j, are independent, centred, and have variance 1, and we assume a sub-exponential tail: the tails decay at least like a stretched exponential exp(-t^delta) for some positive delta. That will be my Wigner ensemble. I look at the largest eigenvalue, call it lambda, and I take v a top eigenvector. I will be interested in noise sensitivity for these functionals, and in particular for the top eigenvector.
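As a concrete illustration of this setup, here is a minimal numpy sketch; the names and the choice of a standardized exponential law (one example of a sub-exponential distribution) are mine.

```python
import numpy as np

rng = np.random.default_rng(1)

def wigner(n):
    """Real symmetric n x n matrix with i.i.d. centred, variance-1 entries
    for i >= j (standardized exponential, a sub-exponential law)."""
    a = rng.exponential(1.0, size=(n, n)) - 1.0       # mean 0, variance 1
    x = np.triu(a)
    return x + x.T - np.diag(np.diag(x))              # symmetrize

def top_eigenpair(x):
    vals, vecs = np.linalg.eigh(x)
    v = vecs[:, -1]
    return vals[-1], (v if v[0] >= 0 else -v)         # fix the sign of the eigenvector

n = 1000
lam, v = top_eigenpair(wigner(n))
print(lam / np.sqrt(n))               # close to 2: the spectrum lives at scale 2*sqrt(n)
print(np.abs(v).max() * np.sqrt(n))   # small: delocalization of the top eigenvector
```

The last two lines anticipate two facts used later in the talk: the spectrum is spread over a window of order 2 sqrt(n), and the top eigenvector is delocalized.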
So here is my model; it will be exactly the analogue of the Kahn-Kalai-Linial resampling model from the beginning. I take S_k, a uniformly random subset of size k of the set of index pairs (i, j) with i >= j, and I assume that S_{k+1} is obtained from S_k by adding a new pair chosen uniformly at random among those not chosen so far. Ah, I should have said that the matrix is symmetric, so x_{ij} = x_{ji}. I take X', an independent copy of X, and, exactly as before, I define the matrix X^(k) to have entry x'_{ij} if (i, j) is in S_k, and x_{ij} otherwise, for i >= j, completed by symmetry. So I have a Markov chain of matrices: X^(k) is the matrix in which k entries of the array have been resampled. I write lambda_k for its largest eigenvalue and v_k for a top eigenvector.

Now we ask the same question as in the Kahn-Kalai-Linial setting. First, one more comment — this is why I mentioned the Poincaré heuristic. Why is this a good candidate for a noise-sensitivity phenomenon? Look at the function f of the array of entries X given by f(X) = lambda(X), the top eigenvalue. The variance of lambda, for example for the GOE, is of order n^{-1/3}. But the squared norm of the gradient of lambda, as a function of the entries, is essentially the sum over (i, j) of v_i^2 v_j^2, which is of order 1. So we are exactly in the situation above: the variance of the function is much smaller than the expected squared gradient, so there should be some noise sensitivity here.

So here is the theorem. The first statement is that if you resample much more than n^{5/3} entries, then the scalar product between v and v_k goes to 0. And conversely, the expectation of the maximum, over k between 1 and epsilon n^{5/3}, of the norm of v - v_k goes to 0, provided you choose the proper phase, for example by requiring the eigenvectors to have a positive first coordinate. What is epsilon? Unfortunately, epsilon is not an arbitrary function; it is something like an inverse power of log n, with an exponent involving log log n, so the statement is not quite as sharp as one would like. So you essentially forget your past after more than n^{5/3} resampling steps, which is much less than n^2, the total number of entries; so this really is noise sensitivity.
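To make the objects in the theorem concrete, here is a small simulation sketch of the entry-resampling chain and of the overlap <v, v_k>. All names are mine; at such small n the transition is of course very blurry, so this only illustrates the definitions, not the theorem.

```python
import numpy as np

rng = np.random.default_rng(2)

def sym(a):
    """Keep the entries with i <= j of an i.i.d. array and mirror them."""
    u = np.triu(a)
    return u + u.T - np.diag(np.diag(u))

def top_eigvec(m):
    v = np.linalg.eigh(m)[1][:, -1]
    return v if v[0] >= 0 else -v                 # fix the phase: positive first coordinate

n = 400
x, xp = sym(rng.standard_normal((n, n))), sym(rng.standard_normal((n, n)))   # X and X'
v = top_eigvec(x)

pairs = [(i, j) for i in range(n) for j in range(i, n)]   # index pairs i <= j
order = rng.permutation(len(pairs))                       # S_1, S_2, ... grow along this order

for k in [n, int(n ** 1.5), int(n ** (5 / 3)), len(pairs) // 2]:
    xk = x.copy()
    for idx in order[:k]:
        i, j = pairs[idx]
        xk[i, j] = xk[j, i] = xp[i, j]            # resample the k entries in S_k
    vk = top_eigvec(xk)
    print(k, abs(v @ vk))                         # overlap <v, v_k>, expected to drop for k >> n^(5/3)
```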
If you are a physicist, the exponent 5/3 is obvious, and that is why I will tell you: it is not hard to guess. Let us make a picture. Here I am representing the eigenvalues of X. There is the largest one, which I call lambda_1, and the second largest one, lambda_2, and the distance between them is typically of order n^{-1/6}; again this is for the GOE, say, and it is known in much greater generality thanks to the works of Erdős, Yau and their co-authors. So the spectrum is spread over a window of order 2 sqrt(n), but the gap between the two largest eigenvalues is about n^{-1/6}.

Now I do one resampling step: I take one entry, resample it, and ask how much the top eigenvalue has moved. At first order, if in the first step you have resampled the entry (i, j), the move is essentially (x'_{ij} - x_{ij}) v_i v_j — maybe there is a factor 2, but we do not care — where v is the top eigenvector of X. We know from the paper of Erdős, Schlein and Yau that the L^infinity norm of v is of order (log n)^C / sqrt(n), so v_i v_j is of size n^{-1+o(1)}. So if you call z_{ij} the centred random variable x'_{ij} - x_{ij}, and if you forget that v actually depends on the entries, the one-step move is roughly z_{ij} times n^{-1+o(1)}. Let us forget about the dependence.

Then lambda_k - lambda is the sum over t from 0 to k-1 of (lambda_{t+1} - lambda_t), and if these increments were roughly independent, after k steps the sum would be of size sqrt(k)/n, up to n^{o(1)} factors. Do you follow me? I am just saying that, roughly, the top eigenvalue should perform a random walk with steps of size 1/n, so after k steps it has moved by about sqrt(k)/n. But the second eigenvalue does the same thing, by the same argument, and the approximation for lambda_1 becomes very wrong when lambda_1 and lambda_2 come close together. So this heuristic should break down when sqrt(k)/n becomes proportional to the distance between the two eigenvalues, because then you start to feel lambda_2 and the rest of the spectrum. And if you write sqrt(k)/n = n^{-1/6}, you get k of order n^{5/3}. Was that clear? There is no mystery here.

Now, for the proof, there are two statements in the theorem. The stability statement is essentially obtained by making this argument rigorous. The sensitivity statement is more original, and in the rest of the talk I will try to explain how you get it. We will obtain it from the kind of relation I mentioned at the beginning: we will use the fact that the variance of lambda is of order n^{-1/3} to deduce that the eigenvector must be noise sensitive. So it will be done through a connection between variance and noise sensitivity, which is very general and has nothing to do with random matrices.

It starts with a lemma by Chatterjee — I know it from Chatterjee, but maybe it is older than that, I do not know. You take X, a vector of N independent random variables, and X', an independent copy. I define X^k (without parentheses) to be the same as X except that the k-th coordinate has been replaced by the k-th coordinate of X', all the others unchanged. And X_(k) (with parentheses) is the vector whose first k coordinates are those of X' and whose remaining coordinates are those of X; so X_(0) is X, X_(N) is X', and the sequence X_(k) interpolates between X and X' by adding coordinates of X' one at a time. Chatterjee's lemma, with this notation, is that you can write the variance of any (real-valued, say) function exactly — it is an identity — as

Var f(X) = (1/2) * sum over k from 1 to N of E[ (f(X) - f(X^k)) (f(X_(k-1)) - f(X_(k))) ].

It is an identity, essentially easy to prove, but it is very nice. Note what you see here: in the first factor the two vectors differ in exactly one coordinate, and in the second factor the two vectors also differ in exactly one coordinate, the k-th. And the same formula holds for covariances: if you put f in one factor and g in the other, you get Cov(f(X), g(X)); it is again an equality.
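Since this is an exact identity, it can be checked by brute force on a tiny example. Here is a sketch doing exactly that; the test function and all names are mine, and the factor 1/2 is the one in the identity as written above.

```python
import itertools
import numpy as np

def f(x):
    # an arbitrary, non-symmetric test function on {0,1}^3
    return x[0] + 2 * x[0] * x[1] - x[2]

n = 3
cube = list(itertools.product([0, 1], repeat=n))

def E(h):
    """Exact expectation over (X, X') uniform on {0,1}^n x {0,1}^n."""
    return sum(h(np.array(a), np.array(b)) for a in cube for b in cube) / 4 ** n

def single(x, xp, k):
    """X^k: only coordinate k (1-based) is replaced by that of X'."""
    y = x.copy()
    y[k - 1] = xp[k - 1]
    return y

def mixed(x, xp, k):
    """X_(k): the first k coordinates come from X', the rest from X."""
    y = x.copy()
    y[:k] = xp[:k]
    return y

var_f = E(lambda x, xp: f(x) ** 2) - E(lambda x, xp: f(x)) ** 2
rhs = 0.5 * sum(
    E(lambda x, xp, k=k: (f(x) - f(single(x, xp, k)))
                         * (f(mixed(x, xp, k - 1)) - f(mixed(x, xp, k))))
    for k in range(1, n + 1)
)
print(var_f, rhs)               # the two numbers agree
assert np.isclose(var_f, rhs)
```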
But that is not what I want to do now; I want to make one more remark. Here I decided to go from X to X' by resampling first the first coordinate, then the second, and so on; but I could of course fix a permutation sigma of {1, ..., N} and resample first coordinate sigma(1), then sigma(2), and so on. The same identity of course holds, and I denote the corresponding interpolation with a superscript sigma: X^sigma_(k) differs from X^sigma_(k-1) only in the coordinate sigma(k), and in X^sigma_(k) the coordinates sigma(1) up to sigma(k) have been resampled. This is true for any sigma in S_N.

Now, intuitively, the further you go in this sum, the less correlated the two factors are: for the first terms the two vectors are very correlated, but for the last ones — remember that X_(N) is X' — they have only very few coordinates in common. We have a lemma which formalizes this, and which is the basis of the method. Call d_k the k-th term of the sum. The lemma says that if sigma is uniform on S_N — you take a uniform random permutation — or if your function is symmetric, then the sequence is non-increasing: d_{k+1} <= d_k. This formalizes the idea that the two factors become less and less correlated as k goes to N. A corollary, which is the one we will use, is the bound

E[ (f(X) - f(X^{sigma(k)})) (f(X_(k-1)) - f(X_(k))) ] <= 2 Var(f) / k,

where the 2 comes from the factor 1/2 in the identity and the 1/k from the monotonicity.

So now we come back to our problem, with X^(k), lambda_k and so on. Here the number of coordinates is N = n(n+1)/2, and my function f(X) is just lambda, the largest eigenvalue. I am going to play with this. Take (s, t) a uniformly random pair of indices with s >= t. I call Y the matrix which is the same as X except that the entry (s, t) — and (t, s), by symmetry — has been replaced by the corresponding entry of a third independent copy of my array of variables. So X and Y agree everywhere except at that one resampled entry. And I do the same with Y_k: it is the same as X_k, except that the same entry (s, t) has been replaced by the same third-copy value. If you play with the corollary — I am cheating a little bit here, because in the corollary the new coordinate is uniform among those not yet resampled, while here (s, t) is uniform over all pairs — what you find is the following. Call mu and u the top eigenvalue and a top eigenvector of Y, and mu_k, u_k those of Y_k; remember that lambda, v are those of X and lambda_k, v_k those of X_k. Then

E[ (lambda - mu) (lambda_k - mu_k) ] <= C Var(lambda) / k,

and Var(lambda) is of order n^{-1/3}; this is essentially the corollary above.

Now, since we want a statement about the eigenvector, I write lambda = <v, Xv> and mu = <u, Yu>. Because v and u are top eigenvectors, the difference lambda - mu is at most <v, (X - Y)v>, which is z_{st} v_s v_t — maybe there is a factor 2 — with z_{st} the difference between the original entry and the resampled one; and, by the same argument, it is at least the corresponding quantity with u. And now we use the fact that v and u are very close in the L^infinity norm, which is not an obvious statement but is nevertheless true.
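To spell out the step "because v and u are top eigenvectors", here is the sandwich being used, written in my own notation for s different from t, with z_{st} = x_{st} - x''_{st} denoting the difference between the original and the resampled entry:

```latex
\lambda - \mu
  = \max_{\|w\|=1}\langle w, Xw\rangle - \max_{\|w\|=1}\langle w, Yw\rangle
  \le \langle v, (X-Y)v\rangle = 2\, z_{st}\, v_s v_t,
\qquad
\lambda - \mu \ \ge\ \langle u, (X-Y)u\rangle = 2\, z_{st}\, u_s u_t .
```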
Indeed, v and u are top eigenvectors of two large matrices which differ in a single entry, and the L^infinity norm of v - u is bounded by n^{-1/2 - alpha} for some strictly positive alpha which I will not write down. This follows from arguments based on the optimal local laws obtained by Erdős, Yau and their co-authors. So, in the expression for lambda - mu, I can replace u by v — and conversely — because they are so close in the L^infinity norm, and I do the same for lambda_k - mu_k. What I get is that E[(lambda - mu)(lambda_k - mu_k)] equals, up to an error of order n^{-2 - alpha}, the expectation of z_{st}^2 times v_s v_t v^k_s v^k_t, up to constants. I am almost done. There are still some dependencies between z_{st} and these eigenvectors, but let us forget about that and replace z_{st}^2 by its expectation, which is 2, or some constant; one can justify this kind of step with the same L^infinity perturbation inequalities. So this is roughly equal to a constant times E[v_s v_t v^k_s v^k_t]. And now, since the pair (s, t) is sampled uniformly at random, averaging over it gives a factor of order 1/n^2 in front of the sum over all (i, j) of v_i v_j v^k_i v^k_j, and that sum is exactly <v, v_k>^2. So, up to constants,

E[ (lambda - mu) (lambda_k - mu_k) ] is roughly (1/n^2) E[ <v, v_k>^2 ] + O(n^{-2 - alpha}),

where there is still an expectation because everything is random. And now we are very happy: combining this with the corollary and multiplying by n^2, we get that E[<v, v_k>^2] is at most of order n^2 Var(lambda)/k plus a vanishing term, that is, roughly n^{5/3}/k, and this goes to 0 as soon as k is much larger than n^{5/3}.

That is essentially the proof; of course there are many details which I intentionally forget. One of the main ingredients is that L^infinity estimate, which is strange: our proof that the eigenvector is noise sensitive — meaning that by changing rather few entries you get an eigenvector which is very far from the original one — relies on the fact that when you change a single entry, the top eigenvector is very stable. That is very unintuitive; it still puzzles me. These two eigenvectors correspond to almost the same matrix, I have just replaced one entry, and I can prove that in the L^infinity norm the eigenvector does not change much; but we use precisely that to prove that the top eigenvector is in fact very sensitive. So it is kind of strange.

This can probably be generalized to the j-th largest eigenvalue; we did not try to write the proof, but probably it works with some minor modifications, and the threshold would then be at k of order n^{5/3} times (min(j, n+1-j))^{-2/3}. So it is a question, but it should be possible, and this is exactly what the heuristic I gave you at the beginning predicts.
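For what it is worth, here is that back-of-the-envelope computation spelled out in my own words, using the standard rigidity asymptotics for the gaps near the edge (for j much smaller than n); this is a reconstruction, not something written out in the talk:

```latex
\lambda_j - \lambda_{j+1} \;\asymp\; n^{-1/6}\, j^{-1/3},
\qquad
\frac{\sqrt{k}}{n} \;\asymp\; n^{-1/6}\, j^{-1/3}
\;\Longrightarrow\;
k \;\asymp\; n^{5/3}\, j^{-2/3},
```

which matches the threshold n^{5/3} (min(j, n+1-j))^{-2/3} above; presumably the minimum accounts for the symmetric behaviour at the bottom edge of the spectrum.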
I was mainly interested in illustrating the phenomenon, and I think the approach should also work to prove that case. And here is something which is not proved, and where it is not clear that the strategy goes through, but where a similar kind of strategy might work: first passage percolation, say from (1,1) to (n,n), where you put i.i.d. weights — for example exponential or geometric weights — on the edges and look at the path which minimizes the total length. Run the same resampling dynamics, and maybe, by using this corollary, it could be possible to prove noise sensitivity for this model. The analogue of the scalar product would be the Hamming distance between the two optimizing paths, before and after you have resampled the values of k edges. The strategy is clear, and it may work to use this corollary, but I am not sure whether the technical estimates you would need on first passage percolation are available in the literature or not.

I think that is all I had to say, so thank you for your attention. Any questions?