... OK, so today my goal is to give you a proof of sharpness of the phase transition for Fortuin-Kasteleyn percolation, that is, for random-cluster models. But before all that, I would just like to illustrate the difference between the two approaches that I presented last week for Bernoulli percolation. I want to give you an example of a model which is very close to Bernoulli percolation, for which one of the proofs goes through completely, while the other does not work mutatis mutandis. This example was in fact an open question for several years, because the previous techniques did not apply. It is the following. We consider what is called the vacant set of Boolean percolation, but I will take the example on the lattice to simplify. For each x in Z^d, with probability 1-p, independently of all the other vertices, we put a ball B_x of radius r around x. So for each vertex of the lattice, with probability 1-p, we put a ball of radius r, and what I do is remove all the vertices in that ball. Say this is my ball of radius r: I remove all these vertices. Then I pick the next vertex; with probability 1-p, you again remove everything in its ball. And so on: this one, you remove all these vertices. And at the end, what you look at is the complement of all this, which is a certain subset of the lattice. So Omega is the set of vertices which are not in any B_x; it is exactly the vacant set of the union of the balls. This is why I put 1-p: this way the set is increasing in p. There is a phase transition between a phase where there is no infinite cluster and a phase where there is an infinite cluster.
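The model just described can be sketched in a few lines. This is my own illustration, not from the lecture; the finite box of side L, dimension 2, and sup-norm balls are simplifying assumptions.

```python
import itertools
import random

def vacant_set(L, r, p, seed=0):
    """Vacant set of Boolean percolation on the box {0,...,L-1}^2.

    Each vertex carries a ball of radius r (sup norm, for simplicity)
    with probability 1 - p, independently.  The vacant set is the set
    of vertices covered by no ball; it is increasing in p."""
    rng = random.Random(seed)
    sites = list(itertools.product(range(L), repeat=2))
    centers = [x for x in sites if rng.random() < 1 - p]
    covered = set()
    for (cx, cy) in centers:
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                covered.add((cx + dx, cy + dy))
    return [x for x in sites if x not in covered]
```

At p = 1 no ball is placed and the vacant set is the whole box; at p = 0 every vertex carries a ball and the vacant set is empty, consistent with the set being increasing in p.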
And the question is: is this a sharp phase transition or not? Is the model clear to everyone? So you throw down the balls, and you take the complement of their union. OK. So let us try the first strategy, the proof of Vincent [Tassion] and myself, and try to apply it here. And you just have to believe me that Menshikov's proof, and the Aizenman-Barsky proof, fail as well, for the same reason. So this is why strategy number 1 fails. In our proof, there were two steps. There was a step where we arrived at the differential inequality involving phi_p(S). And there was a second step, which consisted in proving that if some S satisfies phi_p(S) < 1, then we have exponential decay. The first step, arriving at the differential inequality, poses no problem. This is why many people tried to do this kind of thing with our proof: they thought that the novelty compared to Aizenman-Barsky-Fernandez was there, which is true. But the problem is not there. The problem is in "phi_p(S) < 1 implies exponential decay". Why? Because, imagine I try to run the same proof. Let's take a simple S: let's imagine that S is Lambda_n, and that we have phi_p(S) < 1. So what do we do? We have our box, and the argument is to condition on the connected component of 0 in the box, this round region, call it C. So we condition on it; this, say, is the component. And in order to condition on it, in standard percolation, we need to know its boundary. And then remember that the argument was basically saying: well, now there is a point somewhere which is connected to distance kn, say this point. This point is connected far away. And the important thing was that it had to be connected in the complement of the set C.
And therefore that was only negative information for this vertex. We did not have any positive information, so the probability that this vertex is connected to distance kn was smaller than or equal to the probability that 0 is connected to distance (k-1)n, simply by negative correlation, if you want. You can call it the BK inequality if you want; that is usually what people refer to. OK, it is a BK inequality. You can also just think of it as a Markov-type property: you condition on the cluster, and the information that you have afterwards is only negative; this cluster cannot help you in any way. The problem is that if you look at the vacant set of Boolean percolation, to know that this vertex here is in the cluster, you need to know that the ball of that vertex over there is not present, because otherwise, if it were there, it would destroy this vertex. So it is true that to know that the vertex just next to it was not in the cluster, you needed somewhere a vertex whose ball you know is present. But you could imagine geometries, maybe not with balls like these, but with balls of slightly weirder shapes, where you would have some positive information, if you want, on the other side of the wall: by conditioning on all of that, you would know that this vertex here is in fact open, that nobody is covering it, or at least that some balls which could cover it are in fact not covering it. And this positive information is terrible for you, because it may be that it is very easy for this yellow path to creep along the boundary and then go away; this stretch here along the boundary may actually be very easy. But then you cannot compare the probability that this vertex is connected to infinity, conditionally on all that, with just the unconditional probability of being connected to infinity, because maybe it is easy to go far by going along the boundary. It is not really what you expect, but
it could happen. Another example, just to mention it, and I will actually do the proof next week, is Voronoi percolation. For people who do not know Voronoi percolation, let me make a small detour. You put a Poisson point process in the plane and you take the Voronoi tessellation of it, meaning that for every point you look at what is closest to it... OK, I am not going to manage to draw that... so it goes like that, something like that. OK, it's not that bad, I think. So for every point of the process, you define the set of points of R^d which are closer to this point than to any other point, and then you color at random: with probability p you color the cell black, with probability 1-p white. You have a continuum percolation model, and you want to know whether you have a sharp phase transition as well. Same problem as here, but manifesting in a slightly different way. You are going to see: if you condition on the cluster of the origin in the box, the cells on the boundary of this cluster are of the wrong color, they are white. So we could think: OK, that's good, because being connected to distance kn by black cells is not really helped by what I saw, since on the boundary I only have white cells, so they cannot help. But the problem is that the geometry of the cells inside is influencing the geometry of the cells outside of C. Therefore it could be, for instance, that the conditioning pushes the cells to be slightly bigger along the boundary. You only know that the boundary cell is white, but you know the shape of this cell, so the cell just next to it may be huge. For instance, you could imagine some crazy situation where there is a huge, huge cell like that; if there is a huge cell like that, with probability p it is black, and it helps a lot to go farther. So this is another manifestation of the fact that there is, a priori, a lack of a BK
inequality for these models, and therefore this first strategy, with the phi_p(S), just fails miserably. Now let's forget about Voronoi, I will tell you how we solve that next week, and let's focus on our vacant set of Boolean percolation, and let's see that strategy number 2 works perfectly fine. So, strategy number 2. Remember (anyway, I am going to do the full proof for the random-cluster model, so people who were not there last week will be able to recover): it was based on this OSSS inequality, and in particular the main point of the proof was that you need to find an algorithm, a decision tree, which determines your function with low revealment: the probability of checking whether a given edge is there or not is low. But here you can do exactly the same algorithm as what we did for Bernoulli percolation: you explore the clusters of the boundary of the ball of size k. So you take the boundary of B_k and you start exploring the clusters. Notice that, to determine whether a vertex is in your cluster or not, you need to check all the vertices at distance r from it, to be certain that this vertex is open because none of the balls centered at distance at most r from it is present, so that it is in none of the balls. So you are going to explore like that. You say: OK, I pick this vertex, I check whether it is vacant or not, I explore all the vertices at distance r from it; then I say, OK, it is in the cluster, I look at one of its neighbors, and I check all the vertices at distance r from that one, and so on and so on. Sometimes I close vertices, but at the end I have explored all the connected components connected to the boundary of the box of size k, and I can decide whether 0 was indeed connected to the boundary of the box of size n or not. So what is the revealment of our algorithm, the probability that a single vertex is revealed by the algorithm? It is no longer the probability that it is connected to distance n, that it is connected to the boundary of
the box of size k. Why? Because once I have explored my cluster, I have information like that: I had to explore everybody at distance r from the cluster to actually determine the cluster; in fact even r+1, because I needed to know that the vertices on the exterior boundary of the cluster are closed, all of them. But in order to be revealed, I still need to be at distance r+1 from somebody connected to distance k. For Bernoulli percolation, for an edge, I needed one of my endpoints to be connected to distance k; now it is a little bit weaker: I need to be at distance r+1 from somebody which is connected to distance k. But instead of getting the bound by the probability of being connected to the boundary, I am going to get, by the union bound, something like (r+1)^d times the probability of being connected to distance k. So it is going to work exactly the same, OK? So that was just a parenthesis to illustrate that in fact you can extend this to any k-dependent model, meaning you do not take Bernoulli percolation, you take a model with k-dependence: if each variable is independent of everything at distance more than k from it, then you have a sharp threshold. OK?
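The exploration algorithm described above can be sketched as follows. This is my reconstruction, not the lecturer's code; the two-dimensional box and the sup-norm distance are simplifying assumptions. The point to notice is that a ball indicator is revealed only when the exploration needs to certify some vertex within distance r of it, which is what gives the (r+1)^d-type revealment bound.

```python
import itertools
import random

def explore_vacant_clusters(L, r, p, k, seed=0):
    """Explore the vacant clusters touching the boundary of the box of
    radius k, revealing a vertex's ball indicator only when needed.
    Returns the set of vertices whose indicator was revealed."""
    rng = random.Random(seed)
    revealed = {}          # x -> True if a ball of radius r sits at x

    def ball_at(x):
        if x not in revealed:
            revealed[x] = rng.random() < 1 - p
        return revealed[x]

    def vacant(x):
        # x is vacant iff no vertex within sup-distance r carries a ball.
        return not any(
            ball_at((x[0] + dx, x[1] + dy))
            for dx in range(-r, r + 1) for dy in range(-r, r + 1))

    boundary = [x for x in itertools.product(range(-k, k + 1), repeat=2)
                if max(abs(x[0]), abs(x[1])) == k]
    queue = [x for x in boundary if vacant(x)]
    seen = set(queue)
    while queue:
        x = queue.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            y = (x[0] + dx, x[1] + dy)
            if y not in seen and max(abs(y[0]), abs(y[1])) <= L:
                seen.add(y)
                if vacant(y):
                    queue.append(y)
    return set(revealed)
```

A vertex ends up revealed only if it is within sup-distance r of some vertex inspected by the exploration, which is the union-bound step in the revealment estimate.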
OK, so that was a parenthesis, to motivate that there is something more robust in this OSSS inequality. And notice that here, for this model, we are really applying the standard OSSS inequality, on product measures. The problem is that if you take the random-cluster model, Fortuin-Kasteleyn percolation (was I saying Fortuin-Kasteleyn or random cluster before? I don't remember), anyway, if you use the random-cluster model, in order to really determine the state of an edge, you somehow need... I mean, if you want to write the random-cluster model in terms of IID random variables, you have ways of doing that, but then, in order to determine the state of one edge, you need to explore a very large family of IID random variables. So the OSSS inequality directly does not really give you any good information for the random-cluster model, because the correlations are long range. So we need to do something a little bit better than that, and that is what I want to describe today: we have to extend the OSSS inequality. So now let's forget about all that and start from scratch. Third part of the course: sharpness for FK percolation. And the first part of the proof is going to be to extend this OSSS inequality. So: generalized OSSS inequality. OK, so the theorem is going to be the following. We are going to take a measure, let's call it mu, and I am going to take it monotonic (think of your FK percolation measure; a monotonic measure is basically a measure that satisfies the FKG inequality, so let's not spend too much time on that): a monotonic measure on {0,1}^E, so on percolation configurations. And let's take f a Boolean function, but I am going to ask, which was not really the case for the OSSS inequality, that it is increasing. Then I have the following: the variance of f is at most twice the sum over every edge e of the revealment delta_e(T) of the edge times the covariance of f and omega_e, that is, Var(f) <= 2 sum_e delta_e(T) Cov(f, omega_e), for any decision tree T (with its stopping time tau) determining f.
And recall, of course, that delta_e(T) is the probability that there exists t <= tau such that e_t = e; it is the probability that our decision tree discovers the edge e. OK, so this is exactly the extension of the OSSS inequality that we want. So I am going to take my time and really try to prove it from A to Z; it is the first time I am doing it. And one first thing I am going to do is maybe not prove this exact statement: I am going to replace the covariance by the influence, which is going to be a little bit simpler to prove. So we are going to prove the following version of it: Var(f) <= sum_e delta_e(T) I_e(f), where the influence is I_e(f) = mu(f | omega_e = 1) - mu(f | omega_e = 0). I will only work with this quantity; otherwise you can go look at the paper, which is not on the arXiv yet but soon will be, where you can see the full covariance version. Why is it basically the same? In our case, this is related to the covariance in a fairly simple way. Just to recall: if xi denotes the probability that omega_e = 1, then I_e(f) = mu(f omega_e)/xi - mu(f (1 - omega_e))/(1 - xi), and this equals (1/(xi(1-xi))) (mu(f omega_e) - mu(f) mu(omega_e)), a really simple computation. Therefore, if you get the bound with the influence, you recover the bound with the covariance, just paying a cost xi(1-xi). And in the case of the random-cluster model, this xi is just a constant: when we are looking at the random-cluster model near p_c, the probability of an edge being open is a constant which varies between c and 1-c, but is neither close to 0 nor to 1. So, up to losing here a factor 2/(xi_p(1-xi_p)), say, we get what we want. So for our class of models this will be sufficient. The point of why it is better to have the covariance is that the influence
does not capture something, which is that xi may be very small. Imagine a long-range model, for instance: for a long-range model, the probability of an edge being open may be very small, so a bound with the influence will be much worse than a bound with the covariance. That is why we prefer to think of the bound with the covariance, because long-range models are interesting models; but since here we are anyway looking at a nearest-neighbor model, the two are really equivalent. So that is the first simplification. OK, now, what is the global idea of the proof? Well, it is basically the same proof as for Bernoulli percolation. What you could try to do first is to think: OK, maybe the average of f against a monotonic measure mu can be expressed as the average, against a certain product measure, of an increasing function. If you could do that, then you could just apply directly the OSSS inequality for Bernoulli random variables, and with a little bit of luck, what you would get is the covariance between this capital F, this big increasing function which is kind of encoding the small f, and omega. The point is that this does not really work: we did not find a way of doing it that way. So what we are going to do instead: we are still going to use IID random variables, we are going to encode the measure mu in terms of IID random variables, but rather than applying the OSSS inequality directly to the right encoding of mu, we are going to run the proof of the OSSS inequality with the encoding itself inside the proof, and manage to get something out of it. So the first thing I am going to do, before jumping into the proof, is to tell you how you can encode the measure mu out of IID random variables. So first a lemma, and before this lemma, a small definition: for u, which is (u_1, ..., u_n), just real numbers in [0,1], and for e, which is (e_1, ..., e_n)
belonging to E-arrow. So remember, this is the set of ordered sequences of edges in E: these are the (e_1, ..., e_n) such that {e_1, ..., e_n} is exactly equal to E, and remember that |E| = n. So it is one ordering, one ordered sequence containing all the edges of E, and each edge exactly once. So if I give myself a sequence of real numbers and a sequence of edges, I can construct from it a configuration x, which I write (this is just notation) x = F_e(u), and I define it like that: x_{e_{t+1}} = 1 if u_{t+1} >= mu(omega_{e_{t+1}} = 0 | omega_{e_[t]} = x_{e_[t]}), and 0 otherwise. I just realized I did not define the notation e_[t], the e bracket t. This is just going to be a convenient notation: e_[t] is just (e_1, ..., e_t), the t first edges in my sequence. And I am going to use a slight abuse of notation: when I put it like that, as an index of omega, I just identify it with the set composed of e_1, ..., e_t. So what I mean here is that I want omega_{e_1} = x_{e_1}, omega_{e_2} = x_{e_2}, etc., up to omega_{e_t} = x_{e_t}. So what I want to do here is define x: I want to define the value of my configuration on e_{t+1}. Imagine I have already defined the value of my configuration on the t first edges; now I want to know the value of my configuration, open or closed, on this next edge e_{t+1}. How do I do that? I look at u_{t+1}: u_{t+1} is a threshold, and I look whether this threshold is above the probability that my edge is 0 given that all the edges before are equal to the x before, and I set the value to 1 if that is the case and 0 otherwise. Think for a second of u_{t+1} as a uniform random variable; that is exactly what we are going to do next. For a uniform random variable, the probability of being smaller than that threshold is exactly that probability.
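The step-by-step threshold construction x = F_e(u) can be sketched for a deterministic ordering as follows. This is my own toy illustration, not from the lecture: the measure mu is given as an explicit table of weights on configurations, which is only feasible for very small E, and the edge ordering is fixed.

```python
import random

def sequential_sample(mu, edges, rng):
    """Sample x in {0,1}^E from mu edge by edge, in the given order:
    x on the next edge is set to 1 exactly when a fresh uniform is at
    least mu(omega_e = 0 | omega = x on the already revealed edges).

    `mu` maps full configurations (tuples of 0/1) to probabilities;
    `edges` is an ordering of range(n)."""
    n = len(edges)
    x = {}
    for e in edges:
        # conditional probability mu(omega_e = 0 | revealed values)
        num = sum(w for cfg, w in mu.items()
                  if cfg[e] == 0 and all(cfg[f] == x[f] for f in x))
        den = sum(w for cfg, w in mu.items()
                  if all(cfg[f] == x[f] for f in x))
        x[e] = 1 if rng.random() >= num / den else 0
    return tuple(x[e] for e in range(n))
```

Each bit is drawn with exactly the conditional law of mu given the already-revealed bits, so the output has law mu for any fixed ordering; this is the deterministic case of the lemma.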
Therefore this edge is going to be 0 exactly with that probability, which is what we want, and it is going to be 1 with probability 1 minus that, which is exactly the probability of being open knowing everybody before. So it is really a step-by-step construction of our configuration. Now, what we are going to need is something a little bit more involved than that, but basically not much: we are going to want u to be a family of IID uniform random variables, but e is also going to be random. If e were deterministic, then basically what I just explained trivially gives you, at the end, an x which is sampled according to mu. Imagine the order of the edges is really deterministic: every single time, I am constructing x_{e_{t+1}} exactly according to the law of omega_{e_{t+1}} knowing that all the omegas before are equal to the x's, so at the end you would just end up with a sample from mu. Here we are going to do the same, but with e random. So (in the proof I am going to try to keep the following notation: when I put a double bar, it is a random variable, just to try to make it clear) let u be a family of IID uniform [0,1] random variables (u is an n-tuple), and let e be a random variable taking values in E-arrow. Now I am going to assume something; I cannot take just any pair of random variables e and u. Assume that, for any t <= n, u_t is independent of e_[t], so u_t is independent of the t first edges. Then I claim that the X that I construct by applying the function defined above to e and u, X = F_e(u), has law mu. So you should really think of this lemma as saying the following: if e is deterministic, just taking uniform random variables for the u gives you mu, exactly for the reason I explained; to take e random, the only thing
you need, and this is really natural, is that when I look at u_{t+1} there, I want it to be independent of all the edges before, of everything I discovered before: I want it to be some fresh randomness. But this is the only constraint I need. OK, is it clear? I mean, we have time, so you can ask me questions if you want more details on the definitions, because this is the part of the class which is going to be a little bit tedious: at some point you have to put your hands in the motor. OK, so let's prove that. We are really going to do it a little bit like for the Markov property: we are just going to fix e, a possible realization of the random e, and we are going to look at x, a possible realization of the capital X, and let's just look at this quantity first: the probability that I end up with the sequence of edges e and the percolation configuration x. This is equal (I am going to do a very naive thing) to the product, for t = 1 to n, of the probability that E_t = e_t knowing everything before, that is, E_[t-1] = e_[t-1] and X_{e_[t-1]} = x_{e_[t-1]}, times the product, for t = 1 to n, of the probability of having X_{e_t} = x_{e_t} knowing E_[t] = e_[t] and X_{e_[t-1]} = x_{e_[t-1]}. So what did I do here? I have exactly the product of all the conditional probabilities. I am just starting: I go here, I choose E_t knowing everybody before, and then I choose X_{e_t} knowing everybody before (note that here it is t and not t-1), and then I go back, I choose E_{t+1} knowing everything, etc. It is just writing this probability as a product of conditional probabilities; there is nothing deep there. Now let's look at the second factor, and let's call it pi_t. Since u_t is independent of e_[t] and X_{e_[t-1]} (and here, notice that it is really e_[t] and x_{e_[t-1]}; by assumption, u_t is just a uniform random variable), the probability that X_{e_t} = x_{e_t} is the following: if x
_{e_t} is 0, it is the probability that the uniform random variable is below the threshold; otherwise, it is 1 minus this probability. So in both cases, I see that pi_t = mu(omega_{e_t} = x_{e_t} | omega_{e_[t-1]} = x_{e_[t-1]}). So the product over t of the pi_t is mu(omega = x): it is exactly what we want. So all of this is mu(omega = x), and the only thing I have to check is that when I sum over e, the remaining sum is 1. So when I now compute the probability that X = x, it is the sum over the sequences of edges of the probability of E = e and X = x, so it is mu(omega = x) times the sum, over the sequences of edges, of the product over t of the first factors. But notice: if I sum over all the possibilities for e_t given the past, I get 1. And here it is very important that I have very little information on what the law of the random e is. The conditional law of E_t can be completely arbitrary: I assumed no independence of E_t with respect to anything, absolutely not. What I am saying is just that it is a certain law, so if I sum over all the values for e_t, I just end up with 1. So, doing this step by step, the sum of the product of these factors is just 1: if you want, it is a product of sums, each of which is 1, so this whole thing is 1, and I end up exactly with what I wanted, namely that the law of X is mu. So, maybe, if it is the first time you see a construction like this, it is a little bit tedious, but it is very natural: when you want to sample a dependent model, you just do it step by step using uniform variables. And by the way, this would for example be a good way to construct a coupling between different values of p for the random-cluster model: you do it step by step using the same uniforms, and you will end up with two configurations, exactly as for
standard percolation, a coupling between two different values of p: you will end up with two ordered configurations for the two different values of p for the random-cluster model. So here you just want to do the same thing; and if you do it in a Markovian way, if you decide your next step and then use a uniform which does not depend on anything you did before, you will also end up with mu. Good; that was the lemma. And now we are going to be in a position to basically run the argument as for the OSSS inequality in the Bernoulli case, but this time keeping track of the construction of the variables, with X of law mu. So, exactly as for the OSSS inequality (this is 3.1), what we are going to take is u, v, two IID families, I mean two families of IID uniform [0,1] random variables. And what we are going to do, exactly as for Bernoulli: we are going to use u to construct a configuration X with law mu, and our decision tree E and tau; and we are going to use the second family, v, to construct an independent percolation configuration with law mu. The independence of this v will be useful to obtain the influence term, exactly as it was in the Bernoulli percolation case. OK, so: independence between the percolation configuration and the exploration. So, construct X, E and tau as follows; we are only going to use the variables u for this. So we construct by induction. So E_1: E_1 is deterministic. Remember, the first query of the algorithm was allowed to be deterministic; we just decided that the first edge is a deterministic one. So
we do that, and then we define X_{e_t}, we define everything by induction. So the definition here of this guy will be: X_{e_t} = 1 if u_t >= mu(omega_{e_t} = 0 | omega_{e_[t-1]} = X_{e_[t-1]}), and 0 otherwise; so it is very much like what we did before. And we define E_{t+1} to be g_t(E_[t], X_{e_[t]}), where, you remember, g_t is the decision rule. That is, here I construct X_{e_t}, and then I say: OK, my algorithm, seeing E_[t] and X_{e_[t]}, wants to discover E_{t+1}; this g_t is really the decision rule of the decision tree. OK. So note here that the construction looks exactly like the one above. Why did I not use the notation F_e(u)? Just because I construct E at the same time as I construct X; I do not have e beforehand. The decision tree is really, as decision trees were, looking at the configuration and deciding where to go. So here I have used exactly this construction by building X step by step at the same time as I build E; but at the end, you can just think that, once E is constructed, X is just F_E(u). OK, so, just a remark, open parenthesis: X, at the end, is in fact F_E(u), period. So in particular, and this is Lemma 3.2, X has law mu. Close parenthesis. OK, so this constructs X and E; it remains to define tau, but tau, once I have X and E, is very simple: tau is just the minimum of the t such that, for any x' in {0,1}^E, for any percolation configuration x' which coincides with X on e_[t], necessarily f(x') is equal to f(X). It is the first time at which my function is determined: whatever the bits I put afterwards, f does not change. OK, so, since this is a course, maybe let's take a 15-minute break. So we have defined X; now, what are we going to do? We are going to define
hybrid configurations Y^t, exactly as in the OSSS proof: they will be built partly using the variables u and partly using the variables v, and we are going to use the lemma; we will be careful that, somehow, the uniform variables we use are always independent of the edges discovered up to now, and massage things a little to arrive at what we want. But that is for after the break; so, 15-minute break, we restart at 40. OK, so that was the definition of the algorithm, of X and tau, exactly as for the Bernoulli case. Now we have to add the hybrid configurations. So what we are going to do is, for t between 1 and n, define Y^t to be just F_E (so now I can use this notation, because I have defined E) applied to: u_1 up to u_t, then v_{t+1}... no, sorry, I am doing it exactly the reverse way, of course, why not: it is (v_1, ..., v_t, u_{t+1}, ..., u_tau, v_{tau+1}, ..., v_n). So this is really the same construction, somehow; this is precisely the vector that we used for Bernoulli percolation. So I take these kinds of hybrid vectors, and I plug them into the function F_E. But note that the order E of the edges here is really measurable only in terms of the u. Excellent. And also note that if t >= tau, this is in fact just the vector v. That is what we are constructing. So, what do we have? The first observation is going to be basically the same as for IID random variables, except that we are going to use Lemma 3.2. So Y^n: well, Y^n is F_E(v), and v is completely independent of E, so Lemma 3.2 trivially applies, and I get that Y^n has law mu. Yes? [Question: just to understand, so E is chosen first using u, and then you resample with v?] Exactly. OK.
You resample with the v the coordinates from 1 to t, one after the other, up to tau. So, note that these u's are all used both for E and for this configuration. So this, from this point of view, may seem a little bit curious, but we are going to see that it is not a problem. Yes. [Inaudible question.] No, it's OK. OK. So, Y^n is F_E(v); v is completely independent of E, and hence we have that Y^n has law mu. Y^0, now. So Y^0, well, Y^0 is not exactly equal to X, right, because only the tau first coordinates are u, and then I am using v. But the definition of tau is such that f(Y^0) is the same as f(X), simply because I do not look at what is constructed after, I mean at these variables: I only look at the tau first ones to define the percolation configuration on e_[tau], and because f depends only on the output of this, we have this equality. And therefore, so, we have this equality. And remember, so, let me recap what I want to write: Y^n has law mu, and X has law mu, by Lemma 3.2. These are the two observations I want to make. And now I am going to proceed along exactly the same lines as for Bernoulli percolation, for a few moments. So the variance of f is smaller than or equal to... f takes values in {0,1}, so Var(f) <= mu(|f - mu(f)|). Now, X is u-measurable, so here, if I put an expectation of f(X) knowing u, I exactly get f(X), right? And if I put... so maybe I should put the expectation: so now here I should put an expectation, and let's say that E denotes the law of (u, v). But conditionally on u, Y^n is completely independent of everybody else, right? So when I am averaging f(Y^n) with respect to u, since it is independent of u, I am just getting the average of f(Y^n); therefore, it is just mu(f), right? And here, notice that f(X) I can replace, sorry, by the observation above, by f(Y^0). OK?
So all of this is to say that the variance is less than or equal to the expectation of |E[F(Y_0) − F(Y_n) | U]|. This is just a triangular inequality: I take the conditional expectation with respect to U and put it outside the absolute value. Okay? So this is a little bit tedious, right? Conditioned on U — on U, sorry — this term is just μ(F); it is a tricky way of writing μ(F). And this one is a tricky way of writing F(X). And then, when I average over this, I know that X has law μ, so this is exactly E[|F − μ(F)|]. And then I just take the conditional expectation out of the absolute value by the triangular inequality. Now I am going to do the same step as before. So this is at most the sum for t = 1 to n of E[|F(Y_{t−1}) − F(Y_t)|]. Then I am going to use that I can add the indicator function that t is smaller than or equal to τ. Why? Because if t is strictly larger than τ, then Y_{t−1} and Y_t are both equal, simply because I am then only looking at the vector V. So this is the same step as for OSSS in the Bernoulli case. And then I am just going to condition, in addition, on e_t = e, and condition everything on U_1, …, U_{t−1}. Okay, maybe I should write a little bit more here. So, what did I do? I added the conditioning on e_t = e — this is just a partition. And then I just use that the event {t ≤ τ, e_t = e} is measurable only in terms of U_1, …, U_{t−1}. And I am just realizing — hold on a second: is it t smaller than or equal to τ, or t strictly smaller than τ? I don't know. I am confused. How did I do it for OSSS? Sorry about that, I am just checking. Okay. So all of this is measurable only in terms of the first t − 1 coordinates — hmm, and why what I am saying looks very wrong.
So, e_t = e definitely is. Why? Because e_t depends only — right — on e_1, …, e_{t−1}; I mean, it is a deterministic function of e_1, …, e_{t−1} and X_{e_1}, …, X_{e_{t−1}}. So this guy definitely depends only on the steps before. And then — yes, I think I just messed up the indexes. So let's put a +1 and look at this. Okay. So now I can condition on e_t. Sorry about that. And t ≤ τ means what? It means that once I arrive — oh no, no, no, sorry, sorry. Well, at least you can see that this proof is not such a piece of cake. I guess maybe you already knew that, but anyway. Okay, sorry. Saying that the stopping time is larger than or equal to t means that at time t − 1 it is not determined yet. And at time t − 1, knowing the whole configuration up to time t − 1 depends only on the entries of U before — before or at — time t − 1. It is funny, because I think I got confused at exactly the same place last week. So, it's good. So adding this conditioning on U_1, …, U_{t−1} is just using that this event is U_1, …, U_{t−1}-measurable. That's all. Okay. I can promise I am not going to prove this very often in my life — in a talk. I will do everything possible to avoid it. Okay. Now, what do we need to prove to conclude? We need to bound this by the influence, right? So here it is a little bit more problematic than for Bernoulli percolation, but not much more. So let's look at E[|F(Y_{t−1}) − F(Y_t)| given U_1, …, U_{t−1}]. I claim the following: that this is smaller than or equal to E[F(Z_e^1) − F(Z_e^0) given U_1, …, U_{t−1}]. There, I still need to tell you what Z_e^1 and Z_e^0 are. So let's pause for a moment and recall — let me also record the dependency on the measure: so far, the function f_ē was defined for the measure μ, right?
Now, what I claim is that both Y_{t−1} and Y_t are sandwiched — smaller than or equal to, and larger than or equal to — between the following two quantities. I claim that if I use f_ē, but not for μ — instead for μ conditioned on ω_e = 1, which by FKG is a bigger measure than μ: it stochastically dominates μ — and I apply it to the vector W := (V_1, …, V_t, U_{t+1}, …, U_τ, V_{τ+1}, …, V_n), I get an upper bound. And I claim the same thing, this time with μ conditioned on ω_e = 0 applied to W, as a lower bound. Okay. So why is it true? Well, the first thing you observe is the following: take Y_t. What is Y_t? It is f_ē^μ(W), right? That is the definition. But if you look at this definition, f is definitely an increasing function of the measure μ: if I pick a bigger measure, then the output percolation configuration is going to be bigger. Why? Because the conditional probability of being closed is going to be smaller — it is going to be easier to be 1 than 0. Okay. So, as a function of the measure, this is increasing: if I plug in the same vector with a bigger measure, I get a bigger result. Now, the second observation is that, for these two conditioned measures, the t-th input is in fact irrelevant. Why? Because before step t, of course, I do not use V_t. And when I arrive at the step where the edge e is treated, I am looking at the measure conditioned on the edge e being open anyway; the conditional probability of being closed is 0, so whatever the input here, I will always put a 1 — I want, at the end, a configuration sampled according to μ knowing ω_e = 1, right? So whatever the value of V_t here, I will always get the same thing: I can change V_t, it changes nothing. But therefore I can change V_t to U_t and use again the monotonicity between the measures. Therefore, Y_{t−1} is smaller than this one, right? So define, if you want, W′ to be the vector W where the t-th coordinate is U_t instead. Then Y_{t−1} is by definition f_ē(W′).
And it is therefore smaller than f_ē^{μ[·|ω_e=1]}(W′). Same thing here: applied to W or to W′, you get the same result, simply because the t-th bit is irrelevant. And you get the inequality. Okay. Is this step clear? Because it is somehow the tricky step. That is the only place where you really use monotonicity. It is — if you want, the fact that your measure is monotonic is exactly giving you that μ is sandwiched between these two measures: μ[·|ω_e=0] is stochastically dominated by μ, which is stochastically dominated by μ[·|ω_e=1]. And therefore, when you run the construction step by step with μ, or with a measure μ′ that stochastically dominates μ, you are, by definition, going to get a bigger vector at the end. The first step is trivial. Then, at the second step: since anyway x′_1 is larger than x_1 and μ′ dominates μ, the conditional probability of being closed under μ′ is smaller than under μ; so you get the second bit larger as well, and so on, inductively. Okay. So simply define Z_e^1 to be f_ē^{μ[·|ω_e=1]}(W), and Z_e^0 to be f_ē^{μ[·|ω_e=0]}(W). These are my two guys. And now, what do I observe? Since Y_{t−1} and Y_t are each sandwiched between Z_e^0 and Z_e^1, the difference |F(Y_{t−1}) − F(Y_t)| is always smaller than F(Z_e^1) − F(Z_e^0). With a two, maybe, that I dropped somewhere — with a two, sorry. Right? Either this one is — I mean, yeah. Okay. No, okay, not with a two, sorry. But now — no, with a two? Okay. But now we are done. Why? Because W, by definition, is independent of U_1, …, U_{t−1}, right? And W is a sequence of i.i.d. uniform variables which, together with ē, satisfies exactly the hypothesis of the lemma. So the lemma, applied not to μ but to μ[·|ω_e=1], gives you that Z_e^1 has exactly the law μ[·|ω_e=1]. Similarly, Z_e^0 has the law μ[·|ω_e=0].
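The sandwich just proved — decode the same uniforms under μ[·|ω_e=0], μ, and μ[·|ω_e=1] and the three outputs are ordered coordinatewise — can be checked exhaustively on a toy monotonic measure. The measure and the helper names below are my own illustrative choices (a sketch, not the lecture's construction), with a deterministic two-edge order and e the first edge:

```python
import itertools

# Toy positively-correlated measure mu on two edges (illustrative choice).
mu = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def condition(nu, value):
    """nu conditioned on the first edge taking the given value (renormalized)."""
    mass = sum(p for w, p in nu.items() if w[0] == value)
    return {w: p / mass for w, p in nu.items() if w[0] == value}

def decode(nu, u):
    """Reveal edges in the fixed order e1, e2; open e_t iff u_t >= P_nu[closed | past]."""
    x = ()
    for t in range(len(u)):
        den = sum(p for w, p in nu.items() if w[:t] == x)
        num = sum(p for w, p in nu.items() if w[:t] == x and w[t] == 1)
        x += (1 if u[t] >= 1.0 - num / den else 0,)
    return x

mu0, mu1 = condition(mu, 0), condition(mu, 1)

# Sandwich: for every input u, the three decoded configurations are ordered
# coordinatewise, Z^0 <= Y <= Z^1 -- the monotonicity used in the proof.
grid = [0.05 * i for i in range(20)]
for u in itertools.product(grid, repeat=2):
    z0, y, z1 = decode(mu0, u), decode(mu, u), decode(mu1, u)
    assert all(a <= b <= c for a, b, c in zip(z0, y, z1))
```

Note that changing the coordinate of u driving the conditioned edge changes nothing in the two conditioned decodings, which is the "t-th bit is irrelevant" observation.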
So this whole thing is nothing but twice μ(F | ω_e = 1) − μ(F | ω_e = 0), and this is twice the influence. I am just wondering about the two, because I do not have it in my notes. So here, of course, to go from there to there, I use both this sandwich and the fact that F is increasing, right? But I do think that there is no two here, so maybe the two just comes from the covariance — because here I really do not see where the two comes in, right? The differences between these two guys — Y_{t−1} and Y_t — are each sandwiched between these two values; therefore the difference between the two is smaller than this, and this is positive, so you can remove the absolute values. So probably it is just the covariance step, where I first relate it to the influence. Okay. Is the end of the proof clear? So, the additional input, if you want: in the Bernoulli case you did not need to do this, simply because there Y_{t−1} and Y_t were differing only by the t-th bit — either it was 1 or 0 — and you could just bound the difference by the value of Y_t with the edge e open minus the value with the edge e closed. But here it is a little more complicated: you need to use this comparison between measures. So, this is the end of the proof; let's just make a few comments. The first one: suppose you pick ē to be deterministic — so your exploration is deterministic. If you do that, then in fact all of this was a little bit useless. Why was it useless? Because simply, in this case, you could define g := f_ē(U), with ē deterministically equal to some fixed order of edges, and notice that f_ē is then an increasing function of the random variables: f_ē is increasing in U.
Therefore g is increasing, so just apply OSSS for Bernoulli random variables directly. You have an increasing function — apply it directly. And indeed, you end up with the covariance of g with U_e, and you can easily check that this is smaller than the influence. So it seems very good. The only problem is that, of course, when ē is deterministic, the revealment of my algorithm is one. So it is not a very good bound: it just gives me that the variance is smaller than the sum of the influences, which is sometimes called the discrete Poincaré inequality. So, not excellent. But one could think: okay, then when ē is random, just put f_ē here and apply the same reasoning. The problem — and this is kind of tedious — is that when ē depends itself on the randomness, this function is not increasing. Imagine you are discovering your configuration. So you discover some things — these edges are closed there — and you explore this edge now. If you increase U, maybe you are going to make this edge open. But now, because you are not following a deterministic exploration, maybe you then want to go explore a bubble over there. So you go explore this bubble, maybe get some negative information, and then the next edge that you explore has a smaller probability of being open. Why? Because I explored this whole bubble, got some negative information on its boundary, therefore making the probability that this orange edge is open smaller. While if the first edge had stayed closed, it would give me a little bit of negative information, but not that much, and this edge would be open with a fairly good probability. So, as soon as the exploration is random, you do not get monotonicity.
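This failure of monotonicity can be seen already for a Bernoulli product measure on three edges, with a minimal adaptive rule of my own (an illustrative toy, not the lecture's exploration): reveal edge 1; if it came out open, explore edge 2 next, otherwise edge 3 next. Raising u_1 alone can then close an edge in the output:

```python
# Bernoulli(1/2) product measure on edges {1, 2, 3}; adaptive exploration:
# reveal edge 1 first, then edge 2 if edge 1 came out open, else edge 3 first.
P_OPEN = 0.5

def explore(u):
    """Decode uniforms u = (u1, u2, u3) into edge states (x1, x2, x3)."""
    x = {}
    x[1] = 1 if u[0] >= 1 - P_OPEN else 0
    order = [2, 3] if x[1] == 1 else [3, 2]   # adaptive choice of the next edge
    for step, edge in enumerate(order, start=1):
        x[edge] = 1 if u[step] >= 1 - P_OPEN else 0
    return (x[1], x[2], x[3])

lo = (0.4, 0.9, 0.1)   # edge 1 closed, so u2 drives edge 3: edge 3 comes out open
hi = (0.6, 0.9, 0.1)   # only u1 raised, so u2 now drives edge 2 instead
assert all(a <= b for a, b in zip(lo, hi))
y_lo, y_hi = explore(lo), explore(hi)
assert y_lo == (0, 0, 1) and y_hi == (1, 1, 0)   # u increased, yet edge 3 closed
```

Increasing u changed which uniform drives which edge, so the decoded configuration is not coordinatewise increasing in u — exactly the bubble phenomenon described above.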
So the fact that you have monotonicity in the deterministic case was actually used, in a slightly different context, by Benjamin Graham and Geoffrey Grimmett, who tried to extend the BKKKL sharp threshold theorem — I mean, they did not just try, they succeeded in extending it — to random cluster models. And they used an encoding which is basically the simplest way to encode a random cluster measure, or any monotonic measure, in terms of i.i.d. random variables: they used a deterministic order of edges, just opening edges one by one, applied the BKKKL sharp threshold theorem to this function, and just checked that the covariance between g and ω_e is smaller than or equal to the influence of the edge. But there, because the order was deterministic, the function was nicely increasing, and so on. So at first we really thought that this inequality had no chance to be extended to random cluster models and to monotonic measures, exactly for this reason: once the exploration is random, you lose the increasing property. But in fact, when you plug the construction into the proof, somehow it miraculously still works. Okay. What do I want to do now? I want to briefly redo the end of the proof, for those who were not there last week, and then finish. But maybe one last comment before doing so — that was remark 1; here is remark 2, before diving into the end of the proof. Remark 2 is the following. When you do the OSSS inequality here, at the end, roughly speaking you are going to get that the variance, θ_n(1 − θ_n) — if you apply it to the event of being connected to distance n — is bounded by the derivative times the max revealment. And instead of the max revealment, let's take a kind of typical revealment. What did we say the typical revealment was? It was basically the probability of being connected to the boundary of the box of size k, and for most k this is roughly the probability of being connected to the boundary of the box of size n. So you get roughly θ_n — do not take this too literally, anyway. If instead you use the sharp threshold theorem of Graham and Grimmett — I mean, the BKKKL theorem — you get the variance times the log of the max influence. And there is a fairly simple way to bound the max influence, right? Because of the definition of the influence, you basically need the event to occur to have an influence, so the max influence is smaller than the probability of the event, period. So there you get a log θ_n. This is excellent when you want to prove that θ_n has to decay polynomially fast: you integrate this thing, and you see that automatically — not quite true, but with some very mild further assumption — it gives you polynomial decay. But a log n there is not sufficient to get you past this polynomial decay of correlations. On the other hand, if you have a 1/n^ε, you automatically bootstrap to stretched exponential decay. So getting θ_n instead of log θ_n is not the same deal: this is much better than that. That was remark 2. Okay, so let's conclude the proof for the random cluster model, and you are going to see that it is basically the same proof as for Bernoulli random variables. Okay. So, Theorem B is the main theorem, and I want to restate it, because you are going to see it is slightly better than what I claimed in the first class. Take d larger than or equal to 2 and q larger than or equal to 1. Then there exists a constant c > 0 such that θ(p) ≥ c (p − p_c) for p ≥ p_c — so we have the mean-field lower bound. And the small difference is: for any p < p_c, there exists a constant c_p > 0 such that the following is true: the probability, for the random cluster model with wired boundary conditions — maybe I put a w here — on the boundary of the ball of size n, that 0 is connected to the boundary of the ball, decays exponentially fast. So what is better than in the first lecture? What is better is that here I have wired
boundary condition at distance n: I am taking a box of size n, I am putting wired boundary conditions right on it, and I bound the probability that 0 is connected to the boundary of the box. In 2D it was known that if you have exponential decay in the bulk, you can recover exponential decay in this box, where the boundary conditions are really right next to you. But in higher dimensions this is not something which is simple to get; a priori it is not equivalent. You could really think that these wired boundary conditions give you a very strong push up to distance √n from the origin, and that inside this ball of size √n things look roughly like the infinite volume measure. So exponential decay in the infinite volume measure would only give you stretched exponential decay in finite volume, not exponential decay. So there is something here; it is not quite equivalent. Really think of this wired boundary as a push that I am giving you, and here I am really telling you that, in fact, this push does not give you anything. Why do I bother doing that, and not just take the nice statement in infinite volume? Because basically all the applications of exponential decay in the subcritical regime — the mixing property, the Ornstein–Zernike estimates, everything you want — are based on this kind of uniformity over boundary conditions at distance n, not at infinity. And we are lucky enough to get it — I mean, it is not really luck, but still. So what we are going to do is something a little bit strange. Let's define μ to be the random cluster measure — maybe I used φ_p before — on the ball of size 2n, with wired boundary conditions. This looks a little strange at first, maybe. And I apply the OSSS inequality, for this measure μ, to the event that 0 is connected to the boundary of the ball of size n — not 2n. So I get: the variance of the indicator of {0 connected to ∂Λ_n} is smaller than or equal to the sum over the edges of the covariance of ω_e with the event, times — here, remember, I am averaging over the n algorithms, so times — sorry, divided by n. Before, here, I was getting S_n — you remember, the sum of the θ_k — but now I am in finite volume, so I cannot use invariance under translations yet. So here I can use the max over x in my ball of the sum over k; and there was a 2 with the influence — I mean, with the OSSS it becomes a 4, and there were something like 2n points, so it becomes an 8, sorry: the max over x of the sum for k = 1 to n of μ[x connected to ∂Λ_k(x)]. If I do not use invariance under translations, I just have this bound at every x. But notice: this is not S_n — it is not equal to S_n; the maximum is a priori not attained at 0. So the important thing now is the following. Let's define θ_k to be the probability, with wired boundary conditions on the box of size 2k, that 0 is connected to the boundary of the box of size k, and let's define S_n to be the sum for k = 0 to n − 1 of the θ_k. With these definitions, notice the following: the sum for k = 1 to n of μ[x connected to ∂Λ_k(x)] is definitely smaller than twice the sum for k ≤ n/2 of μ[x connected to ∂Λ_k(x)]. This is just monotonicity of the events: up to n/2, it is the same sum as there; and above n/2, being connected to distance n/2 + 1 is smaller than being connected to distance 1, being connected to distance n/2 + 2 is smaller than being connected to distance 2, and so on. So this is clear. But now, here I have my box of size 2n, and I have an x somewhere in the box of size n which I want to be connected to distance k. The box of size 2k around x is then necessarily included in the box of size 2n around 0, simply because k ≤ n/2, so 2k ≤ n, and I am only looking at x's which are in the box of size n, so they are all at distance at least n from the boundary. So the comparison between boundary conditions tells me that if I start to put wired boundary
condition — so I have wired boundary conditions on the big box, but if I additionally put wired boundary conditions on the box of size 2k around x — it increases the probability of the event that x is connected to the boundary of the box of size k around x, by monotonicity. Therefore this thing is smaller than or equal to θ_k. So the whole sum is smaller than or equal to twice the sum for k ≤ n/2 of θ_k, and this is smaller than 2 S_n — here I am even just dropping the restriction to k ≤ n/2 and going up to n − 1. So all of this small manipulation, which you can just forget if you want, is there just to really get this bound: I have 2 S_n. So now, what do I have? If I look at this, the variance of this event is exactly θ_n(1 − θ_n); times n over — now — 8 S_n, right, by the bound, this is smaller than, well, the derivative of θ_n. So I found again the differential inequality that I started with. I have the 1 − θ_n, but this one I can drop: by changing the constant, exactly as for Bernoulli percolation, I can replace it by a small constant c, because θ_n is just never close to 1. And I get θ_n′ larger than c times (n/S_n) times θ_n. So now I just apply the lemma that was valid for Bernoulli percolation — but which was actually valid for any family of functions satisfying this inequality. So, applying — I guess it was Lemma 2.3 — applying Lemma 2.3 to f_n, which is something like θ_n/c, where the c is just equal to (1 − θ_1(p_0))/8, this gives θ_n(p) ≤ exp(−c_p n) for any p < p_c, and θ(p) ≥ c (p − p_c) for any p > p_c. Here I use a small additional fact, which is that θ_n(p) converges to θ(p). This is simple to see — I mean, the first bound is that θ_n(p) is definitely larger than or equal to θ(p). I see that I am going too fast. So, that was really the lemma for Bernoulli percolation; you need to apply it to this quantity.
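As a quick sanity check of the counting step above — for a nonincreasing nonnegative sequence, the full sum is at most twice the first-half sum, by pairing each term a_k with k > n/2 against a smaller-index term — here is a toy numerical check (the sequences are arbitrary illustrative choices):

```python
import math

def check(a):
    """For nonincreasing nonnegative a: sum(a) <= 2 * sum of the first ceil(n/2) terms."""
    n = len(a)
    m = math.ceil(n / 2)
    # pairing argument: for k > m, a_k <= a_{k-m}, and there are at most m such terms
    return sum(a) <= 2 * sum(a[:m]) + 1e-12

# arbitrary nonincreasing sequences, mimicking theta_k-like decays (illustrative)
for a in ([1 / (k + 1) for k in range(17)],
          [math.exp(-0.3 * k) for k in range(10)],
          [1.0] * 8):
    assert check(a)
```

The monotonicity of k ↦ θ_k (connection probabilities decrease with distance) is exactly what licenses this step in the proof.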
So you apply it on [0, p_0] — on [0, p_0], right. And f_1, θ_1 — θ_n of p, you just bound it by θ_1(p_0), which is smaller than 1; perfect, and you get the statement. So, in order to apply the lemma, we need to prove that (f_n) is a converging sequence; in fact, you need to prove that θ_n converges to θ(p). So, θ_n(p) is what? It is the probability that you are connected to distance n, with wired boundary conditions at distance 2n. This is definitely larger than or equal to the probability of being connected to distance n in the full space, because you throw the wired boundary conditions to infinity — the monotonicity gives you that it is larger. Therefore it is larger than the probability of being connected to infinity. So θ_n is definitely larger than θ(p); this is the simple part. For the other direction, what you simply do is a measurability argument. You just say: θ_n(p) is smaller than the probability of being connected to distance k in the box of size 2n. You fix k, you let n go to infinity: the measures with wired boundary conditions at distance 2n converge to the measure in the full space, so just by measurability, θ_n(p) is, in the limit, smaller than the probability of being connected to distance k in the full space. This holds for any k, so I am smaller than the probability of being connected to infinity. Okay, so the conclusion is something like: the limsup of θ_n is smaller than the full-space probability of 0 being connected to distance k, for any fixed k, and therefore it is smaller than θ(p). Okay. It is not very difficult to believe that it converges to θ(p); anyway, you can somehow forget about this — the juice is not there. When we reached this point, I think we thought that we were home; that was fine. So the last thing to observe is that θ_n(p) is definitely larger than the probability, for the random cluster measure on the ball of size 2n with wired boundary conditions, that 0 is connected to the boundary of the ball of size 2n. So you also get exponential decay in the form stated in the theorem, for n even; and you can also do it for n odd, if you really like adding lines to your proofs. Okay, well, that is the end of the proof. So you see — at the end, with the OSSS, okay — at first the proof was not that nice, because we did not see this small way around the difficulty of working in finite volume; but once you see it, it is a little bit tricky, but it is exactly the same proof, and it is quite short. So next week we are going to change topics a little, and get a little more relaxed maybe, and do Voronoi percolation, because it is a nice model, and in particular there is this — what I think is an absolutely beautiful — proof of p_c = 1/2 for Voronoi percolation by Bollobás and Riordan, which really inspired me a lot for all these things, and you are going to see that there as well. So I think it is a good round-trip story, because this proof for Voronoi was really the one that inspired us — inspired me, with Vincent Beffara — for the p_c theorem. So I think it is nice to provide a new proof of that. And then I would like to discuss a little bit Boolean percolation, because it is going to give you an example where we can prove things — where we can have exponential decay... So we are going to take Boolean percolation, but with balls that have unbounded radii, maybe even polynomial tails. And there we are still going to prove some kind of sharpness — where it does not mean exponential decay; there is no chance of that, because you can just have a single ball of size R which covers both 0 and x, so it is not going to be exponential decay — but sharpness in the sense that there is a certain condition, which allows you to do some renormalization, and which you can prove holds up to p_c. And that will be the end of the class. Thank you very much.