Thank you for coming. Today we are going to finish by giving two more examples of applications of the same technique, this time to continuum percolation models. I will mainly present one of them, Voronoi percolation, and then I will briefly discuss another one, just to show you that the technique is not limited to proving sharpness in the sense of exponential decay: it can also be useful to prove strong statements even when there is no exponential decay at all. So I just want to briefly discuss an example in that direction. We continue with percolation models, and the first one is Voronoi percolation. In Voronoi percolation there is no lattice anymore. There is a Poisson point process on R^d, which I decompose into two types of points: black points and white points. The way they are used is through the Voronoi division of space: every point of space receives the color of the closest point of the process. So there will be eta_b and eta_w, two independent Poisson point processes of intensity p and 1 minus p, and I define eta to be the union of the two. Another way of seeing this is to take a Poisson point process of intensity 1 and, for each of its points, to choose whether it is black or white with probability p or 1 minus p; it is exactly the same. And now I define the cells. For x in eta, the cell C(x) is simply the set of points y in R^d such that the distance between x and y is smaller than or equal to the distance between z and y for every z in eta. So it is simply the set of points that are closer to x than to any other point of your Poisson point process. For instance, it would look like this. And what I am going to do is something very natural: I just say that the color of a point of the plane is the color of the point of the Poisson process in whose cell it lies. So we say that y in R^d is black if there exists x in eta_b such that y belongs to C(x); otherwise we say that y is white. Just one small remark: this is not completely symmetric between black and white, because here I include the points on the boundaries of my cells and call them black. But if you think about it, these points will never play an important role in anything we do, so I will completely ignore this; that is why I do not define white symmetrically as lying in the cell of a white point. OK, so this is the definition of the model. We say that A is connected to B if there exist x in A and y in B and a continuous black path connecting x to y. And as before I can then define theta_n(p) and theta(p). I will just keep the same notation as before: under the probability measure P_p, theta_n(p) is the probability that 0 is connected to the boundary of the box of size n around 0. And since I am now in a rotation-invariant setting, I will actually take the ball there to be the Euclidean ball — or maybe, to change a little bit... yes, OK.
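To fix notation, here is a compact restatement of the definitions just given (the precise symbols are my rendering of what is described above):

$$\eta_b,\ \eta_w \ \text{independent Poisson point processes on } \mathbb{R}^d \text{ of intensities } p \text{ and } 1-p, \qquad \eta := \eta_b \cup \eta_w,$$
$$C(x) := \big\{\, y \in \mathbb{R}^d \ :\ |y-x| \le |y-z| \ \text{ for all } z \in \eta \,\big\} \qquad (x \in \eta),$$
$$y \text{ is black} \iff \exists\, x \in \eta_b \text{ with } y \in C(x), \qquad \theta_n(p) := \mathbb{P}_p\big[\, 0 \leftrightarrow \partial B_n \,\big],$$

where $B_n$ is the Euclidean ball of radius $n$ and connections are realized by continuous black paths.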
So B_n is the Euclidean ball, and theta(p) is the probability that 0 is connected to infinity. Now, one of the most famous results on Voronoi percolation is the following theorem, due to Bollobás and Riordan. It says that p_c, defined as usual, is equal to 1/2 in dimension 2. So if I do this on R^2, black and white play symmetric roles at p = 1/2, and it turns out that this is exactly the critical point. I already mentioned this result in the first class. But I hope I can convince you that, because you cross squares with probability 1/2 at p = 1/2, the only thing left to prove is exactly the sharpness of the phase transition: once you have sharpness, once you have exponential decay in the subcritical regime, you can deduce that p_c is equal to 1/2, exactly as you did on the lattice — you use that exponential decay in the subcritical regime is incompatible with crossing squares with probability 1/2. OK. So my goal is precisely to give an alternative proof of this, and in fact to prove a stronger result, namely sharpness in every dimension: there exists p_c such that theta(p) is larger than c times (p minus p_c) for p larger than p_c, and we have exponential decay of theta_n(p) for p smaller than p_c. The same kind of statement as usual. By the way, there is an interesting feature: this proof does not only give you the subcritical bounds, which in two dimensions are often much simpler to obtain; to the best of our knowledge, it is also the first proof of the mean-field lower bound, even in 2D. Yes, yes — in fact you can prove that; it is just that you use arguments there which are really of a different type, so I do not really want to discuss them, but yes, it is true. OK. So the strategy is going to be exactly the same as what we have done until now. What we want to prove is that theta_n'(p) is larger than a constant times n over S_n times theta_n(1 minus theta_n) — always the same differential inequality. There is a small twist here, which is that we are not going to prove it for all p. You remember that before we could not prove it near 1; in fact there was no hope of proving it near 1, because the relevant quantity degenerates there. So we always kept p away from 1, and that was fine, because anyway we are interested in the behavior near p_c. Here there is a second twist: we also cannot take p close to 0. The statement is of course still true there, but here we will take p between some delta and 1 minus delta. Delta can be taken as small as we want, so it is going to be fine. And of course, here, the constant c depends on delta, but not on n.
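In symbols, the statement we are after and the differential inequality driving it read as follows (a sketch with generic constants; the form of S_n is the one used later in the lecture):

$$\exists\, p_c \in (0,1): \qquad \theta(p) \ \ge\ c\,(p - p_c) \ \text{ for } p > p_c, \qquad \theta_n(p) \ \le\ e^{-c(p)\,n} \ \text{ for } p < p_c,$$

and, for every $p \in [\delta, 1-\delta]$,

$$\theta_n'(p) \ \ge\ \frac{c(\delta)\, n}{S_n(p)}\ \theta_n(p)\,\big(1 - \theta_n(p)\big), \qquad S_n(p) := \sum_{k=1}^{n} \theta_k(p).$$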
OK, so before diving into the proof of that — you can guess that of course we are going to use the OSSS inequality again — let us just recall a few general statements on Voronoi percolation as background. The first one we will need is the FKG inequality, so maybe I should first tell you what increasing means here. We will say that A is increasing if, whenever I have a Poisson point configuration (eta_b, eta_w) in A and I take a bigger one, in the sense that the set of black points gets bigger and the set of white points gets smaller, then automatically the new configuration is in A. This is the most straightforward generalization of being increasing on a lattice. Now that I have defined what an increasing event is, the FKG inequality simply says that the probability of the intersection is larger than the product of the probabilities, for any A and B increasing. There are tons of ways of proving this; I do not want to bore you with them, but it will be important for us. Just a parenthesis on this FKG inequality: what is a little bit funny is that it does not occur in the same place as in the proof for the random cluster model. For the random cluster model, the FKG inequality was important in the proof of the OSSS inequality itself; that is where it was used. Here we are going to use the OSSS inequality only for independent random variables, so FKG is not involved there. FKG is going to be involved because, after we apply the OSSS inequality, we will have a certain quantity that we will not manage to bound easily: we will not be able to simply bound the revealment by theta_n, which is what we were doing before, when we bounded the probability of being revealed by the probability of being connected far away. Here it is going to be a little more tedious, and to relate the two at the end we will use the FKG inequality. So there is a small subtlety there. The second property that we really want is the following: at the end we will have a sum of influences, and we want to compare this sum of influences to the derivative of the probability of an event. We do that in two steps. First, let us mention Russo's formula for Voronoi percolation, which goes as follows. Define Piv(A) to be the set of x in eta such that the indicator of A evaluated at (eta_b union {x}, eta_w minus {x}) — so, if you want, assuming that x is black — is different from the indicator of A evaluated at (eta_b minus {x}, eta_w union {x}): in other words, if I change the color of x, I actually change the occurrence of the event.
So it is exactly the definition that you would naturally want to write in the continuum, and the first lemma is that the derivative of P_p[A] is the expectation of the size of Piv(A). Well, maybe I should have been careful here: let us say this is for an event A depending only on the colors in the ball B(0, R). Exactly as for Bernoulli percolation, you do not have Russo's formula in infinite volume, so here you want to say: I look only at the colors in a ball, I have an event depending on that only, and then the derivative is the expectation of the size of the set of pivotal points. And there is a remark I want to make, because it is going to be important: the set of pivotal points may actually intersect the outside of the ball. Even though your event depends only on what is inside the ball, changing the state of a cell out here — saying that it is now black — may actually change the coloring even up to this point. A priori, with the definition there, it does not change the cell, but the cell itself may intersect the ball. So one small warning: careful, Piv(A) is not included in the ball of size R. But even though it is not included, it almost is, in the following sense. Consider the probability that Piv(A) intersects the complement of the ball of size 4T — you could do it with 4R, but you can do it for any T larger than R. If you want a point out there to be relevant for the colors inside the ball of size R, then in particular there cannot be anybody in the ball of size T: if there is even a single point of our Poisson point process in the ball of size T, then this point is closer to every point of the ball of size R than any point outside the ball of size 4T, so definitely no cell of a point from out there will intersect the ball of size R. So this probability is always smaller than or equal to the probability that eta does not intersect the ball of size T, and a very simple estimate on Poisson point processes tells you that this is of order exponential of minus c times T to the d, so it decays very rapidly. So think of this as a form of long-range dependency — the cells can be changed even at long range — but with very rapidly decaying correlations. A second thing which is not obvious a priori is that the size of this set is integrable, because you might have infinitely many pivotal points even in a box. But the estimate above tells you that you do not really get pivotal points far away from the ball of size R, and the second observation is that anyway, for a Poisson point process, having more than, say, T to the d plus 1 points in the box of size T has probability decaying exponentially fast. If you combine the two things, you can very easily prove that the size of the set of pivotal points is integrable, and in particular this tells you that P_p[A] is differentiable, which was not a priori that clear.
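Written out, the pivotal set, Russo's formula and the decay estimate just discussed read (with a generic constant c > 0):

$$\mathrm{Piv}(A) := \Big\{\, x \in \eta \ :\ \mathbf{1}_A\big(\eta_b \cup \{x\},\ \eta_w \setminus \{x\}\big) \neq \mathbf{1}_A\big(\eta_b \setminus \{x\},\ \eta_w \cup \{x\}\big) \,\Big\},
\qquad \frac{d}{dp}\,\mathbb{P}_p[A] \ =\ \mathbb{E}_p\big[\,|\mathrm{Piv}(A)|\,\big]$$

for $A$ depending on the colors in $B_R$ only, together with, for every $T \ge R$,

$$\mathbb{P}_p\big[\,\mathrm{Piv}(A) \not\subset B_{4T}\,\big] \ \le\ \mathbb{P}\big[\,\eta \cap B_T = \emptyset\,\big] \ \le\ e^{-c\,T^{d}}.$$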
So how do you prove this? You just integrate Russo's formula for Bernoulli percolation over the environment, that is, over the Poisson point process. You write P_{p+delta}[A] minus P_p[A] as the expectation with respect to eta of the probability at p plus delta of A knowing eta, minus the probability at p of A knowing eta. Conditionally on eta, you are really looking at a percolation model on a discrete set of points, so you can use the standard Russo formula: the difference inside the expectation is the integral, for s between p and p plus delta, of the expectation of the number of pivotal points of A knowing eta, ds; and then you just use Fubini to conclude. So nothing really mysterious there. OK, but this is not really what is going to be so important for us. What is going to be important is how we use this to discretize our space, because remember, the third ingredient is the OSSS inequality, and remember that the OSSS inequality, even in the i.i.d. case, really lives on a discrete index set: we have discretely many variables — they may take values in some continuous space, but we have a countable number of them. So let me restate it in a slightly different way here, because now we do not have variables which are 0-1 valued. The OSSS inequality will now be the following: the variance of f is smaller than the sum, over a certain set of coordinates, of the revealment times the influence. But here I need to tell you exactly what the influence is. The revealment will still be the probability that my algorithm reveals the coordinate i. We are on a product space Omega to the power I, with the product measure pi to the power I, and I is countable. The influence — let us define it only for an event, because anyway that is how we are going to use it — the influence of A at i is the probability, on this product space, that the occurrence of A is different for omega and for omega tilde, where omega tilde is the configuration obtained by resampling the coordinate i. Let me write omega rather than eta here, otherwise it is going to be confusing. So we are on Omega to the I: a configuration is just a family of realizations of certain variables; they can take values in zero-one, but they could also take values in any other space Omega. And omega tilde is simply equal to omega_j for every j not equal to i, while omega tilde at i is resampled independently of omega_i. So you just resample the coordinate at i, and you look at the probability that the outcome of your event is different for omega and for omega tilde. Notice that this really corresponds to the covariance quantity that we were using in the case where omega is 0-1 valued. OK, so this is the OSSS inequality in general, and the proof is exactly the same as in the i.i.d. 0-1 case: in fact, at no point did we use that the variables were taking values in zero-one. One last bit of information: the OSSS inequality I proved formally was only for finitely many coordinates, but you can check that it works mutatis mutandis for countably many; it definitely needs to be countable, though.
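For the record, here is the general form being used, restated symbolically (index set I countable, product space with product measure, T a decision tree computing f):

$$\mathrm{Var}(f) \ \le\ \sum_{i \in I} \delta_i(T)\ \mathrm{Inf}_i(f), \qquad
\mathrm{Inf}_i(A) \ :=\ \mathbb{P}\big[\, \mathbf{1}_A(\omega) \neq \mathbf{1}_A(\tilde\omega) \,\big],$$

where $\delta_i(T)$ is the probability that $T$ reveals the coordinate $i$, and $\tilde\omega$ agrees with $\omega$ on every coordinate $j \neq i$ while $\tilde\omega_i$ is an independent resample of $\omega_i$.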
So here we have a continuous space — Voronoi percolation lives in the continuum — while we want to use something which is really on a discrete index set, so we need to do something, and that is what people working in continuum percolation know all too well: you need to discretize the space, which is usually a nightmare. But here we are going to do something which I think is fairly simple. So, discretization of Voronoi percolation. What I am going to do is fix an epsilon and define S_x^epsilon to be simply x plus [0, epsilon)^d: you just cut your space into small boxes, and notice that the upper faces are not included in the box, so that it is really a partition when I take the union over x in epsilon Z^d. Now, to each such x I am going to associate just the intersection of the Poisson point process with this box: define eta_x^b to be eta_b intersected with S_x^epsilon — I am going to drop the dependency on epsilon here just to simplify — eta_x^w to be the same with the white points, and eta_x to be the pair (eta_x^b, eta_x^w). So now I can see Voronoi percolation as a product space, where each coordinate is a possible realization of a Poisson point process in a box of size epsilon, and pi is the law of this object. So far I did not do anything, right? I just cut my space into slices. I can now look at the influence of an event if I want: for instance the influence, let us call it Inf_x^epsilon(A), which is the probability that the occurrence of A differs between eta and the configuration obtained by resampling eta_x. And what I would like — because at the end, after applying the OSSS inequality, I will have a sum of influences — is something relating the sum of influences to the derivative. This is provided by the following lemma (this is the Lemma 4.3 referred to later), which says that in fact the derivative is larger than or equal to one half of the limsup, as epsilon goes to 0, of the sum over x of the influences. I do think that this is a fairly neat way of doing it, because usually when you discretize it gets a little bit messy, but here we will just use this single inequality.
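In formulas, the discretization and Lemma 4.3 as just stated:

$$S_x^\varepsilon := x + [0,\varepsilon)^d \quad (x \in \varepsilon\mathbb{Z}^d), \qquad \eta_x := \big(\eta_b \cap S_x^\varepsilon,\ \eta_w \cap S_x^\varepsilon\big),$$

$$\frac{d}{dp}\,\mathbb{P}_p[A] \ \ge\ \frac12\ \limsup_{\varepsilon \to 0}\ \sum_{x \in \varepsilon\mathbb{Z}^d} \mathrm{Inf}_x^\varepsilon(A).$$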
So how do we prove this inequality? The first thing I am going to do — and you will have to believe me on this one, it is really just three lines to justify — is to use what I told you before: influences far away are under control, uniformly, even in the event A; we have a completely uniform bound on the contribution of influences far away. So it is sufficient to prove the following. I fix an M — of course this is only for events A depending on colors in a certain box — and I prove the same result with epsilon Z^d intersected with the ball of size M. If I can do that for every M, I can let M go to infinity, simply because of this uniform control. So fix M positive, and let us prove the statement with the sum restricted to the ball of size M. It may look a little complicated, but in fact you are going to see that it is not that bad. Why is it not bad? You could think that our state space is now complicated, because every variable is a collection of points and so on; but notice that, except with very small probability, there is going to be at most a single point in each of the S_x. So the first thing I would like to write is that the influence of the event A is the probability that the indicators for eta and eta tilde differ; and what I am going to say is that I can assume that eta_x contains exactly one point and eta tilde_x contains none. The error I make when I do that only involves the event that both should contain a point in this small box of size epsilon, which costs epsilon to the d for each of them; so, excluding an event of probability of order epsilon to the 2d, I have only one point, either in eta_x or in eta tilde_x, and by symmetry, up to a factor 2, I can assume it is in eta_x. What I do not know is whether this point is black or white. But now observe the following: I have this point of eta_x, which is either black or white; if it is not pivotal — you have a family of cells — if switching this point from black to white is not pivotal, then definitely just removing it is not going to change the occurrence of my event: if turning it white did not allow me to destroy the event, then removing it will not either (here, of course, this is for A increasing, and we are working with an increasing event). So this is exactly what I do: I have a point here, I remove it, and I do not change anything. And it is the same if I start from a black point and remove it: if I am not pivotal, I will not change the occurrence of my event; and if I start from a white point and remove it, I will also not change the occurrence of my event if I am not pivotal. All of this to say that this point should, in addition, be pivotal. So the influence is smaller than or equal to twice the probability that there is at least one pivotal point in S_x^epsilon, plus the error. And because I have only one point — having two is anyway unlikely — instead of asking for at least one pivotal point I can just count the number of points in my box which are pivotal; removing these conditions altogether and using that the probability of being larger than or equal to one is smaller than the expectation (very deep stuff here), I get that this is smaller than twice the expectation of the size of Piv(A) intersected with S_x^epsilon, plus my error of order epsilon to the 2d. One may be a little bit worried here: the event I removed had probability of order epsilon to the d, but the quantity I keep is also of order epsilon to the d, since it already carries the information that there is at least one point; and I am going to sum of order one over epsilon to the d such terms. So when I sum these influences over x in the ball of size M, I get at most twice the expectation of the size of Piv(A) intersected with the ball of size M, plus one over epsilon to the d times the errors, which is a big O of epsilon to the d. Letting epsilon tend to 0, I obtain my estimate. So you see, it is really like six lines of discretization, and in this field you are happy if you end up writing only six lines for your discretization. OK, so that was the background we needed; now we are going to be able to prove our result. First thing we do, as we said: we fix p in [delta, 1 minus delta], we fix n, and we apply the OSSS inequality to the function f which is the indicator function that 0 is connected to the boundary of the ball of size n, with a decision tree T_k which for now I do not describe exactly. Applying the OSSS inequality to this algorithm T_k, we get that theta_n times (1 minus theta_n) is smaller than or equal to the sum, over x in epsilon Z^d, of the revealment delta_x(T_k) times the influence Inf_x^epsilon of the indicator of 0 connected to the boundary.
Now let us assume the following lemma — this is Lemma 4.4: for every k there exists a decision tree T_k deciding the indicator function such that delta_x(T_k) is smaller than or equal to C_0 times the probability that x is connected to the boundary of the ball of size k, and this for every p larger than or equal to delta. Let us assume we have that; I am assuming it because, as you are going to see, it is a little less obvious than for the discrete model — for discrete percolation we were just taking the exploration of all the clusters intersecting the boundary of the box of size k, while here you need to do a little bit more. But let us assume we have it; how do we conclude? And notice that here I really assume p to be larger than or equal to delta. If we have that and we go back to the OSSS bound, we can say that theta_n(1 minus theta_n) is smaller than C_0 over n times the sum over k equal 1 to n of the sum over x in epsilon Z^d of the probability that x is connected to the boundary of the ball of size k, times the influence. By the way, here maybe we should restrict to x in the ball of size 2n and say the contribution is 0 otherwise — let us put that and see. Now we do the standard bound: this whole thing — and there is a 1 over n in front — we bound by 2 S_n over n times the sum over x in epsilon Z^d of the influence of this event. How do I get that? It is a standard bound that we did several times: for any x in epsilon Z^d, the sum over k equal 1 to n of the probability that x is connected to the boundary of the ball of size k is, with the same inequality as before, smaller than 2 S_n, where S_n is still the sum of the theta_k. Notice that this is true also for x extremely far away: in that case, being connected to the boundary of the ball of size k anyway requires being connected to a large distance, which you can always bound from below by something of order k. OK, so you do that, then you let epsilon go to 0, you use also that this is bounded by a certain constant C_1, and, letting epsilon tend to 0, Lemma 4.3 implies that theta_n is smaller than C_0 C_1 times S_n over n times theta_n'. Then you use the lemma on differential inequalities — I do not remember its number — and conclude as before. OK, so we are going to make a break, and in the second part of the lecture I will explain how you get this algorithm — it is based on the same idea as before, but there is a little twist — and then I will state the result for Boolean percolation; I do not think I will actually dive into the proof, but I will give you orally an idea of the problems that you encounter there. So let us make a break and start again at forty.
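To summarize the chain of estimates of this first part in one place (with the notation above and the generic constants C_0, C_1 from the lecture):

$$\theta_n(1-\theta_n) \ \le\ \frac{C_0}{n} \sum_{k=1}^{n}\ \sum_{x \in \varepsilon\mathbb{Z}^d} \mathbb{P}_p\big[\,x \leftrightarrow \partial B_k\,\big]\ \mathrm{Inf}_x^\varepsilon\big[\,0 \leftrightarrow \partial B_n\,\big]
\ \le\ \frac{2\,C_0\, S_n}{n}\ \sum_{x \in \varepsilon\mathbb{Z}^d} \mathrm{Inf}_x^\varepsilon\big[\,0 \leftrightarrow \partial B_n\,\big],$$

and, letting $\varepsilon \to 0$ and using Lemma 4.3,

$$\theta_n \ \le\ C_0\, C_1\ \frac{S_n}{n}\ \theta_n'(p) \qquad \text{for every } p \in [\delta, 1-\delta].$$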
OK, maybe let us start again. The goal now is to prove Lemma 4.4. We are going to do the same kind of geometry as before: there is k, there is the ball, and we want to explore, somehow, the connected components of the boundary of the ball. But here there is a trick: we go small box by small box, S_x^epsilon by S_x^epsilon, and every time we explore such a box and say, OK, this one is black, I then want to know the whole cells of the points that are inside my small box. So let us first define, for a box Y, Discover(Y) to be a sub-algorithm — a sub-decision-tree if you want — which does the following: it explores all the boxes around the original box, and if, knowing only the Poisson point process in all these boxes, you know the cells of all the points in this box S_Y, you stop; if you do not, you look at all the boxes around those, and you ask: now that I know the Poisson point process in all these boxes, do I know the cells of all the points in S_Y? If not, I carry on; if yes, I stop. So this is Discover(Y) — maybe I will not write it formally, because it would not help much; is the algorithm clear for everybody? The point is that when I want to check whether the small box contains a point which is going to be connected to the boundary, I want to know, in addition, all the cells of the points in this box; therefore I explore around until I am certain that I have seen sufficiently many points to know those cells for sure. And once I have this Discover(Y), I do exactly the same algorithm as before: I explore box by box — before it was edge by edge, when I was doing the random cluster model — until I know for sure all the clusters of the boundary of the box of size k. But notice — and this is an important feature of the proof — the following. Say this point is in a certain box and is indeed connected to the boundary; good, this box is revealed. But look at which other boxes get revealed: in order to be certain of the shape of all the cells in this box — say there was a big cell around it — I had to explore around it, and in particular I may have had to discover points that are a priori far away. So it is not true anymore that the probability of being revealed is the probability that there is a point in my box which is connected to the boundary of the box of size k. It is not true anymore, but what is true is the following: Y is revealed only if there exists X in epsilon Z^d such that Y is in Discover(X), and there is a point X' in the small box associated to X — or within 2 epsilon of it, say, since you need to be neighboring somebody that the algorithm is checking — which is connected to the boundary of the box of size k. Do you agree with that? If I was discovered at some point, it is because I was in the Discover of somebody that the algorithm was checking, and the algorithm only checks boxes neighboring something already connected. But notice that this implies, in particular, the following: if Y is revealed, then there exists Z, this time in Z^d — a much bigger box of size 1, while epsilon is a priori tiny — such that this X sits in the unit box associated to Z; and, in addition, if Y is in Discover(X), then there cannot be any other point in a large region in between.
Indeed, if there were any other point there, Y would be irrelevant for determining those cells, because that other point is anyway closer. So, to summarize: there must be a Z such that the unit box S_1(Z) is connected to the boundary, and the ball centered at Z of radius |Z minus Y| minus, say, 3 square root of d — you may have some tedious bookkeeping here; the unit box has diameter square root of d, so let us take some margin — must intersect eta only outside; in particular, this ball should be empty of points, otherwise Y is not going to determine any of the colors there. Ah, wait, that was my problem — sorry, I said something slightly wrong before: I told you that Discover(Y) stops when you discover the cells; let us say rather that it stops when you discover the colors of everybody in the box. Of course, if you know the cells you also know the colors, that is obvious; but you could discover the colors before discovering the cells: say you find a black point here and nobody around — then you know everybody inside is black, even though you do not know the cells yet. So this is going to be a little bit better: Discover(Y), let us say, discovers the colors. With this modification, the condition really becomes: nobody should be in this ball if you want Y to be important for the colors of the points in the small box. Now, asking this is true, but you can ask a weaker statement: you can only ask that there is no black point in this ball, since "no point at all" implies "no black point". And this is a nice trick due to one of my co-authors. Why is it a nice trick? Because "the unit box of Z is connected to the boundary" is an increasing event, while "no black point in this ball" is a decreasing event — if I want one increasing and one decreasing, I want black here — so the FKG inequality tells you that the probability of the intersection is smaller than the product. Therefore the revealment of Y is smaller than the sum over Z of the probability that S_1(Z) is connected to the boundary of the box of size k, times the probability that the ball of radius |Z minus Y| minus 3 square root of d around Z contains no black point. And this second probability — a ball of radius of order |Z minus Y| with absolutely no black point — is roughly exponential of minus c times |Z minus Y| to the d, so it is tiny, tiny, tiny. It is telling you that Discover does not go far.
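In a display, the bound just obtained via FKG (writing y for the center of the box Y; the last estimate is the elementary Poisson bound, with c depending on p and d):

$$\delta_Y(T_k) \ \le\ \sum_{z \in \mathbb{Z}^d} \mathbb{P}_p\big[\, S_1(z) \leftrightarrow \partial B_k \,\big]\ \cdot\ \mathbb{P}\big[\, \eta_b \cap B\big(z,\ |z - y| - 3\sqrt d\,\big) = \emptyset \,\big],
\qquad
\mathbb{P}\big[\, \eta_b \cap B(z, \rho) = \emptyset \,\big] \ \le\ e^{-c\,\rho^{d}}.$$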
Now, the first factor is a priori larger than the probability that Y is connected to the boundary — here I have the unit box of Z connected, not Y itself — but notice that the two are not far from each other. Why? Imagine this unit box is connected to the boundary, and imagine that this ball is black, this ball is black, and so on, until this last ball next to Y is black — imagine all these events occur at once. If this happens, then Y is also connected to the boundary. And conditionally on the unit box being connected to the boundary, having this ball open, this one open, this one open, I pay a constant cost for each of the balls; if you prefer, just using the FKG inequality, the probability that S_1(Z) is connected to the boundary is smaller than or equal to the probability that Y is connected to the boundary of the box, divided by the probability that, say, S_1(0) is entirely black, to the power roughly |Z minus Y| — or maybe 2|Z minus Y|. So you see the competition: something decaying like exponential of minus the distance to the power d, against something growing exponentially fast in the distance, because it is one over a constant to the power of the distance. The product is summable: when I sum over all Z, I get just a constant times this quantity, which is what I wanted. So the revealment of Y is smaller than C_0 times the probability that Y is connected to the boundary of the box of size k. So the trick here is that the dependencies — having to discover somebody very far — cost you much more than the probability of being connected: this is much cheaper than that, so on average you do not go far. And notice where we used p larger than or equal to delta: to bound from below the probability that the unit box is entirely black, which is larger than roughly a constant times delta. That is where we use it. OK. Now, in the last half hour or less, I am going to try to tell you a little bit about the second model I wanted to discuss today, which is Boolean percolation. You can believe that the method should really apply to a whole bunch of continuum percolation models, because at the end, what did we use? Here we definitely used some strong decoupling property of Voronoi percolation, but I am going to give you an example where we use much less. Otherwise the features are the same: you use the OSSS inequality after discretizing, and if you have this nice formula expressing the derivative in terms of influences for your continuum percolation model, then the technique should apply fairly well. The question, then, is whether you have such a formula, and of course this is absolutely unclear in general. So here I am going to give you another example where we have such a differential formula, and therefore we can apply the same technique: Boolean percolation. What is Boolean percolation? You have a Poisson point process eta of intensity, say, lambda times Lebesgue on R^d, and now at every x in eta you sample a ball centered at x with a random radius taken according to some probability measure mu on R_+. So you take mu a probability measure on R_+, and you take O, the occupied (black) set, to be the union over x in eta of the ball of radius r_x around x, where the r_x are i.i.d. random variables of law mu. So you pick your points and, boom, you get a ball, boom, a ball — and this is my set O, the set of black points. We say that y in R^d is black if and only if y belongs to O, and we are interested in the connectivity properties of this set. Think of mu as being, for instance, delta_{r_0}, so you just put balls of radius r_0; or mu could be 1 over r to some power, dr, or have an exponential tail — these are the classical choices.
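In formulas, the model just defined (a compact restatement):

$$\eta \ \sim\ \mathrm{PPP}\big(\lambda\,\mathrm{Leb}\big) \ \text{on } \mathbb{R}^d, \qquad \mathcal{O} \ :=\ \bigcup_{x \in \eta} B(x, r_x), \qquad (r_x)_{x \in \eta} \ \text{i.i.d. of law } \mu \text{ on } \mathbb{R}_+,$$

and $y \in \mathbb{R}^d$ is black if and only if $y \in \mathcal{O}$.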
Up to now, I think, the best that was known was for bounded radii: for bounded radii you had exponential decay in the subcritical regime. When the radius is unbounded and the tail of mu is heavy, you will never manage to get exponential decay; why? Because you may have a small probability of having a huge ball containing both 0 and x. So here the statement is going to be a little different. There is a lambda_c, which is the critical value, and I define lambda_c tilde to be the sup of the lambda such that the limit, as n tends to infinity, of the probability that the ball of size n is connected to the boundary of the ball of size 2n is 0. It is a slightly weird definition: what I am saying is that lambda_c tilde is the last lambda below which the probability of connecting the ball of size n to the boundary of the ball of size 2n goes to 0. Above lambda_c tilde, this probability does not go to 0; below, it does not necessarily go to 0, but at least its infimum is 0. And the point is that this condition is a very convenient one: once you know that this limit is 0, you can actually start to do a renormalization argument and really prove that correlations decay quickly. What does decaying quickly mean? Well, roughly — and you cannot hope to do better — it means that the probability that 0 is connected to distance n behaves like the probability of a ball of radius n covering 0. It is not quite that, but I think at least the prediction would be that it should be roughly that; maybe what is known is that you lose a power of n, so you have something like n to the d — or maybe n to the 2d — times the probability that the radius is larger than or equal to n. Let us say that, if you want, the equivalent of exponential decay here would be having this condition; this would be the right condition. And notice that in the bounded-radius case, for instance, it has been known for a very long time that below lambda_c tilde you have exponential decay; actually, even with an exponential tail, it has been known for a long time that once you have this condition, you have exponential decay. So the goal — the statement — is to prove that lambda_c equals lambda_c tilde. Theorem, again by the same authors as before: fix d larger than or equal to 2, and assume that the moment of order 3d plus epsilon of mu is finite; then lambda_c equals lambda_c tilde. So this is a sharpness result in this context. The 3d plus epsilon is, OK, a little bit weird. The first thing you could notice is that if you do not even have d plus epsilon, then in fact you do not have a phase transition, in the sense that at any lambda you are going to cover the whole plane — or, I mean, you will at least have an infinite connected component — so d plus epsilon is a barrier that you cannot hope to beat. The question is whether you can go down to d plus epsilon; this we do not know. So far we have a technique that allows us to go to 3d plus epsilon — a third moment for the volume, say — but we do not manage to really get down to the smallest assumption. In two dimensions this has been done: there are results proving almost the sharp statement, but they use Russo-Seymour-Welsh-type ideas, exactly as I discussed in the first lecture, and this does not extend to higher dimensions — you have no hope of extending that to higher dimensions.
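Written out, the definition and the theorem as just stated:

$$\tilde\lambda_c \ :=\ \sup\Big\{\, \lambda \ge 0 \ :\ \lim_{n \to \infty} \mathbb{P}_\lambda\big[\, B_n \leftrightarrow \partial B_{2n} \,\big] = 0 \,\Big\},$$

Theorem: for $d \ge 2$, if $\displaystyle\int r^{\,3d+\varepsilon}\, d\mu(r) < \infty$ for some $\varepsilon > 0$, then $\lambda_c = \tilde\lambda_c$.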
So, how do we prove that? I am not going to give a full proof, because I think I should anyway reward you for staying until the end, so we are going to stop a little bit earlier. You will believe me that you have an FKG inequality, and you will believe me that you have some kind of Russo formula; if you want, you can then definitely use the OSSS inequality. But I want to tell you what the equivalent of eta_x^epsilon is going to be. First, notice that choosing first the points and then the radii could have been done differently: I throw a Poisson point process on R^d times R_+ — I take a Poisson point process of points (x, r), of intensity lambda times Lebesgue tensor mu — and I just take O to be the union, over (x, r) in eta, of the ball of radius r around x. It is exactly the same construction, but the good thing is that with this description the discretization is going to be a little bit clearer. For epsilon, what I am going to do is cut the space: for x in epsilon Z^d and r a non-negative integer, I define S_{x,r}^epsilon to be simply (x, r) plus [0, epsilon)^d times [0, 1). If I look at the model in one dimension — which is not that interesting, but it is easier to draw — I have the x's along one axis and the r's along the other, and I basically cut the space into small slabs of width epsilon in the space direction and height 1 in the radius direction. So you cut the space direction into really small boxes and the r direction into boxes of size 1; this is fine. And now you define, as before, eta_{x,r} to be eta intersected with S_{x,r}^epsilon. So now I have a family of random variables; notice that they are not identically distributed, but this played no role in the proof of the OSSS inequality — independence is the important thing. What you end up with at the end, when you apply the OSSS inequality to our event, the indicator of 0 connected to distance n, is that theta_n times (1 minus theta_n) is smaller than or equal to the sum over (x, r) of the revealment delta_{x,r}(T_k) times the influence Inf_{x,r}^epsilon of the event. So here I need to tell you what the algorithm is going to be. What you do is that you just explore as before, but in order to know the state of a point — whether it is in O or not, whether there is a point in my box, really the same kind of notation as before, exactly equivalent to the Voronoi case — you need to explore all the balls that, if they existed, would cover it. So the Discover that I had before, which had to explore the cells, here has to explore all the boxes S_{y,r} with |y minus x| smaller than or equal to r — or maybe r plus 1, to be safe. Notice, for instance, that some balls are going to be explored with revealment essentially one: imagine the radii are unbounded; the huge ball of size n covering everybody is going to be explored with revealment one, because I need to know whether it is there or not. So the revealments here are not always going to be small. But notice that we are in a regime where we assume a strong moment condition on our measure: this huge ball is going to be revealed with probability one, but its influence is going to be tiny, because the probability that it is there at all is small. So now there is a competition.
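In formulas, the lifted process, its discretization and the resulting OSSS bound (my symbolic rendering of the construction just described; the intensity of the lifted process is lambda Lebesgue tensored with mu):

$$\eta \ \sim\ \mathrm{PPP}\big(\lambda\,\mathrm{Leb} \otimes \mu\big) \ \text{on } \mathbb{R}^d \times \mathbb{R}_+, \qquad \mathcal{O} \ =\ \bigcup_{(x,r) \in \eta} B(x, r),$$

$$S_{x,r}^\varepsilon \ :=\ \big(x + [0,\varepsilon)^d\big) \times \big(r + [0,1)\big) \quad \big((x,r) \in \varepsilon\mathbb{Z}^d \times \mathbb{Z}_{\ge 0}\big), \qquad \eta_{x,r} := \eta \cap S_{x,r}^\varepsilon,$$

$$\theta_n(1-\theta_n) \ \le\ \sum_{x,\,r} \delta_{x,r}(T_k)\ \mathrm{Inf}_{x,r}^\varepsilon\big[\, 0 \leftrightarrow \partial B_n \,\big].$$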
The revealment is not always small, and it should not be: if you think about it, we cannot hope to get theta_n smaller than a constant times S_n over n times theta_n' — the same differential inequality as before — because otherwise we would get exponential decay, and here we do not have exponential decay. OK, so this is the first step of the proof. Now, what do we want to prove? We want to prove that, for this algorithm, in a suitable regime, the revealment is roughly of the order of the probability of being connected to distance n. How do we do that? Well, the first trick — and this is really important — is the following. In Voronoi, we had at some point to use that p is larger than or equal to delta. Why? Because we had a geometric constraint: we had to discover, a priori, things far away, at distance R. But what we said is: yes, we have to go far, but having to discover far away is anyway much more expensive than just connecting to that distance. That was the trick, and in order to say that connecting to that distance is not too expensive, we used that p is larger than delta, because we only needed to beat an exponential of minus the distance to the power d, which decays super fast; so even knowing only that connecting to distance R has exponentially small probability was sufficient. Here, Discover goes and sees things that are far away, and it costs basically nothing: the only cost of having to look far is that there must be a big ball, and it is the cost of this ball which is expensive. So here we want the equivalent of p larger than delta: we want to use that connectivity — for the percolation itself — is less expensive than having to explore far away. And what we are going to do is place ourselves above lambda_c tilde. So fix lambda strictly larger than lambda_c tilde; then there exists a constant c_lambda such that the probability that the ball of size n is connected to the boundary of the ball of size 2n is larger than c_lambda for every n. This is saying that it is not that expensive to connect; I will tell you later exactly how we use it. Now comes the lemma — I guess the previous one was 4.5, so this is Lemma 4.6: if lambda is strictly larger than lambda_c tilde, there exists a c > 0 such that the influence Inf_{x,r}^epsilon of the event that 0 is connected to the boundary of the box of size n is smaller than or equal to 1 over r to the power 1 plus c, times the influence Inf_{x,0}^epsilon of the same event. So here, instead of bounding the revealment, I am going to bound the influences: I am saying that the influences of big balls do not matter so much. Why is this lemma the key? Because once you have it, you can play the same game as before. Indeed, what is the revealment of the ball of radius r around x? It is going to be revealed if it is in the Discover of somebody: if there is some box whose Discover reaches it, and whose small box S^epsilon is connected to the boundary. But to be discovered, since I am a ball of radius r, I need to be at distance at most r of such a box. So the revealment delta_{x,r} is bounded by the probability that the ball of radius r plus 1 around x is connected to the boundary of the ball of size k.
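Written out, the two competing bounds just stated (Lemma 4.6 and the revealment bound; the constant c exists once $\lambda > \tilde\lambda_c$):

$$\mathrm{Inf}_{x,r}^\varepsilon\big[\, 0 \leftrightarrow \partial B_n \,\big] \ \le\ \frac{1}{r^{\,1+c}}\ \mathrm{Inf}_{x,0}^\varepsilon\big[\, 0 \leftrightarrow \partial B_n \,\big],
\qquad
\delta_{x,r}(T_k) \ \le\ \mathbb{P}_\lambda\big[\, B(x,\ r+1) \leftrightarrow \partial B_k \,\big].$$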
So here you start to feel the competition: the bigger the r, the bigger the revealment — you could be far away from the point which is actually connected, and if you are far away, the probability that this ball is connected may be much bigger. In Voronoi we were anyway paying an exponential of minus r to the d for being far away; here we pay nothing for being far away. So we really lose in the revealment, but we gain in the influence: we gain this factor 1 over r to the 1 plus c. So, if I prove this lemma, what can I do? I can write that theta_n(1 minus theta_n) is smaller, when lambda is larger than lambda_c tilde, than the sum over k of the sum over (x, r) of the revealment times the influence, and now I can plug in my two bounds: this is smaller than or equal to the sum over k and (x, r) of the probability that the ball of size r plus 1 around x is connected to the boundary, times 1 over r to the 1 plus c, times the influence Inf_{x,0}^epsilon — I just replaced each factor by its estimate. I am probably doing a pretty bad job of outlining this, but here I again have a competition between a term which gets bigger and bigger when r goes to infinity and a term which gets smaller and smaller when r goes to infinity. So how do I conclude from that? Well, remember what the trick was in the proof of the lemma on differential inequalities: we defined lambda_c hat, say, which is the infimum of the lambda such that the limsup of log S_n over log n is equal to 1. Remember the proof of that lemma: you define this lambda_c hat and you show that above it you have percolation, and below it you have exponential decay. Here, what we are going to say is that this lambda_c hat has to be equal to lambda_c and to lambda_c tilde, because what we would prove below lambda_c hat is exponential decay — and it is a proof by contradiction, because here you never have exponential decay. What does that tell you? Since the argument is only valid for lambda larger than lambda_c tilde, there cannot be any gap between lambda_c tilde and lambda_c hat: otherwise, in between, there would be exponential decay, which is absurd. It is a slightly weird way of thinking about it, but we just prove that lambda_c tilde has to be equal to lambda_c hat, which has to be equal to lambda_c.
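For reference, the auxiliary critical point used in this argument, as recalled above from the earlier lectures (my transcription of the definition):

$$\hat\lambda_c \ :=\ \inf\Big\{\, \lambda \ :\ \limsup_{n \to \infty} \frac{\log S_n(\lambda)}{\log n} = 1 \,\Big\}, \qquad S_n(\lambda) := \sum_{k=1}^{n} \theta_k(\lambda),$$

and the claim is that $\tilde\lambda_c = \hat\lambda_c = \lambda_c$.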
OK, so just to illustrate the fact that we are basically done once we have this: look at this big sum, the sum over k, x and r, written like that. The point is, notice that it contains in particular the terms r equals 0, so this quantity — let us call it S_n tilde — is definitely larger than S_n. What does this tell you? It tells you that, above lambda_c tilde, the proof we did with the lemma on the differential inequality — since it was working with S_n — works a fortiori with S_n tilde. So it really gives you this differential inequality — oh, sorry, I am also using that the sum of the influences Inf_{x,0}^epsilon is smaller than the derivative times a constant; you can see this from the fact that increasing the radius just between 0 and 1 is already sufficient to gain something. Here maybe I am making a small extra assumption, that there is a positive probability of having a radius between 0 and 1, but you can extend the argument trivially. So this, plus this, tells you that above lambda_c hat, if you run the same proof as before, you get that there is an infinite cluster. Let us call that point (1): this plus (1) implies that for lambda larger than lambda_c hat, theta(lambda) is positive. It remains to treat the other case, and there it is going to be this proof by contradiction. So assume there exists lambda in (lambda_c tilde, lambda_c hat). At this lambda, what do we have? We know we have decay of S_n: there exists c_1 positive such that S_n is smaller than n to the 1 minus c_1, right? This was the meaning of being below lambda_c hat: remember, when we proved stretched-exponential decay, we said: you are below this guy, therefore this limsup is smaller than 1, which implies this. Now what you can do is the following: I am going to show that in this context this big sum is also essentially controlled by S_n, and if that is true, then you can run the same proof as before. In order to prove it, you just pick an alpha which is much smaller than c_1 — really take a tiny alpha — and observe that the sum over k of this thing is smaller than what you get by truncating, depending on whether r is larger than n to the alpha or r is smaller than n to the alpha; do these two sums. For the first one, what do we get? In this context we automatically have the 1 over r to the 1 plus c, and when I sum over r larger than n to the alpha I get about n to the minus c alpha; I have n choices for k, so this first piece is smaller than n times n to the minus c alpha. So — sorry, I said something wrong, I said it would be controlled by S_n; this piece is not controlled by S_n, but notice that it is polynomially small: it is n to the minus c alpha times n, and this n will simplify with the 1 over n in front. So with the 1 over n, this first term is polynomially small — excellent, that is usually what I want. Now look at the second term. Here I still have the 1 over r to the 1 plus c, and I have the probability; but the probability I can anyway bound by the probability that the ball of size n to the alpha is connected to the boundary of the ball of size k, because anyway r is smaller than n to the alpha. OK, so this looks almost good: I have n choices for k, the 1 over r to the 1 plus c is summable over r, so I get the sum over k of the probability that the ball of size n to the alpha is connected to the boundary of the ball of size k. OK, let me maybe do this step here.
The probability of this ball of size n to the alpha being connected to the boundary of the ball of size k is of course bigger than the probability that 0 is connected to that boundary — but not by much, if you think about it. Why? Exactly as before: think of the kind of event we created for Voronoi, with small boxes connected one to another from 0 to the boundary. Here, repeating that would be too costly, but the fact that you are above lambda_c tilde, the fact that you know you cross annuli, gives you, by a very simple argument, that the probability that 0 is connected to x is at least of order 1 over the distance to the power 2d: it is just a union bound — otherwise you would not cross the box at all. So the union bound tells you that connecting one point to another point here costs you at most a factor n to the alpha to the power 2d. That means that here you could replace the ball of size n to the alpha by the ball of size 1, paying just a cost of n to the 2d alpha, and then you are back to the standard bound: forget even this factor if you want, you have the sum over k of the probability that the ball of size 1 is connected to the boundary of the ball of size k, and this is bounded by twice S_n. So the second part is bounded by n to the 2d alpha times twice S_n; but S_n is bounded by n to the 1 minus c_1, so if alpha is indeed chosen much smaller than c_1, you get that this decays polynomially fast to 0. I guess that is a good sign that I should stop. This is the end of the proof. It is a little messy, but maybe it gives you an idea that the method keeps working even in a framework where you do not have exponential decay, and that there was again this competition between the revealment getting bigger with the distance and the cost in the influences. I did not explain to you how you bound the influence of the large radii; that is the only place where we use the moment assumption on the distribution, and that is why we do not know where we are going to stop: yesterday the best bound was 4d, now it is 3d — hopefully it keeps going like that for two more days. But this is the only place: if you can prove that bound as soon as you have a moment of order d plus epsilon, then you basically have the optimal result. Thank you for staying until the end. See you next year, hopefully.