Today, as I said last time, I am going to give you two proofs for Bernoulli percolation. So, from that point of view, this is not a new result or anything like that. The first proof is a relatively recent one, by Vincent Tassion and myself, which I think is very, very short; it is difficult to beat that. So obviously the second proof, which is new, is more complicated. You may wonder why I want to present this proof, but it is also much more robust. So we will try to give you an example, before the end of this lecture, of why it is more robust and why the other one is definitely not robust. And next week we will really get to the heart of the subject and prove the more general version. OK, so this is the second part: proofs for Bernoulli percolation.

And just so that we agree on the theorem, my goal is to prove the following: for Bernoulli percolation, for any p smaller than p_c, there exists a constant c_p such that the probability that 0 is connected to distance n decays faster than exp(−c_p n). This is what we call exponential decay. And we will also prove that there exists a constant c such that θ(p) ≥ c (p − p_c). This one is a sort of candy for us, the cherry on the cake; this is what we are going to prove about criticality. So what does it mean? The probability of having an infinite cluster — if you want, the density of your infinite cluster — vanishes at most linearly as p decreases to p_c. And this is called the mean-field bound, because if you think of what happens on the tree, for example, it vanishes exactly linearly. So here we are saying that on the whole graph Z^d, it vanishes at most as fast — at least, no faster than linearly.
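For reference, here is the statement as I understand it from the spoken description, in LaTeX (θ_n and θ are the notations introduced just below):

```latex
\begin{theorem}[Menshikov; Aizenman--Barsky]
Consider Bernoulli percolation on $\mathbb{Z}^d$.
\begin{itemize}
  \item (Exponential decay) For every $p < p_c$ there exists $c_p > 0$ such that
        for all $n \ge 1$,
        \[ \mathbb{P}_p\!\left[0 \leftrightarrow \partial B_n\right] \;\le\; \exp(-c_p\, n). \]
  \item (Mean-field lower bound) There exists $c > 0$ such that
        \[ \theta(p) \;\ge\; c\,(p - p_c) \qquad \text{for every } p \ge p_c. \]
\end{itemize}
\end{theorem}
```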
So this is the theorem we want to prove. Maybe I mentioned it last time, but just in case: it was proved by Menshikov, and by Aizenman–Barsky, in the 80s — so 1986, and probably 1986 or 1987. So: exponential decay, and the mean-field bound. And just a piece of notation, because we are going to use it a lot: this quantity, the probability that 0 is connected to distance n, I am going to call θ_n(p); and this one, the probability of an infinite cluster, I am going to call θ(p). OK? So remember this notation.

So, let's go with the first proof. This is a box of size n — maybe I used B_n in the first lecture, I don't remember. I used B_n? OK, so let's try to keep the notation: same thing, B_n. OK, so, first proof, and I will call it "using the φ_p(S) quantity". So how does it go? Let me introduce a quantity which is not going to look that natural to you, but which will make sense through the proof. So imagine you have a set S, a subset of the vertices of Z^d, containing zero, and define φ_p(S) to be the following sum: the sum over every x in S and every y not in S, with x neighboring y. So I have my set S and I sum over every pair of points x inside and y outside which are neighbors. And I am summing p times the probability that 0 is connected to x inside S — that is the notation for "connected in S", meaning connected using vertices of S only. OK. So it is a quantity which is defined for every S and every p. It maybe doesn't look that natural at first sight, but you are going to see that it has two nice properties.

The first property, and the most important one, is the following; we can call it Diff 1, for "differential inequality number 1". Imagine for a moment that I can prove that θ_n'(p) — so the derivative of this function — is larger than 1/(p(1−p)) times something times (1 − θ_n(p)), where this "something" is the infimum, over every set S included in B_n and containing zero, of φ_p(S).
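In symbols, the quantity and the differential inequality just described read:

```latex
% phi_p(S), for a finite set S of vertices with 0 in S:
\varphi_p(S) \;=\; \sum_{\substack{x \in S,\; y \notin S \\ x \sim y}}
  p\;\mathbb{P}_p\!\left[0 \xleftrightarrow{\;S\;} x\right],

% and the differential inequality (Diff 1), for every n and every p:
\theta_n'(p) \;\ge\; \frac{1}{p(1-p)}
  \Bigl( \inf_{\substack{S \subset B_n \\ 0 \in S}} \varphi_p(S) \Bigr)
  \bigl(1 - \theta_n(p)\bigr).
```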
Let's imagine for a moment that we have this inequality for every p and every n. So what I am going to do is assume just that, and prove the theorem to you. — Yes? What is the y doing in the sum? — Here? So, yeah, that's a very good question; I was expecting this one, that's why I never talk about it. Here the y looks like a dummy variable, right? Except that x may have more than one neighbor on the outside of S. So it is basically just counting x with a multiplicity given by the number of y outside S which are adjacent to it. And it is going to be important because — I mean, it's not that important, but it is cuter if you do it like that; this is really the right quantity. It is a very good question, and the y also comes up naturally when you start to do long-range interactions, because then this p has an interpretation: it is basically the probability that the edge xy is open. But that is maybe just a parenthesis; we won't use it. We assume that we have a differential inequality.

Let me prove that from it we automatically get the two properties. That's my first step of the proof. So, Step 1: from Diff 1 to the theorem. What we are going to do is introduce a parameter which a priori is not p_c, and prove that it is equal to p_c. This parameter I am going to call p̃_c: it is the supremum of the p for which there exists a finite set S containing 0 such that φ_p(S) is strictly smaller than 1. So — I think probably the majority among you actually saw this proof already, but I really want to present it in a certain order, because I am going to respect exactly this order in the next proof. So: I want a differential inequality; I want to define a parameter which a priori is not p_c; and I am going to prove to you that below this parameter you have exponential decay, and above this parameter you actually have an infinite cluster with positive probability.
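As a concrete illustration (not part of the lecture), φ_p(S) can be computed exactly for very small sets S in Z² by brute-force enumeration of the edge configurations inside S. The function name `phi_p` and the set encoding below are my own choices, not the lecturer's:

```python
from itertools import product

def phi_p(S, p):
    """Exact phi_p(S) for a finite set S of Z^2 vertices containing the
    origin: the sum, over x in S and y outside S with x ~ y, of
    p * P_p(0 <-> x using vertices of S only).
    Brute-force enumeration over edge configurations inside S, so it is
    only meant for very small S."""
    S = set(S)
    assert (0, 0) in S

    def nbrs(v):
        x, y = v
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    # Edges with both endpoints in S, each counted once.
    inner = [(u, v) for u in S for v in nbrs(u) if v in S and u < v]

    # conn[x] accumulates P_p(0 <-> x inside S) over all configurations.
    conn = {x: 0.0 for x in S}
    for bits in product((0, 1), repeat=len(inner)):
        weight = 1.0
        for b in bits:
            weight *= p if b else 1.0 - p
        adj = {v: [] for v in S}
        for (u, v), b in zip(inner, bits):
            if b:
                adj[u].append(v)
                adj[v].append(u)
        # Cluster of the origin inside S for this configuration.
        seen, stack = {(0, 0)}, [(0, 0)]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        for x in seen:
            conn[x] += weight

    # Boundary sum: y is a dummy variable, i.e. each x counts with
    # multiplicity equal to its number of neighbours outside S.
    return sum(p * conn[x] for x in S for y in nbrs(x) if y not in S)
```

For example, for S = {0} the inner probability is 1 and 0 has 2d outside neighbours, so φ_p({0}) = 2dp; hence any p < 1/(2d) already has a set with φ_p(S) < 1, which gives the elementary bound p̃_c ≥ 1/(2d), i.e. ≥ 1/4 in two dimensions.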
As a byproduct, this will tell me automatically that this parameter is in fact p_c; but a priori I don't define it like that. — Yes? — Sorry. Yeah, you're right: with an infimum, the other parameter would not have been p_c, it would have been 0; it is a supremum. But that's fine: this one will be p_c. Thank you. OK, so first let's prove that for p larger than p̃_c, θ(p) is positive. How do I prove that? Well, I am going to integrate the differential inequality. So notice that for p larger than p̃_c, by definition, all the φ_p(S) are larger or equal to 1 — that's the definition of p̃_c. So for p larger than p̃_c, the differential inequality becomes θ_n' ≥ (1 − θ_n)/(p(1−p)). Which, for those who like integrating differential inequalities, can be rewritten as: the derivative of log(1/(1 − θ_n)) is larger or equal to the derivative of log(p/(1−p)). Right? θ_n'/(1 − θ_n) is the derivative of the first one — ah, here you need a minus, so let's write it like that — and 1/(p(1−p)) is the derivative of the second one. So if you integrate this between p̃_c and p, what do you get? You get that (1 − θ_n(p̃_c))/(1 − θ_n(p)) is larger or equal to (p/(1−p)) times ((1 − p̃_c)/p̃_c). This is just integrating the previous differential inequality. But 1 − θ_n(p̃_c) is definitely smaller than 1, so the left-hand side is smaller than 1/(1 − θ_n(p)). And if you reshuffle the whole thing, you get that θ_n(p) is larger or equal to — what? — (p − p̃_c)/(p(1 − p̃_c)), if I don't make a mistake, OK? This is just reshuffling. But notice that there is something very particular in what I wrote there: the lower bound does not depend on n anymore. So just let n go to infinity and we obtain exactly what we wanted: θ(p) is larger than a constant times (p − p̃_c). So we get it with p̃_c so far.
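Written out, the integration step just described is:

```latex
% For p > \tilde p_c, every \varphi_p(S) \ge 1, so (Diff 1) gives
\theta_n'(p) \;\ge\; \frac{1-\theta_n(p)}{p(1-p)}
\qquad\text{i.e.}\qquad
\Bigl[\log\tfrac{1}{1-\theta_n}\Bigr]' \;\ge\; \Bigl[\log\tfrac{p}{1-p}\Bigr]'.

% Integrating between \tilde p_c and p:
\frac{1-\theta_n(\tilde p_c)}{1-\theta_n(p)}
  \;\ge\; \frac{p}{1-p}\cdot\frac{1-\tilde p_c}{\tilde p_c},

% and since 1-\theta_n(\tilde p_c) \le 1, reshuffling yields
\theta_n(p) \;\ge\; \frac{p-\tilde p_c}{p\,(1-\tilde p_c)},

% a bound uniform in n; letting n \to \infty gives
\theta(p) \;\ge\; \frac{p-\tilde p_c}{p\,(1-\tilde p_c)} \;\ge\; p-\tilde p_c.
```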
We don't get it with p_c yet, but remember that our goal is to prove that p̃_c is equal to p_c. So we get exactly what we want — because here p(1 − p̃_c) is smaller than 1, so you can just remove it. OK: for p larger than p̃_c, I have percolation with positive probability.

Let's now look at p smaller than p̃_c. Notice that so far, φ_p(S) was not involved except in the definition of my differential inequality; but here it is going to be important. Claim: for p smaller than p̃_c, there exists c_p such that θ_n(p) is smaller than exp(−c_p n). So here we start at the beginning: what do we know for p smaller than p̃_c? We just know that there exists an S for which φ_p(S) is smaller than 1. So we pick this guy: choose S such that φ_p(S) < 1, and let's assume, for instance, that S is included in Λ_{k−1}, the box of size k − 1, for some k, right? It is finite, so you can fix such a k.

So now, what is the idea? The idea is to try to compare our exploration of the cluster with — I mean, a Galton–Watson-type process, a branching process — but one which is unlucky enough to have an expected number of children smaller than 1. So the idea is going to be: we start from 0, and assume that we want to connect to somewhere very far, say to the boundary of the box of size n times k. And what we are going to say is: well, in order to go to distance nk, you first need to exit the set S — and you are going to do it, in expectation, through fewer than one guy — and from there you would still need to go pretty far. And we are going to use this — it is fairly simple — to prove our statement. So let's do it. Let's define C, which is just the cluster of 0 inside S — the guys that are reachable from the inside. So let C be the set of z in S such that 0 is connected to z in S.
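The branching-process comparison the speaker sketches boils down to the inequality proved in the next few steps, which can be summarized as:

```latex
% With 0 \in S \subset \Lambda_{k-1} and \varphi_p(S) < 1, one shows for every n \ge 1:
\theta_{nk}(p) \;=\; \mathbb{P}_p\!\left[0 \leftrightarrow \partial \Lambda_{nk}\right]
\;\le\; \varphi_p(S)\,\theta_{(n-1)k}(p)
\;\le\; \cdots \;\le\; \varphi_p(S)^{\,n},

% i.e. exponential decay, with c_p of order \log\!\bigl(1/\varphi_p(S)\bigr)/k.
```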
OK, so C is a random set, and notice the following. If 0 is connected to the boundary — so, say this picture is the cluster C — then necessarily there must be one point on the boundary of the cluster which is connected to the big boundary without using any of the edges there again. Be careful: the connection may use S again, but it is not going to use the cluster C again. And notice that this is not true for every point on the boundary of C — you may have paths that come back like that — but there must be one point which works: pick any path from 0 to the boundary and take its last exit point from C, and you find this guy. OK, so here is the claim: there exists a point x in C — so 0 is connected to x in C — such that x is connected by an edge to a certain y which is not in our set S, so we need ω_{xy} to be open; and we need y connected to the boundary of the box of size nk in the complement of C, that is, not using any vertex of C. So this is a deterministic inclusion of events.

So now what I am going to do is simply decompose the event that 0 is connected to the boundary of the box using this observation. So θ_{nk}(p) is smaller or equal to — and just to be certain that everything is clear for you, I will even make one decomposition which is not really necessary: I will decompose over the possible values of C. So I write the sum over sets C included in S of the probability that C equals C — so exactly what I explored is equal to C — and 0 is connected to the boundary of the box of size nk. And now, by the claim, this is smaller than the sum over C included in S, and over the pairs with x in S, y not in S, and x and y neighbors — which I need if I want the edge to have a chance of being open — of the probability that: C = C; 0 is connected to x in C; ω_{xy} is open; and y is connected, in the complement of C, to the boundary of the box of size nk. Right? I just applied the claim. So here I did something which I am going to redo later on: by conditioning on C, the condition that x is connected becomes a condition on x and y being in certain places — OK, that was not clear; in my head it was clear, don't worry. Maybe it won't help if I repeat, anyway.

So I condition on C — and this is now where I want to be clear; before it was irrelevant. C is given; each event here depends on certain edges. Let's start with the simplest one: this event depends only on the edge xy. This event — the yellow one, y connected to the boundary — depends on what? Well, because it is not allowed to use any vertex of C, it depends only on the edges between two vertices that are not in C. And this one — the fact that zero is connected to x and that the cluster is equal to C — well, a priori, you need, for every vertex in C, a path to the origin, and you also need that any vertex which is not in C does not have a path to the origin going only through vertices of S. So this event is not quite measurable in terms of edges with both endpoints in C only, but it definitely depends only on edges with at least one endpoint in C. And that is why I decomposed over the possible C: here C really is deterministic, so I have three events that depend on disjoint sets of edges; therefore the probability of the intersection is the product of the probabilities.

So this is equal to the sum over C included in S, sum over x in S, y not in S, x neighboring y, of the probability of {C = C, 0 connected to x in C}, times p, times the probability that y is connected, in the complement of C, to the boundary of the box of size nk. Now I am almost done. Here I am asking for a connection using only vertices which are not in C; this is definitely harder than if I allow myself to use vertices of C as well, so the inequality holds if I drop the restriction. And also, y is at distance one from S, therefore it is necessarily in the box of size k, and hence it is at distance at least (n−1)k from the boundary. So this last factor is smaller or equal to θ_{(n−1)k}(p). So remember, I started with θ_{nk}(p), and I now have θ_{(n−1)k}(p); this looks quite good. Let's look at what remains. I have the p — that looks like my φ_p(S) — and I am summing over C included in S; but summing the probability of {C = C, 0 connected to x in C} over C included in S is just partitioning the event that 0 is connected to x in S. So — let's not erase that; let's erase this instead — I obtain the sum over x in S, y not in S, x neighbor of y, of p times the probability that 0 is connected to x in S, times my θ_{(n−1)k}(p). And what is the first part? It is just φ_p(S). So I obtain θ_{nk}(p) ≤ φ_p(S) θ_{(n−1)k}(p), and just by induction this gives me φ_p(S) to the power n. φ_p(S) is smaller than one, so I just got exponential decay.

OK, so since I have exponential decay below p̃_c, in particular I don't have an infinite cluster there — that's clear — and I just proved that above p̃_c I do have an infinite cluster. So p̃_c is equal to p_c: we get exponential decay, θ(p) = 0 below, and all of this together gives me p̃_c = p_c. So this is a classical — I mean, "classical" — this is
not a very old proof, but I think there is basically no simplification possible, I think, compared to the written version of this proof. What remains is: how do we get this differential inequality? We still need to prove it. OK, so, first thing, let's check whether we all know how to differentiate rational functions. So, a lemma — this is going to be useful for us — it is Lemma 2.2 (the theorem was 2.1). Lemma 2.2 is the following: if I take X, just a random variable depending on finitely many edges, then when I differentiate its expectation in p — and actually let's do it for any p and q; it is not specific to percolation, it holds for the random-cluster model with cluster weight q as well — then in fact the derivative is very easily written: it is 1/(p(1−p)) times the sum over the edges e of the covariance between X and ω_e.

So let me show you this. I am going to prove it for q equal to 1, but then you are going to be convinced that it works exactly the same for any q; and for people who are used to working with Russo's formula, I will tell you how you recover Russo's formula. OK, so let's do q = 1 — but again, it works for any q. When q = 1, the quantity on the left is the sum over configurations ω of p to the number of open edges, times (1−p) to the number of closed edges, times X(ω); so let's call this weight c_p(ω). And again, if q were not 1 you would just get an extra q to the number of clusters, and it would work the same — except that for general q you need to renormalize by the partition function, so let me actually do it with the renormalization, as for clusters. So we differentiate the ratio Σ_ω c_p(ω) X(ω) over Σ_ω c_p(ω). When you differentiate this, what do you get? At the bottom you get (Σ_ω c_p(ω)) squared, and at the top you get Σ_ω c_p'(ω) X(ω) times Σ_ω c_p(ω), minus Σ_ω c_p(ω) X(ω) times Σ_ω c_p'(ω). Right? And notice that c_p'(ω) is fairly simple to compute in terms of c_p(ω): it is (o(ω) − pE)/(p(1−p)) times c_p(ω), where o(ω) is the number of open edges and E the total number of edges — I just use that the number of closed edges is E minus o(ω) and do the computation. So overall, what is this going to give me? The terms with the constant pE cancel between the two parts, and I get the average of (o(ω)/(p(1−p))) X(ω), minus the average of X(ω) times the average of o(ω)/(p(1−p)). So what do I get overall? Since o(ω) is just the sum over the edges of ω_e, by linearity I get exactly my formula.

OK, so I know how to differentiate, and now I want to use this to get my differential inequality. Ah, no — I promised that I would connect this to Russo's formula. Who already heard about Russo's formula? OK — so you always work with Russo's formula, right? Well, this is just a good example that it is really unnecessary; I mean, you can do it with the covariance, and in my opinion it somehow works better. So how do you get Russo's formula? Here — by the way, the constant, at least for q = 1, and actually for any p, was the 1/(p(1−p)) — just look at X which is the indicator of an increasing event A, and let's write the derivative of the probability of A. It is 1/(p(1−p)) times the sum over the edges of the covariance, which I am going to write like that: the expectation of the indicator of A times (ω_e − p). Right? This is a way of writing the covariance. But notice one thing: for ω_e not to be independent of the indicator of A, you need e to be pivotal. "Pivotal" is the event that if the edge is open then A occurs, and if the edge is closed then A does not occur — that's the definition. I mean, if e is not pivotal, then the state of the edge itself is independent of the indicator of A, and then you can factorize, and because ω_e has average p you get 0. So here you can really add the indicator
that e is pivotal — otherwise, and this is very specific to Bernoulli percolation, in general you only know conditional independence. But now, when e is pivotal, the event A occurs if and only if ω_e is open; if it is closed, A does not occur. So here, when you put in the indicator that e is pivotal together with the indicator of A, you can just say that ω_e has to be open: it is open with probability p, and when it is open the factor (ω_e − p) equals 1 − p. So I get p times (1−p), which cancels with the 1/(p(1−p)), and I get the sum over the edges of the probability of being pivotal. OK, so you recover Russo's formula: the derivative is Σ_e P_p(e is pivotal for A). But Russo's formula is not quite true for q — I mean, it is not true at all when q is not equal to 1. That's why the covariance version will be more useful for next week. OK, so that was the parenthesis; it is not going to be used at all in the proof.

So: proof of the differential inequality — and then, I promise, we take a break; we will not forget this thing. OK, so how do we prove it? You look at X which is minus the indicator function of the event that 0 is *not* connected to the boundary of the box Λ_n. Why do I take this? I mean — or, if you want, take X to be 1 minus this — its expectation is exactly the probability that 0 is connected to the boundary of the box of size n, so the derivative θ_n'(p) is exactly equal to the lemma applied to this X. So it is equal to 1/(p(1−p)) times the sum over the edges of the covariance, which I am going to write like this: the expectation of the indicator that 0 is not connected to the boundary, times — because I have a minus sign here, instead of putting (ω_e − p) I am going to put (p − ω_e); right, this is exactly what you obtain, the minus was incorporated there. OK, let's do the same reasoning as before with pivotality. For ω_e not to be independent of this indicator function, what do I need? Well, I need the edge to matter: writing e = xy, I need 0 connected to x, I must have y connected to the boundary of the box,
and I should have that 0 is not connected to the boundary of the box in ω minus e — the configuration with the edge e removed. Right? So let me maybe make a drawing: I have 0, I have the boundary of the box, and I have an edge xy. If I have a connection from 0 to the boundary avoiding e, like that, this indicator is going to be independent of ω_e, and therefore it will not contribute. So I need not to have that — and this is exactly the last condition: any path going from 0 to the boundary should go through x and y. But now, if I want non-independence, I also need that when I open the edge it is relevant: when e is open, I need to have the event — otherwise, in both cases I don't have the event, and this is independent of ω_e. So I need that when this edge is open, there is a path — so I need something like that. OK, notice here you could tell me: why is x the one connected to 0? This is true, but the naming of x and y in an edge is anyway arbitrary, right? So I will call x the endpoint which is connected to 0, and y the endpoint which is connected to the boundary. So I need that — and the last thing that I need — oh, OK, sorry — and now, on this event: if the edge is open, well, I am not in my event, so the term contributes exactly 0; and if the edge is closed, then I have my event, and the factor (p − ω_e) contributes exactly p.

So, all together, what I just proved is that the derivative is equal to 1/(p(1−p)) — this was anyway in my Diff 1, so I will not touch it — times a p, coming from the factor (p − ω_e) on {ω_e = 0} — and then I have the sum over all the edges e, which I am going to write as xy, of the probability that: ω_e = 0; 0 is connected to x; y is connected to the boundary; and 0 is not connected to the boundary in ω minus e. But now, you see, if e is closed, then anyway 0 is not connected to the boundary — the e doesn't help — so I can just drop the "in ω minus e", and I get this thing.

OK, so now, again, I am going to do exactly as there: I am going to condition on a certain set to show that these different events are conditionally independent. So define the set S of the z in the box — in the box, sorry — such that z is *not* connected to the boundary of the box. Now imagine I take this, but I also condition on S being equal to a certain deterministic set: for S included in Λ_n, on the event {S = S} — the special event that the set of points which are not connected to the boundary is exactly equal to S — what does this thing become? OK, so what does it become? Well, first: x is connected to 0 and 0 is not connected to the boundary — this translates simply into 0 being in S; so that part you can replace by "0 in S". This one here tells me that x must be in S, and this one tells me that y — sorry — that y is not in S. And as soon as x is in S and y is not in S, the condition ω_e = 0 is automatic. So overall, when I am really conditioning — when I am intersecting with {S = S} — most of the things become completely deterministic conditions on x and y, conditions that we already saw. So θ_n'(p) is 1/(p(1−p)) times a sum over S, and I am going to have a sum over the pairs xy here: if I add {S = S} as a condition there, what do I get? I need x in S, y not in S, 0 in S — and still notice that x here must also be connected to 0 *inside* S. So I am just going to keep the event {0 connected to x in S} ∩ {S = S}.

And that is looking quite good. Why? Because let's do exactly the same as what we did before. This event — 0 connected to x in S — depends only on edges with both endpoints in S. This one — {S = S} — on the contrary: in order to know that you are *not* connected to the boundary, think of it the other way around: you need to know everybody who *is* connected to the boundary, and this requires only checking the edges with at least one endpoint connected to the boundary. So this event depends only on edges with at most one endpoint in S. Therefore the two events depend on disjoint sets of edges, and this can be rewritten as: a sum over S containing 0 — there is still a p — of the sum over x in S, y not in S, x neighbor of y of course, of the probability that 0 is connected to x in S, times the probability that S = S. Here, if I put the p inside, what did I get? I just got φ_p(S). And you see, that is where you really want to do this dummy summation over y: to get exactly φ_p(S), and not something which would be equal to φ_p(S) only up to a factor 2d — although otherwise you could indeed do it without that. So this is φ_p(S); in particular, if I want a lower bound here, I can put the infimum of the φ_p(S). And there remains the sum over S containing 0 of the probability that S equals S. But these events partition the event that 0 is not connected to the boundary; therefore I get 1 − θ_n(p), and this is exactly what I wanted. So this is the proof — completely; I mean, I didn't hide anything.

You see, somehow, working with the covariance instead of working with Russo's formula — you can do everything with Russo's formula, but somehow, in my opinion, working with the covariance has the advantage that this 1/(p(1−p)) remains there for the whole proof. Before, we were getting this 1/(p(1−p)) only at the end, and I must confess that, at least personally — I don't speak for Vincent — I didn't really know why we were getting it. We were getting it, it was fine, it was actually even kind of nice — but just because we were working with the wrong differential inequality. If you work with this one, it is clearly this factor which emerges, and the p also emerges quite naturally.

Just one comment before we take the break — because it is going to be just one of the downsides of the other proof — which is that here, in addition to proving sharpness, we get something at p_c: we get an alternative definition of p_c. It's a tricky one:
this p̃_c seems like a ridiculous one, but it has an advantage. If you think about it, p̃_c was the supremum of the p for which φ_p(S) is strictly smaller than 1 for some S. This is an open condition, so it cannot be satisfied at p_c: if it were satisfied at p_c, there would be an S for which φ_{p_c}(S) < 1 strictly, and this would also be true for a p a little bit larger. So here you get some information about the critical point, namely that φ_{p_c}(S) is larger or equal to 1 for every S. So pick a box: you get that the sum, over the x on the boundary of the box of size n, of p_c times the probability that 0 is connected to x inside the box, is larger than 1, for every n. And think about it: this is giving you some non-trivial information, because it is telling you that the probability that 0 is connected to a boundary point x does not decay faster than the inverse of the size of the boundary — so at least of order 1/n^(d−1) in this case: polynomial. In particular, there is no exponential decay at the critical point. And here, if you would like — I mean, there is this huge conjecture in percolation theory, which is to prove that there is no percolation at the critical point, that θ(p_c) is equal to 0 — and you can hope that at some point one of these inequalities at the critical point gives you information which is sufficiently relevant to prove that θ(p_c) = 0.

OK, let's make a break, and let's start again in ten minutes. ... OK, maybe we can start again. So, proof number 2. And proof number 2 is going to use what we call randomized algorithms — I am going to tell you what this is later on — but I am going to follow exactly the same strategy as for the first one: I am going to first tell you about a differential inequality, then define p_c in a different way, and then explain why this alternative definition of p_c is in fact equal to p_c, and why you get exponential decay. So define Diff 2: it's
going to be the following assume that theta n prime is larger we call to n so I think there is a 4 or 8 but let's assume it's 0 it's 1 we'll see after how we do over sn times theta n assume you have that for every n where sn is a sum for k equals 0 to n minus 1 of theta k ok let's assume we have that so for people who had the great pleasure and honor but I think seeing the edge it's probably not the case here but to have taught menchikov proof in his life so I had to do that menchikov original proof already the beginning is not that great but at some point you arrive to this differential inequality and then there was really a tricky computation from there to get the true exponential decay so there was something for people who saw it this I guess there are more people who saw menchikov proof being taught then there is the sausages things it's very like if you teach that by close to lunch it's horrible for everybody so you have the sausages things and at the end you end up with this formula you have really a complicated thing to do so what I want first to do is exactly as before to tell you that from this formula it's actually quite simple to get the result it's a few lines on me so same thing as before let's define the pc this time let's call it pc hat and this pc hat is gonna be the infimum this time I hope it's 3 no it's still yeah it is the infimum this time but that the lim soup of sn of log sn of a log n is larger or equal to 1 by the way sn being smaller than n I could put equal 1 so it's exactly I mean the pc tilda you agree was not very intuitive as a definition this one maybe doesn't look intuitive either but that's and I want to prove that from this thing below pc tilda I do have exponential decay and above pc tilda I do have theta p positive so let's try to do that so first thing let's prove that when p is smaller than pc hat sorry there exists a cp such that theta n of p is smaller than exponential of minus cpn let's start by this one so fix delta 
arbitrary small and let's do a first step so at p because p is smaller than pc tilda what do I get I get that log sn over log n the lim soup is strictly smaller than 1 right so since p is smaller than pc hat there exists a capital n n and alpha such that sn is smaller than n to the 1 minus alpha for every n larger equal to capital n right but if I have that then necessarily when I plug it here sn which is smaller than n to the 1 minus alpha I get that theta n prime is larger than n to the alpha times theta n so if you want the derivative of the log of theta n is larger than n to the alpha right so this gives log this is theta n prime over theta n larger than n to the alpha for any n larger than capital n so when I integrate integrating between p minus delta and p by the way this is true at p but it's a forcery true for any p smaller for any p prime smaller than p because sn is increasing in p right so when I integrate this thing between p minus delta and p what do I get I get that theta n of p minus delta is smaller than exponential of minus delta times n to the alpha so I got stretch exponential decay but if you have stretch exponential decay what do you get on sn well you get now that sn at p minus delta is bounded right so there exists the s such that sn at p minus delta is smaller than s for every n but now when I plug it in there I get theta n prime larger than n over s time theta n so here I get theta n prime smaller than n over s for every n larger equal to n and every p prime smaller than p minus delta so when I integrate that between 2 delta and p minus delta I get exactly what I want I get that theta n of p minus 2 delta is smaller than exponential of minus delta over s times n so I got exponential decay theta was arbitrary so I really get it for any the fact that I get it for any p minus 2 delta for any delta and p smaller than pc hat give me what I want so of course this was not the hard part the fact that once you get sufficiently once you get just a 
small polynomial decay you get exponential decay below — that was known. But the definition of p̂_c is important for what comes next, which is the other side. The other side was basically what was difficult in Menshikov's proof: below p_c you just know that the probability that 0 is connected to the boundary of a box goes to zero — that's the only thing you know — and getting from there to somewhere where you know polynomial decay, that was the difficult part. Here, somehow, we are going to say: no, no, we don't do it like that. The first point is that below p̂_c you have exponential decay, and you just prove that above it you directly have percolation. So now, when p is larger than p̂_c, what do we get? That's where it's good to be French, to have done preparatory classes, and to have been completely traumatized with the comparison between integrals and sums. Define T_n to be the following: you sum θ_k over k, for k = 1 to n − 1 say, and divide by log n. OK, let's look at the derivative of this monster. T_n′ is (1/log n) times the sum for k = 1 to n − 1 of θ_k′ over k. Now θ_k′ over k is at least θ_k over S_k — and θ_k over S_k is exactly, sorry, let's sum k up to n − 1, there was a good reason for that — it's (S_(k+1) − S_k) over S_k, which is even nicer, because this is definitely larger than the integral between S_k and S_(k+1) of dt/t. So the whole thing is larger or equal to (1/log n) times the integral between S_1 and S_n of dt/t, and therefore this is larger or equal to (log S_n − log S_1) over log n; but S_1 is 1, so the second term just disappears. It's nicer than what we wrote in our article. Why is it? Well, there is probably a small mistake somewhere, but anyway, you can try to find it —
sorry, I mean, in our article there is an extra minus-something, but it's constant, we don't care. Let's act like nothing happened: log S_n over log n. So in particular, what do I get? If I fix p′ between p̂_c and p, I get that T_n(p) − T_n(p′) is larger or equal to (log S_n(p′) / log n) times (p − p′); I just integrated this inequality. Now take n to infinity: θ_k tends to θ(p), so T_n(p) tends to θ(p), and T_n(p′) to θ(p′), sorry. And here, if I take the limsup — I am allowed to take just the limsup — you get 1, by definition of p̂_c. So you get θ(p) − θ(p′) ≥ p − p′, and letting p′ tend to p̂_c gives θ(p) ≥ p − p̂_c. So really, this definition is — yes, it's a little bit of a trick — but really, and let's face it, what is really the whole complicated part of Menshikov's proof is really these five lines. A little bit more than five, yes. "And the last one should start with an inequality" — sorry, here? there? why? — ah yes, yes, of course, sorry, thank you. It's horrible to give a talk in front of a co-author. Excellent, thank you. OK, so in particular p̂_c is equal to p_c. Now the question is: how do you get this inequality, the differential inequality 2? And that's where randomized algorithms enter the picture. So, what is a randomized algorithm? They pop up in many fields, in particular in computer science; they are related to decision trees and so on. I don't want to enter into a big discussion on randomized algorithms; I just want to tell you our definition. So consider a product space Ω, which is just {0,1}^E, and let's look at a function f from Ω into {0,1}, say. A randomized algorithm determining f is
going to be the following. Basically, an algorithm inspects the variables one by one in a Markovian way: at every step, it looks at all the bits it has discovered so far and decides which bit to discover next, and it stops when it has discovered so many bits that, whatever the other bits are, f would not vary if you apply it to the discovered bits and the rest. More formally, it's going to be a function T from Ω into E-with-arrow, by which I just mean the set of ordered sequences (e_1, …, e_n) of edges with {e_1, …, e_n} = E — here I think of E as having cardinality n — so in particular all the edges of E appear once and only once. So it's a function like that: it gives me the bits one by one. And there is one thing that I really want on this: I want it to be Markovian. So I'm going to say that e_t is given by a deterministic function φ_t — think of φ_t as a decision rule: once I have something, I decide where I go, what is the next edge to explore — and this decision rule is applied to e_1, …, e_(t−1) and ω_(e_1), …, ω_(e_(t−1)). So what does it do? Given the t − 1 edges it already discovered, it looks at the values of the bits at these edges and decides where to go; φ_t is deterministic here. OK — "determining f" was maybe not such a good phrasing: the algorithm determining f is T stopped at a certain time τ, and τ is defined like that: τ is the infimum of the t such that f(ω′) = f(ω) as soon as ω′ restricted to {e_1, …, e_t} is equal to ω restricted to {e_1, …, e_t}. So it's the first time where, whatever you would discover after, you get the same result for
f. OK. So, for the algorithm determining whether one given edge e is open or not: you choose e first, you check its value, and then you check all the other edges, say in lexicographical order. This gives you an algorithm, and the stopping time τ is just 1. You can do slightly cleverer — I mean, more complicated — things. If you want to check whether strictly more than half of the edges in your box are open, you just check them in lexicographical order; the geometry is not relevant, and the stopping time is the first time you have discovered strictly more than n/2 open edges. Let me give you another algorithm, a little bit cleverer than this one, to determine the function "0 is connected to the boundary of the box of size n". So look at f, the indicator function that 0 is connected to ∂B_n, and let me give you a few algorithms that can determine it. The first one: you go in lexicographical order and check all the edges; at the end of the day you know everything, so you have determined the thing, and you can stop at the first time you see a crossing, or the first time you see a blocking surface, something that really disconnects 0 from the boundary. The problem is that a priori the stopping time may be very big: in lexicographical order you may take a long time to even explore what is near 0. One thing which is definitely cleverer is to go from 0 and explore the cluster of 0 edge by edge, by doing the following. At time 0, I pick an edge neighboring 0 and I check whether it's open or closed. Let's define A_0 to be {0} and E_0 to be the
empty set. I will keep in mind A_t, the set of vertices which are connected to 0 in my exploration. So for the first edge {0, x}: if it is open I add x, so A_1 = {0, x} if ω_(0x) = 1, and A_1 = {0} if not; and E_t will be the set of edges explored so far. Now I can carry on: I pick an edge — say the smallest in lexicographical order — which has one endpoint in A_1, I check whether it's open or not, and I define A_2 by adding the new endpoint if this edge is open, and A_2 = A_1 if not, etc., etc. So A_t at every step is the set of points connected to 0, and E_t is the set of edges explored. When you do that, at some point either you see that one point on the boundary of your box is in A_t — and what does that mean? It means exactly that there is a path of open edges from 0 to the boundary, so you can stop, the value is fixed now — or, if that's not the case, there will be a point where I cannot choose an unexplored edge neighboring one of these vertices. And what does that mean? It means I have explored the whole cluster and that outside everything is closed: I explored the edges and they are all closed. So this is a possible algorithm to determine "0 connected to the boundary". OK, so why do we care about randomized algorithms?
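The exploration just described can be written in a few lines. This is a minimal sketch, not the lecture's formal object: I assume Bernoulli bond percolation on the box B_n in Z^2, reveal edge states lazily to mimic the algorithm's one-bit-at-a-time queries, and use the smallest-frontier-edge rule as the decision rule φ_t; the concrete data structures and names are mine.

```python
import random

def explore(n, p, seed=0):
    """Edge-by-edge exploration of the cluster of the origin in the box
    B_n = [-n, n]^2, determining f = 1{0 <-> boundary of B_n}.
    Returns (value of f, number of edges revealed before stopping)."""
    rng = random.Random(seed)
    state = {}                      # revealed edge -> open (True) / closed (False)

    def reveal(e):                  # query one bit of omega, remember it
        if e not in state:
            state[e] = rng.random() < p
        return state[e]

    def edges_at(v):                # the 4 edges incident to v, as sorted pairs
        x, y = v
        return [tuple(sorted((v, w)))
                for w in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]]

    A = {(0, 0)}                    # A_t: vertices known to be connected to 0
    explored = set()                # E_t: edges already queried
    while True:
        if any(max(abs(x), abs(y)) == n for (x, y) in A):
            return 1, len(explored)         # open path from 0 to the boundary
        frontier = [e for v in A for e in edges_at(v) if e not in explored]
        if not frontier:
            return 0, len(explored)         # whole cluster explored, closed off
        e = min(frontier)                   # deterministic decision rule phi_t
        explored.add(e)
        if reveal(e):
            A |= set(e)                     # the new endpoint joins A_t
```

With p = 0 the algorithm stops after revealing just the four edges at the origin; with p = 1 it stops as soon as a boundary vertex joins A_t, long before reading every bit — that early stopping is exactly what the stopping time τ captures.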
Well, because there is a theorem by O'Donnell, Saks, Schramm and Servedio which says the following: for f Boolean and an algorithm T, τ determining f, the variance of f is smaller than the sum, over the edges e of my set, of the covariance between f and ω_e — that looks very much like the derivative — except that there is an additional term, and this additional term is δ_e(T, τ). This guy is the probability that the edge is revealed by the algorithm: the probability that there exists a t ≤ τ such that e_t = e, the probability that e is discovered by the algorithm. And there should maybe be a factor 2 somewhere. So why is this good? Is the definition clear? So here, for example, let's look at what the revealment is — this δ_e(T) is called the revealment. What is the revealment of an edge neighboring 0? Well, it's one of the neighbors of 0, so at some moment we will have to check it, or at least with very high probability; with the lexicographical order we will check all these edges first, before checking anything else. So this edge has revealment 1. But an edge which is far, an edge here — what is its revealment, what is the probability that we explore this edge? Well, if this edge is explored at any point, that means that one of its endpoints had to be in A_t at that time, so it had to be connected to 0. So the revealment of an edge, up to a factor maybe 2, is θ of the distance between 0 and this edge. So here we will have a lot of small terms. So: this is the derivative, here we have a small term, and the variance of f — if f is, say, the indicator function of "0 connected to the boundary of the box" — is just θ_n times (1 − θ_n); the variance of the indicator of an event is its probability times 1 minus its probability. So here, what
do I get? I get θ_n (1 − θ_n) smaller than the derivative θ_n′ times something small. So this is very good, it goes in our direction; actually we would like this something small to be S_n over n, that would be exactly what we want. The problem is that with this algorithm, I explore the edges neighboring 0 with probability 1, so not everybody has a small revealment: there are guys with a very big revealment. So what I'm going to do now is show you the proof from this inequality, and if we have time I will give you an idea of how we prove the inequality itself. Next week, anyway, we will have to prove it in a more general context, because here it's for i.i.d. random variables: we will prove it for FKG measures, basically, and use the same strategy for the random cluster model. OK, so first: how do we really get the differential inequality from this one? In fact, this is not the right algorithm, because it explores vertices close to 0 with way too big a probability. So we will do the following. Fix k between, say, 1 and n, and run exactly the same algorithm, except that you do not explore starting from 0: you explore starting from the boundary of the ball of size k. So you start with A_0 equal to ∂B_k and E_0 the empty set, and you do exactly the same procedure — maybe I don't write it on the board, but you really do: at each step you look at whether there is an unexplored edge with one endpoint in your set A_t, you explore this edge, and if it is open you add the endpoint on the other side; if it's closed you do not. You do that until you cannot do it anymore, and τ is there. After that you can explore everybody else in a deterministic way; you don't care, because it's not going to be relevant for computing the revealment. OK, is the algorithm clear? "Who wants to live?" — I see no... don't be sad, it's fine. OK, so this is the algorithm starting from the
boundary of the box of size k. So what is now the revealment of this algorithm T_k for an edge? If I fix an edge — say it's here, the box of size n is there — then exactly as in the observation before, for it to be revealed I need one of its endpoints to be connected to ∂B_k. Ah, first, sorry: this algorithm does determine, of course, whether 0 is connected to the boundary or not, because if 0 is connected to the boundary, its cluster intersects ∂B_k; so if I explore all the clusters intersecting this sphere, I will know whether 0 is connected to the boundary or not. So now, if the edge {x, y} at distance d from the origin is revealed, then x or y is connected to distance |d − k|. So the revealment of the algorithm T_k is smaller or equal to the probability that x is connected to distance |k − d_x| plus the probability that y is connected to distance |k − d_y|, where d_x and d_y are the distances to the origin. And this is a fairly good bound, because now I average the relation I get there over k from 1 to n. Assume I average it: if I do (1/n) times the sum over k = 1 to n of this, it is bounded by (1/n) times the sum for k = 1 to n of θ_(|k − d_x|) plus the same with d_y. And this, if you think about it, is smaller than twice S_n. Why? Because how many k are going to be such that |k − d_x| is, say, 10? You are going to have 2 guys like that, 2 guys with value 11, and so on: at most 2 people for each value j. And because d_x ≤ n and k ≤ n, this whole sum here is smaller than 2 S_n, and the other one as well, so the whole thing is smaller than 4 S_n over n. So now apply the OSSS inequality and average it over k — average over k, and what do you get?
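The counting step just used — each value j = |k − d| is attained by at most two k in {1, …, n} — can be checked numerically. This is a sanity-check sketch under my own naming, assuming only what the lecture uses: θ is a nonnegative, nonincreasing sequence (as θ_k = P(0 ↔ ∂B_k) is), and d ≤ n.

```python
def check_counting_bound(theta, n):
    """Counting step of the averaged revealment bound: for each distance d
    in {0, ..., n}, every value j = |k - d| is hit by at most two k in
    {1, ..., n}, hence

        sum_{k=1}^{n} theta_{|k - d|}  <=  2 * S_n,

    where S_n = theta_0 + ... + theta_{n-1}, provided theta is
    nonnegative and nonincreasing.  Returns True iff the bound holds
    for every d."""
    S_n = sum(theta[:n])            # S_n = sum of theta_0 .. theta_{n-1}
    return all(
        sum(theta[abs(k - d)] for k in range(1, n + 1)) <= 2 * S_n
        for d in range(n + 1)
    )
```

Dividing both sides by n gives exactly the (1/n) Σ_k θ_(|k − d|) ≤ 2 S_n / n used above, and summing the two endpoint contributions gives the 4 S_n / n bound on the averaged revealment.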
You get that the variance of my indicator function, which is equal to θ_n times (1 − θ_n), is smaller or equal to the revealment — which is now bounded by 4 S_n over n uniformly in every edge, so I pull out 4 S_n over n — times the sum of covariances, which is (up to the factor 2 that I lost) p(1 − p) times θ_n′. This times this, we remove things, and we get roughly θ_n (1 − θ_n) ≤ (S_n/n) times a constant times θ_n′. So we almost got the right thing. Notice that the 1 − θ_n you can bound: we don't care when p is close to 1, because there p is much larger than p_c, so bound it below by a constant. Using, say, p ≤ 1 − ε, we get that θ_n′ is larger or equal to (n/S_n) times (ε/2, say) times θ_n. So I don't quite get the differential inequality there, because I have this kind of ugly constant, but if you think about it, this gives you exactly the differential inequality there for 2θ_n/ε: do exactly the reasoning you did there with 2θ_n/ε instead of θ_n, and you get exactly the same result. I could have done the computation with a constant there, but that gives exactly the same result. So really, the summary of this in one sentence: apply the inequality not to one algorithm but to a family of algorithms, and you get the thing immediately — there are very few lines. OK. Well, I actually do have time to give you the proof; it will maybe make things simpler for next week. Let me give you the proof of the OSSS inequality in the case of Bernoulli random variables; next week we'll see how life is sweet, because it actually works in the case of FKG measures as well, though not with quite the same proof. Maybe I don't erase that. So now you can forget
about percolation even: this is really something true for any Boolean function, and the structure of graph — that E is a set of edges — is irrelevant now; it's really just a function of n bits, and you prove your inequality. OK, so what we are going to do is use a Lindeberg-type principle: we are going to replace the variables one by one. Because if you do it naively — if you just compute the variance by conditioning on the bits one by one — you can check that the variance is smaller than this sum without the revealment factor: at every step, the new randomness you add contributes only where the bit is pivotal. But the gain here, the revealment, is more subtle, so we need to do something. What we are going to do is consider two families of i.i.d. Bernoulli random variables and play with these two families, so that, morally speaking, the covariance comes from one of the two families and the revealment from the other one. OK, so let's take ω and ω̃, two independent i.i.d. Bernoulli families of parameter p. And what I will do is define the randomized algorithm from the first guy: the first guy is giving me my randomized algorithm. Really imagine that you run the randomized algorithm looking only at ω: the e_t, τ and so on are defined out of ω — I mean, using ω. Before, the edges e_t were functions of "the" ω; now that I am on a bigger space I need to specify: my e_t and so on are functions of ω, not of ω̃, and ω̃ is completely independent. OK. And now let's define ω^t to be some kind of mixture of ω and ω̃: ω^t is going to be ω̃_(e_1), ω̃_(e_2), and so on, up to
ω̃_(e_t); then it is going to be ω_(e_(t+1)) up to ω_(e_τ); and then ω̃ again, ω̃_(e_(τ+1)) etc. — I'm just forgetting the fact that I should maybe write the edges e_1, …, e_t, e_(t+1), etc.: it's on the edges that I define it like that. OK, so really: first ω̃, then ω, and then again ω̃. OK, so now, what do I do with that? Well, I observe two things. ω^0 is what? In ω^0 you have only ω's up to e_τ, and then you have ω̃. So f(ω^0) is equal to what? By definition, f(ω^0) = f(ω), simply because you do not care at all about what is happening after τ: so this is f(ω). In the other direction, ω^n is equal to what? Well, in ω^n only tildes are involved, so it's ω̃; in particular this implies that f(ω^n) = f(ω̃). Therefore, if I write now the variance of f: it is smaller than the expectation — E now denotes the expectation for the coupling — of |f(ω) − E[f]|. What did I do here? I did a very, very clever thing: I used that f is between 0 and 1, so the square of the thing is smaller than the absolute value; well, you can't fail with this type of thing. OK: f(ω) is by definition f(ω^0). Now for E[f]: imagine for a second you condition on ω; then E[f] I can write as just E[f(ω̃) | ω], because ω̃ is independent of ω. So this whole thing, if you think about it, is like the expectation of |E[f(ω) − f(ω̃) | ω]|, and therefore, if I use the triangle inequality, I get that this is smaller than the expectation of |f(ω^0) − f(ω^n)|. OK, so now I write this as the sum for k = 0 to n − 1 of the expectation of |f(ω^(k+1)) − f(ω^k)|. Up to now these are really trivial manipulations, there is nothing deep. But now, the clever step: I'm going to use that ω^(k+1) is equal to ω^k
as soon as k is larger or equal to τ, because it has become stationary: you don't change the guys at the end, because they are anyway equal to ω̃. So there is no cost in adding, inside the sum, the indicator that k + 1 ≤ τ. I could also, if I really want to play with it, add a sum over the edges e and ask in addition that e_(k+1) = e — I'm just partitioning the event, so I can add this too. OK, so up to now nothing deep. Now observe that I'm almost done: in fact I did trivial manipulations and I'm almost there, in the sense that what I have here is the expectation of this difference, conditionally on ω_(e_1), …, ω_(e_k), times the indicator of {k + 1 ≤ τ and e_(k+1) = e}. Why do I have that? Simply because e_(k+1) is measurable in terms of the edges discovered before and their values — they are random variables, and this is the definition of an algorithm — and the fact that I didn't stop before time τ is also exactly measurable in terms of these guys: at that time I still have some randomness, I cannot determine my function yet. So this indicator is measurable with respect to the conditioning. But now imagine that I can prove that this conditional expectation is smaller than twice the covariance between f and ω_e. If I do that, then I'm done, because I get twice the covariance times the expectation of the indicator — and that is exactly the probability that the edge e was discovered before time τ, the revealment. So I will be done if I can prove that this thing is smaller than twice the covariance. Up to now I didn't really use the two families. Why did I introduce ω and ω̃? ω was important because the first bits of ω were determining this indicator. But now, in the difference between ω^k and ω^(k+1), if I go back to the definition, what are the bits involved in this new guy? Well, the k first ones: by definition they are actually ω̃, so so far I have absolutely no
clue what they are; they are completely independent of the conditioning. Same for these guys: they are ω's, but they come after the revealed times, so they are also independent; and the last ones, of course, are independent. So here I have, if you want, my full randomness: I can do whatever I want there, because even if I don't forget that I condition, ω^k and ω^(k+1) are just i.i.d. sequences of Bernoulli variables. Notice there is something subtle: which edges carry an ω̃ and which carry an ω depends on the conditioning, because it depends on e_1, …, e_k, on which bits I changed; but whether I changed them or not, they are i.i.d. Bernoulli random variables, period. That is the game: we use the second randomness, in some sense, so that you can explore the configuration but still keep some randomness for this guy. Now the second observation, also a simple one, is that ω^(k+1) and ω^k are close to each other, because they differ only in — oh, I just noticed something: it should be ω^k and ω^(k−1), I guess, right? So here, sorry, sorry: ω^k and ω^(k−1). They differ only through the edge e_k: only the k-th guy is different, and in one case it's ω_(e_k) and in the other one it's ω̃_(e_k). So this looks very much like the influence: it's like I look at the average of something, and the average of exactly the same thing except that I re-sample the edge e_k. So, more formally, how do we conclude the proof? On the event {k ≤ τ, e_k = e}, I want to prove my inequality. Both ω^k and ω^(k−1) are i.i.d. Bernoulli; conditionally on that event, I don't care — they are still i.i.d. Bernoulli — and they differ only through their value at e. If I observe that, then I can conclude. OK — so maybe, actually, the statement that I gave you is
wrong: f should be increasing. There is a version where it's not increasing, but with the covariance you need f to be increasing; so put "f increasing", add it to the statement. If f is increasing, we are going to define the following: f_1 and f_0 are the same function as f except that you set the bit at e up or down — I mean, equal to 1 or equal to 0. So f_1(ω) = f(ω_(e_1), …, ω_(e_(k−1)), 1, ω_(e_(k+1)), …, ω_(e_n)), and f_0 is the same with 0. And because f is increasing, notice that f_1 ≥ f_0. I just do it for increasing functions, because we will use it for increasing functions next week, and anyway our function is increasing. So what is the value of this thing? Well, first, if ω̃_e is equal to ω_e, then ω^k and ω^(k−1) are exactly equal, so the only time the term contributes is when they are distinct, which happens with probability 2p(1 − p). Then what remains is to see what it is equal to when they are not equal: when they are not equal, one of the guys is equal to f_1 and the other is equal to f_0. So here I can put 2p(1 − p) times the expectation of f_1(ω) − f_0(ω), and here I remove the absolute value just by using the fact that f is increasing. Well, if you check, p(1 − p) times this expectation is just the covariance between f and ω_e; so this is twice the covariance. So again, I apologize, and it's very important: the statement with covariances is only for f increasing. If you want to write something with influences, with the probability of being pivotal, then you can write it for any function, but anyway that's not important for us: we really want it for increasing functions. OK, so, as we say in France, I "saccaged" the proof, but it is a complete proof. And really the idea is this Lindeberg-type principle, where we exchange variables one by one, and like that, when we have conditioned on the event that the edge is revealed at time k, we still
have randomness there — we still have randomness — and we get the influence. So next week, basically, the game is going to be: what do we do when there is no independence? Because here we used independence — I mean, not in every single step of the proof, but there we used it in a very important fashion. We are going to see how we can replace that, and then see that the proof follows basically the same lines once you have that. OK, thank you very much.
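As a small sanity check on the inequality proved above, one can verify it exactly on a tiny monotone example. This is a sketch under my own naming, and I use the standard influence form of OSSS, Var(f) ≤ Σ_e δ_e Inf_e (the covariance form above differs from it by a constant factor, as discussed): f is the majority of three i.i.d. Bernoulli(p) bits, and the algorithm reveals bits 0 and 1 and reveals bit 2 only when they disagree.

```python
from itertools import product

def osss_check(p=0.5):
    """Exact verification of the OSSS inequality
        Var(f) <= sum_e delta_e(T) * Inf_e(f)
    for f = majority of 3 i.i.d. Bernoulli(p) bits, with the algorithm
    that reveals bits 0 and 1, and bit 2 only on disagreement.
    Returns (variance, right-hand side)."""
    def prob(w):                        # P(omega = w) under Bernoulli(p)^3
        return p ** sum(w) * (1 - p) ** (3 - sum(w))

    def f(w):                           # majority: an increasing function
        return 1 if sum(w) >= 2 else 0

    cube = list(product((0, 1), repeat=3))
    Ef = sum(prob(w) * f(w) for w in cube)
    var = Ef * (1 - Ef)                 # variance of an indicator

    # revealments: bits 0 and 1 are always read; bit 2 only on disagreement
    delta = [1.0, 1.0, sum(prob(w) for w in cube if w[0] != w[1])]

    def inf(e):                         # P(flipping bit e changes f)
        return sum(prob(w) for w in cube
                   if f(w) != f(tuple(1 - b if i == e else b
                                      for i, b in enumerate(w))))

    rhs = sum(delta[e] * inf(e) for e in range(3))
    return var, rhs
```

At p = 1/2 this gives Var(f) = 1/4 against a right-hand side of 1·1/2 + 1·1/2 + (1/2)·(1/2) = 5/4; the point of the percolation argument above is precisely to make the δ_e small by a good choice of (a family of) algorithms, which this naive example does not attempt.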