OK, I think we can start. Well, thank you all very much for coming. So, I am going to talk for eight hours — not today, but... My goal is to tell you a little bit about the sharpness phenomenon in statistical physics, or at least about the computation of critical points. Actually, between the time I chose the title and today we made good progress, so I will really focus on a special case, the Potts model, and try to go through the complete computation. Now, rather than starting right there, I thought I would spend roughly the first two hours trying to motivate why this is a good, natural question, so that you do not have to take my word for it; then I will go through the computation. So this lecture is going to be mostly motivation: I will try to explain to you, in particular, that this allows you to compute critical points. Then, in the next lecture, I will take a simple case that most of you have probably already seen, namely Bernoulli percolation, and I will prove sharpness for percolation in two different ways. The first proof is very simple and some of you have already seen it in a course, probably in a master's programme, though I will do it a bit differently anyway. The second proof is completely new, and you may think the first one is not so interesting because it is much simpler than the second; but it gives you a different kind of information, and the second proof has the advantage of being much more general. So I will present the first one in the case of Bernoulli percolation, so that you do not get lost in the formalism. Then, in the remaining lectures, I will explain how you extend this proof and make it work for the Potts model, and maybe also for other types of models of statistical physics, depending on how things go. OK, so today is motivation, and I should say that what I will talk about today is mostly joint work with Vincent Beffara, and what I will talk about in the other lectures is joint work with Vincent Tassion and Aran Raoufi. If you have a question, if you did not understand something, you can ask Aran, who is here; he will be very happy to answer, and he is responsible for all the steps that are not clear in the proof.

OK, so, introduction. The question is going to be: how do you compute a critical point — the critical point of your favourite model? My favourite model is Bernoulli percolation, so we will start with that one. Bernoulli percolation is a model of a random subgraph of the square lattice. I will focus on dimension two, simply because we only have so much time in these lectures; I may explain later how you can handle other cases. So, Bernoulli percolation is a model of a random subgraph of the square lattice: to every edge of the square lattice — let us write E for the set of edges of the square lattice — we attach a random variable.
For every edge e, ω_e is either 0 or 1, with the following interpretation: if ω_e = 0 we think of the edge as closed, absent from our subgraph; if ω_e = 1 we think of it as open, present in our subgraph. So in the end the configuration ω is just a subgraph of the square lattice, whose vertex set is the original vertex set of the square lattice and whose edge set is the set of open edges — if you prefer, the set of edges e for which ω_e equals 1. Now, the distribution of the model: I want to pick ω at random, so the law of the random subgraph is that ω_e equals 1 with probability p and 0 with probability 1 − p, independently for each edge. I know I am going very fast at the board; one piece of advice is that everything you already know, you do not need to copy onto your sheet of paper.

OK, so you have probably already seen Bernoulli percolation in your life and you know there is a critical point. You are interested in the connectivity properties of this random subgraph, and there exists a p_c strictly between 0 and 1 such that the probability that 0 is connected to infinity — that is, that 0 lies in an infinite connected component — is 0 if p is smaller than p_c, and is strictly positive if p is larger than p_c. So there is a phase transition: the larger p, the more edges in my random subgraph, so it is quite natural to expect a critical point above which you have an infinite cluster and below which you do not. Bernoulli percolation was introduced by Broadbent and Hammersley in the fifties, but this theorem had to wait for the eighties to be proved, and it is due to Kesten; the computation of the critical point also had to wait for Kesten, who proved in the eighties that p_c is equal to 1/2. I want to discuss a little bit the proof of this theorem — what is difficult, what is easy, and so on. So let us start with, say, level 0 of the reasoning, or maybe level −1, I am not sure: trying to see why it should be 1/2, why one predicts that p_c equals 1/2. To do that we note the duality relation for Bernoulli percolation. If I take my original lattice, I can define a dual graph by putting a vertex in the middle of each face of my lattice and putting edges between neighbouring faces; for each edge e there is a unique dual edge crossing it, which will be denoted e*. So if I have a configuration on my lattice — take something with this edge open, that edge open, say something like that — I can naturally define a dual configuration by declaring an edge open in the dual configuration if the corresponding edge of the primal configuration is closed. For instance, this edge is not present in the primal configuration, so I make it present in the dual configuration; similarly this one will be present, and this one; and conversely, when an edge is present in the primal configuration I do not put its dual in the dual configuration. So the dual configuration is properly defined like this: ω*(e*) = 1 − ω(e); the dual edge is open if the primal one is closed, and closed if it is open.
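As an aside, before using this duality: here is a minimal simulation sketch of the model just defined — not part of the lecture, just an illustration — estimating θ_n(p) = P_p[0 ↔ ∂Λ_n] by Monte Carlo; the function names and parameter values are mine, not the lecturer's.

```python
import random
from collections import deque

def estimate_theta_n(p, n, trials=1000, seed=0):
    """Monte Carlo estimate of theta_n(p) = P_p[0 is connected to the boundary
    of the box [-n, n]^2], for Bernoulli bond percolation on the square lattice."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        edge_state = {}  # edge states are sampled lazily, one trial at a time

        def is_open(x, y):
            e = (min(x, y), max(x, y))  # canonical key for the edge {x, y}
            if e not in edge_state:
                edge_state[e] = rng.random() < p
            return edge_state[e]

        # BFS from the origin through open edges, stopping at the box boundary.
        seen, queue, reached = {(0, 0)}, deque([(0, 0)]), False
        while queue:
            x = queue.popleft()
            if max(abs(x[0]), abs(x[1])) >= n:
                reached = True
                break
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                y = (x[0] + dx, x[1] + dy)
                if y not in seen and is_open(x, y):
                    seen.add(y)
                    queue.append(y)
        hits += reached
    return hits / trials

if __name__ == "__main__":
    # Contrast below and above the (soon to be identified) critical value 1/2.
    print(estimate_theta_n(0.40, 20), estimate_theta_n(0.60, 20))
```

Running it with p below and above 1/2 already shows the contrast that the phase-transition statement above formalises.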
Now, if ω is sampled according to Bernoulli percolation with parameter p — I should have said that we denote this measure by P_p — then ω* is sampled according to Bernoulli percolation with parameter 1 − p: each dual edge is open with probability 1 − p, because that is exactly the probability that the corresponding primal edge is closed. Here, I should say, this is not quite a symmetry statement, because ω* lives on the dual lattice, which is a translate of the square lattice. But now we see that there is one parameter which plays a very special role, namely the one for which 1 − p is equal to p, that is 1/2: it is the self-dual point of percolation. The level-0 reasoning for p_c = 1/2 is to say that there should be a unique point at which something singular happens; if p_c were not equal to 1/2, then by duality something singular would happen at p_c but also at 1 − p_c, two distinct points. So the first guess is that the unique singular point should be at p = 1 − p, that is, p_c = 1/2. I see many faces that do not look very convinced, and I understand: this is level 0, we are not going to stay at this level of presentation. But it is the first argument anyway, and it is actually a genuine guess: this is typically how physicists start studying such a problem — they try to see what a good candidate is, and this would be the first argument; then they go further, but this is the first one. It is a heuristic, not more.

OK, so that was our first day at school. On the second day of school we try to do something a little more precise, and the first reasoning we can do is the following. Assume that p_c is smaller than 1/2; then 1 − p_c is on the other side of 1/2. If you do that, observe that below p_c there is no infinite connected component of our graph — let us say cluster, because I will say cluster anyway — so no infinite cluster in ω in this regime; and above 1 − p_c there is no infinite cluster in ω*, since there the dual parameter 1 − p is below p_c. So if p_c were smaller than 1/2, there would be a whole range of parameters, between p_c and 1 − p_c, for which there is an infinite cluster in ω and an infinite cluster in ω*. And if you think about it, that seems very hard to arrange, because having an infinite cluster in both ω and ω* means having infinite interfaces, and that looks difficult. So if you assume — and this will be our hypothesis (H1) — that there is no parameter at which there is an infinite cluster in both ω and ω*, then this must imply that p_c is at least 1/2: (H1) implies p_c ≥ 1/2. Let us go the other way — and by the way, this hypothesis is quite sensible: if you think of the infinite cluster as a big, omnipresent cluster, as everybody does, then it really seems impossible that in the dual you would have exactly the same kind of cluster coexisting with it. You can try to do the same by assuming that p_c is larger than 1/2: then 1 − p_c is on this side, and if you run the same reasoning, what do you end up with? You end up with a whole regime here, between 1 − p_c and p_c, for which there is no infinite cluster in
either the primal or the dual configuration. So if you assume — and again this is a hypothesis, call it (H2) — that there is no regime in which neither ω nor ω* has an infinite cluster, then this second hypothesis implies that p_c, this time, is at most 1/2. But these two hypotheses are a priori not so clear; in fact there are models for which such statements fail. So this is level 1 of the reasoning, and it already goes further: level 1 really comes from the idea that if there is no infinite cluster in one configuration, there should be one in the other, and vice versa. It is a reasonable thing to believe, but it is not true a priori.

OK, so let us push the reasoning a bit more and look at the following event: the event that the box of size n contains a path from left to right in ω. What is the probability of this event at p = 1/2? Well, the probability of this event plus the probability of its complement equals 1, and the probability of the complement is not hard to estimate, because the complement is exactly the following: if you do not have a left-right open path, then there must be a top-bottom path in the dual configuration — this dual event here, which you obtain by rotating the picture. The point is that this dual event lives in an n by (n+1) rectangle oriented in the other direction, and I am asking for a crossing in a configuration whose distribution is again Bernoulli percolation with parameter 1/2. So this event and the rotated dual event simply have the same probability; the two probabilities are equal and sum to 1, which implies that the probability of the crossing is equal to 1/2. The computation is written out just below.
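Here is the self-duality computation written once and for all. My notation: H_n is the left-right open crossing of the (n+1) × n rectangle, and V_n* is the top-bottom crossing by dual-open edges of the dual rectangle, which is the same shape rotated by a quarter turn.

```latex
\mathbb{P}_{1/2}[H_n] + \mathbb{P}_{1/2}[H_n^{\,c}] = 1,
\qquad
\mathbb{P}_{1/2}[H_n^{\,c}]
  = \mathbb{P}_{1/2}[V_n^{*}]
  = \mathbb{P}_{1/2}[H_n]
\;\;\Longrightarrow\;\;
\mathbb{P}_{1/2}[H_n] = \tfrac12 ,
```

where the middle equality uses that ω* is itself Bernoulli percolation with parameter 1 − 1/2 = 1/2, and that the dual rectangle is the primal one rotated by ninety degrees.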
So why is it interesting to have this estimate? Well, we can try to make a hypothesis. If p is larger than p_c, it is quite reasonable to believe that you have this infinite cluster, and if the box is large enough the infinite cluster has a good chance of containing, in particular, a piece crossing it from left to right. So the first hypothesis I can make, call it (H'1), is that for every p larger than p_c the probability of this crossing event tends to 1 as n tends to infinity; that seems a decent hypothesis. There is a second hypothesis one can make — sorry, (H'2) — which is that the opposite happens when p is smaller than p_c, namely that this probability tends to 0 as n tends to infinity. If you have these two hypotheses, then automatically 1/2 can be neither strictly above p_c nor strictly below p_c, so 1/2 must be equal to p_c. The problem is that it is not so clear how to justify claims like these; but at least one of the two hypotheses is provable. So that was Theorem 1.1 — I will try to keep track of the numbering, but I do not have a great memory for it — and Proposition 1.2 says that, by the way, (H'1) is true: when p is larger than p_c, the crossing probability necessarily tends to 1.

OK, so we are going to prove that, and I will use two assumptions, two things that I will not justify but which are completely standard in percolation theory. The first is that either there is no infinite cluster, or, when there is one, it is almost surely unique: when the infinite cluster exists, it exists almost surely and it is unique. This is a standard statement, and those who do not know it can go and look it up. The second input is also standard: it is the FKG inequality, which says the following. Say that an event A is increasing if, whenever I take two configurations with one smaller than the other — so the second has a few more open edges than the first — then ω in A implies ω' in A; in other words, an event that is stable under adding edges to the configuration. Typically, the existence of a left-right crossing is increasing: if I add edges, I certainly stay in my event. The FKG inequality says that for any two increasing events A and B, the probability of A intersected with B is at least the probability of A times the probability of B. You can reinterpret this by dividing by the probability of B: conditionally on an increasing event, the probability of another increasing event only goes up. These are things that many of you have seen as master's students — there were a lot of master's students, apparently — so I will try not to give too many details, but at the same time to give enough that everyone can follow. OK, so these are my two assumptions, and I will not prove them. Now, how do we conclude the proof? The first thing we do is take a box of size k, call it Λ_k, and put it in the middle of my box of size n. So I really put it in the middle — the big box is (n+1) by n; let us just say I put it in the middle of the n by n box, sorry.
The first thing I want to say is that I want to look at the probability that there is a path from this small box — imagine k much smaller than n, picture that in your head — to the left side of my big box, staying inside the big box. So some probability, under P_p, of having that. I claim that this is, more or less, at least the probability that the small box is connected to the boundary, raised to a suitable power; this may look a bit strange, but think of the following. If you do not have this, there is no path from the centre box to the left side. That event is decreasing, in the sense that it is the complement of an increasing event, and for decreasing events you have exactly the same FKG inequality: you simply pass to complements. So call A_1 the event that there is no path from the centre box to the top side, A_2 the event that there is no path to the right side, A_3 to the bottom, and A_4 to the left. The event that there is no path from the centre box to the boundary of the big box is the intersection of these four events. But these four events are decreasing, so if I use the FKG inequality not for two but for four events — you just apply it three times — I get that the probability of the intersection of the four events is at least the product of the probabilities, and by symmetry this is the probability of not having a path to one given side, to the power 4. Passing to complements, this says exactly that the probability of having a path to one side is at least 1 minus the probability of having no path to the boundary at all, to the power 1/4 — this is the square-root trick, and I do not want to spend too much time on it — and this in turn is at least 1 minus the probability that the small box is not connected to infinity, to the power 1/4. So in particular — OK, I think everybody has written it down, so let me carry it over here.

Now I have my box Λ_k here, and I take a translate of this box; call it Λ'_k. What is the probability that from this box there is a path to the left side and from the other box there is a path to the right side? Well, these are two increasing events, so the probability of the intersection is at least the product of the two probabilities — actually you do not even need FKG here, you could just use complements and a union bound if you prefer — but in any case it is at least the square of the quantity above. So you use FKG, or you use complements and the union bound. Why do I want to prove that? Simply because, when n tends to infinity, these two paths have a priori no reason to be connected to each other: maybe there is something, a dual surface, blocking, preventing these two clusters from being joined. But the point is that, when n tends to infinity, I told you that there is almost surely a unique infinite cluster. This implies automatically that when I take the liminf — let us take the liminf, if you want — the liminf of the probability of crossing the n by (n+1) box is at least this quantity, because the probability of having two disjoint infinite clusters emanating from boxes of size k is 0; so in the limit, having the two arms is the same as having the crossing. The chain of inequalities is spelled out just below.
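For the record, here is the chain of inequalities just described, with A_1, ..., A_4 the events that Λ_k is not connected, inside Λ_n, to the four sides of Λ_n; this is only a transcription of the argument above.

```latex
\mathbb{P}_p[\Lambda_k \not\leftrightarrow \partial\Lambda_n]
  = \mathbb{P}_p\Big[\bigcap_{i=1}^{4} A_i\Big]
  \ \ge\ \prod_{i=1}^{4}\mathbb{P}_p[A_i]
  \ =\ \mathbb{P}_p[A_1]^{4}
  \qquad\text{(FKG for decreasing events, then symmetry)},
```
```latex
\mathbb{P}_p[\Lambda_k \leftrightarrow \text{left side of }\Lambda_n]
  \ \ge\ 1-\mathbb{P}_p[\Lambda_k \not\leftrightarrow \partial\Lambda_n]^{1/4}
  \ \ge\ 1-\big(1-\mathbb{P}_p[\Lambda_k \leftrightarrow \infty]\big)^{1/4},
```
```latex
\mathbb{P}_p\big[\{\Lambda_k \leftrightarrow \text{left}\}\cap\{\Lambda'_k \leftrightarrow \text{right}\}\big]
  \ \ge\ \Big(1-\big(1-\mathbb{P}_p[\Lambda_k \leftrightarrow \infty]\big)^{1/4}\Big)^{2}.
```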
The difference between having these two arms and having an actual crossing of the box is contained in the event that there are two disjoint clusters going from these small boxes to distance n without ever being connected to each other, and that event has probability going to 0, because we know that almost surely there is a unique infinite cluster. So this implies that the liminf of the crossing probability is at least the quantity above; this is the step where uniqueness is used, to replace "two long disjoint arms" by "a crossing from left to right": the error I make is included in the event that there are two disjoint infinite clusters touching the box of size k, and that has probability 0. OK, so I do that. But now, note: why do we need the small box at all? Why is it not enough to take a single point instead of Λ_k? That will be for the next step. Now, when k tends to infinity — because the infinite cluster exists almost surely — the computation tells you that this probability goes to... sorry, I apologise; please do not hesitate to stop me, especially when I write things that make no sense. So this quantity, 1 minus the probability that Λ_k meets the infinite cluster, goes to 0 as k tends to infinity. So letting k tend to infinity, what have I obtained? I have obtained that this liminf must be equal to 1, so I have just proved that the crossing probability converges to 1, which is what I wanted. Here I really want to highlight the fact that the statement "this probability tends to 1 when p is larger than p_c" really relies on uniqueness of the infinite cluster and on the FKG inequality. Some people may — I mean, no, maybe I do not know; I felt you were not all convinced earlier, looking at your faces, but anyway. So the corollary of this is that, since we have now established (H'1) and since the crossing probability at 1/2 is exactly 1/2, which certainly does not tend to 1, p_c must be larger than or equal to 1/2.

So the moral of the story is that proving a lower bound on the critical point is not the end of the world — well, maybe it was for you, but if you go back and look at it in a little more detail, it is not the end of the world. This direction is not difficult anyway; the problem is to make sense of the other direction, and that is not as easy. OK, so let us go to level 3 of the reasoning, which is more of a detour than something really important for what we will do; it is really to give you an idea of how the historical proofs went. What is a phase transition? A phase transition is a brusque change in the macroscopic quantities of your system. So here is one quantity I like a lot: if I look at a box of size n — n by n+1 — a macroscopic quantity of my system could be: is there a crossing or not? So let us call this probability, as a function of p, f_n(p). Can I say something relevant about the graph of this function? I can already plot a few points: you know that at 1/2 it is 1/2, I just proved that; at 0 it is 0; at 1 it is 1. Up to now everybody is following, I think. And in between, what does it look like?
Well, in between, it is going to look like that: it stays very close to 0, then rises sharply through 1/2, gets very close to 1 and stays very close to 1. Oh wow, that was nice — for a long drawing; do not ask me to redo it. So there is a sharp threshold, a brusque change of behaviour, in the sense that if you look at, say, a level 1 − ε and a level ε, then the window between the first time you reach ε and the first time you reach 1 − ε — this spacing, call it Δp — tends to 0 as n tends to infinity. This is what you expect: that would be exactly a phase transition, a brusque change in my macroscopic behaviour. But you need to prove that, and it is not that trivial. Yes — you could expect that; actually it is going to be a continuous phase transition, because this is not the probability that 0 is connected to distance n. Let me make a small parenthesis, because that is a good question: if you define θ_n(p) as the probability that 0 is connected to distance n, that is a continuous function of p, and in particular even its limit θ(p) is also going to vary continuously. But here we are really looking at a different kind of quantity, a crossing of a box in the middle of which 0 sits; a priori 0 has no reason to even lie on the crossing, and typically it will not.

OK, so how could we prove something like that? There is a theorem, which I will call Theorem 1.4 but which is more a meta-theorem than anything, and I will not attach to it the name of whoever proved it first, because all the proofs are in fact based on this observation: there exists a constant c > 0 such that for every n and every p, the derivative of f_n at p satisfies f_n'(p) ≥ c log n · f_n(p)(1 − f_n(p)). The factor f_n(p)(1 − f_n(p)) is natural if you think about it: when your function is close to 0 or close to 1, you cannot expect the derivative to stay big, so this is just the natural quantity to put there. The important term is the one in front, the log n — I wanted to highlight it, that is the only reason I used colour on the board. So why is it good? Well, because, if you think about it — we know that at 1/2 the probability is one half — this inequality can be integrated to show exactly the behaviour in the picture. And I do want to do the integration, because even though we will not work with this exact inequality, we will work with something similar in the next lectures. So how do you integrate this kind of differential inequality? You observe that f_n'/(f_n(1 − f_n)) is exactly the derivative of the function log(f_n/(1 − f_n)), and this derivative is larger than c log n for every p. So when I integrate this between p and p + ε — you can do it in your head, and it is written out just below — here is what I get.
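In formulas, the step just described reads as follows (c being the constant from Theorem 1.4):

```latex
\frac{d}{dp}\log\frac{f_n(p)}{1-f_n(p)}
  = \frac{f_n'(p)}{f_n(p)\,\big(1-f_n(p)\big)}
  \ \ge\ c\,\log n
\qquad\Longrightarrow\qquad
\frac{f_n(p+\varepsilon)}{1-f_n(p+\varepsilon)}
  \ \ge\ n^{c\varepsilon}\;\frac{f_n(p)}{1-f_n(p)} .
```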
So apply this with p = 1/2, where f_n(1/2) = 1/2: the right-hand side is n^{cε}, and since f_n(1/2 + ε) is at most 1, rearranging gives f_n(1/2 + ε) ≥ 1 − 1/n^{cε}. If I do the same in the other direction — imagine that p + ε is equal to 1/2, so that the left-hand side is at most 1 — I get f_n(1/2 − ε) ≤ 1/n^{cε}. So this gives me exactly the picture above for the crossing probability. Notice that the window here, the ε, is of order 1/log n in this argument, which is actually very far from the truth: the truth is in fact a window of order 1 over n to some power, so this is a very weak statement if you want. But if your goal was, for instance, to make step level 2 of our reasoning valid, this is sufficient to prove at least that above 1/2 and below 1/2 the crossing probabilities do not stay bounded away from 0 and 1: they tend to 1 or they tend to 0. The problem is that this is not at all what we wanted: we wanted to prove that below p_c the crossing probability goes to 0 — that was the missing step, the (H'2) we did not justify. So this is not quite sufficient, but in fact, by working a little more, you can make it happen: similar reasonings do give you what you want.

Let me just tell you a little bit how you get this differential inequality. It is exactly the type of inequality that Kesten proved to get his theorem, and Kesten did it by hand: he did not have the big theoretical background on what I will discuss next, he did not have it at his disposal. The derivative of the probability of this event has a geometric interpretation in terms of what we call the expected number of pivotal edges, and he simply proved by hand that the expected number of pivotals is larger than that. It is a pretty tricky proof, really difficult, and there are several alternative ways of getting it by hand. Then somebody gave a more conceptual proof of it, which I think is a beautiful proof and which is also appealing from the physical point of view, even though it relies on a somewhat mysterious mathematical result: Bollobás and Riordan observed that a theorem from the theory of Boolean functions could be used here. This theorem is the following — if you do not like it, you can rest for a second and come back to me later. It is a theorem you can find in different forms; sometimes it is called BKKKL, for Bourgain, Kahn, Kalai, Katznelson and Linial, and it was also proved in slightly different forms — I think the form I am going to describe is essentially that one. It says the following: take A increasing and look at the derivative of P_p[A] with respect to p; then it is at least a constant c — which depends on nothing — times P_p[A](1 − P_p[A]) times the logarithm of 1 over the maximum, over the edges e, of the influence of e on A, where the influence is just the probability that the event occurs when ω_e is set to 1 minus the probability that it occurs when ω_e is set to 0 — for those who prefer, it is the probability that e is pivotal for A. OK, I am not sure this is very appealing as it stands, but notice the following: the proof of this uses discrete Fourier analysis and hypercontractivity; it is not an elementary result. If you remove the log it becomes elementary, but having it with this log is
very difficult. So this gain, this logarithmic gain, is very important. I just want to illustrate it, because I really think the argument of Bollobás and Riordan is beautiful — and somehow what I am going to tell you next is that we do not need it, which is a little disappointing, because it is the reason I started working in this area. The observation of Bollobás and Riordan is the following. Take an event A, and imagine you are on the torus — that does not change anything, you are still dealing with Bernoulli percolation, so just work on the torus if you prefer — and imagine that your event A is invariant under translations. The crossing event is typically not invariant under translations, but it is pretty close to one; you can think, on your torus, of the event "there exists a cluster of radius n", something like that — that is an event invariant under translations. If the event is invariant under translations, then all the influences of the horizontal edges are the same, and all the influences of the vertical edges are the same, just by invariance under translation. But if this is true, then either all the influences are small or they are all big. So, for A invariant under translations — say we are on the torus — there are two cases. Either the influence of the vertical edges is larger than 1/n^ε; but then there are n² vertical edges, all with the same influence, and therefore the derivative — ah yes, I should have said that the derivative of the probability of your event is in fact the sum of the influences; this is called Russo's formula — the derivative is at least n² times 1/n^ε, so it is in fact much larger than log n times P[A](1 − P[A]). If the influences of the horizontal edges are larger than 1/n^ε, the same argument gives you a very large derivative. So what remains is the case where both are smaller than 1/n^ε; but that means the maximal influence is smaller than 1/n^ε, so the logarithm of 1 over the maximal influence is at least ε log n, and you get the log n factor from the theorem. So this is for events that are invariant under translations, but Bollobás and Riordan found a clever way of applying it, basically, to this crossing event, and proved that it undergoes a sharp threshold. I like this theorem because it really tells you that quantities which are macroscopic in a physical system — which are homogeneous, roughly the same at every place in your system — undergo sharp phase transitions; it is a very general fact about such quantities. The dichotomy is written out in symbols just below.
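In symbols, for A increasing and invariant under the translations of the torus of side n, and any fixed ε > 0, the dichotomy just described is the following; Inf_e(A) denotes the influence of the edge e, the first line is the BKKKL-type bound, and the second is Russo's formula.

```latex
\text{either}\quad \max_{e}\mathrm{Inf}_e(A)\le n^{-\varepsilon}
\ \Longrightarrow\
\frac{d}{dp}\,\mathbb{P}_p[A]
  \ \ge\ c\,\mathbb{P}_p[A]\big(1-\mathbb{P}_p[A]\big)\,\log\frac{1}{\max_{e}\mathrm{Inf}_e(A)}
  \ \ge\ c\,\varepsilon\,\log n\ \mathbb{P}_p[A]\big(1-\mathbb{P}_p[A]\big),
```
```latex
\text{or}\quad \mathrm{Inf}_e(A)\ge n^{-\varepsilon}\ \text{for all (say) vertical } e
\ \Longrightarrow\
\frac{d}{dp}\,\mathbb{P}_p[A]
  \ =\ \sum_{e}\mathrm{Inf}_e(A)
  \ \ge\ n^{2}\cdot n^{-\varepsilon}\ \gg\ \log n .
```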
The problem, as I said, is that this does not quite justify anything in our theorem: that is how they proved their result, but from there a lot of work remains to actually get p_c = 1/2, so I do not want to describe it, because I am going to describe to you something much simpler. Let me also make one more remark — where is it, I erased it, I guess — namely that here, when I move down by ε, the probability decays only polynomially fast, and when I move up by ε it goes to 1 only polynomially fast; this is not going to be sufficient for us. So we now want the last step of the reasoning, level 4, which is the following. If I want to conclude the proof of p_c = 1/2, it is sufficient for me to prove the following property, which I am going to call (exp): for any p smaller than p_c there exists a constant c_p > 0 such that θ_n(p), the probability that 0 is connected to the boundary of the ball of size n — this really is the ball of size n — decays exponentially fast, θ_n(p) ≤ exp(−c_p n). Imagine I have this. If you work a lot, you have a chance, in some cases, of proving polynomial decay; but imagine I give you exponential decay for every p smaller than p_c. Then I can conclude, simply because — Corollary 1.6 — p_c has to be smaller than or equal to 1/2. The way I prove it is that I use (exp) to bound the probability of having a crossing of the n by (n+1) box: for that crossing I need one point on the left side to be connected to distance at least n — actually even n + 1 — so the crossing probability is at most n times θ_n(p). If θ_n(p) decays exponentially fast, this goes to 0 as n tends to infinity, for any p smaller than p_c. So for p smaller than p_c the crossing probability is bounded by something which, by (exp), goes to zero. In fact, here any rate of decay faster than 1/n would give me that the crossing probability goes to zero, and that would be sufficient. Of course this looks like using a sledgehammer to kill a fly, but in the end all the proofs of p_c = 1/2 go through proving something like exponential decay, or at least it is very easy to deduce such a statement from the various proofs. So this is the first appearance of this condition, and I hope that, at least for the square lattice, all the discussion before shows you that there is no really simple way of getting around proving a statement like this if you want to compute p_c.

OK. But maybe your favourite model is not Bernoulli percolation — in fact my favourite model is not really Bernoulli percolation either — maybe it is the Ising model. So (b), let us look at what the Ising model is. I am not going to prove exponential decay for it; I am going to try to justify, on another example, that it is a useful thing to prove. So take a finite graph G which is a subset of Z², and for this graph I am going to call σ a spin configuration, which is just a collection of +1's and −1's assigned to the vertices of my graph. So σ_x is called the spin at the vertex x; it is either +1 or −1 — sometimes I will just write + or −, the two notations mean the same thing. Now, for # equal to 0 or +, I am going to define the Hamiltonian, the energy of a configuration — actually, how many people have already seen the Ising model? Maybe I should have started with that. Or rather: who has not seen it? Really? I can skip it then — Matan, I am not trusting you on this one. OK, there is at least one person, and I am certain there are three or four people who just do not dare say it, so I am still going to define it quickly. The Hamiltonian is H^#_G(σ) = − Σ σ_x σ_y, summed over the edges xy of my graph, minus the sum over the vertices x on the boundary of my graph of the indicator function that σ_x equals #. The boundary of my graph is just the set of vertices x such that there is a y not in the vertex set with xy an edge of the original square lattice — this makes sense because G is a subgraph of the square lattice. And notice: the spins are equal to ±1, so if I put # = 0 the indicator is always zero and the second term simply is not there; and if I put # = +, this term counts the number of spins on the boundary which are +. So this is
the energy of the configuration, the Hamiltonian. Then I pick σ at random according to the Gibbs measure with either 0 boundary conditions or + boundary conditions, which gives to each σ a probability proportional to exp(−β H^#_G(σ)); and here I need to renormalise if I want a probability measure — it is very good that you noticed that. So there are + boundary conditions, and the 0 boundary conditions we usually call free boundary conditions. There is a β_c, a critical point, such that — oh sorry, and of course I had to work in finite volume if I wanted the energy to be defined, not an infinite sum of ±1's, but I can take the measure in infinite volume to be the limit as G tends to Z², the weak limit of the measures in finite volume. So now there is a phase transition, which says the following: when I look at the average spin at 0 (under the + measure), it is 0 if β is smaller than β_c and it is strictly positive if β is larger than β_c. And there is a theorem, due to Onsager, which says that β_c = (1/2) log(1 + √2).

So, question: how do we prove that? Let us go back to level 0 — it will go faster this time. For percolation we had a duality relation; for the Ising model there is also a duality relation, due to Kramers and Wannier, which says the following. Define β* by tanh(β*) = e^{−2β}. If you choose this β*, you have a relation between free energies, where the free energy is simply the limit, as n tends to infinity, of 1/|Λ_n| times the logarithm of the partition function. It is not completely obvious that this limit exists, but for people who do not already know this, I recommend they go and look — it is done in any good book on statistical physics. This free energy satisfies f(β*) = f(β) + log 2 + 2 log cosh(β*) − 2β. It is not that I love complicated formulas — I usually hate them — but this formula says something very important if you believe that there is a direct correspondence between phase transitions and singularities of the free energy, which is something believed in physics, at least in generic cases. If you want a unique phase transition — exactly as I mentioned for Bernoulli percolation — then that gets rephrased here into: there should be a unique singularity of the free energy. Now if there is a singularity at β, then, since everything else in the formula is smooth, there is a singularity at β* as well, by this Kramers–Wannier duality. What does that imply? It implies that β_c should equal β_c*: uniqueness of the singularity should give you β_c = β_c*. Note that, a priori, the correspondence between singularities and critical points is not completely clear mathematically, and the uniqueness of the critical point, in this sense of uniqueness of the singularity, is not clear either; but if you believe both, then you should have β_c = β_c*, and if I did things right this gives you exactly (1/2) log(1 + √2) — the little computation just below spells out how to solve the self-dual equation. But here again there are a bunch of things to believe.
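Solving for the self-dual point β = β* is elementary: writing x = e^{−2β}, the equation tanh β = e^{−2β} becomes

```latex
\frac{1-x}{1+x} = x
\ \Longleftrightarrow\ x^{2}+2x-1=0
\ \Longleftrightarrow\ x=\sqrt2-1
\ \Longleftrightarrow\ \beta=\tfrac12\log\big(1+\sqrt2\big),
```

which is also the unique β > 0 with sinh(2β) = 1.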
So why is this level 0, and why is it related to the question for percolation? Historically, the first approach to proving p_c = 1/2 was actually through the analogue of the free energy for percolation, trying to prove that it has a singularity only at 1/2. For the analogue of the free energy for percolation you cannot just take the logarithm of the partition function, because the partition function of percolation is simply 1 and you would get nothing interesting; it is a degenerate case, so you need to take the limit of something else, and it turns out to be the average number of clusters per vertex. This quantity has a trivial duality relation saying that the free energy at 1 − p equals the free energy at p plus something smooth. So this level 0 of reasoning was really to try to use duality, in this sense, for the free energy, and to guess that the critical point is 1/2. This strategy of going through the free energy to get the result for percolation never really worked; in fact it is much harder to study the free energy than to study all the other quantities. If you think about it — just for the people who are a little more specialist — there is only one exponent that we do not know how to compute rigorously for Bernoulli percolation, even in 2D, even on the triangular lattice, for which we have conformal invariance and basically all the critical exponents: it is exactly how the free energy, the average number of clusters per site, blows up when you approach p_c. So this free energy reasoning is really tough.

But for the Ising model people were saved by Onsager, who said: it may be difficult to prove that there should be a unique phase transition and things like that, so let us just compute the free energy — that settles it, if we get an exact formula. And the amazing thing is that he managed to do it. Onsager, in 1944, showed that the free energy of the Ising model is log 2 plus a (suitably normalised) double integral, over two angles θ and φ, of the logarithm of cosh²(2β) − sinh(2β)(cos θ + cos φ). I am writing it down mostly so that you do not think this is the fastest way to proceed — I mean, it is an excellent idea to try to compute it, but if you just want the critical point it is not the fastest route. And by the way, once you are there you can also try to compute other things: he also computed the spontaneous magnetisation and proved that it equals the positive part of 1 − sinh(2β)^{−4}, raised to the power 1/8. So you can compute things explicitly, and you can all go home with this formula. Here, something should be noticed: it is an amazing achievement, but somehow one was very lucky in the first place that there is a formula at all — a priori there is absolutely no reason that there should be a nice formula. Of course, if you think a bit more, there are actually deep reasons for it; but if you alter your lattice a little, or change your model a little, you lose this kind of magical property. So: can we guess the (1/2) log(1 + √2) without going through these computations? Yes, you will also lose this exact formula; but in terms of qualitative behaviour, many of the things that the other techniques give you remain true for other models.
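As a quick numerical sanity check of these two formulas — assuming the standard normalisations recalled above, and with function names of my own choosing — one can evaluate the self-dual point and the magnetisation curve:

```python
import math

# Kramers-Wannier self-dual point: tanh(beta) = exp(-2*beta), equivalently
# sinh(2*beta) = 1, whose solution is beta_c = log(1 + sqrt(2)) / 2.
beta_c = 0.5 * math.log(1.0 + math.sqrt(2.0))
print("beta_c =", beta_c, " sinh(2*beta_c) =", math.sinh(2 * beta_c))

def magnetization(beta):
    """Onsager-Yang spontaneous magnetisation (1 - sinh(2*beta)^(-4))^(1/8),
    taking the positive part, so it vanishes for beta <= beta_c."""
    s = math.sinh(2 * beta)
    inside = 1.0 - s ** (-4)
    return max(inside, 0.0) ** (1.0 / 8.0)

for beta in (0.42, beta_c, 0.45, 0.50, 0.60):
    print(f"beta = {beta:.4f}  m = {magnetization(beta):.4f}")
```

The magnetisation indeed vanishes exactly at β_c and grows for β above it, which is the phase transition discussed above.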
And a priori it is not an if and only if: I am going to give you, just after, an example of a model for which you can compute the critical point but for which you cannot compute the free energy at every β. So there one was lucky enough, but a priori it is not clear; and indeed this is very lattice-dependent, that is very true. OK, what did I want to do? Let me try to give you another computation of this critical point, and also try to give you a unifying picture of how it is related to the other computation, and to highlight the link with percolation. So let us try to define a percolation-type model which encodes geometrically the properties of the Ising model. This is called Fortuin–Kasteleyn percolation; it is sometimes also called the random-cluster model. How many people have seen the random-cluster model? How many people have never seen it? A little more — OK, I am progressing in the right direction, people know less and less, that is good.

So let us do the following. We are going to fix q — a priori q could be any number larger than 0, but for the whole class I am going to take it larger than or equal to 1, and since I do not want you to get confused, let us take q ≥ 1 right away — and p between 0 and 1. What I want to do, for boundary conditions # equal to 0 or 1, exactly as for the Ising model, is to introduce the following measure. It is a percolation measure, so it is exactly going to be as for percolation: for a graph G = (V, E), the probability of a configuration ω — where ω is just an element of {0,1}^E, a percolation configuration, open or closed for each edge — is proportional to p to the number of open edges in ω, which I will denote o(ω) (the number of edges e with ω_e = 1), times (1 − p) to the number of closed edges in ω, times q to the number of clusters in ω, all of this renormalised; here k_0(ω) is the number of clusters of ω, so you just count the number of clusters. Notice that an isolated site is a cluster: when I think of ω as a graph, I really think of the graph with the same vertex set as G and with edge set given by the open edges, so an isolated site counts as a cluster. So if I want the measure with 0 boundary conditions, I just take p^{number of open edges} (1 − p)^{number of closed edges} q^{number of clusters}. Notice that for q = 1 this is just Bernoulli percolation, choosing independently each edge to be open or closed. If I want the 1 boundary conditions, I do the same, but with the additional rule that all clusters touching the boundary of your graph — remember, G is taken here as a subgraph of Z² — are counted as one; it is just a slightly different way of counting the clusters. It is not really the object of this class to tell you why it is interesting to do that, but you can define these measures, and exactly as for the Ising model you can take the limit as G tends to infinity and define measures in infinite volume: you have random-cluster measures on Z². So what is the next question you want to ask? What is the critical point, if there is one? Define p_c such that θ(p), which I remind you is the probability — let us even write it — that 0 is connected to infinity, equals 0 if p is smaller than p_c and is positive if p is larger than p_c. The question is: what is p_c? A toy computation of these weights on a very small graph is sketched below.
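To make the definition concrete, here is a tiny brute-force sketch — the names and the toy graph are mine — that enumerates all configurations of a 4-cycle, computes the unnormalised weights p^{o(ω)} (1 − p)^{c(ω)} q^{k(ω)} with free boundary conditions, and reads off an edge marginal:

```python
from itertools import product

def count_clusters(vertices, open_edges):
    """Number of connected components of (vertices, open_edges), via union-find;
    isolated vertices count as clusters."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in open_edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(v) for v in vertices})

def fk_weights(vertices, edges, p, q):
    """Unnormalised random-cluster weights p^{#open} (1-p)^{#closed} q^{#clusters}
    for every subset of open edges (free boundary conditions)."""
    weights = {}
    for omega in product([0, 1], repeat=len(edges)):
        open_edges = [e for e, o in zip(edges, omega) if o == 1]
        w = p ** len(open_edges) * (1 - p) ** (len(edges) - len(open_edges))
        w *= q ** count_clusters(vertices, open_edges)
        weights[omega] = w
    return weights

# Toy example: a unit square (4 vertices, 4 edges) at the conjectured
# self-dual point p = sqrt(q) / (1 + sqrt(q)) for q = 2.
q = 2.0
p = q ** 0.5 / (1 + q ** 0.5)
vertices = [(0, 0), (1, 0), (0, 1), (1, 1)]
edges = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0))]
w = fk_weights(vertices, edges, p, q)
Z = sum(w.values())
prob_first_open = sum(v for omega, v in w.items() if omega[0] == 1) / Z
print(f"Z = {Z:.4f}, P[first edge open] = {prob_first_open:.4f}")
```

Enumeration is of course only feasible for very small graphs; the point is just to show how the factor q^{#clusters} reweights Bernoulli percolation.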
For q = 1 you are exactly back to Bernoulli percolation, and the question of computing p_c is exactly what I devoted the whole past hour to. Yeah — good point, but no, it is not clear; it is just not going to be so relevant, so I am going to sweep this under the carpet and maybe come back to it later, we will see. It is not clear at all: it is not like Bernoulli percolation, where it is completely obvious; here you need to do something more, but it is not going to matter much for the discussion. If you want to be rigorous, you can define p_c to be the infimum of the p for which θ(p) is positive; that does not by itself mean that for every p larger than p_c you have θ(p) positive, but it is well defined at this stage. OK, so, theorem, by Vincent Beffara and myself: p_c(q) — and here remember that q is larger than or equal to 1 — p_c, which is a function of q, is equal to √q / (1 + √q). So why is the monotonicity of θ in p not obvious? Monotonic means that the probability of an increasing event should increase with p, and when you try to check it, it is not completely elementary. For Bernoulli percolation there is an obvious coupling: you can really construct configurations at different parameters in such a way that ω_{p'} contains ω_p for p smaller than p'; here the obvious coupling just does not work, you need something more subtle. Actually, maybe I will present that at some point, because it is somewhat related to the proof I will describe. It is a bit like the Ising model: it is not completely obvious that correlations increase with β, you need to prove it, you need the second Griffiths inequality; here it is a little bit the same. So it is a very good question, and it is something I will not discuss too much.

OK, so let us go to level 0 of the reasoning, which was duality: can we guess, first, that this should be the right point? The proposition says the following: if ω is sampled according to this measure with boundary conditions 0, then ω*, the dual configuration, is sampled according to the following measure: a random-cluster measure with boundary conditions 1, with parameters p* and q* that I am going to give you in a second, on the graph G*. What is the graph G*? G* is the graph whose edge set is the set of e* for e in E — you take all the edges of your original graph and you take their dual edges, these are the edges of your dual graph — and whose vertex set is just the set of endpoints of these edges. So this is G*, and I need to tell you what q* and p* are. Well, q* is simply equal to q, and p* satisfies (p* p) / ((1 − p*)(1 − p)) = q. This is a little more complicated, but just notice that if you take q = 1, which again is simply Bernoulli percolation, this duality relation gives p* = 1 − p, so it is at least coherent with what we had; for q = 1 you have a proof, and now we need to check whether we have a proof for the other values. So let us go. Let me maybe draw a small graph like that — say this is G, and G* is this graph — and notice that I am claiming wired boundary conditions on the dual: let me close it up along the boundary and connect all the vertices of the boundary of G* together, via a path going
around the outside. OK, so what is the probability of ω? The probability of ω is proportional to (p/(1 − p))^{number of open edges} times q^{number of clusters}: it is the definition, except that the factor (1 − p)^{total number of edges} is constant — it does not depend on the configuration — so I just remove it and absorb it into the proportionality sign; that other constant gets absorbed as well. Now let us massage this expression a little, to get something written in terms of the dual configuration instead of the primal one. First, can I express the number of open edges in ω in terms of the number of open edges in the dual configuration? The number of open edges is, by definition, exactly the total number of edges minus the number of closed edges, and the closed edges are exactly the open edges of the dual; so o(ω) = |E| − o(ω*), and for this part the weight is proportional to ((1 − p)/p)^{o(ω*)}. Now, the number of connected components of ω: what does it correspond to in the dual? In the dual graph, the connected components of the primal configuration correspond exactly to the faces of the dual configuration — maybe I should make a drawing: suppose these are the open edges of my original graph, then my dual configuration here is such that the primal components are exactly its faces. But here I can use Euler's formula to re-express the number of faces in terms of other quantities of the dual configuration: the number of faces equals the number of edges, plus the number of connected components, plus 1, minus the number of vertices; so k(ω) = o(ω*) + k(ω*) + 1 − |V*|. If you have never seen this formula, I cannot do much for you right now; you can prove it by induction, starting from the empty graph — it is a very simple induction. So when I replace k(ω) by this expression, the "+1" and the "−|V*|" are constants, they do not depend on the configuration, so they get eaten by the proportionality sign, and I get q to the number of open edges of the dual configuration times q to the number of connected components of the dual configuration. So a factor q^{o(ω*)} pops up, together with q^{k(ω*)}; and my choice of p* instead of p is exactly such that the whole prefactor, q(1 − p)/p, is equal to p*/(1 − p*). So what is this? It is just the probability of ω* under the random-cluster measure with parameters p* and q on G*, with wired boundary conditions. Excellent: we have a duality relation. The whole computation is summarised compactly below.
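Assembled into one chain (o = number of open edges, k = number of clusters, f = number of faces of the dual configuration; constants are absorbed into the proportionality signs):

```latex
\phi_{p,q}[\omega]
 \ \propto\ \Big(\tfrac{p}{1-p}\Big)^{o(\omega)} q^{\,k(\omega)}
 \ \propto\ \Big(\tfrac{1-p}{p}\Big)^{o(\omega^*)} q^{\,o(\omega^*)+k(\omega^*)+1-|V^*|}
 \ \propto\ \Big(\tfrac{q(1-p)}{p}\Big)^{o(\omega^*)} q^{\,k(\omega^*)}
 \ =\ \Big(\tfrac{p^*}{1-p^*}\Big)^{o(\omega^*)} q^{\,k(\omega^*)},
```

using o(ω) = |E| − o(ω*) and k(ω) = f(ω*) = o(ω*) + k(ω*) + 1 − |V*| (Euler's formula), with p*/(1 − p*) := q(1 − p)/p. Setting p = p* gives (p/(1 − p))² = q, i.e. the self-dual value p_sd(q) = √q / (1 + √q).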
So now what do we do? We look at the self-dual point, exactly as before: before it was 1/2, here the self-dual point p_sd is the p such that p* = p, and if you solve p = p* in the relation you get exactly √q / (1 + √q). So look at this value and let us see what we can do there. We can do exactly as before and look, for instance, at a crossing probability. The complement is again a dual crossing event — a little annoying to draw — and I can use duality to express it in terms of the primal configuration. It is exactly the same use of duality — here we are in infinite volume, but taking the limit as G tends to infinity gives you the same duality relation in infinite volume — so this is equal to the probability of the rotated crossing for the dual measure, which I can write, if I want, with 0 boundary conditions. Here we face a small issue, namely that the measure on one side is not the same as the measure on the other side; but there is a general fact, a little like the monotonicity that some of you mentioned earlier (when p increases you have more and more edges in your configuration), namely that you can order the boundary conditions: the 1 (wired) boundary condition dominates the 0 (free) boundary condition, because it is a bit like saying that the edges along the boundary are open. So in fact this measure is bigger than that one, and this — monotonicity again, a very standard fact in the theory — tells you that the crossing probability at the self-dual point, for the dominating boundary condition, is at least 1/2, while for the free one it stays bounded away from 1. And no, it is not driving me crazy at all; I cannot even imagine how it is for you, who are not responsible for it.

OK, so what do we do now? First step: we are lucky, because Proposition 1.2 — the fact that the crossing probability tends to 1 when p is larger than p_c — used so little about Bernoulli percolation that the same argument works for Fortuin–Kasteleyn percolation. So Proposition 1.2, adapted — of course there is a little bit of work, but basically none — implies that for any p larger than p_c the crossing probabilities tend to 1, and you can even take the 0 boundary conditions. Combined with the duality relation at the self-dual point, this automatically tells you that the self-dual point has to be smaller than or equal to p_c: if it were above p_c, both the crossing and the dual crossing blocking it would have probability tending to 1, which is impossible. But let me first try to write only on these boards, because I feel my life is threatened: one of you is going to stop being able to bear it. So, corollary — or rather, consequence: p_c has to be at least the self-dual point, that is, p_c ≥ √q / (1 + √q). So we have this inequality. Very natural question: can we get an equivalent of Corollary 1.6, the other inequality? This is much less clear. But the argument assuming exponential decay was completely straightforward: with exponential decay you just look at the crossing probabilities, you use the union bound, and you say that the probability of crossing is at most n times the probability of being connected to distance n, which tends to zero — and I am done. So the analogue of Corollary 1.6 also works: to prove the other inequality, it is sufficient to prove the property (exp), exponential decay. Actually I will give you one last motivation later on, but a good question is to try to prove this exponential decay in general. So, theorem, due to myself, Aran Raoufi and Vincent Tassion: take FK percolation with q ≥ 1 on Z^d — in fact do not even restrict yourself to Z², do it on Z^d — then you have exponential decay in the subcritical regime, and the corollary is that you can compute the critical point for random-cluster models. Just a little bit of history about this theorem — but maybe I should first write the statement exactly: for any p smaller than p_c there exists c_p > 0 such that the probability under φ_{p,q} that 0 is connected to distance n is at most exp(−c_p n). You will see that in fact we prove something a little stronger, and that is important, but let us stay at this level for this lecture. So you get exponential
So, a little bit of history on that: when was it proved first, and where? The first proof was for q equal 1, for Bernoulli percolation, by Kesten, in d equal 2, so on Z2: for q equal 1, Kesten proved it when he proved that pc was equal to one half, and in particular he also proved exponential decay; this was in the 80s. Then for q equal 1 and d larger than or equal to 3, it was proved by Menshikov and by Aizenman and Barsky, in 86 and 87; so Menshikov in 86 and Aizenman-Barsky in 87. There I should say that there is a new proof, which is the proof that I will explain next time, by Vincent Tassion and myself. Ok, now when q is equal to 2 (we are going to see in a second the Ising model), there, somehow, you can almost go back to Onsager; I mean, exponential decay of correlations in d equal 2 probably follows from the whole body of work that was done on this. I would actually not really know who first stated the thing and proved it, but there you can do really much more anyway; you can in particular compute this, you can compute it explicitly. I will explain to you why the Ising model is related to that. And for q equal 2 and d larger than or equal to 3 (ok, here I should say larger than or equal to 2, of course it also works in dimension 2), the proof is by Aizenman, Barsky and Fernandez, in 87 as well; maybe both are in 86 and this one in 87; anyway, most of you were not even born at the time. No, no, reflection positivity only gives you part of the two-point function; yeah, but polynomial bounds, not exponential ones; at least to the best of my knowledge the first proof is due to Aizenman, Barsky and Fernandez. And there are also other cases, namely q very, very large compared to 1, so q larger than some q_c(d) much larger than 1 (after a lot of work you can do 25 in dimension 2, for instance, and it blows up very quickly with the dimension), and here think of it as a kind of perturbative argument. The first ones to do it were Kotecky and Shlosman, and then there were Messager and co-authors, so let's maybe not mention all of them. Ok, so this is basically what was known until a few years ago, and a few years ago we did q larger than or equal to 1 and d equal to 2, Beffara and myself. So the theorem there, it was exactly (I mean, that's the only missing point) to prove exponential decay for the two-dimensional random cluster model, and the ideas were very much based on the detour that I did earlier on in the class, where I was saying you can use this sharp threshold argument, this abstract theorem on the derivative; there we use this in a crucial way. The point is that the argument was very planar, and that basically it was hopeless in larger dimensions. And if you think this is isolated, because we are looking at a weird random cluster model and that's a weird idea in the first place, just take what is closest to Bernoulli percolation you can think of: think of a model where, instead of Bernoulli percolation, you take a 2-dependent model, so a model where the value of an edge is not independent of the other edges, but it is independent of any edge at distance 2 from it. Even for this model it was not known; we had no clue how to get exponential decay in any dimension. So that's exactly it, and I will explain to you why next week, because I will do the proof for Bernoulli percolation; you are going to see it's very short, it's very cute, we are very happy (I was very happy at least), but then you really see that there is one place which really depends on the independence, basically.
Of course, at high temperatures it was probably known, but it was not known that it extends all the way to the critical point. So here you have a very good point: the perturbative regime was known for a very long time. So p very close to 0, this is known; actually it's very simple to do. For the random cluster model it's extremely simple: the model at some p with a parameter q larger than or equal to 1 is smaller than the model with q equal to 1 at the same p, and for p smaller than 1 over the degree it's extremely simple to prove exponential decay for Bernoulli percolation. So this indeed was known, but here the whole game is to get something up to the critical point, where all the counting arguments do not work anymore. Let me say that there are many applications of this result. Once you have this, actually a slightly stronger form of it, you can prove many things: you can prove mixing properties of your model, you can prove Ornstein-Zernike estimates, so you can really get the two-point function, really understand how it decays, even get equivalents for it; it is going to decay like an exponential times a polynomial correction, and you can understand this polynomial correction. You can also know at which speed the mixing... I mean, you can bound the spectral gap of the dynamics naturally associated to this model. And in some sense, one of the big successes of the Fortuin-Kasteleyn percolation is that it gives a very good algorithm to sample the Potts model; but it gives a very good algorithm, at least at the theoretical level, only if you know that it mixes fast, and that's one of the things you can prove using this. Ok, what else did I want to do... ah, in the last 15 minutes (I completely forgot the break, but that's fine, I mean nobody wanted a coffee or anything like that), anyway, to end, I want to give you a last motivation, which is how we relate Ising and percolation, because so far I didn't tell you. So, last motivation: let's look at a last class of models and let's see the connection. So, only 10 minutes left: Potts models. A generalization of the Ising model: it's exactly the same definition as above (so it's good that it stayed on the board, it's a lucky one, I didn't do it on purpose), except that instead of taking sigma in minus 1, 1, I am going to take the spins to be colors among a set of q colors, q an integer. You define exactly the same thing: for any boundary condition like that you define the Hamiltonian and the measure, you take the limit when G tends to infinity, and it gives you the Gibbs measure, which is just a measure on random colorings of the plane, of Z2. And theorem... ah, sorry, sigma x sigma y, I should tell you what this means here: you just replace sigma x sigma y by the indicator that sigma x is equal to sigma y (or that sigma x is not equal to sigma y, depending on the sign convention). Ok, so... I'm completely lost, of course; where am I... anyway, theorem, by Beffara and myself, which is very much like the theorem there (that one was Theorem 1.10, this is 1.11), and it says that beta c, which I'm going to define in a second, is log of (1 plus square root of q). And beta c is defined as follows: I am going to denote the measure with a q for the q-state Potts model; I look at the measure with boundary condition 1, and I look at the probability that sigma 0 is equal to 1; well, I want to compare this to what it would be without any interaction, so I will compare it to 1 over q.
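For readers following along, here is one standard way to write the definition the lecturer is describing; this is a sketch, with the convention (an assumption on my part) that the Hamiltonian counts agreeing neighbouring pairs with an indicator, which, as he notes below, only changes beta by a factor compared to the plus/minus one Ising writing.

```latex
% q-state Potts model on a finite graph G = (V, E), boundary condition i
% (spins on the boundary of G forced to the colour i):
\[
  \sigma : V \to \{1,\dots,q\}, \qquad
  H_{G}^{i}(\sigma) \;=\; -\sum_{\{x,y\}\in E} \mathbf{1}[\sigma_x = \sigma_y],
  \qquad
  \mu_{G,\beta,q}^{i}[\sigma] \;=\; \frac{e^{-\beta H_{G}^{i}(\sigma)}}{Z_{G,\beta,q}^{i}} ,
\]
% and the critical point is defined through the order parameter:
\[
  \beta_c(q) \;=\; \inf\Big\{\beta \ge 0 :\ \lim_{G \uparrow \mathbb{Z}^2}
  \mu_{G,\beta,q}^{1}[\sigma_0 = 1] \;>\; \tfrac1q \Big\},
  \qquad \text{the theorem stating } \beta_c(q) = \log\!\big(1+\sqrt{q}\big).
\]
```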
Right, if this guy were sampled uniformly, it would be 1 with probability 1 over q. Well, I want this quantity to be equal to 0 if beta is smaller than beta c, and positive if beta is larger than beta c, and the claim is that beta c is equal to log of (1 plus square root of q). Notice that for q equal 2 you recover the Ising model (1, 2 or plus, minus, it's the same). The reason why you don't get the same beta c is just that before I was using sigma x times sigma y, which is 1 if they are equal and minus 1 if they are not, so there was a factor 2 in the Hamiltonian, which is exactly what pops up there. Ok, right, that's the theorem and the goal of these lectures. So this was known for q equal 2, and it was known for q very large, by the same arguments as here; in fact that is exactly what was proved there. And the last theorem I want to mention is a theorem by myself, which is that for this model you have the equivalent of this thing here: for any beta smaller than beta c there exists c_beta such that the correlation decays exponentially fast. Ok, let's put sigma x and sigma y: the correlation between two spins decays exponentially fast in the distance between them. Ok, so what is the connection between this and the result I mentioned before? The connection is going to come from a very simple coupling, and that's going to be the last 5 minutes of the class: you can relate the random cluster model and the Potts model. So, corollary 1 point... no, Proposition 1.13: do the following. Imagine you have a configuration omega which is sampled according to the random cluster measure, let's take free boundary conditions, and do the following: color each cluster uniformly at random. Ok, what does coloring mean: for any cluster in omega, choose a color uniformly at random, and set, for any x in your graph, sigma x equal to sigma C, where C is the cluster that x belongs to. Ok, so it's a coloring of the connected components of your graph. Do that; now you have a random coloring of your graph, and the claim is that sigma is in fact sampled according to the Potts measure, provided q is an integer, with 1 minus p equal to exponential of minus beta, so the beta is related to the p. Notice that you have the same thing, that's the first point, and the same thing if omega is sampled according to this measure with the 1 boundary condition: in this case you just do something different, you say that all the clusters touching the boundary automatically get the color 1, with sigma C equal 1 if C intersects the boundary (or, let's say, color i); then sigma is sampled according to the Potts measure with parameters G, beta, q and boundary condition i. Before I prove this, why is this proposition good? Well, it gives us a dictionary with the random cluster model. In particular, for instance, if I want to understand what this thing is, I can look, in the coupling, at whether 0 is connected to the boundary or not. So it is going to be the probability, in the coupling, that sigma 0 equals 1 and 0 is connected in omega to the boundary, plus the probability, in the coupling, that sigma 0 equals 1 in the case where 0 is not connected to the boundary. Right, this is just trivial, so far I didn't do anything smart. But now notice that if 0 is connected to the boundary (so I'm in the second procedure), the color of the cluster is necessarily 1, so here I can forget about this condition and just write the probability that 0 is connected to the boundary; it's automatic on this event.
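Here is a minimal sketch of the coloring procedure just described (the cluster-coloring step of this coupling), assuming a simple union-find to extract clusters; the function name, argument layout and helper structure are mine, for illustration, not the lecturer's.

```python
import random

def color_clusters(vertices, open_edges, q, boundary=None, boundary_color=1):
    """Turn an FK configuration into a spin configuration by giving each
    cluster of open edges a uniformly random color in {1, ..., q}.
    If `boundary` (a subset of `vertices`) is given, clusters touching it are
    forced to `boundary_color`, mimicking the '1' boundary condition above."""
    # Union-find over the vertices of the open subgraph.
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for x, y in open_edges:
        parent[find(x)] = find(y)

    # One uniform color per cluster root.
    cluster_color = {}
    for v in vertices:
        r = find(v)
        if r not in cluster_color:
            cluster_color[r] = random.randint(1, q)

    # Clusters touching the boundary get the prescribed color.
    for v in set(boundary or []):
        cluster_color[find(v)] = boundary_color

    return {v: cluster_color[find(v)] for v in vertices}
```

Per the proposition, feeding this an omega sampled from the random cluster measure at parameters (p, q) with 1 - p = exp(-beta) should produce a sample of the q-state Potts measure at inverse temperature beta, with the corresponding boundary condition.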
Now, on this event, when I'm not connected to the boundary, I mean, I am in another cluster, so the color of that cluster is chosen completely at random, and the conditional probability of having color 1 given that is 1 over q. So this is, sorry, phi with parameters G, p, q and boundary condition 1 of 0 connected to the boundary, plus 1 over q times the probability that it is not connected to the boundary, which is 1 over q plus (q minus 1) over q times the connection probability, or something like that; I'm necessarily going to get something like that. Now let G tend to infinity (this was with G, sorry): this is going to be equal to mu with parameters beta, q and boundary condition 1 of sigma 0 equal 1, and this is going to converge; the probability that 0 is connected to the boundary of the box, as the box goes to infinity, is going to give me that 0 is in the infinite cluster. So that tells me automatically that having ordering, having this larger than 1 over q, is equivalent to having this positive. So if I compute the critical point of the random cluster model, I prove the theorem that I, this time, erased, on the critical point of the random cluster model. Ok, 2 minutes is going to be a little bit short, but let's just finish the proof, because it doesn't make sense to postpone it to next time. You can also look at the probability that sigma 0 equals sigma x and see that it is expressed in terms of the probability that 0 is connected to x in your random cluster model; so if you understand all these things, you understand the Potts model, and in particular for q equal 2, the critical point of the Ising model: you can understand it as the corresponding value of the self-dual point of the random cluster model with q equal 2. Ok, the proof is actually very short. So take P, the coupling between sigma and omega, and let's say that omega is compatible with sigma if omega_xy equal 1 implies sigma_x equal sigma_y. Right, the construction of the coupling is made in such a way that if you are in the same connected component you automatically get the same spin, so my coupling will only charge, will only give positive probability to, pairs (omega, sigma) which are compatible. So now, for the probability of this, what do I get? I need to sample omega first, so it's 1 over Z (let's do it with the free boundary condition, omega, 0, G, p, q): p to the number of open edges, (1 minus p) to the number of closed edges, q to the number of connected components; excellent, this is the probability of omega. And now I need to add the probability of sigma conditioned on omega. Right, what is this probability? For each connected component I am choosing one color at random, so to get exactly the configuration sigma, this conditional probability is 1 over q to the number of connected components. These two factors cancel each other, and I should not forget the indicator function that omega is compatible with sigma, so I get that the joint law of the two is proportional to p to the number of open edges times (1 minus p) to the number of closed edges times the indicator that omega is compatible with sigma. What do I need to do to conclude? I need to re-sum over omega to see what the probability of a given sigma is. Ok, so for fixed sigma let E_sigma be the set of edges xy with sigma_x not equal to sigma_y. Why do I want to introduce this set? Because if omega and sigma are compatible, what does it mean in terms of E_sigma? It means exactly that omega_e must be equal to 0 for any e in E_sigma; so omega compatible with sigma is exactly equivalent to: for every edge e in E_sigma, omega_e is 0. It's exactly that. So the probability of sigma is a sum, over omega compatible with sigma, of the probability of (omega, sigma). When I look at this thing here, I know that for any edge in E_sigma, omega_e must be equal to 0, so the contribution here is going to be (1 minus p) to the size of E_sigma: these guys, you are certain they are closed. Then for the other guys, the values of the other edges, well, they can be open or closed, both choices are compatible, so omega restricted to E minus E_sigma is really arbitrary; call it omega prime, and when I sum over omega prime this thing is just equal to 1. So the whole thing is equal to (1 minus p) to the size of E_sigma; (1 minus p) I chose to be equal to e to the minus beta, so this is e to the minus beta to the size of E_sigma, and the size of this set, what is it? It's just the number of pairs of neighbors which have different spins; it is exactly the energy of my configuration (well, here I wrote the Hamiltonian with equal spins, but you could have written it with different spins, it's exactly the same), and I get exactly that the distribution of sigma is the Potts model distribution, as recorded below.
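A compact record of the three steps just carried out, as a sketch: o(omega), c(omega), k(omega) count the open edges, closed edges and clusters of omega, E_sigma and the measures are as in the lecture, and the boundary of the box is written as the boundary of G; the symbols, not the content, are my notation.

```latex
% Dictionary between order parameters (the computation with the coupling):
\[
  \mu^{1}_{G,\beta,q}[\sigma_0 = 1]
  \;=\; \phi^{1}_{G,p,q}[0 \leftrightarrow \partial G]
        + \tfrac1q\,\big(1-\phi^{1}_{G,p,q}[0 \leftrightarrow \partial G]\big)
  \;=\; \tfrac1q + \tfrac{q-1}{q}\,\phi^{1}_{G,p,q}[0 \leftrightarrow \partial G].
\]
% Joint law of the coupling (free boundary conditions): the q^{k(omega)} factors cancel,
\[
  \mathbf{P}[(\omega,\sigma)]
  \;=\; \frac1Z\, p^{\,o(\omega)} (1-p)^{\,c(\omega)} q^{\,k(\omega)} \cdot q^{-k(\omega)}
        \,\mathbf{1}[\omega \text{ compatible with } \sigma]
  \;\propto\; p^{\,o(\omega)} (1-p)^{\,c(\omega)}\,\mathbf{1}[\omega \text{ compatible with } \sigma].
\]
% Re-summing over omega for fixed sigma (edges of E_sigma forced closed, the rest free):
\[
  \mathbf{P}[\sigma]
  \;\propto\; (1-p)^{|E_\sigma|} \sum_{\omega' \in \{0,1\}^{E \setminus E_\sigma}}
      p^{\,o(\omega')} (1-p)^{\,c(\omega')}
  \;=\; (1-p)^{|E_\sigma|}
  \;=\; e^{-\beta |E_\sigma|},
\]
% which is exactly the Potts weight, since |E_sigma| counts the disagreeing neighbouring pairs.
```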
So this is the end of this first class. Next week we will focus on Bernoulli percolation first: I will give you a kind of candy, a cute proof of exponential decay, a little bit different, by the way, from what is usually done (we were happy to find an even smaller simplification), and then we will dive into a proof which is not very long either, maybe 3 or 4 pages, but a little bit less elementary. This proof we will build on in the lecture after that, the third lecture, and prove that it extends in fact to any random cluster model, but also to a large class of other models that you can think of. Thank you for attending, and hopefully see you next week, for those who are courageous enough. Thank you very much.