Thanks for coming back. So, two small things first, so that people can still arrive if they are late.

First, a small erratum. You saw that I was a bit confused about why I got one power of a at one point and another power of a later. That was completely silly of me, because if you discretize a function, you need here a renormalization δ_a so that it converges in the continuum; and if you look at the averages of φ_a against a function f, you may also need a renormalization there as well. But these two renormalizations are of course not equal to each other — they are related, but not equal — and I do not know why I wanted them to match. So let me just mention what is fairly obvious once you see it: if you want to convert one into the other, the natural relation between the two is something like δ_a being the square root of a^{2d} over Δ_a. So it is completely normal that I was not getting the same thing; the two quantities simply do not coincide. OK, this was just to correct what I did.

And now let me rewrite the main theorem, because it is the principal one. So let me just rewrite here the triviality theorem — you do not have to copy it again, it is exactly what I wrote yesterday. Consider the nearest-neighbor ferromagnetic Ising model on Z^4. Then there exists a constant c > 0, which is universal — I mean, one could even extract an explicit formula for it, it is just not that interesting — such that for every f which is smooth, or continuous and compactly supported, and every β ≤ β_c, one has a bound on how close the generating function of T_{f,L}(σ) is to the Gaussian one: its ratio to the Gaussian prediction, minus 1, is smaller than a constant depending on f, times z^4 over (log L)^c — and this is true for every z positive. I will record the precise display just after this remark. OK? So this is the theorem that will actually be the object of the class from now on.

I made a small comment yesterday, but it was only the first among many others, so let me put "Remarks", with an s. The first remark is that the theorem basically says that T_{f,L}(σ) behaves like a normal random variable of the right variance. The second thing — a question that people asked me — is that the bound looks not so great when β is very small; you could have a doubt that it is true there. But remember that this is only interesting when L is smaller than ξ(β): as L gets larger and larger, β has to be closer and closer to β_c. It is very much a result dealing with β near β_c, really — so do not bother about what happens far away from the critical point. The third comment is that here I am making one assumption which is not so natural: I am asking that β is smaller than or equal to β_c.
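Before continuing with this third remark, let me record the precise shape of the bound as announced; this display is a reconstruction of the board, with the normalization of T_{f,L} the one from the previous lecture:

```latex
\left|\;\frac{\big\langle e^{\,z\,T_{f,L}(\sigma)}\big\rangle_{\beta}}
             {e^{\,\frac{z^{2}}{2}\,\langle T_{f,L}(\sigma)^{2}\rangle_{\beta}}}\;-\;1\;\right|
\;\le\;\frac{C(f)\,z^{4}}{(\log L)^{c}}
\qquad\text{for every } z>0,\ \ \beta\le\beta_{c}.
```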
So the assumption that β ≤ β_c should not actually be important — it should not matter — but in our case it is technically important, to deal with the fact that if β were strictly larger than β_c, the spontaneous magnetization would be strictly positive. It should not matter, but technically it is more convenient. Because what happens when you are above the critical point? Well, you should subtract the spontaneous magnetization: working at β > β_c would require dealing with truncated correlations. For instance, if you define S(x_1, x_2; β): below the critical point it was simply the two-point function, but above the critical point you should subtract the spontaneous magnetization squared. This does not mean that the results become wrong — in fact, they should still be right, and the techniques we are going to develop should also be the right techniques to deal with this — but technically we do not manage to do it. And in fact, you are going to see during the lectures, when I explain how one proceeds in higher dimensions, dimension 5 and more — which is actually related to the next comment.

So, next comment: in dimension d strictly larger than 4, one has a similar result, where this time L^{d−4} replaces (log L)^c. So you get an error which is even smaller; the decay is very fast. This is a very famous result by Aizenman in 1980, and by Fröhlich independently the same year. The higher-dimensional case is actually much simpler to prove. But even in the higher-dimensional case, we do not currently know how to treat β larger than β_c. This is just to say that there is something about working with truncated correlations that is not that easy — and we have experts in the room, people who work on Ising above the critical point: these correlations are really not so easy to handle. So this is a typical example where things really do not work well.

A question somebody asked me: "If β is greater than β_c, can you choose different boundary conditions?" Yes — I mean, you do not need to here; you can also do it with free boundary conditions; but you could take plus boundary conditions, or minus, if you wish. If you take plus boundary conditions, then your variables inside have a mean which is not zero anymore, namely m*(β). So then you should look at σ_x − m*(β) as being your variable at each point, and then take a smeared average of that. And you would expect, indeed, that you still get something that converges, and that you should again get something trivial — always something Gaussian — but we do not know how to do it.

There was also a comment somebody made about the other assumption: if L is much larger than ξ(β), the result is in fact still true in some sense; it is just that the limits one can get are white noise. You get a Gaussian process, except that the covariance is zero for any pair of distinct points. OK, yes? "Does β_c depend on anything?" No, no — this is the β_c of the nearest-neighbor Ising model on Z^4. "Ah, on Z^4, I see, OK." Well, I will come back to that just after; but once you look at the φ⁴ lattice models, each φ⁴ lattice model has its own β_c, which depends on the parameters.
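For the record, the truncated two-point function alluded to above is the following (with, say, plus boundary conditions, so that ⟨σ_x⟩_β = m*(β)):

```latex
S^{T}(x_{1},x_{2};\beta)\;=\;\langle\sigma_{x_{1}}\sigma_{x_{2}}\rangle_{\beta}\;-\;m^{*}(\beta)^{2},
\qquad\text{for }\beta>\beta_{c},
```

and below β_c the subtracted term vanishes, so one recovers the plain two-point function.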
"It's already in infinite volume?" Yes, this is already in infinite volume. You could do it in finite volume, but typically it changes nothing; here we already took the cutoff R to infinity, basically. OK, so, yes? "In your definition of T_{f,L}, you rescale by some power of σ_L. Suppose we are at the critical point; so far we do not know the scaling dimension of σ, right? If it is too large, is it possible that σ_L is so large that it kills the two-point function?" Well, what could happen is that there is no convergence, meaning that for some f, T_{f,L}(σ) is a nice random variable; for other f it is just 0; and for other f it blows up. That is possible. But as soon as you want T_{f,L}(σ) to be non-trivial for some kind of nice function, then σ_L is the only normalization you can imagine.

So let me maybe add a remark like that. In the notation that I used, σ_L(β) is just the variance of T_{f,L}(σ) for f the indicator function of [−1,1]^d — σ_L(β) is just that. Of course, the indicator is not a compactly supported continuous function, OK? But here is the point: if you expand, this variance is the sum over x and y of f(x) f(y) times the correlation, and the correlations are positive. So take any function f compactly supported in [−1,1]^d — let me just simplify like that: I do not take [−R,R]^d but [−1,1]^d. Then, by definition, the non-renormalized ⟨T_{f,L}(σ)²⟩ — OK, this is not renormalized, just the plain smeared sum — is going to be smaller than ‖f‖_∞² times σ_L(β). So you see that you clearly want to divide by this quantity to get something bounded. And the same thing on the other side: if f is non-trivial on a small box [−ε,ε]^d, then you get a lower bound of the same kind, with the same σ_L. So it is really the right normalization: at least for a positive function, if you renormalize like that, you end up with a centered random variable which has a non-trivial variance — which is the most reasonable thing to try.
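In symbols, the two-sided comparison just described reads as follows (I write T^{unnorm}_{f,L}(σ) = Σ_x f(x/L) σ_x for the non-renormalized smeared field; this convention is my guess at the notation of the previous lecture):

```latex
\sigma_{L}(\beta)\;=\;\operatorname{Var}\Big(\sum_{x\in[-L,L]^{d}}\sigma_{x}\Big)
\;=\;\sum_{x,y\in[-L,L]^{d}}\langle\sigma_{x}\sigma_{y}\rangle_{\beta},
```

and, for f supported in [−1,1]^d, positivity of the correlations gives

```latex
\Big\langle\big(T^{\mathrm{unnorm}}_{f,L}(\sigma)\big)^{2}\Big\rangle_{\beta}
\;=\;\sum_{x,y}f\big(\tfrac{x}{L}\big)\,f\big(\tfrac{y}{L}\big)\,
\langle\sigma_{x}\sigma_{y}\rangle_{\beta}
\;\le\;\|f\|_{\infty}^{2}\,\sigma_{L}(\beta).
```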
OK, so I keep going. Seventh comment: the theorem generalizes to φ⁴ lattice models. Remember that in the φ⁴ lattice model there were parameters β, b and λ. What the generalization says is that for every λ > 0 and every b, the statement holds if you take β smaller than β_c(λ, b), the critical parameter of the φ⁴ lattice model. Remember that φ⁴ models look like Ising models, so they also have a critical parameter, which depends on λ and b, and the result is true as long as β is below this critical parameter.

In the case of φ⁴ models, this has the following consequence. Definition — I wrote 1.6, but I am absolutely not sure this is the right numbering. Say that a family of φ⁴ lattice models, indexed by n, converges to the random distribution φ if — so the parameters are indexed by n: there is λ_n, which plays the role of the parameter λ, there is b_n, and I am also allowed to vary β_n, as long as it satisfies the constraint below; the models are defined on a lattice of mesh size a_n. In fact, I should normally also carry the box size r_n, but let us forget about that.

So there are all these parameters, and I am allowed to choose them as I want, as long as β_n is smaller than β_c(λ_n, b_n) — that is the only constraint. And I say that this family of φ⁴ lattice models converges to the random distribution φ if, for every f smooth and compactly supported, ε_n T_{f,1/a_n}(φ_n) converges in law to T_f(φ). So I really gave myself all the freedom: I am allowed to choose all the parameters as I want, even the normalization ε_n; I only require that this converges for every f. In that case, I say that the family converges to φ.

Well, what the generalization of the theorem to φ⁴ lattice models gives is that, in dimension 4, any φ which is a limit of φ⁴ lattice models is a generalized Gaussian process — in the sense, let me just remind you, that (T_{f_1}(φ), …, T_{f_k}(φ)) is a Gaussian vector. So really, you cannot construct a non-Gaussian random distribution in dimension 4 using φ⁴ lattice models. Again, this is one way of proceeding, and a priori not the only one. But I would say that this is an extremely strong indication that whatever cutoff you use to try to construct φ⁴ in dimension 4 — R-valued, that is — you will get something trivial. So in order to construct something non-trivial, you would probably need to use non-commutative spins or something like that, because at least with R-valued φ⁴ lattice models you will probably fail.

OK, I keep going with my comments — this will be the last batch, and then we dive into the second part of the lectures. Eighth comment: as you saw, for dimension five and more I wrote d strictly larger than 4, because if you take long-range models you can actually mimic a model with non-integer dimension, and there you could also prove triviality; but if you take nearest-neighbor interactions, then you need to speak of d ≥ 5. As I said, in the 80s it was proved that the limit is trivial there, and then it took 40 years to get the nearest-neighbor Ising model on Z^4. But let me complete this with a story: there are cases in four dimensions where triviality of the limit was already known. When λ ≪ 1, you are looking, in some sense, at a φ⁴ model which is quite close to a Gaussian process — λ = 0 being exactly Gaussian. There, renormalization — I should say rigorous renormalization — arguments were used to prove convergence to the massive GFF scaling limit (where by "massive" I also include zero mass). Meaning that there, one could even prove that the model converges to what you get from the continuum massive Gaussian free field. Let me mention two teams behind these renormalization arguments. The first one: Gawędzki and Kupiainen — there are too many letters in there for me to spell. Already in the 80s, they used a rigorous renormalization scheme based on a rigorous version of Kadanoff's block-spin renormalization — rigorous block spins, à la Kadanoff, if you want; and this was in the 80s. More recently, there was a team including Bauerschmidt, Brydges and Slade, who obtained the same kind of results and then extended them to R^n-valued instead of R-valued spins. There the idea is a little bit different: you write, in some sense, your φ⁴ model in terms of the GFF — the discrete GFF that you would end up with if you just set λ = 0.
And then you decompose the covariance of this GFF in a multiscale fashion, you integrate out the modes scale by scale, and you keep track of how quantities like the spin-spin correlations behave. It is a very powerful strategy, a little bit different from the more physical one here: it is an expression of the φ⁴ lattice model in terms of an underlying GFF, plus a multiscale decomposition of that GFF. There are many other ways of proceeding, but I just wanted to mention these two. The important thing about them is that they are really perturbative in nature: they work because you start from something close to your Gaussian process. Think of it in terms of λ, with b fixed: at λ = 0 I have a discrete massive GFF, a very nice Gaussian process, et cetera, et cetera; and when you renormalize — when you look at a → 0 — it converges to a fixed point, which is the massive GFF in the limit. So if you start close enough, you will also converge to the same thing: that is what these renormalization arguments say.

Now, if you want to play devil's advocate — and these are beautiful results: they give very delicate estimates, you get a lot of information on the models — but if you want to play devil's advocate, you will say: if I want to build a φ⁴ field theory which is non-trivial, then I would like to start as far as possible from the Gaussian process. So that means starting rather on the other side, as far as possible. And what is λ tending to infinity? It is Ising. So in some sense Ising sits all the way out there: it is the furthest away from the Gaussian fixed point. That is why I stated my theorem in this context — in some sense, it is what is furthest away — and these perturbative renormalization arguments complement it very well; but we are speaking about almost different worlds. There, you start in the vicinity of a fixed point and you prove that you converge to the fixed point. What I am saying is that wherever you start, even very far away, you always end up with something Gaussian.

By the way — this is going to be comment 9 — one expects more than what the theorem gives: one expects that the only possible limit is a massive GFF, and this is not included in this theorem per se. Why? Because the theorem says the limit is a Gaussian process, but it does not tell you what the covariance of this process is. To prove that it is a massive GFF, you would need to prove something like the following: properly renormalizing the two-point function — finding the right δ_a and looking at S^a_2(x_1, x_2), which would be something like the spin-spin correlation of your Ising model — you should prove that it converges, as a → 0, to something; and ideally you want here the massive continuum Green function. I am going to be honest with you: at the time, I had zero clue how to do it, absolutely zero clue — it is, in some sense, a very, very different question. Recently, with one of my PhD students, we progressed a lot. It is definitely too early to state it as a theorem — maybe I should have chosen to give this class next year — but I think there is a chance to prove that the only limit you can get is the massive GFF. I just wanted to mention it. Tenth comment.
In dimension 2 — and this I will maybe even have time to present, we are going to see — you are not Gaussian: the limit is non-Gaussian. This is something fairly easy to see from conformal invariance: if you know conformal invariance, you can actually quickly see that you get a non-Gaussian field for the spin field. But there is actually a very soft argument that gives you the non-triviality, which basically proves that the fourth moment of your random variable T_{f,L} is not three times the second moment squared. So I will probably try to prove it at some point. And let me mention that in dimension 3, the limit is expected not to be Gaussian — but it is not known for the Ising model. There is a nice paper by Slava Rychkov and his co-authors where they even quantify, in some sense, how non-Gaussian it is, by looking at the four-point function; we will get to this quantity, maybe not today but next week. But there is no rigorous proof at the discrete level that you indeed get something non-Gaussian.

And I should say that this — the fact that it is expected to be non-Gaussian in 3D, and that it is non-Gaussian in 2D — resonates with the eleventh and last comment, which is that φ⁴_2 and φ⁴_3, the 2D and 3D φ⁴ models — really the continuum ones — have been constructed rigorously and proved to be non-trivial. So you will tell me: well, this is a little bit strange; you tell me that you only expect non-Gaussianity for Ising, and at the same time you know how to construct the limit? But these constructions are not based on the lattice cutoff: they are based on other constructions, like Fourier cutoffs and things like that. So this is why it does not, strictly speaking, rule out that what you would get from Ising is trivial — although, of course, nobody expects that: everybody — certainly I — expects that whatever reasonable cutoff you apply, you will end up with the same object in the limit. So in dimensions 2 and 3, it is already done.

"In dimension 2, is it the same limit as what is constructed?" You mean comparing here and here, what you can get from the two constructions — yes, you should end up with the same object. Of course there, it is always a little bit subtle: when people say that these theories are constructed, sometimes they only mean that, in some sense, they have the local limit of the thing, but they do not always have access to the long-range behaviour. Once you have φ⁴, you can rescale and try to see how the correlations behave when you look at the large-scale properties of this φ⁴. The large-scale properties of a critical continuum φ⁴ should be related to what you get by taking the scaling limit of the spin field at criticality — this critical Ising model, which is conformally invariant. But a continuum φ⁴ is not necessarily a conformally invariant object, not necessarily an object at criticality; in some sense, it can have a correlation length. The massive Gaussian free field has a correlation length — 1/m: correlations decay exponentially fast. OK, so even if there is a passage to the limit, you have to be careful: the limiting object is not necessarily a scale-invariant type of object, you have to know. But if you take it at criticality, at the right parameters, then its large-scale properties are going to be related to critical Ising.
"I think you completely messed with my brain right now. It makes me think that in 4 dimensions φ⁴ could be non-trivial, but at large scales it could be Gaussian?" No — I mean, it is just always trivial at every scale, always Gaussian at every scale. Whatever the scale of your T_{f,L} — whatever the scale of f — it will be Gaussian, whatever f. OK.

OK, so now we pass to the second part of the class, and it is going to look much more like a math course. So I saw some physicists yesterday who kind of panicked and said, oh, I hope it is not going to be too difficult. Don't worry, I am doing only easy math. So I am going to try to be self-contained, and I think it is actually a subject that fits being self-contained pretty well — it is almost combinatorics, as you are going to see. And so the tool I am going to develop is called the random current representation of the Ising model. And I should right away try to be a good advertiser and say that this random current representation is actually useful way beyond just trying to prove triviality. It is a very useful tool to study the Ising model, where — as we are going to see — you can rewrite the spin-spin correlations of the Ising model in terms of either random-walk-type probabilities or percolation-type probabilities. I will explain these things, we will manipulate them, and I will actually try to give you some direct applications, so that you see it is useful. And it appears more and more: the most refined results on the Ising model use this random current representation.

OK, so what is it? Definition — or rather, basic notions. As often in math, you start with a completely trivial definition and hopefully you get to something good. A current on a graph G — say with vertex set V and edge set E — is a function n from the unordered pairs {x, y} ⊂ V to the non-negative integers. So what you just do: you have your graph — for instance, say it has four points — and to every pair, so if this is x and this is y, you attribute a variable n_{xy}, which is just a non-negative integer, OK? And a priori, at this stage, any pair is allowed: there can be a non-negative integer for any pair. You are going to see, we are going to quickly restrict that a little bit, but this is a current, OK? You will maybe understand, a little bit, why we call it a current — though if somebody understands, do not hesitate to tell me, because I am not entirely certain why we call it like that; I see analogies with electric currents, but not much more than that.

OK, so on this current, there are things I want to be considering. So I will say that a vertex x ∈ V is a source of n if the following quantity — δ_x(n); I wrote δ n of x, sorry — which is just the sum over y of n_{xy}, is odd. I will always write them like that: n_{xy}, for x and y in V. OK? This quantity is an integer, so it is either odd or even; I will call x a source if it is odd. Yeah — sorry? "Are n_{xy} and n_{yx} the same?" Yes — you really look at unordered pairs. So the notation is a little bit confusing here, but n_{xy} is the same as n_{yx}: there is only one object, which is n.
To be fully rigorous I should write n_{{x,y}}, but as you are going to see, I do not use the notation a lot, so I try to keep a concise one — although the one with the set brackets would be a much better and much more trustworthy notation. OK, so I will use this one. So: x is a source if δ_x(n) is odd, and let ∂n be the set of sources of n. That is the second notion. And the last notion associated to a current is its weight: the weight w_β(n) of a current n is just the product over pairs xy of (β J_{xy})^{n_{xy}} divided by n_{xy}!. You are going to see, it appears very naturally — I mean, I am going to prove things now, right? So hopefully I will manage to give you an idea of why these objects emerge, or you will see from the proof that they are natural.

So what is going to be the game? The game is that we are going to express quantities of the Ising model in terms of weighted sums over currents. OK. And one thing that you immediately see is: if J_{xy} = 0, then, in order to have a non-zero weight, you need n_{xy} = 0 — otherwise w_β(n) = 0. Why do I say that? Because it then becomes very natural to fix the edge set: so far G was a graph, but I only ever mentioned the vertex set; it is very natural, in fact, to take as edges exactly the pairs where J_{xy} is non-zero. OK, so, for this reason, we will always set the edge set of G to be E := { {x, y} : J_{xy} > 0 } — in fact non-zero, but we will always look at ferromagnetic couplings, hence positive. With this convention, a current is just a function from the edges of your graph to Z_+. OK? This matters when, say, you look at long-range models: you do not really want to say that you are working on the complete graph, even though it is a bit what you are doing, because every pair {x, y} carries an edge between them; but since we will be working essentially with nearest neighbors, this is in fact a good convention. And so: n is an element of Z_+^E, a function on the edges of your graph, and you will use this a lot. So do not hesitate to ask me if something is not clear, because otherwise it is going to feel like a long time for you.

OK. So now we are going to do something similar to what people — perhaps many people in the room — have already seen: the high-temperature expansion. So, a notation that will be convenient: for a set A, σ_A is by definition the product of the σ_x for x in A. And we are going to look at Z_A(G, β), which by definition I set to be the sum over spin configurations σ of σ_A e^{−β H(σ)}. So if A is the empty set, I end up with the partition function of the Ising model — the normalization that you have by definition — and Z_A is just the same thing where I multiply by σ_A inside. OK. And what I claim is that this is equal to 2 to the number of vertices of the graph, times the sum of w_β(n) over all currents n whose source set is A. So I can rewrite this quantity like that.
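As a quick sanity check of this identity, here is a small brute-force computation on a made-up example — a triangle with unit couplings; the graph, β, and the truncation n_max are illustrative choices of mine, not from the lecture:

```python
import itertools, math

# A tiny made-up graph: a triangle with unit couplings.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
J = {e: 1.0 for e in E}
beta = 0.4

def Z_A(A):
    """Z_A(G, beta): sum over spin configurations of sigma_A * e^{beta * sum J sigma sigma}."""
    total = 0.0
    for spins in itertools.product([-1, 1], repeat=len(V)):
        sigma_A = math.prod(spins[x] for x in A)
        energy = sum(J[(x, y)] * spins[x] * spins[y] for (x, y) in E)
        total += sigma_A * math.exp(beta * energy)
    return total

def sources(n):
    """Source set of a current: vertices where delta_x(n) = sum_y n_xy is odd."""
    delta = {x: 0 for x in V}
    for (x, y), k in zip(E, n):
        delta[x] += k
        delta[y] += k
    return {x for x in V if delta[x] % 2 == 1}

def weight(n):
    """w_beta(n) = product over edges of (beta * J_xy)^{n_xy} / n_xy!."""
    return math.prod((beta * J[e]) ** k / math.factorial(k) for e, k in zip(E, n))

def current_sum(A, n_max=12):
    """Truncation of the (convergent) sum of w_beta(n) over currents with source set A."""
    return sum(weight(n)
               for n in itertools.product(range(n_max), repeat=len(E))
               if sources(n) == set(A))

# Both columns agree up to the tiny truncation error; an odd-size A gives 0 = 0.
for A in [(), (0, 1), (0, 1, 2)]:
    print(A, Z_A(A), 2 ** len(V) * current_sum(A))
```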
So, in particular, if I look at correlations between spins — at ⟨σ_A⟩ — then it is Z_A(G, β) over Z_∅(G, β), which is just the sum of w_β(n) over currents with source set A, divided by the sum of w_β(n) over sourceless currents. And I think you will agree that nothing strange happens with the denominator: there is always at least one current, so it is never zero. So you can express spin correlations — for instance ⟨σ_x σ_y⟩, which corresponds to A = {x, y} with only the two elements x and y — as a ratio of weighted sums over currents. I just draw your attention to the fact that you have different source sets depending on whether you are in the numerator or in the denominator.

OK, so let us try to prove this proposition, and maybe after that we take a small break. OK. So, proof of this thing. What you do is very simple: you are going to use that e^{β J_{xy} σ_x σ_y} — which is a factor that appears in this thing — can be expanded as the sum from n_{xy} = 0 to infinity of (β J_{xy} σ_x σ_y)^{n_{xy}} divided by n_{xy}!. OK, I just Taylor-expand like that. So if I do that, starting from Z_A, I can write it as the sum over σ of σ_A times the product over xy of e^{β J_{xy} σ_x σ_y}; this is exactly this thing; I expand each factor; and then I can try to interchange the two sums. What am I going to end up with? A sum over all possible currents n; then there are terms that do not depend on σ — the product over xy of (β J_{xy})^{n_{xy}} over n_{xy}!, these are those terms — and then there are the terms that depend on σ: I have a sum over σ, I have σ_A, and I have a product over x of σ_x to some power. Let us see, for each x, how many times σ_x appears: it appears n_{xy} times for every pair xy, so in total δ_x(n) times — that is exactly the number of times it appears. And maybe a smarter way of writing the exponent is δ_x(n) plus the indicator function that x belongs to A: σ_A is the product of the σ_x for x in A, and I just absorb it like that.

Now, the first, easy observation: if this exponent is even for every x, then — since σ_x is ±1-valued — each factor is 1, so I am just summing 1, and I get 2 to the number of vertices. On the other hand, if not, that means there is at least one vertex — call it x_0 — for which the exponent is odd. But each configuration σ can be associated to another configuration σ^{x_0}, which is the same configuration except you switch the spin at x_0. In this case, you see that σ and σ^{x_0} have exactly opposite contributions to this sum; therefore you get 0. So: if not, then 0 — because there is this involution that switches the state of the spin at x_0. Notice that this is actually a deep thing — I mean, it looks quite simple, but it is a manifestation of the plus-minus symmetry in spin space, the ± symmetry of the Ising model — because here you associate σ to σ^{x_0}, where the spin at x_0 is switched. I should write even smaller. OK. So, overall, what do I get? I get that Z_A is the sum over n of w_β(n), times 2 to the number of vertices, times the indicator function that the
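In one display, the computation we just did (my reconstruction of the board):

```latex
Z_{A}(G,\beta)
=\sum_{\sigma}\sigma_{A}\prod_{\{x,y\}}e^{\beta J_{xy}\sigma_{x}\sigma_{y}}
=\sum_{\sigma}\sigma_{A}\prod_{\{x,y\}}\sum_{n_{xy}\ge 0}
    \frac{(\beta J_{xy})^{n_{xy}}}{n_{xy}!}(\sigma_{x}\sigma_{y})^{n_{xy}}
=\sum_{n}w_{\beta}(n)\sum_{\sigma}\prod_{x}\sigma_{x}^{\,\delta_{x}(n)+\mathbf 1[x\in A]},
```

and the inner sum over σ equals 2^{|V|} when every exponent is even, and 0 otherwise.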
source set of n is A — because having δ_x(n) + 1[x ∈ A] even for every x means exactly that δ_x(n) is even for every x not in A and odd for every x in A. So I get exactly what I want.

Comments, and then we make a break. I do not know how many people have heard about the high-temperature expansion, but the high-temperature expansion is something where each edge carries a random variable which is 0 or 1 — either you have an edge or you do not. And the high-temperature expansion configuration is just the set of edges carrying odd current. So it is very easy to read the high-temperature expansion off the random current representation. At this stage it is not that obvious why the random current representation is better than the high-temperature expansion, for those who know what the high-temperature expansion is; but it will become very clear — I think actually during the second hour — that there is an additional tool that you get when you work with random currents, and that it is a very powerful one.

The second comment is just to give you a little bit of history. The representation appeared in the work of Griffiths, but I think it is fair to say that it was magnified — really used at its full power, at the next level of sophistication — by Aizenman, from 1980, exactly in the work on triviality. This is the end of 2.1. In 2.2, I am going to explain to you how you can kind of see a random-walk-type object inside a random current, and in 2.3 we will do what we call the switching lemma, which is a little bit the fundamental lemma of the subject — even though one should perhaps not use the word "lemma" for something so central, but anyway. OK, so let us make a 10-minute break, and we resume in 10 minutes.

[Break.] As I said, the game now — is this thing good for something? — OK, so the game now is to give you two interpretations of the random current: one in terms of objects that look a little bit like random paths, and one which is closer to what we do in percolation theory. So I am going to start with the random walk interpretation of currents. And the idea, as a starting observation, is the following: if you take n, and let us say you assume that n has no sources, then it is fairly easy to understand that n can be seen as the occupation time of a family of loops. So what do I mean by that? I mean that I can find a family of loops — not even necessarily self-avoiding: the loops can pass several times through the same edge — such that, counting the total number of passages through each edge, I recover exactly the integers n. So n, if there are no sources, can be seen as the occupation time of a family of loops. I am not saying that there is a canonical way of decomposing n into loops; what I am saying is that there is a way. And one easy way to see it is this. You start from n; you take a vertex where δ_x(n) is non-zero — so there is at least one incident edge with non-zero current; you step along this edge and remove one from its current, because you passed through it — so I remove one. Now I arrive at a vertex, and you will agree with me that this vertex is now necessarily a source of the new current: it had even degree here, and I removed one, so now it is odd. So necessarily I can keep going: there is necessarily an incident edge with positive current; I can take it, remove one, and keep going like that. The only place where I could possibly be blocked is the vertex I
started from. Why? Because after the first step, the starting vertex is the only one carrying odd current. So, like that, I drew one loop; I ended up with a new current which is smaller than before; and when I finish this loop, there is no source anymore — I created two sources by starting, and at the end of my drawing the two sources collide, and I do not have sources anymore. So I can just iterate: I take another vertex with positive δ_x(n), and I keep going. It is not at all canonical, but it works.

Now look instead at a current whose source set A is non-empty — notice, by the way, that |A| must be even; this is something you can immediately see from the definition, since summing δ_x(n) over all x counts every edge twice. Then n can be seen as the occupation time of a family of loops, plus paths pairing the vertices of A — plus paths pairing the vertices of A. Indeed, if you start the peeling process — this blue process — from a source, then as soon as you do one step, the new sources are elsewhere and there is no source at your position anymore; so this process necessarily must finish in one of the other sources. And then you take another source and do the same thing.

So — there is nothing canonical here; I did not tell you that you should be decomposing like that, and it is not a unique way of decomposing — I am going to define one particular way to do it, because it will be useful for us, first as an intuition and also in some of the results. We will call it the backbone. And in fact, this backbone is actually related to what Fröhlich did — and not to what Michael Aizenman did — for triviality, but I will mention that later on. So what you do is just this. You work with vertices in Z^d, so the vertices are indexed; you pick the smallest source — say I have four sources; since vertices are indexed, there is a smallest source; you can index them really as you want, there is nothing deep here — and you call it γ_0: the first step of a walk that I am going to construct. And at each step, if you have constructed γ_0 up to γ_t, what you do is the following. Say γ_t is not a source of the original n. Then you look around you and you take the smallest edge incident to γ_t, not already used, which carries odd current — let me rewrite this properly: the smallest edge incident to γ_t, not already used, of odd current — and you call γ_{t+1} its other extremity. So why can I always do that? When I start from a source, δ_x(n) is odd, so necessarily one of the edges around me carries odd current: I can go. And when I am at a vertex which is not a source of n, δ_x(n) is even, but I just used an odd edge to get in, so the number of remaining unused odd edges around me is odd: I can pick one which is odd, and I walk on it. And I do that, and I do that. The only special case is that maybe γ_t ends up being a source — maybe I am now here and γ_t is a source: then the current path is finished; pick the smallest source left, jump there, and keep going. So just this simple rule: if I arrive at a source, I pick the smallest remaining one and I restart from there. So the outcome of this, at the very end, is γ: a collection of |A|/2 paths pairing the vertices of A. That is the definition of your backbone.
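Here is a minimal sketch of this peeling rule in code — my own illustration, reading "smallest" as lexicographic order; it assumes n really is a current whose source set is `srcs`, so the parity argument above guarantees the walk never gets stuck:

```python
def backbone(n, srcs):
    """Pair up the sources of a current `n` (a dict: edge (x, y), x < y -> int)
    by walking, from the smallest source, along the smallest unused incident
    edge of odd current, restarting from the next source when one is reached."""
    used, paths, left = set(), [], sorted(srcs)
    while left:
        x = left.pop(0)                      # restart from the smallest source
        path = [x]
        while True:
            # Smallest unused edge incident to x with odd current; the parity
            # argument guarantees one exists until we land on another source.
            e = min(e for e, k in n.items()
                    if x in e and e not in used and k % 2 == 1)
            used.add(e)
            x = e[1] if e[0] == x else e[0]  # step to the other extremity
            path.append(x)
            if x in left:                    # landed on a source: path complete
                left.remove(x)
                break
        paths.append(path)
    return paths

# Toy example on the path graph 0-1-2, with odd current on both edges,
# so the two sources 0 and 2 get paired by the walk 0 -> 1 -> 2:
print(backbone({(0, 1): 1, (1, 2): 3}, [0, 2]))  # [[0, 1, 2]]
```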
Now, exactly as I associated a weight to a current, let me associate a weight — Definition 2.5, "weight". For S a subset of Z^d and γ a path from x to y, I am going to define ρ_S(γ). So, the definition — let me check how I want to write it. It involves the partition function of the Ising model on S, and the following object — you are going to see, this is a little bit strange, but I hope to be able to explain to you what it is. Define γ̄, "gamma bar": γ̄ is going to be γ — my path γ — plus a certain number of extra edges, and these edges will be the following — it is really a pain to define rigorously, so let me make a good drawing. Say this is γ_s, this is γ_{s+1}, and this is γ_{s+2}. When I run the exploration process and I decide that my next step is this edge, in fact I get some information on the parity of the current on certain other edges. Imagine that this edge and this edge have a smaller index than the chosen one — meaning that these vertices here have a smaller index than this one. If the chosen edge is the next step of my backbone, what does that tell me? It tells me that these two edges carry even current: otherwise, one of them, having smaller index, would have been the next step. Same thing over here. So, if you think probabilistically — or physically — when you explore your backbone, you gather two types of information: first, the backbone itself; but you also gather the information that on those edges you are even. And γ̄ is the union of all the edges whose parity is fixed by the event {Γ(n) = γ}: when I tell you that the backbone of the current is equal to γ, I am forcing the parity of a certain number of edges — clearly the edges along γ, which must be odd, but also some edges outside, which must be even. And now you kind of understand what this weight is saying: it says that the edges on γ should be odd, and the other edges of γ̄ — the ones whose parity got fixed — should be even; concretely, ρ_S(γ) is a ratio involving Z_∅(S ∖ γ̄) over Z_∅(S), together with explicit factors enforcing these parities along γ̄. And this is a natural sum and a natural ratio, for the following reason. [Page 16 disappeared — I have never been very good at hide and seek. Here, 16.]

So, why is this weight natural? Because it satisfies the following three properties — and you are going to see, they are not that difficult to derive. Proposition 2.6: the weight has the following properties. First, set γ_1 ∘ γ_2 for the concatenation of two walks: imagine you have a first walk, this is γ_1, and you just concatenate the next one. Notice that each γ_i is an edge-avoiding walk — it is not allowed to use the same edge twice. So set γ_1 ∘ γ_2 for the concatenation of γ_1 and γ_2, when they are concatenated in such a way that the concatenation can be a backbone. Of course, one condition, for instance, is that they should not intersect; but it is not just that: γ_2 should also not be allowed to use one of the edges of γ̄_1, for instance, because those are forced to be even — so it could not be odd a posteriori. But there clearly are walks that you can concatenate. And the first property says the following.
The weight of γ_1 concatenated with γ_2 is equal to the weight of γ_1 times the weight of γ_2 — of γ_2, sorry — computed in S minus γ̄_1. So you have a chain rule like that, and this is one of the reasons why you want to set up the definition of the weights like that. The second property is that ⟨σ_A⟩_{S,β} — say ⟨σ_x σ_y⟩ — is just the sum, over γ pairing the elements of A, of ρ_S(γ): the spin-spin correlation can be written as a sum of the weights of these backbones. And the third property — I am going to write it here, just to avoid splitting the board — is that if S is included in T, then the weight of a backbone is larger in S than in T: the smaller the S, the larger the weight of the backbone. I will display the three properties together below.

So why did I mention this? First because we are going to use it next week, but also because I wanted to draw an analogy with another model of statistical physics, which is called the self-avoiding walk. So here is this construction — but in fact, if you decide on another recipe to identify a collection of paths that pair the sources, if you do it in a non-crazy way, you will also be able to define a backbone there. Probably the γ̄ is going to be a little bit different — for instance, the definition of γ̄ could change; maybe if you choose something at random — say you choose uniformly among all the odd edges that are available — you would get the analogous weights on average. You have different ways of defining backbones, and they will all basically share the same properties, if you do it properly. OK? So here I chose it like that. "If you change the indexing of the vertices, you change the backbone?" You are right — you change the indexing, you change the backbone; γ is the same kind of object, it is just the underlying exploration that changes. "So the weight of a backbone depends on the graph?" On the set of vertices, yes: you have a set of vertices S, you have a path, and you are defining the weight of this path with respect to this set. OK?
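Written out — my reconstruction of the board — the three properties of Proposition 2.6 are:

```latex
\text{(i)}\ \ \rho_{S}(\gamma_{1}\circ\gamma_{2})
        =\rho_{S}(\gamma_{1})\,\rho_{S\setminus\bar\gamma_{1}}(\gamma_{2}),
\qquad
\text{(ii)}\ \ \langle\sigma_{x}\sigma_{y}\rangle_{S,\beta}
        =\sum_{\gamma:\,x\to y}\rho_{S}(\gamma),
\qquad
\text{(iii)}\ \ S\subset T\ \Longrightarrow\ \rho_{S}(\gamma)\ge\rho_{T}(\gamma),
```

with, in (ii), the general version for ⟨σ_A⟩ summing over collections of paths pairing the vertices of A.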
"When you say γ_1 ∘ γ_2 'can be a backbone', you mean it can be constructed in that way?" Ah, it is just that — imagine I take two paths γ_1 and γ_2, and let us say γ_2 uses an edge of γ̄_1: then it is impossible that γ_1 and γ_2 can be concatenated in such a way that the result could be obtained as the backbone of a certain current. And notice that the weight here is the weight of a path: γ here is not the backbone of a given current; I give you a path, and I attribute a weight to it, and this weight is going to be related to the currents whose backbone the path is. "Ah, the weight of a backbone — yes, I understand now." Maybe that is the confusion; I should not have called it like that: it is the weight of a path, and then we relate it to backbones — that naming is indeed confusing. OK — still, let me keep the name "backbone" here, and let me give you a proof of this proposition.

Ah yes, I was in the middle of the analogy with self-avoiding walks. So, edge-self-avoiding walks are paths going from one point to another that never use the same edge twice, and in this model there is a very natural weight that you want to attribute to each walk: x to the number of edges in your walk. OK — if you think of this as the weight of your walk, it exactly satisfies the first property, because the weight of the concatenation of two walks is x^{|γ_1| + |γ_2|}, which is exactly the product of the two. In some sense it does not even depend on S, and the chain rule is an equality precisely because there is no S-dependence: it is just x to the number of edges. And the second property becomes: the two-point function of the self-avoiding walk is the sum of the weights — which, in that case, is just the definition of the self-avoiding walk two-point function. So you can think of this backbone as a slightly more complex version of self-avoiding walk, where the weight is a little bit more weird, but where we are lucky enough to have the properties that are in fact the important properties when you study self-avoiding walks: if you have those properties, you are in good shape. So what I am claiming in this proposition is that the weights really satisfy exactly what you want. "Should γ be contained in S?" Yes, of course — at some point I thought this would be implied by the definition of the weight, but it is not; you are right, that is a good point: let us include γ ⊂ S in the definition. That is a very good point.

OK, so, first property: why do we have this ratio here? Well, if you look upstairs — I am going to do the proof of a lazy man — the weight actually factorizes perfectly; it is defined in such a way that the compatibility makes γ̄_1 disjoint from γ̄_2, and that is exactly what makes everything factorize. For ρ_S(γ_1 ∘ γ_2), you are exactly going to write Z_∅(S ∖ (γ_1 ∘ γ_2)-bar) over Z_∅(S), times the part that factorizes; and then what you can do is just divide and multiply by the term corresponding to γ̄_1. If you do things carefully, you end up exactly with what you want — it is really just a matter of dividing and multiplying by the right thing, really not deep; so let us call it obvious. If you do it carefully, it works. Second property — and this is the link to the backbone, the reason why the name fits, even though a priori it is the weight
of a path. The second property is just the following computation: ⟨σ_A⟩ is the sum, over the possible γ pairing the vertices of A, of the quantity [sum over currents n with ∂n = A of the indicator that Γ(n) = γ, times w_β(n)], divided by the sourceless sum of w_β(n). And when you take a γ which really pairs the vertices of A, this inner ratio is in fact just ρ_S(γ) — it is not very difficult to see that this whole thing here is ρ_S(γ). OK.

And the third thing is even kind of simpler — well, it is not simpler, actually; I do not know why I said that. So, look at ρ_S(γ) over ρ_T(γ). What do I get? The explicit factors cancel, and I end up with Z_∅(S ∖ γ̄) over Z_∅(S), times Z_∅(T) over Z_∅(T ∖ γ̄); and I want to prove that this is larger than or equal to 1. Well, this is due to one very simple observation — but, I mean, a powerful one. How can I rewrite this thing? I am looking at the ratio of two Ising partition functions: T, and T with the set γ̄ removed. Here is how you can think of it — let me maybe write it somewhere else. I can see Z_∅(T ∖ γ̄) as almost the partition function of the Ising model on T with zero coupling constants on the edges of γ̄: when I remove the set, it is like I am setting those couplings to zero and forgetting that there are isolated vertices. But if I re-insert these isolated vertices, I end up with exactly the partition function of the Ising model on T with coupling constants J̄, where J̄_{xy} is 0 if {x, y} is an edge of γ̄ and J̄_{xy} = J_{xy} otherwise — up to a factor of 2 to the number of vertices that got isolated when removing γ̄. I should never prove this thing live; I realize this is how you discourage a whole crowd. So, so far I did nothing smart: I just said that my Ising model on the smaller set can be seen as an Ising model on the big set with zero coupling constants on γ̄; there is just the silly fact that isolated vertices have two possible spins, so there is a factor 2 per such vertex that you need to add.

But why is it good to do it like that? Because now, what is Z_∅(T)? The partition function on T with couplings J can be obtained from the model with couplings J̄ by re-inserting the factors e^{β J_{xy} σ_x σ_y} on the remaining edges: the ratio of Z_∅(T) to the J̄-partition function is the J̄-expectation of the product, over {x, y} ∈ γ̄, of e^{β J_{xy} σ_x σ_y}. This exactly gives me this quantity divided by this one. So why did I do that? Because this last quantity is increasing in T: it is just a sum, over certain possible sets U of vertices, of α_U times ⟨σ_U⟩_{T, J̄}, with α_U ≥ 0; and we will see — clearly not today, next week — that correlations are increasing in the graph. So instead of T, if you have S with S smaller than T, then the expectation for S is smaller than the one for T: the quantity is increasing in T, and so here this one over that one is going to give me larger than or equal to 1, which is what I
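The key rewriting, in symbols (my reconstruction; J̄ kills the couplings on γ̄, and k counts the vertices isolated by removing γ̄):

```latex
Z_{\emptyset}(T\setminus\bar\gamma)\cdot 2^{\,k}\;=\;Z_{\bar J,\emptyset}(T),
\qquad
\bar J_{xy}=\begin{cases}0,&\{x,y\}\in\bar\gamma,\\[2pt] J_{xy},&\text{otherwise,}\end{cases}
```

so that

```latex
\frac{Z_{\emptyset}(T)}{Z_{\emptyset}(T\setminus\bar\gamma)}
=2^{\,k}\,\Big\langle\,\prod_{\{x,y\}\in\bar\gamma}e^{\beta J_{xy}\sigma_{x}\sigma_{y}}\Big\rangle_{T,\bar J}
=2^{\,k}\sum_{U}\alpha_{U}\,\langle\sigma_{U}\rangle_{T,\bar J},
\qquad\alpha_{U}\ge 0,
```

which is increasing in T by Griffiths' inequality, since the coefficients α_U are non-negative.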
wanted. I mean — you see, this is typically the kind of thing that you struggle to understand, that you absolutely want to convey to other people, and then you realize that you did not do a better job than the person who explained it to you the first time. So, it is a trick which is very useful. Remember — for people who have had to work with this type of model, where you have what we call Griffiths inequalities — that this type of ratio of partition functions is monotone in the graph: these ones are increasing in the graph, and the reciprocal ones are decreasing. OK. And, if anything, you could ask: why did he define this weird model with J̄_{xy} = 0? Why did he not look at the reciprocal quantity, which you can clearly see as an Ising model with the normal coupling constants on S, where you just average the product of e^{−β J_{xy} σ_x σ_y}? That looks like a much more natural thing to do — except that when you expand that product, the coefficients α_U have no reason to be positive, and it is absolutely unclear whether the quantity is monotone in S or not. So doing it the other way around gives you this special form where the coefficients are positive — and these quantities appear everywhere; they are really, really useful. So it is quite funny: when you do it the wrong way, you absolutely do not see any monotonicity popping up; when you do it the right way, it comes fairly easily.

OK, so that was the bottom line of the story here: remember that, in some sense, inside a current with source set A there are hidden self-avoiding-walk-type paths pairing the sources, and these hidden walks have weights which really behave a little bit similarly to self-avoiding walk weights. So what is going to be the next step? The next step is that we are going to want to interpret spin correlations — in particular, for instance, say you take the four-point function and you want to compare it with what you would get from a Gaussian process: you want to compare with Wick's law. Well, we are going to try to prove that the difference between the four-point function and what you get from Wick's law can be expressed in terms of intersections of those random walks: you have 4 points, so you have pairings — two walks pairing the points — and we are going to try to interpret the difference in terms of whether they intersect or not. In order to do that, well, we have to go through exactly what is the juice of the random current representation, which is what happens when you look at two currents. So I am just going to state the lemma; I will prove it next week. I am going to talk about duplicated currents, tell you what they are, state what we call the switching lemma, and then, just in the five last minutes, give you a few applications — and maybe a few exercises for next week, to train with that.

So, 2.3: duplicated currents. OK, in fact it is a big word just to say that, from now on, we will work with multiple currents. And one thing that we would like is to get rid of a very annoying feature of the expression of spin correlations in terms of currents: it is a ratio of two weighted sums, but over completely different sets of currents — at the top, currents with sources A; at the bottom, sourceless currents. As a probabilist, this is very annoying: you would like to have the same type of object at the top and at the bottom, and to just say that the ones at the top satisfy a certain property — because then you could say "sourceless currents, conditioned on something happening". The problem is that this is not what we are facing here.
So the following lemma is exactly going to get around this difficulty — I mean, it is going to allow us to play with the sources of the currents. It is called the switching lemma, and it says the following. Consider H ⊂ G, two graphs: you have a first graph G and a certain subgraph H. And take a function F which takes currents on G — on the edges of G — and gives you a real number. Then the switching lemma tells you the following. Take a double sum: a sum over n_1, a current on G with source set A, weighted by w_β(n_1); and a sum over n_2, which this time is a current on H — by the way, you can think H = G if you prefer; I am writing it like that because it is interesting to have this generalization — with source set B. So I am summing over pairs of currents, one on H and one on G, and I am looking at a functional of the sum of the two, F(n_1 + n_2). OK. Well, the switching lemma tells me that I am allowed to switch the sources: I can put the sources of the second current onto the first one. If I put the sources of the second on the first one, that means that now I have the empty source set on the second, and source set A Δ B on the first — for the physicists, sorry, I should have said: Δ is the symmetric difference between the two. And notice, by the way, what the source set of n_1 + n_2 is: if n_1 has sources A and n_2 has sources B, the sum of the two has sources A Δ B — so it is consistent to keep F(n_1 + n_2) on both sides. I will put the full statement in a display just below.

So stated like that, it looks absolutely marvelous: it looks like I am simply allowed to switch sources from one current to the other. In fact, this is not quite true, but what I have to add is fairly easy: I need to add the indicator function that n_1 + n_2 belongs to F_B, where F_B is the set of currents n such that there exists a current k ≤ n with ∂k = B. So I need to be able to find a current, smaller than n_1 + n_2, which pairs the sources in B. In particular, for instance, if B = {x, y} — just two points — what is F_B? I should find a current smaller than n_1 + n_2 with sources x and y, and this is exactly equivalent to saying that there is a path of positive current between x and y. I will redo this type of thing next week, don't worry. So I am definitely not going to prove this now, and I will maybe not even discuss too much what it is; in the five last minutes, I just want to show you a bunch of applications — really direct applications, which I will redo quietly next week — just to tell you that once you have this, really many things follow very easily.
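In symbols, the statement — as I reconstruct the board; in the general case H ⊊ G, the current k in the definition of F_B is additionally required to live on the edges of H:

```latex
\sum_{\substack{\partial n_{1}=A\ \text{on }G\\ \partial n_{2}=B\ \text{on }H}}
 w_{\beta}(n_{1})\,w_{\beta}(n_{2})\,F(n_{1}+n_{2})
\;=\;
\sum_{\substack{\partial n_{1}=A\triangle B\ \text{on }G\\ \partial n_{2}=\emptyset\ \text{on }H}}
 w_{\beta}(n_{1})\,w_{\beta}(n_{2})\,F(n_{1}+n_{2})\,
 \mathbf 1\big[n_{1}+n_{2}\in\mathcal F_{B}\big],
```

where F_B = { n : there exists a current k ≤ n with ∂k = B }.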
So, first application. Take ⟨σ_A⟩², the square of a spin correlation: it is (Z_A/Z_∅)², so it is the double sum over n_1, n_2, both with source set A, of w_β(n_1) w_β(n_2), divided by the same double sum over currents with no sources. But what does the switching lemma tell me? It tells me that I am allowed to switch the sources of the second current onto the first one; and if I do that, since A Δ A = ∅, I get the empty source set for the first current, but also for the second — so at the top I am now summing exactly the same object as downstairs, and the only thing I pick up is the indicator function that n_1 + n_2 belongs to F_A. So what did I do? I now have that the square of the spin correlation can be interpreted as the probability of a certain event — in this case F_A — under a measure which consists in sampling two sourceless currents, each with probability proportional to w_β(n). So I get back on my feet: I am back to the probability of an event. I will really discuss this much more next week, don't worry — I just want to make some advertisement here. Yes — don't worry, I will definitely discuss it more; I will even properly define the probability measures I am considering, because we are going to use them a lot to prove our theorem.

Now imagine I did the same with a product of two different correlations, ⟨σ_A⟩⟨σ_B⟩, like that. It is exactly the same: now n_1 has sources A and n_2 has sources B, while downstairs n_1 and n_2 have no sources — and again, I do not rewrite the w's. Again I am allowed to switch like that, at the cost of adding the indicator that n_1 + n_2 belongs to F_B. But notice one thing: an indicator function only reduces my sum. So if I forget about it entirely, I get that this is smaller; and now — recall n_2 has no sources at the top and at the bottom — I can cancel the n_2 sums, and I am left with the sum over n_1 with source set A Δ B over the sum over sourceless n_1 — and this is just ⟨σ_{A Δ B}⟩. So ⟨σ_A⟩⟨σ_B⟩ ≤ ⟨σ_{A Δ B}⟩. This is what we call Griffiths' inequality, and it comes completely immediately. By the way, you can check, for instance, that if you take H included in G and just apply the same reasoning — you can try it as an exercise before next time; I will redo it — you get ⟨σ_A⟩_H ≤ ⟨σ_A⟩_G: you get automatically that spin correlations are increasing in the graph. A little bit more difficult: if you have a set S and a point 0 in it, I just recommend that you try to prove what we call the Simon–Lieb inequality — which, again, I will re-explain next week — and which says the following: ⟨σ_0 σ_y⟩ is smaller than the sum, over x on the boundary of S, of ⟨σ_0 σ_x⟩_S ⟨σ_x σ_y⟩. Again, an almost immediate application of the switching lemma.

And the last mention I wanted to make — again, I will re-explain all these things, but like that you will at least have a take-home message, for the people who do not plan on coming back next time or cannot come back. Look at U_4(x_1, …, x_4), which is just the correlation of four points minus the sum over the pairings: ⟨σ_{x_1} σ_{x_2} σ_{x_3} σ_{x_4}⟩ minus ⟨σ_{x_1} σ_{x_2}⟩⟨σ_{x_3} σ_{x_4}⟩ minus the two other pairings. So this U_4 would be zero if you were looking at something Gaussian — if you had Wick's law. Well, I recommend as an exercise to try to prove that this is equal to −2 times the four-point function, times a ratio of weighted double sums in which the first current has sources {x_1, …, x_4} and the second current has no sources, with, at the top, the same sums together with the indicator that x_1 up to x_4 are all connected in n_1 + n_2 — "connected" meaning that you can go from any of these points to any other using only edges carrying positive current. I will put the identity in a display just below. So why is this interesting? Because this ratio is precisely the probability, when you sample a current with the four sources together with a second, sourceless current, that the four points all get connected. Because of the sources, you know that the first current necessarily contains paths pairing the four vertices; and there is, in addition, the sourceless current, which is a collection of loops — in fact, even the paths can close up on each other like that. So U_4, the deviation of the four-point function from Wick's law, is measured by the probability that these objects intersect or not: you would basically be at Wick's law if you can prove that the probability of intersection is very small.
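For the record, the exercise identity in display form — with the source bookkeeping as I heard it, which is consistent with a direct switching-lemma computation:

```latex
U_{4}(x_{1},\dots,x_{4})
=-2\,\langle\sigma_{x_{1}}\sigma_{x_{2}}\sigma_{x_{3}}\sigma_{x_{4}}\rangle\,
\frac{\displaystyle\sum_{\substack{\partial n_{1}=\{x_{1},x_{2},x_{3},x_{4}\}\\ \partial n_{2}=\emptyset}}
 w_{\beta}(n_{1})\,w_{\beta}(n_{2})\,
 \mathbf 1\big[x_{1},x_{2},x_{3},x_{4}\ \text{all connected in}\ n_{1}+n_{2}\big]}
 {\displaystyle\sum_{\substack{\partial n_{1}=\{x_{1},x_{2},x_{3},x_{4}\}\\ \partial n_{2}=\emptyset}}
 w_{\beta}(n_{1})\,w_{\beta}(n_{2})}.
```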
And now this resonates with a theorem about random walks, which says that in dimension 5 and more — in fact, in dimension 4 and more — if I take 4 points and look at random walks between these points, the probability that they intersect tends to 0; whereas in dimensions 2 and 3, if the points are at comparable distances, the intersection probability stays bounded away from 0. So you see dimension 4 appearing in this interpretation, and here we have a picture in which the currents are going to play the role of the random walks in this analogy. I will re-explain all of these applications, because I think none of them is trivial if you have not seen the switching lemma before — it is not immediate. But you can try to prove them with combinatorial arguments of your own; I will present the proofs next week, but I recommend that you try a little bit. And once we have all of that, we will be ready to prove our theorem: our theorem is, basically, proving that these backbones behave like random walks do in dimension 4 and more. OK — thank you very much, everyone.