What I'm going to do in this mini-course is give you, not a full proof, but a good idea of a project that we completed with Michael Aizenman a few years ago. I'm going to do it in four steps: there are four classes. By the way, it's not every Wednesday; it's two days, tomorrow and then again Wednesday and Thursday next week. This first class is going to be an introduction, basically: I'm going to motivate the problem from two sides, show you the two sides of the story, and end with an actual statement at the end of the two hours. So it's going to be pretty introductory. In the second class I'm going to tell you about a tool that is going to be super important for the whole argument. It will take us some time, I guess a full lecture, to explain what this tool is; it's called the random current representation, and I will give you a few of its basic properties and show how you can use it. We will practice with it. In the third lecture, next Wednesday, we will really start to prove the theorem. I will do it under an assumption, a kind of regularity assumption on what we call the spin-spin correlations, because things are much cleaner under this assumption, and in fact you expect very strongly that this is the standard behavior of the model. Under this assumption we will get the result completely. Then, depending on time (usually you plan something and go three times slower), I will tell you a little about how one proves this assumption in dimension five and more, and in dimension four I will explain how to go around the fact that you don't know this assumption, or maybe give hints on how one could prove it; as of today we don't know how to prove it.
Okay, so we start at the beginning, with a kind of introduction slash motivation. By the way, if you have questions, please ask; it's not because it's recorded that everybody will know you had a question (well, actually everybody will know, but it's not a problem). The result I'm going to describe is related to two rather distinct ways of looking at statistical physics and mathematical physics. On one hand, it is relevant to constructive Euclidean field theory; in a minute I will tell you what this is (it looks like a big word, and maybe it is, but I will try to tell you what I mean by it). In some sense the theorem I will prove is a no-go theorem from the point of view of the constructive Euclidean field theory program, a negative result, so if it were only for that it would maybe be a little disappointing. But the result is also interesting from the point of view of statistical physics, really the study of statistical physics models at criticality. So today I want to tell you a little about these two sides and, at the end, state the result; that's really the goal for today. We start with the constructive Euclidean field theory aspect.

First, then: a motivation from constructive Euclidean field theory. One of the goals of Euclidean field theory is actually not to construct Euclidean fields for their own sake, but to construct quantum field theories. Physicists in their everyday life like to deal with quantum fields; it's a very powerful tool, but also a very ill-defined one from the point of view of mathematics: it's not easy to make sense of what we mean by a quantum field theory. But in the second half of the 20th century, Osterwalder and Schrader proved that, in some sense, if you want to construct a quantum field theory, you may only need to construct a Euclidean one: a random distribution (a distribution in the mathematical sense, but random) satisfying certain regularity assumptions. So what does it say? It's a kind of recipe: you give me a probability measure on distributions, a way to pick a random distribution, and this theorem of Osterwalder and Schrader tells you how to cook from it a quantum field theory. It uses what we call the Wick rotation: you can think of constructing a function on the real line and extending it in the complex plane to the purely imaginary axis; that's roughly how the thing works. Let me just mention the regularity assumptions. You need the so-called Schwinger functions, which I will define later, to be analytic in each coordinate; you want them to be symmetric under exchanging the coordinates; you want them to be invariant under the symmetries of the space (rotations, scalings, translations, things like that); and you want them to be reflection positive. Those are the four assumptions; they are not the core of this class, so I'm not going to spend time describing them. This Osterwalder-Schrader result is quite useful because it tells you: forget about quantum field theory, quantum physics in general; just try to construct random distributions, and if you manage to construct interesting enough random distributions, then you get interesting enough quantum field theories. In general you want to define these Euclidean field theories, these random distributions, in a smart way. So imagine φ is my random
distribution, and you want that when you apply a function to this random distribution (I will tell you which type of functions you want to apply), the average, morally, looks like

  ⟨F(φ)⟩ = (1/Z) ∫ F(φ) e^{−H(φ)} Π_x dφ(x),

where Z is some normalization. For now everything is going to be a little vague, and that's normal; the goal is to turn this into something less vague later on. The functions F I want to apply will all be of the following form: averages of φ against a test function,

  T_f(φ) = ∫ f(x) φ(x) dx,

and this for every f which is smooth and compactly supported; that tells you what F is here. And H(φ) I want to be of the following form: a quadratic form applied to φ plus a potential part,

  H(φ) = Q(φ, φ) + ∫_{R^d} P(φ(x)) dx

(this is all on R^d, by the way). Now I need to tell you what Q and P are. Q is a positive (in fact reflection positive) quadratic form, and the one you are usually interested in is

  Q(φ, φ) = ∫ |∇φ(x)|² dx.

This is typically what you want. Now, if you are a mathematician at least, you should really be panicking about several aspects here; there are several things that make zero sense, and I'm going to come to that. Second, what is P? P is just an even polynomial. [Question.] Sorry, yes: the large F is a function I'm applying to φ, and I want it to be of this form, an integral of φ against some f which is smooth and compactly supported. Yes, in this case it's linear, you are right. That's a good point, that's maybe level zero; what you can then also do is take products of such observables, which will allow you to get correlations. And yes, P will in general contain quadratic terms; P is really an even polynomial, some sum Σ a_k x^k over even k. Okay. So here one is normally worried for many reasons. First, we take an infinite product, because φ is indexed by R^d (we want a distribution on R^d), so even the product Π_x dφ(x) is not clear. Then, even though φ is a distribution, so not even a function, you still want to do things like taking gradients of φ, or exponentials of quantities expressed in terms of φ. So you have a million problems appearing when I formally set up this kind of thing, and the goal of Euclidean field theory is still to try to make sense of objects like that. Just to make things even worse (this is something physicists will often look at): you can consider T_{f_1}(φ) ⋯ T_{f_k}(φ), or, let's rather do that, moments: take the product over i = 1 to k of T_{f_i}(φ), which is also an observable, and which is not linear. You take this thing, and if you brutally imagine that you can exchange the integrals, this is
going to be expressed as a multiple integral over R^d of f_1(x_1) ⋯ f_k(x_k), and if you are lucky you could expect to express it as

  ⟨Π_{i=1}^k T_{f_i}(φ)⟩ = ∫_{(R^d)^k} f_1(x_1) ⋯ f_k(x_k) S_k(x_1, …, x_k) dx_1 ⋯ dx_k.

Again, there is no a priori reason you can write things like that, but usually you hope it's possible, and this S_k you interpret, in some sense, as ⟨φ(x_1) ⋯ φ(x_k)⟩, the pointwise correlations of your field. It's called a Schwinger function. Again, why this should exist in general, when Q or P are not too trivial, is not clear. Let's give examples. (I'm always going to work with this gradient Q; everything I say generalizes to other quadratic forms, but let's always take this one.) The simplest example you can imagine, example one, is to take P(x) quadratic. Of course you can also add a constant, but notice that a constant does not affect H(φ) in any meaningful way: it just boils down to changing the normalization Z. So if I take something quadratic here, I end up with what we call Gaussian processes: φ will be a generalized Gaussian process, and I will give you one example, the continuum GFF, once I start to define things correctly. The problem with Gaussian processes is the following. When you look at the Schwinger functions of a generalized Gaussian process, there is a very easy way to express S_{2n} in terms of two-point functions:

  S_{2n}(x_1, …, x_{2n}) = Σ_π Π_{i=1}^n S_2(x_{π(2i−1)}, x_{π(2i)}),

where the sum runs over the pairings π of {1, …, 2n}. So you can express the 2n-point function as a sum of products of two-point functions. This is called Wick's formula, and it is problematic; or rather, whether it is problematic depends on what you want to do, but in Euclidean field theory it is problematic, because when you feed Wick's formula into the Osterwalder-Schrader machinery, it spits out on the other side a quantum field theory which is called trivial, in the sense that it describes particles that do not interact, which from the point of view of physics is not interesting. So what you want to avoid as much as possible is constructing random distributions that in the end are just Gaussian processes. It's not completely obvious that you can avoid it, but clearly the first thing to try is to make P a little more complicated, to go to level two: take

  P(x) = a + b x² + c x⁴

(I will drop the a, since again it's irrelevant). So you add a fourth power, and then things a priori get more interesting: you are not a Gaussian process anymore. By the way, the reason you were Gaussian before is that you can absorb the b x² term into the quadratic form in H(φ), and then you really get a Gaussian integral; as soon as you have a λ x⁴ term this is not possible anymore, and a priori you get something more interesting. This is what people call the φ⁴ field theory. But notice that in both cases, even in example one, the object is not obviously defined: we still have the infinite product, we still have the problem that we are looking at a distribution but taking the integral of its gradient squared, and even just writing T_f(φ) we need x ↦ f(x)φ(x) to be integrable over R^d. So be careful: these issues arise already in example one, and it's not clear how to make sense of these things.
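As an aside, the combinatorial content of Wick's formula is easy to check by machine: the number of pairings of 2n points, hence the number of terms in S_{2n}, is (2n)!/(2^n n!) = (2n−1)!!. Here is a small sketch in Python (the helper names are mine, purely for illustration):

```python
from math import factorial

def wick_pairing_count(two_n: int) -> int:
    # Number of perfect pairings of 2n points: (2n)! / (2^n n!).
    # This is the number of terms in Wick's formula for S_{2n}.
    if two_n % 2:
        return 0  # odd correlation functions of a centered Gaussian vanish
    n = two_n // 2
    return factorial(two_n) // (2 ** n * factorial(n))

def wick_pairing_count_rec(two_n: int) -> int:
    # Same count built directly: pair the first point with any of the
    # other 2n - 1 points, then pair up the remaining 2n - 2.
    if two_n <= 0:
        return 1 if two_n == 0 else 0
    if two_n % 2:
        return 0
    return (two_n - 1) * wick_pairing_count_rec(two_n - 2)
```

For instance, the four-point function of a Gaussian field has 3 terms and the six-point function has 15.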
To repeat the difficulties: you are taking the gradient of a distribution, you are taking an infinite product of Lebesgue measures, which is not that nice, and, more importantly, it's not even clear that the integrals are finite. So you need to do something, and the standard way around these difficulties is basically to regularize, to smooth your distribution, to make sense of those objects. So we proceed by regularization, and here I should say there are many different ways of doing it. You can, for instance, smooth out φ, what people refer to as smearing the distribution; you can do a cutoff, for instance a cutoff in Fourier space (as soon as I see Fourier in this context I panic a little, so this is not what we are going to do). What we are going to do here is a cutoff in space: we are going to work on a lattice. So in this class we do a lattice cutoff; I'm going to illustrate it with example one, and then we will see that doing it for example two is more subtle and leads to things that are a little surprising. The idea is to define φ on a finite piece of a lattice. Typically this finite piece is

  Λ_{R,a} = [−R, R]^d ∩ aZ^d.

So you look at a big box (imagine R is very big) equipped with a very fine lattice: the box is huge, of side length 2R, and the mesh size a of the lattice is tiny. You think of this as providing two cutoffs. You define φ only on the vertices, so when you want, for instance, to take the gradient of φ, you replace it by the difference of φ at two neighboring vertices: a is a cutoff that lets you make sense of gradients, of infinitesimal fluctuations, while R, on the contrary, guarantees that you do not work with infinite integrals or anything like that; everything becomes a sum over finitely many vertices. To give you the buzzwords: physicists refer to a as the ultraviolet cutoff (it takes care of fluctuations at small scales) and to R as the infrared cutoff (it takes care of the large-scale problems). So far I only said that I want to define φ(x) on a finite piece; now let me tell you what the average means in this context. (I'm happy when people don't ask me questions on this side, because this is not my area of expertise; I'm happy if I manage to convince you that everything is flawless.) So we look at φ(x) with x now ranging over Λ_{R,a}. For H(φ) we do something slightly different: as a statistical physicist, I prefer to separate the quadratic coupling from the on-site part, so the Hamiltonian is

  H(φ) = − Σ_{x,y} J_{x,y} φ(x) φ(y),

where now everything is well defined. This is again a quadratic form in the φ(x); I don't include the φ(x)² terms here, you will see that I put them in the second piece, but this is really a discrete version of the quadratic form. Now, what is the average of F against φ? (Notice I use this curly φ for the discrete field.) It is indexed by the graph on which we work, and, because it will be practical later, I also introduce a parameter β:

  ⟨F(φ)⟩_{Λ_{R,a}, β} = (1/Z) ∫_{R^{Λ_{R,a}}} F(φ) e^{−β H(φ)} Π_{x ∈ Λ_{R,a}} dρ(φ(x)),

where Z is a normalization I will define in a second; this time the integral makes perfect sense, an integral over R^{Λ_{R,a}} with a finite product of single-site measures ρ that I describe next.
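To make the lattice cutoff concrete, here is a small Python sketch (the helper names are mine) of the box Λ_{R,a}, with vertices stored as integer tuples so that the physical point is a·x, and of the Hamiltonian H(φ) = −Σ J_{x,y} φ(x)φ(y) in the nearest-neighbor case J_{x,y} = 1 for x ∼ y, which is the choice we stick with below:

```python
import itertools

def lattice_box(R: int, d: int):
    # Vertices of [-R, R]^d ∩ Z^d as integer tuples; with mesh size a,
    # the physical point represented by the tuple x is a * x.
    return list(itertools.product(range(-R, R + 1), repeat=d))

def hamiltonian(phi: dict, d: int) -> float:
    # H(phi) = - sum over nearest-neighbor pairs {x, y} of phi(x) phi(y),
    # i.e. J_{xy} = 1 iff x ~ y; each unordered edge is counted once by
    # only looking at the +e_i neighbor in every coordinate direction.
    H = 0.0
    for x in phi:
        for i in range(d):
            y = x[:i] + (x[i] + 1,) + x[i + 1:]
            if y in phi:  # free boundary: only edges inside the box
                H -= phi[x] * phi[y]
    return H
```

On the 3 x 3 box in d = 2 there are 9 vertices and 12 edges, so the constant field φ ≡ 1 has H = −12.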
So ρ is a measure on R, and typically I want it to be

  dρ(y) = e^{−P(y)} dy.

Think of our two examples: in example one it is a Gaussian measure, something like that, and in example two it is a slightly more exotic object (maybe I have some minuses pointing the wrong way, but something like that). You agree these are measures, and if λ is positive (example two) or b is positive (example one), they are finite measures on R. I take a product of finitely many of those, which makes perfect sense; H(φ) is a well-defined quantity; I take the exponential and integrate against F; all of this makes perfect sense. So what's the game? The game is that we want to be in the continuum, we want to make sense of those continuum objects, so we need to let R go to infinity, removing the infrared cutoff, and let a go to zero, removing the ultraviolet one. Can one let R → ∞ and a → 0? This is the rule of the game, but we have some cards in our hands: we are allowed to vary β, and, in our examples, to vary λ and b as we send R to infinity and a to zero; so β, b, and λ can vary. And let me add one more thing: I am also allowed to rescale φ, say by a factor ε. So the game, stated like that, is: I let a → 0 and R → ∞; I am allowed to move β, b, λ in a sufficiently good way, and maybe to rescale φ in a good way; and I ask whether I can construct something non-trivial out of this. First, can I construct these generalized Gaussian processes, and second, can I do something more interesting with φ⁴?

Here I wanted to do this in a general context, but let's not complicate our lives too much for the purpose of this class: I set J_{x,y} = 1 if x ∼ y (a way of saying x and y are neighbors) and 0 otherwise. One could imagine different choices of J_{x,y}: more complicated finite-range ones, say depending on the box of size 100 around you, or J_{x,y} decaying polynomially, which is actually quite relevant; but there the story is simpler than in the nearest-neighbor case. For us, in our story, the nearest-neighbor case J_{x,y} = 1 is actually the worst one, so we stick with it; it's already sufficiently interesting.

Very well. Let me now tell you more about this game in the case of example one, so that we have one clear example where things work fairly well. So, back to example one, the game for example one; the probabilists will maybe not be too surprised, because this is something we know fairly well, but still let me redo it. In example one, λ = 0; β is not so important, so let me set it to 1/(2d), or maybe 2d (there is zero chance I get the constant right, but there is a way to get it right, just not through me); and there remains the parameter b, which is the interesting one. For now let me write it as b = 1 + m; it's just another way of writing b. So I am on Λ_{R,a}, and let me recall the formula: the average of F(φ) is (1/Z) times the integral, where, by the way, I forgot to tell you what Z is. Z is the normalization that guarantees that the average defines a probability measure; it is just what you get for F = 1,

  Z = ∫_{R^{Λ_{R,a}}} e^{−β H(φ)} Π_x dρ(φ(x)).

And here I want to rewrite my βH, together with the on-site terms, in a different way: the full exponent becomes

  − (1/(2d)) Σ_{x ∼ y} (φ^{a,R}(x) − φ^{a,R}(y))² − m Σ_x φ^{a,R}(x)²

(the field here is φ^{a,R}, the one with both cutoffs; trust me, this is just a rewriting of what was upstairs with these parameters). Why is this good? Because here you recognize a Gaussian integral, and this whole sum can also be written as

  Σ_x φ^{a,R}(x) (Δ_m φ^{a,R})(x),

where Δ_m is a massive Laplacian,

  (Δ_m φ)(x) = (1/(2d)) Σ_{y ∼ x} φ(y) − (1 + m) φ(x).

I am just doing simple manipulations around the definition, but written like that I immediately get that φ is a Gaussian process whose covariance is given by the inverse of the massive Laplacian, which is the massive Green function. And here there is a perfectly good notion of Schwinger function; since we are Gaussian, I am going to stick to two points. For this measure, because of the Gaussian structure, the correlation ⟨φ(x)φ(y)⟩ is just the massive Green function between x and y.
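This rewriting is easy to sanity-check numerically. The sketch below (my notation, on a periodic ring of N sites, so d = 1 and every vertex has 2d = 2 neighbors) verifies that (1/(2d)) Σ_{x∼y} (φ(x)−φ(y))² + m Σ φ(x)² equals −Σ φ(x)(Δ_m φ)(x) exactly:

```python
def massive_laplacian(phi, m, N):
    # Discrete massive Laplacian on a ring of N sites (d = 1, so 2d = 2):
    # (Delta_m phi)(x) = (1/2d) sum over neighbors y of phi(y) - (1+m) phi(x)
    return [0.5 * (phi[(x - 1) % N] + phi[(x + 1) % N]) - (1 + m) * phi[x]
            for x in range(N)]

def quadratic_form(phi, m, N):
    # (1/2d) * sum over unordered nearest-neighbor pairs of (phi_x - phi_y)^2
    # plus m * sum of phi_x^2 (here 1/2d = 1/2; edges are x -- x+1 mod N).
    grad = sum((phi[x] - phi[(x + 1) % N]) ** 2 for x in range(N)) / 2.0
    mass = m * sum(v * v for v in phi)
    return grad + mass

def energy_via_laplacian(phi, m, N):
    # - sum_x phi(x) (Delta_m phi)(x): the same quadratic form, rewritten.
    lap = massive_laplacian(phi, m, N)
    return -sum(phi[x] * lap[x] for x in range(N))
```

The two functions agree up to floating-point rounding for any field on the ring, which is the integration-by-parts identity used above.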
Let me tell you what this is: G_m^discrete is the inverse of the massive Laplacian, that is, the function which, viewed as a function of its second coordinate, satisfies

  (Δ_m G_m^discrete(x, ·))(y) = 0 if y ≠ x, and = −1 if y = x

(again, I may be off by a sign; let's say 1, there are too many signs in math). Why write things this way? Because these massive Green functions, even the discrete ones, are actually quite easy to study, and you can get their asymptotic behavior. First thing: one can let R go to infinity as soon as d is larger than two, so in dimension three and, in particular, in dimension four. By the way, I should have said this immediately: in this perspective on constructive Euclidean field theory, if you want to relate to quantum field theory, the natural dimension is four, because you have three space dimensions and one time dimension. So we are not going to bother with what happens in dimension two, or even three, interesting as they are; we always think of dimension four. In dimension four, letting R go to infinity costs nothing: you can remove the infrared cutoff. The second interesting thing, for the probabilists (actually for everybody): G_m^discrete(x, y) can be seen as follows. You have a random walker on your graph that starts from x, and G_m^discrete(x, y) is the expected number of visits it makes to y, where at every step the walker dies with rate m. Now, if you let a go to zero, the two points get farther and farther apart in lattice units, and, because you die at a fixed rate, if this rate is too big you are going to decay very fast.
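Here is a sketch of these two descriptions of the discrete massive Green function, in Python on a small ring. The normalizations are mine: with P the transition matrix of simple random walk, I take G_m = ((1+m)I − P)^{-1}, which coincides with the killed-walk series Σ_{k≥0} P^k/(1+m)^{k+1}, the expected (discounted) number of visits:

```python
def transition_matrix(N):
    # Simple random walk on a ring of N sites: P[x][y] = 1/2 iff y ~ x.
    P = [[0.0] * N for _ in range(N)]
    for x in range(N):
        P[x][(x - 1) % N] = 0.5
        P[x][(x + 1) % N] = 0.5
    return P

def green_by_inverse(N, m):
    # G_m = ((1+m) I - P)^{-1}, computed by Gauss-Jordan elimination
    # (no pivoting needed: the matrix is strictly diagonally dominant).
    P = transition_matrix(N)
    A = [[(1 + m) * (i == j) - P[i][j] for j in range(N)] for i in range(N)]
    G = [[float(i == j) for j in range(N)] for i in range(N)]
    for c in range(N):
        piv = A[c][c]
        A[c] = [v / piv for v in A[c]]
        G[c] = [v / piv for v in G[c]]
        for r in range(N):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
                G[r] = [a - f * b for a, b in zip(G[r], G[c])]
    return G

def green_by_walk(N, m, K=400):
    # Killed-walk series G_m = sum_{k>=0} P^k / (1+m)^{k+1}: the expected
    # number of visits of a walk that dies at rate m, truncated at K steps.
    P = transition_matrix(N)
    term = [[float(i == j) / (1 + m) for j in range(N)] for i in range(N)]
    G = [row[:] for row in term]
    for _ in range(K):
        term = [[sum(term[i][l] * P[l][j] for l in range(N)) / (1 + m)
                 for j in range(N)] for i in range(N)]
        G = [[G[i][j] + term[i][j] for j in range(N)] for i in range(N)]
    return G
```

The two computations agree, the Green function is symmetric, and it decays with the distance between the two points, as the killed-walk picture suggests.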
So here it becomes clear that you want to rescale the rate with a, and typically we take m of order a² m*: I will not change m* anymore, but as a tends to zero I rescale m. When you do that, what you can see is that

  a^{2−d} G_m^discrete(x, y)

converges as a tends to zero (so now we are really on aZ^d) to the continuum massive Green function, which solves the same equation with the discrete Laplacian replaced by the continuum one. Very good. If you do that, you realize you have played exactly the game we wanted to play: you took the two-point function (it is a Schwinger function), and if you rescale m and the field φ, you end up with what you want. So if you take m = a² m* and rescale the field by a^{(2−d)/2} φ(x), then the Schwinger function, call it S_2^{a,R}(x, y), or S_2^a(x, y) once I remove the R, converges to a limiting object, which in this case is very explicit: this massive Green function. Okay, so that was for the Schwinger functions; that's one way of seeing that you can make things converge. The other way is to look directly at the average against a function: you can take T_f(φ_a). And how can you study the convergence of an object like that? It is a random variable, and a way to check that it converges in law is to look at its characteristic function. If you look at this quantity in our context (let me do it completely), it is

  ⟨e^{z T_f(φ_a)}⟩ = Σ_{n≥0} (z^n / n!) ⟨T_f(φ_a)^n⟩;

I just expanded. The average is zero when n is odd (sorry, I forgot this at first, I was wondering what was happening), so only even n = 2k contribute; and by Wick's formula the 2n-th moment is easy: it is ⟨T_f(φ_a)²⟩^n times the number of pairings of 2n points, namely (2n)!/(2^n n!). When you write it like that, the (2n)! cancels against the 1/(2n)! from the expansion, and you are left with

  Σ_{n≥0} (z² ⟨T_f(φ_a)²⟩ / 2)^n / n! = e^{(z²/2) ⟨T_f(φ_a)²⟩},

which says something true for every Gaussian process: this linear average of φ is a centered Gaussian random variable with variance ⟨T_f(φ_a)²⟩. And this variance is nicely written in terms of the Schwinger functions: expanding the square, I get a double sum,

  ⟨T_f(φ_a)²⟩ = Σ_{x,y} f(x) f(y) ⟨φ(x) φ(y)⟩ = Σ_{x,y} f(x) f(y) G_m^discrete(x, y),

with exactly the covariance I defined before. Again, if I renormalize properly I should end up with something good: I said there that I want m = a² m*, and I want to make an a^{2−d} appear in front of G_m^discrete, because that combination is the one that converges.
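Just to see this resummation once in a toy case: for a single centered Gaussian of variance σ², the moments given by the Wick pairing count resum to exp(z²σ²/2). A quick Python check (the function names are mine):

```python
from math import factorial, exp

def gaussian_moment(two_n, sigma2):
    # E[X^{2n}] for X ~ N(0, sigma2): sigma2^n * (2n)! / (2^n n!),
    # i.e. variance^n times the Wick pairing count; odd moments vanish.
    if two_n % 2:
        return 0.0
    n = two_n // 2
    return sigma2 ** n * factorial(two_n) / (2 ** n * factorial(n))

def mgf_from_moments(z, sigma2, N=40):
    # Partial sum of E[e^{zX}] = sum_n z^n E[X^n] / n!; the even terms
    # rearrange into sum_n (z^2 sigma2 / 2)^n / n! = exp(z^2 sigma2 / 2).
    return sum(z ** n * gaussian_moment(n, sigma2) / factorial(n)
               for n in range(N + 1))
```

With forty terms the partial sum already matches exp(z²σ²/2) to high precision for moderate z.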
Now, if I want this whole thing to converge, I see a Riemann sum, so I should rescale T_f(φ) in such a way that the sum converges to the corresponding integral. I have the a^{2−d} from the Green function, and the natural factor to have in front of the double sum is a^{2d}, to get my Riemann sum right: I am summing over pairs of points, with roughly 1/a^d points in each variable. So instead of T_f, I look at a rescaled version of it. Where did I define T_f? I only defined it in the continuum, so let me define it once in the discrete:

  T_f(φ_a) = Σ_{x ∈ Λ_{R,a}} f(x) φ(x),

and I rescale it so that the variance converges. Normally, if I didn't screw things up (which I probably did), I should rescale by a^{(d+2)/2}: putting a factor a^{(d+2)/2} on the field, the variance picks up a^{d+2} = a^{2d} · a^{2−d}, and indeed

  a^{d+2} Σ_{x,y} f(x) f(y) G_m^discrete(x, y) = a^{2d} Σ_{x,y} f(x) f(y) [a^{2−d} G_m^discrete(x, y)]

converges, as a tends to zero, to the double integral

  ∫∫ f(x) f(y) G_{m*}^{continuum}(x, y) dx dy.

You see, earlier I got the scaling wrong, because this is the right one, so I probably did something wrong there; but the take-home message is: if I rescale like that, which corresponds to rescaling φ(x) by a^{(d+2)/2}, then T_f(φ_a) is a centered normal random variable whose variance converges in the limit to a well-defined variance, namely this double integral. And it happens that there is indeed a generalized Gaussian process with this covariance: it is called the massive Gaussian free field.
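The Riemann-sum step, a^{2d} Σ_{x,y} f(x) f(y) K(x, y) → ∫∫ f(x) f(y) K(x, y) dx dy, can be illustrated numerically. Below is a toy check in d = 1, where the continuum massive Green function with m = 1 is K(x, y) = e^{−|x−y|}/2, the kernel solving (−d²/dx² + 1)G = δ; the bump f and the helper names are mine:

```python
from math import exp

def f(x):
    # A smooth bump supported in [-1, 1] (a hypothetical test function).
    return (1 - x * x) ** 2 if abs(x) < 1 else 0.0

def green_1d(x, y):
    # Continuum massive Green function in d = 1 with m = 1: exp(-|x-y|)/2.
    return exp(-abs(x - y)) / 2.0

def riemann_double_sum(a):
    # a^{2d} * sum over x, y in a*Z ∩ [-1, 1] of f(x) f(y) G(x, y);
    # here d = 1, so the Riemann-sum prefactor is a^2.
    n = int(1 / a)
    pts = [a * k for k in range(-n, n + 1)]
    return a * a * sum(f(x) * f(y) * green_1d(x, y) for x in pts for y in pts)
```

Refining the mesh brings the double sum closer to the double integral, which is exactly the convergence of the variance above.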
this is actually a construction of it so you call the generalized Gaussian process i think i understand now where i lost this here i lost the press the press one i probably lost it from the fact that we have to go from a sum to an integral also in the h and this is where i lost it but it's i hope that it was clear when you look at it with t f a that you need to rescale like that so you call the generalized Gaussian process that you obtain like that it's a massive Gaussian which you abbreviate Gaussian free field by g f there are other ways of defining the g f f more direct ways directly working on a Hilbert space in a proper fashion but this is one of the intuitive ways which is to do the cutoff and to uh to end up with a limiting process okay so the go home message on this part was that at least when you are in example one so when you are Gaussian when when p when p is quadratic sorry you have a way to rescale properly the phi and the m in order to have that the quantities converge and you obtain a limiting process okay where the question here is can you do the same when p is not quadratic and in particular when you have a quadratic yeah and i question the the convergence of x x y it's defining in this way in the way that the coordinate the exponential points sorry i'm not sure i understand the question you mean for s2 yeah it's point-wise convergence there we are lucky enough that it's it converges it's not at all clear that it should exist so s2 has no reason to exist that's why i wanted to do this other approach which is to look directly at the smeared averages because that is the general thing you want to be considering because there you need to be when you speak of the convergence of a field to another of a random distribution to another one you need to to mean something and one thing that you can mean is that the smear averages has random variable converge for every f which is smooth so the value of s of x y a in x and y is s of the nearest x yeah exactly 
Exactly — you can make sense of that, but again, it is in some sense an anomaly that you can make sense of pointwise estimates: a priori you are looking at a random distribution, so you really want to smooth things out. And probably I should not have done that, because there — I think you will agree — the proper rescaling for φ is less clear; I muddled things there, and I should have worked directly with the smeared averages. Okay, it has been an hour already that you are my victims. [An exchange about d = 2.] Yes — that is why you cannot take that limit for d = 2. You could, yes — and by the way, that makes me think that indeed here it should be m*. Okay, so that is a good point, because it is exactly the end of the first part. Now I think you understand the game: you take the more complicated thing, with this λφ⁴ term, in the discrete, and you want to take a limit — can you do it, and what do you obtain? That is the first story. The second part of the story will be to forget about all that and to look at this type of object from the point of view of statistical physics; this I will do in the second half. But let us take a break, because otherwise — I want to give you the opportunity to fly away and never look back. Let's take a seven-, eight-minute break and start again at ten past three. ... Let's restart. First, I found my a² — where did it go? The reason it is +2 rather than −2 is that you are looking at discrete gradients and you want to turn them into true gradients, so you need to rescale by 1/a; that is where it comes from. But I think everybody will agree that I should just have skipped that and worked with the smeared averages. Okay. Second point of view: as I said, we abandon completely this
question of constructing random distributions, and we look at the model from the statistical physics side. So, 1.2: the statistical physics perspective. I am going to start bluntly with an a priori completely different model, maybe one of the most famous models of statistical physics: the Ising model. So what is the Ising model? You take a graph G — a priori any graph, but really think of a finite piece of Z^d, typically exactly the set I described before. A configuration of the Ising model is an element σ of {−1, +1}^(V(G)): every vertex of G carries a variable, and this variable is either +1 or −1. This variable has the interpretation of what we call a spin. The Ising model is a catastrophically bad model for ferromagnetism — that is why everybody says it is a model for ferromagnetism, but it is a terrible one. And it is named after Ising, who made things even worse by drawing a wrong prediction about the model itself — so it is doubly bad: you should not call it the Ising model, and you should not say it is a model for ferromagnetism. But, well, life isn't fair; that's how things go. The interpretation for ferromagnetism is that you have a magnet whose components are the vertices, with a spin at every vertex, and the point is that they act like small magnets, pointing north or south. Okay. So once I have that, I tell you what the measure associated with the Ising model is: the average of a function f — again with this parameter β — is 1/Z times the sum over σ in {−1, +1}^(V(G)) of f(σ) e^(−βH(σ)), where H(σ) = −
Σ_{x,y} J_{x,y} σ_x σ_y — so something extremely close to what I had before, except that the spins take values in {−1, +1}. And again, exactly as above, J_{x,y} is 1 if x neighbors y and 0 otherwise; it is a nearest-neighbor model. A few remarks on the model. First remark — I already mentioned it — it is a bad model for ferromagnetism, but it is actually a good model for all sorts of cooperative phenomena. Why? Because if you think about it, H(σ) is smaller when more neighboring spins agree, and therefore the weight of σ is larger the more agreement there is: it is really something that favors agreement, and that is why it is good for cooperative phenomena — in particular for modeling binary alloys. It is not that bad for ferromagnetism if you assume that the small constituents of your magnet really point in only two directions; that assumption is what is really wrong — normally it is not the case, and you need quantum physics, not something semi-classical like the Ising model, to explain how a magnet behaves. There is a much better model, the quantum Heisenberg model, which replaced the Ising model for ferromagnetism very early on. Still, it is a very useful model, one of the most famous of statistical physics. It was introduced by Lenz in 1920 — as I said, it was not introduced by Ising — and Ising studied it in his PhD in 1925. It is one of the most important models exhibiting a phase transition, and that is what I want to tell you about. In order to talk about the phase transition I need to make things a little more complex. Even though it is a bad model for ferromagnetism, it is actually very practical to think of it that way.
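To make the definition concrete, here is a minimal brute-force sketch of this Gibbs average by exact enumeration on a tiny graph — the helper name `ising_average` and the 4-cycle example are mine, not from the lecture, and this is purely illustrative (enumeration is exponential in the number of vertices):

```python
import itertools
import math

def ising_average(edges, n, beta, f):
    """Gibbs average <f> = (1/Z) * sum_sigma f(sigma) * exp(-beta * H(sigma)),
    with H(sigma) = -sum_{(x,y) in edges} sigma_x * sigma_y,
    computed by brute-force enumeration over all 2^n spin configurations."""
    Z, acc = 0.0, 0.0
    for sigma in itertools.product([-1, 1], repeat=n):
        # exp(-beta * H) = exp(+beta * sum of agreements along edges)
        w = math.exp(beta * sum(sigma[x] * sigma[y] for x, y in edges))
        Z += w
        acc += f(sigma) * w
    return acc / Z

# A 4-cycle (a tiny piece of Z^2); the two-point function <sigma_0 sigma_1>
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
corr = ising_average(edges, 4, beta=0.5, f=lambda s: s[0] * s[1])
```

At β = 0 the spins are independent and the two-point function vanishes; for β > 0 it is strictly positive and increases with β — the weight favors agreement, as just discussed.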
So what I am going to add is an external magnetic field: imagine σ is a configuration of my magnet and I put it in an external magnetic field pointing, say, north — in the direction +1 — which should push each spin to favor being +1 rather than −1. So I add here a term of that kind, and I end up with an Ising model that depends on β and on h. I can then look at the magnetization of my magnet, which roughly speaking is the mean spin: define m_G(β, h) as the average of (1/|V(G)|) Σ_{x ∈ G} σ_x — you will agree this is the mean spin of my magnet. And there is a phase transition, in the following way. Step one: let G tend to Z^d — take a sequence of graphs growing to Z^d; then one can prove that this quantity converges: as you take larger and larger graphs, the mean spin converges. This we will actually prove, because we will prove that if you take G to be a big box, this quantity is in fact almost increasing, so it increases to a certain value. That is the first thing: it tells you that you can define the magnetization m(β, h) of an infinite-volume magnet. Step two is to look at what happens when you remove the magnetic field: let h tend to 0. Then m(β, h) converges to a quantity depending only on β, called the spontaneous magnetization m*(β). So m(β, h) is the magnetization in an external field h, and m*(β) is the spontaneous magnetization.
The phase transition is the following: this spontaneous magnetization is zero for certain values of β and strictly positive for others. The interpretation is that you have a magnet in an external field, you remove the field, and the question is whether it keeps a spontaneous magnetization or loses it. In fact there is a theorem: if d ≥ 2, there exists a critical value β_c ∈ (0, ∞) such that m*(β) = 0 if β < β_c and m*(β) > 0 if β > β_c. For those used to it, β has the interpretation of an inverse temperature, so: if the temperature is too high, the spontaneous magnetization is zero; below a certain critical temperature, it is strictly positive — which is what you learn in elementary school. [Question: in step one, mustn't there be some conditions on G?] Yes — if I want to be truly rigorous, the boundary should not be too big compared to the volume; it depends what you mean by "tending to Z^d", but the sequence should include more and more of the lattice, and certainly if you take the box of size n it works. Okay. So there is a spontaneous magnetization — a phase transition — and the game for statistical physics this time is not to construct a field theory or anything like that; it is to understand what happens through this phase transition: how things evolve as β crosses it, and in particular what happens at β = β_c. This time the game is different: what happens at β = β_c? And we are lucky that for the Ising model, what happens at β_c is interesting. There is a first result, due to Onsager in dimension two, and to Aizenman, myself and Sidoravicius in dimension three and more.
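In display form, the two-step construction and the phase-transition statement just made read (my transcription of the board):

```latex
m(\beta, h) \;:=\; \lim_{G \nearrow \mathbb{Z}^d}
\Big\langle \tfrac{1}{|V(G)|} \textstyle\sum_{x \in V(G)} \sigma_x \Big\rangle_{G,\beta,h},
\qquad
m^*(\beta) \;:=\; \lim_{h \searrow 0} m(\beta, h),
```

and, for d ≥ 2, there exists β_c ∈ (0, ∞) such that

```latex
m^*(\beta) = 0 \;\;\text{for } \beta < \beta_c,
\qquad
m^*(\beta) > 0 \;\;\text{for } \beta > \beta_c .
```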
That result says that m*(β_c) = 0. It means that if you draw m*(β), it looks like a continuous function: it is zero up to the critical temperature and then it takes off. This is particularly interesting because, at a continuous phase transition, physicists predict that the large-scale properties of the system at the critical point are quite interesting. So what I truly mean by "what happens at β_c" is: can I study, in particular, the correlations? If I take points x_1, x_2, …, x_{2n} — I do not want to rescale here; these points are fixed — can I understand the limit when I look at larger and larger configurations of points roughly positioned like that with respect to each other: what happens to ⟨σ_{Lx_1} ⋯ σ_{Lx_{2n}}⟩ as L tends to infinity, at the critical point? That is typically the type of question a statistical physicist would like to understand: how the correlations evolve at larger and larger scales. This is important — and it is something people sometimes overlook a little — because as a statistical physicist you want to explain something relevant to physics, but for continuous phase transitions you will never manage to tune an experiment to sit exactly at the critical point; at least for this type of system it is impossible. So what you really want to understand is what happens near the critical point. But there is a miracle of statistical physics: what happens near the critical point is deeply related to what happens exactly at the critical point, and that is why theoretical physicists and mathematicians study exactly the critical point. In particular, you have what we call scaling relations, which enable you to compute critical exponents of your system away from criticality in terms of critical exponents at
the critical point — in particular, in terms of how these correlations behave as L goes to infinity. Okay, so that was the second perspective. It is shorter, but don't worry, we are going to embrace this one: later we will talk only about it. I wanted to show that the question is a priori a little different, but clearly there is a link between the two — I think you will agree it is a pretty direct one — and I want to illustrate it now. So, 1.3: the link between the two approaches. There are several components. The first link is between the models: on one side I have the Ising model, on the other side the φ⁴ lattice models I defined. What is the link? First, the Ising model is in fact a φ⁴ lattice model in disguise. Why? Remember that the φ⁴ model had this product over x of dρ(y) = e^(−λy⁴ + by²) dy. If you set b = 2λ — and notice that you can rescale your measure by a constant, it costs nothing — then this is nothing else than a constant times e^(−λ(y² − 1)²) dy. Now, as λ goes to infinity, this is a measure that favors y close to ±1, and in the limit it becomes exactly the average of the two Dirac masses at +1 and −1. So with b = 2λ and λ → ∞, you recover the Ising model. So it is not exactly a φ⁴ lattice model, because it arises as a limit when the parameters go to infinity, but really, nothing degenerates there: you can realize the Ising model as a limit of φ⁴ lattice models where everything works well.
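One can watch this degeneration happen numerically. Below is a rough sketch — the helper name `rho_moment`, the cutoff and the grid size are my choices — that computes moments of the single-site measure ρ_λ(dy) ∝ e^(−λ(y²−1)²) dy by a midpoint Riemann sum; as λ grows, the even moments approach those of (δ_{−1} + δ_{+1})/2, namely 1:

```python
import math

def rho_moment(lam, k, grid=200000, cut=3.0):
    """k-th moment of rho_lam(dy) ∝ exp(-lam * (y^2 - 1)^2) dy,
    approximated by a midpoint Riemann sum on [-cut, cut]."""
    h = 2.0 * cut / grid
    Z, m = 0.0, 0.0
    for i in range(grid):
        y = -cut + (i + 0.5) * h
        w = math.exp(-lam * (y * y - 1.0) ** 2)
        Z += w
        m += (y ** k) * w
    return m / Z

# For lam large the measure concentrates on {-1, +1}: E[y^2] and E[y^4] tend to 1.
m2 = rho_moment(50.0, 2)
m4 = rho_moment(50.0, 4)
```

The odd moments vanish by symmetry, exactly as for the Ising spin, and the deviation of the even moments from 1 shrinks as λ increases.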
So Ising is just a subcase, somehow, of the φ⁴ lattice models. But let me mention that the story goes both ways: the φ⁴ lattice models are actually very closely related to the Ising model. In fact, the φ⁴ lattice models belong to what we call the Griffiths–Simon class of models. What are these models? They are models for which H — let me not call the field σ or φ but τ — is H(τ) = −Σ J_{x,y} τ_x τ_y; both Ising and φ⁴ satisfy this. But the important property of the Griffiths–Simon class is that the single-site measure ρ attached to each vertex has a special structure — let me try not to garble it this time: ρ should be writable as a sum over σ̄ ∈ {−1, +1}^N (where N can be anything) of exp(Σ_{i,j=1}^N K_{ij} σ̄_i σ̄_j) times the indicator that τ equals Σ_j q_j σ̄_j. What does this mean? It means you want ρ to have the following very special structure. Imagine I have a graph; saying that ρ has this structure can be interpreted like this: you replace every vertex by a complete graph with N vertices in it, and you make those vertices interact with coupling constants K_{ij} — sorry, that was a completely random transition; I am not going to redo it because I would fail a second time — K_{ij}, okay. So you are in fact looking at an Ising model where you blow up every vertex into N vertices interacting through these K_{ij}. And the model you are looking at is the one where you forget the spin of each of the N vertices: for each vertex, you just keep a weighted average of these guys. Okay, so there is a special
case, N = 1, where you recover the Ising model: every vertex is blown up to a single vertex, and τ_x is σ_x. But you can imagine blowing up each vertex to two vertices and taking the average of the two, for instance, or to N vertices with some weighted average of them. By the way, these blown-up vertices also interact naturally across edges: through the J_{x,y}, if you replace τ_x by Σ_j q_j σ̄_{x,j}, you end up with an interpretation where each of them interacts with the neighbors. So the important thing is that when you look at a model in the Griffiths–Simon class just through the τ's, you can forget that you are looking at an Ising model; but you can also think: in fact I am looking at an Ising model on a more complicated graph, and there everything I know about the Ising model is valid. In some sense, if you have a good enough understanding of the Ising model, then understanding the Griffiths–Simon class is just a matter of extending the results you proved on Z^d to this weird graph where every vertex is blown up into a complete graph. And if you are smart enough, you will manage to do it not only for these models but also for their limits: the Griffiths–Simon class contains not only these models but also their limits as N goes to infinity. Well, φ⁴ belongs to this class: you have a little freedom — you can choose the K_{ij} and the q_j — and if you choose them in a smart fashion, you end up with the φ⁴ lattice model. So from now on I am going to focus on Ising, but you can trust me that the techniques I will describe also work in this strange context where each vertex is blown up into a complete graph, and therefore they work for φ⁴.
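Since the blackboard formula went by quickly, here is my cleaned-up transcription of the Griffiths–Simon condition on the single-site measure: for some N, internal couplings K_{ij} ≥ 0 and weights q_j,

```latex
\rho(\tau = t) \;\propto\;
\sum_{\bar\sigma \in \{-1,+1\}^{N}}
\exp\!\Big( \sum_{i,j=1}^{N} K_{ij}\, \bar\sigma_i \bar\sigma_j \Big)\,
\mathbf{1}\Big\{ \textstyle\sum_{j=1}^{N} q_j\, \bar\sigma_j = t \Big\} .
```

So τ is a weighted average of N internal Ising spins on a complete graph; Ising itself is the case N = 1, and φ⁴ arises in the limit N → ∞ with suitably tuned constant (Curie–Weiss) couplings and weights.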
So all the theorems I am going to mention work perfectly well for φ⁴. In terms of models, then, the two perspectives are dealing with the same objects — that is the take-home message. [Question about the limit.] Yes, it is a limit — you get φ⁴ only in the limit — and in the case of φ⁴ the K_{ij} are actually constant, always the same, so every vertex behaves in the same way; and it is a Curie–Weiss property that the average of the variables naturally acquires this fourth power in its law, which is exactly what gives you the φ⁴. Thank you — yes, the q_j will also be constant. You need to tune the orders of magnitude: you want the K_{ij} to be quite small, actually, to make a nice average, and the same for the q_j, but it can be done. Okay, so that is the link between the two models: we were in fact working with the same models. The second thing now: in both cases we are taking a limit. In one case we let R tend to infinity and a tend to zero; in the other we take the graph G to infinity and then L to infinity. So, the connection between the limiting procedures: in constructive Euclidean field theory we have R → ∞ and a → 0, and the statistical physics equivalent is G → Z^d — again, I agree I should be more precise: you want G to be a nice graph, many things work for arbitrary G, but let's not complicate matters, so think of the box Λ_n — and then L → ∞. The clear connection between the two is that you should think of L as 1/a: in the first procedure you rescale the lattice by a factor a, and when you look at the rescaled lattice and at points x_1, …, x_n, they are typically at distance of order 1/a of each
other. So in one case you look at a rescaled lattice of mesh size a, at points at distance of order one from each other; in the other you look at the standard lattice of mesh size one, at points at distance of order L from each other. Okay, that is the second thing I wanted to mention. The last thing about the connection is this: in Euclidean field theory you had three parameters — b, λ and β — while in statistical physics it became pretty clear (or actually I did not even justify it, I just went for it) that I wanted to look at what happens at β_c. So how does β_c appear in the story? Why am I drawing a parallel between the β_c story in statistical physics and the Euclidean field theory story, where a priori there is no such parameter? So: the appearance of β_c. The reason is that the nearest-neighbor ferromagnetic Ising model — and also the φ⁴ models — exhibits what we call a sharp phase transition. What does that mean? I will state it for Ising, but you get something similar for the whole Griffiths–Simon class. For Ising it means the following: if I look at ⟨σ_0 σ_x⟩ at a parameter β < β_c, then this decays exponentially fast — if I take x = n·e_1, it behaves like e^(−n/ξ(β)·(1+o(1))). So there is a rate of exponential decay, ξ(β), the correlation length. But now think of the story of trying to get a limit: I was allowing you to rescale, but if you want something non-trivial, you want things to decay polynomially, not exponentially fast — as soon as the decay is faster than polynomial, in fact, when you pass to the limit you
are going to get just white noise (for people who know what that is). So in order to get something non-trivial — in order to get a limit — one wants to work with n smaller than ξ(β): you look at the system at a scale smaller than what we call the correlation length. But if β < β_c, this correlation length is finite, so you cannot rescale: you cannot let L go to infinity — equivalently, you cannot let a go to zero — without exceeding the correlation length by a large amount. So in order to have a limit, you need the correlation length to go to infinity, and that means you need β to tend to β_c. In fact, every reasonable limit you could cook up in constructive Euclidean field theory necessarily requires letting the parameter β go to the critical parameter of your system; of course you can vary b and λ, but β will have to be close to the β_c corresponding to these parameters b and λ. That was what I wanted to say. I still want to tell you the statement at the end of this first class — otherwise I think I would really have absolutely nobody tomorrow, and there is a camera, so I am forced to give the second class regardless. Okay, so: the main result. Please come tomorrow, those who can — maybe the lack of pedagogy in the way I present doesn't show it, but I like you, so please come. So, 1.4: the main result. The result deals exactly with the question of which limits we can get for φ⁴ lattice models. I am going to state it for Ising, because I hope I convinced you that φ⁴ is not so different; and I will actually start tomorrow by giving a few remarks, in particular telling you that the statement also holds for the φ⁴ models. The theorem is a description of what happens at criticality —
or near criticality. So, the theorem: it is a result by Michael Aizenman and myself, published maybe last year, and it says the following. I am again going to use the statistical physics perspective. Remember we defined the average of f against φ with the parameter a; here I redefine it with the parameter L, since I am in the second interpretation, but it is really the analogue of the T_f^a(φ) defined before. The advantage of the statistical physics side is that we can already take the limit as the graph goes to infinity, so the R-limit is already dealt with. I look at the sum over x ∈ Z^d of f(x/L) σ_x — perfectly analogous to the other object — and, remember, we rescaled, at least in Example 1, the only one we treated; here I rescale by 1/√(Σ_L), where Σ_L is by definition the sum over x, y in the box of size L of ⟨σ_x σ_y⟩, a quantity that depends on β. Just after stating the theorem I will make an observation that makes it clear this is the right rescaling. So what does the theorem say? Consider the nearest-neighbor ferromagnetic Ising model — nearest-neighbor ferromagnetic meaning, remember, J_{x,y} = 1 if x and y are neighbors and 0 otherwise — on Z⁴. Then there exists a constant c > 0 such that the following happens: I can take any β ≤ β_c, any L ≤ ξ(β) — the ξ(β) I defined earlier; by the way, that was a definition: there is a ξ(β) satisfying this, and it is unique — so I take L smaller than this.
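Written out (my transcription of the board), the normalized smeared average in the statistical physics variables is

```latex
T_{f,L}(\sigma) \;:=\; \frac{1}{\sqrt{\Sigma_L}} \sum_{x \in \mathbb{Z}^d} f\!\left(\tfrac{x}{L}\right) \sigma_x ,
\qquad
\Sigma_L \;:=\; \sum_{x,\, y \in \Lambda_L} \langle \sigma_x \sigma_y \rangle_\beta .
```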
And I take f continuous and compactly supported — of course this includes the more interesting case of smooth compactly supported f, which would be the natural class, but you do not even need that. Then I claim the following. Look at the characteristic function, at parameter β, of T_{f,L}(σ): I am trying to see what this variable looks like, and the theorem says it looks like a normal random variable. What do I mean by that? If I divide ⟨e^(izT_{f,L}(σ))⟩ by e^(−z²⟨T_{f,L}(σ)²⟩/2) — which is exactly what you would get if it were a normal variable, since the characteristic function would equal this, ⟨T_{f,L}(σ)²⟩ being of course the variance — well, it is not exactly a normal random variable, but I can measure how close it is: there exists a constant C(f), depending only on f, such that for every z > 0 this ratio differs from 1 by at most C(f)·z⁴ divided by — and because it is important, I put it in another color — (log L)^c. So what this theorem is truly saying — and this will be the end of this first class — is that T_{f,L}(σ) is almost a centered normal random variable, with the variance it should have, namely the second moment of the variable; and "almost" means that the approximation gets better and better as L tends to infinity. In particular — and we will see this tomorrow — if you believe this is also true for φ⁴, then whatever limit you end up with is such that every average against a smooth function f is a normal random variable: you get a generalized Gaussian process. And the problem with that is that you always end up with something trivial.
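In display form, then, the main estimate — for the nearest-neighbor ferromagnetic Ising model on Z⁴; this is my reconstruction of the spoken statement:

```latex
\left|
\frac{\big\langle e^{\,i z\, T_{f,L}(\sigma)} \big\rangle_{\beta}}
     {e^{-z^2 \langle T_{f,L}(\sigma)^2 \rangle_{\beta}/2}}
\;-\; 1
\right|
\;\le\;
C(f)\, \frac{z^{4}}{(\log L)^{c}}
\qquad
\text{for all } \beta \le \beta_c,\;\; L \le \xi(\beta),\;\; z > 0 .
```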
So it is a no-go theorem from the point of view of constructing a Euclidean field theory from the φ⁴ models. I will come back to this tomorrow, but I wanted to state the result properly. Just a second remark, because you could object: maybe you simply chose your variance — your normalization — poorly. But notice that this normalization is such that, for f positive, ⟨T_{f,L}(σ)²⟩ is bounded between two constants. Why? Because Σ_L — sorry, what did I do here? f does not change, so I need f(x/L): if I want to look at things at scale L, I look at the average at scale L. So Σ_L is, in some sense, exactly what you get when you take f to be the indicator function of the box of size one, since that gives you the sum of all spins in the box of size L; and ⟨T_{f,L}(σ)²⟩ is then the variance of this, i.e. the average of (Σ_x 1{x/L ∈ Λ_1} σ_x)², which is a very complicated way of writing Σ_L. Now, the indicator of the box is not continuous and compactly supported, but if you take any nice compactly supported function — for instance zero outside the box of size one and bounded by 10 — then you get something smaller than 100 times this normalization. So this really is the right normalization; of course you can change it by a constant factor, but you will always end up with something like this. I guess that is a good point to stop. [Question about a lower bound on the variance.] Yes — as soon as f is not identically zero and
positive, there is a region where it is genuinely positive, and then you easily see that you get something larger than a constant. Okay, so that is the theorem. Tomorrow I will make a few comments on it and explain the statement for φ⁴; after that we will have finished the first part of the class, and we will dive into the toolbox — that is, I will try to explain the object that is going to be powerful enough for us to actually prove such a result. Okay, well, see you tomorrow; I, at least, will be here. [Question: I am not sure I correctly understood the link between the two models, in particular the Griffiths–Simon class — on each point you take an average?] Yes. So if you look at the structure of ρ, it just means that τ_x is hiding a collection: τ_x is naturally related to σ̄_{x,i} for i = 1, …, N. Why? Let me maybe erase — can I erase? Imagine that in one case you have Σ_τ e^(−H(τ)) Π_x dρ(τ_x) — let me write it like that even though it is a discrete thing in general. If you write out what this is, it is a product of sums, and, summing over configurations σ̄ ∈ {−1, +1}^(V(G)×{1,…,N}), this whole thing is nothing else than the exponential of the sum over edges x, y of your graph and over i, j = 1, …, N of J_{x,y} q_i q_j σ̄_{x,i} σ̄_{y,j} — and this term is exactly e^(−H(τ)), because the structure of ρ forces τ_x to equal Σ_i q_i σ̄_{x,i}. So this part was telling you what the interactions are, and there is a J_{x,y} here —
the interaction between different blobs, different groups. And then you also have, for each x, the internal interaction, which gives you Σ_{i,j=1}^N K_{ij} σ̄_{x,i} σ̄_{x,j}. And here you just see that it is an Ising model on V(G) × {1, …, N} with some weird coupling constants: between different blobs, the coupling between σ̄_{x,i} and σ̄_{y,j} is J_{x,y} q_i q_j, and the internal couplings are the K_{ij}. Is it a little clearer? Good. [Question: why do we need L smaller than ξ(β), and what if L is very large?] Actually you do not need it; it is just that when L is very large, what you get is trivial. If L is way above ξ(β), the correlations in your system decay exponentially fast, like e^(−distance/ξ(β)), so they are extremely small — basically there is no correlation in your system, and in the limit you get white noise: every bit behaves completely independently. So you do not need the condition, but you end up with something that has been known for a long time, let's put it that way. [Question: in the statement, do we take the infinite-volume Ising model?] You take the infinite one, but you could take a finite one and it would not change the answer; you could even take the graph itself of size L — of course you then need to be careful — and then too everything ends up Gaussian. [Question about the exponent and the constant C(f).] Yes, the exponent c exists and it is a small one; as for C(f), I just did not want to write it, but it is something like the fourth power of the norm of f times, maybe, the range — f is compactly supported, so it must be zero outside a box of some size, say κ, and you get
something rough, like the range to the power 12 or so — which is ridiculous, but you get something extremely explicit in f; there is nothing hidden. Okay, well — see you tomorrow.