Okay, so welcome back. I cheated a bit and started writing before you arrived, so let me recall what we did. In the first lecture, to refresh memories, I gave a rigorous probabilistic definition of the Liouville correlation functions. So what did I do? I introduced the main parameter of the theory, gamma, which in all these lectures belongs to the interval (0, 2). I introduced Q = gamma/2 + 2/gamma, a function of gamma. And I introduced the cosmological constant mu, a positive parameter which is just a scale: it only appears in this scaling relation here, so it's not an important parameter of the theory, but it's nonetheless essential for the existence of the theory; it's a trivial parameter in some sense. What I did last time is justify that, under the bounds up there, which I'm calling the extended Seiberg bounds (this is how we sometimes call them), I can define the product of these fields in the path integral language. Remember, these should be seen as something like insertions, in the path integral language, of fields with weight alpha_k at the points z_k in the complex plane; I'll justify this in lecture three and show why these correlations are interesting to study, for instance from the random planar map perspective. So I defined the correlations of these fields as 2 times mu to the power minus s, where s is given by the expression upstairs, times Gamma(s), the standard gamma function, times this product here, which is the Gaussian free field part of the theory, times the interesting piece of the theory, which is an expectation: the one I wrote above. So under these extended Seiberg bounds, I can define this object, okay?
It's the integral of the Gaussian multiplicative chaos measure, a random volume form, integrated against a function. And what is this function? It's a function which essentially has singularities around each point z_k, and the intensity, or magnitude, of these singularities is given by the coefficients alpha_k, okay? The extended Seiberg bounds up there ensure that this quantity is non-trivial: strictly between 0 and plus infinity, okay? In the lecture notes I introduce this notation, and the extended Seiberg bounds read: for all k, alpha_k < Q (remember, Q = gamma/2 + 2/gamma), and minus s strictly less than the minimum of 4/gamma^2 and the infimum over k of (2/gamma)(Q - alpha_k), okay? These are the extended Seiberg bounds. If some alpha_k is greater than or equal to Q, the Gaussian multiplicative chaos measure explodes: if I integrate my exponential of the free field around that point, say in a ball of radius one, or any radius, I get infinity almost surely. The singularity is too strong and I can't integrate it anymore. That's where the first bound comes from. And then, depending on whether minus s is positive or negative, I would get an expectation which is 0 or infinity. Of course I could still make sense of that, but it's not the right object; it's not the right way to construct the Liouville correlations, and in the probabilistic approach it would be a trivial thing. These bounds come from what I explained to you: if I integrate a Gaussian chaos measure over some open set, say a ball of radius r centered at any point z, then the moment of order p exists if and only if p is strictly less than 4/gamma^2.
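In symbols, the bounds just recalled can be summarized as follows; this is a sketch in the notation of the discussion, where the weights are alpha_1, ..., alpha_n:

```latex
% s as defined from the weights:
s = \frac{1}{\gamma}\Big(\sum_{k=1}^{n} \alpha_k - 2Q\Big),
\qquad Q = \frac{\gamma}{2} + \frac{2}{\gamma}.

% Extended Seiberg bounds:
\forall k:\ \alpha_k < Q,
\qquad
-s \;<\; \frac{4}{\gamma^2} \wedge \min_{k} \frac{2}{\gamma}\,(Q - \alpha_k).
```

The first condition keeps each insertion integrable against the chaos measure; the second keeps the moment of order minus s finite.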
And this bound here comes from the fact that when I put a singularity at a point, the moments of this variable exist if and only if the order is smaller than 4/gamma^2 and, especially, smaller than the infimum of (2/gamma)(Q - alpha_k). So it's just the condition ensuring that I'm allowed to take this moment: I have to look at what happens around each singularity, and also check that away from the singularities everything is okay. Okay, maybe last time it was a bit abrupt, direct I mean: I introduced these correlation functions directly, set this as a definition, and tried to show you where the bounds come from, okay? So today: something that was maybe not so clear on Wednesday, in the first lecture, is that I said n is greater than or equal to 3, but I didn't really explain why. The point, and this is the computation I'm going to do in front of you, is that these two conditions imply n greater than or equal to 3. That's what I'm going to develop today, explain to you why. And then: how do you nonetheless define a two-point correlation function? When I try to define a two-point correlation function, the material of the mating-of-trees paper by Duplantier, Miller and Sheffield will enter the game. Okay, so let me first give you back the definition of the three-point correlation function, because I'm going to be working on it. The z_k's are in the complex plane, but I can also send one point to infinity, say this one; it comes out of a scaling relation, and if I send the third point to infinity I have to renormalize to get something.
So delta 3; in the lecture notes this is denoted C(alpha_1, alpha_2, alpha_3), but you can also denote it quite naturally like this. In the lecture notes I set this equal to this, and I had this explicit expression, which is what I'll be working on: mu to the power minus s, where s is (alpha_1 + alpha_2 + alpha_3 - 2Q)/gamma, times Gamma(s), times an expectation. So I set this definition, I send one point to infinity, and this is what I get. Okay, so that was the definition of the three-point correlation function where one point is sent to infinity. At the end of the last lecture I explained the main theorem that we proved, which is the purpose of these lectures: this expectation has an explicit expression, which is called the DOZZ formula. Okay, so let me try to explain now. We defined the three-point correlation function under these conditions; let me argue, and it's rather easy, why there is no two-point correlation function, because in fact it never really appeared in the first lectures. If you want to define a two-point correlation function, the first thing you do is set alpha_2 to 0, for instance, okay? And you look at the bounds. So what are the bounds? alpha_1 < Q, alpha_3 < Q, and, taking alpha_2 to 0, minus s strictly less than the minimum of 4/gamma^2, (2/gamma)(Q - alpha_1) and (2/gamma)(Q - alpha_3), okay? These are the bounds I have to satisfy to define the two-point correlation function. Now what does this imply? It implies that this quantity here is smaller than this one, and it's very easy to see what that means.
So, this smaller than this implies, let me get it right, alpha_1 < alpha_3. And this one smaller than that one, the other bound, gives alpha_3 < alpha_1. So of course, you see, there's an obvious contradiction: you can't have both of these strict inequalities. And in fact, it shows something more: if alpha_1 is different from alpha_3, then for all epsilon positive and small, the weights (alpha_1, epsilon, alpha_3) do not satisfy the extended Seiberg bounds. So that's the first point. However, and this is somehow the key observation if you want: if I choose alpha in the interval (gamma/2, Q), which is a non-trivial interval since Q = gamma/2 + 2/gamma, then 4/gamma^2 is bigger than (2/gamma)(Q - alpha); I let you do the algebra. And then, for all epsilon strictly positive, the weights (alpha, epsilon, alpha) do satisfy the extended Seiberg bounds. It's an easy computation: the conclusion is that 2Q - 2 alpha - epsilon < 2(Q - alpha), which is obvious for all epsilon strictly positive. So, conclusion of my discussion: for alpha in this interval, the two-point correlation function with weights (alpha, 0), if you wish, is infinity. However, for all epsilon strictly positive, the three-point correlation function with weight alpha at 0, epsilon at 1 and alpha at infinity exists. Okay, so it means that if we take the same weight twice, we can define a three-point correlation function.
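The algebra behind this discussion can be written out; a sketch, with one insertion at 0, one at infinity, and the middle weight set to 0 or epsilon:

```latex
% Two points (\alpha_1, \alpha_3): here -s = \tfrac{1}{\gamma}(2Q - \alpha_1 - \alpha_3), and
-s < \tfrac{2}{\gamma}(Q - \alpha_1) \iff \alpha_1 < \alpha_3,
\qquad
-s < \tfrac{2}{\gamma}(Q - \alpha_3) \iff \alpha_3 < \alpha_1,
% which is contradictory: no two-point function.

% Three points (\alpha, \epsilon, \alpha) with \gamma/2 < \alpha < Q:
-s = \tfrac{1}{\gamma}\,(2Q - 2\alpha - \epsilon)
   \;<\; \tfrac{2}{\gamma}(Q - \alpha) \iff \epsilon > 0,
% and moreover
\tfrac{2}{\gamma}(Q - \alpha) < \tfrac{4}{\gamma^2} \iff \alpha > \tfrac{\gamma}{2},
% so the extended Seiberg bounds hold for every \epsilon > 0.
```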
And now, of course, the natural thing to do if you want to define a two-point correlation function is to renormalize this quantity, take the limit, and see whether the limit exists. That's the main purpose of today's lecture. Okay, and the answer will be that you get something called the reflection coefficient of Liouville theory, which can be interpreted as the partition function of the quantum sphere measure introduced by Duplantier, Miller and Sheffield. Okay, so let me start. I want to keep this on the board, and I'm going to erase my definitions; anyway, you have the lecture notes for them. So let me jump straight away to what goes under the name of Lemma 3.4 in the lecture notes. I'm going to prove Lemma 3.4 directly, by first assuming something that is completely non-trivial. So, Lemma 3.4, the Liouville reflection coefficient: for all alpha in (gamma/2, Q), I can define the limit, as epsilon goes to zero, of epsilon times C_gamma(alpha, epsilon, alpha), the three-point correlation function. (There is some normalization here, linked to the two in the definition; it's there so that at the end you match the DOZZ formula of physics, but it's not important.) In the notation of the lecture notes, when I have one point at infinity, I chose to call this the three-point structure constant. Oh, by the way, sorry: with respect to the lecture notes, I should put a gamma and a mu everywhere; all these quantities depend on gamma and on mu, so please correct this in your mind. C_gamma of course also depends on mu, but we didn't stress the dependence on mu; maybe we should have in our papers. The dependence on mu is essentially trivial, so sometimes we don't stress it.
So this limit exists and converges to something which has a nice probabilistic expression. This quantity appears very often in the bootstrap approach to Liouville theory, and what this theorem provides is a nice probabilistic expression for the two-point correlation function. So I'm going to give a proof of this by first taking for granted a completely non-trivial ingredient. So, proof. It goes this way. Consider rho(alpha, epsilon, alpha); remember, I'm taking the expectation of this quantity to some power. Let me take my lecture notes. The statement of the lemma is that the limit exists, and R(alpha) is going to be defined in a way I'm going to explain. I went kind of backwards compared to the notes: first, this limit exists, and then I'm going to spend an hour at the end of these lectures giving you a probabilistic expression for it. So yes, the statement of the lemma, as I'm stating it here in front of you, is that this limit exists; it's non-trivial that I can multiply by epsilon and the limit exists. Okay, so let me explain how you prove this. C_gamma(alpha, epsilon, alpha) is equal to 2 times mu to the power (2/gamma)(Q - alpha) - epsilon/gamma, a constant, times the gamma function, times (and this is always the interesting part) the expectation of rho(alpha, epsilon, alpha) to the power (2/gamma)(Q - alpha) - epsilon/gamma. And this random variable, it's this. Remember, this is the maximum between |x| and 1. So what I mean is that the expectation part is just what I did up there, copied in a specific case.
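In symbols, the formula on the board should read roughly as follows; this is a reconstruction following the conventions recalled in the first lecture, with rho(alpha, epsilon, alpha) the chaos integral with the three insertions:

```latex
C_\gamma(\alpha,\epsilon,\alpha)
 \;=\; 2\,\mu^{-s}\,\Gamma(s)\,
 \mathbb{E}\Big[\rho(\alpha,\epsilon,\alpha)^{-s}\Big],
\qquad
s = \frac{2\alpha + \epsilon - 2Q}{\gamma},
\quad
-s = \frac{2}{\gamma}(Q-\alpha) - \frac{\epsilon}{\gamma}.
```

As epsilon goes to 0, the moment order minus s increases to the critical exponent (2/gamma)(Q - alpha), which is exactly where the moment ceases to exist.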
It's given by this expression, and what I really want you to register for the moment is this formula. Of course, when epsilon goes to zero, this factor trivially goes to mu to this power, and this one goes to the gamma function evaluated here. So everything is about trying to understand what this random variable is doing, okay? The rest is a trivial matter, but I have to keep these terms, because they're important to match the physics. Okay, so what's going on? Let's look at this variable. First, let me explain roughly what's going on. When epsilon goes to zero, I'm hitting the threshold (2/gamma)(Q - alpha), and if I look at what's going on around zero, I said this variable doesn't have a moment of order (2/gamma)(Q - alpha). This is what makes the expectation blow up: I'm hitting exactly the moment where it explodes, okay? And this factor plays no role at all around zero, so you're going to allow me to set epsilon to zero here; it's pretty safe, since epsilon only plays a role in the fact that the moment is blowing up. So I replace this factor by its value at epsilon equal to zero. Okay. If I do that, I have to study this variable, and this variable is I_alpha plus some other term, which I called I'_alpha, okay? And what is I_alpha? It's the contribution around zero, where the blow-up happens: I_alpha is (not roughly, actually equal to) the integral over |x| less than or equal to 1 of 1 over |x| to the power gamma alpha, against the chaos measure. And there is I'_alpha, the analogous piece away from the unit ball. In fact, if you do the change of variable x goes to 1/x (you can believe me), it's symmetric: I'_alpha has the same law as I_alpha, by conformal invariance; what happens at infinity and around zero is the same thing. So I_alpha and I'_alpha have the same distribution, okay?
And here's the main tool, the main theorem that I'm going to spend an hour on afterwards. It's the following, Theorem 3.3 in the lecture notes: the probability that I_alpha is bigger than t is equal to R-bar(alpha) over t to the power (2/gamma)(Q - alpha), plus a remainder which is O of 1 over t to the power (2/gamma)(Q - alpha) + eta, where eta is positive. So, of course: this variable is roughly a sum of two variables, and you're hitting the threshold where the moment explodes; so if you want to study it, you have to study the tails of this random variable, okay? And the main theorem I'm going to spend an hour on is this one. So once you have it, what goes on? Well, it's not very complicated. Roughly, the tail of each piece comes from the singularity, which is really adding weight to the random variable. And the two pieces are roughly independent: around zero and around infinity you only have a finite correlation, so the tails just sum up, okay? So that's reasonable to believe. And so roughly, let me say what this implies. What do you do if you have two independent heavy-tailed variables? The tails just sum: I get 2 R-bar(alpha) over t to the same power, plus the O term, which is negligible, okay? So we're done. This implies (I leave it as a take-home exercise, a simple exercise in probability) that if I multiply by epsilon, the expectation converges as epsilon goes to zero, because I now know the tails completely. It converges (there's a gamma here) to 2 gamma times (2/gamma)(Q - alpha) times R-bar(alpha), that is, to 4 (Q - alpha) R-bar(alpha). And I think no one will complain. So the take-home message is: the moment is blowing up.
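The claim that the tails of (roughly) independent heavy-tailed variables simply add can be checked numerically on a toy model. The sketch below uses two independent Pareto variables with P(X > t) = t^(-p), standing in for I_alpha and I'_alpha; the exponent p = 2 and the threshold t are arbitrary illustrative choices:

```python
import random

def pareto(p, rng):
    """Sample X with P(X > t) = t**(-p) for t >= 1, by inverse CDF."""
    u = 1.0 - rng.random()  # u in (0, 1], avoids division by zero
    return u ** (-1.0 / p)

def tail_ratio(p=2.0, t=50.0, n=500_000, seed=1):
    """Monte Carlo estimate of P(X + Y > t) / (2 * t**(-p)) for X, Y
    i.i.d. Pareto(p).  For large t the ratio is close to 1: the tail
    of the sum is just the sum of the two individual tails."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if pareto(p, rng) + pareto(p, rng) > t)
    return (hits / n) / (2.0 * t ** (-p))

if __name__ == "__main__":
    print(tail_ratio())  # close to 1 (slightly above at finite t)
```

The ratio sits slightly above 1 at finite t, since configurations where both variables are moderately large also contribute, but the correction vanishes as t grows.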
It's blowing up because of a singularity at zero and one at infinity. The two pieces are roughly independent, so provided I know the tail of one of them, I know the tail of the sum. And once I know the tail, as I approach the threshold where the moment ceases to exist, I can completely and easily study what's going on: up to some trivial terms, it's just the tail of the random variable. Okay, I hope that when we do this probability computation everything will become completely clear. Okay, so all the juice is contained in this theorem. But I first wanted to show you that it's just a question of a moment blowing up. The key is to understand the tail behavior of GMC, Gaussian multiplicative chaos, random volume forms. Okay. [A question from the audience about the positive eta in the error term.] Yes, I think it should also be true with a little o. But the eta is very important in our work, because we need to analytically continue these moments, so we have to take out poles. You won't see this in these lectures, but there are lots of technicalities like taking out poles of these quantities, and if you want to take out poles, you need control on this second term. These are very important bounds in our work; but here, no, it plays no role. Actually, anyway, I'm not going to prove this error term; I'm going to explain to you where the main term comes from. Okay. So now I'm going to spend time on proving this, giving you a probabilistic expression for R-bar(alpha). If I have a probabilistic expression for it, then I get a probabilistic expression for the R(alpha) here, because, let me just sum up and say what R(alpha) is at the end of the day. This limit here, the reflection coefficient of Liouville theory: R(alpha) is 4 times mu to the power (2/gamma)(Q - alpha), times the gamma function Gamma evaluated at minus (2/gamma)(Q - alpha), times R-bar(alpha), where R-bar(alpha) is still a bit abstract for you guys, but I'll be discussing it in a moment. Okay. So that's the two-point correlation function of Liouville theory. So now everything is about understanding this, okay? And at the end, you'll get an exact formula for it, which comes out of DOZZ as a corollary. So let me introduce some material, since I have to describe this tail. All things that people here know rather well, I think, but still, let me refresh memories: it's the Williams decomposition theorem. So what is the Williams decomposition theorem? I think it's stated in the lecture notes as Lemma 3.1, but it's hard to read; it's better to do a drawing, and that's what I'm going to do. So I consider a Brownian motion with a negative drift: B_s minus nu s, with nu positive. So of course it goes to minus infinity at infinity (even I remember this much stochastic calculus). So it's going to hit a maximum: if it goes to minus infinity, it hits a maximum M, the supremum of my drifted Brownian motion. And okay, it's a very well-known fact that M is distributed like an exponential variable: the probability that M is bigger than x is exponential of minus 2 nu x, where nu is the drift. Okay, and so the Williams decomposition is the following thing. I'm going to shift my curve here down and re-center it. Okay.
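The exponential law of the maximum is easy to check by simulation. A minimal sketch, using an Euler discretization of B_s - nu*s; the step size, horizon and sample size are arbitrary choices, and the discrete-time maximum slightly underestimates the continuous one:

```python
import math
import random

def max_of_drifted_bm(nu, dt, t_max, rng):
    """Running maximum of B_s - nu*s on [0, t_max], Euler scheme."""
    x, m, sqdt = 0.0, 0.0, math.sqrt(dt)
    for _ in range(int(t_max / dt)):
        x += rng.gauss(0.0, 1.0) * sqdt - nu * dt
        m = max(m, x)
    return m

def survival_prob(x, nu=1.0, dt=0.004, t_max=10.0, n=1200, seed=7):
    """Monte Carlo estimate of P(M > x); theory: exp(-2*nu*x)."""
    rng = random.Random(seed)
    return sum(max_of_drifted_bm(nu, dt, t_max, rng) > x
               for _ in range(n)) / n

if __name__ == "__main__":
    # Theory: P(M > 0.5) = exp(-1) ~ 0.368; the discretization biases
    # the estimate slightly downward (the grid misses the true max).
    print(survival_prob(0.5), math.exp(-1.0))
```

With a negative drift the horizon t_max = 10 already captures essentially all of the maximum, since the path has drifted far below zero by then.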
Around the maximum, what do I get? So let me try to reproduce the picture: you get this on one side, and on the other side you get this. And so here, what do I have? I have minus M, minus the maximum. And the Williams decomposition says the following: if I take a drifted Brownian motion and I look at what's around the maximum, conditionally on the value of the maximum, I get on this side (I think I called it B^2_t) a drifted Brownian motion conditioned to stay negative; of course it's below zero, so it's conditioned to be non-positive. This is a standard diffusion which has been studied for years in probability. So I get a drifted Brownian motion conditioned to be less than or equal to zero. And on the other side, what do I get if I look at things backwards, like this? I get the same thing, a drifted Brownian motion conditioned to be negative for all s positive, except that I'm not looking at the full trajectory: I'm looking at the trajectory up to L_{-M}, which, you see, is the last time that this drifted Brownian motion hits minus M. So the take-home message is: when I decompose the trajectory of my Brownian motion around the maximum, I first sample the maximum according to an exponential variable; then, if I re-center, on the right I get a Brownian motion with the same drift conditioned to stay negative, and on the other side, if I look at it backwards, I get the same thing, stopped at the last time it hits minus M. So that's the Williams decomposition lemma that we're going to use. So quite naturally, I'm going to introduce some definitions which I'm going to use in the sequel: this picture, but on all of R.
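Stated compactly, and as a paraphrase of the drawing, with T_M the almost surely unique time at which the maximum M is attained:

```latex
% Williams' path decomposition for (B_s - \nu s)_{s \ge 0}, \nu > 0:
% 1. The maximum is exponential:
\mathbb{P}(M > x) = e^{-2\nu x}, \qquad x \ge 0.
% 2. Conditionally on M, the re-centred post-maximum path
%    \big(B_{T_M+s} - \nu(T_M+s) - M\big)_{s \ge 0}
%    is a Brownian motion with drift $-\nu$ conditioned to stay $\le 0$.
% 3. The re-centred pre-maximum path, run backwards from $T_M$, has the
%    same law, stopped at $L_{-M}$, its last hitting time of $-M$.
```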
I'm going to extend this to the whole line. Somehow, what's going to happen is that I'm going to look at tail events, so M is going to go to infinity, and so naturally I'm going to end up with a two-sided drifted Brownian motion conditioned to be negative. So let me introduce some notation. Here I'm really following section 3.2, tail expansion of GMC, page 12. Okay. So let me introduce the main objects of this story. I introduce the two-sided process, a calligraphic B: for s negative it's given by B-bar^alpha_s, and for s positive by B^alpha_s (a straight B and a curly B, with a bar on one of them to distinguish them), where B^alpha and B-bar^alpha are independent Brownian motions with drift alpha - Q, conditioned to be negative; or, in the picture that I just drew, the drift is minus nu with nu = Q - alpha positive. Okay, so these are two independent drifted Brownian motions, each conditioned to stay less than or equal to 0. Okay. For those who know a bit of these stories: when I study the exponential of the free field, there's going to be a radial part, and the radial part is going to be described by this process; and the non-radial part is going to be given by what Duplantier, Miller and Sheffield called the lateral noise, which I'll introduce right now. Somehow, in all this business, what makes things simpler to study when you have two points is that you have lots of symmetry, and the lateral noise plays essentially no role: you could state all the theorems on toy models where there's just a Brownian motion. But we have to introduce it.
So what is the lateral noise? It's going to be the non-radial part of the Gaussian free field around 0: what remains once the radial part is removed. The lateral noise process is a Gaussian field with the following covariance. I'm going to switch to the cylinder: s is going to belong to R and theta to [0, 2 pi]; you know, the complex plane, by a conformal map, is the same thing as the cylinder R x [0, 2 pi]. So I'm going to introduce this field, and we'll see in a moment why and when it appears. The lateral noise process is just a Gaussian field with a logarithmic covariance: I take the maximum of exponential minus s and exponential minus t, and I get this expression. And given this field on R x [0, 2 pi], I can introduce the Gaussian chaos measure, the exponential of this field. Oh, in the first lecture I sometimes forgot the 2, but of course there's a 2 here, okay? This is a standard Gaussian chaos measure. And I'm going to be interested in slices at fixed s: I introduce Z_s by integrating this chaos measure on a slice of my cylinder at fixed s. Things will become quite clear, I hope, in a moment. So, a way of constructing the lateral noise: take a Gaussian free field on the full plane, take out the radial part, and map it to the cylinder; what you get is this field, okay? Now, a caveat: this is a generalized notation, the field only exists in the space of distributions. What makes sense is integrating Z_s against ds; Z_s is not necessarily defined as a function on a slice, okay? But I'm still going to use this notation. And a very important property, still with my abusive notation as if it were a random function (it's a random generalized function), is that it's stationary: if I translate this field along the cylinder, it has the same distribution. Okay. So I'm going to introduce, did I erase the notation? I'm going to introduce rho(alpha), which is the
limit, in some sense, of rho(alpha, epsilon, alpha), the random variable that's blowing up. Now that I have all the material, I introduce rho(alpha), okay? This notation should be seen, loosely, as the limit of rho(alpha, epsilon, alpha) as epsilon goes to zero in my picture, okay? It's written up there: I take my two-sided drifted Brownian motion conditioned to be negative, with drift minus (Q - alpha), and I integrate its exponential against the lateral noise process. And now I can introduce the tail coefficient R-bar(alpha): R-bar(alpha) is the expectation of rho(alpha) to the power (2/gamma)(Q - alpha). So here's my definition; yes, it's a definition. [A question: is rho(alpha) equal in law to rho(alpha, 0, alpha)?] No, no: you'll see in the proof that it's not. In any case it can't be, because the expectation of rho(alpha, 0, alpha) to the power (2/gamma)(Q - alpha) is infinity, it's blowing up, whereas for this one it's not. Why? Okay, let me explain. This object here, what you should see is that it's really the same thing as the exponential of a log-correlated field with no singularity; you should really see this variable as the exponential of a log-correlated field with no singularity. The original variable, this one, is the exponential of a log-correlated field. When I say log-correlated field, it means the covariance explodes on the diagonal, like log 1 over |x - y|. (Remember, it's integrated against dx; and this factor is the max of |x| and 1, so inside the ball it's 1, sorry.) But the original variable also has something blowing up around zero, and the blow-up is in fact due to the maximum of the drifted Brownian motion up there. When you condition the process to be negative, you're killing the singularity. And so let me stress once again: the expectation of rho(alpha, 0, alpha) to the power (2/gamma)(Q - alpha) (maybe it's a bad notation, I don't know) is worth infinity.
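Collecting the objects just introduced in symbols; this is a sketch, with Y denoting the lateral noise and the normalization conventions those of the lecture notes:

```latex
% Lateral noise on the cylinder \mathbb{R} \times [0, 2\pi]:
\mathbb{E}\big[Y(s,\theta)\,Y(t,\theta')\big]
  = \ln \frac{e^{-s} \vee e^{-t}}{\big|e^{-s+i\theta} - e^{-t+i\theta'}\big|}.

% Slices of the associated chaos measure (defined via regularization):
Z_s\,ds = \int_0^{2\pi}
  e^{\gamma Y(s,\theta) - \frac{\gamma^2}{2}\mathbb{E}[Y(s,\theta)^2]}
  \, d\theta\, ds.

% The limiting variable and the tail coefficient:
\rho(\alpha) = \int_{-\infty}^{+\infty}
  e^{\gamma \mathcal{B}^{\alpha}_s}\, Z_s\, ds,
\qquad
\bar{R}(\alpha) = \mathbb{E}\Big[\rho(\alpha)^{\frac{2}{\gamma}(Q-\alpha)}\Big],

% which is finite precisely when
\tfrac{2}{\gamma}(Q-\alpha) < \tfrac{4}{\gamma^2}
\iff \alpha > Q - \tfrac{2}{\gamma} = \tfrac{\gamma}{2}.
```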
That's the whole point for the two-point correlation function. Now here's something I'm going to admit: roughly, this variable is just the exponential of an ordinary log-correlated field, with a divergence on the diagonal. Just like I said in the first lecture, when I integrate the chaos of a log-correlated field, it has moments of order up to 4/gamma^2; this is, you know, set in stone. So if I take any interval I included in R, the expectation of the integral of Z_s ds over I, to the power p, which is equal to the expectation of my Gaussian chaos measure integrated on the interval, to the power p, is finite if and only if p < 4/gamma^2, okay? It's just an ordinary log-correlated field; this is the important thing. And what we're going to admit now is that integrating against Z_s ds, not over an interval, but against a weight which goes to minus infinity fast on both sides, doesn't change this moment property, okay? It doesn't change it. And so what I want to say is that the expectation of rho(alpha) to the power p is finite just as for an ordinary exponential of a log-correlated field. And so in particular this is a well-posed definition, because (2/gamma)(Q - alpha) is smaller than 4/gamma^2 if and only if alpha is bigger than gamma/2. So I have a well-defined random variable here, okay? So I think I'm going to take a two-to-three-minute break, because I've introduced everything I need now; you can digest for a few minutes and maybe ask me questions, and then we can start again in five minutes. Okay, so: I introduced the material to understand the tail of this variable (it's still written; it's going to be more transparent in a minute, I hope), I told you that in Theorem 3.3 we prove this, and I introduced
the R-bar(alpha) that appears in it, and this is what I'm going to justify in this hour, okay? But I had to introduce lots of material. And let me just say something again which is all over the lecture notes and which I haven't justified (I don't know if I'll have time): if I take the exponential of a log-correlated field and I integrate it on some open set, it has moments of order p for all p smaller than 4/gamma^2. And since, you know, this lateral noise business is just a log-correlated field, if I take a compact interval and I integrate, then I'm integrating the exponential of my log-correlated field on a compact set (okay, not an open set, sorry), and this is finite if and only if I have this bound. And the fact that here the weight goes very quickly to minus infinity in the exponent, when s goes to plus infinity or minus infinity, says that roughly it's like integrating this exponential of a log-correlated field on a compact interval. So I hope this is clear. Okay, so let's go to point 3.3. Everything is going to be clear, I hope, on why this appears. So I'll say it in two languages, one which is very analytic and one which is maybe more geometric. I want to study I_alpha; remember, I_alpha up there. My Gaussian free field has this covariance: log 1 over |x - y|, plus log |x|_+, plus log |y|_+, where |x|_+ is the maximum of |x| and 1. So if I'm inside the ball of center 0 and radius 1, these extra terms disappear, right? Inside the ball, for my GFF, I can take out those terms, okay? Now, I'm going to do a change of variables, mapping my ball of radius 1 and center 0 to a cylinder; well, not a full cylinder, a half cylinder. Now, if I look at the covariance, it's nothing but this: the minimum of s and t, plus the log of this quantity. This is a straightforward computation, okay? This is straightforward: it's just this, plus the log of the upstairs.
is 0 here, inside the unit ball. So that's the statement at the level of covariances; if you want a more geometric picture: take your Gaussian free field and take its radial part. The min(s, t) is the covariance of the radial part of my Gaussian free field, and the rest is the covariance of the lateral noise. So I did it as a simple computation, but you can see it geometrically: I take my Gaussian free field, I project it onto the circle of radius e^{−s}, and what's left is an independent Gaussian field whose covariance is that of the lateral noise. And of course everyone recognizes min(s, t): this is Brownian motion, so the radial part is distributed like a Brownian motion. So now what am I going to do? I'm going to do a change of variable in I(α), see Brownian motion and lateral noise appear, and then take the limit. So let's go. I(α) is the integral over |x| ≤ 1; I'm going to do the full computation at a formal level, but if you want to justify it rigorously you just add cutoffs and then make them go to zero. So I'm doing my change of variable: I set x = e^{−s+iθ}, so I get an integral from 0 to +∞, and I get a Jacobian e^{−2s}: in radial coordinates it's r dr dθ, and since r = e^{−s} I get a 2 in the exponential; this is standard. The singularity 1/|x|^{γα} becomes e^{γαs}. And let me emphasize that it's very important to always write the hidden normalization of the chaos explicitly; if you don't, you're going to end up saying something false. So now I decompose my field X over there into the sum of two independent parts. I'm going to call the radial part B_s, by the way; so B_s is a Brownian motion with the covariance min(s, t). So X is a Brownian motion plus an independent lateral part, and I get the integral from 0 to +∞ of e^{−2s} times the rest. There's a typo, by the way: I wrote αs
in the notes, and there's a typo: there should be a γ in front. Okay, still going a bit slowly, I'm going to write things explicitly: I get the exponential of γ times my Brownian motion, minus the normalization term that comes with it, times the lateral noise. And so, at the end of the day, if I put all the terms in s together (recall that Q = γ/2 + 2/γ), I get the integral of e^{γ(B_s − (Q−α)s)}, so a Brownian motion with drift −(Q − α), integrated against Z_s ds; remember, I wrote it here, Z_s is my lateral noise integrated over the slice, in dθ. So I get this expression, and this is the very important expression, the key thing, where you already see how this object is going to appear: the Brownian motion is going to have a maximum, we're going to zoom around the maximum, and we're going to see the two-sided process appear. Okay, so let me continue. Now I use the Williams decomposition, which I erased, and it says this. My drifted Brownian motion has a maximum, so the integral is equal in distribution, in law, to the following: I take out the maximum, e^{γM}, where M is the maximum of this drifted Brownian motion, and if I shift, I get the integral from −L_{−M} to +∞ of the exponential of γ times the shifted process, times Z_s ds. So I take the supremum of my radial part, the drifted Brownian motion, and the Williams decomposition says that if I look around the maximum, what do I get? On the right I get this drifted Brownian motion conditioned to be negative, and on the left I get the same thing up to the last time it hits −M. Now, remember, this Brownian motion is independent of the lateral noise. So in this picture I'm shifting, but the lateral noise is independent of the Brownian motion and it's stationary, so I'm allowed to take out the shift.
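To keep the bookkeeping straight, here is the change of variables just described, written out as a sketch in the notation of the lecture; the chaos normalization (minus γ²/2 times the variance) is written explicitly. This is my transcription of the board computation, not a verbatim formula:

```latex
\begin{aligned}
I(\alpha) &= \int_{|x|\le 1}\frac{1}{|x|^{\gamma\alpha}}\,
  e^{\gamma X(x)-\frac{\gamma^{2}}{2}\mathbb{E}[X(x)^{2}]}\,dx,
  \qquad x=e^{-s+i\theta},\ \ dx=e^{-2s}\,ds\,d\theta \\[4pt]
 &= \int_{0}^{\infty} e^{(\gamma\alpha-2-\frac{\gamma^{2}}{2})s}\,
  e^{\gamma B_{s}}
  \Big(\int_{0}^{2\pi} e^{\gamma Y(s,\theta)-\frac{\gamma^{2}}{2}\mathbb{E}[Y(s,\theta)^{2}]}\,d\theta\Big)\,ds
  \;=\; \int_{0}^{\infty} e^{\gamma\left(B_{s}-(Q-\alpha)s\right)}\,Z_{s}\,ds,
\end{aligned}
```

using the decomposition X(e^{−s+iθ}) = B_s + Y(s, θ) with E[B_s²] = s, and the relation γQ = γ²/2 + 2, which gives γα − 2 − γ²/2 = −γ(Q − α).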
So it's equal in law to e^{γM} times the integral from −L_{−M} to +∞ of e^{γ𝓑^α_s} Z_s ds. Okay, so now we're almost there. So this is the variable we have to study; this is I(α), or rather, I(α) is equal in law to this. So what does it say? It says that I have a trivial sandwich: I(α) is clearly below e^{γM} times ρ(α), the same integral taken over all of ℝ, and it's clearly above e^{γM} times the one-sided integral. But what is nice here is that these tails are obvious. This factor e^{γM}: I know that M, the maximum of a Brownian motion with drift −(Q − α), is an exponential variable with parameter 2(Q − α), so this is completely explicit: P(e^{γM} > t) = 1/t^{2(Q−α)/γ} for t ≥ 1, with no other terms. And the other factor is independent of it, and it's a random variable with a tail smaller than 1/t^p for all p < 4/γ². So it's trivial: you have two independent variables, and it's a scaling argument. The tail of e^{γM} ρ(α): by a simple scaling, it's nothing but E[ρ(α)^{2(Q−α)/γ}] divided by t^{2(Q−α)/γ}, up to some little correction, because the explicit tail of e^{γM} is only valid for t ≥ 1. So: these two factors are independent, I condition on ρ(α), I apply the explicit tail, and I get this. Right? What?
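The scaling step just mentioned (condition on the independent factor, then use the exact exponential tail of e^{γM}) can be sanity-checked with a toy model. In this sketch the law of the independent factor is an arbitrary discrete stand-in, and the values of γ and α are illustrative choices of mine, not from the lecture; only the exponent formula is the lecture's:

```python
# Tail of e^{gamma M} * rho for independent rho, with M = sup of a Brownian
# motion with drift -(Q - alpha), so M ~ Exp(2(Q - alpha)) and
# P(e^{gamma M} > t) = t^{-p}, p = 2(Q - alpha)/gamma, for t >= 1.
gamma, alpha = 1.0, 0.3   # illustrative values, not from the lecture
Q = gamma / 2 + 2 / gamma
p = 2 * (Q - alpha) / gamma

def tail_exp_gamma_M(t):
    # exact tail of e^{gamma M}; equals 1 for t <= 1
    return min(1.0, t ** (-p))

# Toy discrete stand-in for the law of the independent factor rho.
rho_vals = [0.5, 1.0, 2.0]
rho_probs = [0.2, 0.5, 0.3]

def tail_product(t):
    # condition on rho: P(e^{gamma M} rho > t) = E[ tail_exp_gamma_M(t / rho) ]
    return sum(w * tail_exp_gamma_M(t / c) for c, w in zip(rho_vals, rho_probs))

# For t larger than every value of rho this collapses to E[rho^p] / t^p,
# which is exactly the scaling identity used on the board.
E_rho_p = sum(w * c ** p for c, w in zip(rho_vals, rho_probs))
t = 50.0
print(abs(tail_product(t) - E_rho_p / t ** p) < 1e-15)  # True
```

The "little correction" mentioned in the lecture is visible here too: for t below the largest value of rho, `tail_exp_gamma_M` is capped at 1 and the clean power law fails.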
No, not yet. I'm saying I get this for the upper bound; if I look at the lower bound I get the same thing, except that instead of ρ(α) I get the one-sided integral, half of the picture, let's say. So what am I saying? That the tail of I(α) is clearly between two constants times 1/t^{2(Q−α)/γ}. So now it's easy, because look at this: this is I(α). What happens when I look at the tail? If M is bounded, then I end up with a tail of smaller order; so if I look at the probability that I(α) is big, necessarily M is big. And if M is big, then the lower limit −L_{−M} is close to −∞, so on this event I(α) really looks like e^{γM} times the integral with −∞ in place of −L_{−M}, that is, e^{γM} ρ(α). So what I tried to argue, and I didn't give you all the technical details, is that on the event that I(α) is very big, it looks like e^{γM} ρ(α). And at the end, if you work a bit, you just have to control what's going on when M is not very big, and you can actually even get bounds on the correction terms. So this explains (I erased it, but at the end of the day, if you sum up these considerations) that the probability that I(α) is big is roughly R̄(α) = E[ρ(α)^{2(Q−α)/γ}] divided by t^{2(Q−α)/γ}. And if you work more, you get the second-order bound, etc. Okay, so I think that this explains why you see these kinds of objects. And so let me explain why we call this the partition function. What time is it? I think I'm going almost too quickly, but I wanted to show this in great detail. So let me recall what the quantum sphere is. I'm erasing exactly the wrong thing... so this is the definition, let me not erase this, sorry. So I have my measure here. Why do I say this is the partition function? Because of the following definition. So let me call
this the α-quantum sphere. (In the mating-of-trees work they consider only the case α = γ, but okay.) So the α-quantum sphere is going to be the following random measure, defined up to translations, on the cylinder. If I take a test function F defined on measures, then the α-quantum sphere is the measure given by the expectation of F of the unit-volume measure: I take my lateral-noise GMC measure, I multiply it by the exponential of γ times the two-sided drifted Brownian motion conditioned to be negative, and, since I want a unit-volume object, I divide by the total mass. My ρ(α) is exactly this total mass, so if I integrate this measure over the cylinder I get one: it's unit volume. But it is very important that there is an extra term here, a Radon-Nikodym derivative, which is a power of the total mass. And if I want this to be a probability measure, then I have to divide by the normalization, which, remember, is our R̄(α). And so, okay, it's very natural to interpret R̄(α) as the partition function of the α-quantum sphere. I wanted to give this definition because I realized that, for most people, it was not clear that there is this term in the definition: it's not just the radial part conditioned to be negative times the lateral noise; there's also a Radon-Nikodym derivative which comes out naturally. And so what you really have to understand is that the mating-of-trees paper is really constructing the random volume forms associated to Liouville conformal field theory with two marked points, one at zero and one at infinity. Okay, and what I did today is I showed you how you construct the partition function, the two-point correlation function, out of the three-point correlation function; but similarly you
can go further. So, we give a definition in lecture three: in the same way, if you take the Liouville volume form associated to the three-point correlation function we've constructed, with weights (α, α, ε), you're going to converge, as ε goes to zero, to a random measure in the space of quantum surfaces, and it's going to be this one. So what I did for the two-point correlation function can also very easily be lifted to the convergence of random measures. Okay, so this is nice, because it really shows how the link between the two works, and it had to be that way: they have two marked points, so it's the Liouville two-point correlation. Okay. So, I realize that I still have some time, so maybe I'm going to explain one or two points that I admitted on the moments of order p, and then, I don't know if it's a problem if I finish a bit earlier than expected, I hope all of this is clear. So let me explain, in a very simple way, something that I've been admitting in lecture one and lecture two. I'm not going to exactly prove it, but something I use very often is the bound p < 4/γ². So imagine I take some ball of center x and radius r. I told you something which is important: if I integrate the exponential of the free field over it, then (of course with γ in (0, 2)) the p-th moment is finite if and only if p < 4/γ². So let me try to explain to you where this 4/γ² can be seen very easily; I use this very often, so this is maybe a little digression. So let me do the following. Forget about the other terms in the covariance, they play no role; let's say I have a nice Gaussian log-correlated field with covariance log 1/|x − y|. So of course this
will not depend on r: if you show it for one open set, it's true on all of them, you just cover them. So I want to show you a three-line computation where this comes from. Do it like this: take a square, I'll call it C, and cut it into four sub-squares. So the integral over the square of my exponential can be written as the sum over the four sub-squares. Now I take p bigger than one; by superadditivity (remember, (a + b)^p ≥ a^p + b^p for a, b ≥ 0 when p ≥ 1), the integral over the square to the power p is greater than or equal to the sum of the integrals over the sub-squares, each to the power p. And this is a stationary process, so by stationarity I get four times the expectation on one sub-square: E[(∫_C e^{γX})^p] ≥ 4 E[(∫_{C_1} e^{γX})^p]. So this is obvious; these computations are not in the lecture notes, but I can put them in. Okay, now the integral over [0, 1/2]²: what is this? I do a change of variable, x = u/2, and I get, from dx, a factor 1/2 squared, so 1/4 times the integral over the initial square of the exponential of γX(u/2). And once again, what I'm doing here is formal; you can do it rigorously, you just put a cutoff, do all my manipulations, and go to the limit. Okay, now let's look at this field u ↦ X(u/2). Its covariance is log 1/|u/2 − v/2| = log 2 + log 1/|u − v|. (I was playing around with these relations ten years ago when I started GMC theory, and I really love these little manipulations, so I hope you'll like them too.) So what does it mean? It means that X(u/2) has the same distribution as X(u), the initial field, plus an independent centered Gaussian variable with variance log 2. I'm just
saying that the chaos over the sub-square has the same distribution as my initial one: it's 1/4 times e^{γN − (γ²/2) log 2}, with N the independent centered Gaussian of variance log 2, times the integral over the whole square C, but this time of X(u), my initial field, not X(u/2). So now I plug it in: the integral over the square to the power p is bigger than 4 times the same quantity on the sub-square C_1, but the chaos on C_1 is the initial chaos times an independent Gaussian factor. So let's plug this in: I get that E[M(C)^p] is greater than or equal to 4 times (1/2)^{2p} times E[e^{γpN}] e^{−(γ²p/2) log 2}, that is, 2^{2−2p} · 2^{γ²p²/2 − γ²p/2}, times E[M(C)^p]. So of course, if I want this to be compatible with a finite nonzero moment, I have to check that this prefactor is less than or equal to one; otherwise the moment is infinite, it's a contradiction. So let me write it out, because you're actually going to see the KPZ formula here, the KPZ formula on Hausdorff dimensions if you like; it's the way the measure scales when you start zooming in. The prefactor is equal to 2^{2−ζ(p)}: I get E[M(C)^p] ≥ 2^{2−ζ(p)} E[M(C)^p], where ζ(p) = (2 + γ²/2)p − (γ²/2)p², the quadratic KPZ relation. Okay, and you can check that ζ(4/γ²) = 2; take-home computation. So what does ζ look like? It's concave, don't get this wrong: it goes up, then it starts going back down, and at p = 4/γ² it passes through the value 2 and is starting to
go below 2. So if p is above 4/γ², the prefactor 2^{2−ζ(p)} is going to be greater than one. Okay, so that explains this threshold that I've been using all along; it's a very simple scaling argument. And actually, let me finish with a few minutes explaining, by roughly the same kind of trick, why if γ is bigger than 2 the measure is zero. I hope this is useful; I think you can really explain these thresholds very easily. So I'm going to explain now why γ has to be smaller than 2; it's the same kind of idea, and I've already done part of the job here. So I take my quadratic relation ζ(α) = (2 + γ²/2)α − (γ²/2)α², which is the way the moments of the measure scale when you zoom in around a point. Now, ζ'(α) = 2 + γ²/2 − γ²α, so ζ'(1) = 2 − γ²/2, and it's easy to see that if γ > 2 then ζ'(1) is negative. And I have ζ(1) = 2; that's trivial. Okay, so this means that if γ > 2, then ζ looks like this: a concave function, with ζ(1) = 2, and γ > 2 is equivalent to the derivative at 1 being negative, so the curve is coming down through the point (1, 2). So this means that I can find α strictly smaller than 1 such that ζ(α) is strictly bigger than 2. Okay, now I take α strictly less than 1, so I can use subadditivity, (a + b)^α ≤ a^α + b^α, which is true for α ≤ 1, and I do the same game: I take my square, okay, and I cut it into little squares of size 1/2^n, so I have 2^{2n} of these squares, chopping it
into little pieces, and I do the same thing, but the other way around: it's subadditive now, not superadditive, because α < 1. And what do I get? I get that if I integrate the measure on my square and raise it to the power α, the expectation is less than or equal to 2^{2n} times 2^{−nζ(α)} times the original moment, because in each little square of size 2^{−n} the α-th moment scales like 2^{−nζ(α)}. Since ζ(α) is strictly bigger than 2, I let n go to infinity and I get zero. So this shows that γ = 2 is really the threshold where the measure starts to become zero, and these little manipulations also explain why, when I take the log-correlated field, I get this bound p < 4/γ²: it's just looking at the scaling properties of the measure. For those who know this, ζ(p) is really what is behind the geometric KPZ relations. Okay, so I'm going to stop here. I finished a bit early, but I hope you don't mind; I could go on to lecture three, but I think it's not a good idea, right? Okay, thank you.

A question on the correspondence with the quantum sphere: there was a paper by Aru, Huang and Sun; did they do exactly what you showed today?

No, they did something different. They say, oh, I erased it, they say: if I take a γ-quantum sphere, basically, when you have a two-point function you can construct a three-point function with the same values of the alphas. So what you construct is the three-point correlation. Roughly, in the language of correlations: you take α equal to γ, so the γ-quantum sphere is this, and, by picking a point, you can construct the three-point correlation function with weights (γ, γ, γ). That's essentially the message of their paper. But you can only construct the three-point function with exactly the same weights. So that's kind of the take-home message.
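Coming back for a moment to the two scaling arguments: both thresholds reduce to the sign of 2 − ζ(p), and the quadratic ζ is easy to check numerically. A small sketch; the particular values of γ, α and n below are illustrative choices of mine, not from the lecture:

```python
# KPZ quadratic exponent from the scaling argument:
# zeta(p) = (2 + gamma^2/2) p - (gamma^2/2) p^2.
def zeta(p, gamma):
    return (2 + gamma ** 2 / 2) * p - (gamma ** 2 / 2) * p ** 2

gamma = 1.5                                # illustrative value in (0, 2)
print(zeta(1, gamma))                      # 2.0: zeta(1) = 2 for every gamma
print(zeta(4 / gamma ** 2, gamma))         # ~2.0: zeta crosses 2 at p = 4/gamma^2
# Just above the threshold the prefactor 2^(2 - zeta(p)) exceeds 1,
# which is the contradiction forcing infinite moments.
p = 4 / gamma ** 2 + 0.1
print(2 ** (2 - zeta(p, gamma)) > 1)       # True

# For gamma > 2: zeta'(1) = 2 - gamma^2/2 < 0 and zeta(1) = 2, so some
# alpha < 1 has zeta(alpha) > 2, and 2^(n(2 - zeta(alpha))) -> 0 with n.
gamma, alpha, n = 2.5, 0.8, 50             # illustrative choices
print(zeta(alpha, gamma) > 2)              # True
print(2 ** (n * (2 - zeta(alpha, gamma)))) # small, and -> 0 as n grows
```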
So what they did: they take the quantum sphere with α = γ (it only works for α equal to γ), you pick a point randomly on the sphere, you map it back to the complex plane, to 1 say, and then you have the same distribution as the Liouville volume form that we construct by putting three weights (γ, γ, γ). So compared to what I said today, it's kind of the other way around: I take three points, I make one disappear, and I converge to the quantum sphere; they pick a point on the quantum sphere to construct a Liouville measure with weights (γ, γ, γ). Yes. Yes. Here, no, no, no. What's really important is that the random variable has a moment of order p for all p < 4/γ². Essentially, what is very important is that if I take my two-sided Brownian motion (you can forget about the lateral noise, it plays no role), what you really need is that this process goes very fast to −∞ on both sides. And you can use comparison theorems from stochastic calculus to get this; that's where you can use it. Yeah, on some technical details you do actually use it. Yes, much faster, because it can't go and reach a new maximum, okay. So you calculate a certain moment of... ah, no, oh, sorry, of course, it's a good question. I forgot to say something, so if I may take two minutes just to answer. Let me just take one minute, sorry; it's a good question, it's related to yours, and it made me realize what I forgot to say. So, what I proved today is that
this expression with the gamma function Γ(2(Q − α)/γ), times the unit-volume quantity, so this is the partition function, if you want, of the quantum sphere: I showed that this was the limit, as ε goes to zero, of C_γ(α, ε, α). Okay, this took me an hour and a half, and I proved it today. Now, remember we proved the DOZZ formula. So, as a corollary of the DOZZ formula, and this is kind of answering some of your question, we can take the limit in DOZZ here, and since we know how to compute this, we know how to compute that, and let me write it because it's worth it: there's a μ here, it's worth −πμ times an explicit ratio of gamma functions, written with the l function, which is a ratio of gamma functions. So it's a much easier formula; it's called the reflection coefficient. So, to answer your question: as a corollary of the DOZZ theorem you get an explicit value for the two-point correlation function, and it's much easier, it's only four gamma functions. But you don't get the law, right? Okay, so this term is completely explicit: if you just take out the gammas, this one and this one, the μ, etc., you get this moment; it's roughly three gamma functions. But it doesn't give you the law, because the α here is linked to the power of the moment. So the theorems that we get on the Riemann sphere, up to now, don't enable us to get the full law of the random variable, because the order of the moment is tied to the definition of the law itself. However, what's interesting is that our PhD students are looking at the circle or the disk, and in these cases you can completely characterize the law in some situations, like in the Fyodorov-Bouchaud formula I discussed; but there, again, the moment is linked to the definition of the variable.
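For reference, in the DOZZ literature the l function mentioned here is commonly the ratio l(x) = Γ(x)/Γ(1 − x); the lecture only says "a ratio of gamma functions", so this convention is an assumption on my part. Assuming it, l is one line to compute:

```python
import math

# l(x) = Gamma(x) / Gamma(1 - x), the ratio of gamma functions appearing
# in the DOZZ formula (convention assumed, see the note above).
def l(x):
    return math.gamma(x) / math.gamma(1 - x)

print(l(0.5))             # 1.0: Gamma(1/2) cancels between numerator and denominator
print(l(0.25) * l(0.75))  # ~1.0: l(x) * l(1 - x) = 1 identically
```

Note that `math.gamma` has poles at non-positive integers, so l(x) is only defined away from those points in both arguments.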