Let's start. First, of course, I would like to thank Hugo Duminil-Copin for inviting me here to speak on Liouville theory and the DOZZ formula. With my colleagues, I found the quest of understanding this DOZZ formula quite exciting, and I hope I'll be able to communicate some of this excitement to you. When we started this a few years ago, at least for us mathematicians, it was a rather mysterious formula, and now we finally understand it, at least in a probabilistic language. So maybe I should start by explaining the research program that I've been involved in with my colleagues for the past years, and that we're still developing for the years to come. Let me at least name the colleagues who have been working with me on the program I'm going to explain in a few minutes. There's François David, who is at the CEA. There's Colin Guillarmou, who is here; he is a specialist of geometry at Orsay. There's Antti Kupiainen, who is at Helsinki, and of course Rémi Rhodes, who is at Marne-la-Vallée. And let me also say a word about my PhD students working on this project: there's Yichao Huang, who just got his PhD, and there are Guillaume Remy and Tunan Zhu. So roughly, what is our research program? It is to unify, to reconcile, two fields of theoretical physics. Let me give you both sides of the picture. On one side you have statistical physics and probability theory. What is the goal in statistical physics? It is to study and compute, say, path integrals.
So you want to compute something like a functional of a field phi with respect to some action: you have a gradient-squared term, a potential, and a background measure on your field. Roughly, a large amount of statistical physics is about studying these kinds of objects. In the discrete setting they are usually very well defined — you just have a discrete measure — and you try to take scaling limits; but you can also try to construct them directly in the continuum and study them there. Now for Liouville theory. For those who don't like path integrals, let me say right away that today, except for this small introduction, there will be no path integrals, just probability theory. But let me say a few words. In Liouville theory, I'm going to put a 1 over 4 pi in front of the action — it doesn't matter much, but it keeps me consistent with my notes — and the potential is 4 pi mu times exponential gamma phi, where mu is a positive cosmological constant and gamma is a parameter belonging to the interval (0, 2). I'm only going to discuss this case in these lectures; one can go beyond gamma equals 2, but I will not discuss that here. And if I really want to be rigorous, I have to add a curvature term: on a manifold you add Q times the curvature times phi, where Q is the special value gamma over 2 plus 2 over gamma that people have seen in talks on Liouville quantum gravity. And in Liouville theory, what you want to compute, in the statistical physics language, are the path integrals giving the correlations of your field — with K insertion points in my notes, V alpha 1 of z 1 up to V alpha K of z K.
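Putting the pieces just described together, the Liouville action can be written schematically as follows; this is a reconstruction from the spoken description, with the speaker's 1/4π normalization:

```latex
S(\phi) \;=\; \frac{1}{4\pi}\int_{\Sigma}\Big( |\partial_g \phi|^2 \;+\; Q\,R_g\,\phi \;+\; 4\pi\mu\, e^{\gamma\phi}\Big)\, d\lambda_g,
\qquad Q=\frac{\gamma}{2}+\frac{2}{\gamma},\quad \gamma\in(0,2),\quad \mu>0.
```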
These points z_k belong to the complex plane; in Liouville theory the manifold is C, or rather the Riemann sphere. And what is the definition of this object? V alpha of z is exponential alpha phi of z — I'm writing it in physicist notation. So this is Liouville, but more generally we are on the statistical physics side of the picture. Now, if you do probability theory, what are the objects involved on this side? There is the Gaussian free field — the GFF; in these lectures, GFF will stand for Gaussian free field. This is the rigorous object behind the gradient phi squared. And in Liouville theory the potential is an exponential, so there is the exponential of the Gaussian free field, which is Kahane's Gaussian multiplicative chaos — GMC in the lecture notes and in these talks. So if someone forgets at some point: GFF is the free field, GMC is the chaos. Now, in the statistical physics literature there are lots of formulas for these path integrals, or for these exponentials of the Gaussian free field. Let me give you examples: there is the Fyodorov-Bouchaud formula, and there are formulas by Fyodorov, Le Doussal and Rosso. They have exact formulas for these kinds of path integrals — in probabilistic language, for exponentials of free fields. These date from the middle of the 2000s. Okay, so that's one side of the picture.
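In path-integral notation, the vertex operators and correlations the speaker refers to read, schematically:

```latex
V_\alpha(z) \;=\; e^{\alpha\phi(z)},
\qquad
\Big\langle \prod_{k=1}^{K} V_{\alpha_k}(z_k)\Big\rangle
\;=\; \int \prod_{k=1}^{K} e^{\alpha_k \phi(z_k)}\; e^{-S(\phi)}\, D\phi .
```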
Now there is another side of the picture, and there are numerous specialists of that side in this room. [From the audience:] These kinds of formulas about the Gaussian free field were not known until the 90s? I thought everything about the Gaussian free field was known. [Answer:] It's not the Gaussian free field, it's the exponential of the Gaussian free field. [Audience:] Still, I'm not surprised that it's known exactly; I'm surprised you say it was only discovered then. [Answer:] Okay, so here's an open question, and I don't know if physics can answer it. If you take the exponential of a Gaussian free field on the Riemann sphere and integrate it over the sphere, you get a random variable. What is its distribution? Does it have a density, and what is the density? That you cannot extract from the DOZZ formula, a priori. I know what you're thinking, because you're a specialist of conformal field theory: you compute correlations, and they're trivial. No, I agree. But what is not trivial — and this is what appears in Liouville field theory — is that you integrate your vertex operators over a surface, and then you have fractional moments of this to compute, and that is hard. I agree that if you take the expectation of a product of these free-field exponentials, it's trivial; however, once you integrate over a surface, finding the distribution is very tough, and I think no one knows the answer to the question I raised. There is one case where the answer is known: the exact distribution of the integral over 0 to 2 pi of the exponential of the free field on the circle. This has an exact distribution, and it was conjectured by Fyodorov and Bouchaud.
And it was proved a month ago by Guillaume Remy, one of our students: Remy proved that this formula is correct, and with Tunan Zhu they are generalizing it. How do you do this? Well, you map it to Liouville theory. Okay, so on the other side there is CFT — conformal field theory. Nowadays there are specialists of the conformal bootstrap in 3D and in 2D; that's the other side of the picture, and a rather different language appears there. The goal, just like on the other side, is to study and compute correlations, and they use a very powerful method called the conformal bootstrap. It was used efficiently by you and your colleagues in 3D, and there are Sylvain Ribault and Raoul Santachiara who work in this framework in 2D. On this side of the picture you have lots of natural objects, like the stress-energy tensor, which gives rise to Ward identities — the holomorphic Ward identities. You have what are called degenerate fields, which give rise to differential equations, the BPZ equations, after Belavin, Polyakov and Zamolodchikov (1984). And of course you have formulas — for instance the DOZZ formula, after Dorn and Otto and the brothers Zamolodchikov, Zamolodchikov, who found this formula independently around 1994-1995. Okay, so our program is to reconcile, using rigorous probability, these two worlds in the case of Liouville conformal field theory. Now, there's a broad audience here and I guess I have to convince everyone that they should come back on Friday. I put out lecture notes and it's filmed, so I think it's a hard task.
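For reference, the Fyodorov-Bouchaud prediction proved by Remy can be stated as follows, where Y denotes the total mass of the GMC measure built from the Gaussian free field on the unit circle; the exact normalization conventions here are my recollection of Remy's statement, so treat the constants as an assumption:

```latex
\mathbb{E}\big[Y^{p}\big] \;=\; \frac{\Gamma\!\big(1-p\,\tfrac{\gamma^2}{4}\big)}{\Gamma\!\big(1-\tfrac{\gamma^2}{4}\big)^{p}},
\qquad p < \frac{4}{\gamma^2},
```

equivalently, Y has the law of $\Gamma(1-\gamma^2/4)^{-1}\,E^{-\gamma^2/4}$ with $E$ a standard exponential variable.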
But still, let me try to convince you why this can be interesting. For instance, in this room there are specialists of the bootstrap. Why can it be interesting to them? First, when you construct correlations using the so-called conformal bootstrap, this recursive method, on a conceptual level it's not obvious that the correlations exist in a consistent way: there are what are called crossing symmetry relations, and you have to check numerically that what you're doing is consistent. This way of doing things, for Liouville, is going to prove consistency of the theory. That's one thing; maybe it's too conceptual. But there are other reasons that can interest physicists: by constructing correlations with probability, in these path-integral formulations, in some cases we get easier-to-handle formulas for the correlations compared to the conformal bootstrap. In particular, in lecture three I will state the so-called Knizhnik-Polyakov-Zamolodchikov conjecture on planar maps. In this conjecture, the correlations of Liouville theory arise as scaling limits of observables on a planar map, and they can be rather easily described using probability theory, whereas in the bootstrap language they are not that easy to describe. Of course we can debate this, but I think probability makes these correlations easier to study. The take-home message: probability enables you to construct nice formulas for correlations — sometimes nicer than what the bootstrap method gives. Okay; as for specialists of planar maps, I have already hinted at why this can be interesting for you.
It's because of the KPZ conjecture: you can formally embed a planar map in the sphere, and when you take the scaling limit, the fields on this map — the volume form, or the Ising spin if you're studying the Ising model, say — are supposed to be described by this conjecture. The KPZ conjecture tells you that the scaling limits factorize into, let's say, a regular-lattice part times a Liouville conformal field theory part. I'll explain the KPZ conjecture in a bit more detail in lecture three. And finally, for those who are specialists of statistical physics or probability theory, as I already said, you can see this program as a way of obtaining exact formulas for the exponential of the free field — so it's an integrability program, if you're not at all interested in the conformal bootstrap or conformal field theory. Okay, so I hope I've convinced everyone that there could be some interest in coming back. Now let me give you a little summary of the four lectures I'm going to give here. The lectures are really going to follow these notes: I'm going to do everything on the board following the lecture notes, so you can open them up and follow easily. So, lecture one. What is lecture one? In lecture one I'm going to forget about this path-integral business and do probability theory: I'm going to give a probabilistic definition of the Liouville correlations. This is based on our work with François David, Antti Kupiainen and Rémi Rhodes, "Liouville quantum gravity on the Riemann sphere".
So today I'm just going to set a probabilistic definition, and it's only in lecture three that I'm going to justify this definition from the path-integral perspective. Today I just want to define every single object we're going to work with in a very precise way, using probability. And I'm going to state the DOZZ theorem. This is based on two works with Antti Kupiainen and Rémi Rhodes — KRV one and KRV two, let's say — where we gave a probabilistic content to DOZZ. Okay, that's lecture one; let me go to lecture two. In lecture two, I'm going to define the two-point correlation function of Liouville theory; more precisely, I'm going to give a probabilistic formula for the two-point correlation. Here's another abbreviation: LCFT, for Liouville Conformal Field Theory. In Liouville conformal field theory, the two-point correlation function is called the reflection coefficient. I'm going to define it in my language, with these vertex operators. It depends on gamma and mu, and I'm going to stress this. It's defined, quite naturally, as the limit as epsilon goes to zero of epsilon times a three-point correlation function: the third vertex operator is exponential epsilon phi, which goes to one as epsilon goes to zero, but the three-point function blows up in this limit, and if you cure the divergence by multiplying by epsilon, you get a two-point correlation function. And for alpha equals gamma, I'm going to explain why this is the partition function of a random measure called the quantum sphere.
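Schematically, and up to the precise normalization used in the lecture notes, the definition just described reads:

```latex
\langle V_\alpha(z_1)\,V_\alpha(z_2)\rangle_{\gamma,\mu}
\;:=\; \lim_{\varepsilon\to 0}\ \varepsilon\,
\big\langle V_\alpha(z_1)\,V_\alpha(z_2)\,V_\varepsilon(z_3)\big\rangle_{\gamma,\mu},
```

since $V_\varepsilon = e^{\varepsilon\phi} \to 1$ pointwise while the three-point function diverges like $1/\varepsilon$.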
The quantum sphere is an equivalence class of random measures, introduced by Duplantier, Miller and Sheffield in a paper called "Liouville quantum gravity as a mating of trees". They develop a theory of random surfaces with two marked points, zero and infinity, and I'm going to explain how this enters precisely into the framework of Liouville field theory: it's nothing but working with the two-point correlation function of Liouville. This I'll explain, and of course I'll get a corollary of the DOZZ formula: DOZZ is a formula for the three-point correlation function, so if you have a formula for the three-point function, you get a formula for the two-point function, since it's a limit of three-point functions. So we get an exact formula from DOZZ for this object as well. Maybe this vertex-operator notation is still a bit abstract, but I'm going to jump into the rigorous definitions in ten minutes. Lecture three: I'm going to explain why the definitions of lecture one, of these Liouville correlations, are a faithful representation of these path integrals. It could be interesting to physicists to see how, from a path integral which is ill-defined for us, we get a clean probabilistic definition of the Liouville correlation functions. Also in lecture three I'll state the KPZ conjecture on planar maps, which says that Liouville theory describes the scaling limit of observables on planar maps. Finally, in lecture four, I'm going to give a sketch of the proof of the DOZZ formula. I made a choice for these lectures: I'd rather people get a good understanding of the definitions than jump straight into proving the DOZZ formula.
I really want to spend time defining the correlation functions, the two-point correlation functions, and so on; then in lecture four I'll go to the proof. The main ingredients are the BPZ equations, which we are going to implement in a probabilistic setting — remember, I want to reconcile two worlds, and I'm going to show you how to implement this into the world of probability. This is based on the first paper with Antti and Rémi. Essentially, the idea is that if you want to compute a three-point correlation function, you add what is called a degenerate field. In a language maybe more understandable for probabilists, what we do is this: you want to compute an integral. Let me give you an example — if there is a take-home message on how we prove the DOZZ formula, it's the following. Imagine you want to compute the integral of sin(x)/x over (0, infinity). There are numerous methods, but when you're starting out in math, probably the most famous one is this: if you try to compute the integral directly, you usually fail. So what do you do? You add a parameter inside and study the associated function Phi(lambda), the integral of exp(-lambda x) sin(x)/x. The integral you want to compute is Phi(0), and Phi(infinity) is 0, so you know one boundary condition but not the other. Why is this interesting? Because you can take the derivative of Phi, and the derivative you can compute: it's minus 1 over (lambda squared plus 1). More generally, Phi is going to satisfy a differential equation. And the same idea lies behind the proof of the DOZZ formula: you want to compute some kind of infinite-dimensional path integral, in the physics language. You add a parameter inside, a complex variable z, you show that this quantity satisfies a differential equation, and you look at what happens when z goes to 0 or z goes to 1 — the boundary conditions.
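The toy computation just described can be checked numerically. This is my own illustrative sketch, not from the notes: Phi(lambda) is evaluated by a simple trapezoid rule, and since Phi'(lambda) = -1/(1+lambda^2) with Phi(infinity) = 0, one gets Phi(lambda) = pi/2 - arctan(lambda), hence Phi(0) = pi/2 for the original integral.

```python
import math

def phi(lam, cutoff=60.0, n=200_000):
    """Trapezoid-rule evaluation of Phi(lam) = integral_0^inf exp(-lam*x) * sin(x)/x dx."""
    h = cutoff / n
    total = 0.5 * 1.0  # the integrand at x = 0 is 1 (the limit of sin(x)/x)
    for k in range(1, n):
        x = k * h
        total += math.exp(-lam * x) * math.sin(x) / x
    total += 0.5 * math.exp(-lam * cutoff) * math.sin(cutoff) / cutoff
    return total * h

# Phi(lam) = pi/2 - arctan(lam); at lam = 1 this is pi/4
print(phi(1.0), math.pi / 4)

# Differentiating under the integral sign: Phi'(lam) = -1/(1 + lam^2), so Phi'(1) = -1/2
d = (phi(1.0 + 1e-4) - phi(1.0 - 1e-4)) / 2e-4
print(d)
```

The damping factor exp(-lam*x) is exactly what makes the differentiated integral elementary — the same role the added parameter z plays in the BPZ argument.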
And these boundary conditions, you can imagine, are going to be three-point correlation functions. So you're going to get lots of relations between these three-point correlation functions, and at the end you're going to conclude that there's only one solution: DOZZ. That's the rough idea behind the proof. It's based on this, and on studying this four-point correlation function near the boundary, as z goes to 0 or 1. Now, what happens when z goes to 0 or 1? A physicist would say: you're doing an operator product expansion, an OPE — studying your four-point correlation near the boundary. A mathematician would say: I'm doing a Taylor expansion. And this idea of adding a parameter to the three-point correlation function, looking at what happens at the boundary values, and using the relations called crossing symmetry, is a beautiful idea. In fact, in the context of Liouville theory it was an idea of Teschner, so maybe I should put his name here: he's a physicist, but he had nice ideas on Liouville theory, and I'd like to emphasize his name. [Laughter.] You laugh because I said he's a physicist but he had nice ideas — ah, okay, that's horrible, and it's filmed, too. No, I cite physicists a lot, and I gave a list of reviews from physics; I take physics seriously. Okay, so that's the plan; maybe I was a bit long. Now I'm going to jump into lecture one and give a probabilistic definition of the Liouville correlations: I'm going to set the definition and show you why it exists. The first thing I have to do, as written in the lecture notes, lecture one, is to introduce the Gaussian free field, because I need it to give a meaning to the gradient-squared term. Then I'm going to introduce the exponential of the Gaussian free field, to give a meaning to the potential term. And then I'm going to set my definition for Liouville theory, and we can discuss it.
You'll see the Gaussian free field and the vertex part — I'll do a special one-minute explanation. And at the end, if I have time, I'll present how DOZZ themselves found the DOZZ formula. Okay, so let's start. Once again, for the physicists: there is going to be formal math. I'm sorry, I have to give exact definitions — there are lots of mathematicians in the room, and after all, it's my job — and I hope it won't bore them with technical, formal definitions. So I'm going to start with the Gaussian free field: the GFF and Gaussian multiplicative chaos. I'm following my lecture notes; I'm at section 2.1, so now I'm really reading what you all have under your eyes. I'm going to start with the full-plane Gaussian free field; let's call it X bar. First, a few notations. What I'm going to call S(C) is the Schwartz space of functions: smooth, C-infinity, with all derivatives going to 0 faster than any polynomial — the space on which you can take Fourier transforms. The tempered distributions, S prime of C, are the objects you're allowed to pair with these rapidly decreasing functions — note the prime on the board: it's S prime for the distributions, S for the functions. Now, there is a subspace of these rapidly decreasing functions: the ones with average 0. I'll say that f is in S_0(C) if it has average 0, that is, the integral over C of f(x) against the Lebesgue measure — which I'll write d²x — is equal to 0. And there is also a corresponding space of distributions acting on these average-0 functions: the tempered distributions modulo constants; they are defined up to a constant.
And so the full-plane GFF, X bar — this is Definition 2.1. The full-plane GFF is what I call X bar, and I use the notation of generalized functions: it's a random distribution, living in the tempered distributions modulo R, so it's defined up to a constant. If I pair it with a function f of average 0 and with another function g of average 0, then each pairing is a Gaussian variable, and to determine my field I just need to specify the covariance of these two pairings. It's given by a double integral: the integral of f(x) g(y) log(1/|x − y|) d²x d²y. This kernel is nothing but a Fourier multiplier: in Fourier variables it is proportional to 1 over |xi| squared, so for f = g the covariance is non-negative. And the field is defined only up to a constant because if you change X bar by a constant, nothing changes when tested against average-0 functions: the log kernel is only conditionally positive definite. Now, once I have a full-plane GFF, I can construct all the Gaussian free fields on the plane that I need in these lectures. What you do is written in the notes: if you want something defined not merely modulo constants, all you have to do is take X bar and subtract an average of X bar with respect to a probability measure. So if I have a probability measure rho on the complex plane, I can look at the field X bar minus its average against rho. This field is no longer defined only up to a constant.
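In symbols, the definition just stated is the following; the constant c in the Fourier expression depends on normalization conventions and is left unspecified:

```latex
\mathbb{E}\big[\langle \bar X, f\rangle\,\langle \bar X, g\rangle\big]
\;=\; \int_{\mathbb{C}}\!\int_{\mathbb{C}} f(x)\,g(y)\,\ln\frac{1}{|x-y|}\; d^2x\, d^2y
\;=\; c\int_{\mathbb{C}} \frac{\hat f(\xi)\,\overline{\hat g(\xi)}}{|\xi|^{2}}\, d^2\xi,
\qquad f,g \in \mathcal{S}_0(\mathbb{C}),
```

and the Fourier side makes the non-negativity for $f=g$ manifest.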
Because if I shift X bar by a constant, the constant appears in both terms and cancels. So this field is really defined against any test function, and this is how you produce all your Gaussian free fields on the complex plane: you take a full-plane field, defined up to a constant, with logarithmic covariance; you subtract its average with respect to a probability measure, and that gives you a Gaussian free field. Of course, rho has to have some regularity, because if you take a measure giving mass only to a single point, the average is not defined. Essentially — and I will not discuss this any further in these lectures — you can integrate your Gaussian free field against much more than just smooth functions, as long as a certain energy is finite. For instance, you can take merely continuous functions, or even probability measures which are not regular at all. So the real definition requires a finite-energy condition: I start with a clean definition, but as soon as that condition holds, I'm allowed to consider this average and hence this field. I give examples in the lecture notes, but to avoid things getting complicated, I will work with a single Gaussian free field in all four lectures: I take rho to be the uniform probability measure on the unit circle. So in all these notes, X is going to be X bar of x minus the average of X bar over the unit circle, one over 2 pi times the integral from 0 to 2 pi of X bar of e to the i theta, d theta. Let me register this formula — I have an index of notations in my lecture notes, and I'm only working with this field. If you take its average with respect to the unit circle, it's worth 0. So this is the Gaussian free field, and its covariance can be computed explicitly; it's very easy.
For a mathematician this is an abuse of notation, of course — the field doesn't exist pointwise — but I think you get the point: you integrate against functions, and what I write makes sense. You can compute the covariance of this field explicitly, with a fixed notation: |x| plus denotes the maximum of |x| and 1. So this is the Gaussian free field I'll be working with throughout the notes: the full-plane field minus its average over the unit circle. It's a well-defined random field, living in the space of Schwartz distributions, with the explicit covariance written on the board. So we have to agree on this definition of the free field. That was the gradient part. (Is the break at the half, or after two hours? Okay, I think I'll take a two-minute break at 3 p.m.) Now I'm going to define the exponential of the Gaussian free field X that I'll be working with in all these notes. Formally, it's a random measure — I think physicists would rather hear "a random volume form", because I guess physicists would say that the d phi is the measure. So it's a random volume form, formally written as exponential of gamma X times g(x) d²x, where X is my Gaussian free field, fixed for these lectures; this is the object I'm going to define now. And what is g(x)? It's 1 over |x| plus to the fourth power: a nice continuous function, worth 1 inside the unit disk and 1 over |x| to the power 4 outside it, defined on the whole complex plane — remember my notation, |x| plus is max(|x|, 1). It will become clear in lecture 3 why I'm introducing this function. Now, since the field is not defined pointwise, you have to regularize it. That is Proposition 2.2, Kahane '85: how do you define this object?
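The explicit covariance the speaker writes on the board is, in the notation of the lecture notes (with $|x|_+ = \max(|x|,1)$):

```latex
X(x) \;=\; \bar X(x) \;-\; \frac{1}{2\pi}\int_0^{2\pi} \bar X(e^{i\theta})\, d\theta,
\qquad
\mathbb{E}\big[X(x)\,X(y)\big] \;=\; \ln\frac{1}{|x-y|} \;+\; \ln|x|_+ \;+\; \ln|y|_+ .
```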
Well, you take your Gaussian free field, you regularize it with any reasonable smoothing procedure, and you take the limit: the measure is equal to this limit. In the language of probability: X epsilon is the Gaussian free field smoothed at scale epsilon. What am I typically going to take here? X epsilon of x will be a mollification of X: theta is some smooth function, say, and I mollify my Gaussian free field against it, smoothing it at scale epsilon. I take the limit, and I get a random measure, which is called Gaussian multiplicative chaos. This is what Kahane proved in 1985. I don't want to go into the whole history of these measures: he proved it in some setting, and then there has been a huge number of papers extending the setting of convergence of this theorem. I give a reference in these notes to a work of Nathanaël Berestycki, who has a nice proof of this kind of theorem in a general setting. Actually, you can smooth the field with something which is not even C-infinity, and you can take lots of different smoothing procedures. One that I'll use here is the circle-average smoothing — I guess Duplantier and Sheffield used this procedure in one of their papers; well, in all of them, actually. So basically the take-home message is: however you regularize your field, there is a theorem that says the regularized measures converge to a random volume form, and this random volume form does not depend on the way you smooth the field. If you replace theta by some theta tilde, you get the same limit, in probability. And you can even work with discrete Gaussian free fields.
If you want, you can start with a discrete Gaussian free field, construct a discrete measure, and go to the limit: you will always converge to the same object, called Gaussian multiplicative chaos. Okay, now where does this condition gamma smaller than 2 come from? Kahane proved — let me put it in the same Proposition 2.2 — that M gamma is different from 0 if and only if gamma, say positive (by symmetry you can take it positive), belongs to the interval (0, 2). That's where the condition comes from. Why is gamma equals 2 the threshold? I can give you a few-line proof by a scaling argument, but basically, if you take epsilon very small and simulate this density, you will see that as gamma gets big it concentrates more and more on very large spikes, and when gamma gets too big there is not enough room for even one spike to exist: the mass gets more and more concentrated on a very small set, and at some point there is not enough room for it to exist anymore. That's what's happening. There are very simple arguments explaining why gamma equals 2 is a threshold; maybe I'll give one after the break. Okay, now let me state some things that are known. I didn't put a name on this theorem — maybe Julien Barral, who was here, knows who did it first. So take an open set O in the complex plane. Expectation — I'm a probabilist, of course; for physicists, this is the average with respect to the Gaussian free field X.
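To make the regularize-and-normalize construction concrete, here is a minimal numerical sketch of my own (not from the notes): a log-correlated field on the unit circle is approximated by a truncated random Fourier series, and the normalized exponential is integrated over the circle. The truncation level, grid size, and the choice gamma = 0.5 are arbitrary; the normalization makes the expected total mass exactly 1.

```python
import math, random

random.seed(7)
gamma = 0.5
n_modes, n_grid, n_samples = 32, 128, 500

thetas = [2 * math.pi * j / n_grid for j in range(n_grid)]
amp = [math.sqrt(2.0 / k) for k in range(1, n_modes + 1)]
cos_t = [[math.cos(k * t) for t in thetas] for k in range(1, n_modes + 1)]
sin_t = [[math.sin(k * t) for t in thetas] for k in range(1, n_modes + 1)]
# pointwise variance E[X_N(theta)^2] = sum 2/k, the same at every theta
var = sum(2.0 / k for k in range(1, n_modes + 1))

def total_mass():
    """One sample of (1/2pi) * integral of exp(gamma*X_N - gamma^2/2 * Var) dtheta, discretized."""
    a = [random.gauss(0, 1) for _ in range(n_modes)]
    b = [random.gauss(0, 1) for _ in range(n_modes)]
    mass = 0.0
    for j in range(n_grid):
        x = sum(amp[k] * (a[k] * cos_t[k][j] + b[k] * sin_t[k][j]) for k in range(n_modes))
        mass += math.exp(gamma * x - 0.5 * gamma ** 2 * var)
    return mass / n_grid

mean = sum(total_mass() for _ in range(n_samples)) / n_samples
print(mean)  # the normalization makes E[mass] = 1, so this should be close to 1
```

Raising gamma toward 2 in the same simulation shows the mass concentrating on fewer and fewer spikes, which is exactly the heuristic behind the gamma < 2 threshold described above.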
This volume form, integrated over a non-empty open set O, has a moment of order p if and only if p < 4/γ². In fact, to define correlations we need a more precise statement. Take a point z in the complex plane and put a singularity around it, that is, integrate |x − z|^{−γα} against the volume form, and ask the same question: when is the moment of order p finite? The answer: it is finite if and only if p < min(4/γ², (2/γ)(Q − α)), where Q is the value which appears up there, Q = γ/2 + 2/γ. So I am going to take a two-minute break now. What did I do in this first hour? I introduced the Gaussian free field I will be working with in all these lectures, written here with this covariance. I introduced the exponential of the Gaussian free field, this random volume form. And we admitted some very important basic properties of these random volume forms: they are non-trivial if and only if γ is in (0, 2); when I integrate them over open balls, the moment of order p exists if and only if p < 4/γ²; and if I put a singularity |x − z|^{−γα} around a point z, the moment of order p exists if and only if p < min(4/γ², (2/γ)(Q − α)). So I am going to stop for two minutes now. ... So we start again.
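The two moment conditions can be packaged into a tiny helper (hypothetical names, simply transcribing the bounds just stated):

```python
def Q(gamma):
    """Q = gamma/2 + 2/gamma."""
    return gamma / 2 + 2 / gamma

def critical_moment(gamma, alpha=None):
    """Sup of p such that the p-th moment of the GMC mass is finite:
    p < 4/gamma^2 for a plain ball, and additionally
    p < (2/gamma) * (Q - alpha) with an |x - z|^(-gamma*alpha) insertion."""
    bound = 4 / gamma**2
    if alpha is not None:
        bound = min(bound, (2 / gamma) * (Q(gamma) - alpha))
    return bound
```

For γ = 1 one gets Q = 2.5, so a plain ball has moments up to order 4, while an insertion with α = 1 lowers the threshold to (2/γ)(Q − α) = 3; at α = γ/2 the two bounds coincide.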
Okay, so let me remind you: we looked at the random variable obtained by integrating our GMC volume form against the singularity |x − z|^{−γα}, which blows up when α is positive, and we had the condition for existence of this moment. Recall Q = γ/2 + 2/γ. A side remark: for α = γ/2 the two bounds coincide, since (2/γ)(Q − γ/2) = 4/γ². So for α > γ/2, putting in this blowing-up singularity gives the random variable a fatter tail, while for α < γ/2 it does not change the moments. [Question: is there an explicit formula for the law of this random variable?] No. But that is lecture two: in lecture two I am going to compute the tail of this random variable, and I will get a constant over a power, and the constant is the two-point correlation function of Liouville theory, the reflection coefficient. So there is an explicit formula for the tail of this variable, and that is the two-point correlation function of Liouville. The take-home message, by the way, is that the two-point correlation functions of Liouville, these quantum-sphere objects in the mating-of-trees paper, are the constants in the tail expansions of GMC volume forms. That is lecture two. Right now I just register this fact. Now that I have defined all this, I can give a definition. I am reading formula (2.13) in the lecture notes; I didn't put it in a definition box, but it is (2.13).
The product of my fields is, by definition,

⟨∏_{k=1}^n V_{α_k}(z_k)⟩ := 2 μ^{−s} γ^{−1} Γ(s) ∏_{i<j} |z_i − z_j|^{−α_i α_j} × E[ ( ∫_C f_z(x) M_γ(d²x) )^{−s} ],

where I take n points z_1, ..., z_n in the complex plane (the bold z stands for the whole collection) and s is, by definition, s = (∑_{k=1}^n α_k − 2Q)/γ. The expectation is the essential part, and I will write the function f_z on a separate board. [Question: can one translate physically what γ < 2 means?] Yes: it says that the operator e^{γX} is relevant, and we know that if it is not relevant then there is no continuum limit; it doesn't make sense to perturb the theory by an irrelevant operator. Okay, yes, that is one way to be happy with it: you can't put α beyond the threshold without the term dropping out of the action. But we actually prove this; we spent lots of time on it. So, the definition: the Liouville correlation functions are 2 times μ to the power −s (μ is a positive constant, it plays no role, it is just a scaling factor), times γ^{−1}, a global constant, times the Gamma function Γ(s) (everyone knows the Gamma function: the meromorphic function with poles at the non-positive integers), times this cross product, the Gaussian free field part, times the expectation of the GMC integral to the power −s.
And what is really special about Liouville, and what makes it difficult, is to compute this expectation: the integral of my volume form against the function f_z, raised to the power −s. So what is this function? It is

f_z(x) = ∏_{k=1}^n ( |x|_+ / |x − z_k| )^{γ α_k},

where |x|_+ is worth 1 inside the unit disk and |x| outside. What is especially interesting is the factor 1/|x − z_k|^{γα_k}: you get singularities around the points z_k in your definition, and you have to integrate them against the measure. So that is the definition. I decided to throw this definition into lecture one because I wanted everyone to see it. If you want to say something about Liouville, at least as a probabilist, you have to study this object. That is the goal. [Question: how do you get this formula?] It is not very hard to derive from the path integral; I will explain this in lecture three. The main ingredient is Girsanov's theorem, which for a physicist is just the complete-the-square trick. I can explain it anyway when I do lecture three. So, first things: when does this object exist, in the sense that it is non-trivial, not equal to zero? Everything we need is on the board: you have to check that this moment exists and is non-trivial. The expectation belongs to (0, ∞) if and only if −s, which I can write explicitly as (2Q − ∑_{k=1}^n α_k)/γ, satisfies the following.
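As an aside, the complete-the-square trick in its simplest form is the identity E[e^{γX}] = e^{γ²σ²/2} for X ~ N(0, σ²); Girsanov's theorem is the same computation read as a change of measure. A quick numerical check of the identity, with a plain midpoint sum (my own illustration):

```python
import math

def gaussian_exp_moment(gamma, sigma=1.0, half_width=10.0, steps=20000):
    """Midpoint-rule approximation of E[exp(gamma * X)], X ~ N(0, sigma^2)."""
    h = 2 * half_width / steps
    total = 0.0
    for k in range(steps):
        x = -half_width + (k + 0.5) * h
        total += math.exp(gamma * x) * math.exp(-x * x / (2 * sigma**2))
    return total * h / (sigma * math.sqrt(2 * math.pi))

approx = gaussian_exp_moment(0.8)
exact = math.exp(0.8**2 / 2)  # complete the square: E[e^{gamma X}] = e^{gamma^2/2}
```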
First, if I take a ball which does not intersect the z_k's, I need the volume form to have a moment of order −s there, so I need −s < 4/γ². Next, around a ball centered at z_k the function has the singularity |x − z_k|^{−γα_k}, so it has to have a moment of order −s with that insertion, and I just copy the condition from before; I need it around each point. Altogether:

−s < min( 4/γ², min_{1 ≤ k ≤ n} (2/γ)(Q − α_k) ).

This is the condition ensuring existence of the Liouville correlations in a probabilistic setting. I just copied the bounds from up there, because my function has a γα_k singularity around each point. And that is how far you can get using probability to define Liouville correlations. Of course, outside these bounds you can still write the object down, but it is zero or infinity, which is not the right object. So probability enables you to define Liouville within a certain set of bounds, but not outside them. Actually, we are trying to see how far we can go with complex α_k's, et cetera, but for real α_k's this is as far as you can get. Let me make a remark: it is very important that we have these bounds, because they include the famous Seiberg bounds, from 1990, a review that I cite in my lecture notes. Oh, sorry, I forgot an essential bound; I have to add it. Where am I going to add it?
I also need α_k < Q for each k, because if α_k ≥ Q it is easy to show that this integral is worth infinity almost surely. So these are the two sets of bounds, and, as I wanted to say, they include the Seiberg bounds. Seiberg was working in the path-integral formulation in 1990, and he found bounds for existence, but we can go beyond the Seiberg bounds, and it is very important to go beyond them. Seiberg's conditions were: 2Q − ∑_k α_k < 0, and α_k < Q for all k. This is of course included in ours, which are strictly bigger, since the right-hand side of our condition is positive. So these are the bounds for existence. Now, I took some time to explain this. Of course these global constants can seem very mysterious, but they are due to the fact that we want to show the three-point correlation is DOZZ, and we have to tune the constants for that to be true. Of course DOZZ is defined up to a constant, but when we prove a theorem we need to put the exact constants we want. [Question: would you put this as a definition? Can you explain where it comes from?] Lecture three. I wanted everyone to see the definition, and it is a well-posed definition. Where does it come from? Essentially, this part comes from the Liouville potential, the e^{γφ} up there, and this part comes from the Gaussian free field. Everything is explained in lecture three in a transparent way; I will explain next week where it comes from. For now I am trying to give the definition. What one can actually show is the following, the KPZ relation: if I take a Möbius transform of the Riemann sphere...
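To keep track of the discussion, here is a small checker (hypothetical helper functions, just transcribing the extended probabilistic bounds and, for comparison, the original Seiberg bounds):

```python
def correlation_exists(gamma, alphas):
    """Extended probabilistic bounds for <prod_k V_{alpha_k}(z_k)>:
    -s < min(4/gamma^2, min_k (2/gamma)(Q - alpha_k)) and alpha_k < Q,
    with s = (sum(alphas) - 2Q) / gamma."""
    Q = gamma / 2 + 2 / gamma
    s = (sum(alphas) - 2 * Q) / gamma
    cap = min([4 / gamma**2] + [(2 / gamma) * (Q - a) for a in alphas])
    return -s < cap and all(a < Q for a in alphas)

def seiberg_bounds(gamma, alphas):
    """Original Seiberg (1990) conditions: sum(alphas) > 2Q and alpha_k < Q."""
    Q = gamma / 2 + 2 / gamma
    return sum(alphas) > 2 * Q and all(a < Q for a in alphas)
```

For γ = 1, the triple (2.4, 2.4, 0.15) satisfies the extended bounds but violates Seiberg's condition ∑α_k > 2Q, illustrating why going beyond the Seiberg bounds matters.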
This will actually be proven in lecture three, but I am stating it here. One can prove the following: if ψ is a Möbius map of the Riemann sphere, then

⟨∏_k V_{α_k}(ψ(z_k))⟩ = ∏_k |ψ'(z_k)|^{−2Δ_{α_k}} ⟨∏_k V_{α_k}(z_k)⟩,

with conformal weights Δ_α = (α/2)(Q − α/2). (Of course, these correlations also depend on γ and μ; sorry, in my notes this is indicated.) So the correlations behave like a conformal tensor, and I am happy: my definition is correct. I will explain this better in lecture three. For people who know nothing of conformal field theory, this is essentially the defining property: you take a product of fields, you move the points by a Möbius map, and if what you get is the same thing times a product of derivatives of the Möbius map to some powers, called the conformal weights, then you have constructed a genuine conformal field theory. This will be proved in lecture three. So what am I doing at the end of today's lecture? I am heading to the theorem we proved, and I don't want to erase the definition. Now, on the Riemann sphere, if you take three points and another three points, there is a unique Möbius map which sends the first three to the other three. So this conformal covariance condition restricts the three-point function. So now I am heading to the theorem. I have a probabilistic expression, and this is a very familiar computation for specialists of conformal field theory; it is kind of where all of it started, these three-point correlation functions that Polyakov was computing.
Well, you have this formula here, and here I have a constant; I should make it depend on μ in my lecture notes, I didn't, but it is implicit. So I want to compute the three-point correlation function of the theory. If I apply a Möbius map, the covariance relation over there holds, with the ψ' factors. Here is a take-home exercise: for ψ(z) = (az + b)/(cz + d), one has

ψ(z_i) − ψ(z_j) = ψ'(z_i)^{1/2} ψ'(z_j)^{1/2} (z_i − z_j),

so the difference picks up exactly the derivatives. Using this, if I take the three-point function and divide it by the product |z_1 − z_2|^{2(Δ_3 − Δ_1 − Δ_2)} |z_1 − z_3|^{2(Δ_2 − Δ_1 − Δ_3)} |z_2 − z_3|^{2(Δ_1 − Δ_2 − Δ_3)} and apply any Möbius map, it is easy to see from this computation that the ratio does not move. Since three points can be mapped to any three other points, this ratio is a constant: you get for free from this relation that there is some constant C_γ(α_1, α_2, α_3) such that the three-point correlation function equals this constant times this product. So the conformal covariance property of correlations in conformal field theory completely restricts the three-point correlation functions up to a constant. And you can extract this constant; physicists quite naturally write it as a limit, C_γ(α_1, α_2, α_3) being the limit as z_3 goes to infinity, let me not get this wrong, from the formula.
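The take-home exercise is easy to test numerically; comparing absolute values sidesteps the choice of square-root branch. A sketch with arbitrary sample values:

```python
def mobius(z, a, b, c, d):
    """Mobius map psi(z) = (a z + b) / (c z + d)."""
    return (a * z + b) / (c * z + d)

def mobius_deriv(z, a, b, c, d):
    """psi'(z) = (a d - b c) / (c z + d)^2."""
    return (a * d - b * c) / (c * z + d) ** 2

# Arbitrary Mobius map (a d - b c != 0) and arbitrary points.
a, b, c, d = 2 + 1j, -1, 0.5j, 3
zi, zj = 1 + 2j, -0.7 + 0.3j

# |psi(zi) - psi(zj)| = |psi'(zi)|^{1/2} |psi'(zj)|^{1/2} |zi - zj|
lhs = abs(mobius(zi, a, b, c, d) - mobius(zj, a, b, c, d))
rhs = (abs(mobius_deriv(zi, a, b, c, d)) ** 0.5
       * abs(mobius_deriv(zj, a, b, c, d)) ** 0.5
       * abs(zi - zj))
```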
If I take this three-point correlation function with the third point going to infinity and renormalize properly,

C_γ(α_1, α_2, α_3) = lim_{z_3 → ∞} |z_3|^{4Δ_{α_3}} ⟨ V_{α_1}(0) V_{α_2}(1) V_{α_3}(z_3) ⟩,

I get exactly the constant I am looking for, and an explicit expression for it in the probabilistic setting. So remember my s; here I am on page nine. I get this expectation, and I am introducing a notation: the average with respect to the Gaussian free field of the same kind of variable as before, except that one point has been sent to infinity:

C_γ(α_1, α_2, α_3) = 2 μ^{−s} γ^{−1} Γ(s) E[ ( ∫_C |x|_+^{γᾱ} |x|^{−γα_1} |x − 1|^{−γα_2} M_γ(d²x) )^{−s} ],

where |x|_+ = max(|x|, 1), ᾱ = α_1 + α_2 + α_3, and s = (ᾱ − 2Q)/γ. So I get this compact formula for the three-point constant. All right, now I can state the purpose of these lectures, which is what we proved. Let me first make a one-minute side remark for people who know nothing of the bootstrap. I told you in the beginning that our program is to reconcile the bootstrap with probability and statistical physics. What do people do in the bootstrap? They start by saying that the three-point correlation function is such-and-such; they find a formula for it. This sets the three-point correlation function, and then recursively (hence the name bootstrap) they compute the four-point, five-point, six-point correlations by a recursive procedure. Their way of constructing correlations is completely different from what I presented for the past hour: I straight away gave you n-point correlation functions. I have not yet explained exactly why they are interesting, but I will do that in lecture three with the other KPZ relation.
It is still written here. So what they do is give a definition for the three-point function and then recursively compute the four-point, the five-point. What we ultimately want to do is show that what physicists construct this way for Liouville equals this probabilistic construction. That is our goal. And the first matching step is to show that the constant they inject into the recursive procedure, the DOZZ formula, coincides with our probabilistic formula. So that is what these lectures are about. [Question: in your last formula, the |x|_+, where does it come from?] Let me write it: |x|_+ = max(|x|, 1). [Does it come from the particular GFF you took, with that covariance?] Yes. But it doesn't matter much: if you had chosen another normalization, you would have gotten a slightly different formula. This is the background metric. More generally, you can take any metric and you get the same formula up to a constant; that is the Weyl anomaly. But I don't want to talk about the Weyl anomaly or about different metrics in these lectures, because I don't want people to get confused. So I just work with one GFF. I could work with any GFF, and I would just have different factors here; there are, if you want, an infinite number of probabilistic formulas for this, but I chose one for these lectures. [Question: from a practical point of view, it would be interesting to know if the following simple result follows. Could you discretize everything on the lattice, compute this three-point function by Monte Carlo simulations, and, in the limit of the lattice spacing going to zero, recover this formula, the DOZZ?]
Well, you don't discretize just the free field, you discretize the whole thing, that path integral. Yes: if you discretize that path integral, you are going to converge to this thing and to the DOZZ; it is contained in it. I am not sure I understand the question completely, but I think yes. Okay, so here is the theorem, with my co-authors Antti Kupiainen and Rémi Rhodes; it is based on two papers. This quantity C_γ(α_1, α_2, α_3) that I defined is a fractional moment: a moment of some variable depending on α_1, α_2, α_3 inside the variable but also in the order of the moment, because s is related to the α_i's. So assume α_1, α_2, α_3 are real and satisfy the bounds I gave you; it is written in the notes, for n = 3: −s = (2Q − α_1 − α_2 − α_3)/γ < min(4/γ², min_k (2/γ)(Q − α_k)), and α_k < Q for each k. I could write it a bit more nicely, but I am getting tired; it is the condition for existence on the probability side. If I have these bounds, then the theorem says: this fractional moment equals the DOZZ formula. Of course, I didn't define the DOZZ formula yet, but now I have time; let me write it. It is written at the very beginning of the notes, and it is a really non-trivial formula. So in order to gain a little time, I encourage you to look at expression (1.6) in the lecture notes, rather than me writing it wrong: (1.6) is a clean definition of Zamolodchikov's Upsilon function Υ_{γ/2}, an entire holomorphic function on the complex plane, depending on the parameter γ/2, with the simple zeros that I have indicated there. So the DOZZ formula is the following.
So:

C^{DOZZ}_γ(α_1, α_2, α_3) = ( π μ L(γ²/4) (γ/2)^{2 − γ²/2} )^{(2Q − ᾱ)/γ} × (★).

Let me first say what L is. I am introducing two special functions: Zamolodchikov's Υ function and the L function. The L function is just L(x) = Γ(x)/Γ(1 − x), the standard Gamma function at x divided by the Gamma function at 1 − x. In physics (let me switch to red), L(x) is usually denoted γ(x), but I obviously don't want to take that notation. Also, physicists usually work with the potential e^{2bφ}, so b = γ/2 is the math-to-physics translation: they call γ/2 "b", and they call this L function the gamma function. At least, I work a lot with Sylvain Ribault's review, and I think this is standard notation. So what is the DOZZ formula? It is this power of a ratio of Gamma functions, times something, and this something, (★), is a ratio of Zamolodchikov's special functions:

(★) = Υ'_{γ/2}(0) Υ_{γ/2}(α_1) Υ_{γ/2}(α_2) Υ_{γ/2}(α_3) / [ Υ_{γ/2}(ᾱ/2 − Q) Υ_{γ/2}(ᾱ/2 − α_1) Υ_{γ/2}(ᾱ/2 − α_2) Υ_{γ/2}(ᾱ/2 − α_3) ],

four Υ functions upstairs and four downstairs. I will explain what ᾱ is in a minute; of course α_1, α_2, α_3 play symmetric roles, and you see this in the formula. I encourage you to look at this Υ function; it is relation (1.6), page 3. And ᾱ, I should have put it here, is the sum of the α_i's. So DOZZ is this prefactor to the power (2Q − ᾱ)/γ times eight Zamolodchikov Υ functions, with ᾱ = α_1 + α_2 + α_3. So this is what we proved. That is the statement.
You see, when we started a few years ago, we had these path-integral descriptions up there, and there were discussions in physics about how much these path integrals are related to these formulas. We tried to understand it, and after some work we managed to prove this theorem. So we have our first link with the conformal bootstrap approach to Liouville. The second problem would be to show that the four- and five-point correlation functions are given by these bootstrap procedures; this is still open. Okay, I have 20 minutes left, so I am going to explain to you now how DOZZ found DOZZ. The good news is that I can cast the derivation within my probabilistic framework: of course they were using path integrals and everything, but we can translate what they did into probabilistic language. That is what I will do in the next 20 minutes. First, let me say a word on the DOZZ formula itself, which I am going to leave up there for good. Let me insist, the notations are fixed: when you see this L function, it is L(x) = Γ(x)/Γ(1 − x). Of course, the first time you see this formula it is very mysterious, I guess for most of you. The second time you see it, it is still very mysterious. If you look at it for a bit, in the end you start to understand what happens. A crucial point in this formula is the following: this Υ function has very special properties under shifts of its argument by γ/2 or 2/γ. I wrote the relations in the lecture notes; it is relation (1.7): when you shift by γ/2 you get a ratio of two Gamma functions times the initial function,

Υ_{γ/2}(x + γ/2) = L(γx/2) (γ/2)^{1 − γx} Υ_{γ/2}(x),

and similarly for the shift by 2/γ.
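Since L will appear everywhere from now on, here is a minimal implementation together with two identities that follow directly from Γ(x + 1) = xΓ(x): L(x)L(1 − x) = 1 and L(x + 1) = −x²L(x). (The shift relations for Υ itself require its integral representation, which I am not implementing here.)

```python
import math

def L(x):
    """L(x) = Gamma(x) / Gamma(1 - x); what physicists denote gamma(x).
    Valid as long as neither x nor 1 - x is a non-positive integer."""
    return math.gamma(x) / math.gamma(1 - x)
```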
So this Υ function has two very special shift properties. And if you use them on the DOZZ formula, changing one of the α_i's, say α_1 into α_1 + γ, you get, skipping the computation, that the ratio C^{DOZZ}_γ(α_1 + γ, α_2, α_3) / C^{DOZZ}_γ(α_1, α_2, α_3) is an explicit product of L functions (this L function is going to appear everywhere in this talk), at arguments like γα_1/2, γα_1/2 + γ²/4, (γ/4)(ᾱ − 2α_1) − ..., where ᾱ = α_1 + α_2 + α_3. It is relation (2.22) in the notes; I hope this is readable. So you get this somewhat crazy formula when you apply a shift, and what I call a shift is shifting one of the α's by γ. And remember what L(x) is. This all comes out of the very special shift properties of the Υ function up there. And you actually have the same dual property: you get the same equation by replacing γ/2 everywhere by 2/γ, except that you replace μ by a dual cosmological constant μ̃. I don't know it by heart, but it involves a power 4/γ². So it is a bit of a complicated business, these formulas. But, to summarize: if you change one of the α's into α + γ, the ratio is worth these products of L functions, and you get the same kind of thing if you shift by 4/γ, replacing μ by the dual cosmological constant μ̃. This is just material that comes out of complex analysis with this formula, because of its very special product structure. So let us register this formula, and now I am going to explain how DOZZ found the DOZZ formula. Okay, good, let us go.
What did they do? I am following page 10: the derivation of the DOZZ formula in physics, by analytic continuation of the Dotsenko-Fateev integrals. What did they do? They guessed, and they were kind of geniuses at guessing exact formulas; I have to admit it is amazing how they guessed this formula. So let me try to communicate how one does it. There is one case up there where it seems easy to compute this thing: when −s is an integer, because it is much easier to compute integer moments than fractional moments. It is always the same game in statistical physics: you can compute integer moments but not fractional ones, so a physicist computes the integer moments and then tries to guess what the quantity is worth when you replace the integer by an arbitrary parameter. It usually works, and then we spend years proving it. That is exactly what DOZZ did. They looked at the limit where s, remember s = (α_1 + α_2 + α_3 − 2Q)/γ, goes to −n, with n an integer. Why a limit? Because for s a negative integer the prefactor blows up: the Gamma function Γ(s) has a pole at −n, and the residue is standard business,

lim_{s → −n} (s + n) Γ(s) = (−1)^n / n!.

So if you take the limit of this quantity, what happens? Following my notes (there is also a γ^{−1}, but it is a global constant, never mind), it is equal to

2 (−μ)^n / n! × E[ ( ∫_C |x|_+^{γ(α_1 + α_2 + α_3)} |x|^{−γα_1} |1 − x|^{−γα_2} M_γ(d²x) )^n ].
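The residue statement, lim_{s→−n} (s + n)Γ(s) = (−1)^n/n!, can be checked numerically (my own illustration of this standard fact):

```python
import math

def gamma_residue(n, delta=1e-7):
    """Approximate lim_{s -> -n} (s + n) * Gamma(s) by evaluating near the pole."""
    s = -n + delta
    return delta * math.gamma(s)

approx = gamma_residue(3)
exact = (-1) ** 3 / math.factorial(3)  # -1/6
```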
So I am writing my measure in the formal exponential notation, raised to the power n. Now I can compute this, right? How? A probabilist, or a mathematician, says: use Fubini. The n-th power is an n-fold integral with respect to n variables, and then I can interchange the average over the randomness with the average over the complex plane. If I do that (I am working at a formal level, but everything can be made completely rigorous: you regularize the field and pass to the limit), I get

2 (−μ)^n / n! ∫_{C^n} ∏_{j=1}^n [ |x_j|_+^{γ(α_1 + α_2 + α_3)} |x_j|^{−γα_1} |1 − x_j|^{−γα_2} ] × E[ ∏_{j=1}^n e^{γX(x_j) − (γ²/2)E[X(x_j)²]} ] d²x_1 ... d²x_n.

People who do statistical physics recognize straight away what is going on here. (Oh, I always forget the square: it is E[X(x_j)²], a square.) There is no expectation over the measure anymore; I did Fubini, and what remains is Gaussian, so I can compute everything. Let me do it in steps, because it is the end of the lecture and I don't want to mess up. Of course I regularize, I take the variance of the sum, the diagonal terms cancel against the normalization, and I am left with the cross terms:

E[ ∏_{j=1}^n e^{γX(x_j) − (γ²/2)E[X(x_j)²]} ] = exp( γ² ∑_{i<j} E[X(x_i)X(x_j)] ),

a very simple Gaussian computation, which in particular produces the factor ∏_{i<j} 1/|x_i − x_j|^{γ²}.
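The Gaussian step is the moment formula E[exp(γ ∑_j X_j)] = exp((γ²/2) ∑_{i,j} C_{ij}) for a centered Gaussian vector with covariance C; dividing by the Wick normalizations exp((γ²/2) C_{jj}) leaves exactly the cross terms exp(γ² C_{ij}), i < j. A consistency check of that algebra on an arbitrary small covariance matrix of my choosing:

```python
import math

# A small symmetric covariance matrix (toy example).
C = [[1.0, 0.3, -0.2],
     [0.3, 0.8, 0.1],
     [-0.2, 0.1, 1.2]]
gamma = 0.9
n = len(C)

# E[exp(gamma * sum_j X_j)] via the Gaussian moment formula.
full = math.exp(0.5 * gamma**2 * sum(C[i][j] for i in range(n) for j in range(n)))
# Divide out the Wick normalizations exp(gamma^2 C_jj / 2) ...
normalized = full / math.prod(math.exp(0.5 * gamma**2 * C[j][j]) for j in range(n))
# ... and compare with the product of cross terms exp(gamma^2 C_ij), i < j.
cross = math.prod(math.exp(gamma**2 * C[i][j])
                  for i in range(n) for j in range(i + 1, n))
```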
Now remember the covariance: E[X(x)X(y)] = log(1/|x − y|) + log|x|_+ + log|y|_+, where |x|_+ = max(|x|, 1). If I use this formula, inject it here, and massage all of this a bit, what do I get at the end? I get

2 (−μ)^n / n! ∫_{C^n} ∏_{j=1}^n |x_j|_+^{γ(α_1+α_2+α_3) + γ²(n−1) − 4} ∏_{i<j} 1/|x_i − x_j|^{γ²} ∏_{j=1}^n |x_j|^{−γα_1} |1 − x_j|^{−γα_2} d²x_1 ... d²x_n

(and I had forgotten a product of factors here, okay). It is a somewhat nasty computation, but you get this. I am integrating over x_1, ..., x_n, and the important point in this formula is: when s is equal to −n, the exponent γ(α_1+α_2+α_3) + γ²(n−1) − 4 is equal to zero, so this factor is worth 1. This is exactly what Bertrand was referring to: it is independent of the background metric; this is the Weyl anomaly. In my computation this factor has to disappear, because the background metric is completely arbitrary. So I get 1 here, and then I am left with the integral of 1 over these products, and these are the Dotsenko-Fateev integrals, the ones they computed to construct the three-point structure constants. So you have a formula, and I am going to conclude in one or two minutes. DOZZ went to look into the Dotsenko-Fateev papers, and they found an exact formula for this integral, a crazy formula. I am going to stop here with the formula itself, because it involves lots of L functions and everything. So all these integer moments were computed by Dotsenko-Fateev; it is written on page 10.
They found a crazy Dotsenko-Fateev formula for these integrals. And then what did DOZZ do? Well, they replaced n by the sum of the α's minus 2Q over γ, in an analytic way. Let me finish with this. Call this integral, as in my notes, I_n(α_1, α_2, α_3), and look at the ratio I_{n−1}(α_1 + γ, α_2, α_3) / I_n(α_1, α_2, α_3). Essentially, if you add γ to α_1, you push n to n − 1, since the constraint α_1 + α_2 + α_3 − 2Q = −γn has to be preserved. So they took the ratio of the Dotsenko-Fateev integrals, made the substitution α_1 + α_2 + α_3 − 2Q = −γn, replacing n everywhere by this expression, and they found exactly the shift relation up there. I am not going to write it again. So I went a bit quickly, but essentially: for very special values of the α_i's, those making the moment an integer, they computed the moments of these GMC volume forms and found an expression with lots of L functions, which by itself you could not do much with. But if you take the ratio under the shift of α_1 by γ, all these L functions simplify, you can replace n by its value, and you end up with this shift formula. So they said: this is true when the sum of the α_i's minus 2Q equals −γn; we can imagine it is going to be true for any value of the α_i's. And that is how they guessed the formula. Okay, I think I am going to stop here. Thank you.