So I just rewrote on the board, before starting, what we did in lecture one, essentially. Remember, in lecture one we defined the Liouville correlation functions by a clean probabilistic definition, which is written on the board. These are supposed to be exponential fields of the Liouville field, okay? So I gave a definition for the correlation functions. Under these two conditions, which we call the extended Seiberg bounds, we define the correlations as an explicit prefactor, 2 μ^{-s}, where s is this, times Γ(s), times some product here, and careful, I think there is rather a minus in the exponent, so ∏_{i<j} |z_i − z_j|^{−α_i α_j}, times, and this is what was really interesting in Liouville, the expectation of the power −s of the integral over the complex plane of my Gaussian multiplicative chaos measure integrated against some function F here, okay? And remember, |x|_+ was the maximum of |x| and 1, for x in the complex plane, and in particular around each point z_k there were singularities. All right, so this was my definition, and I kind of imposed it, and I said that in lecture three I would explain where this comes from, from the path integral. Remember just that M_γ, the GMC measure, was the exponential of a Gaussian free field X, integrated against g(x), and remember that g(x) was equal to 1/|x|_+^4, okay? And another way of writing it, maybe with the blue chalk, so this is a remark: I could also write F like this, F is nothing but g(x)^{−(γ/4) Σ_k α_k} times ∏_k |x − z_k|^{−γ α_k}; it's the same thing, right?
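As a quick editorial sanity check (not from the lecture; a toy in Python with made-up parameters), here is the one-point version of why the chaos measure M_γ is normalized the way it is: for any centered Gaussian X with variance v, the Wick-normalized density exp(γX − γ²v/2) has mean exactly 1, which is what makes E[M_γ(A)] equal to the reference measure of A.

```python
import math, random

# Toy check (not the real GMC, which is log-correlated): for a centered
# Gaussian X with variance v, the Wick-normalized weight
#   exp(gamma*X - gamma^2*v/2)
# has expectation exactly 1, so the chaos measure has the reference
# measure as its mean.
random.seed(0)
gamma, v, n = 1.0, 2.0, 200_000
mean = sum(
    math.exp(gamma * random.gauss(0.0, math.sqrt(v)) - gamma**2 * v / 2)
    for _ in range(n)
) / n
print(mean)  # close to 1
```

The same normalization is what sits inside the exponent of the GMC measure below, mode by mode.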
And maybe to anticipate a little on what's going to follow, let me also say that if I take log g(x) and I compute the Laplacian of this thing, well, it exists in a generalized sense; it doesn't exist as a function, but it exists as a distribution, and it's nothing but four times the uniform measure on the circle, okay? So ν is just the uniform measure on the standard circle of center 0 and radius 1, meaning that if I integrate this against a test function, then I can do an integration by parts, the Laplacian goes onto the test function, and I get this here, okay? And what is this Gaussian free field? Remember, it was the Gaussian free field which had average 0 on the circle of center 0 and radius 1, okay? So I can write this as, let's say, the integral of X against ν equals 0. This is how my Gaussian free field was defined. So why did I write it like this? Because, as I said, this g(x) that I chose is completely arbitrary, and in fact I can take any g(x) which corresponds to a metric on the Riemann sphere, and then I get a definition of Liouville correlations. So in all this discussion I replace g(x) by any metric on the Riemann sphere, for instance the round metric, whatever, and I'll get a definition here where I replace |x|_+ by g(x) to some power. I replace the Gaussian free field in my definition by a Gaussian free field which has average 0 against the corresponding measure, okay? So this gives me, for each g(x), a definition of Liouville correlations, and what we can prove, and I'm not going to prove it, is that the definition, up to a global constant, does not depend on g(x). In some sense, okay, this is called the Weyl anomaly.
So this is kind of a remark. If you're comfortable with metrics on Riemann surfaces, in what I'm going to say in the next hour you can basically take any g(x), but I'm going to take this one, if you want, just so as not to change any metrics or definitions. Okay, so what is lecture three about? So this was my definition, and lecture three is about showing that my definition really "corresponds", okay? So I'm on the Riemann sphere, or if you want the complex plane, because the Riemann sphere is just the complex plane with a point at infinity. So today I'm going to show that my probabilistic definition is a faithful definition of this object that you can see written in theoretical physics, where, okay, in the lecture notes I didn't stress the dependence on g, but I can, and in S_L, R_g is the scalar curvature of the metric. I'm going to define all these objects; you don't really need to know much Riemannian geometry. So today I'm going to explain why, if I interpret this thing correctly, I end up with the guy upstairs. That's the first goal today. The second goal today is about showing the KPZ relation. I'm going to prove today that if I take this guy here, or, you know, this guy, but properly defined, which I'm going to show is the guy up there, and I apply a Möbius transform, so ψ is a Möbius transform of the Riemann sphere, and, remember, I always forget to put γ and μ, I'm going to put them, it depends on γ and μ, then I get a product of powers of ψ′; I get what is really characteristic of a conformal field theory.
I get the same thing: my correlations are the same up to these products of powers of ψ′. And the third thing I'm going to talk about today is the KPZ conjecture on planar maps. Here I'm going to try to explain where these correlation functions appear when you want to study the scaling limit of planar maps equipped with a conformal structure, which you conformally embed in the sphere. Okay, so, what? This, it's the quadratic relation, yeah, if you look in a physics textbook. No, but that fact, that's something which we got over the years; I don't think it's KPZ themselves. Yes it is, it is. Yes. Oh, it was known before, so it was before 88? No, no, it was known in 88. Okay, so let me start with the path integral. First, the curvature term. So when I look at a metric, what is the curvature at a point? When I'm in conformal coordinates, like I am throughout these lectures, it's R_g(x) = −(1/g(x)) Δ log g(x), with the standard flat Laplacian on the complex plane. That's the curvature, okay? Definition in Riemannian geometry. And as I already anticipated a bit, the Laplacian of this guy, in the sense of distributions, it's nothing but four times the uniform measure on the circle. From now on, as I said, you can work with any metric, but just to keep one metric in all these lectures, I'm working with this guy. So the GFF that I introduced, as I said, it's nothing but the GFF that satisfies this, okay? With my definition of my Gaussian free field, I have this relation. Okay, so let me introduce the L2 functions.
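The curvature formula in conformal coordinates is easy to test numerically (an editorial sketch, not from the lecture). Taking as an example the round metric on the sphere, g(x) = 4/(1 + |x|²)², the formula R_g = −(1/g) Δ log g should give the constant 2 (the scalar curvature of the unit sphere; other conventions differ by a factor of 2):

```python
import math

# Round metric on the sphere in conformal coordinates: g(x) = 4/(1+|x|^2)^2
g = lambda x, y: 4.0 / (1.0 + x * x + y * y) ** 2
log_g = lambda x, y: math.log(g(x, y))

def curvature(x, y, h=1e-4):
    # R_g = -(1/g) * (flat Laplacian of log g), Laplacian by central differences
    lap = (log_g(x + h, y) + log_g(x - h, y)
           + log_g(x, y + h) + log_g(x, y - h) - 4 * log_g(x, y)) / (h * h)
    return -lap / g(x, y)

for pt in [(0.0, 0.0), (0.7, -0.3), (2.0, 1.5)]:
    print(curvature(*pt))  # constant, close to 2 at every point
```

Constant output at every point is exactly the statement that the round sphere has constant curvature.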
So what are the L2 functions on my Riemann sphere with respect to g? Okay, I should say it actually doesn't depend on g. It's nothing but the functions which satisfy that; oh, sorry, there's a typo in the notes, okay? It's just the standard square-integrable functions, okay? And what is H1? Well, it's nothing but the functions which satisfy this, plus the integral of the gradient squared against g(x) d²x is finite too. Now, this gradient operator is conformally invariant, and so in fact this term is equal to just the standard gradient in C. So I'm already interpreting this term here, times this. And it's nothing but the functions which satisfy this. So this is just the flat Lebesgue measure, and this is the standard gradient. If I don't put a g, just like here, it means I'm looking at the standard gradient and standard Laplacian in the plane. In these lectures, I'm trying to throw away basically all the remaining geometry, because I have a more probabilistic audience, okay? So this is the H1 space, okay? Now, let me introduce the eigenbasis of the Laplacian with respect to g. So minus the Laplacian with respect to g of a function is nothing but −(1/g(x)) Δφ, where Δ is the standard flat Laplacian on the complex plane, okay? And I'm going to introduce a sequence of eigenvectors corresponding to this operator: the sequence of functions which satisfy −Δ_g φ_j = λ_j φ_j, and which have average zero with respect to the metric, okay? So this gives me a sequence, and you can even study λ_j by the Weyl formula: the j-th eigenvalue, sorry, is equivalent to a constant times j. Okay, so I've introduced this sequence, which lives in L2, okay? Now, maybe I'll continue like this, okay?
So now I'm going to start trying to give an interpretation to this path integral here. I'm going to do it step by step, following exactly my lecture notes, okay? Of course, I need to choose a normalization, because if I multiply an eigenvector by a constant, it's still an eigenvector for the Laplacian. So I'm going to normalize them to have L2 norm 1, okay? Now, I think everyone will admit the following decomposition theorem. If φ is in L2, then φ has a unique decomposition as a constant c plus the sum for j equals 1 to infinity of c_j φ_j; it's just the orthonormal decomposition of my function φ in L2, where c_j is nothing but the integral of φ(x) φ_j(x) g(x) d²x. So when I solve my eigenvalue problem for the Laplacian, I get an orthonormal basis in L2, but I must not forget, okay, that when I solve these kinds of equations, there is a constant, and I should not forget the constant here. The constant is the average of this guy with respect to g(x), and the c_j's are just orthonormal projections. I think everyone will agree with this; this is very standard mathematics. Now that we've written this, we can start, I want to keep that upstairs, we can start trying to give a meaning to this guy upstairs. Now, in physics, this guy is called the free field measure. Not the Gaussian free field measure, the free field measure. It's not Gaussian; it's the Lebesgue measure on the L2 space of functions. Mathematically it doesn't exist: the Lebesgue measure on the L2 functions does not exist, because it's an infinite-dimensional space. But let's still write the definition as if it existed. Now, what would you write? Well, if I take a function and I integrate against this, so this is formal.
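The decomposition theorem is easy to see in a one-dimensional toy model (an editorial sketch, not from the lecture): on the circle, with the eigenbasis of −d²/dθ², a smooth function is recovered as its mean (the constant mode one must not forget) plus the orthonormal projections on the cosine and sine modes.

```python
import numpy as np

# 1D analogue on the circle: decompose a smooth function in the orthonormal
# eigenbasis of -d^2/dtheta^2 (constant mode + cos/sin modes) and check that
# constant + sum_j c_j * phi_j recovers the function.
n = 4096
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dtheta = 2 * np.pi / n
phi = np.exp(np.cos(theta))          # test function in "L^2"

const = phi.mean()                   # the zero mode: do not forget it
recon = np.full(n, const)
for j in range(1, 30):
    for base in (np.cos(j * theta) / np.sqrt(np.pi),
                 np.sin(j * theta) / np.sqrt(np.pi)):
        c_j = np.sum(phi * base) * dtheta      # orthonormal projection
        recon += c_j * base

err = np.max(np.abs(recon - phi))
print(err)  # tiny: the decomposition converges very fast for smooth functions
```

On the sphere the same picture holds with the weight g(x) d²x in the inner product.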
But at least formally, this is what I want to write, okay? Of course, this is what I want to write; I'm trying to give a meaning to this business. I just decompose φ in the orthonormal basis of L2, and what is the Lebesgue measure? Well, it's putting Lebesgue measure on each constant in the orthonormal decomposition. So, first definition. Now, the gradient term. I told you, this gradient term here, you can forget about the geometry, because it's conformally invariant. So what is it? Well, these guys are orthogonal in H1, okay? So when I compute this guy, what do I get? I get (1/4π) times the sum for j equals 1 to infinity. The constant disappears, the gradient of the constant is 0, you don't see it. And you get the sum of c_j² times the integral of the gradient of φ_j squared, okay? And this, by integration by parts, is the integral of φ_j times minus the Laplacian of φ_j. Then I plug in my eigenvalue relation, and I get (1/4π) times the sum for j equals 1 to infinity of λ_j c_j². Sorry, of course there's a c_j² here, because I'm decomposing along the basis. So this is what this is worth. So I can already give an interpretation, which leads to the following formal definition. This is the second one. Remember, the idea is that I'm giving lots of formal definitions which are meaningful, and at the end I'm going to identify something where I can say: this, I know how to define rigorously. Right now I'm still formal, okay? But it makes sense, right? It's rather logical. Here is what it's tempting to write: the gradient term, this is equal to the integral over R of my quantity, dc, times the product for j equal 1 to infinity of exponential of minus this. So now things are starting to look a bit better: you see the Gaussians arrive. Okay, so this is my second formal definition. I just took the one upstairs.
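The diagonalization of the gradient term can also be checked in the same one-dimensional toy (an editorial sketch, not from the lecture): on the circle, with eigenvalues λ_j = j², integration by parts turns the integral of the squared derivative into Σ_j λ_j c_j², with no contribution from the constant mode.

```python
import numpy as np

# Same 1D toy: integration by parts turns the gradient term into
# sum_j lambda_j * c_j^2 with lambda_j = j^2; the zero mode drops out.
n = 4096
theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
dtheta = 2 * np.pi / n
phi = np.exp(np.cos(theta))
dphi = -np.sin(theta) * phi                  # exact derivative of exp(cos)

grad_term = np.sum(dphi ** 2) * dtheta       # integral of |phi'|^2

spectral = 0.0
for j in range(1, 30):
    for base in (np.cos(j * theta) / np.sqrt(np.pi),
                 np.sin(j * theta) / np.sqrt(np.pi)):
        c_j = np.sum(phi * base) * dtheta    # orthonormal projection
        spectral += (j ** 2) * c_j ** 2      # lambda_j * c_j^2

print(grad_term, spectral)  # the two agree
```

This is the finite-dimensional shadow of the Gaussian weights exp(−λ_j c_j²/4π) appearing in the formal measure.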
I replaced f by f times the exponential of minus the gradient term, and so I get an exponential of minus this, and I put each of these exponentials in front. All this is meaningful; at the end I hope I'll convince you that it's completely logical. Okay, so I'm almost done; let me do a change of variables. I'm actually going to need this relation, so I think I'm going to keep this. So if I set u_j = c_j √λ_j / √(2π), okay, and I plug this change of variables into each of these guys, what do I get? I get that this guy here, which is equal to this guy here, is equal to a constant times the integral over R, and over all the sequences in R, of f(c + √(2π) Σ_j u_j φ_j / √λ_j), dc, times the product for j equals 1 to infinity of exp(−u_j²/2) du_j, with, so here is where you always get caught when you're doing formal computations, this kind of infinite constant. So now I'm done at the formal level: by interpreting this thing as the Lebesgue measure and doing a change of variables, I end up with, if I want to compute this kind of guy against any function f, well, it's nothing but an infinite constant times this. Now let me just make a comment on the constant. The constant, well, it's nothing but the product of the eigenvalues, so it's the determinant of the Laplacian, to the power minus one half, okay? Of course this doesn't mean anything as written, but you replace it by what is called the zeta regularization of the determinant of the Laplacian, okay?
So you want to say that this guy is nothing but, up to some constant, okay, a power of 2π, which is infinite too, the determinant of the Laplacian to the power minus one half, and there's a theory on this. In the sequel I'm going to throw away this constant, set it to 1; it's not very important, except when you're trying to change background metrics, but just for the sake of simplicity I'm saying this is a constant. It's been made sense of in the 70s by Atiyah, Singer and all these people working in conformal geometry. Now this guy, everyone who does probability is starting to see who this guy is. So here's a very clean probabilistic theorem. If I take a sequence (ε_j) of i.i.d. standard Gaussian variables, then it's a theorem in probability, I don't remember exactly where it's written, but it's a theorem: √(2π) Σ_{j=1}^{n} ε_j φ_j / √λ_j converges, in the space of distributions, say, to the Gaussian free field X, okay? So this is a standard decomposition of the Gaussian free field: you project it on each eigenvector of the Laplacian, sorry, and you divide by the square root of the eigenvalue. So this is a clean probabilistic theorem, okay? And of course this is how you want to interpret this guy, since the u_j's here are sampled according to, sorry, I forgot, anyway, this is a global constant, you're sampling each u_j according to a standard Gaussian variable. So then you rely on an exact probabilistic theorem, which is this one, and you arrive at the following completely rigorous definition, okay? So this is a definition: if I integrate f of φ, so I'm at the bottom of page 16, times the exponential of minus one fourth of the gradient term.
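This eigenfunction expansion of the free field can be simulated directly in a one-dimensional toy (an editorial sketch, not from the lecture): on the circle with λ_j = j², the series √(2π) Σ_j ε_j φ_j/√λ_j has an explicit covariance, the Green's function C(d) = d²/2 − πd + π²/3 for d in [0, 2π], which a Monte Carlo estimate reproduces.

```python
import numpy as np

# 1D toy of the series X = sqrt(2*pi) * sum_j eps_j * phi_j / sqrt(lambda_j)
# on the circle, eps_j i.i.d. standard Gaussians; cos and sin modes, lambda = j^2.
rng = np.random.default_rng(1)
J, N = 200, 20_000
t1, t2 = 0.3, 1.5                       # two sample points on the circle

j = np.arange(1, J + 1)
eps_c = rng.standard_normal((N, J))     # coefficients of the cos modes
eps_s = rng.standard_normal((N, J))     # coefficients of the sin modes
# sqrt(2*pi) * (cos(j*t)/sqrt(pi)) / j = sqrt(2)*cos(j*t)/j, same for sin
x1 = (np.sqrt(2) / j * (eps_c * np.cos(j * t1) + eps_s * np.sin(j * t1))).sum(axis=1)
x2 = (np.sqrt(2) / j * (eps_c * np.cos(j * t2) + eps_s * np.sin(j * t2))).sum(axis=1)

emp = np.mean(x1 * x2)                  # empirical covariance
d = t2 - t1
exact = d * d / 2 - np.pi * d + np.pi ** 2 / 3
print(emp, exact)
```

On the sphere the limit is the zero-average GFF of the lecture; only the eigendata change.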
This is nothing but the integral over the Lebesgue measure dc of the expectation of f of my Gaussian free field shifted by the constant c. So this is a rigorous definition; this I'm allowed to write. So I've finally arrived at a rigorous definition, okay? The conclusion of all this is that, by doing fairly meaningful manipulations on a path integral, I arrive at a clean probabilistic definition. So the take-home message is that usually in constructive field theory people say: if you take exponential of minus the gradient squared against dφ, it's a Gaussian free field. But you have to be very precise: in Liouville, it's a Gaussian free field plus a constant, integrated against the Lebesgue measure. And it is very, very important not to forget the constant. Zero mode, yeah, it's the zero mode. And a last remark, of course, is that formally this was a measure on functions which are L2, and at the end you arrive at a definition where X is not at all living in the L2 functions. So you see it's formal: in the end, the rigorous definition is that you must take f living on, it's a continuous function on the space of distributions, say, because X is very irregular, it's the Gaussian free field. All right, now once I've done this, I can also add the curvature term, right? Because the curvature term is nothing but, so, the curvature term. I'm not going to derive it right now; you'll see, I'll do the gymnastics to get there after. Okay, so, bottom of page 16: I have my gradient term and I want to add also my curvature term. So, integrating against, and I'm almost done, now I'm all set, I'm going to have my definition. Well, I'm just adding this guy to my definition. So I'm putting it in here, and X has zero average with respect to the curvature.
So all that's left is the constant part, okay? So, sorry, there's a typo here: in my notes I forgot the Q, and I get this definition, okay? Now the only thing that's left, and this is going to take me a bit of time, is that I have to add the exponential interaction term, which is here. That's the last guy I have to add. But here I have a clean definition, okay? Because if I put this guy in the expectation, X has zero average with respect to it by definition, and so I get this guy. So that's my definition. Now let me introduce the Liouville field, remember. The Liouville field is, I have to shift by the metric tensor if I want to. So the Liouville field is nothing but this guy, and I'm going to define, maybe the notation is not very nice, an infinite-volume measure, not a probability measure, on the Liouville field. It's nothing but this term, but I shift by the metric tensor, okay? So this is the definition of the Liouville field. I just want to work with respect to a flat background metric, so I'm going to shift my Gaussian free field by (q/2) log g. And now, beware, I don't know if it's a good notation: if I take f equal to 1, this is infinity. So it's an infinite-volume measure, so maybe I should not write an expectation, but okay. Remark: the expectation of 1 equals infinity. Okay, so right now I just shifted X by some deterministic quantity. And now I'm going to state a theorem which I think everyone who's looked a bit at Liouville quantum gravity is going to recognize. It's the following lemma, lemma 4.1 in the notes. If I take a Möbius transform, so, okay, once I have this, once I've defined this, I can start giving my definition of the Liouville correlations.
But first, I'm going to state an important change of coordinates formula, which people who've seen Liouville quantum gravity will recognize. If I take a Möbius transform ψ of the sphere, and I take a function f, okay, which is, say, continuous on the space of distributions, then if I look at my Liouville field, I apply ψ and I add q log |ψ′|, this does not change anything; it's the same thing. So I get the change of coordinates formula that is well known to physicists in Liouville quantum gravity and Liouville theory: if I apply a Möbius transform, this is the conformal invariance statement of the Liouville field theory. For those who know the work of Duplantier, Miller and Sheffield, this is taken as the definition of a quantum surface. Here, it's a theorem, not a definition. You have this invariance in the measure that I introduced. So how do you prove this? I'm going to need this, actually, to show nice conformal invariance properties of my correlations. So, proof of this; I'm going to prove this. There's no way you can expect this kind of thing to be true for a Gaussian free field alone, because the problem is that when you apply a Möbius transform to a Gaussian free field, you're changing the average. That's why you have to integrate against the Lebesgue measure here. And to put it into a statement, here's the key identity. I take my Gaussian free field X, okay, with average zero on the circle of radius one centered at zero. I apply a Möbius transform and I take out the average. So this is a Gaussian variable. This thing has the same distribution as X, okay? So this is what I'm saying: the Gaussian free field is not conformally invariant. If you apply a Möbius transform to the Gaussian free field, it's not the Gaussian free field.
Well, not for all Möbius transforms do you recover the Gaussian free field; you have to take out the average, okay? Now, in the sequel, in the notes, I set χ equal to the average of X composed with ψ on the unit circle, 1/(2π) times the integral over the circle. And I'm going to need a theorem first, the Girsanov theorem; I'll state it in the course of the proof, actually. So let's go. My main starting point to prove such a thing is that if I take my Gaussian free field, apply a Möbius transform and take out its average, I get the same thing. All right, so let me see where I'm going to write. Okay, so I apply my Möbius transform; remember, I still have the definition over there. And what do I get? Straight away I'm going to use Fubini and put the integral over the zero mode inside the expectation. So I get the expectation of the integral over R of exp(−2qc) times f of X∘ψ + (q/2) log(g(ψ)|ψ′|²) + c, dc, inside the expectation, okay? This is just because this is (q/2) log(g(ψ)|ψ′|²), and I put it in; so here's the definition, I did nothing, really nothing here, okay? Now I do a change of variables inside my expectation: I set c′ = c + χ − q E[χ²]. Remember, χ is just the average of X composed with ψ, so it's just a Gaussian variable. If I do this change of variables and do Fubini back, it's not hard to show that this is worth the integral over R of exp(−2qc′) times the expectation of exp(2qχ − 2q² Var(χ)) times f of X∘ψ − χ + q E[χ²] + (q/2) log(g(ψ)|ψ′|²) + c′, okay? So just by doing a simple change of variables, I see an exponential of a Gaussian variable appear upstairs, and yeah, so that's written.
So I do this change of variables inside the expectation, then I redo Fubini back, integrating over dc′, and I get this. So these are very standard manipulations; this should be rather clear. Now, here's the trick. Of course, you see, things are starting to look really good, because I know this guy here: X∘ψ minus χ has the same distribution as X. So I already see my X appear again. Remember, I have to show that applying this change of coordinates formula doesn't change anything, okay? Anyway, you can follow this with the lecture notes; I'm going to write in the center. And what is the Girsanov theorem? Let me write it with χ, but of course it's true with any Gaussian variable which is measurable with respect to X. The Girsanov theorem says that under this new probability measure, all you have to do is replace X by X plus its covariance with the guy upstairs, okay? So by Girsanov: see, this is looking at this field under a new probability measure where the Radon-Nikodym derivative is this exponential of a Gaussian, which has expectation one. So I'm just changing my probability measure here, and I replace X by X plus its covariance with χ. For physicists, this is the complete-the-square trick. I get at the end the following thing: for all c′ fixed, whatever c′, the expectation of exp(2qχ − 2q² Var(χ)) times f of, so now I'm really going to write it as a field: X∘ψ − χ + q E[χ²] + (q/2) log(g(ψ)|ψ′|²) + c′, okay? I'm taking it as a field, okay?
So this is a field, you know: this is nothing but the same thing, minus χ, plus v(x), plus c′, where v(x) is the covariance between the Gaussian upstairs and my field here. So I get this formula. This is a bit of heavy computation, but the take-home message is that I first shift my field by a random variable such that I get back the original Gaussian free field, and when I do that, it creates an exponential of a Gaussian variable that I can re-inject back into my field: it just shifts it by a covariance term. No, no, because if I take the covariance of this guy with this, I have a minus 2q variance here, plus q variance, and so I get minus q. Yeah, so of course this has no effect on deterministic things, but it shifts the random part by the covariance term. And at the end I get this function, and if you do a bit of computation, so this is the painful part I'm not going to explain to you, you just compute, and at the end you can prove, if you're a bit courageous, that it's worth (q/2) log g(x). So at the end, what did I prove? I proved that if I apply this change of coordinates formula, I get this guy, which has the same distribution as my Gaussian free field I started with, plus (q/2) log g(x) plus c′. So by integration, if I admit this computation, which is painful, this thing here is nothing but f of the law of the Gaussian free field I started with, X(x), plus (q/2) log g(x), plus c′. Now if I integrate this relation in the c′ variable, I end up with this thing here. So I've proved that if I apply the change of coordinates formula, I get the same thing; this is how you prove rigorously what is written all over the place in physics.
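The Girsanov step used here has a finite-dimensional version that is easy to verify by simulation (an editorial sketch, not from the lecture, with arbitrary parameters a and rho): for jointly Gaussian (X, χ), reweighting by the mean-one density exp(aχ − a²Var(χ)/2) is the same as shifting X by a·Cov(X, χ).

```python
import numpy as np

# Finite-dimensional Girsanov / complete-the-square check:
#   E[ exp(a*chi - a^2*Var(chi)/2) * f(X) ] = E[ f(X + a*Cov(X, chi)) ]
rng = np.random.default_rng(2)
N, a, rho = 500_000, 0.8, 0.6

z = rng.standard_normal(N)
w = rng.standard_normal(N)
x = z                                     # the "field" value
chi = rho * z + np.sqrt(1 - rho**2) * w   # Gaussian with Cov(x, chi) = rho, Var = 1

f = np.cos
lhs = np.mean(np.exp(a * chi - a * a / 2) * f(x))  # reweighted expectation
rhs = np.mean(f(x + a * rho))                      # shifted expectation
print(lhs, rhs)  # both close to exp(-1/2)*cos(a*rho)
```

In the lecture the same mechanism acts on the whole field, with χ the circle average of X∘ψ and the shift given by a covariance function v(x).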
And this is the famous change of coordinates formula in gravity. I wanted to show you this theorem because I think it's a very important aspect of Liouville field theory. I called it lemma 4.1, but it's a very important theorem. If you want, this is really a rigorous proof of what you see written in the physics literature. Okay, so now I'm going to define my correlations; I'm going to stop in two minutes, then set the definition and show you how you recover the probabilistic guy. So, two-minute break, okay? Let me summarize what I did in the past hour. I introduced the Liouville field as a field under a measure, an infinite-volume measure, and I set it equal to this. And I explained that this rigorous mathematical definition was nothing but a faithful definition for the field φ + (q/2) log g under this measure. Okay, so I've already taken care of the gradient squared term and the curvature term in what I'm looking for. So now I'm going to define the correlations, the first goal. What are the correlations? Quite naturally, there's going to be a renormalization. So I'm going to introduce Φ_ε: for instance, you can take the circle average of the field. And I'm going to introduce the so-called vertex operators: for all α, I introduce ε^{α²/2} exp(α Φ_ε(x)). So this is a well-defined field: if I take Φ, take its circle average, take the exponential, with this little cut-off scale, then I can inject it into this definition, right? And so, what are the correlations?
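The role of the ε^{α²/2} factor in the vertex operator can be seen with the exact Gaussian moment formula alone (an editorial toy, not from the lecture; it assumes the standard fact that the circle average of the GFF at scale ε has variance log(1/ε) + O(1)):

```python
import math

# Why the eps^(alpha^2/2) counterterm tames the vertex operator: if the
# regularized field X_eps at a point is Gaussian with variance log(1/eps),
# the exact moment formula E[exp(alpha*X)] = exp(alpha^2*Var/2) gives
#   eps^(alpha^2/2) * E[exp(alpha*X_eps)] = 1,  uniformly in eps.
alpha = 0.9
vals = []
for eps in (1e-1, 1e-3, 1e-6):
    var = math.log(1.0 / eps)                  # variance of the circle average
    raw = math.exp(alpha**2 * var / 2)         # E[exp(alpha*X_eps)]: blows up
    vals.append(eps ** (alpha**2 / 2) * raw)   # renormalized: stays at 1
print(vals)
```

The O(1) correction, which depends on g and on the conformal radius, is what produces the finite nontrivial limit in the actual construction.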
Well, the definition: my correlation is the product of the V_{α_k}(z_k), depending on γ and μ. By definition, I set it as the limit in ε of the same thing with the regularized vertex operators V_{α_k,ε}(z_k). And this guy here, well, I put a 2 in front, but that's to match the DOZZ formula, so never mind the global 2. It's the expectation of my fields: I create the missing potential term, exp(−μ ∫ V_{γ,ε}(x) d²x), integrated against the Lebesgue measure, times the product for k equal 1 to n of ε^{α_k²/2} exp(α_k Φ_ε(z_k)). So here's my definition. You see, I take the exponential of minus the volume, okay, with respect to my measure, times the product of the exponentials of the fields. And if I show that this thing converges, well, what I'm going to explain to you is why this thing converges, and that it gives exactly the probabilistic definition I gave you in lecture one. All right, and then, yeah, I think I'll have time, I'll show how lemma 4.1, the change of coordinates, leads naturally to the second goal of my lectures: to show that if I apply a Möbius transform, then I get the KPZ relation. And then I'll do some kind of reverse engineering to show you how the KPZ relation enabled KPZ to formulate their conjectures, in some sense. So this is a definition-proof, because you have to prove that the right-hand side converges; a definition-theorem if you want. This is proposition 4.3 in the lecture notes: the limit on the right-hand side exists and is given by the definition of lecture one. So let's prove it. The expectation of exp(−μ ∫ V_{γ,ε}(x) d²x) times the product, what is this? This is nothing but the expectation.
So this is a common procedure: I first average with respect to the Gaussian free field, and I put the integral over c inside: dc, exp(−2Qc), product of — so I write my definition: a circle average, or whatever regularization of my Gaussian free field, plus Q/2 times a regularization of log g, plus c. These are the vertex operators; I should not forget the renormalization, okay? And here, what do I have? I have exp(−mu e^{gamma c} times the integral over C of exp(gamma(x_epsilon(x) + (Q/2) (log g)_epsilon))) — and for the regularizations of log g, it's like log g; this is a regular function if you want. Okay, so the important thing — and this is a very important step that lets you understand one important part of Liouville — is: why do you start with an exponential and end up with some fractional power? And the answer is: here you do a change of variable. In this integral, you set u equal to mu e^{gamma c} times the integral of this guy here. That's the change of variable you want to make. If you do that, then you see this becomes exp(−u), and all the c terms here and here make this fractional power appear. So if there's some kind of take-home message, it's really that in this integral over c you first want to get rid of the integral over the average level of the field, and it creates a fractional power. Let me also make a remark, which is true and not very tough to show. Let me get rid of something straight away: it's easy to see that this thing here converges, as epsilon goes to zero, to my Gaussian chaos measure. Remember, this was exp(gamma x_epsilon − (gamma^2/2) E[x_epsilon^2]).
I always write the renormalization term. So it's not very tough to show this, okay? In what follows I'm going to replace this guy by its limit. And if I replace it by its limit and continue my computation — I do my change of variable — what do I get? So I admit this, but it's not very tough; you just play around with Q = gamma/2 + 2/gamma. So if I replace this guy by M_gamma(C), what do I get by this change of variable? I get gamma^{−1} times the integral of u^{s−1}... So, in the lecture notes, I didn't do that change of variable — I didn't put the mu in the change of variable — it's not important, never mind; I'm going to follow my notes, okay? Here, in this formula, s = (sum of the alpha_k − 2Q)/gamma, okay? Times the expectation of my product. Now, I have to replace all the c's by (log u minus the log of the total mass)/gamma, right? And this guy converges to M_gamma(C). So if I write it — in the notes I kept the epsilon — I get the integral of exp(gamma x − gamma^2/2 ...) against g(x) dx, to the power minus s, okay? So this is nothing but the Gamma function: this is nothing but Gamma(s) times mu^{−s}, by a simple change of variable. So you see: I have this term here, the inverse power of the total volume, and upstairs I have a product of exponentials of Gaussian variables. Now here it's the same business as previously, in the proof of the change of coordinates formula: you just want to put these x's in here by using Girsanov's theorem, okay? So let me just emphasize once again: even if you do no probability, it's easy to explain the appearance of s.
It's just something like: 1/x^s equals the integral over c of exp((sum of the alpha_k − 2Q)c) times exp(−e^{gamma c} x) dc — something like this, up to some factors, must be true, right? Where s = (sum of the alpha_k − 2Q)/gamma. You know, if you just take du/u — d log u is dc, up to the gamma — okay, that's basically the change of variable I'm doing. This explains the power. Okay, and then I apply Girsanov here. What I do is: this sum here is a Gaussian variable, okay? And when I take the exponential of this Gaussian variable minus its variance over 2, I'm changing the probability measure, and it amounts to shifting the field: replacing x by x plus the sum for k = 1 to n of alpha_k times the covariance E[x(·) x_epsilon(z_k)]. So I'm looking at my Gaussian free field here under some new probability measure. And if I use correctly the fact that the covariance function of x is nothing but this, then, by applying Girsanov, I shift my field x by the sum of these covariances. I inject this formula, okay? And after a bit of algebra, taking the limit epsilon goes to zero will give me that. So let me be a bit more specific about the term. I get this, and it converges when epsilon goes to zero. I'm going to write it in one step: I get the product for k = 1 to n, okay, times the exponential of one half of the variance of the sum of the alpha_k x_epsilon(z_k). Okay. And I'm going to write it in steps: product for k = 1 to n — this expectation term is going to give the log g terms — and what is here?
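The identity behind the appearance of Gamma(s) can be checked numerically: substituting u = mu·M·e^{gamma c} in the zero-mode integral gives the integral of exp(gamma·s·c − mu·M·e^{gamma c}) dc equal to gamma^{−1} Γ(s) (mu·M)^{−s}. A sketch under this assumption (the quadrature parameters are mine):

```python
import math

def zero_mode_integral(s, gamma, mu, M, c_lo=-40.0, c_hi=40.0, n=200_000):
    """Trapezoid approximation of the c-integral
    of exp(gamma*s*c - mu*M*exp(gamma*c)) over c."""
    h = (c_hi - c_lo) / n
    total = 0.0
    for i in range(n + 1):
        c = c_lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(gamma * s * c - mu * M * math.exp(gamma * c))
    return total * h

s, gamma, mu, M = 1.7, 2.0, 1.3, 0.8
lhs = zero_mode_integral(s, gamma, mu, M)
# Substituting u = mu*M*exp(gamma*c), dc = du/(gamma*u), turns the
# integral into gamma^-1 * Gamma(s) * (mu*M)^-s.
rhs = math.gamma(s) * (mu * M) ** (-s) / gamma
```

The two values agree up to quadrature error, which is how the exponential of the zero mode turns into a fractional power of the GMC mass.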
Well, it's the exponential of the sum of the alpha_k x_epsilon(z_k) minus its variance — because I added it and subtracted it so I can apply Girsanov's theorem — divided by (the integral of exp(gamma x) ...)^s. Okay. So I did nothing here: I just rewrote what I had. Exactly the same thing, except that I multiplied and divided, if you want — I added this guy here and I subtracted it there. So this times this is one; I'm doing nothing, right? Now, when I have this guy, this is the same thing as shifting the field x by what I wrote here. Girsanov's theorem enables me to replace, in this guy here — see, I have a Gaussian; I take out its variance over two, so this has expectation one, so this is a probability measure. Under this probability measure x becomes — I'll write it in color — I can remove this and just say x becomes x plus the sum of the alpha_k times the covariance of x with x_epsilon(z_k). So this gives me that this thing converges to the integral of exp(gamma x) times these terms here, which create the log(1/|x − z_k|) plus lower-order terms. And this term here — well, you have to compute its limit, and the limit of this guy is going to give you the rest of the terms. I'm not going to spend hours on this, but let's say that if you inject this relation here and you go to the limit, at the end you get the probabilistic expression. I write it here just to remind you of what it is. But the global idea, written here, is that you use Girsanov to get rid of all the exponentials of the free field that are diverging. So at the end you get that this thing converges to 2 mu^{−s} Gamma(s) times the product, times the expectation of (the integral of my function)^{−s}. So this is what you get. It's written in great detail in the notes, but I hope you get the main ideas from this sketch of a proof.
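The Girsanov step in its simplest Gaussian form: for centered jointly Gaussian X and Y, E[e^{Y − Var(Y)/2} F(X)] = E[F(X + Cov(X, Y))]. A one-dimensional quadrature check with Y = αX and a hypothetical bounded test function F (all names here are mine):

```python
import math

def gauss_expect(f, mean, sigma, z_lo=-12.0, z_hi=12.0, n=100_000):
    """Trapezoid approximation of E[f(Z)] for Z ~ N(mean, sigma^2)."""
    h = (z_hi - z_lo) / n
    total = 0.0
    for i in range(n + 1):
        z = z_lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * f(mean + sigma * z) * math.exp(-z * z / 2)
    return total * h / math.sqrt(2 * math.pi)

alpha, sigma = 0.8, 1.5
F = math.cos  # a hypothetical bounded test function

# Left side: tilt by the exponential martingale e^{alpha*X - alpha^2*sigma^2/2}.
lhs = gauss_expect(
    lambda x: math.exp(alpha * x - (alpha * sigma) ** 2 / 2) * F(x), 0.0, sigma)
# Right side: shift X by its covariance with the tilt, alpha*sigma^2.
rhs = gauss_expect(lambda x: F(x + alpha * sigma ** 2), 0.0, sigma)
```

In the proof, Y is the sum of the alpha_k x_epsilon(z_k), so the shift is exactly the sum of covariances written on the board.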
I sketched here. [Question from the audience:] All right, so the top terms, all the top terms on the right here — do they finally give only the double product? [Answer:] This thing here? No. Okay, the point is: this term is going to converge, as epsilon goes to zero, to the exponential of the sum for i < j of the expectation — this term here. And you're also going to get a few variance terms; these are the cross terms. And the variance terms are log(1/epsilon); they compensate, plus some metric tensor dependence. And in fact the metric tensor dependence upstairs and in this power compensate. It would just take me two or three boards to simplify all the terms. Roughly, the covariance creates a log, and so it creates this guy here. And that's the idea. Okay, so I hope you get the main idea. So now I'm going to prove the KPZ formula: Proposition 4.4 in the notes. And it says that if I apply a Möbius transform, then I get the same thing up to these conformal weight terms: this guy is Delta_{alpha_k} = (alpha_k/2)(Q − alpha_k/2), times the correlation. And, to answer Nicolas's question, this is kind of why we look at the exponential: if you don't take the exponential, applying a Möbius transform will shift by Q log |psi'|; if you take the exponential, you get a power, and so this is a Riemannian metric tensor — it's natural in conformal geometry. Okay, so to do this: remember, this is the limit where I regularize at epsilon, and this is some limit when I regularize at epsilon. So I'm going to take the epsilon-version of both sides and then go to the limit. So now remember — did I erase this? — I showed you, and I hope it was rather clear, that the Liouville field satisfies the change of coordinates formula. Remember this relation; this was a clean theorem. Remember this.
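In symbols — my reconstruction of the board, with the conventions used elsewhere in the lecture — the statement of Proposition 4.4 is:

```latex
% KPZ covariance of Liouville correlations under a Möbius map \psi:
\Big\langle \prod_{k=1}^n V_{\alpha_k}\big(\psi(z_k)\big) \Big\rangle_{\gamma,\mu}
  = \prod_{k=1}^n |\psi'(z_k)|^{-2\Delta_{\alpha_k}}
    \Big\langle \prod_{k=1}^n V_{\alpha_k}(z_k) \Big\rangle_{\gamma,\mu},
\qquad
\Delta_{\alpha} = \frac{\alpha}{2}\Big(Q - \frac{\alpha}{2}\Big),
\quad Q = \frac{\gamma}{2} + \frac{2}{\gamma}.
```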
So what I'm going to do on this board, to prove this, is apply Lemma 4.1 to the function — remember, my correlations are defined as the limit of the expectation, with respect to my measure, of the product of the vertex operators times exp(−mu times the integral of e^{gamma phi_epsilon(x)} d^2x). Okay, so I apply my relation here to this guy. The right-hand side — it's an f_epsilon, if you want — is converging to my correlations. Okay, and I'm going to look at what happens to the left-hand side. So I apply my Lemma 4.1, change of coordinates, with this function. What do I get? I get the following equality. What I'm interested in is actually the left-hand side, because the right-hand side is converging to my correlations by definition — this is the definition of my correlations at the z_k. Now I'm going to look at the left-hand side. In the left-hand side, I replace all my phi's by phi composed with psi plus Q log |psi'|. And what does epsilon mean? It means that I take the circle average of phi composed with psi and the circle average of log |psi'|. And the same thing in the vertex operators. Now, okay, this is regular: Q log |psi'|, if I take a circle average around x with radius epsilon, well, this converges to Q log |psi'(x)| — it's a very, very regular guy. Okay, so I'm going to replace my circle regularization everywhere. Let me make sure that everyone gets it: by definition, (Q log |psi'|)_epsilon(x) is Q times the circle average — the integral of log |psi'(x + epsilon e^{i theta})| — and this converges to Q log |psi'(x)|.
It's a regular function. Now, (phi composed with psi)_epsilon(x), by definition, is (1/2 pi) times the integral from 0 to 2 pi of phi(psi(x + epsilon e^{i theta})) d theta. When epsilon is small, this is very close to phi taken at the point psi(x) + epsilon e^{i theta} psi'(x). Okay, this is what I get. And so what I'm saying is: this is roughly equal to phi regularized at a different scale — not epsilon, but epsilon |psi'(x)| — at the point psi(x). Okay, so if I replace this by Q log |psi'| and I say that regularizing phi composed with psi at scale epsilon is nothing but taking phi regularized at the scale epsilon |psi'(x)|, I can rewrite the above equality. And I'm going to get the following: |psi'|^{2 Delta_{alpha_k}} — I'm going to write the conformal weight explicitly; I'm just rewriting the left-hand side, this term up here. So: the expectation of exp(−mu times the integral over C of |psi'(x)|^{gamma^2/2} exp(gamma phi regularized at scale epsilon |psi'(x)|)) times |psi'(z_k)|^{2 Delta_{alpha_k}} epsilon^{alpha_k^2/2} exp(alpha_k phi regularized at scale epsilon |psi'(z_k)| at psi(z_k)) — equal to the guy which is converging to the correlation functions. Okay, so it's nearly equal. What did I do? All the Q log |psi'| terms regularized at scale epsilon, I replaced by their limit. Let me look at the vertex operators: it creates exp(alpha_k Q log |psi'(z_k)|), that is, |psi'(z_k)|^{alpha_k Q}. And then I took |psi'(z_k)|^{−alpha_k^2/2}: I add it here and I remove it there, so I get a |psi'|^{alpha_k^2/2} here and a |psi'|^{−alpha_k^2/2} there — and alpha_k Q − alpha_k^2/2 is exactly 2 Delta_{alpha_k}.
So you know, I'm just multiplying and dividing by the same quantity. And I do the same thing upstairs. Now I'm done, because what I do is a change of variable in my limit, okay? You see, I'm regularizing at a scale which depends on the point, but you can imagine that it doesn't — that you can still take this limit, et cetera. Oh, sorry — this is false, I have a psi(x) here. I agree, looking at your faces, that this is maybe a bit painful. So you see, if I do a change of variable: for each k this scale is going to zero and I have the same scale here, so I can imagine replacing this by epsilon and this by epsilon, okay? It's the same scale. And I can do the same thing here: I can replace this by epsilon and this guy also by epsilon. And here the conformal weight — here I get the conformal weight associated to gamma, and this is the conformal invariance: the conformal weight Delta_gamma is one, so the exponent 2 Delta_gamma gives me a |psi'|^2 here. So, by a change of variable, this guy here — the guy upstairs, in the exponential — is going to converge to, let me write it here, the integral over C of the same thing, my Gaussian chaos measure, but composed with psi: with g(psi(x)) and |psi'(x)|^{2 Delta_gamma} d^2x, okay? So this is the exp(−mu ...) term; it's going to converge to this. And so I can do a change of variable, u = psi(x), and — very important, here's the important part, okay? When I say important, this means warning, but important: the conformal weight of this guy is one, so when I do my change of variable, the |psi'|^2 is exactly the Jacobian and I get the same thing — I get my standard volume. And what I say is that this guy, in the limit, is just the same thing composed with psi.
So at the end of the day, this thing is going to converge to the correlation function, but applied to the points psi(z_k). And then I have the psi' factors in front, and if you put them back on the right-hand side, you see the relation. Okay, so this part was maybe a bit technical, but okay — on the conceptual level, it's important to explain where things come from, so there's quite some computation, and at the end you can prove the change of coordinates, the KPZ, and all that. So now I'm going to finish the last 10 or 15 minutes with a brief discussion of the KPZ conjecture and equation, and then I'll be done. Okay, I just need to keep this thing up there, the KPZ formula. So what is the KPZ conjecture? Which to my knowledge is from 1988. I only have 15 minutes, so I'm going to be a bit sketchy, but I'm going to give you the main ideas of the KPZ conjecture in physics. And what I'm going to present is, I think, a kind of faithful representation of the KPZ conjecture — or rather the KPZ equation, sorry; this is what I call the KPZ formula, and now I'm going to give the conjecture and the quadratic equation that most of you know about.
So imagine you look at a random field — I'm going to write it in generalized function notation — so you have a random field, say a random distribution, and it's expected to be the scaling limit of an observable of a critical statistical physics model. So I consider the scaling limit of, in CFT language, a primary field of a critical statistical physics model. The random field is going to have the following conformal covariance property: if I apply a Möbius transform to it, I'm going to get what is called the conformal weight. So let me give you an example, which was proved — also proved, I think independently, by Julien Dubédat: if you take the Ising spin on the plane — a full-plane Ising model at critical temperature — then, at least at the level of correlations, the correlations of the spin are going to converge to some correlation function, which is in fact explicit. So in the case of Ising spins, you get that the correlations of your model converge — I'm forgetting about the constants — to something completely explicit: the sum over mu_1, ..., mu_n in {−1, 1} with the sum of the mu_k equal to 0, of the product for i < j of |z_i − z_j|^{mu_i mu_j / 2}, all to the power one half, and there's a constant, okay. And so if you apply a Möbius transform to this guy, you'll actually see that the correlations satisfy this relation with conformal weight 1/16, so that's an example.
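This covariance can be verified exactly from the explicit formula: for a Möbius map psi one has |psi(z_i) − psi(z_j)| = |psi'(z_i)|^{1/2} |psi'(z_j)|^{1/2} |z_i − z_j|, and since the signs sum to zero, each sign configuration picks up the same factor, giving overall weight Delta = 1/16 (so |psi'|^{1/8} per insertion). A numeric sketch, with the constant ignored and arbitrary Möbius coefficients:

```python
import itertools

def ising_corr(zs):
    """Critical full-plane Ising spin correlations, up to a constant:
    ( sum over mu_k = ±1 with sum(mu) = 0 of
      prod_{i<j} |z_i - z_j|^(mu_i*mu_j/2) )^(1/2)."""
    n = len(zs)
    total = 0.0
    for mus in itertools.product((-1, 1), repeat=n):
        if sum(mus) != 0:
            continue
        prod = 1.0
        for i in range(n):
            for j in range(i + 1, n):
                prod *= abs(zs[i] - zs[j]) ** (mus[i] * mus[j] / 2)
        total += prod
    return total ** 0.5

# An arbitrary Möbius map psi(z) = (a z + b)/(c z + d).
a, b, c, d = 2.0 + 1.0j, 0.5, 1.0j, 1.0 - 0.5j
psi = lambda z: (a * z + b) / (c * z + d)
dpsi = lambda z: (a * d - b * c) / (c * z + d) ** 2

zs = [0.3 + 0.1j, 1.2 - 0.7j, -0.8 + 0.4j, 0.1 + 1.5j]
lhs = ising_corr(zs)
rhs = ising_corr([psi(z) for z in zs])
for z in zs:
    rhs *= abs(dpsi(z)) ** (1 / 8)  # 2*Delta per insertion, Delta = 1/16
# lhs == rhs up to round-off: conformal covariance with weight 1/16.
```

For two points the formula reduces to sqrt(2) |z_1 − z_2|^{−1/4}, the familiar Ising two-point exponent.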
And you can actually — this is what Camia, Garban and Newman did — show that if you integrate these correlations against small balls, you get a random field living in the space of distributions, okay. So that's an example: the scaling limit of the Ising spin on a regular, say square, Euclidean lattice with mesh size going to zero. And you can ask the same thing with a random planar map: imagine you sample an Ising model at critical temperature on a random planar map that you conformally embed into the sphere. So let's say you look at the scaling limit of the Ising spin — or any other field of a critical statistical physics model — on a random planar map. These three points are the points where — so you take three points at random on the planar map and you send them to three fixed points on the Riemann sphere, okay. So let's say these three fixed points are z_1, z_2, z_3, and you look at the image of your Ising spin on this planar map, okay. And the scaling limit is going to be described by some random distribution, which I'll denote like this, okay. And so KPZ — Knizhnik, Polyakov and Zamolodchikov; for me it's 1988 — argued that the scaling limit of your Ising spin on the map should factorize as the Ising spin on the regular lattice, say — if I'm still continuing with the Ising spin — times the vertex operator associated to Liouville. So if I write it like this — now let me forget about the, okay. This is what a physicist would write: if you look at your field on a random lattice, it factorizes into the same field on the regular lattice, times the exponential of alpha phi, where phi is the Liouville field.
Now — I'm coming to that. The first thing is — okay, I don't have much time — forget about mu: you have to determine the gamma of the theory, because when you're on a planar map and you write this, first you have to determine gamma, okay. So gamma is going to belong to this interval, and I'm not going to say much about the value of gamma — there are ways to see it, but I don't have much time here. I'm going to insist on how you find alpha, okay. So, by definition, if you conformally map your spin to the sphere, it's conformally invariant — by definition. So this means that if you apply a Möbius transform to this guy, what happens? If I do it still in physics notation: if I apply a Möbius transform, I should see this. Or, if you prefer, if I write it integrated against a little ball — okay, if I integrate this against a little ball, you see that I'm conformally invariant, okay. So let me say it out loud: saying this is equivalent to saying that this thing — this random distribution — is conformally invariant. It's by definition, because I'm mapping it to the sphere. So you look for the conformal weight — and now you know how to do this, right? Because what happens here is: when you apply a Möbius transform psi to this guy, what do you get? You get |psi'(x)|^{−2 Delta_sigma} times |psi'(x)|^{−2 Delta_alpha} — where Delta_alpha is the weight in Liouville — times the original guy. So the quadratic KPZ equation says by what you multiply your field on the regular lattice — by which vertex operator — in order to get something conformally invariant. So the KPZ equation is: Delta_sigma — the weight of sigma on the regular lattice — plus this guy.
So let me write it in the case of Liouville, and you get the quadratic relation. That's how you choose alpha, by solving this quadratic relation, okay. See, it's alpha Q / 2 — where Q is gamma/2 + 2/gamma; sorry, I'll write bigger — and this is minus alpha^2 / 4. And so this ensures that the dressed field is conformally invariant. So I did this in physics notation; let me write a very clean conjecture. Well, I define my limit on the planar map by definition — so here's my definition. You see, written like this, it's not really a clean definition. So the definition — the conjecture, if you prefer — is: I take three points at random on a random planar map conformally embedded in the sphere, and I look at the limit of a field which on a regular lattice converges to a field with this kind of property — which is saying that it's a primary field. Then on the planar map it should converge to a field written like this. And how do I determine the field? Well, I set its definition — this is a clean mathematical definition: the correlations of the field are, by definition, the product of the correlations of the field on the regular lattice times the Liouville correlations — and I have to divide upstairs and downstairs by this; I'll explain in a moment what this guy is. So I don't have time to explain how to find gamma — it's in previous reviews I wrote with Rémi Rhodes — and I find alpha by solving this equation. Okay. So: the scaling limit of my field on the planar map conformally embedded in the sphere. I suppose that I map the three points chosen at random on the map to three fixed points z_1, z_2, z_3, and this creates these three terms here. And then the correlations factorize as these correlations. This is a well-defined mathematical object.
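The quadratic relation Delta_sigma + alpha Q/2 − alpha^2/4 = 1 can be solved directly. A small sketch (taking the branch alpha ≤ Q, which is the conventional choice; the trivial field Delta_sigma = 0 recovers alpha = gamma, the volume form):

```python
import math

def kpz_alpha(delta_sigma, gamma):
    """Solve delta_sigma + alpha*Q/2 - alpha^2/4 = 1 for alpha,
    with Q = gamma/2 + 2/gamma, on the branch alpha <= Q."""
    Q = gamma / 2 + 2 / gamma
    disc = Q * Q - 4 * (1 - delta_sigma)
    if disc < 0:
        raise ValueError("no real solution for these parameters")
    return Q - math.sqrt(disc)

# Trivial field sigma = 1: Delta_sigma = 0 and alpha = gamma, so the
# dressing e^{gamma*phi} integrates to the Liouville volume form.
for g in (0.5, 1.0, math.sqrt(2), 1.9):
    assert abs(kpz_alpha(0.0, g) - g) < 1e-12
```

Plugging the Ising weight Delta_sigma = 1/16 into `kpz_alpha` gives the vertex operator dressing the Ising spin on the map.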
Sorry — so here, if I put it next: times the Liouville correlations, which are well-defined, and downstairs the three-point correlation — this is a well-defined mathematical object in this setup. So, for instance, something that's rather interesting in my opinion: you can take, say, the Ising spin — you know what it is on a regular lattice — so now you have a random field and you can study it. [Question from the audience:] In your conjecture there is only a definition so far, right? [Answer:] Oh, so this is a definition. Okay, it's a definition, if you want: if I take z_1, z_2, z_3, I can define this field. And so, if you want, the conjecture is: if I conformally map a random planar map to the sphere, taking three points at random and sending them to the three fixed points, then my field will converge to a random field which is defined by the following relation on correlations — the product of the two. So the two examples that are nice to look at: first, Ising — you take the Ising spin, you get the Ising spin field on the planar map. And now there are tough questions on this guy — say, look at the tail of this guy and everything: it's an open problem. I mean, we have an idea of what it should be, but it's an open problem to prove it. And the case that most of you have been interested in in the past years: you take the trivial field, sigma equal to one. In that case this is one, and its conformal weight is zero; so you solve this equals one and you find alpha equals gamma. And you can integrate this against little balls and you get the volume form. But I want to emphasize that the KPZ conjecture is a very, very general thing: it's supposed to work for any field that is described on a regular lattice by a conformal field theory.
Okay, so thank you for listening and going through all these painful computations. Thank you.