Good morning everybody. We are going to start our morning session today, and Slava is going to give his third lecture. Okay, thank you. Good morning, everyone. So we stopped last time at the subject of conformal blocks, and that's where I start. Remember, we said that we are considering a four-point function, which I can express by doing the OPE of two operators with two operators. So I get it as a sum over k of f_12k f_34k, and then there is a differential operator which acts on the operator O_k at the point x_2, and another differential operator which acts on the operator O_k at the point x_4. And this thing here can be expressed, I said, as a certain function g_{Delta_k, l_k}, which depends on the cross ratios, divided by some kinematical factor: x_12 to some power times x_34 to some power, some trivial factor. This function is called the conformal block. And by the way, sometimes the whole thing, including the kinematical factor, is called the conformal partial wave. So the conformal block is the part of the conformal partial wave which depends on u and v. I would like to discuss how one can compute these conformal blocks, and there is a certain differential equation, called the Casimir differential equation, a partial differential equation which these conformal blocks satisfy. So I'd like to explain how this comes about. To derive it, we have to first recall that we have the conformal group, the conformal algebra, which is so(d+1, 1), and it has generators: the translations P_mu, the rotations M_{mu nu}, the dilatation generator D, and the special conformal generators K_mu. That is all there is, in any number of dimensions. And each of these generators gives rise to a Ward identity expressing the invariance of a correlation function under that generator.
So this Ward identity has the following form. Let me call all of these generators L_A. If you have a correlation function — let's take the same four-point function as there — then the Ward identity says that L_A acting on this correlation function equals zero. What it means in practice is that you have to take a sum of the following form: <(L_A O_1)(x_1) O_2(x_2) ...> plus <O_1(x_1) (L_A O_2)(x_2) ...> and so on. If you take the sum of all these terms, it gives you zero. This is the Ward identity. And each of the L_A's appearing in this Ward identity is a first-order differential operator. For example, if L_A equals P_mu, then this equation expresses infinitesimally the fact that the correlation function is translation invariant. If you put the dilatation generator there, then this expresses that the correlation function transforms correctly under dilatations. So this is the infinitesimal form of the statement that the correlation function is conformally invariant. Each of these operators can be worked out; for some of them it is of course well known. For example, P_mu acting on any operator O(x) just gives, up to signs and i's, the derivative d_mu O(x). The dilatation generator acting on O(x) gives (x^mu d_mu + Delta) O(x), where Delta is the scaling dimension of the operator. So there are such rules. The most complicated of these rules is the action of K_mu on O(x). Let me write it down: it is equal to (2 x_mu x^nu d_nu - x^2 d_mu + 2 Delta x_mu + 2 x^lambda Sigma_{lambda mu}) O(x). The first piece here, if you are familiar with it, is the vector field corresponding to an infinitesimal special conformal transformation.
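As an aside not from the lecture, these first-order operators are easy to play with symbolically. The following sketch (my own illustration, using sympy) checks the dilatation Ward identity on the standard two-point function <O(x_1)O(x_2)> = |x_12|^(-2 Delta), written out in two Euclidean coordinates per point:

```python
import sympy as sp

# Check the dilatation Ward identity on <O(x1) O(x2)> = |x1 - x2|^(-2*Delta).
# The dilatation generator acting at point i is D_i = x_i . d/dx_i + Delta.
x1, y1, x2, y2, Delta = sp.symbols('x1 y1 x2 y2 Delta', real=True)

G2 = ((x1 - x2)**2 + (y1 - y2)**2)**(-Delta)  # |x12|^(-2*Delta) in 2 coordinates

ward = (x1*sp.diff(G2, x1) + y1*sp.diff(G2, y1) + Delta*G2
        + x2*sp.diff(G2, x2) + y2*sp.diff(G2, y2) + Delta*G2)

print(sp.simplify(ward))  # 0
```

The sum of the one-per-point differential operators annihilates the correlator, which is exactly the statement of the Ward identity in this simple case.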
Then there is a part proportional to the scaling dimension, and then there is this matrix Sigma which acts on the spin indices. In any case, there are such formulas that you can look up for each generator of the algebra. So we have the conformal invariance of the correlation function expressed in this form, but we also have the conformal invariance of the OPE. So this OPE which is written there — let me write it again: O_1(x_1) O_2(x_2) equals a sum over k of f_12k times a differential operator P(x_12, d_2) acting on O_k(x_2). This OPE is also conformally invariant, which means that if you act on it with any of the L_A's — and acting on it with L_A just means: act with L_A on O_1 using one of these first-order differential operators, plus act with L_A on O_2 — then the result has to equal acting with L_A on O_k on the right-hand side. This equation expresses the conformal invariance of the OPE. And actually, yesterday we were discussing how to determine this infinite expansion in derivatives, the operator P. I said one way to determine it is to match with the three-point function. But another way is just to enforce this property: that you can commute L_A through, pushing it from acting on the left-hand side of the OPE to acting on the right-hand side. This property is very important. In fact, it guarantees that if you start from this property and then compute three-point functions, four-point functions and so on by repeatedly using the OPE — and if you don't see this immediately, think about it — the correlation functions that you get at the end of the day are going to be conformally invariant. This property is necessary to guarantee that the correlation functions you compute using the OPE are conformally invariant, yes.
[Question about whether this holds automatically for P.] Because P here is — well, it's not guaranteed. I'm telling you that this P has to be determined in such a way that you are allowed to do this procedure. It's not automatic: the operator P contains derivatives and L_A also contains derivatives, so if you take whatever P you want, this is definitely not going to be true. Only for a very specific P is it going to be true. Any other questions? So this is the conformal algebra, and now we can do the following trick. Take the conformal block — let me express it here. The conformal block is the individual term where I do not sum over k but take one particular k; this is the conformal block, or let's say the conformal partial wave, everything included. And now I'm going to act on this object with a particular operator: L_A acting at the point one plus L_A acting at the point two. This "L_A acting at point one, at point two" really means L_A acting on O_1(x_1) O_2(x_2). By the conformal invariance of the OPE, the result of this action — which is a first-order differential operator — is equivalent to pushing the generator L_A through, so it is equivalent to acting with L_A on O_k. I'm just explaining in words this equation here. But now let me do it a second time. So act a second time: I'm considering a differential operator of the form D = sum over A of (L_A at point one + L_A at point two) squared. This is because I'm going for the Casimir. You see, I'm considering the quadratic Casimir, but here it is built from a sum of two differential operators, one acting at the point x_1 and one acting at the point x_2. And each of these operators can be pushed to act on O_k.
So this is going to be equivalent to acting with the Casimir, with sum over A of L_A squared, on O_k. And here we're in business, because this operator D was a complicated operator acting at two different points, but now I have an operator which acts at one point, applied to the operator O_k sitting at that point. And here we just use basic representation theory: if you act with the quadratic Casimir of a group on an irreducible representation, it gives back the same thing times a constant, an eigenvalue. So this is going to be equal to some eigenvalue C_{Delta, l} times O_k. And this eigenvalue can be worked out: it is C_{Delta, l} = Delta(Delta - d) + l(l + d - 2). So what did we prove? This argument shows that there is a second-order differential operator — this operator D — such that if you apply it to a conformal partial wave, you get the same conformal partial wave back, up to a constant. We showed that the conformal partial wave satisfies an eigenvalue equation. Here we go. Now you have to work out this equation and solve it, and this way you're going to find the conformal partial waves. You still have to think about how to organize this computation, how to find this operator explicitly. This, by the way, is where the nicest way to organize the computation comes in: the so-called projective null cone formalism (also known as the embedding formalism), which I don't think I have time to talk about. But since, according to the survey, it's something that not many of you are familiar with, let me just say a few words. In this formalism, you lift the correlation functions of a conformal field theory, which are defined in d-dimensional Euclidean space, to a cone in (d+2)-dimensional space.
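In code, the Casimir eigenvalue quoted above is a one-line helper (a trivial sketch in my own notation):

```python
def casimir_eigenvalue(delta, ell, d):
    """Quadratic Casimir eigenvalue C_{Delta,l} = Delta*(Delta - d) + l*(l + d - 2)."""
    return delta*(delta - d) + ell*(ell + d - 2)

# For example, the stress tensor in d = 4 has Delta = 4, l = 2:
print(casimir_eigenvalue(4, 2, 4))  # 8
```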
And the advantage of this is that on the cone, the conformal group — which acts on Euclidean space in a somewhat complicated way, in particular the special conformal generators — acts linearly, because the conformal group is just the Lorentz group of the (d+2)-dimensional space, and on the cone it is realized as the Lorentz group. In particular, all of these generators L_A look on the cone just like the usual Lorentz generators, and so it is very easy to work out the action of the Casimir operator. You do all the computation on the cone, and at the end you go back to Euclidean space. So I'll just write down the answer for the operator, and it's instructive to look at it a little bit. Let me write this operator acting on functions. Yesterday I explained that we can parametrize everything for four-point functions not in terms of u and v but equivalently in terms of the variables z and z bar, so let me use these variables. In terms of these variables, the operator has the following form (up to normalization conventions): D = D_z + D_zbar + (d - 2) [z zbar / (z - zbar)] [(1 - z) d_z - (1 - zbar) d_zbar], where D_z = z^2 (1 - z) d_z^2 - z^2 d_z, and D_zbar is exactly the same thing with z exchanged by z bar. The second piece is proportional to d - 2, so it vanishes in d = 2. So once you do this complicated computation, you end up with this simple-looking differential operator. And what do you see from here? First of all, you see that in d = 2 this differential operator completely factorizes into something depending on z plus something depending on z bar. The equation that you're supposed to solve is (D - c_{Delta, l}) acting on the conformal block equals zero, where with these conventions the eigenvalue c_{Delta, l} works out to C_{Delta, l}/2. So in d = 2 you have factorization.
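A quick numerical sanity check (my own sketch, not from the lecture; it assumes the normalization D_z = z^2(1-z) d^2/dz^2 - z^2 d/dz): the function k_beta(z) = z^(beta/2) 2F1(beta/2, beta/2; beta; z), which appears below as the building block of the d = 2 and d = 4 blocks, is an eigenfunction of D_z with eigenvalue (beta/2)(beta/2 - 1):

```python
import mpmath as mp

def k(beta, z):
    """k_beta(z) = z^(beta/2) * 2F1(beta/2, beta/2; beta; z)."""
    return z**(beta/2) * mp.hyp2f1(beta/2, beta/2, beta, z)

def D_z(f, z):
    """D_z f = z^2 (1 - z) f'' - z^2 f', with numerical derivatives."""
    return z**2*(1 - z)*mp.diff(f, z, 2) - z**2*mp.diff(f, z)

beta, z = mp.mpf(3), mp.mpf('0.3')
lhs = D_z(lambda w: k(beta, w), z)
rhs = (beta/2)*(beta/2 - 1)*k(beta, z)
print(mp.almosteq(lhs, rhs, rel_eps=mp.mpf('1e-8')))  # True
```

Summing the two eigenvalues for k_{Delta-l}(z) k_{Delta+l}(zbar) reproduces C_{Delta,l}/2 in d = 2, which is the consistency check on the eigenvalue normalization.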
So z and z bar factorize, and your conformal blocks are going to be products: some function of z times some function of z bar. I'm going to write these functions in a second. If you go to higher d, then there is this second term in the equation, which does not allow you to find the conformal blocks in a completely factorized form. So, as I said, we can use the variables z and z bar in any number of dimensions, but you are not going to be able to take advantage of the factorization in higher dimensions. [Audience: Shouldn't the dimensionality be even?] Which dimensionality? I didn't understand the question. In this equation, you see that z and z bar are independent variables. The conformal block depends on two variables — you could call them x and y, we call them z and z bar — and for the purposes of the differential equation they are independent. But of course, when you go back to evaluate the conformal blocks in Euclidean kinematics, you set z bar equal to the complex conjugate of z. Well, maybe let me write down the formula. In two dimensions, in d = 2, we can write the conformal blocks in the following form: G_{Delta, l}(z, zbar) = k_{Delta - l}(z) k_{Delta + l}(zbar) + (z <-> zbar), where the function k_beta(z) = z^(beta/2) 2F1(beta/2, beta/2; beta; z), with 2F1 the hypergeometric function. So this is a nice explicit formula, which follows from just solving this differential equation. Then d = 4 is another interesting case. It turns out that you can also solve the equation there, and the formula you get involves the prefactor z zbar / (z - zbar). You solve the equation by making an ansatz: look for a solution of the form 1/(z - zbar) times something. It turns out that this ansatz goes through, and it gives you a factorized equation.
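Here is how the d = 2 formula looks in code (a minimal sketch using scipy's hypergeometric function; overall normalization conventions vary between references):

```python
from scipy.special import hyp2f1

def k(beta, z):
    """k_beta(z) = z^(beta/2) * 2F1(beta/2, beta/2; beta; z)."""
    return z**(beta/2) * hyp2f1(beta/2, beta/2, beta, z)

def block_2d(delta, ell, z, zbar):
    """Global (small) conformal block in d = 2, up to overall normalization."""
    return (k(delta - ell, z)*k(delta + ell, zbar)
            + k(delta - ell, zbar)*k(delta + ell, z))

# OPE limit: for ell = 0, block_2d ~ 2*(z*zbar)**(delta/2) as z, zbar -> 0
```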
So here you have G_{Delta, l}(z, zbar) = [z zbar / (z - zbar)] [k_{Delta - l - 2}(z) k_{Delta + l}(zbar) - (z <-> zbar)]. So in d = 2 and d = 4 there are these nice formulas. In other dimensions, for example in d = 3, no such nice closed formulas for the conformal blocks are known. And you should not be too surprised about it, because it's known that even and odd dimensions often behave in significantly different ways. So what do we do then? Well, we found a very efficient way to generate power series expansions for these conformal blocks which converge extremely rapidly. From the practical point of view, when you're doing a numerical computation, you can just take advantage of these power series expansions: you expand the conformal blocks to some very high order, and then you truncate. For numerical purposes this is efficient, but maybe one day somebody will find a nice analytic way to deal with these conformal blocks in odd dimensions as well. [Question about two dimensions.] Yes, I should comment about that. In two dimensions there are two types of conformal blocks: the so-called big blocks and small blocks. The small blocks are the blocks where you are summing over the descendants corresponding to the global conformal symmetry generators, and these are the ones I am talking about here. But then there are big blocks, where you are summing over the full Virasoro algebra. Those are much more complicated objects; in particular, they depend on the central charge of the theory, and they do not have any nice closed-form expression even in two dimensions. So yes, these are the small conformal blocks — which, by the way, can be seen as the limits of the big conformal blocks in the limit of infinite central charge. Other questions? Yes? Sorry, I didn't understand. Ah, yes — this equation needs to be supplemented by conditions, by initial conditions.
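And the d = 4 closed form in code (same caveats on normalization; the function k is the same hypergeometric building block as in d = 2):

```python
from scipy.special import hyp2f1

def k(beta, z):
    """k_beta(z) = z^(beta/2) * 2F1(beta/2, beta/2; beta; z)."""
    return z**(beta/2) * hyp2f1(beta/2, beta/2, beta, z)

def block_4d(delta, ell, z, zbar):
    """Closed-form d = 4 conformal block, up to normalization conventions."""
    return (z*zbar/(z - zbar)) * (k(delta - ell - 2, z)*k(delta + ell, zbar)
                                  - k(delta - ell - 2, zbar)*k(delta + ell, z))
```

Note that the prefactor and the antisymmetrized bracket both flip sign under z <-> zbar, so the block itself is symmetric in the two variables, as it must be.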
So the question was: since it's a second-order differential equation, how do I fix the solution? Well, first of all, I have to tell you the initial conditions for this equation, and they take the following form. In the limit when z, z bar go to zero, which is the limit when the two points collide, I can use the OPE in order to determine the leading term of the conformal block. This leading term looks like G_{Delta, l} ~ (z zbar)^(Delta/2) times a known polynomial, which expresses the angular dependence, depends on the spin, and is evaluated at (z + zbar) / (2 sqrt(z zbar)). So this is the leading behavior. And then it turns out that, given this leading behavior and the differential equation, the solution is unique. Even though it's a second-order differential equation, the solution is unique, and you should not be surprised, because the point z = 0 is a very special point for this equation: a singular point. The coefficients of the differential equation go to zero at this point — the coefficient of the second derivative goes to zero. In such a situation the equation is second order, but for the purpose of boundary conditions it may effectively behave as a first-order equation. The solution is unique. Okay, so I want to explain this method of generating fast-convergent power series expansions for the conformal blocks, because this will also let us touch upon some other techniques in this business. In order to understand these series expansions, the first thing we should recall is radial quantization. So I'm again starting with this correlation function of O_1, O_2, O_3, O_4.
So far I was not really discussing the Hilbert space structure of my conformal field theory, but those of you who are familiar with two-dimensional CFT will remember that one often considers radial quantization. This means we pick some origin, say the point x = 0, and then, with the two points x_1 and x_2 nearby, we quantize our theory on the spheres surrounding the origin: we foliate our space by such spheres. This is the appropriate thing to do in a conformal field theory, because then we can view the dilatation generator as the Hamiltonian of this quantization procedure. So take a particular sphere, say of radius one. If we insert two points inside the sphere and two points outside the sphere, then in radial quantization we can view the correlation function <O_1 O_2 O_3 O_4> as a sum of products of matrix elements: the matrix element of O_1 O_2 creating a certain state psi, times the matrix element of psi with O_3 O_4, where the sum is over an orthonormal basis of states living on this sphere. This is radial quantization. And the nice thing about radial quantization is that it works in two dimensions, but it also works completely analogously in higher-dimensional CFTs. In particular, we have the state-operator correspondence, which says that the states living on the sphere are in one-to-one correspondence with local operators inserted at the origin. These states can be generated by some operator O inserted at the origin acting on the vacuum: O(0)|0> is one state, if O is a primary, and then we also have to consider the states d_mu O(0)|0>, d_mu d_nu O(0)|0>, and so on. If we take these states for all possible operators O, we get all the states on the sphere.
Now, these states, if you consider them in this basis, are in general not going to be orthonormal: there is going to be some Gram matrix between them, and if we want to go to an orthonormal basis, we have to invert this Gram matrix. So we can write the following formula for the conformal partial wave corresponding to an exchanged operator — here we want to sum over all states which are descendants of some operator O_k, as we were calling it, but we also have to orthonormalize them. The conformal partial wave is going to be equal to <O_1 O_2 [Pi_k] O_3 O_4>, where I insert this new object Pi_k that I'm defining: Pi_k is the sum over all states |alpha> of the form O_k|0>, P_mu O_k|0>, and so on, of |alpha> (N^{-1})_{alpha beta} <beta|, where N_{alpha beta} is the Gram matrix, the matrix of inner products, which I have to invert. [Question: how do we find the orthonormal basis inside this multiplet?] Yes — you take the basis in its natural form, O_k, d_mu O_k, and so on, and now you have to orthonormalize it: you compute the Gram matrix, you invert it, and you get an orthonormal basis. So let me now introduce another coordinate. I already introduced the coordinate z; for the z coordinate I was inserting the points at the origin, at z, at one, and at infinity. But now I would like to insert the points symmetrically: I would like to introduce a coordinate rho such that two points of my correlation function are inserted symmetrically at rho and minus rho.
And the two other points I'm going to fix at one and minus one. This coordinate rho is going to have absolute value less than one. What I'm saying is that for any configuration of the first type, I can find a conformal transformation which just moves the points around and maps that configuration to this symmetric configuration. This is always possible, and you can explicitly write down such a conformal transformation; the rho obtained in this way is some function of z: rho(z) = z / (1 + sqrt(1 - z))^2. Now, for theoretical studies the variable z is totally okay, totally convenient. But since at the end of the day I'm going to use numerical analysis, it's important to expand my conformal blocks in a way which leads to the fastest-converging power series. And what I'm saying here is that if you want a fast expansion, it's much nicer to expand in the variable rho than in the variable z. If you keep the full series, the two expansions are completely equivalent; but if you truncate, then the expansion in rho gives, after truncation, a much better approximation. There are several reasons for that. One naive reason is that |rho| is always smaller than |z|, as you can check. Another reason is that truncating the expansion in the rho variable corresponds to still keeping, resumming, an infinite number of terms in the z variable. So you are throwing out fewer terms if you expand in rho. So this is a nice coordinate.
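Numerically, the map is a one-liner (my own sketch; the inverse z = 4 rho / (1 + rho)^2 follows by solving the defining relation for z):

```python
import numpy as np

def rho(z):
    """Radial coordinate rho(z) = z / (1 + sqrt(1 - z))^2."""
    return z/(1 + np.sqrt(1 - z))**2

for z in (0.1, 0.5, 0.9):
    r = rho(z)
    assert r < z                             # |rho| < |z| on (0, 1)
    assert abs(4*r/(1 + r)**2 - z) < 1e-12   # inverse-map round trip

print(rho(0.5))  # ~0.1716, noticeably smaller than z = 0.5
```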
That was the numerical point of view, but this coordinate also has some theoretical advantages, because now we have inserted the points symmetrically. Now imagine — here I have to draw another picture — let's go back to this formula. I would like to consider the matrix element, the conformal block, in these coordinates. I have two operators, one inserted at the point rho and one at the point minus rho; then there is some state psi, and let me take this state psi to have a certain scaling dimension delta and spin j; and then the other two operators are inserted at one and minus one. What I would like to claim is that this matrix element has a very particular dependence on rho, fixed by the quantum numbers of the state that we exchange. Let me write down the answer. Writing rho = r e^{i phi}, I claim that this matrix element is proportional to some constant times r^delta times a polynomial in cos(phi). And this polynomial is not just an arbitrary polynomial: it is the so-called Gegenbauer polynomial C_j^nu(cos phi), where nu = d/2 - 1. If you're not familiar with Gegenbauer polynomials, they are the generalization of the Legendre polynomials from three to d dimensions. You are of course all familiar with Legendre polynomials from quantum mechanics: they describe the wave functions of states with fixed spin. Well, here we are doing the quantum mechanics of spin j in d dimensions.
What I'm telling you is that, for purely group-theoretical reasons, the angular dependence of this matrix element is completely fixed and given by this Gegenbauer polynomial. This is analogous to the usual rules of angular momentum in ordinary quantum mechanics. And the dependence on the scaling dimension of the field comes out for similar, dimensional reasons. I realize that this is probably a bit sketchy, but let me at least write the end result; the derivation may be sketchy, but the end result you should be able to appreciate. The end result of this discussion is that the conformal block G_{Delta, l}, written as a function of rho = r e^{i phi}, has a double power series expansion, where I sum over n and j. Here n is the level, n = 0, 1, 2, and so on, and the level means that I'm considering descendants of dimension Delta + n. So there is some coefficient, call it b_{n,j}, and the expansion reads: G_{Delta, l} = sum over n, j of b_{n,j} r^(Delta + n) C_j^nu(cos phi). This formula tells you the natural basis of functions into which you are supposed to expand your conformal blocks. And the coefficients b_{n,j} you still have to determine: they are going to be functions of n, j, and Delta — actually rational functions of Delta. The argument itself does not tell you what these functions are; you still have to fix them. One way to fix them is to take this formula and plug it into the Casimir differential equation that we already discussed.
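The angular basis functions are available in scipy; for example, in d = 3, where nu = 1/2, they reduce to Legendre polynomials, which is easy to verify (my own illustration):

```python
import numpy as np
from scipy.special import gegenbauer, legendre

d = 3
nu = d/2 - 1                        # nu = 1/2 in d = 3
C3 = gegenbauer(3, nu)              # angular function for a spin-3 state
P3 = legendre(3)

x = np.cos(0.7)
print(abs(C3(x) - P3(x)) < 1e-12)   # True: C_j^{1/2} is the Legendre polynomial

# one term of the double expansion would then be b_nj * r**(delta + n) * C3(np.cos(phi))
```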
If you take this and plug it into the differential equation, then order by order you will be able to fix all of these coefficients. That's one way to do it, but there is actually a much more efficient way. Any questions about this formula? [Question about the range of j.] Ah, okay, I did not explain what j is. On every level n, you're going to have descendants of dimension Delta + n and of spin j, where j ranges from l - n up to l + n, for generic operators. So let me tell you about one more result, probably the last one this morning: recursion relations. Take this conformal block G_{Delta, l} and pull out the factor r^Delta; what remains, let me call it h_{Delta, l}. Then it turns out that the function h_{Delta, l} — which is of course a function of r and phi — satisfies the following recursion relation: h_{Delta, l}(r, phi) = h_{infinity, l}(r, phi) + sum over i of [c_i / (Delta - delta_i)] r^(n_i) h_{delta_i + n_i, l_i}(r, phi). Here h_{infinity, l} is a known function: it is equal to the Gegenbauer polynomial C_l^nu(cos phi), divided by (1 - r^2)^nu times sqrt((1 + r^2)^2 - 4 r^2 cos^2 phi). Well, anyway, don't copy this formula; you can look it up in the notes. Let me just explain what this formula means. It comes about by thinking about the structure of the conformal block as a function of the scaling dimension Delta. The formula tells you that the conformal block, as a function of Delta, has a series of poles — these factors Delta - delta_i are the poles in Delta. And if you know the positions of all the poles and you know the residues, then you can reconstruct the full function, right?
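The structure of this recursion can be sketched in a few lines. To be clear, everything about the pole data below is a placeholder of my own: the true lists of (delta_i, n_i, l_i, c_i) are the ones worked out in the notes, and passing `poles=[]` simply reproduces the large-Delta limit; the h_inf normalization follows the formula above, up to conventions:

```python
import numpy as np
from scipy.special import gegenbauer

def h_inf(ell, nu, r, phi):
    """Large-Delta limit of h = r^(-Delta) * G (up to normalization conventions)."""
    c = np.cos(phi)
    return gegenbauer(ell, nu)(c) / (
        (1 - r**2)**nu * np.sqrt((1 + r**2)**2 - 4*r**2*c**2))

def h_rec(delta, ell, nu, r, phi, poles, depth):
    """h_{Delta,l} = h_inf + sum_i c_i/(Delta - delta_i) * r^{n_i} * h_{delta_i+n_i, l_i}.
    `poles` is a list of (delta_i, n_i, l_i, c_i) tuples -- placeholder data here.
    Each application gains powers of r, so `depth` sets the order-in-r cutoff."""
    out = h_inf(ell, nu, r, phi)
    if depth == 0:
        return out
    for delta_i, n_i, l_i, c_i in poles:
        out += (c_i/(delta - delta_i)) * r**n_i * h_rec(
            delta_i + n_i, l_i, nu, r, phi, poles, depth - 1)
    return out
```

With the genuine pole data, each level of recursion pushes the unknown terms to higher powers of r, which is exactly the iteration described below.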
So you have to know the limit of the conformal block when Delta goes to infinity; this you can figure out pretty easily by solving the differential equation in that limit. You still have to know the positions of the poles and the coefficients, and here you see something interesting happen. What this equation tells you is that if the conformal block has a pole at a certain value of Delta, then the residue of this pole is also a conformal block. This you can easily understand from the Casimir differential equation: the whole formula has to satisfy the Casimir equation, but near the pole this term dominates, so everything else can be forgotten. It follows that the coefficient of the pole satisfies the Casimir differential equation by itself — so it is a conformal block, right? So this is not so surprising. Now, what is the origin of the poles, and what are these coefficients? The origin of the poles you can understand by looking at the formula with the Gram matrix: I told you that in order to compute the conformal block, you have to invert the Gram matrix. Generically, this Gram matrix is positive definite, and inverting it is no problem. But for some very specific values of the dimension, the Gram matrix becomes degenerate, and when you invert it, you get a pole. For these values of the dimension the conformal block has a pole, and you can work out for which dimensions this happens. Now, of course, the dimensions for which this happens are not the physical dimensions that occur in a real conformal theory: in a real conformal theory, all the dimensions we are interested in are positive, while the dimensions for which the poles occur are negative. So these are unphysical dimensions.
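To see very concretely how a degenerate Gram matrix produces a pole, here is a toy illustration of my own (not from the lecture). At level one, for a scalar primary of dimension Delta, the inner products <O|K_nu P_mu|O> = <O|[K_nu, P_mu]|O> = 2 Delta delta_{mu nu} follow from [K_nu, P_mu] = 2(delta_{mu nu} D - M_{mu nu}) and K_nu|O> = 0, so the level-1 Gram matrix is 2 Delta times the identity, and its inverse has a 1/Delta pole:

```python
import numpy as np

def level1_gram(delta, d):
    """Level-1 Gram matrix <O|K_nu P_mu|O> = 2*Delta*delta_{mu,nu} for a scalar."""
    return 2.0*delta*np.eye(d)

print(np.linalg.det(level1_gram(0.0, 3)))        # 0.0 -- degenerate at Delta = 0
print(np.linalg.inv(level1_gram(0.5, 3))[0, 0])  # 1.0 -- inverse grows like 1/(2*Delta)
```

At higher levels the same mechanism operates with bigger Gram matrices, producing the whole family of poles at special (unphysical) values of Delta.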
But you can formally consider the conformal block also for negative dimensions — it's an analytic function of Delta — and this formula tells you that for negative dimensions there are all these poles. So you can work out the positions of the poles, and you can also work out the coefficients c_i. They are more complicated to work out, but you can get them in several ways, either by matching with the Casimir differential equation or directly. All of these things have been worked out, and once you have this formula, it is an ideal method to generate the power series expansion. Because suppose you want to compute a conformal block in that expansion. You start with the first term, and then you have these poles. You see, each pole buys you powers of r: all these numbers n_i are positive — actually positive and growing, a growing sequence of positive numbers. So each term here buys you several powers of r, and as you use this recursion relation, you push the cutoff in r higher and higher: once you apply it several times, the terms that you don't know get pushed to higher and higher orders in r. So if you are interested in generating this power series expansion up to order 100, it just means that you have to apply this recursion relation about 100 times, and that's it. So this recursion relation, about which you can also read in the notes, is currently the most efficient known way to generate the coefficients b_{n,j}. Okay, I think that's all I wanted to say about conformal blocks. I hope I convinced you that we have many handles to compute them, at least numerically. And in my second lecture today, I will show you how to put them to use and get some results about conformal theories.