So this is the tail end of the meeting, so your concentration levels are probably on the wane, but the aim of my lectures is to expose you to certain ideas which I think are important. I'll give you a broad-brush overview of them, except for maybe small snatches; I will not go very deeply into too many technical details. The general ideas are very appealing, so hopefully I'll convey some of that to you without taxing you too much with detailed calculations. Please stop me any time in the middle, and of course there's the discussion session, so feel free to ask questions at any point. The topic I want to expose you to a little is called Mellin Amplitudes in Conformal Field Theory. For perhaps many of you this is an unfamiliar topic, so I'll start with some motivation for why it is interesting. Then we'll go deeper into the topic by first reminding you of some properties of amplitudes, or correlation functions, in conformal field theory and how they are conventionally represented in terms of cross ratios and so on. This will lead us into the Mellin representation, which is a special way of representing these amplitudes. After defining it, we'll see why it is so interesting: we'll talk about some of the general properties of these amplitudes, and those general properties are so attractive, so nice, that they are the main reason to study this. We'll then illustrate these general properties through some sample computations in perturbed CFTs, perturbed conformal field theories, in the sense of a weak-coupling expansion, and finally some computations in the opposite regime of, if you wish, strong coupling, where the CFTs have a representation in terms of amplitudes in anti-de Sitter gravity theories.
And then finally I'll summarize with some outlook. So that's the plan. Starting with the motivation, let me pose two questions which I feel are very contemporary questions of fundamental importance. The first you can phrase as: are all large-N CFTs, conformal field theories satisfying some conditions which I can specify maybe later, some small print, identical to perturbative string theories? We have a definition of perturbative string theories in terms of worldsheet amplitudes; are large-N CFTs dual to perturbative string theories on AdS, on anti-de Sitter spacetimes, since they would have to have the same isometries? Many people believe the answer is yes: provided these conditions are more sharply defined, consistent unitary large-N CFTs would be dual to perturbative string theories on AdS. That's one question many people are interested in, and certainly something I find very important to answer. The second question is seemingly unrelated: can we find efficient analytic techniques to solve the bootstrap equations of CFTs? You already had a course from Slava Rychkov about CFTs, their general properties, and in particular about the amazing progress that has been made in understanding CFTs in higher than two dimensions using bootstrap techniques. Most of the results in this topic have been numerical, getting bounds on dimensions and OPE coefficients and so on, but in a sense these results already demonstrate the power of the bootstrap equations: using such a crude hammer you can already get a number of very interesting results.
So is there a right language, or a right approach, for making progress analytically, going beyond the numerical results, which of course you can push further ahead but which are not ultimately going to let you fully solve the theory? These two questions look quite different, though perhaps there are even relations between them, but both, I believe, are examples of questions that can be addressed using the Mellin representation of CFT amplitudes. I won't be able to tell you how to solve these questions, of course; these are outstanding questions. But I'll try to indicate along the way how we might make headway on some of them using this representation of CFT amplitudes. Now, the Mellin representation, which I'll define very shortly, might appear at first sight to be just a change of variables, and in a sense it is just a change of variables, or a change of basis, for writing CFT results, so it might seem like a somewhat trivial thing. But a change of variables is not something to be sneezed at; it can be very powerful, and let me give you, as theoretical physicists, two examples all of you are familiar with. The first is from quantum mechanics: the Schrödinger, or more generally the Dirac, picture of quantum mechanics clearly helped to unlock many of the structural properties of quantum mechanics and improved our understanding, in contrast to Heisenberg's matrix mechanics, which is equivalent but a more cumbersome way of viewing things. Another example in a similar vein is the introduction of the vector potential in electrodynamics, which Maxwell never needed for writing down his equations, but which we now know is essential for a deeper understanding of the theory.
Its generalizations, its quantum-mechanical manifestations, and so on. So a change of variables is not something one should necessarily look down upon; that's something to keep in mind. Okay, let me refresh your memory about amplitudes, or correlators, in CFT and how we conventionally express them in terms of things called cross ratios. In conventional QFT we also see an example of a useful change of variables, namely amplitudes in momentum space. Momentum-space amplitudes are what we use most of the time, as opposed to position space, because they are more convenient, and let's briefly recall why. For one thing, take tree-level amplitudes. If you are doing a perturbative expansion and you consider some arbitrary tree diagram, it could be some n-point function, a classical tree-level amplitude in momentum space is given just by products of propagators and vertices, with no integrals. We don't need to carry out any integrals; the expression is purely algebraic, just multiplying together the factors the Feynman rules give you. This is as opposed to position space, where the two-point function is an integral kernel, whereas the propagator in momentum space is just algebraic. Essentially this is because the quadratic equations of motion reduce to algebraic conditions in momentum space. So that's one good thing about momentum-space amplitudes. But furthermore, and more deeply, momentum-space amplitudes have good analytic properties. As functions of the external momenta they have very attractive analytic properties, and these analytic properties have nice physical interpretations: the poles correspond to stable or unstable particles, to single-particle states.
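As an illustration of this algebraic structure, consider a hypothetical cubic scalar theory with coupling g (my example, not one the lecture specifies): the four-point s-channel tree exchange is just two vertices times a propagator, and its pole at s = m² is exactly the single-particle state just mentioned.

```latex
% s-channel tree exchange in a cubic scalar theory
% (illustrative only; overall signs and factors of i are convention-dependent)
\mathcal{A}_{\rm tree}(s)\;\sim\; g\,\frac{1}{s-m^{2}}\,g
\;=\;\frac{g^{2}}{s-m^{2}},
\qquad s \equiv (p_{1}+p_{2})^{2}.
```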
Then there are branch cuts, which typically appear on the positive real axis and correspond to multi-particle states. So we understand the analytic behavior in a very nice way. The residues at the poles factorize onto lower-point amplitudes. All these nice features are present in momentum space and not in position space, where the behavior can be very complicated and there are no good analyticity properties. In fact, these analytic properties give a nice translation of various physical properties like locality, causality, and unitarity. For instance, locality translates to a kind of polynomial boundedness at infinity, at least for theories coming from Lagrangians with a finite number of derivatives. Similarly, causality is related to the properties under analytic continuation, and unitarity is encoded in various dispersion relations, cutting rules, et cetera. So there are all these very nice properties in momentum space, and that's what makes it the natural language to adopt. So we can ask: this is what we do in conventional quantum field theories, so why can't we do the same in CFTs? CFTs are of course special; they have additional symmetries. So let's first ask why we wouldn't want to use momentum space in CFTs.
Firstly, non-trivial, interacting CFTs on flat space have a continuous spectrum going down to zero, because there is no mass scale in the theory, no mass gap. You see this in the two-point functions: the two-point function of two primary fields goes like one over x squared to the power delta. I'll assume you've seen a lot of the basics of CFTs in Slava's lectures, so you know that two-point functions behave like this, and if you try to Fourier transform that, for a general delta, which need not be an integer in a non-trivial CFT, you get fairly bad analytic behavior. Even naively, on dimensional grounds, it translates to something with a branch cut going down to zero. It's then very difficult to pick out any features in this scale-invariant continuous spectrum: you don't see poles in momentum space in any simple way, you instead see these branch cuts. But more seriously, the special conformal transformations do not act nicely on momentum space. We have these additional global symmetries, scale invariance as well as the special conformal transformations, and they don't act nicely on momentum space. Clearly you would like a formalism, or a language, in which the global symmetries are manifest. For ordinary QFTs, which had just Poincaré invariance, momentum space was fine, because things like p_i · p_j, the Mandelstam invariants, were invariant under the global symmetries; but when you have these additional symmetries, momentum space is not very useful.
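To make the bad momentum-space behavior concrete: the Euclidean Fourier transform of the power-law two-point function can be done in closed form (a standard result; the overall normalization depends on conventions), and for generic non-integer Δ the result has a branch point at p² = 0 rather than the isolated poles one would want.

```latex
% Fourier transform of a conformal two-point function in d Euclidean dimensions
\int d^{d}x\; e^{\,i p\cdot x}\,\frac{1}{(x^{2})^{\Delta}}
\;=\;\frac{\pi^{d/2}\,2^{\,d-2\Delta}\,\Gamma\!\left(\tfrac{d}{2}-\Delta\right)}{\Gamma(\Delta)}\,
\left(p^{2}\right)^{\Delta-\frac{d}{2}},
```

which for non-integer exponent has a branch cut running down to p² = 0, as stated above.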
Another way to say it: equivalently, the theory is best formulated in radial quantization, as you would also have heard in Slava's lectures. There the generator of time translations, the Hamiltonian, is the dilatation operator, and the dilatation operator has a discrete spectrum; its eigenvalues are the dimensions of the corresponding operators, and this spectrum is discrete at least for D greater than two. So the conformal theory is more naturally formulated on this global cylinder, R times a sphere, and here, of course, momenta are not the good quantities anyway; at most you would label things by spherical harmonics on the sphere. So you see that momentum space is anyway not the natural variable, and that's why in CFTs, and in lectures like Slava's, the correlators are written in position space. So you're back to position space: you consider some n-point function of local operators in position space, and then you have all the disadvantages of position space. So the question is: is there another representation which has the advantages that momentum space has in the usual flat-space quantum field theories? (And yes, sorry, the discreteness holds for D greater than two, in higher dimensions, essentially because the spatial slice is a sphere with curvature, so you automatically get discreteness, whereas on the circle you can have a continuous spectrum in two dimensions.) The answer is yes, and that's the goal of these lectures: to show that Mellin space has these properties. In fact, it's somewhat surprising that this was proposed only as recently as 2009, by Mack, who is one of the pioneers of conformal field theory. So this has come quite recently, which is why it is still relatively unexplored, though there have been several papers after this; I think its full potential is yet to be realized. Okay, any questions? So before we introduce this Mellin representation, let's go back to the correlators in position space and explain some of their properties. We'll generally look at conformal field theories, which have, in addition to scale invariance, the special conformal transformation symmetry, which I assume all of you have seen. Sometimes it will be useful to make a distinction between scale invariance and conformal invariance. Conformal invariance is more restrictive, and it will, I think, be illustrative to discuss the constraints imposed by scale invariance and then the additional ones coming from conformal invariance, so I'll just alert you to that, though it may well be that there are no interesting unitary quantum field theories which have scale invariance but are not conformally invariant. One is almost close to a proof of that, but it's still not completely settled. In any case, at a formal level it will sometimes be useful to make this distinction. So I'll consider mostly amplitudes which are correlation functions of primary operators. Again, you've probably seen this: the fields, or the states, in a conformal field theory are labeled by representations which have a highest-weight, or lowest-weight, state, which, from the point of view of a state, is annihilated by the special conformal transformations. These are the primary operators; they are the ones with lowest dimension, and then you act with derivatives to get the descendants.
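In equations, and in one common convention for radial quantization (the lecture leaves the normalizations implicit), the primary and descendant structure just described reads:

```latex
% A primary state and its descendants (one common convention)
K_{\mu}\,|\Delta\rangle = 0,
\qquad
D\,|\Delta\rangle = \Delta\,|\Delta\rangle,
\qquad
P_{\mu_{1}}\cdots P_{\mu_{k}}\,|\Delta\rangle \ \ \text{has dimension}\ \Delta+k .
```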
These primary operators are labeled by delta_i, the dimension, and l_i, the spin, the SO(D), or SO(D-1,1), quantum numbers, but very often, for simplicity, I will restrict to external scalars, l_i equal to zero. People have developed the formalism for external states with spin; it's not complete, and it introduces lots of polarization vectors and many things, so I will not go into that. We'll mostly talk about external scalars, and very often I'll focus on the four-point function, which is in a sense the first non-trivial correlator in the CFT, the first with a non-trivial dependence on the positions: the three-point functions are essentially determined by numbers, and the two-point function by the dimension. So the two- and three-point functions have their information encoded in a few numbers, but the four-point function is the first one, as we'll see, which has non-trivial information about the positions. So what can we say about these correlators in a CFT? Firstly, by Poincaré invariance, A(x_i) can depend only on the x_ij squared, where x_ij is x_i minus x_j. Translational invariance tells you it can depend only on differences, and rotational, or Lorentz, invariance tells you it can depend only on the squares. Of course, if there are spins, there would be external factors carrying the tensor indices corresponding to the spins, but those will be overall factors and, as I said, I'll mostly not consider those. So we'll consider something that depends only on these x_ij squared. Now, scale invariance is the statement that the primaries transform homogeneously under the scaling x to lambda x, with the dimension as the power of the homogeneity. Under this, x_ij squared of course goes to lambda squared x_ij squared, so things can't depend arbitrarily on the x_ij squared. The amplitude, if you scale up all the coordinates, scales with the sum of all the powers, since each of the individual operators scales this way, and that sum comes out as an overall factor. Since the x_ij squared scale like this, we can of course define the usual ratios: x_ij squared divided by some x_kl squared is invariant under this rescaling. So if you had just scale invariance, you would write A(x_i) by taking out a factor of the x_ij squared raised to some powers delta_ij, which are of course symmetric, so I should really write i less than j. The delta_ij are chosen such that the total power matches: twice the sum over i less than j of the delta_ij equals the sum of the delta_i. We know how each x_ij squared transforms, so this whole overall factor transforms with a definite power of lambda, and you choose the delta_ij so that this power equals the sum of the delta_i. That factor multiplies a reduced amplitude which depends only on ratios. The r_ij are a set of independent ratios, which you can take to be, for instance, x_ij squared divided by x_12 squared, where ij is not equal to 12. These are all independent, because you can make any other ratio from them, and it's easy to count how many there are: there are n into n minus one by two of the x_ij squared, but you're not including the 12 one, so there are n into n minus one by two, minus one, of these ratios.
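Collecting the scale-invariance statements above into formulas, writing Δ_i for the dimensions and with one common sign convention for the exponents δ_ij (the lecture leaves the sign implicit):

```latex
% Scale invariance alone: an overall power times a function of ratios
A(\lambda x_{i}) = \lambda^{-\sum_{i}\Delta_{i}}\,A(x_{i})
\quad\Longrightarrow\quad
A(x_{i}) = \prod_{i<j}\left(x_{ij}^{2}\right)^{-\delta_{ij}}\,\tilde{A}(r_{ij}),
\qquad
2\sum_{i<j}\delta_{ij} = \sum_{i}\Delta_{i},
\quad
r_{ij} = \frac{x_{ij}^{2}}{x_{12}^{2}}.
```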
So you have some freedom in choosing what these ratios are; you choose them to be whatever is convenient, like this, and you can write the whole amplitude as a function of these ratios times an overall factor. In the delta_ij there is also some freedom, some arbitrariness in the choice, because there are essentially n into n minus one by two of them and you are imposing just one condition. This arbitrariness can be absorbed into your A tilde: if you choose a different set of delta_ij, also satisfying the constraint, the difference just amounts to changing the function A tilde, so that's quite trivial. So this is what scale invariance tells you. Now you also have conformal invariance, or special conformal invariance. I'll take a shortcut in discussing the special conformal invariance by using the fact that it very often comes from the presence of an inversion symmetry. You define x_i prime mu equal to x_i mu divided by x_i squared; this is the inversion transformation. The special conformal transformation generated by K mu can then be viewed, if I is this inversion, as a composition of an inversion, a translation, and an inversion back again. Maybe you have seen this in some of your lectures; in any case, it's a simple way to obtain the special conformal transformations, and if you haven't seen it, you should work it out and check that you get exactly the usual form of the special conformal transformations. It's often more convenient to work with this inversion symmetry. Under this inversion, the primary operators transform with a factor of x squared to the delta, so it's almost as if you can think of this as a local scale factor: if you replace lambda by one over x squared, it's the same sort of transformation. This is the general rule for the transformation of a primary field under any Weyl transformation; if you consider non-primaries, the transformation can be more complicated, but you can work it out from the basic one for primaries. Therefore, if you have this transformation law under inversion, the whole amplitude A(x_i) that we defined over there, if I invert all the coordinates, picks up a factor of x_i squared to the power delta_i for each i. So for the n-point function I get an overall factor like this, which depends on the individual x's, and this is a more constraining condition on the form of the correlator. What does it do for us?
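Schematically, the inversion trick and the resulting covariance of the correlator, for scalar primaries, are (my shorthand; signs and index placements are convention-dependent):

```latex
% Inversion, special conformal transformations, and covariance of A
I:\; x^{\mu}\mapsto \frac{x^{\mu}}{x^{2}},
\qquad
K_{\mu} = I\,P_{\mu}\,I,
\qquad
\mathcal{O}(x)\;\xrightarrow{\;I\;}\;(x^{2})^{\Delta}\,\mathcal{O}(x'),
\qquad
A(x_{i}') = \prod_{i}\left(x_{i}^{2}\right)^{\Delta_{i}} A(x_{i}).
```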
Just as we saw how x_ij squared, which is what things can depend on, transforms under a scale transformation — there it was a very simple homogeneous transformation — here, under inversion, x_ij squared transforms to x_ij squared divided by x_i squared x_j squared. That's a small algebraic exercise, which, if you haven't seen it before, you can just check. So under inversion things transform like this, and clearly the ratios which were scale invariant, like the one above, are not in general invariant under this transformation. Whenever you have a symmetry, it's useful to construct the things which are invariant under it, and these are the so-called cross ratios, which are not just ratios. You can see that if I take x_ij squared x_kl squared, divided by x_ik squared x_jl squared — that's why it's called a cross ratio, because I'm crossing the indices, taking a permutation of them — these are invariant under inversion. And that's almost obvious: x_ij squared will pick up a factor of x_i squared x_j squared, x_kl squared will pick up x_k squared x_l squared, and in the denominator you have the same indices, i k and j l, so you pick up the same factor downstairs as well, and everything cancels; the whole thing is invariant. So this is invariant under inversion, and therefore under special conformal transformations. So, apart from an overall factor, things must now depend not on ratios but on these cross ratios. How many such cross ratios are there for an n-point function?
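Before counting them, note that the inversion invariance just argued can also be checked numerically. Here is a minimal sketch (the point values, function names, and tolerances are mine), using the cross ratio x_12² x_34² / (x_13² x_24²) for four points in R⁴, contrasted with a plain ratio, which is scale invariant but fails to be inversion invariant:

```python
# Numerical check that a cross ratio is inversion-invariant while a plain
# ratio is not (a minimal illustrative sketch; point values are arbitrary).
import numpy as np

def sq(a, b):
    """Squared Euclidean distance x_ab^2 = (x_a - x_b)^2."""
    return float(np.sum((a - b) ** 2))

def invert(x):
    """Inversion x^mu -> x^mu / x^2."""
    return x / np.dot(x, x)

def cross_ratio(x1, x2, x3, x4):
    """u = x_12^2 x_34^2 / (x_13^2 x_24^2): indices crossed between numerator and denominator."""
    return sq(x1, x2) * sq(x3, x4) / (sq(x1, x3) * sq(x2, x4))

# Four arbitrary points in R^4 (avoiding the origin, where inversion is singular)
pts = [np.array([1.0, 0.0, 0.0, 0.0]),
       np.array([0.0, 2.0, 0.0, 0.0]),
       np.array([3.0, 1.0, 0.0, 0.0]),
       np.array([0.0, 1.0, 2.0, 0.0])]
inv_pts = [invert(p) for p in pts]

u_before = cross_ratio(*pts)
u_after  = cross_ratio(*inv_pts)

# A plain (merely scale-invariant) ratio, x_34^2 / x_12^2, for contrast
ratio_before = sq(pts[2], pts[3]) / sq(pts[0], pts[1])
ratio_after  = sq(inv_pts[2], inv_pts[3]) / sq(inv_pts[0], inv_pts[1])

print(np.isclose(u_before, u_after))          # True: the cross ratio is invariant
print(np.isclose(ratio_before, ratio_after))  # False: the plain ratio is not
```

Each index appears once upstairs and once downstairs in the cross ratio, so the factors of x_i² x_j² from the inversion cancel pairwise, exactly as argued above.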
There are n into n minus three by two of them. Let's quickly prove that; it's often stated, but it's very simple to show this number. You see there are fewer of them than the ratios, of which there were n into n minus one by two, minus one. So: there are n into n minus three by two independent cross ratios for n points, and I'll just quickly sketch why, inductively. Suppose someone gave you a set of independent cross ratios for n minus one points; call that set C, involving the points x_1 up to x_{n-1}. Now we add the point x_n and want the independent cross ratios for the set of n points. The new cross ratios must involve x_n, and we can construct them with x_an squared in the numerator and x_2n squared in the denominator, completed by distances among the old points so that they are proper cross ratios, where a runs from three up to n minus one. So you've taken into account the separations from the points two through n minus one to x_n; these are all independent cross ratios, and there are n minus three of them. What's missing is x_1n, so you also add one more cross ratio involving x_1n. So you've built independent cross ratios involving the new point x_n, and you've added n minus three plus one, that is, n minus two of them. So if s_{n-1} is the number of independent cross ratios for n minus one points, we have the recursion relation s_n equals s_{n-1} plus n minus two. Then all you need to know is that for four points there are two independent cross ratios, which will be important for us, so let me write them down. Conventionally they are chosen to be u and v — I hope I'm keeping to the same convention as in the bootstrap lectures — the standard choice being u equals x_12 squared x_34 squared divided by x_13 squared x_24 squared, and v equals x_14 squared x_23 squared divided by x_13 squared x_24 squared. Four points is the first non-trivial case where you have any cross ratio at all, and with this starting point the recursion tells you that s_n is n into n minus three by two. So we have all these cross ratios, which are fewer in number than what a general theory could depend on. Coming back to our question of what an amplitude satisfying this covariance looks like: we can write that A(x_i) is of the form, as before, a product of x_ij squared to some powers delta_ij, times a reduced amplitude which depends only on the cross ratios, C_n being the set of independent cross ratios we wrote down over there. So we write it in a similar form as before, except that this overall factor with the delta_ij must now take into account the covariance of the transformation. The amplitude is not invariant; if it were invariant, we would build it purely out of the cross ratios. It is covariant, and to take the covariance into account we have this additional factor. Using the fact that x_ij squared transforms under inversion as x_ij squared over x_i squared x_j squared, we choose the delta_ij so that the sum over j, with j not equal to i, of delta_ij equals delta_i — just delta_i, not two delta_i — because the amplitude transforms with x_i squared to the delta_i for each i: each x_ij squared carrying the index i contributes a factor of x_i squared raised to the corresponding delta_ij, so summing over all j, the total power must equal delta_i. These are n constraints, one for each i from one to n. So now the delta_ij are more restricted, but there is again some arbitrariness, which can likewise be absorbed: two different choices will differ by something that depends only on cross ratios, as you can convince yourself. So you choose a canonical set of delta_ij such that they obey these conditions, but nothing else — there are only n conditions on the n into n minus one by two delta_ij's, so you still have quite a lot of freedom — and then everything depends only on this reduced amplitude. It's a much bigger simplification than in a general Poincaré-invariant theory, because everything is now essentially in a function which depends on far fewer variables: in a general Poincaré-invariant field theory things could depend on n into n minus one by two of the x_ij squared, but here the function depends only on n into n minus three by two variables. That's the reason why the two- and three-point functions are in a sense trivial: there are no cross ratios there, so they have no interesting functional dependence. For four points, the four-point function depends non-trivially on the two variables u and v, whereas a general four-point function would have depended on four into three by two, six, variables; here it's just two variables. It's a vast simplification. In a scale-invariant theory it would still have depended on five variables, so special conformal invariance really gets you quite a lot of mileage; you have things which depend on much fewer variables. Any questions? — Something about how this grows with the number of points, you're saying? Can you repeat the question? He was, I think, asking whether the functional dependence grows quadratically in the number of points. Yes, in general that is the case; in special dimensions there can be additional relations. If you are in two dimensions, for instance, these two are not independent: they are sort of complex conjugates, or you can form complex conjugates out of them, because all the points are forced to lie on a plane, so there are linear relations between the vectors, and in higher dimensions there will be some more complicated constraints. But typically that's the functional dependence; those additional relations are accidents of a particular dimension, while the functional dependence is more uniform. In any case, in three and four dimensions these are genuinely independent cross ratios for the four-point function; in two dimensions they are essentially complex conjugates. — Rajesh, we go directly to the question session. — Oh, okay, all right, I think that's already announced. Any other question? Okay. So let's thank Rajesh first. Thank you.