So by now we know all the building blocks of this approach, and now we should just put them together and see what follows. Let us consider a four-point function of four identical scalar fields. As we discussed last time, this four-point function is going to be proportional to some function g(z, z̄), which can be expanded into conformal blocks: the sum over k of f_k² g_{Δ_k, ℓ_k}(z, z̄). And there is an equation satisfied by this function g: [(1−z)(1−z̄)]^{Δ_φ} g(z, z̄), minus the same expression with z → 1−z and z̄ → 1−z̄, has to be equal to zero. So we take this expansion, we put it into this equation, and we get some equation for the dimensions Δ_k, the spins ℓ_k, and the coefficients f_k². By the way, notice that out of the infinitely many terms in this sum, there is one term which is known: the coefficient of the unit operator. We normalize our fields by saying that the two-point function is equal to 1/r^{2Δ_φ}; it's just a normalization. And this means that if you take the OPE φ(x) × φ(0), this OPE includes infinitely many operators, but it also includes the unit operator with coefficient equal to one. This is just normalization. So let's separate the OPE into the unit operator plus all the other operators, with coefficients that we don't know. What I'm saying here is that in this equation there is going to be one term that we know, the one corresponding to the unit operator. By the way, the conformal block of the unit operator is just one: g_unit(z, z̄) = 1 identically, because the unit operator has no descendants. The derivative of the unit operator is zero, so its conformal block consists of just one term.
So in this equation, there is one term that you know, the one corresponding to the unit operator, and all the other terms that we don't know. And it's a natural question to ask: what can we learn about all these other operators and their OPE coefficients, given, for example, the dimension of the field φ? Is there anything general that you can say about the operators appearing in the OPE? It's not a priori clear, because, you see, it's an equation with infinitely many unknowns: all the f's are unknown, all the Δ_k's are unknown, and it's not a priori clear how constraining this equation is. So actually, there are two ways you can try to approach this problem. One way would be to just try to solve this equation: find some f_k's and Δ_k's which solve it. Well, that's a very hard problem, because you have infinitely many unknowns; basically, it's not known how to solve it, and this problem has not yet been solved. But then there is a more modest way to approach the problem, which is to say: there is this whole space of CFT data, the f's and the Δ's. Can we use this equation, for starters, to rule out some parts of this big space? Can we get a result of the type that some big chunk of the CFT data would be inconsistent — some result of this type? Actually, we can even put the question somewhat differently. Let's make some assumptions about the spectrum, about the Δ_k's. You can make different assumptions; let me give you one example — a very silly one. Let's assume that Δ_k is equal to k.
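To fix notation, the setup just described can be written out (schematically; normalization conventions vary between references):

```latex
g(z,\bar z) \;=\; 1 \;+\; \sum_{k} f_k^2\, g_{\Delta_k,\ell_k}(z,\bar z),
\qquad
\big[(1-z)(1-\bar z)\big]^{\Delta_\phi}\, g(z,\bar z)
\;-\;
\big[z\bar z\big]^{\Delta_\phi}\, g(1-z,1-\bar z) \;=\; 0,
```

where the "1" is the unit-operator contribution, whose conformal block is identically one.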
That would be one very silly but possible assumption. So now we have fixed the spectrum: Δ_k = k. Once the spectrum is fixed, you can view this equation as an equation for the f_k²'s. So the unknowns are x_k = f_k². And the important point is that all of these unknowns have to be non-negative numbers. So one question that you can ask is: does this equation, seen as an equation for the x_k's, have a solution such that all x_k's are positive? This is already a more concrete question, and you can imagine that one can try to make progress on it. It becomes a kind of linear algebra problem. Well, it's not yet in a form that you can put on a computer, because the problems that we can put on a computer involve vectors and matrices and so on, but here we have functions. So in order to put this problem on a computer, you have to somehow come up with a way to translate this problem for functions into a problem for vectors and matrices. One way to do this would be to say: well, this equation has to be satisfied for all z and z̄, right? But let me just pick a finite number of points z_i, and instead of requiring that the equation is satisfied for all z and z̄, I'm just going to require that it's satisfied at this finite number of points. So there we go: instead of a problem for functions, we have a problem for vectors, and the length of the vector is equal to the number of points that we chose. Of course, this is going to be a necessary condition — that the equation is satisfied at each z_i is necessary, but not necessarily sufficient.
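A minimal sketch of this point-sampling trick, assuming a made-up stand-in for the conformal blocks (a real block is far more involved; `toy_block` is purely illustrative): each operator becomes a finite vector of values of its crossing combination at the chosen points.

```python
# Toy illustration of the point-sampling trick: each operator is replaced by a
# finite vector of values of its crossing combination at chosen points z.
# NOTE: toy_block is a made-up stand-in, NOT a real conformal block.

def toy_block(delta, z):
    # placeholder for the block g_{Delta}(z, zbar) on the diagonal z = zbar
    return z ** delta

def crossing_vector(delta, delta_phi, points):
    # F_Delta(z) = (1-z)^{2*delta_phi} g_Delta(z) - z^{2*delta_phi} g_Delta(1-z)
    return [(1 - z) ** (2 * delta_phi) * toy_block(delta, z)
            - z ** (2 * delta_phi) * toy_block(delta, 1 - z)
            for z in points]

points = [0.3, 0.4, 0.5]
vec = crossing_vector(2.0, 0.6, points)
print(vec)  # last entry is exactly 0: the combination vanishes at z = 1/2
```

The crossing combination F_Δ is antisymmetric under z → 1 − z, which is why the crossing-symmetric point z = 1/2 contributes nothing and one samples (or, below, Taylor-expands) around it.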
But if you take more and more points, then you approach the original equation for all z and z̄ more and more closely. Instead of picking many points, you can take a different strategy: pick one particular point, say z = 1/2, and Taylor expand around it up to some finite order, up to some large order Λ. Then impose that this equation be satisfied at each order in this Taylor expansion. Again, you are going to get a finite-dimensional system of equations. And then, what do you see? If you do this truncation, you have a finite-dimensional system of equations, but you still have infinitely many unknowns, because the spectrum contains infinitely many operators, at least in this example that we chose. Now, if you did not have the constraint that the x_k have to be positive, then basically you would always find a solution; the solution would always exist. But with the constraint that the x_k have to be positive, it turns out that it's not always possible to find a solution. Even though you have infinitely many operators, you cannot necessarily find coefficients so that they all sum to the contribution given by the unit operator. You see, the contribution of the unit operator can be viewed as the right-hand side, and you have to reproduce it by all the other operators with positive coefficients. This is not always going to be possible; that's clear. You can try to come up with geometrical pictures for that, but you can also view it as a numerical problem that you can put on the computer and analyze there. And the reason it's possible to analyze it on the computer is the following: the constraint that the x_k have to be positive is a system of linear inequalities.
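The role of the positivity constraint can be illustrated with a toy sketch in the dual picture: if some linear functional α is non-negative on every operator vector but strictly positive on the unit-operator vector, then no solution with all x_k ≥ 0 can exist. The two-component vectors below are made-up data, not real bootstrap vectors, and the angle scan is a brute-force stand-in for a proper linear-programming solver.

```python
import math

def find_certificate(F_unit, F_ops, steps=3600):
    # Scan unit functionals alpha = (cos t, sin t).  If some alpha satisfies
    # alpha . F_unit > 0 while alpha . F >= 0 for every operator vector F,
    # then F_unit + sum_k x_k F_k = 0 has no solution with all x_k >= 0:
    # applying alpha to the equation would give a strictly positive number.
    dot = lambda u, v: u[0] * v[0] + u[1] * v[1]
    for i in range(steps):
        t = 2 * math.pi * i / steps
        a = (math.cos(t), math.sin(t))
        if dot(a, F_unit) > 1e-9 and all(dot(a, F) >= 0 for F in F_ops):
            return a
    return None

F_unit = (1.0, 1.0)
feasible_ops = [(-1.0, -2.0), (-2.0, -1.0)]   # x_1 = x_2 = 1/3 cancels F_unit
infeasible_ops = [(1.0, -0.5), (2.0, -0.25)]  # first components all positive

print(find_certificate(F_unit, feasible_ops))    # prints None: no exclusion
print(find_certificate(F_unit, infeasible_ops))  # prints (1.0, 0.0)
```

Real bootstrap codes solve exactly this kind of feasibility question, but in high dimension and with rigorous solvers rather than an angle scan.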
It turns out that numerically, problems involving linear inequalities can be solved on the computer almost as quickly as problems involving linear equalities. In courses on linear algebra we all study linear equations, but there is a chapter of linear algebra which involves linear inequalities, which unfortunately is not usually covered in a basic linear algebra course. There are algorithms for solving problems involving linear inequalities which are almost as efficient as the ones for linear equalities. For example, there is one algorithm called the simplex method, which allows you to decide whether such a problem has a solution or not. — There was a question there. Excuse me? Oh, I'm sorry, yes, it's z → 1 − z, and z̄ → 1 − z̄, thank you. — So this is the core logic of this numerical approach: you have non-perturbative ingredients, the conformal blocks; then you make some assumption about the spectrum, and you ask whether this assumption is consistent with crossing symmetry or not. But this assumption about the spectrum that I gave as an example is of course very silly; it is not an interesting assumption in practice. You can come up with more interesting assumptions, so let me give you an example of one. Let me write the φ × φ OPE and split it up. There is the unit operator; then there are operators of spin zero, operators of spin two, spin four, and so on. When you take the OPE of two identical scalar operators, only operators of even spin appear, just because of the symmetry under the interchange of the two fields. What can we say about the dimensions of these operators of spin zero, spin two, spin four, and so on?
So for example, among the spin-two operators there is going to be the stress tensor T_{μν}, which has dimension D: the stress tensor in any conformal field theory has dimension exactly equal to the spacetime dimension D. Moreover, it turns out that it is not going to be the only spin-two operator in this OPE. There are going to be infinitely many spin-two operators, but in unitary theories all the other ones have dimension larger than the stress tensor. So it's going to be the stress tensor plus operators of dimension larger than D. This is called the unitarity bound. The unitarity bound tells you that for each spin there is a minimal allowed dimension consistent with unitarity. Concretely, it says that the dimension of an operator of spin ℓ has to be larger than or equal to D + ℓ − 2, for spin ℓ = 1, 2, 3, and so on. And for spin zero, Δ has to be larger than or equal to D/2 − 1, which is the dimension of the free scalar field. So for scalars, the unitarity bound says that in an interacting theory the dimension of a scalar operator can only be larger than in the free theory. And analogously for other spins: the free scalar theory contains the stress tensor, but it also contains conserved currents of higher spin, and the dimension of such a conserved current of spin ℓ is equal to ℓ + D − 2. What the unitarity bound says is that in an interacting theory, the dimensions can only go up. So in particular, for spin two you know that all the dimensions are going to be larger than or equal to D; for spin four, larger than or equal to D + 2, and so on. And so I'm going to make the following assumption about the spectrum.
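Collecting the unitarity bounds just stated, for an operator of spin ℓ and dimension Δ in spacetime dimension D:

```latex
\Delta \;\ge\;
\begin{cases}
\dfrac{D}{2}-1, & \ell = 0 \quad (\text{free scalar value}),\\[6pt]
D+\ell-2, & \ell = 1,2,3,\dots \quad (\text{saturated by conserved currents; for } \ell=2,\ \text{by } T_{\mu\nu}).
\end{cases}
```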
For spins two, four, and so on, I'm not going to make any assumption apart from what follows from the unitarity bound — which is not really an assumption, just an assumption of unitarity. So here I'm assuming the unitarity bounds. But for spin zero, I'm going to make a separate assumption. Let me call the lowest-dimension scalar operator in this OPE epsilon: so this is the operator ε, of dimension Δ_ε, and then there are higher-dimension scalar operators on top of it. So my assumption is going to be quantified by what the dimension of the operator ε is. If Δ_ε is equal to the free-scalar value D/2 − 1, then I'm not making any assumption at all. But as I start to push Δ_ε up, I'm making a stronger and stronger assumption. And the question that I would like to ask is whether any value of Δ_ε is allowed. Is this question clear? I'm assuming that ε is the lowest-dimension scalar, I'm taking its dimension to be Δ_ε, and I would like to ask whether this is allowed. So for each value of Δ_ε, I can ask whether this equation has a solution involving ε of dimension Δ_ε and all the other operators, about which I'm not going to assume anything except that, for spin zero, they have to lie above ε, and for spins two, four, and so on, they have to lie above the unitarity bounds. This is the question that I would like to ask, and as I explained, it can be analyzed on a computer. Any questions about the question? Okay. Now, you can actually understand analytically that there is going to be an upper bound on the allowed value of Δ_ε.
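Schematically, the spectrum assumption just described is:

```latex
\phi \times \phi \;\sim\; \mathbf{1}
\;+\; \epsilon_{\,\Delta_\epsilon}
\;+\; \{\text{scalars with } \Delta \ge \Delta_\epsilon\}
\;+\; \sum_{\ell=2,4,\dots} \{\text{spin-}\ell \text{ operators with } \Delta \ge D+\ell-2\},
```

and the question is for which values of Δ_ε this is consistent with crossing symmetry.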
I'm not going to explain this; you can look it up in the notes. But basically, if you use the structure of the conformal blocks — I explained to you that they have a certain expansion in terms of this variable ρ and so on — then you can show rigorously that there is an upper bound on the allowed value of Δ_ε: it cannot be arbitrarily large. I'm not going to explain how this comes about, but you can look it up. Now, the bound that you get analytically is going to be a crude one; you can only prove its existence. If you really want to compute the exact value of the bound, you have to do the analysis on a computer. And when you do this analysis, you discover amazing things. You can do it in any number of dimensions — for example, in two dimensions. So let me explain this plot. On the horizontal axis is the dimension of the operator φ with which I start, the operator on the left-hand side of the OPE. I vary the dimension of φ, and for each value of the dimension, I compute an upper bound on the dimension of the operator ε. Here it's called Δ_0: Δ_0 is the maximum allowed value of the dimension of ε. So you get a curve. Actually, here I have several curves, as you can see, labeled by this parameter Λ — this is the truncation parameter. I told you that when you do this numerical analysis, you either choose a certain number of points or a certain number of derivatives, and this parameter Λ counts how many derivatives you included in your analysis. As you see, as you increase Λ, the bound that you get becomes stronger and stronger, until it converges to some line.
This thick blue line is the limit of the bound as Λ goes to infinity. All points above this line, in this white region, are excluded. In other words, we know rigorously that there cannot be any unitary conformal field theory in two dimensions which has, say, Δ_φ = 0.1 and Δ_ε = 1.5. There is no such conformal theory, period. This is already an interesting result by itself, because even in two dimensions, where we know a lot about conformal theories, we know especially much about one particular class of two-dimensional conformal theories — the rational conformal field theories, which have finitely many primary fields. This is a subclass of conformal theories, and if you relax this assumption, then we know significantly less. But this result is completely general: it applies to both rational and non-rational conformal theories. So this is already interesting. But another interesting thing which transpires in this analysis is the following. You can ask: well, you derived some numerical bound, but in order to make sure that you did not make any stupid mistake, shouldn't you check some conformal theories which we know exist — do they satisfy this bound? For example, take the rational conformal theories, where we know the dimensions of the operators. Do they satisfy the bound or not? Well, you discover that they do. Moreover, some of these theories actually saturate the bound. For example, the two-dimensional Ising CFT lies at this point, and the other minimal-model CFTs, which describe the tricritical and multicritical Ising models — the CFTs M(m, m+1) — also lie along this line, at these points. But particularly interesting is the Ising model CFT, because it lies at this corner point. Well, this is remarkable — and why is it remarkable?
Because we are looking for a method which would be applicable in all dimensions at the same time. There are all these beautiful two-dimensional methods, but they are really applicable only in two dimensions; we don't know how to extend them to higher dimensions. And here is a method — a numerical method, but one applicable in any dimension — and yet we see that in two dimensions it allows us to single out some CFTs, for example the Ising model CFT. So now the natural next step is to say: let's take this method and apply it in three dimensions and see what's going to happen. For example, are we going to single out the three-dimensional Ising model CFT if we just repeat exactly the same analysis in three dimensions instead of two? That's a very natural question to ask. Yes — there are many things which are special about the two-dimensional Ising model, but which one of them is responsible for producing this kink? Well, there are several answers to this question, but one answer is the following. You see, there is this line which interpolates between all the minimal models. It means that there exists a one-parameter family of four-point functions which solve crossing symmetry, which for special values of the parameter reduce to the minimal models; you interpolate between them, and you arrive at the Ising model. Actually, we know this analytically: this plot was produced numerically, but the four-point functions which describe this line are also known analytically. So you can ask: let's look at this analytic solution and ask, why can't we just keep going? Why can't we continue this analytic solution below the Ising model? Well, the answer is clear. If you take this analytic solution and expand it in conformal blocks, you can derive analytically what each OPE coefficient is going to be.
But now, as you continue Δ_φ, you find that at this particular value one of these OPE coefficients goes to zero, because the Ising model has certain null states. The norms of these states are equal to zero, and so the contributions corresponding to these states disappear at the Ising model point. If you were to take this solution and continue it to even smaller values of Δ_φ, this coefficient would become negative. But we are imposing the condition that all coefficients have to be positive. So it's clear that at this point this line has to terminate, and you have to switch to some other regime. Analogously — well, here unfortunately we don't know the analytic solution, but we can follow numerically the solution to crossing symmetry corresponding to this branch. And again you see that the same thing happens: there is this solution, and as you come to this point, there is one OPE coefficient which becomes zero, and you cannot continue. So there is this crossover phenomenon. Yes? — There are many points which lie inside the allowed region. — Well, the minimal models have many OPEs. Depending on which field you call φ and which one ε, some of these OPEs lie on the line and other OPEs lie inside the allowed region. In free theory too — in the free scalar theory — you can find OPEs lying along this line. Why the bound is saturated this way is not fully understood. Intuitively you would say it's because there are many ways to arrive at the minimal models in the usual two-dimensional way. For example, one way to derive the minimal models is to say that these are the unique unitary models with central charge smaller than one; this gives the M(m, m+1) series. This is a completely algebraic way. And there are many states in these minimal models which are null.
Now, all this logic does not exactly carry over — it's not exactly clear what corresponds to it in our numerical approach, because in this approach we are not using the Virasoro algebra. We are only using SL(2,C), only a subgroup of the Virasoro algebra. And yet, what you find is that the spectrum along this line organizes itself into Virasoro multiplets. It's consistent, but it does not really explain the reason behind it; I think the answer is that it's not fully understood. But let me show you now the three-dimensional results. If you do this in three dimensions, then you find, again numerically, this plot. And you see that this plot is similar to the two-dimensional one: it again has an allowed region for small dimensions, a disallowed region, and again a corner point. Well, in this plot the corner point is not very sharp; it's a bit smoothed out. But when we first produced this plot, we were very happy, because the position of this corner point actually did agree with the known dimensions of the operators σ and ε in the three-dimensional Ising model CFT. As I mentioned to you, there are several ways to arrive at these operator dimensions: even before we started developing these CFT methods, people had already done Monte Carlo simulations and had tried to resum perturbation theory. So it was known, with not very good but reasonable precision, that these dimensions take such and such values. And those values agreed with the position of this corner point. This is really remarkable, because, you see, when I was explaining the bootstrap philosophy to you, I said: if we impose this bootstrap equation for all four-point correlation functions, then we should be able to single out the consistent conformal field theories in any number of dimensions.
But here we are not imposing the bootstrap equation for all correlation functions; we are imposing it for just one correlation function, ⟨φφφφ⟩. And yet, for this three-dimensional Ising model CFT, this seems to be sufficient to fix it — if you believe in this corner-point story. So, that was four years ago. The question then was how to improve the precision: here the corner point is not very precise, and we wanted to get the dimensions of these two operators with good precision. And we found an interesting twist in the story, which we called c-minimization. Let me explain what this means. Basically, the point is the following. We have evidence that the Ising model lies at a corner point of the space of CFT data; it's a special point. Now, how do you get to a corner point? Well, you can imagine starting from the space of CFT data and pushing in some direction. Imagine that there is a wind blowing in this direction: you start from the space of CFT data and you follow the wind, and if the wind blows in the right direction, then you are going to end up at the corner point. But a corner point solves many optimization problems — you expect that there is a whole range of optimization problems, not a unique one. We found one optimization problem, the maximization of Δ_ε, which brings you to the corner point. But there may be other optimization problems which also bring you there, and it's not clear that the Δ_ε optimization is the most natural one. What would be a more natural optimization problem for the Ising model, specific to the Ising model?
Well, there is one natural quantity that you may want to optimize, which is the central charge. It is a natural quantity because it's known that in two dimensions the Ising model is the unitary conformal field theory with the smallest possible value of the central charge, c = 1/2. So in 2D, c = 1/2 is the smallest possible value for a unitary theory, and this is a rigorously known result. Now, what about three dimensions? First of all, what is the central charge in three dimensions? In two dimensions there are many possible definitions of the central charge: you can define it through the Virasoro algebra, through the Weyl anomaly, or through the stress tensor two-point function, and all these definitions give you the same central charge c. In three dimensions, not all of these definitions make sense. For example, the Weyl anomaly doesn't make sense, because there is no anomaly in odd dimensions; the Virasoro definition you also lose, because there is no extended algebra. So the only definition of the central charge which generalizes naturally to three dimensions is the one through the stress tensor two-point function. If you take the two-point correlation function of the stress tensor, ⟨T_{μν} T_{λσ}⟩, then in two dimensions this is proportional to the central charge. Let's take this as the definition of the central charge in three dimensions as well: it's going to be C_T over x^6 times some tensor structure. Let me take that as the definition of the central charge in 3D. Now, this definition is not totally useless. By the way, notice that for the other fields I said I'm going to normalize the two-point function to one; for the stress tensor I did not normalize it to one, I kept this normalization C_T. The reason is that for the stress tensor, the natural normalization is through the Ward identities.
So the stress tensor already has a natural normalization through the Ward identities, and that's why it's useful to keep the central charge explicit. If you use the Ward identities, then you can work out the OPE coefficient of the stress tensor in the OPE φ × φ: this OPE coefficient is going to be Δ_φ divided by the square root of C_T, multiplying T̃_{μν}. Here T_{μν} is the stress tensor normalized canonically, and T̃_{μν} is the stress tensor rescaled to have its two-point function normalized to one. And I'm claiming that with such a normalization, this OPE coefficient is going to be Δ_φ divided by √C_T. The Δ_φ just follows from the Ward identities, because the Ward identities involve the dimension of the field φ; and working this out, you can figure out the correct normalization of the central charge — it's the one that I'm giving you here. So the physical meaning of this is the following. If you take a theory with a very large central charge, then the stress tensor is going to contribute less and less to the four-point function of scalar operators. This, for example, is the situation for large-N theories with AdS duals: there, if you're familiar with the AdS story, the stress tensor contribution, which you describe using gravity in the bulk, is 1/N suppressed. So in the limit of large central charge, the stress tensor decouples. If, on the other hand, you take the limit of small central charge, then the stress tensor contribution becomes larger and larger, and you can expect that there is going to be some inconsistency. If the central charge is too small, then this stress tensor contribution is going to be huge, and you may expect it to be inconsistent with crossing symmetry.
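In formulas, the definition and the Ward-identity statement above read, up to convention-dependent numerical factors:

```latex
\langle T_{\mu\nu}(x)\, T_{\lambda\sigma}(0)\rangle
\;=\; \frac{C_T}{x^{2D}}\; \mathcal{T}_{\mu\nu,\lambda\sigma}(x)
\qquad (x^{2D} = x^{6} \text{ in } D=3),
\qquad
f_{\phi\phi\tilde T} \;\propto\; \frac{\Delta_\phi}{\sqrt{C_T}},
```

where T̃_{μν} is the stress tensor rescaled to have a unit-normalized two-point function. The 1/√C_T makes explicit why the stress tensor decouples from scalar four-point functions at large central charge and dominates at small central charge.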
Because crossing symmetry says that, roughly speaking, all contributions have to be of the same order of magnitude, and then they combine with each other so that the sum is crossing symmetric. But if one of the contributions is too big, then it might be impossible to satisfy crossing symmetry. Well, this is just an intuitive argument, but you can try to make it precise. You can ask: given Δ_φ, what is the smallest possible value of the central charge consistent with crossing symmetry? This question is once again amenable to the numerical analysis that I described, by exactly the same algorithm. So you do this numerical analysis and, lo and behold, you find that this is true. In this plot, on the horizontal axis I plot the dimension Δ_φ, and on the vertical axis the lowest allowed value of the central charge, normalized to the free scalar central charge; this is in three dimensions. And you find that this bound has a minimum. Again, here the minimum is not very sharp, but it is located precisely where the three-dimensional Ising model is supposed to be. So this numerical plot gives evidence for the conjecture that the three-dimensional Ising model CFT is the unitary CFT with the smallest central charge in three dimensions. And once we made this conjecture, we pushed the numerical analysis further and were able to determine the position of this minimum precisely enough to get the dimension of the σ operator — the one that I called φ — with many digits, and the central charge with many digits.
Moreover, when you go to the minimum, then of course the dimension of σ is fixed — that's the location of the minimum — and the central charge is also fixed; but it turns out that the dimensions and the OPE coefficients of all the other operators are also fixed, because the minimum is an extremal point. If you want to reach the minimum, then all the parameters have to be fine-tuned to take precise values. So you basically know everything. If you take the assumption that the Ising model lives at the minimum, then this gives you an algorithm to determine all the CFT data of the 3D Ising model. Any questions about this? So this was a few years ago, and then what happened is the following. It is plausible that the Ising model has to have the minimal central charge, but strictly speaking this is still an assumption, so it would be nice to find a way to justify it. That has not yet been done, but what has been done is that we found a way to get rid of this assumption completely: a way to do the analysis which is free of any unproven assumptions, at least for the three-dimensional Ising model. Okay, I'm not sure I have time to talk about this, but let me just say a few words. Basically, as I said, the amazing thing about this analysis is that we are studying just one correlation function and we are already learning a lot. But what if we studied several correlation functions together? For example, this three-dimensional Ising model CFT has two lowest-dimension operators, σ and ε, and up to now we only studied one correlation function, ⟨σσσσ⟩. Why can't we study all possible correlation functions of σ and ε together? Surely we will learn more if we do this.
And it turns out that if you study these three correlation functions together, then not only do you learn more, but you basically fix the model completely. So this is another amazing property of the three-dimensional Ising model. Let me explain just in words what this plot means. You take these three correlation functions, you study them together, and you ask: for which values of the dimensions of these two operators is the crossing symmetry equation for all three correlation functions consistent? If you do this, then you find the following. We had these big allowed regions, and a priori all points inside those regions were consistent, and only the corner point corresponded to the Ising model. But now it turns out that if you study the three correlation functions together, a big chunk of this allowed region goes away, and you are just left with a small island around the Ising point. And here you are not making any assumption — no c-minimization or anything, just crossing symmetry. Then this analysis was pushed a bit further, and basically it led to the currently most precise determination of the operator dimensions in the three-dimensional Ising model, using this bootstrap method. So anyway, this was a bit of a fast review of the numerical techniques, but, well, that's as much as I could do in the time I had. I think I'll stop here.