Thanks, Nick. I'd like to thank the organizers, especially Robert and Andrea, for putting together this event and for inviting me to present this line of work. As you know, in the theory of critical phenomena there are two complementary approaches. One of them, central to this conference, is the RG. The other is the conformal field theory approach, which many of you are familiar with in the two-dimensional case; today I will focus on recent work having to do with the three-dimensional case. Just a few words about the history. This approach goes back to the 70s, to the work of Polyakov and of Ferrara, Gatto, Grillo, and Mack. That was very important work, which laid the formal foundations of the idea, but at the time no concrete results were obtained, for various reasons. The modern period started in 2008, when I wrote a paper with Riccardo Rattazzi, Erik Tonni, and Alessandro Vichi. It took a while for this idea to realize its potential: the most impressive results were obtained only in the last couple of years, and this year we founded a collaboration on the non-perturbative bootstrap with the support of the Simons Foundation, so we hope to take it to the next level. So why do we need yet another approach in the theory of critical phenomena? There are two reasons. The first is practical: the existing approaches are not fully satisfactory. The RG is a great qualitative tool to map the phase diagram, but if you want to make precision calculations it has limitations; and if you want to simulate critical phenomena, that is also hard, expensive, and not always conclusive. So let me give you three examples of long-standing puzzles in the theory of critical phenomena.
The first example has to do with the measurement of the ν critical exponent in liquid helium, the O(2) universality class. For about ten years there has been an eight-sigma discrepancy between the best experimental measurement, done on board the Space Shuttle, and the best Monte Carlo determination of the same exponent done by theorists. The discrepancy has stood for ten years, and the RG is not able to tell you who is right and who is wrong: its error bars are too large to settle the question. Here is a second one; not a puzzle exactly, but a question where it would be interesting to have more precise information. It has to do with cubic anisotropy in the Heisenberg model in three dimensions. Basically, the question is whether the Heisenberg fixed point in 3D is stable with respect to the cubic anisotropic perturbation. You have two fixed points, the cubic fixed point and the Heisenberg fixed point, and the question is whether the operator which drives this perturbation, a spin-four operator with respect to the O(3) group, is a relevant or irrelevant perturbation of the Heisenberg fixed point. It turns out that the dimension of this operator is incredibly close to three. By the best determination it is smaller than three at the two-sigma level, but two sigma is not a great deal of accuracy, and other studies indicate that this operator is actually irrelevant, so we still don't know which of these pictures is realized. That's my second example. The third example is the determination of the critical number of fermion species for which three-dimensional QED flows to a conformal fixed point in the infrared. In this case, as a table I took from a recent paper shows, you can pick basically any number between one and ten and you will find it in the table. It's really an unacceptable situation. So that was the practical reason for having another approach.
Then there is a conceptual reason. On the one hand, critical behavior is a universal phenomenon: the critical exponents describing the critical state are really fundamental constants of nature, just as fundamental as π or e or any other number in mathematics. But when we describe this critical state using the RG, we introduce a lot of non-universal bells and whistles along the way which have to somehow disappear in the end. It's a check that you are doing everything correctly if all this non-universal stuff drops out, but that's not satisfactory, in my opinion. Here at this conference people have quoted Wilson, Feynman, and who else? Quotes are dangerous, but as long as they fit your narrative, they're good, so here's one which fits mine. It's from Polyakov, who said that the renormalization group is a human-made thing: a smart way to calculate, but it does not have the breathtaking quality of the Dirac equation. What he had in mind, when he said this in an interview, is that we need a manifestly universal approach to the critical state, and CFT is precisely such an approach. So why CFT, why conformal? This would take another hour to explain in great detail, so it's going to be just one transparency. What happens is that if you have a continuum field theory which is rotation invariant, scale invariant, as is appropriate for a fixed point, and local, where the locality condition can be encoded, for example, by saying that the theory has a local stress tensor operator, then under these conditions you will generically acquire conformal invariance in the infrared. Well, we are already in the infrared, since we are imposing scale invariance. And genericity here is important: it can be stated, for example, by requiring the existence of interactions, so it should not be a Gaussian theory.
In fact, there are counterexamples to this implication, but all known counterexamples are Gaussian. So as long as you are willing to make this genericity assumption, the implication is true, and that's why we can use conformal invariance. So what is a conformal field theory? I'm going to review some basics; I apologize to those who know these things. In the most simple-minded way, a conformal field theory is just a collection of local operators, a list of local operators, generically an infinite list. For these local operators you are allowed to compute correlation functions; you only talk about correlation functions as observables, not about actions and things like that. And these correlation functions have conformal invariance. That's the most basic definition of a CFT, but of course there are other properties. So what do I mean by conformal invariance? Conformal invariance is invariance under the conformal transformations, the transformations preserving angles. In 2D the group of such transformations is infinite-dimensional: any analytic transformation has this property. In higher dimensions it is a finite-dimensional group, which consists of the Poincaré group, rotations and translations; in addition you have the dilatations; and then there is the special conformal transformation generator, which can be obtained from the translation generator by conjugating with an inversion. It is this special conformal generator which makes everything possible. The crucial property of this generator is that it has negative scaling dimension. In a normal field theory, the only thing you can do is differentiate your local operators, and if you differentiate, you raise the dimension of an operator by one unit.
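In equations, with a common convention for the dilatation generator D (conventions differ by factors of i), the raising and lowering statements are:

```latex
[D, P_\mu] = +\,P_\mu\,, \qquad [D, K_\mu] = -\,K_\mu\,,
```

so acting with a derivative P_μ raises the scaling dimension of an operator by one unit, while acting with the special conformal generator K_μ lowers it by one unit.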
In a conformal field theory you get this extra operation, which allows you to lower the dimension of your operator by one unit, and it is from here that a lot of the formal structure of conformal field theory follows. Coming back to the local operators: what are they characterized by? They are characterized by their spin under the rotation group, so they can be scalars, vectors, two-index tensors like the stress tensor, and so on. And they are characterized by their scaling dimension. The scaling dimension is what is related to the critical exponents, and I'm going to state my results in terms of scaling dimensions, because that's the natural number for the CFT approach. The scaling dimension Δ just means that an operator scales like 1/x^Δ; for example, a two-point function scales like one over |x − y| to the sum of the scaling dimensions. This is the usual thing. So what do we know about the scaling dimensions? Here I'm going to flash the latest results on the scaling dimensions at the critical point of the Ising model, obtained by Kos, Poland, Simmons-Duffin, and Vichi in March of 2016. In this plot, on the horizontal axis, I have the scaling dimension of one operator present in the Ising model, the σ operator; in the field-theoretic approach it would be called φ. On the vertical axis I have the scaling dimension of another operator, which we call ε; that is what RG people would call φ². The dimensions of these operators are related to the critical exponents η and ν, and the conformal bootstrap techniques predict that these dimensions have to live in this tiny red island. The dimensions of the island are given here: these are the numbers, with six significant digits in both Δ_σ and Δ_ε.
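For reference, the standard relations between these scaling dimensions and the critical exponents η and ν in d = 3 are:

```latex
\Delta_\sigma = \frac{d-2+\eta}{2} = \frac{1+\eta}{2}\,, \qquad
\Delta_\epsilon = d - \frac{1}{\nu} = 3 - \frac{1}{\nu}\,.
```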
What I would like to emphasize is that the error bars here are rigorous error bars. It's a rare example of a quantum field theory computation where the error bars are not just estimated but truly rigorous: anything outside of this range is fully excluded, it just cannot happen. These error bars are about a factor of 25 to 50, depending on which of the two exponents you look at, better than the best Monte Carlo determination of these critical exponents, made by Hasenbusch in 2010. And to the best of my knowledge, they are about three orders of magnitude better than the best compatible renormalization group calculation. What I mean by a compatible renormalization group calculation is this: you often see people doing an RG calculation and quoting ten digits which follow from, I don't know, the LPA approximation or something like that, because they can, but they don't estimate the systematic error of their approximation, and very often those results are incompatible with what we now know are the most precise values of these exponents. If you exclude the results which are incompatible, so they are simply wrong, then the best result which is not wrong has error bars about a thousand times larger. So the purpose of my talk will be to explain how we get these numbers. Just a few more words about CFT. I already mentioned the importance of the scaling dimensions and that they determine the two-point functions of operators. Here I want to say one more thing about the two-point function.
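In formulas, for unit-normalized primary operators, the two-point functions take the form:

```latex
\langle \mathcal{O}_i(x)\,\mathcal{O}_j(0)\rangle = \frac{\delta_{ij}}{|x|^{2\Delta}}\,,\qquad
\langle \mathcal{O}_\mu(x)\,\mathcal{O}_\nu(0)\rangle =
\frac{\delta_{\mu\nu} - 2\,x_\mu x_\nu/x^2}{|x|^{2\Delta}}\,,
```

with the vector case illustrating the spin-dependent tensor structure.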
It's a consequence of the CFT magic, of the existence of this special conformal generator K_μ, that the two-point function of these operators not only scales like one over |x − y| to twice the scaling dimension, but is actually diagonal, meaning that an operator has a non-zero two-point function only with itself; that's why I have this δ_ij here. Also, it's not just proportional to this power: the whole structure, which is a spin-dependent tensor structure, is fixed by conformal invariance. If you have a scalar, the structure is just one; if you have a vector, it's δ_μν − 2 x_μ x_ν / x². So it's fixed. Well, this is true for the so-called primary operators: in a conformal theory you divide the operators into primaries and their derivatives, the descendants. Basically, the primaries are the only operators you have to worry about, because all the other operators are derivatives of those. Okay, so once we are done with the scaling dimensions, the next thing to bring into the discussion is the operator product expansion. I'm sure everyone is familiar with the OPE in quantum field theory. In conformal field theory the OPE has some features which are similar to normal QFTs, but it is much more powerful than in the usual QFT, so let me say a few words about that. By OPE we mean the same thing: we have an n-point function of a bunch of local operators; let me take A₁ and put it at zero, and A₂ and put it at x, and there are some other operators in this n-point function. Then I can replace the product of these two operators A₁A₂ by an infinite sum over the local operators A_k which can appear in this OPE, so here is this A_k(0). And these operators are multiplied by certain factors.
First of all, they are multiplied by the factor 1/x^(Δ₁+Δ₂−Δ_k), which is forced by dimensional analysis. They are multiplied by the OPE coefficient C_12k, which is just a pure number characterizing our conformal theory. An important point is that when an operator appears in the OPE, its derivatives ∂_μ A_k, ∂_μ ∂_ν A_k, and so on also appear, multiplied for dimensional reasons by x^μ, x^μ x^ν, and so on. The very important property of conformal theories is that the coefficients of these derivative operators are fixed by conformal symmetry: the only thing you have to specify is the overall coefficient, and everything else is fixed. This is very particular to CFTs. If you take any other quantum field theory, like QCD for example, the OPE is also valid there, but it is not true that the coefficients with which the derivative operators appear are related in a simple way to the coefficients of the leading operators; there you have to do a separate computation for them. Here you don't. That's one very important difference. So once you have this OPE, you can reduce n-point functions to (n − 1)-point functions, and you can keep going and reduce everything to two-point functions, which are fixed, as I said. This shows, at least formally, that if we know the dimensions and spins of all local operators of a conformal field theory, and if we know the OPE coefficients, then we can compute everything: all correlation functions, and so all the observables. Because these numbers are so important, they are called the CFT data, and to solve a conformal field theory means to compute the CFT data. From the CFT point of view, the OPE coefficients are just as fundamental as the scaling dimensions of the operators.
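Putting the pieces together, the OPE just described reads, schematically:

```latex
A_1(x)\,A_2(0) \;=\; \sum_k \frac{C_{12k}}{|x|^{\Delta_1+\Delta_2-\Delta_k}}
\Big( A_k(0) + \alpha\, x^\mu \partial_\mu A_k(0)
      + \beta\, x^\mu x^\nu \partial_\mu \partial_\nu A_k(0) + \cdots \Big)\,,
```

where the descendant coefficients α, β, … are completely fixed by conformal symmetry; only the overall C_12k is independent data.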
This is very different from the point of view the RG approach takes. In RG papers there is a lot of discussion of the scaling dimensions of operators and the critical exponents, but hardly any discussion of the OPE coefficients. That is because they don't seem to play such a central role there, and also because they are much harder to compute; so even though we know they exist, people don't bother to discuss them or to compute them. In the conformal field theory approach to critical phenomena, they go hand in hand: you cannot determine the scaling dimensions of operators without determining the OPE coefficients, and vice versa. In fact, I showed you the bootstrap results for the scaling dimensions of the operators σ and ε, but we also have results for the OPE coefficients of these operators, of comparable accuracy. For example, the OPE coefficient for ε to appear in the OPE σ × σ is this, and the OPE coefficient for ε to appear in the OPE ε × ε is this. As you see, it's again on the order of six digits of precision. These numbers are here just for comparison: they would be the values in the free theory in three dimensions, √2 and √2. The actual values are, of course, quite different from the free-theory numbers. Okay, now another difference from a normal field theory that I would like to stress, which makes the OPE particularly powerful in conformal field theories, is that it is not just an asymptotic expansion. In a normal quantum field theory, we often discuss the OPE and say that it determines for us the leading asymptotic behavior of a correlation function when points come close to each other. But in conformal field theory much more is true: it's not just the leading asymptotic behavior, it's actually a mathematically convergent power series expansion, with a definite radius of convergence.
Let me describe this result. Suppose you have an n-point function in a conformal field theory containing two operators A₁ and A₂, and consider a sphere of radius R which surrounds them, with all the other operators in this correlation function inserted outside the sphere; so pick the sphere in such a way that all the other operators sit at larger radius. Then the following theorem is true. First of all, the OPE converges: if x stays inside this sphere, it is a genuinely convergent power series. Moreover, you can estimate the error you make by truncating the OPE. In the OPE you can naturally order terms by increasing scaling dimension, and then you can ask what happens if you only keep the terms corresponding to operators of scaling dimension less than a certain cutoff. You can prove a theorem that the resulting error in the correlation function scales like the ratio |x|/R to the power of the cutoff. This means, for example, that if you stay within half the radius of the sphere, the accuracy you obtain is exponential: if you drop all the operators of dimension larger than 10, you make an error of order one over 2 to the 10, and so on. We don't have theorems of this sort in non-conformal theories; they might be true, but we don't know. This property of OPE convergence in conformal theories plays the role of a rigorous counterpart of the decoupling of low-energy and high-energy modes in normal quantum field theories. Low-dimension operators are like low-energy modes, and if you drop the high-dimension operators, you do not actually commit a big error; moreover, you know rigorously what error you commit.
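As a toy illustration of just the scaling of this bound (schematic: the sharp constant in the actual theorem is set to one here), the truncation error behaves like (|x|/R) to the power of the cutoff dimension:

```python
# Toy illustration of the OPE truncation bound: dropping all operators of
# dimension >= delta_cut changes the correlator by roughly (|x|/R)**delta_cut.
# (Schematic sketch; the theorem's overall constant is set to one.)
def ope_truncation_bound(x_over_R: float, delta_cut: float) -> float:
    assert 0.0 < x_over_R < 1.0, "the OPE only converges inside the sphere"
    return x_over_R ** delta_cut

# Staying within half the sphere radius and cutting at dimension 10
# gives an error of order 2**-10, i.e. about 1e-3:
print(ope_truncation_bound(0.5, 10))
```

The key qualitative point is the exponential improvement: raising the cutoff by one dimension unit shrinks the bound by the fixed factor |x|/R.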
And this is very, very useful. Now we come to the crucial point. I told you that if you know the CFT data, the scaling dimensions and the OPE coefficients, then you can compute everything in the CFT. So now the question becomes: does any set of scaling dimensions and OPE coefficients define a consistent conformal field theory? The answer is no. The consistency condition that we still have to discuss is known as OPE associativity. In mathematical language: suppose we have three operators A_i, A_j, A_k. Associativity means that if you take the operator product of the first two, obtaining an OPE series, and then take the OPE of each operator in the series with A_k, you should get the same answer as if you did the products in the opposite order. This may sound like an abstract mathematical condition, so where does it come from? It follows from the requirement that the correlation functions of your theory are unique. For example, let's look at the four-point function. There is an equivalent formulation of associativity, sometimes known as the bootstrap equation or the crossing condition on the four-point function. You take a four-point function of A₁, A₂, A₃, A₄, and using the OPE of A₁ with A₂ and of A₃ with A₄, you express it as a sum over operators exchanged in the s-channel. By the way, these diagrams are not Feynman diagrams; they just encode the order in which you are doing the OPE. Or you can do the same computation in another order: you can take the OPE of 1 with 4 and of 2 with 3. Of course, you have to be careful to pick a configuration of points such that both of these OPEs are convergent.
But I gave you the condition for convergence of the OPE in terms of a sphere which should include two of the points and exclude the others. Using that condition, you can show that these two expansions have an overlapping region of convergence, so everything is mathematically well defined. And then, since it's the same four-point function, you have to impose that the two expansions agree. Since you are using the OPE twice, this gives you a condition which is quadratic in the OPE coefficients: C_12k C_34k times some kinematical factors, which come from the two-point functions and so on, summed over k, has to equal the sum done in the opposite order. It's easy to convince yourself that if you take this bootstrap condition for four-point functions and impose it for all possible four-point functions of the theory, it is fully equivalent to imposing the OPE associativity condition I mentioned on the previous slide. In practice, we actually work with this condition. Okay, second and last slide of history. These bootstrap equations were first introduced in the 70s, as I said, and at the time it was already understood that they are non-perturbative, manifestly universal, and mathematically well defined. But what scared people a little at the time is that this is an infinite number of equations for an infinite number of unknowns, and it was not fully understood how to find your way through this forest of infinities, even though it is mathematically well defined. These are not the sort of infinities usually encountered in perturbative quantum field theory.
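For the record, the four-point crossing condition we work with can be written, schematically, as:

```latex
\sum_k C_{12k}\,C_{34k}\; g^{(s)}_{\Delta_k,\ell_k}(x_i)
\;=\;
\sum_k C_{14k}\,C_{23k}\; g^{(t)}_{\Delta_k,\ell_k}(x_i)\,,
```

where the kinematical functions g (the conformal blocks) resum each exchanged primary together with all of its descendants.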
Then there was a breakthrough, in 1984, in the famous paper of Belavin, Polyakov, and Zamolodchikov, who realized that in the two-dimensional setting you can sometimes find conformal field theories with only a finite number of primary operators. By doing precisely the same analysis, precisely the same bootstrap equations, but in the two-dimensional setting, they found finite-dimensional solutions of these equations, and this was the breakthrough of two-dimensional conformal field theory. But it was still not known what to do in the higher-dimensional case, where the number of operators is necessarily infinite. That's where we started making progress, starting in 2008, and the reason we achieved it was numerical techniques. We said: sure, the number of operators is infinite, but we are not afraid of infinitely many operators. We know these expansions converge, so if you keep hundreds or thousands of operators, the error you make is going to be of order e to the minus 100 or e to the minus 1000. This is a very small error and we should not worry about it. That's why we achieved progress. So what can we do? Well, using our numerical algorithms we can construct arbitrarily precise approximate numerical solutions to these crossing equations. If you want a hundred digits, we can give you a spectrum which satisfies crossing up to 10⁻¹⁰⁰ accuracy. That's one thing. The other thing is that we can prove things: in many cases, using our numerical techniques, we can prove, numerically but rigorously, that in certain regions of CFT parameter space a solution is just impossible. You can rule out huge chunks of CFT parameter space.
Let me use the three-dimensional Ising model as an example to show you what a typical bootstrap analysis looks like; which steps do you have to go through to bootstrap your favorite CFT? First of all, given a CFT, given a universality class, the most important characteristic is its global symmetry. In the case of the Ising model, it's Z₂. So you already know that all local operators of the CFT are going to be classified by representations of the global symmetry; in the case of the Ising model, they are going to be Z₂-even and Z₂-odd operators. That's something you know a priori. But you know something else as well: it also makes sense to characterize universality classes by the number of relevant operators they have; you have critical theories, multi-critical theories, and so on. In the case of the three-dimensional Ising model, it is a robust experimental fact that it contains two and only two relevant scalar operators. The fact that it contains one relevant Z₂-even operator is extremely well known: it follows from the fact that to reach the phase transition you have to fine-tune only one parameter, the temperature. But the fact that it has only one relevant Z₂-odd operator is also well established: it follows from the fact that the phase diagram of the liquid-vapor phase transition of water is two-dimensional, in pressure and temperature. In that case the Z₂ symmetry is emergent, and since it is emergent, the total number of relevant scalars has to be exactly two, so the other one has to be Z₂-odd.
So we can define the three-dimensional Ising model universality class as a three-dimensional CFT which has Z₂ global symmetry and precisely one relevant Z₂-even and one relevant Z₂-odd scalar operator. There is also the condition that there should be a local stress tensor operator, encoding locality, as I said. Then you say: I have my operators σ and ε, let me consider their operator product expansions. What do I know about them? If I take the OPE σ × σ, it is going to contain in general all the Z₂-even operators of the theory which have even spin; even because of the symmetry which interchanges the two σ's. And what are these operators? Well, the operator ε will appear in this OPE, but there are also going to be infinitely many other scalar operators. In the usual field-theoretic approach you would call these operators φ⁴, φ⁶, and so on: you would invent names for them built out of the field and its derivatives. In conformal field theory you don't do that; it makes no sense to give such names to these operators, so you just call them by some labels, like ε′. What we call ε′ would, for RG people, be φ⁴. What you know about ε′ is that it is irrelevant: ε is the only relevant Z₂-even scalar, so this one is irrelevant. And you can impose this as a condition on your CFT: you don't know what its dimension is going to be, but it's irrelevant; that much you know. Then you have the spin-two sector, where you have the stress tensor, but it is not the only operator in the spin-two sector; there is going to be some other operator of higher dimension, call it T′. You just invent names for them; it doesn't really matter.
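Schematically, the OPE content just described (identity, the Z₂-even scalars, and the even-spin tensors) is:

```latex
\sigma \times \sigma \;\sim\; \mathbf{1} \,+\, \epsilon \,+\, \epsilon' \,+\, \cdots
\,+\, T_{\mu\nu} \,+\, T'_{\mu\nu} \,+\, \cdots
```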
So this is the σ × σ OPE. The ε × ε OPE is going to contain the same operators, but with different OPE coefficients. This is interesting, because the condition that the dimensions of the operators in this OPE and in that OPE are the same is not such an obvious condition, so you can expect that imposing it gives you some constraining power. Finally, if you take the OPE σ × ε, it will contain σ, and the next scalar is going to be σ′; this is something RG people would call φ⁵. In CFT you don't call it φ⁵, you just say it is an irrelevant Z₂-odd scalar. But there are more things you can impose. You can impose that the OPE coefficients are symmetric: this is a consequence of conformal invariance. The OPE σ × σ contains ε, so there is an OPE coefficient f_σσε; and if you take the OPE ε × σ, the coefficient with which σ occurs is equal to that same OPE coefficient. The OPE coefficients f_ijk are fully symmetric in i, j, k. That is something you can impose very easily in conformal bootstrap calculations. Another thing you can impose is that the OPE coefficient of the stress tensor is fixed by the Ward identity: for any operator O, scalar or not, it can be σ, it can be ε, it can be anything, the appropriately normalized OPE coefficient of the stress tensor in the O × O OPE is given by the dimension of O divided by the square root of a certain number c_T, the three-dimensional analogue of the central charge. So by fixing just one number you fix the OPE coefficient with which the stress tensor appears in the OPE of any operator with itself. That is of course very powerful. Unitarity also plays a role.
We know that many interesting universality classes are described by unitary, or reflection-positive, quantum field theories, and they correspond to unitary CFTs. In that case you can say something about the OPE coefficients: they are necessarily real. And you can say something about the scaling dimensions of the operators: they are bounded from below by certain numbers, called the unitarity bounds. These are basically the lowest dimensions which occur in free theory. For scalars the bound is (d − 2)/2, the dimension of the free scalar; in any unitary CFT, the dimension of every scalar operator has to be above this bound. These are rigorous results following from unitarity, and since you know them, you should impose them. Okay, so how does the workflow go? Our goal is to determine the low-lying CFT data. By this I mean the following: we cannot possibly hope to determine the dimensions and OPE coefficients of all operators of the theory, because there are infinitely many of them; but, more modestly, you can start by saying: I would like to determine the dimensions and OPE coefficients of the operators of low scaling dimension. This is what I call the low-lying CFT data. For example, in the Ising model you might want to determine the dimensions and OPE coefficients of the operators σ and ε, and say the leading irrelevant scalars in each sector, the Z₂-odd and Z₂-even sectors. You say: this is what I would like to do. Then the first thing you do is pick a set of four-point functions which include the operators you are interested in, either as external operators or as operators exchanged when you do the OPE. For example, here you might choose the correlation function of σ, σ, σ, σ.
So sigma occurs now as an external operator, but then epsilon occurs as an internal operator, because there is an OPE: sigma times sigma gives you epsilon. So you will have access to epsilon through the OPE. Alternatively, if you want to achieve even higher precision, then you will do the analysis including epsilon also as an external operator. You will say, okay, now I'm going to include also four-point functions where epsilon occurs as an external operator. Then, if you do this, for example in the OPE sigma times epsilon you will have access to the operator sigma prime, which is something that you did not have access to if you were just studying this correlation function. So obviously, the more correlation functions you can control, the better your accuracy is going to be. So for the considered four-point functions you take the OPE expansion in all possible channels, and you divide the exchanged operators into the low operators, which are the ones you are interested in, and the high operators. So you don't care so much about the high operators; you want to marginalize over them. So the game is: given the low operators, can you find high operators in such a way that the crossing equation is satisfied? If you can, well, then it means that the low operator scaling dimensions and OPE coefficients that you, for example, fixed are allowed, or at least you cannot rule them out. If, on the other hand, you can prove rigorously that, no matter what you do, the high operators cannot help you satisfy crossing, well, it means that you have ruled out that part of the parameter space. That's the game. And so this plot that I already showed you before, about the scaling dimensions of the Ising model, that's precisely what we did. So all this white region is excluded. So in this plot the dimension of the operator sigma was fixed.
The dimension of epsilon was fixed. It was imposed that these are the only two relevant scalar operators. And it was found that for every point outside this region the irrelevant operators cannot help. You know, you put in sigma and epsilon; sigma and epsilon by themselves do not satisfy crossing. Actually, you can show that in higher dimensions you can only satisfy crossing if the number of operators that you exchange is infinite. So certainly, if you just include sigma and epsilon and you only exchange sigma and epsilon, you will not be able to satisfy crossing. But you can ask: can you satisfy crossing with the help of irrelevant operators, without assuming anything about them except for the fact that they are irrelevant? And then, lo and behold, for every point here you can show that this is impossible. There is no solution to crossing. Now, is this surprising? You may say, well, at which point do we have to be surprised? Well, at this point I would not yet be so surprised. It's not surprising that by applying this method you can rule out chunks of the CFT parameter space. I mean, in a sense it's surprising that it was not done earlier, that it was not done before we got around to doing it. Because basically what happens is that these exclusions follow from positivity. So I mentioned to you that the OPE coefficients in unitary theories are real. And these crossing equations are quadratic in the OPE coefficients. So in the simplest cases they just involve squares of OPE coefficients, which are positive numbers. And so these exclusions just follow from "positivity" in a certain sense. I mean, I put positivity in quotation marks. So what happens is that if the low operators violate crossing symmetry by a certain positive amount, then the high operators cannot undo this. Now, what does it mean, positive?
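For the single correlator of four sigmas, the crossing condition with manifest positivity can be written schematically as:

```latex
% Crossing for <sigma sigma sigma sigma>: for all cross-ratios u, v,
\sum_{\mathcal{O}\in\sigma\times\sigma} p_{\mathcal{O}}\,
  F_{\Delta_{\mathcal{O}},\,\ell_{\mathcal{O}}}(u,v) \;=\; 0,
\qquad
p_{\mathcal{O}} = \lambda_{\sigma\sigma\mathcal{O}}^{2} \;\ge\; 0,
% where F is the crossing-antisymmetrized conformal block,
% F_{\Delta,\ell}(u,v) = v^{\Delta_\sigma}\, g_{\Delta,\ell}(u,v)
%                      - u^{\Delta_\sigma}\, g_{\Delta,\ell}(v,u),
% and the identity operator contributes with p = 1.
```

The squares p = lambda^2 are the "positive numbers" referred to above: the high operators can only ever add positive multiples of their blocks.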
Because we are working in a very multi-dimensional, functional space, if you wish, because we are dealing with a space of functions. So what you mean by positivity is not really clear, and the role of the algorithms that we developed is precisely to identify which directions are positive and give you interesting constraints, and which directions are not particularly useful. And so this is something that I cannot possibly explain in the limited time, but the keywords here are linear programming and semidefinite programming. So these are the algorithms that we are using to identify these positive, quote-unquote, directions. Okay, so the fact that the constraints exist is not so surprising. What is, however, surprising is how strong they are. So this is something that could not a priori be foreseen before we did these calculations. So this amounts to a discovery: by just putting together constraints from a handful of four-point functions (the plot that I showed you included just three correlation functions in the analysis: sigma sigma sigma sigma, sigma sigma epsilon epsilon, and epsilon epsilon epsilon epsilon), you get such a tremendous constraining power, and you get these tiny islands in the parameter space, basically points, which tell you that the three-dimensional Ising model CFT is basically uniquely fixed. So why exactly this happens is an open problem. It might signal that, in fact, the three-dimensional Ising CFT is exactly solvable. It might signal something else, but I don't think anyone has any good idea about what precisely is happening. So this is an interesting thing to think about. Going back to this plot: the same algorithm which in the white region tells us that a solution does not exist, in the shaded region gives us a solution. So you run the algorithm and it spits out for you a very, very precise solution to crossing symmetry in this region. This solution is not unique.
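To give a flavor of how a linear-programming-style argument certifies an exclusion, here is a pure-Python toy sketch. It is purely illustrative: `v()` is a made-up stand-in, not a real conformal block, and all names (`certifies_exclusion`, `target`, `gap`) are hypothetical choices for this example.

```python
# Toy version of the bootstrap exclusion logic.  Crossing is modeled as:
# find p_i >= 0 with  sum_i p_i * v(Delta_i) = target,  Delta_i >= gap.
# By a Farkas-lemma-type argument, exhibiting a linear functional alpha
# with  alpha . v(Delta) >= 0  for all Delta >= gap  while
# alpha . target < 0  rigorously rules out a solution: positive
# coefficients can never undo a strictly negative total.

def v(delta):
    """Stand-in for a truncated vector of derivatives of a block."""
    return (1.0, delta, delta * delta)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def certifies_exclusion(alpha, target, gap, deltas):
    """True if alpha proves that no positive combination of v(Delta),
    with Delta >= gap, can reproduce `target`."""
    nonneg_on_blocks = all(dot(alpha, v(d)) >= 0.0 for d in deltas if d >= gap)
    negative_on_target = dot(alpha, target) < 0.0
    return nonneg_on_blocks and negative_on_target

# A hypothetical "low-operator" contribution that the high operators
# would have to reproduce, and a grid of allowed dimensions:
target = (1.0, 2.0, 0.0)
grid = [0.5 + 0.01 * k for k in range(1000)]   # Delta in [0.5, 10.49]

# alpha . v(Delta) = (Delta - 3)^2 >= 0, but alpha . target = -3 < 0,
# so this point of "parameter space" is rigorously excluded:
alpha = (9.0, -6.0, 1.0)
print(certifies_exclusion(alpha, target, 0.5, grid))  # True
```

In the real calculations the vectors have hundreds of components (derivatives of conformal blocks at the crossing-symmetric point), and the search for a valid functional alpha is what the linear and semidefinite programming solvers automate.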
So if you run the algorithm two times, you might get a slightly different solution. If you vary around this allowed region, your solution will vary a little bit, but it will vary by a tiny amount. So this is how we estimate our errors. So the fact that certain numbers that we give change in the sixth or seventh digit is because the solution varies a little bit when we move inside this region. And so, in particular, here is a synthesis of everything that we know about the operator algebra of the three-dimensional Ising model. So we know the dimensions of a bunch of operators. We know more, but okay, these are the ones that fit on the slide. And we know the OPE coefficients, and we know the central charge. In particular, we know the dimension of the leading irrelevant Z2-even scalar and the leading irrelevant Z2-odd scalar, and we know many more things than people ever bothered to discuss in other approaches, and in particular the OPE coefficients, which nobody ever bothered to discuss. So if you've been following this line of development a little bit, then you may have noticed that the results are getting more precise with time. Why is this happening? Well, it's happening because we keep adding more constraints. So, for example, before we were using just one correlation function; now we can use three correlation functions; and okay, in a couple of years we'll use five correlation functions, and so on. So that's one reason for the increasing constraining power. Another reason is that even if you include a single four-point function in the analysis, this bootstrap condition is a functional equation, because it has to be satisfied for any positions of the operators x1, x2, x3, x4. And so, in principle, even a single equation contains infinitely many constraints. And in any numerical analysis, you pick out of these infinitely many constraints a finite subset.
And this subset also increases as the algorithms improve, and so the constraining power improves. So for these numbers that we quoted, if you turn the crank a bit longer, we can easily get one, two, three more digits. So in this sense, really, the three-dimensional Ising model, at least from the numerical point of view, is solved at the critical point. It's an invitation for people to think about perhaps an analytic solution, but numerically, it's in good shape. So what do you do next? Turning the crank and obtaining a few more digits is perhaps not so interesting, except to make the point. But it's more interesting that the same method can be applied to other universality classes with a small number of relevant operators, because that was the thing which was important, which allowed us to control the Ising model so well: there are only two relevant operators in the game. But there are many other universality classes which share the same property. So the O(N) models, the Gross-Neveu model, and even QED3, they all have this property. And so the work has already started to extend this method to these other models. And one nice thing, for example, about the O(N) models is that the same island that I showed you for the Ising model exists also for the O(N) models. So by performing exactly the same type of analysis, you find that, okay, this plot is already a bit old, now there are better, smaller islands, but it's true that the O(N) models also live in islands, and you can control these islands up to very large N, where you can compare to the 1/N expansion. And so this plot shows you very clearly how this approach can lead to a numerical classification of conformal field theories. Here, you make a single plot and you see, okay, that there is actually a family of conformal field theories living here.
I mentioned this puzzle about the eight-sigma discrepancy for the O(2) model. Well, this puzzle is also on the way to being solved, and the winner, it seems, are the theoreticians. So these are the rigorous islands for the O(2) model, and this is the theoretical determination from Monte Carlo. You see that for the O(2) model, Monte Carlo is still more precise than the bootstrap, but you can also see that the experimental error bars here are on the way to being excluded by the bootstrap analysis. So in this case, the bootstrap is not yet more precise than Monte Carlo, but at least it is able to distinguish who is right and who is wrong. And so this brings me to the name of my seminar, the CFT Genome Project. What I was trying to convince you of is that the scaling dimensions and the OPE coefficients of a conformal field theory are its genome. These are the most fundamental parameters which characterize any conformal field theory, and just as the two strands of the double helix are intertwined, in the same way the scaling dimensions and the OPE coefficients are intertwined, and you cannot think efficiently about one without thinking about the other. And so what this leads to is a project of classifying three-dimensional universality classes based on this language. It's not a simple task, so it's definitely not just an automatic extension of what has been done so far; this is going to be a challenging project. But the work has already started, and I'm sure that on the scale of five years or so, we will get a much clearer picture of the three-dimensional universality classes, based on the CFT approach, than what we have now. Thank you.