So I'm going to talk about work that I've been doing for the past couple of years in collaboration with various linear combinations of the following authors, three of whom are here at the conference: Ami, Zuhair, and Liam. The overall setup, or the overall goal, is very similar in spirit to the truncated conformal space approach that we've already heard multiple nice talks about. The basic setup we want to study is this: I start with some conformal field theory in the UV, in some number of spacetime dimensions, and I'm going to assume that I have all the data about this starting point, about my original UV fixed point. Then I deform that CFT by one or more relevant operators, breaking conformal invariance and creating some RG flow leading down to some new quantum field theory in the IR. And I want to be very agnostic about what's happening in the IR: I could have a mass gap, the gap could close, and I could flow to a new IR fixed point. But I'm especially interested in studying this system, as is basically everyone else at this workshop, in the case where this theory is strongly coupled. So, working backwards through the title, we're going to tackle this problem via Hamiltonian truncation, very much akin in spirit to the truncated conformal space approach. We're going to try to use the conformal structure of my starting UV theory to organize my truncation basis. But we're going to make a slightly different choice: we're going to work in light cone quantization, and I'll give you our motivations later for why we're deciding to work in this. And the goal is to use this truncation framework to study the full RG flow. I'm going to show you some plots later in the talk where we can actually see the full RG flow, starting from the UV CFT down to the IR. Our motivation for pursuing an alternative or complementary framework to the more standard TCSA approach (or I guess that's redundant, TCSA) is threefold. One, we're really interested in studying dynamical observables. I really want to study things like time-dependent correlation functions, scattering amplitudes, things like this. So we're going to frame our system directly in Minkowski space, and we're actually going to work in momentum space to get better access to these dynamical observables. Relatedly, I'm very interested in infinite-volume results; I really want to study correlation functions at infinite volume. And lastly, I'm very interested in working in higher dimensions. There's been a lot of great work in TCSA and other Hamiltonian truncation methods in two dimensions, and there's been some push towards higher dimensions, but we are pursuing this alternative approach in the hopes that it will allow us to study theories in higher dimensions. And I'll show some results today in 2 plus 1, in addition to work in 1 plus 1. OK, so the overall structure of the talk. First, I'll introduce the basis: I'll tell you our proposed Hamiltonian truncation basis and how to formulate it and apply it to a very generic CFT in any number of dimensions. Then we'll get into the details by looking at two specific applications. Right now we're at the stage where we're testing the method, trying to understand how it works in controlled environments where we have answers to compare to.
And so I'm going to focus on two examples, both of which are just deformations of scalar field theory. So my UV theory is just going to be a free scalar field, and we're going to study it in 1 plus 1 and 2 plus 1 dimensions. Then I'm going to talk about some ideas we have for how to apply this proposal to gauge theories and potentially bypass a lot of the naive difficulties associated with trying to apply truncation methods to gauge theories. And lastly, if I have time, I'd like at the very end to say a couple of things about subtleties that you encounter when trying to do Hamiltonian truncation in light cone quantization. This is what we've been focusing on a lot for the past few months, trying to understand this, especially in the context of deformations of large-N CFTs. If I have time, I'll mention a few brief things about that, but I'd also be happy to talk about this offline if people are interested. OK, great. So that's the goal. So first, the basis. I'm in some UV CFT, and the natural basis to use from the perspective of the CFT, which is exactly the same in spirit as what TCSA does, is to use local operators, the primary operators of my conformal field theory. But I'm interested in studying dynamical observables, and so the natural place to work is momentum space as opposed to position space. So I'm going to do the dumbest thing imaginable: I'm just going to define a basis of states by taking the Fourier transform of primary operators acting on the vacuum. So this is going to be our basis. The states are created by local operators, and they're identified by the following quantum numbers. First, they're identified by, obviously, their momentum, and the invariant mass mu squared is just defined as p squared. But they're also identified by their eigenvalue under the quadratic Casimir of the conformal group, associated with the corresponding primary operator. For those of you who aren't familiar with CFT structure, the conformal Casimir operator can be written very schematically in terms of the other generators of the conformal group. States created by primary operators are eigenstates of this operator, and their eigenvalues are set by the scaling dimension and spin of the associated operator. And really, all you have to know is that because this insertion of the dilatation operator squared is here, the eigenvalues grow like the scaling dimension of the operator squared. For the purposes of this talk, I'm going to use Casimir and dimension interchangeably, just because it's often much easier to think in terms of the scaling dimension, but really we're identifying these states by their conformal Casimir eigenvalue. Isn't it that the Casimir by itself does not suffice to classify operators in the CFT? There are also spin quantum numbers as well — there's spin and global symmetry — so why single out the Casimir? Ah, good. So we're singling out the Casimir. You're absolutely right, I have additional quantum numbers, but the reason I'm singling out the Casimir is because that's going to be our truncation parameter. And our motivation for using this comes mostly from holography, in the sense that states with low conformal Casimir, if I think in terms of the dual AdS picture, correspond to low-mass states. So our intuition for truncating in Casimir is that this corresponds to the naive effective field theory in AdS.
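For orientation, here is a schematic version of the basis and the Casimir eigenvalue just described; the normalization and sign conventions are one common choice, not necessarily the speaker's exact notation.

```latex
% Basis: Fourier transforms of primary operators acting on the vacuum (schematic)
|\mathcal{O}(p)\rangle \;\equiv\; \int d^d x \; e^{-i p\cdot x}\, \mathcal{O}(x)\, |0\rangle,
\qquad \mu^2 \equiv p^2 .

% Quadratic Casimir acting on such a state, for a primary of dimension Delta and spin ell:
\mathcal{C}_2\, |\mathcal{O}(p)\rangle \;=\;
\big[\,\Delta(\Delta - d) + \ell(\ell + d - 2)\,\big]\, |\mathcal{O}(p)\rangle
\;\approx\; \Delta^2\, |\mathcal{O}(p)\rangle \quad (\Delta \gg 1),

% which is why "truncate in Casimir" and "truncate in dimension" are used
% interchangeably in the talk.
```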
But I'm definitely not going to assume anything about the holographic description of the theory. That's just the motivation in the back of our minds for why this might be a good thing to do. Thank you, sorry, I got lost. What is mu? Mu: I've just Fourier transformed, so I have the energy and the momentum, and mu squared is just p squared. So I have the spatial components of momentum, and then instead of the energy I'm writing the Lorentz-invariant variable. And there is an extra label, alpha, for the rest of the quantum numbers? Oh, yeah, that's right. I've suppressed all other quantum numbers. I've only focused on the relevant parts for what I'm going to say today. OK, great. So this is the proposed basis. You can always define it for any CFT, and this is what we're going to use to construct our Hamiltonian. But it has one obvious shortcoming. The eigenvalues of the Casimir are discrete in a CFT, but we have this continuous label mu parameterizing our states. So every single primary operator doesn't just create one basis state; it actually creates a continuum of basis states. And so I need somehow to discretize this additional parameter mu. The reason I need to do that is because I want to put this on a computer and numerically diagonalize the Hamiltonian, so I need some finite-dimensional, discrete basis. There are a bunch of different ways you can do this, but we're choosing what I think is a fairly general framework: you can define new states labeled by some discrete parameter k, which you get by integrating this continuum of states associated with a single primary operator against some weight functions. So if our CFT is just a free theory, and O is some operator made out of a bunch of, say, scalar fields and derivatives contracted in a certain way, then this is just some state in Fock space? Yes, exactly. Now, in that case, I understand it's a good state. But if you have a strongly coupled CFT and you take this sort of Fourier transform, which smears over the whole of Minkowski space, is it obvious that this is a good state? Because there are some locality properties. For example, if you were to compute some observable in this state — sandwich some local operator between two states of this form — then your integral would extend twice over the whole of Minkowski space, and there would be some regions of this integral which are light-like separated, some space-like; it could be a mess. Is it clear that this would be a well-defined integral? You just mean if I want to compute the overlap of some UV operator, or some — yeah, good. What process are you asking about? You're asking: I want to look at some operator in this basis? Yeah, for example, I would like to evaluate a three-point function, a matrix element of some local operator sandwiched between two states of this form. Is it clear that it's going to be a finite quantity in a strongly coupled theory? Yes, because of causality? Because I'm confused about how causality will work out. Yeah, good. You're just saying I have the local operator inserted between these states — is it obvious that this works? Yeah, and now I have to do this double Fourier transform of some non-local three-point function, which has some causal structure and so on.
So I mean, it definitely depends. Also, there are some singularities on the light cone. Well, there is a Wightman prescription, and you perform the Fourier transform of the Wightman correlator. As long as the operator that you are considering evaluating has dimension bigger than d over 2, it's fine. Smaller than d over 2? Bigger. No, bigger, yeah. Referring to the one in the middle or the one on the outside? The one in the middle. The one in the middle — so yeah, if I want to compute a three-point function, for example. Yeah, so what I find is that — but for example, for free theories, this problem doesn't appear, so why is this? No, it does appear for free theories, and that's going to be important. Can I explain with an example of what happens when I insert something? If I look at, say, state 1, and I look at the expectation value of, like, phi squared — if I compute this in free field theory in 2 plus 1 dimensions, the integrals are over the momenta of the external states. These are Fock space states weighted by some wave function associated with whatever local operator I've chosen. Let's say I wanted to compute phi squared, for example; that would be some integral over two-particle states in the free theory weighted by a flat wave function. Or if I want to look at T mu nu, it would be some polynomial in the momenta. And when I integrate over those momenta, it turns out that if the scaling dimension of my operator is less than d over 2, which happens for phi squared in 3D, then I get IR divergences: divergences associated with the momenta of the individual partons that make this guy up going to 0. And so that's actually — we're jumping way ahead here, but that's actually a really big technical problem, or at least it complicates the calculation in 3D. We understand how to work with this, but it does make the calculation more difficult. But yeah, you definitely do encounter these sorts of divergences. What about 2D? In 2D this can happen as well. If your external states were, say, vertex operators, you find exactly the same divergences. But if you look at conserved currents, you don't. And that has a nice interpretation, which I guess I can just tell you now. Basically what happens is your naive basis would be built out of conserved currents and vertex operators in 2D. You find IR divergences that lift out all of the vertex operators, leaving only conserved currents as your complete basis, if you add a phi squared deformation, a mass term deformation. Which sort of makes sense, because you think: as soon as I go away from the massless theory, I don't expect these to be independent degrees of freedom. And so these IR divergences seem to be doing something good for you. They seem to be lifting out states that you would have thought are redundant along this RG flow. But yeah. OK, thanks for that question. What is g k? Oh, yeah, good, that's what I was about to say. OK, so I need to discretize mu. And what you can do is define a discrete basis of states by integrating your continuum against weight functions g k, which you can choose to be whatever you want. Two obvious choices that we've played with: one is just a set of orthogonal polynomials built from mu.
The other is you could just do discrete step-function wedges, where you say this one has support over mu i to mu i plus 1, the next guy has support over the next wedge, and so on. Some sort of wavelet basis? Yeah, exactly — you could consider multiple choices. But the important thing, the only reason I'm drawing attention to this, is because, A, you definitely need to discretize in some way, and B, when you discretize — this is kind of an obvious statement — you have to introduce some sort of scale, some sort of UV cutoff lambda squared. But the important thing is that this is a Lorentz-invariant cutoff, which is why I was writing things explicitly in terms of mu here: our cutoff, in contrast with the cutoff in, say, TCSA, is a simple Lorentz-invariant cutoff. So I don't have to worry about non-local counterterms arising from the presence of this cutoff lambda. Great. OK, so I do this. So this is the actual basis we're going to consider: a basis labeled by the conformal Casimir, built from local operators in momentum space, where I've discretized the invariant mass mu in some way. And so I naturally have two truncation parameters associated with this space. The first is the conformal Casimir: I have to set some maximum conformal Casimir and only keep states below this. This is our proposed truncation. And then the second is Kmax: how many of these discretized mu states do I keep? The first truncation parameter, Cmax, just corresponds to how many primary operators I have in my basis. Kmax tells me the resolution of each multiplet associated with a primary operator. Intuitively, the first is a very physical, nice parameter, which from our holographic perspective we expect to be a good truncation parameter. This Kmax was just an artificial discretization of mu, and so I don't expect to have good truncation in it. So generically, I expect that I don't have to set Cmax very large, or at least the convergence should be more rapid in Cmax, whereas in Kmax I expect the convergence to be slower. So in practice, I expect to set Kmax much larger than Cmax. Yeah? You said you had to discretize mu? Oh, yeah — just because I want to hand Mathematica, or whatever system I'm using to diagonalize my Hamiltonian, a finite-dimensional matrix at the end of the day. But if you discretize p you will break Lorentz invariance. Well, I'm going to work in a particular p frame. So I will fix the frame, fix p, and the only remaining parameter is just mu. So this all ties in with the choice of wanting to work at infinite volume. Yes, exactly. OK, great. No, but for example, it depends on what the dependence on mu is. If the dependence on mu is analytic — if the exact wave functions are analytic functions of mu — and you were to choose the right basis, you could get very fast convergence also in this Kmax. Oh, yeah, absolutely, you're right. If I were clever, or if I somehow knew ahead of time what an efficient basis would be for these weight functions g, then yes, I could definitely improve things. Let's say it doesn't depend that much — pretty much any reasonable basis works if the function is nice; if the function is not nice, then you may have to think about adapting the basis to the function. Yeah, but right now we're being very naive about our implementation of these guys. We're just using very dumb choices, either polynomials or discretized step functions in mu. OK, great. So that's the basis.
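Written out, the discretization and the two truncation parameters look schematically like this; the weight-function choices are just the two examples mentioned above, and the normalizations are my own.

```latex
% Discretized basis: smear the continuous label mu^2 against weight functions g_k,
% up to a Lorentz-invariant cutoff Lambda^2 (schematic)
|p, \mathcal{O}, k\rangle \;\propto\; \int_0^{\Lambda^2} d\mu^2 \; g_k(\mu^2)\; |p, \mu, \mathcal{O}\rangle,
\qquad k = 1, \dots, K_{\max}.

% Example weight functions: orthogonal polynomials in mu^2 on [0, Lambda^2], or
% step-function "wedges" g_k(\mu^2) = \theta(\mu^2 - \mu_k^2)\,\theta(\mu_{k+1}^2 - \mu^2).

% Truncation: keep primaries with Casimir C_2 <= C_max (roughly, Delta <= Delta_max)
% and only k <= K_max discretized mu states per primary.
```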
Now we're going to do Hamiltonian truncation with it. So I started off with some CFT, which I deformed by some relevant operator. So my Hamiltonian is just the original CFT Hamiltonian, which our states by construction are eigenstates of — they have the associated mu eigenvalue, or are at least easily computable in terms of CFT eigenstates — plus my deformation, where this deformation comes from the one or more relevant operators that I added to the theory. And so to construct the Hamiltonian, computing the matrix elements of the CFT piece is trivial. The only hard part, the only thing we have to do, is compute the matrix elements of this correction to the Hamiltonian. Sorry, just before you go on: the lambda squared that you introduced in that integral there, that's your UV scale? That's how you do it? Yes, that is a UV scale, exactly. So there will be two quote-unquote physical scales, or mass scales, that we've introduced. One is the one associated with this coupling here, and the other is just lambda, but I want to push that much larger than whatever physical scales I actually want to study. So if I try to construct these matrix elements, my deformation is just some local operator from the CFT, and my external states are just defined as Fourier transforms of local operators in the CFT. So this is nothing other than a Fourier-transformed three-point function. And in conformal field theories, three-point functions are very simple: they have a nice universal kinematic structure and then OPE coefficients. So this is going to turn into some universal kinematic function determined by p and k, or equivalently mu, and the scaling dimensions of the associated operators, times some number, the OPE coefficient, which I'm going to assume that I have from my CFT. And so everything is phrased solely in terms of conformal field theory data: the scaling dimensions give you the basis, and the OPE coefficients give you the Hamiltonian matrix elements. OK, up to this point, I've said nothing about quantization schemes. You can define this basis and apply this Hamiltonian truncation method in any quantization scheme. You could apply it in equal time, or you could apply it in light cone quantization. But there comes a time in every person's life where they have to decide which quantization scheme to use. And what we are going to choose to do is work in light cone quantization. So you define light cone coordinates: you choose one spatial direction to be special, and you define new light cone coordinates t plus or minus x. And the actual Hamiltonian we're going to diagonalize is going to be p plus, the generator of translations in the x plus direction. And I apologize to the light cone people in the audience because I'm using slightly different notation, but this makes more intuitive sense to me; I have an easier time thinking in terms of these lowered indices. Anyway, it doesn't matter. So this is the Hamiltonian we're actually going to diagonalize. But really, I care about Lorentz-invariant observables, and so the operator I really want to study is m squared, which in terms of light cone generators is just this combination of p plus, of p minus — the light cone momentum — and then the additional transverse directions. But my basis states are already eigenstates of this p-perp and this p-minus; those are just the spatial momenta.
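Schematically, the Hamiltonian and matrix elements being described are as follows; the signs, factors, and light cone convention here are assumptions of mine rather than the speaker's exact notation.

```latex
% Deformed Hamiltonian: CFT piece plus one or more relevant deformations
H \;=\; H_{\rm CFT} \;+\; \lambda \int d^{d-1}x \; \mathcal{O}_R(x).

% Matrix elements of the deformation are Fourier transforms of CFT three-point
% functions: a universal kinematic function times an OPE coefficient (schematic)
\langle \mathcal{O}'(p')|\, \delta H \,|\mathcal{O}(p)\rangle \;\propto\;
\lambda\; C_{\mathcal{O}\,\mathcal{O}_R\,\mathcal{O}'}\;
F\big(\mu, \mu'; \Delta_{\mathcal{O}}, \Delta_R, \Delta_{\mathcal{O}'}\big).

% Light cone coordinates and the invariant mass operator (one common convention);
% at fixed P_- and P_perp, diagonalizing the light cone Hamiltonian P_+ is the
% same as diagonalizing M^2:
x^{\pm} \;=\; \tfrac{1}{\sqrt{2}}\,(t \pm x), \qquad
M^2 \;=\; 2 P_{+} P_{-} - P_{\perp}^{\,2}
\;=\; M^2_{\rm CFT} \;+\; 2 P_{-}\, \delta P_{+}.
```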
And so what we see is one nice feature of light cone quantization: diagonalizing the Hamiltonian p plus is equivalent to diagonalizing the invariant mass operator m squared. OK. Oh, yes? Is that what you called p squared before? In the free theory it is — sorry, in the undeformed theory it is. But then this is going to get a correction because of the deformation. So it's going to be the m squared of the CFT, whose eigenvalues are mu squared, plus a correction that looks like p minus times the correction to p plus, in this notation. Is it obvious why that's true? This is the original mu squared from the CFT, and then this is my correction because I'm deforming the theory. OK. So we're going to work in light cone quantization — but why are we doing this? We think light cone has the following kinds of advantages. If we think more big-picture: our standard framework for studying theories, when we do perturbation theory, for example, and expand things in terms of Feynman diagrams — what's the advantage of that representation? It makes Lorentz invariance manifest, but at the cost of obfuscating unitarity. Whereas Hamiltonian truncation is much more similar to old-fashioned perturbation theory: it makes unitarity very manifest — I'm just constructing a Hermitian operator — but at the cost of obfuscating Lorentz invariance. And what I mean by that, very precisely, is the following. When I compute the matrix elements of even a Lorentz-invariant observable — say I'm in equal-time quantization and I want to calculate the matrix elements of m squared in our basis — the individual matrix elements, like m squared 1 1, are going to be a function of the external mu's, of mu and mu prime, but they also care about my frame. Obviously, the eigenvalues of the full infinite-dimensional matrix don't care about this; they're obviously Lorentz invariant. But if I truncate this matrix in any way, I've screwed this up. And so if I look at the eigenvalues of a truncated matrix m squared, the eigenvalues will depend on the frame that I work in, for any finite truncation. Now, obviously, I expect this momentum dependence, this frame dependence, to be suppressed by whatever my truncation parameter is. So if I look at the actual eigenvalues — I'll call them mu tilde for the eigenvalues of the truncated matrix — I expect these to be the true eigenvalues plus some function of p that's suppressed by some power of C max. So I expect, in the limit that my truncation goes to infinity, that this frame dependence goes away; but at any finite truncation, I'm always going to have it. However, light cone quantization avoids this problem. The matrix elements in light cone quantization do not care about the frame that you're working in. And the reason for this — or one way of seeing this — is that you can compute the matrix elements of m squared in light cone quantization as just the infinite-momentum limit of matrix elements in equal-time quantization. There's a technical caveat to this, but I'm only going to talk about it at the very end if I have time, and I'd be happy to tell you about it. In other schemes, at least there is a possibility to check that you're OK with Lorentz invariance, because you can do the calculation in the momentum 0 sector and the momentum 1 sector.
And you have to see that the dispersion relation is Lorentz invariant. In the light front, you've already fixed the frame. If you screwed up somehow, if you made a mistake — if your Hamiltonian is wrong by a Lorentz-violating term — you'll never notice this. You'll just get wrong spectroscopy and you will not even realize that Lorentz invariance is broken. Sorry, sorry — you could check. Suppose that you made a mistake in your calculation and your Hamiltonian is wrong, in fact. How would you notice that? How would I notice it? I mean, what I would do is I would compute — I don't understand what you mean. If I'm understanding the question correctly, you're saying that this frame dependence is actually good, because I can work in different frames, and I know that those different frames have different matrix elements, but then I can check that I get the same answer. It's a check that you did not screw up your matrix element computations and so on. I'm just saying that sometimes it's good to have this. Why can't you just compute some other quantity that transforms nontrivially under Lorentz? You get the eigenvector, you compute some observable and check it. Yeah, if what I'm worried about is Lorentz invariance, yes, I could look at the transformation properties of my matrix elements and check that they match. I mean, you could calculate some correlation function of something with spin. Yeah, exactly. So I could diagonalize it, take my matrix elements, and then insert them in some two-point function, or compute the expectation value, and check that I'm correctly reproducing Lorentz invariance. But yeah, so the matrix elements of my light cone operator m squared are just the limit as p goes to infinity of the equal-time matrix elements. So in this sense, light cone isn't that scary or that weird; it's just the infinite-momentum limit of equal time, and it has this nice advantage, or this nice property at least, that the momentum dependence drops out. The second advantage, which was mentioned in the previous talks this morning, is that light cone has this advantage — which then seems like a disadvantage later — that the vacuum is trivial. What this means, at a dumb operational level, is just that if I look at any Hamiltonian matrix elements that involve the creation of particles from the vacuum — say, a matrix element that would mix the vacuum with some higher particle state — these all vanish in light cone quantization. And the reason for this is very simple: it's just positivity of light cone momenta. The vacuum is the unique state with total p minus equal to 0, and this is a spatial momentum which is conserved by my Hamiltonian, so there are no p minus equals 0 states for it to mix with. So the vacuum is trivial, and I don't have to worry about vacuum renormalization. What this means is that if I look at the matrix elements of m squared in equal-time quantization, I would get bubble diagrams which have explicit volume dependence, whereas in light cone quantization, because I don't have this vacuum mixing, I don't have to worry about bubble diagrams arising, and so I don't get this explicit volume dependence in my Hamiltonian matrix elements. And so from this perspective — I want to study things in infinite volume, I want to make both unitarity and Lorentz invariance manifest at every step in the calculation — then light cone is kind of a natural choice.
So the choice, naive as it is, is to do Hamiltonian truncation in light cone quantization. This is our motivation for why we're working in light cone. Yep? In your equation for m squared up there in the middle — it seems like only one factor in it gets corrections? Yeah. Oh, sorry — over here. Yeah, so you're saying only p plus gets corrections here? Correct. Yeah, so only p plus gets corrections in light cone. But normally if I calculate, say, the stress tensor in an interacting theory, I see that all its components get corrections. No, good. So the simplest way to see this is that p minus is just defined, say in 2D or something like that, as the integral of T minus minus. And so, say if I just added some relevant operator, I would expect the correction to be proportional to eta minus minus times O R — but eta minus minus is 0. So I actually don't get a correction to this, for example, and you can make similar arguments for the remaining components. But if I changed the minuses to pluses, I would still not see any correction? No — then you get eta plus minus if you want to look at p plus, because you're integrating T plus minus. And so this is just the trick: because I've picked a direction to be special. You're right that if instead I was working in a general frame and integrating over that, then I would see that these other components can get corrections. I see, so this is already specific to the light cone frame. Yeah. OK, great. But if you get wave function renormalization, then I would expect everything to change, right? Because then I don't get just eta minus minus; I get to add a counterterm proportional to the kinetic term in the Lagrangian, which would give non-zero corrections to all components. Yeah, let me think about this — sorry, I have to think about this. I mean, when I'm computing the Hamiltonian — yeah, maybe we should talk about it more afterwards. At the level of just inserting the Hamiltonian, or defining the Hamiltonian in the UV, and then diagonalizing that well-defined matrix, I don't think I have to worry about it; maybe you're right if I wanted to worry about how the entries run. But yeah, we can talk about this more after. OK, great. OK, good. So now let's talk about some applications — or are there any more questions about the general setup first? OK. So first, we're going to work in 1 plus 1 dimensions. Our UV CFT is just going to be free scalar field theory, and the deformations we're going to add are a mass term and a phi to the fourth term. So this is the full theory we're going to consider: UV CFT plus deformations. What I need to do to apply our method is construct all primary operators up to some maximum conformal Casimir level C max. When should I stop, by the way? I just want to make sure I'm not running over. OK, perfect. OK, thanks. OK, great. So I need to construct all primary operators. This theory is just a free field theory, and so, as Lava pointed out earlier, I can compute these basis states in terms of the usual Fock space states. So I can define a wave function in momentum space associated with each state, which is just the overlap, the projection, of my state onto the usual Fock basis.
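To make the wave-function language concrete in the free-scalar case (anticipating the d minus phi building blocks introduced just below), a schematic decomposition looks like the following; the measure and overall factors are assumptions on my part.

```latex
% A basis state built from a free-scalar primary
%   O = sum c_{k_1...k_n} (d_-^{k_1} phi) ... (d_-^{k_n} phi)
% decomposes onto n-particle Fock states with a symmetric polynomial wave function:
|\mathcal{O}(p)\rangle \;\propto\;
\int \prod_{i=1}^{n} \frac{dp_{i-}}{2\,p_{i-}}\;
\delta\Big(p_- - \sum_i p_{i-}\Big)\,
F_{\mathcal{O}}(p_{1-},\dots,p_{n-})\;
|p_{1-},\dots,p_{n-}\rangle,
\qquad
F_{\mathcal{O}} = \sum c_{k_1\cdots k_n}\, p_{1-}^{k_1}\cdots p_{n-}^{k_n}.

% The 1/p_{i-} factors in the measure are what force at least one derivative per phi
% (constant wave functions are IR divergent), as discussed later in the talk.
```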
And so the operators I'm going to construct are just operators built from insertions of phi. Naively, the three building blocks I would have, the three things I can build primary operators from, are the two conserved currents d minus phi and d plus phi, and then the vertex operators e to the i alpha phi. In light cone quantization, the right-moving modes d plus phi are actually non-dynamical, so I can integrate them out of the theory, leaving only the other two naive building blocks. But then, as I mentioned earlier, when I actually compute the matrix elements for these vertex operators, I find that IR divergences lift them out of the theory. And so the only states that have overlap with the finite, low-energy states are just the d minus phi guys. So to construct my entire basis — to completely span the Hilbert space — it's sufficient to use just these d minus phi's. So what I need to construct is all primary operators of the form of various d minus derivatives acting on multiple insertions of phi. Sorry, don't the vertex operators still appear in the basis? Yeah, good. So they are technically in the basis, in the sense that they're orthogonal to these guys. But because I'm adding the specific deformation phi squared, there are IR divergences in all matrix elements involving these e to the i alpha phi's. So if I diagonalize that matrix, I'll find that there are some divergent eigenvalues and some finite eigenvalues. The finite eigenstates only talk to d minus phi, and all of the vertex-operator states are associated with divergences. And this has the interpretation of just saying that as soon as I add this m squared phi squared term, I no longer expect phi — just from the equations of motion — to be an independent degree of freedom. And so my intuition would be that I can describe the massive theory using only these d minus phi states. But if instead I added, say, a sine-Gordon deformation or something like that, then I would have to worry about keeping these guys around. So it's specific to the theory that I'm studying. Does that make sense? Can I follow up on this question? Because I'm surprised by this effect. So if I take these operators, and I take these wave functions f of p, these are going to be some orthogonal polynomials, which are going to form a complete basis of orthogonal polynomials. So I would be surprised — even leaving aside the question of the IR divergence when you add phi squared — I would be surprised that you found some other functions, associated with these vertex operators, which are orthogonal to all of these f of p, because the f of p form a complete orthonormal basis of functions. How can you find some other functions which are even independent of those functions? Yeah, good, so this is a good question. You're absolutely right: if I write things in terms of these wave functions instead, then naively constructing a complete basis is just constructing a complete set of polynomials, orthogonal with respect to this measure. In other words, I would say that these functions would be sufficient also for sine-Gordon or anything else, because they form a complete basis. So we have to be a little bit careful.
OK, so yeah, I'd have to think about what the representation of the vertex operators would be in terms of these guys. But I can make the following naive observation: my basis states are just Fourier transforms of operators, and so the inner product between a state created by, say, d minus phi and one created by e to the i alpha phi is just a Fourier transform of their two-point function — and this vanishes. But yeah, it's a good question. So this goes back to my earlier question: I'm worried that the moment you take some non-trivial operator, like the vertex operator, and try to define that integral, what breaks down is that the integral may not make sense. So maybe the reason it drops out is not the fact that you added phi squared, but simply that the state associated with this vertex operator doesn't make sense. Well, it's not obvious to me. OK, so good. You are right that when I try to compute — this is getting very into the weeds — when I try to compute the matrix elements for these e to the i alpha phi directly in terms of Fock space states, their inner product is actually divergent, so I have to add in some IR regulator. Which makes sense, right? If I think of this as just an expansion in terms of phi, each term in the series is log divergent, and so I have to add in some IR scale R. So I can add in this IR scale and then properly renormalize the operator, normalize its two-point function to 1, removing the dependence on the IR scale. And it's precisely that R that I had to introduce to define these states in the first place that comes back in when I compute matrix elements involving, say, this phi squared operator, and lifts them out. Maybe you would say that that's a sign that they're not well defined, that I found IR divergences when naively defining them in the first place. But I would say it's not obvious to me that that's the case: I do have to introduce this prescription to define them, but once I do that, they're well-defined states. But yeah. OK, great. So I need to find combinations of these guys that correspond to primary operators — combinations of derivatives acting on phi that are annihilated by the special conformal generators K. And this is actually equivalent, as Lava just said, to finding a complete basis of symmetric orthogonal polynomials with respect to some natural Lorentz-invariant measure. And this is definitely the hard part: the main computational bottleneck in doing this sort of work is finding all primary operators or, equivalently, finding all symmetric wave functions. We've come up with some tricks for using the CFT structure to speed up the calculation. But still, if there's a place to make progress in terms of increasing the size of the basis, it's at this level of just constructing the operators. Sorry, I have a very basic question. I thought there would have been something pathological about using this as your UV CFT, just because the spectrum is continuous. Like, you have these vertex operators, but you say it doesn't matter, because there are IR divergences and all their contributions drop out. Well, as soon as — oh, you're worried there's some infinity-over-infinity effect somewhere? Yeah, that's a good question.
I mean, naively, at that level, yeah. In the massive theory, I know, just because it's a free field theory, that the Hilbert space is spanned by polynomial wave functions against these Fock space states. So in that case, I believe it makes sense that these guys had to be removed. I'd have to think — OK, that's a fair question, to think about whether it all makes sense. I think I'll just leave it there, but I'd be happy to think about it more. But you could always take the minimalistic interpretation that even if this UV CFT doesn't quite make sense, the fact that if you have a primary operator of this form, you get an orthogonal polynomial — that definitely makes sense. Yes, of course — you're allowed to use the CFT language, yeah. Do you get operators with zero derivatives? No, you don't have operators with zero derivatives, in the sense that those are related to this same issue. Exactly, yeah. The easiest way to see this — well, there are two ways to see it — one is just that the natural Lorentz-invariant integration measure in light cone quantization has factors of 1 over p minus in the integral. And so when you compute, let's say I chose phi squared, that would just be a constant wave function. It would be divergent, which is exactly the same divergence that you encountered with the vertex operators. So you always need factors of p minus for every single state — you need at least one derivative. So I'm missing all the constant terms, in that sense, yeah. Now, amusingly enough, if you rephrase our problem of constructing all primary operators built from d minus phi in terms of these wave functions, then the basis that we're using is actually identical to the symmetric polynomial basis that Hiller and Chabysheva talked about — that Sophia Chabysheva talked about in her previous talk. So not the light-front coupled cluster, but just the symmetric polynomial basis: it actually turns out that that's equivalent to constructing primary operators up to some scaling dimension. So even though we came at it from a very different perspective, our basis is actually the same as theirs for this example. OK, great. So now let's say I've done this: I've constructed all primary operators up to some threshold C max, or equivalently some maximum scaling dimension delta max. Then I construct the Hamiltonian, which is just evaluating the three-point functions involving phi squared and phi to the fourth, and then I just diagonalize the resulting Hamiltonian. So now I'm going to show some slides. But you can construct many more states because of the tricks that you're using? Yes, exactly — we're able to go to much higher delta max in practice. It definitely seems that, just at a dumb operational level, constructing the basis using the fact that it's a CFT is actually a huge advantage for us. We were initially doing it in a very dumb, systematic way, and we found that as soon as we actually used the fact that it was a CFT, we were able to do things much more efficiently. OK, great. So you construct the truncated Hamiltonian, you diagonalize it, and you look at the resulting eigenvalues. What I'm showing here is four plots. The y-axis for all of these is the low-lying spectrum: the green line is going to be the lowest eigenvalue of the matrix.
The blue line is the second lowest. And because my theory has a Z2 symmetry, and I haven't broken it with my phi squared or phi to the fourth, I can split the Hilbert space into an odd sector and an even sector. So green is actually the lowest odd eigenvalue, blue is the lowest even, and red is the second lowest odd. And on the x-axis, I'm showing you the coupling lambda in units of the bare mass m squared. At lambda equals 0, for all four of these plots, the lowest eigenvalue is 1, the next one is 4, and the next one is 9, which makes sense: these are just the 1, 2, and 3 particle thresholds. But then as I increase lambda, what I find is that all of these start to decrease. The lowest eigenvalue starts to approach 0, the lowest even eigenvalue also does, and they all start to go down. And then as I vary delta max, which is increasing the size of our basis, what I find is that at any finite delta max, the lowest odd, the lowest even, and the second lowest odd all hit 0 at different values of lambda. So for example, you can't see it in this plot, but over here you see that the lowest odd eigenvalue hits 0 first, then at some higher value of lambda the lowest even, and then the second lowest odd. What I would expect is the following: I know that for this theory, if I tune lambda to some critical value lambda star, the gap should close, and this should flow to an IR fixed point in the same universality class as the 2D Ising model. When the mass gap closes, I expect the spectrum to be continuous, and so I would expect that at the critical coupling, all three of these eigenvalues hit 0 at the same time. But what we're seeing is that that doesn't happen. That's an artifact of truncation, a truncation effect, and we can see that because as we increase delta max, these curves slowly start to converge on a single point. And so what we want to do is determine the critical value of the coupling, the point at which all three of these guys hit 0. So what I can do is make this plot for different values of delta max and then extrapolate, to try to figure out what the delta max to infinity limit should be. Oh, yeah, sorry — before I do that, one thing you can note is that Hamiltonian truncation is just a variational method. And so this lowest eigenvalue right here — actually all of these eigenvalues, but in particular the lowest one — is an upper bound on what the true lowest energy is going to be. So any time my data crosses 0, that places an upper bound on the value of the critical coupling, and that bound will always move to the left as I increase delta max. This just has to do with the fact that my method is a variational method. And so without extrapolating, already at our delta max of 34, which corresponds to roughly 12,000 basis states, 12,000 primary operators, we can place an upper bound on the critical coupling in light cone quantization of 1.98. But then we can try to do better than this: we can extrapolate in delta max. So this is now extrapolating our finite delta max results, and we get these three curves, which at least within error bars appear to hit zero at the same point. And by looking at, say, the lowest eigenvalue and where it crosses 0, we can place an estimate for the critical coupling in light cone quantization at roughly 1.84.
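As a rough illustration of the extrapolation just described, here is a minimal sketch; the power-law ansatz, the fit form, and all numerical values are placeholders of mine, not the ones actually used in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: lowest eigenvalue of the truncated M^2 matrix (in units of m^2)
# at one fixed coupling lambda, for several values of Delta_max.
delta_max = np.array([16.0, 20.0, 24.0, 28.0, 32.0, 34.0])
mu2_min = np.array([0.42, 0.37, 0.335, 0.31, 0.295, 0.29])   # illustrative numbers only

def ansatz(d, mu2_inf, c, a):
    # Assume truncation corrections fall off as a power of 1/Delta_max.
    return mu2_inf + c * d**(-a)

popt, pcov = curve_fit(ansatz, delta_max, mu2_min, p0=[0.2, 5.0, 1.0])
mu2_inf, mu2_err = popt[0], np.sqrt(pcov[0, 0])
print(f"Delta_max -> infinity estimate: {mu2_inf:.3f} +/- {mu2_err:.3f}")

# Repeating this at several couplings and locating where the extrapolated eigenvalue
# crosses zero gives an estimate (with a fit uncertainty) of the critical coupling.
```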
And then we can compare this with other results in the literature. The only two that I'm aware of: there's a really old paper from 1987-88 by Harindranath and Vary using discretized light cone quantization. They get an estimate of 2.6, but there are no error bars reported, and I don't really understand what the uncertainty in that value should be. Then if you look at the work of Burkardt, Chabysheva, and Hiller, they estimate the critical coupling to be about 2.1. It's not surprising that this is above our value, because their basis is the same as our basis, but they're using a much smaller value of delta max — their delta max is roughly 18, though they're not truncating uniformly in the different particle number sectors. But all this is to say that their basis is a subset of our basis, and we've included more states. These plots always move to the left as you add more states, so it's not that surprising that their value is close to, slightly above, what we find. So how do you estimate the error? So yeah, how we estimate the error: this is just the error in the extrapolation. We have the various points, say for this first curve. For each value of lambda, I look at, say, the lowest eigenvalue as a function of delta max, and then I fit that, and this is just the uncertainty in that fit, roughly. OK. What happened to K max? What happened to K max — oh, great, yes, thank you, I meant to say that. For this specific theory, because all of my states are built from left-moving modes, d minus phi, any operator built from these guys is going to be annihilated by p squared — or more accurately, it's annihilated by p plus. And so any operator built this way has invariant mass 0, and there is no continuous parameter to discretize. But that's very specific to this theory; when I go to 2 plus 1 dimensions, that won't be the case. It just has to do with the fact that I have purely holomorphic basis states. Yeah, but there aren't many theories which satisfy this requirement. Yeah, that's true — well, in 2D, yes, there are a lot. In 2D, any theory which starts in the UV from a free theory? Yes, if I'm starting from a free theory, I will always have this. Yeah, that's true. It's the same for free fermions. Yep, absolutely. OK, great. So that's just looking at the mass spectrum. Now what we want to do is look at dynamical observables. We don't just have the eigenvalues, we have actual eigenstates, and so one thing we can do is try to look at correlation functions of local operators. Because I'm working in momentum space, the natural thing to look at instead is the spectral density — the Källén-Lehmann spectral density, or spectral representation — which is just the decomposition of my two-point function of some local operator in terms of mass eigenstates. So this is just the equivalent of the two-point function, but in momentum space. In practice, it's actually simpler to compute not the spectral density but the integrated spectral density: the cumulative overlap of my operator with mass eigenstates up to some threshold. And how we compute this in practice is we just diagonalize the Hamiltonian, which gives us these mass eigenstates mu i; we compute their overlap with whatever operator we're interested in; and then we sum up those overlaps up to some threshold to get the integrated spectral density.
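The integrated spectral density computation just outlined can be sketched as follows; it assumes the truncated basis has already been orthonormalized and that the overlaps of the chosen local operator with the basis states are known, both of which are assumptions on my part.

```python
import numpy as np

def integrated_spectral_density(M2, op_overlaps, mu2_thresholds):
    """Cumulative overlap of a local operator with the truncated mass eigenstates.

    M2             -- truncated mass-squared matrix in an orthonormalized basis (Hermitian)
    op_overlaps    -- overlaps of the operator with each basis state, same ordering as M2
    mu2_thresholds -- values of mu^2 at which to evaluate the running sum
    """
    eigvals, eigvecs = np.linalg.eigh(M2)            # mass eigenstates mu_i^2 (columns)
    weights = np.abs(eigvecs.T @ op_overlaps) ** 2   # |<operator | mu_i>|^2
    return np.array([weights[eigvals <= t].sum() for t in mu2_thresholds])
```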
And so, for example, I can look at the integrated spectral density of the trace of the stress tensor, which in light cone coordinates, in light cone quantization in 2D, is just the component T plus minus. So what I'm plotting here is, for a specific value of lambda — I've looked at the theory at one specific value of the coupling, I've set delta max, I've diagonalized the Hamiltonian, I've computed the overlap of T plus minus with the various eigenstates that come out, and then I've added these up — this is showing their cumulative overlap with T plus minus. The dots correspond to the discrete points corresponding to basis states, and the dotted line is just an interpolation to show what the functional behavior is. So why should we believe this? Because your convergence gets worse and worse as you move up in mu, right? So, great, I'm going to show you a ton of plots about convergence, so let me know if you're unhappy. OK, so I start at lambda equals 0, in the free theory. T plus minus only talks to two-particle states in the free theory, so, unsurprisingly, the spectral density starts at 4 m squared, at the two-particle threshold. And then as I increase the coupling, what I find is that two things happen. One, the eigenvalues start to move to the left — this is just the same thing that we saw in the previous plot: they're starting to go to 0 as I increase the coupling. And two, the spectral density starts to deform away from its free field theory value. Then as I keep going, what I find is that as I get near the critical coupling, as the mass eigenvalues hit 0, the spectral density of T plus minus goes to 0 in the IR. This is indicative that the theory I'm flowing to is a CFT, because in a CFT I expect the trace of the stress tensor, which is exactly what I'm plotting, to vanish. And so this is a nice check, a nice way of seeing that the theory we're flowing to is described by a conformal fixed point. You can imagine doing this in a theory where I didn't already know the answer: I could see the mass gap close, and then by looking at observables like the trace of the stress tensor, I could determine whether or not this endpoint corresponds to a CFT. Of course, here I knew this a priori, because I expect this point to correspond to the 2D Ising model. OK, but it's all well and good to look at this at a specific value of delta max. The question is how it converges, how much you should trust this data. So what I can do is make a plot of the same thing, the integrated spectral density — here I've done it at a particular coupling, a particular mass gap — and then vary delta max, slowly increasing the number of states I have. And what we see is something kind of nice: our basis builds up the spectral density from the IR up. So I get the most rapid convergence in the IR — even for, say, delta max of 16, I'm already doing a good job of reproducing the spectral density in the IR with very few states. And this is exactly what I want. I really care about IR physics, I really want to study IR states. So this is confirming that our basis, at least for this model, is a useful basis: it sufficiently reproduces the IR and then slowly works its way up as I increase delta max. So have you looked at what individual matrix elements are doing? Because you're summing up lots of terms. Yeah, great.
So yeah, you can understand this convergence. I think I'm running a little short on time, but yes, you can look at matrix elements and you can actually understand the rate of convergence, the approximate size you expect for the error due to truncating. Yeah, absolutely. OK, great. So you can zoom in, look at the IR, and then compare to the theoretical prediction for the Ising model, and you find that in the IR it agrees with the Ising model and then starts to deviate. And I just want to stress: this deviation is not truncation error. This deviation is because the theory I'm studying is not the Ising model; it just flows to the Ising model in the IR. And so this deviation right here — because the result is converged, we can see that all of the curves agree as I increase delta max — is just showing me the flow from phi to the fourth theory, from free field theory in the UV, down to the Ising model in the IR. And you can see it's right around scales of order the coupling that I start to see deviations from the Ising model prediction. OK, I'm going to go really quickly because I have five minutes, but I'll just show you one other plot. Well, I'll bypass this. So you can look at other operators like phi to the n. We can see that we get universal behavior, because all of these Z2 even operators should flow to epsilon in the IR, so we should see that their spectral densities agree in the IR. We can again zoom in, and we can see that in the IR they all agree with the Ising model prediction for epsilon. And then the last thing we can look at is the component T minus minus, another component of the stress tensor. The integrated spectral density of the component T minus minus is actually equivalent to the Zamolodchikov C-function, which flows from the central charge of the UV CFT down to the central charge of the IR CFT. And so if I plot the spectral density of T minus minus, that's equivalent to looking at the Zamolodchikov C-function along this flow. So here I've plotted this spectral density for a particular value of the coupling, and what I really like about this plot is that you can see the entire RG flow. In the UV, it asymptotes towards the free scalar value of one. And then right around the approximate scale of the coupling, which is what you would expect just from naive dimensional analysis, we start to see a drastic transition away from the free value. So from this plot you can just read off approximately what the interaction scale is. And then down in the IR, because this theory has a mass gap, this flows to the trivial central charge value of zero. Now, the obvious thing you would want to do is try to get the mass gap as close as possible to zero and see this level out at some plateau around C equals one half: you would like to read off the central charge of the 2D Ising model from this. And it turns out that this value right here is very UV sensitive or, equivalently, very difficult to reproduce at low delta max, at the delta max that we're currently at. And the reason is the following — it's a very dumb reason: it's because the critical coupling is not very large. If you want to read off the central charge of your IR CFT, you need a large separation between the following three scales.
You have your mass gap, which is just the lowest eigenvalue; you want that to be much less than the critical coupling scale. But you're limited by your IR resolution, which is roughly set by the other scale in your matrix, the coupling scale, divided by whatever your truncation parameter is. So you need the gap to be much, much less than the critical coupling to be able to read off the plateau at C equals one half, but you're limited at finite delta max because your IR resolution can't get low enough. And it turns out that for this particular observable, we're remarkably sensitive to these corrections. The corrections just have to do with the fact that T minus minus — the UV operator we're using — is approximately the Ising model T minus minus, plus higher-order corrections suppressed by some effective cutoff, which is just of order the interaction scale here. And so, A, this suppression is small, and B, this operator has relatively low scaling dimension. So this affects our ability to easily read off the IR central charge. That's right, that's true. OK, great. So I'm basically out of time, but we're now working on 3D, so we can go to two plus one dimensions. In two plus one dimensions, the added complication is that you no longer have just d minus phi as your building block; you also have d perp phi, this additional transverse direction. There's an additional subtlety associated with divergences in the mass matrix, which I don't have time to talk about, but we're working on constructing the basis to study this. I was hoping to show plots, but we're not quite there yet for a phi to the fourth theory in 3D. What we've done instead is study a large-N model. So we've taken a theory with N scalar fields in the large-N limit. In the large-N limit, interactions which mix particle number are suppressed, and so you can focus just on the low particle number sector. And so — sorry if I'm going really fast, I just wanted to say this and then wrap up — I can look at the low particle number sector and reconstruct, say, the spectral density of phi squared and compare it to the theoretical prediction, which is this line in black, and it aligns really well with our results. And the one thing I want to stress is that this observation, that processes which mix particle number are suppressed by one over N, is only true in light cone quantization. There are additional matrix elements which vanish in light cone quantization but are present in equal time. So to do this calculation in equal time would be just as hard as studying, say, the Ising model — sorry, as studying general phi to the fourth theory. And so one nice advantage of light cone is that it makes this calculation actually much simpler to do. OK, great. So if I have two minutes, I just want to say a couple of words about gauge theories and then I'll be done. OK, so obviously we would like to apply this to gauge theories and eventually try to study something like QCD in three plus one dimensions. The obvious thing you would do, the most naive thing you would try, is to repeat this process exactly. So you would naively think that you should just start from a theory of free quarks and gluons and then gauge the global symmetry, so that your deformation just couples these two things together — you would basically be adding an A dot J term.
And then the idea would be that you could study the flow down to QCD, or just some confining theory. However, this is very difficult, as we all know. But why is this difficult? What is really going on? The reason is super easy to understand: the deformation you've added is not a local gauge-invariant operator. If you kept everything, the full Hamiltonian would be gauge invariant, but only because you're integrating this local operator over all of space. Once you put in any sort of regulator, you're no longer integrating over all of space, and you've broken gauge invariance, unless you can be very clever in choosing your regulator. So this, in a nutshell, is why gauge theory is difficult.

But what you could do instead is try to bypass all of this difficulty by starting from, say, an interacting fixed point. So let's say I had some data on, say, a Banks-Zaks fixed point. To be very concrete, let's say I was looking at an SU(3) gauge theory in four dimensions with 16 flavors. That has a relatively low value of alpha star; alpha star is something like 0.01... 0.04, yeah. Ah, that's right, 0.04. So I could imagine trying to compute this fixed-point data perturbatively, at least to leading order in alpha. And then the deformation I would add would be a mass term for all but, say, three of the flavors, or for all of the flavors if I just wanted to study pure Yang-Mills, but I want to study QCD, so I'm going to give a mass to all but three of the flavors. This also flows to QCD, so long as my starting point is sufficiently weakly coupled that I have a separation of scales. But the operator I've added is a local gauge-invariant operator, so I can put a hard cutoff in here and treat this system exactly the same as all of the scalar field theory examples I've shown you. Instead of trying to mess with choosing a proper regulator and worrying about gauge invariance, I can just close my eyes to all of the gauge-theory structure, think like a CFT person, and start from this interacting fixed point instead. Now, obviously this requires me to have data at an interacting fixed point, but to get our feet wet we could imagine trying to do this for large N gauge theories, in systems where I have a one-over-N expansion, or where I have some data to start with, and then try to start studying things like pure Yang-Mills at large N and eventually work my way towards more realistic theories. Okay, great. Thank you very much for your time.
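As a quick sanity check on the quoted number, here is a two-loop estimate of the Banks-Zaks fixed-point coupling for SU(3) with 16 fundamental flavors, using the standard two-loop beta-function coefficients; stopping at two loops is of course only an approximation, and this is my own illustration rather than anything from the talk.

```python
# Two-loop Banks-Zaks estimate for SU(3) with Nf fundamental flavors:
# beta(a) = -2 a^2 (b0 + b1 a), with a = alpha / (4 pi),
# b0 = 11 - (2/3) Nf,  b1 = 102 - (38/3) Nf.
import math

Nf = 16
b0 = 11.0 - 2.0 * Nf / 3.0      # ~ 0.333 for Nf = 16
b1 = 102.0 - 38.0 * Nf / 3.0    # ~ -100.7 for Nf = 16

a_star = -b0 / b1               # nontrivial zero of b0 + b1 * a
alpha_star = 4.0 * math.pi * a_star

print(f"alpha_star ~ {alpha_star:.3f}")  # ~ 0.042, consistent with the ~0.04 quoted above
```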
So we are a bit out of time, so maybe there's time for one quick question, unless it's very urgent. You are the organizer, so go ahead. Yeah, okay. No other questions? No? Okay. I have two urgent questions. The first question: these are very nice calculations, and you're probably sitting on a treasure trove of data. So, using what you computed, can you convince us, not by some argument but with some concrete data, that truncating in C max, truncating in Casimir, is the right thing to do as opposed to other truncations I can imagine? For example, suppose I believe that somehow two-particle states, independently of what the Casimir is going to be, are more important than four-particle states. Can you refute this expectation based on the weights?

Yeah, good. So if all I wanted to know in life were just the lowest energy state, say I just cared about the mass gap for this specific example of phi-to-the-fourth theory, then you can see that at least away from the critical coupling, for most of this plot, there is some sense in which particle number works. Now, it's hard to disentangle, because Casimir grows with particle number, but roughly particle number seems like an all-right organizing principle. As soon as you go to higher states, though, if you want to go slightly higher and get the full spectral density flow, then you can see that these states actually do have significant overlap with higher-particle states, and that truncating in particle number does not do a good job, if you tie your hands, or rather just choose a different scheme, and say, okay, I'm going to focus on the low-particle-number states, keep those to very high dimension, and only supplement them with a small number of others. So experimentally you can see this. Sorry, I don't have a plot to show you, but looking at the weights it does seem suggestive that Casimir is at least a slightly better organizing principle for these excited states than particle number. Now, it could be that there's more structure to understand, and it could be that this theory is very special, but it does seem, just from the data, that Casimir is a sufficiently efficient organizing principle. But I totally agree that it would be worth understanding this more, and exploring it in more theories, to understand more robustly how this behaves.

I thought you also said that for 2D QCD you've compared the Casimir cutoff to the particle-number cutoff that other groups had used. Oh, that's true, yeah, that's right, there was previous work. In 2D QCD it is absolutely true that, even for the lowest state, the lowest two operators of the same dimension, one of which is a two-particle operator and the other a four-particle operator, contribute basically equally; the four-particle contribution only disappears at large N.

Maybe, unfortunately, we can continue during the coffee break, and we will hear much more soon. Okay, thank you, thank you.
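As a small aside on that last exchange, here is a toy counting exercise (my own illustration, not the authors' code) for the 2D free-scalar basis: each multi-particle monomial ∂_-^{k1} phi ... ∂_-^{kn} phi corresponds to a partition of its total dimension Δ = Σ k_i into n parts, so one can see how a dimension/Casimir cutoff populates the different particle-number sectors, as opposed to a hard particle-number cutoff. This ignores the distinction between primaries and descendants and any symmetrization details, so it only illustrates how the two schemes distribute states.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(total, max_part):
    """All partitions of `total` into parts of size <= max_part (each part >= 1)."""
    if total == 0:
        return [()]
    result = []
    for largest in range(min(total, max_part), 0, -1):
        for rest in partitions(total - largest, largest):
            result.append((largest,) + rest)
    return result

def count_states(delta_max):
    """Count toy monomials d_-^{k1} phi ... d_-^{kn} phi with sum(k_i) <= delta_max,
    binned by particle number n."""
    counts = {}
    for delta in range(1, delta_max + 1):
        for p in partitions(delta, delta):
            counts[len(p)] = counts.get(len(p), 0) + 1
    return counts

if __name__ == "__main__":
    for dmax in (10, 20):
        by_n = count_states(dmax)
        total = sum(by_n.values())
        print(f"Delta_max = {dmax}: {total} states, by particle number {dict(sorted(by_n.items()))}")
```

The point of the comparison, echoing the answer above, is that a Casimir/dimension cutoff automatically brings in many-particle states once Δ_max is large enough, whereas a hard particle-number cutoff excludes them at every dimension.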