So thanks a lot to the organizers for the invitation to come and give a talk. I want to tell you about conformal data from critical spin chains using periodic MPS and the Koo-Saleur formula. I have prepared notes; I'll be using the blackboard mostly, but I'll be going back to the slides whenever there is a plot. My talk will be basically a summary of, or an introduction to, these two papers. One is just on the arXiv, and the other one was published recently. This is work with Ash Milsted, who is a postdoc at Perimeter Institute, and also Yijian Zou, who just started his PhD, also at Perimeter Institute. And I want to acknowledge the Simons Foundation. I'm illegally here: I belong to another Simons collaboration, the Many Electron Problem collaboration. And I know that this is enemy territory, but I think you've been very nice so far. So thanks. So what do I want to do? I'll start with the motivation, where I'll just remind you of what problem I'm interested in. Basically, I'll have some microscopic Hamiltonian corresponding to a critical system, and I want to extract the conformal data. So what I'm trying to do is to learn about the universal physics of a critical spin chain, starting from the microscopic lattice Hamiltonian. Then we'll extract conformal data, first by looking at the low-energy spectrum of this lattice Hamiltonian; this is based on work by Cardy in the 80s and others. Then we'll go beyond just looking at energies and momenta, and we'll look also at how these low-energy states are related to each other. Energy eigenstates are just orthogonal to each other, but there is some further relation between them, and we'll explore that using the Koo-Saleur formula. And finally, I'll discuss how all this can be done with exact diagonalization. So Andreas, please pay attention.
But you can also go to larger system sizes using matrix product states, where you will have a significant reduction of finite-size errors, and that will also be very useful. And finally, if I have time, I prepared a couple of slides on an application: following a spectral RG flow from one CFT to another, working on the lattice. Very good. So let's get started, and maybe I can have this off now. So we are with the motivation. As I said, our starting point, what we take as an input — we can think of this as a big algorithm — the input is going to be a critical spin-chain Hamiltonian H. To be concrete, this could be your favorite one, which here is the Ising model. But the discussion will be fully general: we could apply these ideas to any critical spin-chain Hamiltonian. And the output, what we would like to extract from here, is the conformal data. We would like to characterize the critical universality class of the phase transition. I'm going to assume that there is some CFT underlying this phase transition, and so this will mean identifying the conformal data of this underlying CFT. Very good. So what do I mean by this conformal data? Well, most of you know this better than I do, but still, some of you also come from other research areas, and maybe it's worth reminding you of what this conformal data is. So what we are going to say is that, for instance, if we look at ground-state two-point correlators for this critical system, and we identify some local operators, suitably, in some way that I'm not going to specify — but if we have the ability to determine such operators, which correspond to quasi-primary fields in the CFT — then we will realize that, at least at long times and long distances, they behave in a universal way. So we expect some two-point correlator to decay in some characteristic way with distance and time.
So it turns out that the correlators, maybe at short distances on the spin chain, behave in different ways. But at long times and distances, their behavior is completely characterized by the scaling dimension and the conformal spin. And the idea then would be for us to actually extract this data. Maybe we can just look at two-point correlators at long distances, long times, and extract these coefficients. In Euclidean signature, this formula is understood where it's supposed to be valid, but in Minkowski signature, has it been understood how close to the light cone you should already see this kind of behavior, or should you be far away from the light cone? Right, so in the Euclidean case, indeed, you should go to long distances. And in the Lorentzian case — I don't know the answer to your question, but I understood the question, which is already remarkable — you would like the two insertions to be away from the light cone, right? Because near the light cone this correlator should diverge, and on the lattice you won't see any divergence. So that already tells you: you don't want to be too close to the light cone to see the universal character, at least through this scaling. There might be some other universal characterization there, but I don't know of it. All right, so these scaling dimensions and conformal spins are an example of conformal data. There is more to it — this would be for quasi-primary fields — and now let me list the conformal data. The minimal amount of information that you need to extract in order to completely characterize this emergent CFT would be given by the central charge, c, some number that appears in some particular type of correlators of the stress tensor. We would also like to identify the scaling dimensions and conformal spins for primary fields.
And for those who don't know what primary fields are: they are just some selected subset of the quasi-primaries. And then we would like to find also the OPE coefficients, the operator product expansion coefficients, which control the three-point correlators, and which I'm not going to discuss today. For the methods that I'll discuss, there is work in progress where we explain how to also find the OPE coefficients, but today I will not discuss them. So the idea is that we would like to characterize this conformal data: if, given a microscopic Hamiltonian, we can characterize all these properties, then we have completely characterized the underlying CFT, OK? So our goal is precisely to get these guys. The input is this spin-chain Hamiltonian, the output should be all this, and today I'll focus on just this part, OK? Any questions? Yeah, sorry, is it crucial to do Minkowski? Can the OPE data be extracted? Yeah, the rules of the game are: you can do anything you want, but you want to get those. Why don't you prefer to go to Euclidean? Your correlators become Euclidean correlators, you don't have these light-cone issues; what is the preference? I don't have a preference. I was just saying this is the meaning of this conformal data in the Lorentzian version, and in the Euclidean version I would have to write something like that, right? So this will be the correlator, and then some angle here that has to do with it. You're always going to stay in real time. I will not even do that. I will get the ground state and low-energy states, and this will come out of it. Are you going to check for conformal invariance in some way? No; as an input, I'm going to assume the critical Hamiltonian. So I'm taking as input the critical Hamiltonian — I won't discuss how to find it, which is a very important question, how to find a critical Hamiltonian. I'm going to assume that this is the input, and I'm going to assume also that there is a CFT.
Can you, for example, check that phi alpha and phi beta don't overlap at different dimensions? You can, at the two-point function level. Right, but I'm not going to build two-point correlators, so maybe that's — no, no, I just wanted to motivate who these guys are. OK? Good, I don't have a lot to say, so keep asking. Good, so if there are no more questions: done. So how are we going to obtain this conformal data? One possibility would be to study two-point correlators, in real time or in Euclidean time. We would need to know who these scaling operators are, and it might be a non-trivial question to identify the scaling operators on the lattice. We may also need to take into account that we cannot study infinite systems: this asymptotic behavior assumes that we are on an infinite system, so finite-size effects might be important. There are things you can do by studying correlators, but we're going to use another strategy here, one that has been used in the literature for many years: the operator-state correspondence. Those who know it, please continue to be patient with me. For the others, this is the statement that the scaling operators in a CFT — these phi alpha, and it doesn't matter whether they are primary, quasi-primary, descendants or anything — these scaling operators, with their scaling dimensions and conformal spins, are in one-to-one correspondence with states of the CFT on the circle. So what I'm saying is: you put your CFT on a circle, as space. You can imagine a cylinder with time going vertically — this would be time, or tau if you prefer Euclidean time — and around the cylinder we have space. The idea is that there will be some Hamiltonian and some momentum operator: the Hamiltonian generates translations in time, the momentum generates translations in space. These guys commute, so you can simultaneously diagonalize them, and then you can get the energy eigenstates.
And these energy eigenstates come with energies and momenta, the eigenvalues of these two operators, which happen to be directly related to the scaling dimensions and the conformal spins that we are interested in. So if this space direction here has size L, the energy is going to be 2 pi over L times the scaling dimension minus c over 12, c being the central charge. And the momentum is 2 pi over L times the conformal spin. So here's the trick. You can use the operator-state correspondence to, instead of studying two-point correlators, just directly study a lattice version of the CFT put on a circle. Then you look at low-energy states, you look at energies and momenta — you diagonalize the Hamiltonian and the momentum operator — and you can read off the scaling dimensions and conformal spins from the energies and momenta. And that's standard; that's a very well-understood piece of theory. The only thing we're going to do now is a lattice version of this. So on the lattice, we'll put N spins, OK? And then we'll have some lattice Hamiltonian H and some momentum — sorry, we'll have some translation operator T. In this case, we can diagonalize these guys and we'll get the energies and the momenta again. I'll delete these guys in a second. So let me call these the energies in the CFT, and if I don't say CFT, it means on the lattice. Then these lattice quantities are basically the same: there are a couple of normalization and microscopic constants to be determined here, but the rest is very similar. So what we have is that, at least at low energies, the spectrum of energies on the lattice of this spin chain, up to a couple of non-universal constants, is organized again according to the scaling dimensions, and the momenta are organized again according to the conformal spins. Note the correction term: instead of delta alpha, we have delta alpha minus c over 12, just as here in the CFT.
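Written out, the blackboard relations being described are the standard cylinder formulas of the operator-state correspondence, together with their lattice counterpart (the two non-universal constants are written here as A and B, matching the labels used in the discussion):

```latex
% CFT on a circle of circumference L (operator-state correspondence):
E_\alpha \;=\; \frac{2\pi}{L}\left(\Delta_\alpha - \frac{c}{12}\right),
\qquad
P_\alpha \;=\; \frac{2\pi}{L}\, s_\alpha .

% Lattice version for a chain of N spins, with non-universal constants A, B:
E_\alpha^{\mathrm{lattice}} \;\approx\; A \;+\; \frac{B}{N}\left(\Delta_\alpha - \frac{c}{12}\right),
\qquad
P_\alpha^{\mathrm{lattice}} \;=\; \frac{2\pi}{N}\, s_\alpha \quad (\mathrm{mod}\ 2\pi).
```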
So that's the universal part. What's non-universal here, what is lattice dependent, are these normalization constants and the subleading corrections. But the leading corrections in one over the system size, one over N, are universal — well, they work the same for all the states, except that we don't know the normalization here. Yes? So what I'm saying is: on the right-hand side, except for these constants that we need to take care of, everything is universal. And on the left-hand side, we have data that comes directly from diagonalizing your spin chain. So this is lattice or numerical data, and you can extract all these universal properties from it. OK? So, are there questions? Why is B sensitive to lattice physics? Because of how they give you the Hamiltonian, the Ising model, right? Actually, my favorite Ising model Hamiltonian comes with a factor 2 there, and now we could argue forever about it — I actually think it's a 2, but OK. So there is some normalization ambiguity there. And this other constant is just shifting all the energies. Very good. But that's important, yes? The speed of light is not universal. That's exactly right. We need to identify the speed of light in your lattice system, and with that information you fix this, and you remove the ambiguity. Very good. And this has been known forever. These are techniques that were developed following work by Cardy in the 80s, right? This is the typical way of extracting conformal data from a spin chain, and it has been done by many people over the years. I think, Andreas, did you mention this? I think you had some plots. There was one slide. Right. Very good. So now what I would like to do is to go a bit beyond this. So again, let me mention this: given your Hamiltonian, you identify on the lattice the joint eigenstates of the Hamiltonian and momentum. So these guys will have these energies. And you don't have a momentum operator defined on the lattice, in principle, but you have translations by one site.
And so you diagonalize the Hamiltonian simultaneously with translations, and the eigenstates fulfill this: under translation they just pick up a phase, and from this phase we extract the momentum. The momentum is then defined up to 2 pi, whatever. OK. So let me show you a couple of things you can do with that. Can I have the slides here? So this would be the prediction from CFT — this is the exact data if you analyze your CFT on a circle. On the axes, you have momentum and energy, or directly conformal spins and scaling dimensions. And what you should recognize here are the conformal towers: the identity in blue — this would be energy 0, or scaling dimension 0, and momentum 0 — and then in blue you see all these blue dots, the descendants of the identity. You see the stress tensor, or the states that correspond, through the operator-state correspondence, to the stress tensor: T, T-bar. Then you have two more towers: the spin field sigma in orange with all its descendants, and the energy density epsilon in green with all its descendants. And we spread the dots along the x-axis a little bit just to show the degeneracy. Of course, the momenta are quantized, so all these four guys should be on top of each other, at spin 1, 2, 3, 4, 5 — but we spread them, because we can, because there is no ambiguity there. So this is what the CFT tells you that you would find. And if you do that on the lattice, you get something that looks very similar. This would be for a system of 64 spins, for the critical Ising model, this kind of thing here. If you diagonalize it on 64 spins, you get this spectrum. And this spectrum is not identical, but what you can see here is that the low energies come out very accurately. There are numbers attached to that.
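For readers following along at a keyboard, here is a minimal exact-diagonalization sketch of this kind of extraction (my own toy code, at a small size rather than 64 spins, using one common convention for the critical transverse-field Ising chain). Because the non-universal shift and normalization constants cancel in ratios of energy gaps, the ratio of the first two gaps should come out close to the Ising value Delta_epsilon / Delta_sigma = 1 / (1/8) = 8:

```python
import numpy as np

# Pauli matrices and 2x2 identity.
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def embed(N, ops):
    """Tensor product over N sites: the 2x2 matrices in `ops` (a dict
    site -> matrix) sit at their sites, identities everywhere else."""
    mats = [ops.get(j, I2) for j in range(N)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def ising_hamiltonian(N):
    """Critical transverse-field Ising chain with periodic boundary
    conditions: H = -sum_j (X_j X_{j+1} + Z_j)."""
    H = np.zeros((2**N, 2**N))
    for j in range(N):
        H -= embed(N, {j: X, (j + 1) % N: X})
        H -= embed(N, {j: Z})
    return H

N = 10
E = np.linalg.eigvalsh(ising_hamiltonian(N))   # ascending eigenvalues
# The non-universal constants cancel in gap ratios, so this estimates
# Delta_epsilon / Delta_sigma:
ratio = (E[2] - E[0]) / (E[1] - E[0])
print(ratio)  # close to 8, up to finite-size corrections
```

At N = 10 the ratio is already within about one percent of 8; the residual deviation is exactly the kind of subleading finite-size correction discussed next.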
But even from this figure, you already see that the departure from the exact solution is clearer as you go up in energies. At low energies you get high accuracy, quantitatively, and as you go up in energies this gets spoiled — and this is because of these subleading corrections. Yeah? Does the lattice cutoff mean that there's also a cutoff in the spin that you can measure, if you keep going? This is what I mentioned: on a finite lattice, if you have sufficiently large momentum, it wraps around, and so you cannot distinguish p from p plus 2 pi. So there's this type of thing — in this case, spin and spin plus 64 would be the same. Right, that's right. So we stop plotting these things because it's hard. These are 64 spins, and this is the Ising model — but we didn't use that; for us it's an arbitrary spin chain, and we can plot something like this. There is some computational cost attached to getting all these states, so you have to stop somewhere. But once you see that things are not exactly degenerate, you say: OK, the data is being corrupted by finite-size effects, and it's fine to stop there. So here you didn't do any extrapolation in N? You just did N equals 64? This is N equal to 64, exactly as it comes out, no attempts to polish the data so that it's more impressive or anything. That's what you get, OK? So then what? This is literally for this transverse-field Ising model. Yeah, and we could use free fermions and get more, but we didn't use that. That's always the danger when you use the Ising model as a simple toy model to illustrate things: people will tell you, oh, but we can do that using free fermions. Sure. I have a slide about that. Anyway, so let's do more models. Let's take 14 spins; let's use exact diagonalization. And before Andreas correctly claims, oh, I can go beyond 14 spins —
Yes, but I only have my laptop. So I could go to — well, I could not, but the rest of you could go to 30, 40 spins. 50? No, 50! Ha! I don't know, but maybe he found a trick. Anyway, exact diagonalization has a hard upper bound on how large the system can be. You can then diagonalize the Hamiltonian using these well-established techniques, and what you get is this: momenta versus energies. And it's cute. I also added some labels, because I knew in advance that, well, the ground state corresponds to the identity operator, the first excited state is the sigma, this guy is the epsilon. So I had some information, and I added it there just for reference. But what you can see is that the data is already significantly less well-behaved. And also, a priori, when you get this plot, you don't know who the primaries are. And remember, we are trying to get not just scaling dimensions and conformal spins, but those corresponding to the primaries. So, for a generic Hamiltonian, we will need to work a little bit to understand who is a primary and who is not. Was that a question? This one? Good, I will answer that. Maybe. So the two questions are: first, how do we manage to paint, to color, all these dots? How do we go from here to here? How do we understand who belongs to the same conformal tower? On the lattice, what you really have is a list of eigenstates. If you take the overlap between different eigenstates, it is zero, and that seems to be it — that's the story you could naively tell. So we want to go beyond that, and we want to say that whatever eigenstate corresponds to this dot here, say the ground state, and this other eigenstate corresponding to, say, the stress tensor, are related in some way. We would like to be able to do that.
That's one thing we would like to do, and the other is that we would like to be able to scale this up a little, so that we can get more eigenvectors beyond what exact diagonalization can do. And then I want to show you a plot of what happens with a more complicated Hamiltonian, so that you can see that this is fully general, and why it's genuinely non-trivial. So this would be the three-state Potts model. This would be, again, the exact scaling dimensions and conformal spins coming from CFT. And this is the type of data after painting, after coloring. You can see, for instance, that if they give you just these dots and they ask you, OK, where are the primaries? — I don't know how you would figure out that there is a primary up here. It's highly non-trivial. You really have all this data, all these eigenvalues, conformal spins and energies. And sometimes you can say, well, there is some microscopic symmetry, right? Z3 here, or Z2 in the Ising model, or whatever; you can use that to say, oh, the sigma is the lowest-energy state which is odd under Z2. So yes, for specific models where there are microscopic symmetries, you can get some information out of those. But you need much more to identify all the primaries. So this is an example — I mean, you can get many more states, but we just focus on some subset. This is the three-state Potts model with periodic boundary conditions. If you put anti-periodic boundary conditions, you will get other sectors of the CFT, and basically you can scan all the primaries in this way. OK. All the multiplicities are correct, at least? The multiplicities? Yes, there's nothing missing. Right, yes. But what I'm trying to say is that, yes, you can play around with partial information such as symmetries and multiplicities, but that's generically not enough to determine who the primaries are. You need to do more, and I want to explain what this more is. Very good. So, where are we in the notes?
OK, so the next step. I think we can leave this on for a second, because I'll use it again in a minute. But I want to go to this Koo-Saleur formula, which will be the key to being able to color all this data. So I think it's number three: Koo-Saleur. I think you guys know this. So what we want to say now is that in a CFT there is the Virasoro algebra. You have generators of this infinite-dimensional symmetry group, the conformal group. There are very interesting expressions — if you are familiar with them, you don't need to watch them, and if you are not familiar with them, you don't need to watch them either; that's why I'm just gesturing instead of writing them. I mean, there is some algebraic structure supporting all this emergent symmetry. And the point is that these operators act as ladder operators on the low-energy spectrum, on the spectrum of the CFT. So if we had access to such operators — these operators connect different eigenstates, OK? And eigenstates that are connected by these operators belong to the same conformal tower. A conformal tower, actually, is an irreducible representation under the action of these guys, OK? So the idea is: it would be great if we somehow had a lattice version of these Virasoro operators. That would be great, because that's the type of object we would use on the lattice to act on these low-energy eigenstates, and the result would be some other eigenstate in the same tower, OK? So I wish I had a lattice representation of these guys. And that's exactly what the Koo-Saleur formula does for us, OK? If you want, as motivation: there is a simpler version of this question, just for the global conformal part of the group. It would be nice to understand if there is a lattice version of P mu, which there seems to be.
So if you can strike the global-group descendants out of the table, you are already halfway. Right, and in higher dimensions that's all you would expect, yes. So the answer is: we can do everything, and a subset of everything is what you ask. Before you do everything, can you explain something? Like, does that simpler question not have a simple answer, or do you have to go through Koo-Saleur in that case? I think it's the same motivation. What I'm going to do is identify the stress tensor on the lattice, and from there you can build all the generators, global or Virasoro. So then, instead of telling you directly what the Koo-Saleur formula is, I'm just going to go back and try to understand the first part, where we said: at low energy, you take the lattice Hamiltonian, you diagonalize it, and you start extracting energies and momenta, which correspond to scaling dimensions and conformal spins. So what's happening there? Well, what's happening is very simple. We have the Hamiltonian of the CFT, which is nothing but an integral over the circle of a Hamiltonian density for the CFT. And what we are seeing is that somehow this corresponds, on the lattice, to some lattice Hamiltonian. But the Hamiltonian on the lattice is not one big piece; it's a sum of local terms. So, the same way that we've been able to look at this guy and see that its spectrum is analogous to that guy, how about we just look at these two densities and conjecture that maybe they are related? So how about, as motivation: maybe h, the local term in your lattice Hamiltonian, can be equated to h, the energy density of the CFT. And remember that this is nothing but the stress tensor, T of x plus T-bar of x. That was actually our motivation, and then later we discovered that this had a name — it's called the Koo-Saleur formula, because Koo and Saleur did this in, I think, '94 for some integrable models.
And our proposal — we were not aware of this, but we quickly adjusted to reality, and we call it the Koo-Saleur formula. The idea was originally motivated for integrable models, but we are just going to use it for a generic model: a generic model that gives you a lattice Hamiltonian which is a sum of local terms. So now we are going to take a local term and think of it, and manipulate it, as if it were a lattice representation of T plus T-bar. I'm sorry, why do you need Koo-Saleur? Isn't this just the usual RG? You take a local operator on the lattice, you're supposed to match it onto whatever continuum-limit operators have the same quantum numbers, and that's the leading term in that matching. That's the usual Wilsonian RG picture. Are you saying this is going to work? The formula definitely is very natural. Good. I also like it. So the question is this: you presented it as if we don't know whether it's going to work — right? It's guaranteed to work. I wanted to make it more exciting, but I don't see you very excited; that's OK. So basically the summary is that we've been using Cardy's formulas from the 80s to death, and we've not been using this, which was there all along. Conceptually, it's a tiny step to take: instead of equating this to that, you just equate the densities. And yes, it's absolutely natural. All I'm saying is: if you use that, you can color your low-energy spectrum. So I agree with you, it's extremely natural. And before we discovered that this was Koo-Saleur, that it even had a name, we were very excited, but we were wondering why people had not been using this, because it's just a very natural thing to do. OK, so I agree with you that this is not even exciting. All right. So then, what do we want to do? We have this Virasoro algebra. And interestingly, in practice, what do we need to do to start playing the game of identifying who is the identity? How do we start identifying primaries and quasi-primaries and so on?
Well, first let me define the object that will be the only object we need. It's going to be H_n. I'm going to define H_n by taking Fourier modes of the local Hamiltonian terms: the sum over lattice sites j from 1 to N of e to the i j n 2 pi over N, times h_j. So I'm just taking this Hamiltonian term here, here, here, here, and adding them up with a phase. I'm going to call this H_n. And H_0 is equal to a renormalized version of the Hamiltonian on the lattice — if you don't put phases, you still have the sum of the terms. So these are the Fourier modes of the Hamiltonian density, something that you can easily build on the lattice. And now all we need to understand is how they act on the CFT states, and then I'll do the same on the lattice. So what do we need to know? Let's see what's next. Oh yeah, I'm going to use this one. It turns out that, on the CFT side — this is again exact CFT data, not spin chains — you can use these L_n's to move around. For instance, as an example, if you start from the ground state, the identity state, and you apply L_n's and L-bar_n's, this will allow you to move around. And what's the relation between the L_n's and the H_n's that we just defined? So that's — sorry, I lost this; it's somewhere. Oh, I have it here. Even at the level of the CFT, the H_n is nothing but L over 2 pi times the integral dx, with the Fourier phase transformed to the circle, of the Hamiltonian density of the CFT. But now we just need to remember that this density is T plus T-bar, and then you end up writing this in terms of the Virasoro generators: H_n equals L_n plus L-bar of minus n, minus c over 12 delta n 0. So basically, having access to these H_n is very close to having access to the L_n's.
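On the lattice, these Fourier modes are cheap to write down explicitly. Here is a small sketch (my own illustration, using the convention h_j = -(X_j X_{j+1} + Z_j) for the critical Ising chain; lowercase n is the mode number, capital N the number of sites), with two sanity checks: H_0 reproduces the plain Hamiltonian, and since each h_j is Hermitian, the adjoint of H_n is H_{-n}:

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def embed(N, ops):
    """Tensor product with the given single-site matrices (dict: site -> 2x2)."""
    mats = [ops.get(j, I2) for j in range(N)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def local_term(N, j):
    """Local energy density h_j = -(X_j X_{j+1} + Z_j), periodic chain."""
    return -(embed(N, {j: X, (j + 1) % N: X}) + embed(N, {j: Z}))

def fourier_mode(N, n):
    """Fourier mode of the energy density: H_n = sum_j e^{i j n 2 pi / N} h_j."""
    dim = 2**N
    Hn = np.zeros((dim, dim), dtype=complex)
    for j in range(N):
        Hn += np.exp(1j * 2 * np.pi * j * n / N) * local_term(N, j)
    return Hn

N = 6
H = sum(local_term(N, j) for j in range(N))   # the plain lattice Hamiltonian
# With n = 0 all phases are 1, so H_0 is just H, and Hermiticity of the
# local terms gives H_n^dagger = H_{-n}.
```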
Now, the good news is that L_n with positive n, when acting on states, lowers the energy, whereas L-bar of minus n, for positive n, will increase the energy. So if you act with H_n on some eigenstate, the result will be, of course, L_n on this state plus L-bar minus n on this state, maybe plus some identity contribution. But the point is that this part here has lower energy, and this part here has higher energy. So it doesn't really matter that you don't have independent access to L_n and L-bar minus n; you just have access to H_n, because when you act with H_n, you can quickly determine which part is which. You just look at whether the result of this action is a state with lower energy or with higher energy, you split the two contributions, and you know whether it was as if you had acted with L_n or with L-bar minus n. So we could make a big effort to build the momentum density operator, which corresponds to T minus T-bar, but we don't even need that today, because this combination, the one coming from T plus T-bar, will be enough for all the purposes of today. OK. So then we are very close, but it's late. Do these H_n's have an interesting algebra? It follows from the Virasoro one. Sure, but does it close on itself? Oh no, I don't think it closes. Let's see — no, because these two guys don't talk to each other, but this minus sign here will put a relative minus between the two. Yeah, I don't think they close. Another question, which connects with what Slava was asking: are these H_n's exactly the spectrum-generating operators, or are there small corrections? There are always finite-size corrections on the lattice — is that what you're asking? I mean, you construct some states on the lattice.
Are these H_n precisely the operators that link these states, or are there some — oh, you know what you're saying: there are finite-size corrections. There are finite-size corrections, and so it's important to go to large system sizes to see whether these finite-size corrections decrease or increase with system size — whether there is something serious happening there or it's just a finite-size effect. That's why it's important to go beyond exact diagonalization at some point. Good. So, OK, now I could state many things, but maybe I'll just try to move quickly through this. What's a primary state? OK, if we have a primary state, it turns out that it's a state that is annihilated by all the L_n's and L-bar_n's for n larger than zero, OK? That would be the characterization from the CFT side, and you can see that actually you only need to check n equals one and two; that is enough, the rest follows from the Virasoro algebra. So we would like to check that, and what we're going to do is — well, you can actually show that this is the same as putting the H's here, OK? So, this happens if, for n equal to plus minus one and plus minus two, the lower-energy part of the result is equal to zero, OK? What I'm saying is: you can take these well-understood conditions for what a primary state is and re-express them in terms of how states behave under the action of the H_n's, OK? And you need H_1, H_minus 1, H_2 and H_minus 2 — only four of these Fourier modes. Then you can just go to your states and ask: what happens, for instance, if I apply H_1 and H_2 to this guy? There is no other state at lower energy, so this could only be a primary. But you can ask the question for other states, and you can answer it, right? You act with H_1, H_2, H_minus 1 and H_minus 2.
On a state, there is nothing being created at lower energies, that's a primary, okay? If you do that only with the ones, that's a quasi-primary and so on, okay? So there are well-known and well-understood facts from CFT that translate into a criterion on the lattice to identify all these guys. And then once you identify the primaries, you can also identify all the descendants by just acting with these h n's. Now you look at the higher energy contribution. Good, so, and you can do weird things. I mean, on the lattice, you are not promised that you wouldn't know who are the descendants of the stress tensor, but actually, yeah, you can determine this to higher accuracy. All right, so I'm running out of, no, I'm not running out of time, I'm running out of paper. No, oh yeah, no, actually, I forgot the most important part. So, how, so we would like to go to larger system sizes and so how are we gonna do this? And that's the section four. I have to say it was fantastic to work with exact generalization because you really can already see lots of this. You can demonstrate all these ideas with exact generalization. So it's not, oh, you need matrix public states or any other fancy technique to start using this, right? So you can incorporate, if you are using exact generalization, you can incorporate all these ideas. But eventually, you want to go beyond exact generalization, so now I want to use a matrix product state and let me make some bold statement here. I'm gonna say that if exact generalization, depending on your laptop or you, I don't know, I'm gonna say we can go typically to 30 spins. Okay, 40. Yes, no, you are not smiling. Now you are. So here, we can easily go to 500 spins. Now this is going to be matrix product state in a challenging environment. So this is going to be matrix product states with periodic boundary conditions or in a periodic chain and that means that the cost of the algorithm is not the usual one, it's more expensive. 
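As a concrete illustration of the lattice Fourier modes and the energy-splitting trick, here is a minimal exact-diagonalization sketch (my own toy, not the authors' code) for a small critical transverse-field Ising chain. The choice of Hamiltonian density h_j below is an assumption, one convention among several:

```python
import numpy as np

# Toy setup: build the Fourier modes H_n of a Hamiltonian density h_j for
# the critical Ising chain, and split H_n|psi> into lower- and higher-energy
# pieces, i.e. the lattice stand-ins for L_n and Lbar_{-n}.
N = 8                                        # sites on a periodic chain
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_at(op, j):
    """Embed a single-site operator at site j (mod N) of the chain."""
    mats = [np.eye(2, dtype=complex)] * N
    mats[j % N] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def density(j):
    """One choice of Hamiltonian density: h_j = -X_j X_{j+1} - Z_j."""
    return -op_at(X, j) @ op_at(X, j + 1) - op_at(Z, j)

def H_mode(n):
    """Lattice Fourier mode H_n = sum_j exp(i n j 2 pi / N) h_j."""
    return sum(np.exp(1j * n * j * 2 * np.pi / N) * density(j) for j in range(N))

H = H_mode(0)                                # n = 0 recovers the Hamiltonian
E, V = np.linalg.eigh(H)

def split_by_energy(psi, E_ref, tol=1e-8):
    """Resolve a vector into pieces below / at / above a reference energy."""
    c = V.conj().T @ psi                     # expand in the energy eigenbasis
    lower = V @ (c * (E < E_ref - tol))
    same = V @ (c * (np.abs(E - E_ref) <= tol))
    upper = V @ (c * (E > E_ref + tol))
    return lower, same, upper

gs = V[:, 0]
lower, same, upper = split_by_energy(H_mode(1) @ gs, E[0])
print("lower-energy weight of H_1|gs>:", np.linalg.norm(lower))
```

Acting with H_{±1} and H_{±2} on a candidate state and checking whether the lower-energy piece vanishes, up to finite-size corrections, is exactly the primary-state criterion described above; for the ground state the lower piece vanishes trivially, since nothing sits below it.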
So let me first define the ansatz. Our wave function has all these many coefficients, which we will just write as a tensor network, as a matrix product state, with periodic boundary conditions. Let's first consider the ground state. So this is just a quick summary of what you need to do to go to 500 spins, OK? And we'll proceed sequentially: instead of taking the Hamiltonian and extracting all the eigenvectors at once, we start by building the ground state, and then we proceed more in a quantum-field-theory fashion, in which, once you have the ground state, you build excitations on top of it, local excitations on top, OK? So first we need the ground state, and the ground state will be described by this tensor network, where each tensor has dimensions chi by d by chi, with physical dimension d equal to two for spin one-half, for the Ising model. And the bond dimension chi, in the examples in the plots that I'll show, ranges between 24 and 36. This is small compared to the bond dimensions that one can nowadays reach with open boundary conditions, where you can push chi to 10,000, OK? That would be state-of-the-art calculations with open boundary conditions; I've heard of 40,000 and more, and that's very nice, but with periodic boundary conditions we will be using an algorithm with a higher cost. So let's see, let me do this carefully. Memory: how much does it take to store the ground state wave function? Well, we have to store one of these tensors, right? We have N copies of this tensor, but they are all the same, and somebody made the observation that you don't need to store them N times. So that gave us a speed-up. OK, that's a joke. So this is how the memory scales: chi squared times d, that's the number of complex coefficients we have to store in the computer, with chi of this order. So this is really a very small memory requirement.
And then time, and this is the bottleneck: the computational time scales as the fifth power of chi, times d to some power, OK? This has to be compared with MPS in open boundary conditions, where this would be chi to the third power. That third power is what allows you to go to very large bond dimensions; because we have the fifth power, we cannot go to the same huge bond dimensions. One question: if you use open boundary conditions, you don't have translational invariance anymore, so do you need to store more tensors? Yes. So open boundary conditions is worse for memory? It could be worse for memory and better for time. It is not better for time here, but in principle it could be. But then you will lose the well-defined momenta, right? This ansatz is guaranteed to be translation invariant at the level of the ansatz itself: the variational class is already translation invariant. If you lose that, you may regret it later when you look at excitations. All the excitations that we're going to get have well-defined momentum as well, and you want to build them from an exactly translation-invariant ground state. Can you use these H_n's to help you calculate the ground state? I mean, from the lattice H_n perspective, we know that they're supposed to annihilate the ground state. Right, that's a very good question. The attitude we take, so the answer is: I don't know, and we don't need it. The attitude we take is that we use well-understood techniques to find the low-energy spectrum, and then we use the H_n's to relate the states in the spectrum, not to find the spectrum. But indeed, once you have, say, the ground state, you could start acting with the H_n's and build descendants, right? The point is we don't need to do that, because we have alternative ways. So we'll just use the H_n's to relate eigenstates obtained using other methods, OK? Good. So, anyway, there were some technicalities. How do you do that relation?
Do you take your low-energy eigenstates that you've constructed, then apply the H_n's and check overlaps or something? Is that what you did? That's it, yeah, OK. And now I was giving you some hints on how we find the ground state. There is a variational ansatz; if you are not into variational ansatz, I'm just giving you a minimal technical description here. We use energy minimization: you can compute the expectation value of the energy in this ansatz and then start tweaking the variational parameters so as to lower the energy, and there are very powerful and well-understood techniques for this, well, well-understood, no, but they work. So we use gradient descent, OK? And once we have that, an important aspect is that we will define excited states in a somewhat tricky way; you can think of these as MPS Bloch states, OK? So now we have the ground state, and we're going to build the excited states by taking the same tensor everywhere as in the ground state, so we don't have to compute it again, but in one location, l, we put a new tensor, in just that one location, OK? And that's not yet the full excited-state ansatz, but I want you to first look at this and recognize that what we're doing is taking the ground state wave function and modifying it in only one location, OK? By modifying this tensor, we're not changing the wave function at just one point: there are correlations that spread around, so you could think of this as modifying the ground state everywhere, but with modifications that decay away from the place where you modified the ansatz, OK? So that's the basic ingredient, and then all we have to do is take linear combinations of this modified MPS. We take the same state and we translate it, so we are summing over l, right? We take a linear combination of modified ground states where we consider placing the modification here, or here, or here, or here.
That's what the sum is about, and we just put in a phase e^{i 2 pi p l / N}, and that defines your variational class for an excited state with momentum p, OK? It has the momentum p built in, so it's an exact eigenstate of the momentum operator, and the question is whether it's also an exact or approximate eigenstate of the energy, and that's what you determine, again, variationally. So you have this variational class, the ground state has zero momentum by construction, and you just look at how to lower the energy. For the excited states you choose a momentum sector, right, and within that momentum sector you optimize B so that you get the lowest-energy state, then the first excited state, and so on, OK? I think that's the high-level summary, that's all that can be said in the time available. Are there issues about missing certain excited states, ever? Very good. So, if this were a gapped system, this would give the single quasi-particle excited states, and you would be missing the scattering states, the states that correspond to two excitations colliding, OK? The surprise, and then the a posteriori understanding once we understand everything, blah, blah, blah, is that this is enough to capture all the excited states in a CFT, OK? And to understand this, well, the claim is that this works, and there is strong numerical evidence supporting this claim, but the full understanding, I don't know, I can argue. So I'm going to start arguing: this tensor is centered here, but in this wave function, this is a critical ground state, so correlations go all around the system, no problem. So it's not that you are only modifying the wave function at this point or within a finite neighborhood of it, OK? It's not that you have a correlation length and therefore this modification only affects the wave function within a correlation length.
You do not have a correlation length; the correlation length is as large as the system, OK? So that would be an argument for why this ansatz is more powerful than the single-excitation ansatz you would use in a gapped system. That's one argument. I guess I'm thinking of solitonic states, where the left and the right of B might want different tensors. Uniform, but, right, the question is: do you have quasi-particles in a CFT? I agree, I agree. So am I talking to someone who would say yes or no? Because if you say yes, there are, then I say, OK. So I cannot fight that with logic, but I can bring in some psychological pressure. OK, so, yeah. In this variational ansatz, for every momentum, do you just focus on one state? For every momentum, I focus on a tower of states. But variationally, don't you only get the lowest one? Right, very good, very good. So yeah, you can imagine that first you get the lowest one, and then you ask for the next state orthogonal to the lowest one. Just simple orthogonalization, and this works, you mean? This works. OK, because that's also, you might say, non-trivial, that it's going to work. Very good. Right, OK, so this works, this works, this works. What does it mean? I don't know. OK, it works. So the plot I showed you at the beginning was just to show you that we could get the right energies and momenta for lots of low-energy states. And if we want to be quantitative, the scaling dimensions are obtained from the energies, so we look at the accuracy of the energies. Here we also allow for an extrapolation in system size; this goes up to 228 sites. And if you ask me why 228, that's exactly the question I asked the student, but they decided 228 was their magic number. So there is nothing special about this number, I hope. And the bond dimension we increase up to 36, which is very, very modest.
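Going back to the momentum-superposition ansatz for a moment, it can be made concrete in a toy setting (my own sketch; the random tensors A and B stand in for optimized ones, which is an assumption, and N = 6 is of course far from 500 spins): on a tiny chain one can build the full state vector and check that the superposition is an exact eigenstate of translation.

```python
import numpy as np

# Excitation ansatz sketch: keep the ground-state tensor A on every site,
# replace it by a new tensor B at one site l, and take the superposition
#   |phi_p> = sum_l exp(2i pi p l / N) * |psi(A, ..., B at l, ..., A)>.
N, chi, d = 6, 3, 2
rng = np.random.default_rng(1)
A = rng.normal(size=(chi, d, chi)) + 1j * rng.normal(size=(chi, d, chi))
B = rng.normal(size=(chi, d, chi)) + 1j * rng.normal(size=(chi, d, chi))

def mps_vector(tensors):
    """Contract a periodic MPS (list of (chi, d, chi) tensors) to a vector."""
    R = tensors[0]
    for T in tensors[1:]:
        R = np.einsum('apb,bqc->apqc', R, T).reshape(chi, -1, chi)
    return np.einsum('apa->p', R)            # close the trace

def phi(p):
    """Momentum-p superposition of single-tensor replacements."""
    vec = np.zeros(d ** N, dtype=complex)
    for l in range(N):
        tensors = [A] * N
        tensors[l] = B
        vec += np.exp(2j * np.pi * p * l / N) * mps_vector(tensors)
    return vec

def translate(vec):
    """Cyclically shift every site of the chain by one."""
    return np.moveaxis(vec.reshape((d,) * N), 0, -1).reshape(-1)

p = 2
v = phi(p)
phase = np.vdot(v, translate(v)) / np.vdot(v, v)
print("translation eigenvalue:", phase)      # a pure phase exp(+-2i pi p/N)
```

The momentum is exact by construction, which is the point made in the talk: whether the state is also a good energy eigenstate is what the variational optimization of B decides.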
And you can get, well, I didn't explain how to compute the central charge, and I won't do it now, but you can get it. These are the exact results, and you can get five, six digits of accuracy. As you go up in energy, the accuracy drops, but you still get four digits, which is pretty decent. But I have a question on the extrapolation, because this is a finite-size extrapolation, but there is also a finite-chi error. Yes. And since there is no length scale in the problem, I expect the energy of all these states to have a one-over-chi correction. Right, we don't play that game. You're shooting from the other side: you should first extrapolate in chi and then do the finite-size extrapolation, right? That's been done there. The error is probably much bigger than that, because of the finite chi. I mean, the relative error, I don't think this is the one you're talking about. It's surprising that it's so small. It's a small central charge. I don't know where you are going with this. Oh, five minutes, OK. But then the number of states that you need to keep would grow exponentially with c, so c equal to one half is nice. Yeah. There are negative c's out there. No, so, indeed, the performance of this algorithm, as of any other algorithm, depends on the model you attack, and the Ising model is particularly nice: there are very small finite-size corrections, which are well understood; they come from some irrelevant term that has dimension four or something. If you try tougher models, then you will see a general decay of accuracy and so on. But that's what it is. So the challenge is: throw your best method at the Ising model and look at how it compares. And I say that because we come out nicely. I've been looking at different methods for a few years now, and these are at the top; they compare very well with the best available methods. And this is a laptop computation.
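The extrapolation step just discussed can be illustrated with a self-contained toy (my own sketch, not the paper's pipeline): for the critical Ising chain, an effective scaling dimension follows from the first gap at each size via Delta_eff(N) = N (E_1 - E_0) / (2 pi v), with velocity v = 2 in this normalization, and one extrapolates linearly in 1/N^2.

```python
import numpy as np

# Finite-size extrapolation of a scaling dimension for the critical Ising
# chain H = -sum_j (X_j X_{j+1} + Z_j) on a periodic chain.
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def ising_H(N):
    """Dense critical Ising Hamiltonian on N periodic sites."""
    def site(op, j):
        mats = [np.eye(2)] * N
        mats[j % N] = op
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out
    H = np.zeros((2 ** N, 2 ** N))
    for j in range(N):
        H -= site(X, j) @ site(X, j + 1) + site(Z, j)
    return H

sizes = [4, 6, 8]
delta_eff = []
for N in sizes:
    E = np.linalg.eigvalsh(ising_H(N))
    # Delta_eff(N) = N * gap / (2 pi v) with v = 2 in these units.
    delta_eff.append(N * (E[1] - E[0]) / (2 * np.pi * 2.0))

# Linear fit in 1/N^2: the intercept is the extrapolated scaling dimension.
slope, intercept = np.polyfit([1.0 / N ** 2 for N in sizes], delta_eff, 1)
print("extrapolated Delta:", intercept)      # close to 1/8, the sigma field
```

The intercept lands very close to the exact sigma-field dimension 1/8, showing how the extrapolation in 1/N^2 removes the leading finite-size correction that the talk attributes to an irrelevant operator.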
We haven't made any effort to parallelize the code or to use powerful machines. And again, we could go to 500 spins, so why did we stop where we did? So you can see all these accuracies: there is not just a qualitative understanding that we're getting the low energies correctly; it's actually highly accurate. Good. So, I had five minutes, five minutes ago? No, you have two minutes. Two minutes, OK. So I can tell you about an application, just flash it around. I prepared slides for that part, knowing that I would be in trouble with time, on what you can do with this beyond solving the Ising model yet again, which is always fun. So, an application: I'm going to use these techniques to study a spectral RG flow. By spectral RG flow, I mean that I'm going to look at the low-energy states of a spin chain on a circle as I increase the system size. And I'm going to start with a Hamiltonian that is close to the tricritical Ising CFT but has a relevant perturbation, so that as we increase the system size it will flow to the critical Ising model, OK? So I want to play a game where I see something like this, right? I flow from the UV, the tricritical Ising model, with some central charge larger than one half, down to the IR, with central charge equal to one half. But instead of doing the usual RG thing of having an effective Hamiltonian that changes with the scale, with running couplings and so on, I'm just going to take a fixed Hamiltonian and look at different system sizes for that same Hamiltonian. So what happens is that at small system sizes, the low-energy spectrum of the Hamiltonian looks like this, OK? This is exact diagonalization for 24 sites, OK?
Using colors: we use all these techniques to identify which states are the primaries and which are the descendants, and then we can compare with the tricritical Ising model. And we see, OK, this is actually not exactly the same spectrum, but when we try to identify primaries, it works: you can reconstruct the conformal towers. So you can recognize the tricritical Ising CFT on this spin chain. And I have to say there is a parameter here, gamma equal to 10, OK, for the initial Hamiltonian. This is a very special and nice Hamiltonian that we use for this application: it is the Ising Hamiltonian plus a perturbation, an extra term, where the coupling of this term has to take a particular value, roughly 247, and it's not an exact value, for this to become the tricritical Ising CFT, OK? This model is nice because the perturbation preserves the Z2 symmetry of the Ising model, but it is also Kramers-Wannier self-dual, like the critical Ising model, and this locks things in: all the perturbations that you can have are irrelevant with respect to the critical Ising model, OK? So this is an irrelevant perturbation with respect to the critical Ising model, and therefore this flows back to the Ising model, OK, at long distances. But it turns out that if we tune the coupling to that special value, it becomes the tricritical Ising model. So you're guaranteed to flow either to a CFT or to a gapped phase? It's guaranteed to flow back to the critical Ising model, because in the critical Ising model, at least perturbatively, you have two possible relevant terms: the spin, which is odd under the Z2 symmetry, so the Z2 symmetry kills that possibility, and the energy density, the epsilon, which is odd under Kramers-Wannier duality, and since this is self-dual, you also kill that, OK?
So you are guaranteed to, you have killed all the possible relevant perturbations of the critical Ising model, and so whatever you add, this perturbation, is irrelevant with respect to the critical Ising model. Sorry, I thought that actually the (1,3) field, the relevant deformation, depending on the sign of the perturbation, can flow either to an IR CFT or to a massive phase. Sorry, which perturbation of what? So I think it's exactly the perturbation we're looking at; it was worked out by Zamolodchikov that, depending on the sign of the perturbation, you can actually flow either to a massive phase or to a conformal fixed point. Not from the Ising model, though; not if you look at this as a perturbation of the Ising model. Maybe the question is: what if gamma is bigger than 247? That's right. Then you have, yes, yes, yes. OK, good. So then what we can do is study the same Hamiltonian, right, for gamma equal to 10. Maybe it's surprising, maybe it's remarkable, that so far away from 247 it still remembers, for small system sizes, the tricritical Ising model, OK? But it does, and this is the numerical evidence for it. That's the exact spectrum; this is the spectrum on the lattice for 24 sites, but when you try to relate these states using the H_n's, they still know about the tricritical Ising CFT. And then you take the same microscopic Hamiltonian and you put it on a larger system size, larger, larger, larger, and by the time you get to 128 sites, you get this spectrum, OK? And this low-energy spectrum, again, you apply the same H_n's, now on more sites, OK, but built from the same microscopic Hamiltonian, and now they have a different identification: they give you conformal towers corresponding to the critical Ising model. And in between, what you can do is, well, let me show you, let me tell you what this is.
Here we select a few of the energy eigenvalues as a function of system size, plotted against one over N squared. So this end is a small system size, and in this direction you increase the system size, OK? And you follow the low-energy spectrum on the circle, and you see that at one extreme these energy levels correspond to scaling operators of the tricritical Ising model, and at the other extreme the continuation of these eigenvalues, or eigenvectors, corresponds to a different set of scaling operators of the critical Ising model. And so we have a nice way of relating scaling operators of one CFT to those of another, OK? And this is, well, it's fully non-perturbative, blah, blah, blah. OK? So that's it. Thank you.