So, well, thanks a lot, Slava, for the invitation to this wonderful workshop, at least up till now. I will indeed be talking about, well, let me immediately write it down, matrix product states. Tensor networks was the title in the program, I think, for the Schwinger model. So, matrix product states, as Frank Pollmann already discussed yesterday, are the one-dimensional tensor network states. I must say there is a bigger picture behind these tensor network states: in the end you would want, and obviously this is a goal that is not attained yet and not immediately attainable, to apply this tensor network framework to QCD, to get new insights into heavy-ion collisions, for instance, where you have real-time dynamics and finite fermionic densities. But that is QCD in 3+1 dimensions, and this is 1+1 dimensions, so this is much more modest, let's say a first step, hopefully, in that direction.

Okay, so the work I will present is work with my student Boye Buyens, also with Jutho Haegeman and Frank Verstraete, all three from Ghent, and we also did some work with Florian Hebenstreit and Simone Montangero, from Heidelberg and Ulm. The plan of the talk is that I will first give a bit of an introduction, adding a bit and repeating a bit of what Frank Pollmann already said yesterday, but putting it slightly differently. Then I will introduce the Schwinger model and go into its specifics: the Schwinger model is one-dimensional quantum electrodynamics, so it is a QFT, not by itself a lattice model, and it is a gauge theory, and these are the two specific features that enter the matrix product state simulations. After that I will go through a sample of our results on the slides, because we have many different results.

Okay, let me start here, with just a general expression for a quantum many-body state. As Frank already said, you express the state in a local basis, my spins here. These spins, let me give them a local dimension q, so this index i for one spin runs from one to q; for spin-1/2 degrees of freedom, qubits, q is two. So this is just the expression of a general state in this local basis, and you can write this state graphically in the following form: it is a rank-N tensor with N external legs, each leg with a range from one to q. What a matrix product state does is basically give you an approximation of this tensor in terms of local tensors, where these local tensors now have three legs. You have the physical leg, with the index indicating the site dependence, so let me put this index here, and then you have two new degrees of freedom if you want, two extra legs, an alpha index and a beta index, and let me also put an index n here. As I said, the range of the physical index is q, and the range of the virtual indices is called the bond dimension: we will take alpha to run from one to D, so you have a range of D on these virtual indices. I am taking a fixed bond dimension here. And at the boundary, let me write it in this form, which is now a vector with only a virtual index.
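To make the contraction concrete, here is a minimal NumPy sketch (my own toy code with made-up dimensions, not the code behind the actual simulations) that builds the full coefficient tensor of a small chain from a set of three-legged MPS tensors and two boundary vectors:

```python
# A minimal sketch: contract a chain of MPS tensors A[n] with shape (D, q, D)
# into the full coefficient tensor C[i1, ..., iN], closing the virtual legs
# with boundary vectors.
import numpy as np

q, D, N = 2, 3, 6                                # physical dim, bond dim, sites
A = [np.random.rand(D, q, D) for _ in range(N)]  # toy local tensors
vL, vR = np.random.rand(D), np.random.rand(D)    # boundary (virtual) vectors

# Start from the left boundary vector and absorb one tensor at a time;
# after absorbing n sites, psi has shape (q,)*n + (D,).
psi = vL
for n in range(N):
    psi = np.tensordot(psi, A[n], axes=([-1], [0]))   # contract virtual legs
psi = np.tensordot(psi, vR, axes=([-1], [0]))          # close with the right vector

print(psi.shape)   # (q, q, ..., q): one leg of range q per site
```

The point of the ansatz is visible in the parameter count: the N * q * D * D + 2 * D numbers on the right-hand side parametrize all q**N coefficients of the state.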
And then you see where the name matrix product state comes from, given this notation. Matrix product state refers to taking these virtual indices as your matrix indices, and then you see that this expression is indeed something like this: a virtual vector on the left, and then my tensors, taking these as matrices indeed, multiplied together as a product of matrices.

All right. So obviously it is very simple to write this down, but when is it actually true? When will this be a good approximation to your quantum many-body state? It turns out that what you need to look at is the entanglement structure. Let us just take a bipartition: let me call this part system A (I won't draw all the sites) and this part system B. You immediately see that if you cut the virtual index here, this amounts to writing your state as a sum over alpha, the index I am cutting, of states phi_alpha^A living on this part of the system, times states with the same alpha index living on the second part of the system. Now, these are not orthonormal states, but you can go to the Schmidt decomposition, where they do become orthonormal, and you get something like this: these are my Schmidt coefficients, and let me write tildes for the orthonormal states. What you know is that, since this index is D-dimensional, you have at most D different states here, so at most D non-zero Schmidt coefficients.

And you can turn this into a statement about the entropy between the two blocks. You can define your Rényi entropies, with this being the density matrix of the A part of the system, and plugging in the Schmidt coefficients you immediately find that this entropy is always bounded by the logarithm of the number of non-zero Schmidt coefficients, so by log D. So these matrix product states have what is called an area law for the entanglement entropy. The area law in general dimensions is the statement that if you take a bipartition of your system and look at the entanglement entropy between the two parts, this entropy scales not like the volume of the smaller subsystem, which it would do for generic states, but like the area of the boundary between them. In this one-dimensional case the boundary is just a point, and you indeed find that the entanglement entropy, regardless of the sizes of A and B, is bounded by this constant.

Now, the statement of area laws for actual states of interest, for instance ground states of gapped Hamiltonians, which is what we will be looking at, is in general dimensions a conjecture. It is strongly believed to be true, but it has not been proven in full generality that ground states of gapped local Hamiltonians obey an area law in arbitrary dimensions. In one spatial dimension it has been proven; the first proof was by Hastings, already in 2007. The proof is an area law for the von Neumann entropy, but as a corollary you also get an area law for Rényi entropies with index n smaller than one, for some values of n smaller than one. And this you can use to prove that for ground states of gapped Hamiltonians, an MPS will be an efficient approximation.
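Here is a small numerical illustration of that bound (again my own toy code, with random tensors rather than a physical ground state): for a state built from an MPS with bond dimension D, any cut has at most D non-zero Schmidt coefficients, so the entanglement entropy across the cut can never exceed log D.

```python
# Toy check: a state built from an MPS with bond dimension D has at most D
# non-zero Schmidt coefficients across any cut, so S <= log(D).
import numpy as np

q, D, N = 2, 3, 8
A = [np.random.rand(D, q, D) for _ in range(N)]
vL, vR = np.random.rand(D), np.random.rand(D)

psi = vL
for n in range(N):
    psi = np.tensordot(psi, A[n], axes=([-1], [0]))
psi = np.tensordot(psi, vR, axes=([-1], [0]))
psi /= np.linalg.norm(psi)

cut = N // 2                                    # bipartition: first half vs rest
M = psi.reshape(q**cut, q**(N - cut))           # coefficient matrix for the cut
s = np.linalg.svd(M, compute_uv=False)          # Schmidt coefficients
p = s[s > 1e-12]**2
S = -np.sum(p * np.log(p))                      # von Neumann entanglement entropy
print(len(p), "<=", D, "   S =", S, "<= log D =", np.log(D))
```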
Efficient in that case basically means that your bond dimension only has to scale polynomially in the system size if you want a small error on all possible observables you might put there, so also very non-local observables. Obviously, if you are interested in the thermodynamic limit, polynomial in the number of sites N is still a very bad scaling. But actually, more recently, in 2015, Huang proved, also by looking at the Rényi entropies of a gapped system, that if you are interested in local observables, say observables living on two, three, four sites, then also in the thermodynamic limit you are fine with a bond dimension which does not grow with the system size.

What if I am interested in the two-point function of two operators separated by a large number of sites? Well, the thing is that your system is gapped and you have an exponential decay of correlations, so in the end, yeah, you are not... But you might be interested, for instance, in computing a string, which is the... Is that for expectation values in the ground state? Yes, yes. What about non-gapped systems? For non-gapped systems the statements are less clear, but in practice, well, I will get to that now, at least indirectly. The general philosophy of tensor networks is that you try to capture the entanglement structure of your state. This is also the philosophy in higher dimensions, where there are fewer precise statements: you try to capture the general entanglement structure of your state, and then you will have a good approximation.

Let me immediately apply this reasoning to the Schwinger model. The Schwinger model is, as I said, QED in 1+1 dimensions. In 1+1 dimensions the interaction is relevant, so in the UV it is a CFT, I would say. From that you get a prediction for the entanglement entropy between A and B, and because it is one Dirac fermion, it is actually an equality I can write here: this should be one sixth times the logarithm of the correlation length over the UV cutoff, where I am putting the lattice spacing as the cutoff. And obviously this could be two times or ten times the correlation length; what I know is that it should scale like this.

Applying this reasoning, because we are interested in simulating the Schwinger model as a QFT, and this is, well, not directly answering your question, but indirectly: in the continuum limit you want to take the lattice spacing to zero while keeping the correlation length in physical units fixed. So the correlation length in lattice units is effectively diverging as you go to this continuum critical point. And for actual critical theories, where the gap really vanishes, well, in some sense you could say that the continuum limit is already a kind of critical limit. We are keeping the physical mass fixed, yes. And I think that for true CFTs you can also approximate them with matrix product states; the idea will be similar, you push your bond dimension, and this indeed translates, as you see, into a bond dimension that has to grow polynomially with the cutoff.
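To spell out that last step (a rough back-of-the-envelope estimate of my own, not a rigorous bound, which would involve the Rényi entropies rather than the von Neumann entropy): combining the entropy bound for an MPS with the scaling of the entropy towards the continuum limit gives

$$ S_A \le \log D, \qquad S_A \simeq \frac{c}{6}\,\log\frac{\xi}{a} \quad\Longrightarrow\quad D \gtrsim \left(\frac{\xi}{a}\right)^{c/6}, $$

so with c = 1 for the single Dirac fermion, the bond dimension only needs to grow like a modest power of the inverse lattice spacing at fixed physical correlation length.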
But you will push the bond dimension to approach, let us say, the strict CFT point. Does that make sense? And obviously, for CFTs there are the MERAs, the multi-scale entanglement renormalization ansatz states, which are precisely designed to capture the different, logarithmic behaviour of the entropy. So that is a different approach.

Okay, so let me just give you a flavour of the type of computations you need. Let us say I have a Hamiltonian with local terms that have support on two sites. One of these terms, in this type of notation, I can express like this: it takes in two physical indices and spits out two other physical indices. Now suppose you are interested in calculating its expectation value; if you do some variational optimization, this is definitely something you would want to compute. So how do you do it? Let me first look at the operator acting on the ket. I have the general expression for the ket here, with sites n and n+1, and the last two sites here; this is my psi. Now this operator acts on these two sites, which gives me something like this in the graphical notation. Then you want to compute the expectation value, so you need the bra, with complex conjugated coefficients; let me use the convention that flipping these tensors means complex conjugation. And then you immediately see that you get this: closed lines, as always, mean contraction, but now these physical indices here are also contracted.

So now you want to compute this. Let me, for instance, start from the right-hand side. You immediately see that what you need to do is repeatedly apply this operator, which is called the transfer operator; it is very much like in statistical physics. At first it acts on this object, which you can think of in different ways, but the way I have written it here, it is a state in a Hilbert space of two virtual spins, each of local dimension D; let me write it as a general state in that form. So this is the state I start from, and I get a different state here, and you keep applying the transfer operator. As I said, we will be interested in working directly in the thermodynamic limit, and in that case finding this vector amounts to finding the quote-unquote ground state of your transfer operator, its dominant right eigenvector. Similarly, on the left side we get this, the dominant left eigenvector. And this shows you explicitly that these tensor network methods, in particular the matrix product states I am showing you now, have a dimensional reduction built in: you start from a d-dimensional system, in our case a one-dimensional system, and you map the complexity of all your computations to the complexity of computations on a (d-1)-dimensional system, here effectively a zero-dimensional system of just two virtual sites, so to speak.
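As a concrete toy version of this dimensional reduction (my own NumPy sketch, with a random uniform MPS tensor rather than an optimized one, and a single-site observable instead of a two-site Hamiltonian term to keep it short): build the transfer operator, find its dominant left and right eigenvectors, and evaluate a local expectation value directly in the thermodynamic limit.

```python
# Transfer-operator sketch for a uniform MPS tensor A of shape (D, q, D).
import numpy as np

q, D = 2, 4
A = np.random.rand(D, q, D) + 1j * np.random.rand(D, q, D)

# Transfer operator E = sum_i A_i (x) conj(A_i), viewed as a D^2 x D^2 matrix.
E = np.einsum('aib,cid->acbd', A, A.conj()).reshape(D * D, D * D)

# Dominant right and left eigenvectors (the "ground state" of the transfer operator).
vals, vecs = np.linalg.eig(E)
order = np.argsort(-np.abs(vals))
lam0 = vals[order[0]]
r = vecs[:, order[0]]
valsL, vecsL = np.linalg.eig(E.T)
l = vecsL[:, np.argsort(-np.abs(valsL))[0]]

# Expectation value of a single-site operator O (here a Pauli-z), per site.
O = np.diag([1.0, -1.0])
EO = np.einsum('aib,ij,cjd->acbd', A, O, A.conj()).reshape(D * D, D * D)
print("<O> =", ((l @ EO @ r) / (lam0 * (l @ r))).real)

# The subleading eigenvalue controls the exponential decay of connected
# correlations: xi = -1 / log|lambda_1 / lambda_0| in lattice units.
lam1 = vals[order[1]]
print("correlation length ~", -1.0 / np.log(abs(lam1 / lam0)))
```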
So, well, maybe one more thing on this transfer operator. As I said, most of the time we are working directly in the thermodynamic limit, so it is really one operator for the whole system, and actually all the information about your state is contained in it. You can, for instance, look at its spectrum. Call lambda-naught the largest eigenvalue; this one will be positive and real, while the other eigenvalues can be complex, but I am ordering them according to their modulus. And what you immediately find, for instance, if you calculate a two-point function, putting an operator here and an operator there and computing the connected part of the expectation value: the dominant right eigenvector drops out of the connected part, and if the two operators are separated by n sites, the connected part scales like lambda-one over lambda-naught to the power n, which indeed shows you the exponential decay of correlations.

Coming back to the first question, where you looked at the expectation value of the Hamiltonian: it seems important that it is a local Hamiltonian. Yes, in general, yes. There are ways to get around that; Mari Carmen, I guess, will show us some calculations with non-local Hamiltonians on finite systems, and even on infinite systems there are ways to work with non-local Hamiltonians as well, but here I am showing you local Hamiltonians. No, no, I am also fine with local Hamiltonians. What if your eigenvalues are degenerate and you get a non-exponential falloff? Yeah, well, in the generic case you will not have a degeneracy. But the generic case is not the critical case. Yes, and this is actually why I first gave you the entanglement entropy argument before showing you this. Because this would suggest that even with bond dimension two you could tune your transfer operator such that the gap between lambda-naught and lambda-one becomes very, very small, and you might go to very, very large correlation lengths. But although you would indeed have a state with a very large correlation length, it would not be a good approximation to an actual ground state of a system with a large correlation length. You really need to look at the entanglement entropy for that.

Okay. So let me now introduce the Schwinger model. As I said, the Schwinger model is 1+1 dimensional quantum electrodynamics. The Lagrangian density, I should say, is the usual one: the Maxwell term for my gauge field, built from the field strength tensor, with the indices running over the two dimensions we are working in, and then the Dirac kinetic term. The Dirac spinor in two dimensions has two components; let me write them in the Dirac representation. I will identify the first component with the positron annihilation operator in the non-relativistic limit, and the second component with the electron creation operator in the non-relativistic limit. The gauge coupling is g, of course, and then we have a mass term in the usual way.

Okay, so this is the theory. Why is it an interesting model? In some respects, and I stress some respects, it is a toy model for QCD, because you have confinement, actually already at the perturbative level.
If you compute the tree-level potential between two static charges, you get a potential which goes like g squared times x, with x the distance between the two charges. So you indeed have a linearly rising potential, perturbative confinement. This is at the perturbative level, and it also holds at the non-perturbative level. Actually, if you want to talk about strong and weak coupling, you should look at g over m, because, as I said, the theory is super-renormalizable: the mass dimension of g in 1+1 dimensions is one, so this ratio is the dimensionless variable to look at. Weak coupling then also amounts to the non-relativistic limit, because weak coupling is the same as having a large mass. So at weak coupling, when this ratio is small, we have the regular Feynman perturbation theory that we can use, for instance to show that we indeed have this confinement. At strong coupling, Coleman, I think in the seventies, bosonized the theory, and you can actually show that the Schwinger model is equivalent to a sine-Gordon theory, but with an extra mass term. In the strong coupling limit the sine-Gordon potential disappears, and so exactly in this limit, g to infinity, you basically have a free theory of one scalar particle: one meson, if you want, one free bound state of an electron and a positron. This actually also makes the model attractive for testing your matrix product state simulations, because you can test them from both sides, against results from a weak coupling expansion and against results from a strong coupling expansion.

Okay. Now obviously it is a gauge theory, so this Lagrangian has this local U(1) gauge symmetry; I hope I have the signs right. And we want to do matrix product state simulations, so first we actually want to go to a Hamiltonian, because we want to do a Hamiltonian simulation, not a path integral simulation. What we do effectively amounts to taking the A_0 = 0 gauge, I guess it is called the timelike axial or temporal gauge. Taking A_0 to zero, you can immediately write down a Hamiltonian density: you quantize the theory in terms of the A_1 field, because A_0 disappears, so for the bosonic sector you only have A_1 left, and its canonically conjugate field is its time derivative, which is also the electric field. I am not writing down this Hamiltonian here because I will not be using it. But, as you know, if you impose this as a gauge condition, you still need to impose the equation that follows from varying the action with respect to A_0: this should be zero. And since A_0 is now gone, what you need to do is impose this as a constraint on your physical states. Your Hamiltonian lives in a Hilbert space which is way larger, and only the states that fulfill this condition are truly physical. This is actually nothing but Gauss's law. I am kind of using the notation as if we were in three dimensions, but to be sure, we are working in one spatial dimension. So this is really Gauss's law, where you have a charge density. And you can also show that this operator, both terms of it, generates the gauge transformations which are time-independent.
So it is really the requirement that my states are invariant under time-independent gauge transformations. If you come from the QFT side, you typically do not want to do this, because you explicitly break Lorentz invariance; you can show that you still have Lorentz invariance, but only on this physical subspace. You would also typically use something like BRST quantization or Gupta-Bleuler. There you also have a physical subspace, but in that case the full Hilbert space is kind of sick, in the sense that you have negative-norm states. That is not the case here: here the full Hilbert space is perfectly fine, it is just that if you want to talk about this particular theory, you need to restrict yourself to the physical subspace. And actually you know what this full Hilbert space is: it simply also contains states that violate my Gauss law, and a violation of Gauss's law you can simply think of as having external point charges sitting there.

And you do not want to integrate out A_1 completely, because this would give rise to non-local terms? Well, it depends; you can deal with that non-local Hamiltonian. As Mari Carmen will show, that is indeed one approach you can take: completely integrating out the gauge degrees of freedom, which gives you a non-local Hamiltonian, but you can deal with it. This is not what we did, though. We wanted to keep a local Hamiltonian, and also to get a general idea of how this gauge invariance, which we have to impose on our matrix product states, is realized in the matrices.

Okay. So I did not write it down, but this is still the Hamiltonian of a QFT. Kogut and Susskind showed at the end of the seventies how you can discretize such a Hamiltonian. What Kogut and Susskind do is discretize the fermionic operators, but, and this is important, you put the positron operators on the even sites, let us say, and the electron operators on the odd sites. So you have a staggered formulation, and, specifically in one spatial dimension, with this staggered formulation you do not have any fermion doublers: the Dirac fermion has the proper dispersion relation in the low-energy limit, and you do not have any unwanted extra low-energy degrees of freedom. Does that generalize to higher dimensions? No, no, this is really specific to one dimension. Okay, so the way I am drawing them, these are still fermionic operators, so they become lattice creation and annihilation operators with the corresponding anticommutation relations. What you do then is the Jordan-Wigner trick, I hope I get it right, where you replace your fermionic creation and annihilation operators with spin operators: you basically replace each fermionic operator with the Pauli ladder operator, but with a string attached to it, in order to give your operator the right statistics.
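To show what that string does, here is a small self-contained check (my own toy code, with my own sign conventions, not anything from the talk): spin ladder operators dressed with a string of sigma-z's on all preceding sites reproduce the canonical fermionic anticommutation relations.

```python
# Jordan-Wigner on a few sites: c_n = (Z_0 ... Z_{n-1}) sigma^-_n satisfies
# the fermionic anticommutation relations {c_m, c_n^dagger} = delta_mn.
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z  = np.diag([1.0, -1.0])
sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # sigma^- (ladder-down operator)

def c(n, N):
    """Annihilation operator on site n of an N-site chain: Z-string, then sigma^-."""
    ops = [Z] * n + [sm] + [I2] * (N - n - 1)
    return reduce(np.kron, ops)

N = 4
for m in range(N):
    for n in range(N):
        anti = c(m, N) @ c(n, N).conj().T + c(n, N).conj().T @ c(m, N)
        expected = np.eye(2**N) if m == n else np.zeros((2**N, 2**N))
        assert np.allclose(anti, expected)
print("Jordan-Wigner operators satisfy {c_m, c_n^dagger} = delta_mn")
```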
In any case, if you do this, and I think I will be able to fit my Hamiltonian here, you will get a Hamiltonian; let me first write down the mass term, so you get a Hamiltonian with a staggered mass term. Where do you actually need this Jordan-Wigner? You have a nice Hamiltonian in terms of these c and c-daggers, why do you...? So, you don't. You can also write down matrix product states for fermionic lattice systems directly, but for the formalism, well, we just used the Jordan-Wigner trick, because then we could use the, let us say, traditional matrix product state machinery for spins.

So this is my mass term. You get this sigma-z, just a z operator: your fermions have now become spin-1/2 degrees of freedom living on the sites, and you get this sigma-z operator at site n, and you see that there is a different sign for even and odd sites. This is due to the staggered formulation we are using, where again this is the positron annihilation operator on the even sites, but this is the electron creation operator on the odd sites.

Okay, then we have the kinetic term. This is like the most important formula you have written so far and you are squeezing it into this tiny space! Should I rewrite it? Is it too small? You mean this does not look much like a sigma? Okay, let me indeed rewrite it. So, keeping the order we had in the Lagrangian density, let me first rewrite the mass term I wrote down, this sigma-z at site n. That is a sigma, no? It is for me; I do not know if you feed this into a neural network. Right? Okay. Then we had the kinetic term: the sigma-plus ladder operator at site n, the lowering operator at site n+1, and in between I write this e to the i theta, plus the Hermitian conjugate. You see that this amounts to the kinetic term including the gauge field: this e to the i theta is the gauge field. So that is the link, right? Indeed, indeed: if this is site n and this is site n+1, you have to think of the gauge field as living on the link in between. Does it have an index? It has an index n, indeed.

And let me, wow, indeed take care of my time here. Okay, so this gauge field: we are using compact QED, so theta goes from 0 to 2 pi, and the local Hilbert space dimension of this link is therefore already infinite. In order to use matrix product states you will need to truncate this local Hilbert space, and the truncation is best done in the electric field basis. So here I am writing down the electric field: you can think of this L operator as the dimensionless electric field, again with an index n, so we have an electric field operator on every link. The commutator of L with e to the i theta gives back e to the i theta, so e to the i theta is like the ladder operator for the electric field values, and the spectrum of the electric field operator consists of all the integer values. And if we write the wave function as a function of theta, you can ask how L acts on this function: L generates shifts, so e to the i phi L acting on the wave function shifts its argument theta to theta plus phi.
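Putting the three pieces together, here is a toy exact-diagonalization sketch (my own code; the signs, orderings and overall constants are my guesses at a standard convention and are not meant to reproduce the talk's exact Hamiltonian) of this truncated Kogut-Susskind spin formulation: a staggered mass term, a gauged hopping term sigma-plus e^{i theta} sigma-minus, and an electric energy L squared, with each link truncated to |L| <= Lmax.

```python
# Toy truncated Kogut-Susskind Schwinger Hamiltonian on a tiny open chain.
import numpy as np
from functools import reduce

N, Lmax = 4, 1                 # staggered sites, electric-field cutoff
m, g, a = 0.5, 1.0, 1.0        # fermion mass, coupling, lattice spacing (toy values)

# Single-site / single-link operators.
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma^+
sm = sp.T                                  # sigma^-
d = 2 * Lmax + 1
L = np.diag(np.arange(-Lmax, Lmax + 1).astype(float))   # electric field values
U = np.diag(np.ones(d - 1), -1)            # e^{i theta}: raises L by one (truncated)

# Factor ordering: [site0, link0, site1, link1, ..., site_{N-1}].
factors = []
for n in range(N):
    factors.append(2)
    if n < N - 1:
        factors.append(d)

def embed(ops):
    """Kron together one operator per factor (identity wherever none is given)."""
    full = [np.eye(dim) if ops.get(k) is None else ops[k]
            for k, dim in enumerate(factors)]
    return reduce(np.kron, full)

site = lambda n: 2 * n         # factor index of spin n
link = lambda n: 2 * n + 1     # factor index of the link between n and n+1

dim = int(np.prod(factors))
H = np.zeros((dim, dim), dtype=complex)
for n in range(N - 1):         # electric energy and gauged hopping
    H += 0.5 * g**2 * a * embed({link(n): L @ L})
    hop = embed({site(n): sp, link(n): U, site(n + 1): sm})
    H += (1.0 / (2 * a)) * (hop + hop.conj().T)
for n in range(N):             # staggered mass term (occupation with alternating sign)
    H += m * (-1)**n * embed({site(n): 0.5 * (sz + I2)})

print("Hermitian:", np.allclose(H, H.conj().T),
      "  ground-state energy:", np.linalg.eigvalsh(H)[0])
```

In a real simulation you would of course never build the full matrix; this is only meant to make the operator content of the discretized Hamiltonian explicit.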
So it is like a momentum operator, yes. Okay, let me just get the factors right, because if this was the most important formula, I should get it right: there is a lattice spacing here, a factor of one over two times the lattice spacing there, and no lattice spacing here. And as I said, we simulate this model going to the a to zero limit.

So what should I do now, because I see I have seven more minutes left. Then let me just say that the additional ingredient, in order to simulate the system and indeed focus on the gauge-invariant subspace, is the Gauss constraint. In this discretized form it becomes: L on the link to the right minus L on the link to the left, minus my charge at the site, has to be zero on physical states. This is basically telling me that if there is no charge at the site, the electric field on this link has to be equal to the electric field on that link; if there is a plus charge it jumps by plus one, if there is a minus charge by minus one. I will not show you the details, because I am running out of time, but you can imagine that, because it is a local condition, you can very naturally encode it in this structure here, and this is precisely what we did.

So let me now go to some of the slides. What you see there is the Schmidt decomposition, now for a general state, so not for a matrix product state but for a general state on this physical subspace. This p refers to the electric field eigenvalue: if I have an electric field p here, then I know what the electric field further along has to be, given the charges in between. So you get this decomposition of your Schmidt spectrum, of your entanglement, into p superselection sectors, with p referring to the electric field at the cut. And for the gauge-invariant matrix product states, which I did not show you explicitly, it turns out that what you get are matrix product states with a bond dimension per p-sector, and the name of the game is then to distribute your bond dimension over the different p-sectors in order to successfully capture the state. We had to do this manually. So we distribute our bond dimensions; here we effectively truncate the electric field at 3, so the sectors are 0, 1, 2, 3, minus 1, minus 2, minus 3. You do this, then you run your simulation, in this case an imaginary time evolution using TDVP, and you end up with a distribution of Schmidt coefficients. This is a log plot, and you see that they are concentrated around electric field zero, you should look at this curve, while at electric field 3 the largest Schmidt value is already almost ten orders of magnitude smaller than the largest one at electric field zero.

Why is there a slight asymmetry between the positive and the negative values? Yes, because of this staggered formulation. Okay, and what did you do then on the links, did you truncate the possible values, is this still infinite, or did you truncate it? We effectively truncate our bond dimensions, basically assigning bond dimension zero to those sectors, which is the same as truncating this Hilbert space: putting the bond dimension for that sector to zero.
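To make "a bond dimension per p-sector" a bit more tangible, here is a heavily simplified sketch (my own, with made-up sector sizes): the virtual index of a gauge-invariant MPS tensor carries an electric-field label p plus a degeneracy index, and Gauss's law only allows blocks in which the right label equals the left label shifted by the physical charge, so the tensor is block-sparse.

```python
# Block structure of a gauge-invariant MPS tensor: one bond dimension per
# electric-field sector p, and non-zero blocks only where Gauss's law allows.
import numpy as np

D = {-2: 2, -1: 6, 0: 10, 1: 6, 2: 2}    # bond dimension assigned to each p-sector (made up)
charges = [0, +1, -1]                     # possible physical charges on the effective site

# Non-zero blocks A[(p, q)] with shape (D[p], D[p + q]); everything else is zero.
A = {}
for p in D:
    for q in charges:
        if p + q in D:
            A[(p, q)] = np.random.rand(D[p], D[p + q])

total_bond_dim = sum(D.values())
nonzero = sum(block.size for block in A.values())
print("total bond dimension:", total_bond_dim)
print("non-zero parameters:", nonzero, "out of",
      total_bond_dim**2 * len(charges), "for a dense tensor")
```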
So do you have a lattice model where each site is finite-dimensional, or a lattice model where half of the sites are infinite-dimensional? I mean, they are infinite-dimensional, but we just truncate, let us say, our Hamiltonian: for each link we truncate the possible electric field values. And there is a tensor for each fermionic site, and then the bond index takes care of the gauge degrees of freedom directly, there is no separate tensor attached to the link in your MPS? Well, there are different ways to do it, but for these simulations we block, and actually I should point to my slide here, we block a fermion site, or spin site, and one link into one effective site with one MPS tensor. The physical index then runs over the two values of the spin, or electron, and the several values of the electric field. How many values? Well, like I showed you, in this case seven values. But I thought this was the Schmidt decomposition? Yes, this is the Schmidt decomposition. But that tells you about the bond index, not about the physical index. So this tells us about the physical electric field: this is just the general statement that if you do a Schmidt decomposition for a gauge theory, for gauge-invariant states obeying Gauss's law, you always get this decomposition, where these are the electric field values. And then, in the way we write down our matrix product states, this physical electric field, if you want, translates into a virtual label, and for these virtual labels, per label, so per electric field value, we can assign a bond dimension. That is the way it works; because I skipped this, it was hard to guess. And when you do this truncation, is the gauge invariance still preserved in some form? Yes, it is perfectly gauge invariant. Interesting.

So I will not go into the nicest results, which are the real-time simulations we did, because it would do injustice to the beauty of those simulations to present them in two minutes, I think. But let me go a bit deeper into this gauge invariance, because you were asking about it. In the end it effectively amounts to having matrices in your matrix product states which are very sparse, with many, many zeros; you can really look at it in this way. And then you can ask yourself: should you really impose this gauge invariance? Because you have something like Elitzur's theorem, which tells you that gauge invariance cannot be broken spontaneously, meaning that if you simulate your gauge-invariant Hamiltonian on the full Hilbert space, the ground state will be the proper, gauge-invariant ground state. I do not think that is the case: in an extended Hilbert space you will definitely have extra matrix elements in the game, and I would expect that the energy will go down. No, it does not: Elitzur's theorem really tells you that the ground state, and I am talking about the ground state here, obeys the gauge constraint on the full Hilbert space. But believe Elitzur or not, we tested it, at least for the Schwinger model. What we did, just to play with this: we took the same matrices, again already truncating the electric field, so this is effectively truncating the Hamiltonian to electric fields up to 3 or so, from minus 3 to 3, but we did not impose gauge invariance. We just basically filled in all the zeros, we did not impose this particular structure, and then we did the same simulation,
to compare, so either with the explicit gauge-invariant form or with the general form. And what we find, you can maybe look at this comparison: this is without imposing gauge invariance, and we find that you converge to the same ground state. This is my gauge constraint squared, and it is effectively zero here; there it is zero by construction. But then you can look at the difference in simulation time: for the same effective total bond dimension you get six and a half hours versus five minutes for this distribution of bond dimensions. What you can understand if you look at the algorithm: you have these sparse matrices, and that for sure helps, but there is actually an extra reason. You also see that you need many more steps of the imaginary time evolution to converge in this case, and the reason is that the theory on the full Hilbert space is, well, nearly critical. What you have on the full Hilbert space are string states: one site here, one site there, one site there, and so on. On the full Hilbert space I can violate Gauss's law, so I can put charges here, giving me, for instance, an electric field on one lattice link, and so an energy of order g squared times a. So in the limit of a going to zero, these string states effectively become massless. This is the difference between the two Hilbert spaces, and the reason why it indeed helps to work on the gauge-invariant subspace. And because I am already five minutes over time, I will stop here. Thank you.

Okay, very quickly then, the real-time dynamics. What we looked at is the Schwinger effect. The Schwinger effect amounts to a quench where you switch on a background electric field: where was my beautifully written Hamiltonian, there, so you change this L to L plus alpha. At time zero we start from the ground state of the alpha equals zero case, and then you quench with the Hamiltonian where we have made this substitution, where we add this background field. What you see there is the evolution of the electric field in time, the electric field expectation value, and what we find, for small but also not-so-small quenches, is actually not a thermalizing behaviour but rather this oscillatory behaviour; you can see the different quenches. This actually connects very well with the work of Gábor, well, he was not talking about this yesterday, but this is an illustration from their paper, where they looked at quenches also in a confining theory, but now the Ising model with a longitudinal magnetic field. The explanation for not having thermalization is this picture: according to the Cardy-Calabrese picture, you should think of your initial state as a state of quasi-particle pairs of the quench Hamiltonian, and these quasi-particles want to spread, which would give you a thermalizing behaviour. But because of the confining potential, if these are the fermions, they cannot spread, they are confined, and you get this oscillatory behaviour. That is the picture, and the picture actually amounts to a simple statement about the time-zero state I start from:
it is very well approximated by the ground state of the alpha Hamiltonian, dressed with pair creation operators, actually you have two types of particles in this case, of zero-momentum modes of that Hamiltonian. So you basically get a coherent state with some density of zero-momentum particles. And what I just wanted to say is that you can in fact check this picture quantitatively, because we also have information on the states: these green lines, and there is no fitting here, are the plots you generate based on this picture, so it seems to work very well. And then maybe let me go to this final result: for stronger quenches you do see a kind of thermalizing behaviour. This is the electric field, this is the particle number, and these dashed lines are intervals where we can estimate the thermal value, because we also did finite-temperature simulations, so we can actually estimate the thermal values of these observables. And although we cannot definitively conclude that we have thermalization to a local Gibbs ensemble, we at least have strong indications that we do. So this took like five minutes.

Do you keep the same truncation in the local electric field here? Yes, the truncation typically stays the same; the bond dimension does not. In this case we used Guifré's iTEBD, where basically the bond dimension grows during the simulation, and that is the reason why we have to stop at some point. Why did you do that, you had proposed this other method? Yeah, but there it is harder to have an adaptive bond dimension; with iTEBD it grows on its own, while there we would have had to start already with very big bond dimensions. Another question, on the imaginary time evolution: you were showing that the initial state is explicitly gauge invariant, but what happens if you break that in the initial state, is it restored after the imaginary time evolution?
Well, yeah, this is what Elitzur is telling you, and this is what we tested. And what about the other question, the spectrum of excitations above the ground state? Yeah, so we also did that; let me just show you one slide on that. These are the masses, the zero-momentum excitations. For background field zero, well, okay, I will just tell you what we found, not how we did it: you actually find three excitations. And in this case, this one seems to lie in the continuum band of that one, but it is still a stable one, we see it in our simulations, because here we have an extra charge conjugation symmetry which keeps it from decaying into two of the lighter ones. But as soon as you turn on the background electric field, we see, also in our simulations, that this one becomes unstable, leaving two excitations there. And we can do this for different backgrounds. For any ratio of the background field over g? No, not for any, because, depending on the background field, the second excitation also disappears, for alpha around 0.3 or something. But for alpha equal to zero it is not... no, but nonetheless you get this excitation above the threshold? Yes, yes. Actually, the prediction by Coleman is that for smaller and smaller coupling you should find more and more of these excitations, which are basically string excitations, if you want, excitations of your meson, your mesonic excitations.

Do you have a nice plot, for instance for the mass of the lowest meson, which would agree with perturbation theory in one regime and with the strong coupling expansion in another regime and would interpolate between them, something that other approaches could compare to? Maybe it is the simplest plot you can imagine; for me, you go to all these complicated quenches and so on. I do not have one for the mass, but, well, for instance, this one I think is a nice one, it is not what you want, but I like it, because we can also construct eigenstates, excitations, with momentum, and these blue lines are E equal to the square root of k squared plus m squared, the relativistic dispersion relation, and that fits perfectly. And then maybe the nicest cross-check for me, using Feynman perturbation theory, was this. This is the energy density when you add this background field, which is now denoted as q, in the weak coupling regime, so m over g is large, as you see, g over m is small. You can compute this basically by putting in a background electric field and integrating everything out, and you get something like this; these are the first two contributions to this coefficient. What this plot shows you is the result with this first contribution already subtracted, so this is really the one-loop Feynman diagram that you calculate, and the plot shows you that in the appropriate regime you are smack on what Feynman is telling you. And why I like this: you should of course find this, it is nothing spectacular in that sense, but if you think about what we did, we take the A_0 = 0 gauge, we do Kogut-Susskind, we have our matrix product states, and then we take the continuum limit; while the other way is to just use path integrals in the perturbative approach. And you indeed find, well, the same value.