Thank you very much for the introduction, and thank you all for coming to this talk, and to the organizers for the invitation. I'll continue talking about applying MPS to lattice gauge theories, and I want to explain a couple of things. I want to comment on the possibility, which Carrel already mentioned, that in a system with open boundary conditions in 1+1 dimensions we can integrate out the gauge degrees of freedom. The MPS formalism is actually compatible with doing that, and we have tested this approach. So I will explain explicitly why this doesn't spoil the advantages of MPS. I will explain it first with the model that Carrel introduced, the Schwinger model. But then I want to go one step further, still in the one-dimensional, easy setting: one step further in complication towards the ultimate goal of being able to apply these tools to QCD in 3+1 dimensions. That step is going to non-Abelian theories, and the simplest model we can consider here is SU(2). I will show you how we can combine all these things and apply them to SU(2) as well, and show you some results that we recently obtained with this model. I kept here the drawing that Carrel made first, so that you keep in mind that one of the ingredients of setting up one of these simulations with Matrix Product States is to find a suitable basis. We need a tensor product basis of the Hilbert space; of course, we have to have a well-defined Hilbert space where our problem lives. In this tensor product basis, our ansatz tells us that the wave function we are looking for, the state we are looking for, has coefficients with a particular structure: that of a matrix product state tensor network. But first we have to choose this basis, and in choosing it we want to do as well as we can to obtain the most efficient representation.
And this is what lies behind this kind of strategy. So let me start by writing again the Schwinger Hamiltonian as Carrel already presented it. We didn't coordinate so well here: I kept the fermionic language in all the expressions, because I think they are a bit more compact and easier to understand. Although, to tell you the truth, in our simulations we also use the spin formalism, because in the end it's just easier if you have standard MPS tools, although it's perfectly possible to work with fermionic degrees of freedom all the time. So, in the fermionic language, in terms of the discrete fermionic operators that Carrel introduced, which live on every vertex of the lattice, we have this Hamiltonian with the staggered mass term. There are slight differences here, because in the spin language the number of fermions is basically (sigma_z + 1)/2 or (sigma_z - 1)/2, so you get these one-half factors and plus or minus ones going around, but this is basically the same thing you already saw before. This term here is the electric field, the electric flux on every link, and all the rest was already defined before. And we have this plus a constraint, which is Gauss law, which in this language, well, my notation is a bit different, but I hope that's still okay. So, this lives on a one-dimensional lattice where we have fermionic sites and gauge links. Let me use circles for the fermions. In between every pair of fermionic sites we have a link variable, where the gauge field lives. And Gauss law connects the values of the gauge variables on both sides of a fermionic site. Well, if we just use this formulation, then going back to this first step, the basis we can use can have this structure, say. Let's start numbering the fermionic sites from zero.
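For reference, in one common convention (the talk's signs and staggering offsets may differ slightly), the staggered-fermion Schwinger Hamiltonian being described reads:

```latex
% Staggered-fermion Schwinger Hamiltonian, one common convention;
% L_n is the electric flux on the link to the right of site n.
H = -\frac{i}{2a} \sum_n \left( c_n^\dagger\, e^{i\theta_n}\, c_{n+1} - \mathrm{h.c.} \right)
  + m \sum_n (-1)^n\, c_n^\dagger c_n
  + \frac{g^2 a}{2} \sum_n L_n^2
```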
So, we will have n_0 for the occupation number of the first fermionic site, and then, say, L_0, because this is like an angular momentum; well, it's the electric flux here, as Carrel already introduced, and so on. If we have a background field, that acts like an initial value of the electric flux entering the system, but it's non-dynamical, so I will just assume it is zero and not write it here; it wouldn't change anything in the basis, because those are completely decoupled sectors. Okay. If we now go back to Gauss law, as was commented before, we can use it to solve for the value of the electric flux on every link. We can just say that the value on a certain link is whatever it was on the link before, plus the fermionic content, with some offset depending on whether it is a negative or a positive charge that sits on this site. And we can go on, substituting each one in terms of the one before, and so on, until we arrive at whatever we had as a boundary condition for our system, which I assume is zero. So I could actually write this in terms of that condition: alpha_0 plus a sum over all sites k smaller or equal than this n here, of c_k† c_k minus the corresponding offset term. And we can pull that offset out of the sum: it is proportional to the identity. So here we have a number operator, and there a constant which depends on the site. This part does not introduce too many problems in our Hamiltonian, so I'm just concentrating on the terms that might give rise to some problems. If you're interested in the full expressions, I can show them to you later. Okay, what happens here?
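Written out, the recursion just described and its solution (with boundary flux alpha_0 and the staggered-charge offset, in the conventions sketched above) are:

```latex
L_n = L_{n-1} + c_n^\dagger c_n - \frac{1-(-1)^n}{2}
\quad\Longrightarrow\quad
L_n = \alpha_0 + \sum_{k \le n} \left( c_k^\dagger c_k - \frac{1-(-1)^k}{2} \right)
```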
If we now substitute this into the Hamiltonian, we get rid of all the L variables here in favor of whatever fermionic content we have in the rest of the system. So we have the square of L_n, the square of this sum, and then we will have cross terms between the number of fermions on one site and the numbers of fermions everywhere else, basically. So we now have long-range couplings in our Hamiltonian. It is interesting to look at the structure of this term and to see that it is actually not that difficult to deal with; in principle, formally, we can plug it into our simulations and continue working with it. Okay. I didn't say what the last index is: this sum extends only over links, over link variables, so it goes to N-1 if N was my last site. This is not so important here. And then I will have a sum over k and l, both smaller or equal to n, of this. This will give rise to a term like c_k† c_k c_l† c_l. I'm just writing the terms that are interesting now, because anything with the identity times the number of fermions gives me something local in the fermion number, with some constants in front that I can sum over all the n's. Okay. So, I can actually make this a bit easier by restricting the sum to k smaller or equal to n and, say, l smaller than k, running from 0 to k-1; it doesn't really matter which one comes first. These guys commute with each other, so I can just write one term of the sum and look at its structure. And now this is k from 0 to n, and the summand only depends on k and l, so I can manipulate the sums over the indices and make k run over all the possible values, all the links from 0 to N-1, and make n just be larger than or equal to k.
Okay. So, if I do this, then I have a sum over k from 0 to N-1 and a sum over n from k to N-1, and since nothing inside depends on n, this is just a count of the number of terms that include a given pair of fermionic numbers. So in the end I get something which looks like a sum over k from 0 to N-1, a sum over l smaller than k, of (N-k) times c_k† c_k c_l† c_l, something like that. And I can also play with these indices, but it doesn't really matter which one I write first. I can focus on what will happen when I arrive at a certain site. And let me explain, because this was not mentioned explicitly before, why this makes it easy for us to work with. Is that first factor N minus k, not l minus k? N minus k; N is the total number of sites here, and k is the second index. This is just counting. Because I'm confused: if I were to think in the Lagrangian language of integrating out gauge fields, I would just get a density-density interaction proportional to the distance between x and y. Yeah, but this is not what you get here, because this is just counting. But is this procedure analogous or equivalent to integrating out the gauge field, or is it something else? I don't know how you translate one into the other; we can look at the expressions in more detail later. But it's basically the same thing: I'm not doing anything else here but integrating Gauss law, only I'm doing it in the discrete formulation and writing it directly in terms of the fermions. So, if you want to understand what this means: in the Hamiltonian I will have a term that has the product of the sum of all fermionic occupations n_l for all sites l from 0 to k-1, times n_k. And this term appears more and more times as I go further to the right, because every time I include the sum of all the terms to the left. So, okay.
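The index gymnastics above amount to a counting identity: summing the pair term over all links n reproduces each pair (k, l) with l smaller than k exactly (N - k) times. A quick numerical check of that identity, with made-up stand-in occupation numbers in place of the actual operators:

```python
import math

N = 8
q = [0.3, 1.0, 0.0, 0.7, 0.2, 1.0, 0.5, 0.4]   # stand-in occupations, one per site

# Direct form: for every link n, sum the cross terms q_k q_l with l < k <= n
direct = sum(q[k] * q[l]
             for n in range(N)
             for k in range(n + 1)
             for l in range(k))

# Re-summed form: each pair (k, l < k) appears once per link n >= k,
# i.e. exactly (N - k) times
resummed = sum((N - k) * q[k] * q[l]
               for k in range(N)
               for l in range(k))

assert math.isclose(direct, resummed)
```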
And the factor in front is just how many times this appears. Every time I move to the right I include more terms, and this particular term, of course, appears in all the sums to the right of it: more and more terms. Is this clear, more or less, what this means? It's clear how you do the computation; what is not clear is why it does not agree with the Lagrangian expression. Okay, I would need to see the expressions to see if I understand it, because these are not the ones I have an intuition for. So, the only thing I want to argue here concerns these kinds of terms. This factor depends only on the second index, so I can just group it with this operator: I can put all the terms that are to the left, and then this operator. So, when I'm computing the expectation value of the Hamiltonian, as Carrel was showing before, I can easily do local terms, and each of these terms in the sum would be something like this in my MPS. Say this is my MPS, and now I have this kind of two-body correlation, so I have to put one operator here and one operator there. This is the same contraction, something I can contract efficiently. So, say I want to optimize this tensor, so I want to contract everything from here to there; or, even without thinking of the optimization, say I want to compute the expectation value of this term. I can compute this contraction, then sum it with the contraction for the other term here, and the other term there, and so on. But what I can do instead is start, at this site, the contraction with the identity and the contraction with the operator, and then keep these two contractions independently. I move on: to the first one I can add another identity, or compute the contraction with one operator here; and to the second one, because I already included the operator, I can just add the identity.
Okay, so now I sum these two, because both are part of this operator. These two are just two vectors of the generic form that Carrel was using before. I only need to keep these two objects, and I can move on in the same way. So there are systematic ways of taking care of this for the whole Hamiltonian, but the important thing here is that for all of this sum I don't need to track the distance between this site and that other site, because in the end, when I put the last operator, I just include the proper counting factor. Okay? So this means I have a very efficient way of dealing with these long-range terms in this formulation. It doesn't tell you immediately that the MPS will be good, because now we have a Hamiltonian which is non-local anyway, right? Mari Carmen, is what you just said equivalent to saying that there is a very compact matrix product operator representation of this Hamiltonian? Yes, bond dimension five, I think. And if you were to use a matrix product state that explicitly preserved the U(1) symmetry, would you be able to read this number from the bond index directly? The thing is that we are removing the value of the electric flux from the picture completely, so we are just working in the physical subspace, and we can recover, as I'm showing, exactly that number. What I'm saying is that this operator you are measuring is the product of... No: you first sum the fermionic particle number from the left up to whatever position you are at, and then you have some number operator on the next site. But this information you could encode in the bond index of the matrix product state, if you preserve the U(1) symmetry explicitly in the bond index. Okay, but this is in the Hamiltonian; this is part of the Hamiltonian. I'm not yet talking about what the representation of the MPS is.
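What the sweeping argument describes is, in effect, how an MPO with a small bond dimension handles such terms. Here is a scalar caricature of it (plain numbers stand in for the MPS environment tensors, and the occupations n[k] are made up for illustration):

```python
# Scalar caricature of the left-to-right contraction described above:
# carry two "channels" while sweeping, as an MPO of small bond dimension would.
# n[k] are stand-in site occupations (numbers, not operators).
n = [1, 0, 1, 1, 0, 1]
N = len(n)

total = 0.0
prefix = 0.0                    # channel where one operator has already been placed
for k in range(N):
    # close the term at site k: the second operator lands here with weight (N - k)
    total += (N - k) * n[k] * prefix
    # extend the open channel: site k may instead host the *first* operator
    prefix += n[k]

# brute-force check of the same double sum, in O(N^2) instead of O(N)
brute = sum((N - k) * n[k] * n[l] for k in range(N) for l in range(k))
assert total == brute
```

The point mirrored here is that the sweep never needs the distance between the two operators; the counting factor is attached only when the term is closed.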
What I'm saying is just that if you eventually use a matrix product state to represent the wave function, you could read off such information. Yes, we actually do that in some cases when we need it; only for what I'm describing now, I think you don't need it at all. But let me give you the intermediate steps I wanted to mention. If we do that, we can eliminate all the links from the picture: we can reduce our basis to just the physical content, just the matter content if you want, because the Hamiltonian only connects states with the valid electric flux in between, and now we know how it connects them, since we can write this term in terms of the fermionic content only. So now we have a Hamiltonian which lives only on the fermionic degrees of freedom, and this is actually the physical subspace, because we have eliminated the gauge degrees of freedom, which are non-physical. If we want to go back and recover what the electric flux is on a link, we can actually do it nicely, from the MPS mindset, in terms of local isometries. The idea is that if we know the fermionic content on one site and the electric flux on the link before, the only thing we need to do is add them. So we can write an isometry which takes a value of the electric flux and a value of the fermion content and returns the electric flux on the next link, just according to Gauss law; here I just need to pass on the value of the electric flux. And of course I can concatenate all these things, so that if I have an MPS in the physical subspace with some nice MPS structure, then, because I said I am considering a zero background, I can just start from this end and reconstruct what the electric flux is on every link: on this link it will be this value, on the next one something else, and so on.
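As a minimal sketch of that reconstruction, for a single basis state rather than a full MPS, and assuming zero background flux and the staggered-charge offset convention used earlier:

```python
def reconstruct_flux(occupations, background=0):
    """Rebuild the electric flux on every link from site occupations via
    Gauss law: L_n = L_{n-1} + n_n - offset(n), where offset(n) = 1 on
    odd (staggered) sites and 0 on even ones (a convention assumed here)."""
    fluxes = []
    L = background
    for n, occ in enumerate(occupations):
        L += occ - (n % 2)      # add charge at site n, with staggered offset
        fluxes.append(L)
    return fluxes

# a strong-coupling-like configuration: even sites empty, odd sites filled
print(reconstruct_flux([0, 1, 0, 1]))   # -> [0, 0, 0, 0]: every link carries zero flux
```

In the actual MPS construction this update is implemented as a local isometry applied link by link, but the arithmetic per link is just this sum.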
And well, the last one doesn't really matter; it's again non-dynamical. So I could just write this thing, and it is just an isometry which recovers the value of the electric flux from the previous one, as it needs to be. But if this MPS in the physical subspace was a good MPS, a good approximation to whatever state we are interested in with a small bond dimension, then when I apply this isometry I can again write an MPS in the full basis from before, if I interpret everything contracted as the tensors of the MPS, so I flatten it. The dimensions of these tensors will in general be at most the product of the bond dimension I had before and whatever this isometry introduces. And what the isometry introduces is given by the value of the electric flux, because I am just passing the electric flux along, so I need to include as many states for this isometry as values of the electric flux I need to keep. What Carrel actually showed at the end of his talk in his results, and what we intuitively expect, is that for low-energy states at least we don't have contributions where the electric flux on one link is extremely high: the contribution of those sectors decays pretty fast. That is, roughly, the argument why this approach, even though it's not local, still has all the ingredients to be a good way to deal with the problem, so it makes sense to look at it. We actually did this for the Schwinger model with this formalism and got pretty good results. You can also use it for generalizations of the Schwinger model where you include several flavors, and this gives you access to the simplest situation in which Monte Carlo calculations have a sign problem even in 1D: if you include several flavors and you allow a different chemical potential for the different flavors, in principle Monte Carlo calculations will have a sign problem in the general case. I mean, there are some
specific parameters where they don't, but in the general case they have a sign problem. So, I have a question, maybe for Juan actually: in the end, I sympathize with wanting to do this to learn how to go to higher dimensions, but specifically for one-dimensional systems, is it correct that in the Lagrangian formalism you can integrate out the gauge degrees of freedom and end up with a local model? No, it's not going to be local; you get a non-local potential even in one dimension. In higher dimensions, in general, you can't integrate out all the gauge degrees of freedom; not all of them are redundant, or spurious in the sense of being fixed by Gauss law. But even in one dimension, after integrating them out you lose locality. And the other question was: what goes wrong if you use periodic boundary conditions? I'm assuming that what you said in one dimension with open boundary conditions doesn't fully generalize. You can do something, but not everything. For instance, one problem is that the background field I wrote here is not a background field anymore: the value on one link remains dynamical, because your Hamiltonian can induce changes of this value. Then you would need to keep sectors for this value, at least on one site, and that would be the extra complexity; a linear combination is possible, so you would need to keep all these values. Yes; how you do it in practice, you would need to figure out, I guess. Coming back to the several flavors: if you include different flavors and a chemical potential, there can be some effect. For some values of the chemical potential there is a phase transition to a state in which an imbalance between the two flavors develops. This can be observed with these tools and compared.
Is there an analytical result for the massless case? For the massless case there is an exact solution; for the massive case there is no exact solution, but you can do the same calculations. I don't plan to go through those results here, but if you're interested, we can do it afterwards. Okay? No, what I mentioned is that I won't show it explicitly, but the easiest way to see it is this: if I now look at the action of the Hamiltonian in this basis, the Hamiltonian only connects states with valid electric flux configurations, so it never takes you out of them, because it is gauge invariant. And what about a background field? Well, formally, you can apply a transformation and redefine the fields to absorb it. If you want the most intuitive picture: you can think of the background flux as entering from the boundary, and it does nothing dynamical, because the Hamiltonian doesn't act on it. You can also view it as being attached to the boundary. Yes, exactly, that is the transformation you apply to the fields to absorb it. Okay, so now I want to do the same thing for the non-Abelian case. This trick is of course very famous for the Abelian model; it has been used for strong-coupling expansions and for numerical calculations in the Hamiltonian formulation. I believe it was not much used for the non-Abelian case, even though it is known, in principle, that also in a non-Abelian theory in 1+1 dimensions the gauge degrees of freedom are not independent dynamical variables. It is just not so clear how to use these constraints to reduce the complexity of the calculation. So I want to show you one way of doing it.
I won't claim it is the only one, but it has some nice properties, and we can actually use it to push the possibilities further and to solve this model. Because if we reduce the number of degrees of freedom, we reduce the cost of all the calculations, and we can push the simulations further and explore more of the physics. Okay. To do this, we again take the Kogut-Susskind lattice gauge theory Hamiltonian formulation, and the only thing that changes here is that, instead of the hopping term we had in the Schwinger model, with the link variable in between that takes care of gauge invariance, this now has to be generalized to U, which is an SU(2) matrix. In reality it is a matrix of operators, once we consider the quantum model. It has the proper transformation law under gauge transformations: the fermionic operator transforms with a unitary, and the link has to transform with one unitary from the left and another from the right, to keep the Hamiltonian invariant. This is the basis of the construction. Going back to the picture I had before of my lattice of fermions and links: when I apply a gauge transformation, you can think of it as acting on a vertex, and it affects the fermions there and the adjacent links. So one can introduce generators of these transformations acting on the left end of the link and on the right end. These have to satisfy the proper commutation relations, also with the link operators, and the left and right sets commute with each other. And they have to transform U in the proper way. This gives us the transformation. This is just for one link; I don't write the index explicitly.
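In standard Kogut-Susskind notation (a common convention, possibly differing in details from what is on the board), the gauge-invariant hopping term and the transformation laws just described read:

```latex
H_{\text{hop}} \sim \sum_n \phi_n^\dagger\, U_n\, \phi_{n+1} + \mathrm{h.c.},
\qquad
\phi_n \to \Theta_n\, \phi_n,
\qquad
U_n \to \Theta_n\, U_n\, \Theta_{n+1}^\dagger,
```

with $\Theta_n \in \mathrm{SU}(2)$ acting on the color indices, so that $\phi_n^\dagger U_n \phi_{n+1}$ is invariant under the local transformation.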
But of course all these objects are defined for every link; this was just to show you what they are. So perhaps I should say what I do with this, and just recall what the proper basis is to start studying the Hamiltonian in the SU(2) case. Here you can see how things get more complicated when we go from the simplest Abelian case to the non-Abelian one. Okay, so these objects, call them L and R, generate the gauge transformations of U, the SU(2) transformations on the left and on the right. And in fact they are not independent of U, since U transforms under both. The Hilbert space of the link in this gauge theory is, by analogy, that of a quantum rigid rotor. You can apply two independent rotations, from the left and from the right, and these will be generated by two angular-momentum-like objects. The index a here is the SU(2) index; think of x, y, z for the three components of the angular momentum. These L and R are the chromoelectric fields: they are the analogs, or the generalization if you want, of the electric flux relations that Carrel presented before for the U(1) case. And the natural quantity for the electrostatic energy, now the chromoelectric energy, the energy term, is the Casimir of these operators, which is L squared, equal to R squared, and this is J squared. Okay? With all these ingredients, the Hilbert space of the link, or rather a basis for it, can be specified by three quantum numbers. One of them is the total angular momentum of this rigid rotor, the quantum number associated with the operator J squared; so j for this one. The other two quantum numbers are the third components of the corresponding left and right generators of the corresponding rotations: for L it is m, and for R it is m'.
Okay, so now we have a basis. You can think of this rigid rotor as functions on the group: J squared is then the operator whose eigenfunctions are the Wigner matrices of the representations, and L and R are just the left- and right-invariant derivatives on the group. But a question: once you go to the basis of j and m, why do you need both indices m and m'? Wouldn't one index be enough to still have a complete set of functions? Because you need two independent sets of rotations. I'm talking about the Hilbert space of the link, and for this you need two different sets of rotations. U, as I said, is a matrix of operators: it has color indices, one for each end of the link, and it corresponds to charges, if you want, to an angular momentum at each end. When U acts on a state |j, m, m'>, it takes you to |j + 1/2, m plus or minus 1/2, m' plus or minus 1/2> with one coefficient, and to |j - 1/2, m plus or minus 1/2, m' plus or minus 1/2> with another, and these coefficients are products of the Clebsch-Gordan coefficients for composing the angular momenta. With this, you have all the ingredients you need to know how to write the Hamiltonian in this basis and to explore it. But this is just for one link, and our basis has to cover the whole lattice, so I still have to tell you how to represent the fermions. Okay, I think I can just erase this part; it's not so bad. So, what I said was that now I have a link variable, and instead of having the electric fluxes here, we now have this j_n, plus the color indices I mentioned before, the m and m', here. And the fermions now have two components.
At each fermionic site, where before we had a single fermionic mode, we now have a spinor; I could use, say, the language of spin up and spin down, or I would rather say red and green if I want to refer to the two colors of the theory. So now I need two modes per site instead of one, fermionic modes with the proper anticommutation relations among them. And Gauss law is now non-Abelian: it has three components, which do not commute with each other. I can write it like this. The conventions are not exactly the same as the ones I wrote before for the Abelian case, but it means exactly the same thing: the generator here, the left generator of this link, is constrained by what the right generator of the previous link was, plus the fermionic charge. And the fermionic charge is just, say, c_n† sigma^a c_n, up to normalization; this operator is the non-Abelian charge carried by whatever sits on the site. One of the difficulties is that this law now has three components, and we have to impose all of them on the physical states. This is well defined: in general I could write a number here representing some external charge distribution, because I can place some external charges, and this is useful for some of the simulations we have done. But if there are no external charges, I can just set them to zero, as before, and we have to ensure that all three components annihilate the physical states. Okay, so what is the basis now for the matter sites? Following the same reasoning as for the fermionic content before: for each fermionic site I can just write a basis that says what the number of plus components and the number of minus components is, the occupation number of each of the two modes.
Or, equivalently, to say what the total number of fermions at the site is, and what the third component of the color is. That is just n_plus plus n_minus, and this other one is n_plus minus n_minus; it carries the same information and is just more useful for what I have in mind in using this representation. Because, in the end, what we want is to exploit the composition of the angular momenta corresponding to each of the links to organize the full basis, or to reduce it as much as possible. Okay. I lost my train of thought when I said this, but: at each site, are you still in the staggered formulation? Ah yes, sorry, I didn't say it, because I just started from here. Yes, yes. And what are these two? For each component of the original spinor there is now one extra index, which is the color. Do they correspond to positrons? Yes: on one site you have the positron in red and green, and on the next site you have the electron in red and green. That is what we have. Okay, so the basis, just to write it down, will be something like n_0, s_0, then a link, which will be j_0, m_0, m_0', then another n_1, s_1, then j_1, m_1, m_1', and so on. This would be a product basis, and in principle we could now start writing the MPS simulation. But this is of course still not enough, because this j is unbounded. It takes values over the non-negative half-integers, 0, 1/2, 1, 3/2, and so on, but it is unbounded: there is an infinite dimension per link, and we cannot work with this directly. So, again, we want to truncate. And the easiest way to truncate here is actually to truncate the representations we allow for this j. If you truncate but include full representations, so for each j up to the cutoff you include all the possible values of m and m', then what you end up having is an SU(2)-invariant model, only truncated; of course, it's not the full story.
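Counting states makes the cost of this truncation concrete: each kept representation j contributes a full (2j+1)^2 block of |j, m, m'> states, so the link dimension grows quickly with the cutoff j_max. A small sketch:

```python
from fractions import Fraction

def link_dimension(j_max):
    """Dimension of the truncated SU(2) link Hilbert space:
    sum of (2j+1)^2 over j = 0, 1/2, 1, ..., j_max."""
    j, dim = Fraction(0), 0
    while j <= Fraction(j_max):
        dim += int(2 * j + 1) ** 2
        j += Fraction(1, 2)
    return dim

print(link_dimension(Fraction(1, 2)))   # smallest non-trivial truncation: 1 + 4 = 5
print(link_dimension(1))                # adding j = 1: 5 + 9 = 14
```

The first value matches the physical link dimension of five quoted for the smallest non-trivial truncation later in the talk.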
This truncation was introduced, I think, by Erez Zohar and Michele Burrello, with the idea of using it to propose a quantum simulation scheme, so that you could realize the operators that appear there with atoms in an optical lattice, with operations you can actually apply to them; they have a concrete proposal along those lines. It is not the only way to truncate the model, and it's not the only way to work with gauge-invariant MPS or more general tensor networks, but in this case it is just the simplest way, because you have these bases and you truncate them, but you do it in a way that your model keeps the symmetry. But keeping the symmetry is important; in the end, is this a good truncation or not? Well, that's the question, right? I will show you some results at the end that compare how much things change when you include more and more representations. So does that mean it's a good truncation or not? Yeah, it's good if you include enough representations, of course, but I'm commenting now on the cost of including more and more representations. With what Carrel was showing before, you were doing exactly the same thing: you include more and more representations, and he showed explicitly that with, I don't know, six or seven values of L you have enough. But here, including seven different values of j would be very costly, because if you truncate this at some j_max, the dimension you have per link is the sum of (2j+1) squared, since (2j+1) squared is the dimension for a given representation. I can feel pity for the person who has to implement it, but in the end, is this a good truncation scheme? It is a good truncation scheme. Yeah, yeah. Okay, I didn't understand your question; it's a good truncation in this sense. But why is it that, as j increases, the weight in the wave function is suppressed?
Well, I think it's basically similar. I mean, I don't have a very clear intuition for this SU2 model, but it's basically the same phenomenon that happens in the Schwinger model, right? It doesn't let you have huge contributions from very large fields in between, right? So here it's the same, but in the non-abelian setting. Sorry, can we also just look at the third term in the Hamiltonian there? So there we have basically a sum of Casimirs, J squared. It's suppressing large J. So this is one effect. Okay, this is the effect of one term. Well, large values could appear anyway in the dynamics, right? Also in the Schwinger model it happens that, if you look at out-of-equilibrium dynamics, then maybe this is not the best way to proceed, and this is what Florian Hebenstreit showed in some cases: they had contributions from very large values in some out-of-equilibrium situations. But I guess for low-energy states you would expect that it is a good strategy. So how costly is this? Well, if your Jmax is one half, which is the first non-trivial one... So this last term in the Hamiltonian also depends on the lattice spacing. Should I think that, if I approach the continuum limit, the fluctuations of the J are more and more suppressed, so I could always achieve... Does the distribution of J, when you approach the continuum limit, become more and more peaked at small values of J? Okay, that's a good question. We didn't look at that explicitly. We looked at continuum limits, and we found that things converge well with small values of J, but we didn't look explicitly at whether, for different values of the lattice spacing, the relative weights change a lot. So I don't know. In the continuum limit you approach with... So is this coupling being suppressed or enhanced? In the continuum limit you approach, in principle, the ultraviolet fixed point, so fluctuations are going to be controlled by the fixed point, so you know what they are. 
In the continuum limit you take ag going to zero, because the coupling is dimensionful, and what happens is that this coefficient goes to infinity. So then this energetic suppression of large J is not necessarily dominant. Potentially you would have large fluctuations in the Js. Potentially. Maybe there's some mechanism that says that it's not happening. As I said, we didn't look explicitly at the dependence of these things on the lattice spacing. What we observed was that the continuum limit converges pretty well in the values of Jmax. The question I'm asking, which is relevant to assess how this is going to work in the continuum limit, is a question that has nothing to do with MPS. Maybe the answer is no. In the continuum limit, are there significant fluctuations of J? Is this known? Do you know if this is known? I'm looking at... No. I don't usually think about this, at least in the continuum language. So then the MPS answer is... it's a question that maybe is relevant to assessing whether these simulations will work, but it's also a question that has not been asked before. For the case of QED, with the electric field squared, there is a similar issue. So we really looked at this, pushing a to zero and seeing whether we needed a broader truncation, and actually, no. So this seems to work as well in the continuum limit. Right. Which is great news, but is there any intuition on what is happening? Maybe we should... Okay. I think it's like ten minutes, no? Since I... Yeah, so I should start wrapping up, because I still wanted to show you some... Oof! And I still didn't show you how to integrate out the gauge degrees of freedom. Okay, so this was just to count dimensions, right? So the smallest non-trivial truncation you could have here gives you a physical dimension of five for the link variable, which is okay. It's more expensive than the two we had before for the fermions, but it's still okay. 
If you go to Jmax equal one, you have to add three squared, right? So you have 14, and if you want to include three halves you are already at 30, and then you go to 55, and so on. So this grows pretty fast, and in practice it means that you really cannot explore very efficiently, with these tools, exactly the question that was discussed just now, because the computational cost doesn't allow it. Anyway, we did some simulations when we started this project, when we had the idea that, okay, this would be useful for quantum simulators and so on, with this formulation and the smallest non-trivial truncation, and this already gives some interesting results. It allows us to explore, for instance, how to construct observables that would allow one to observe string breaking, or to characterize it happening in this Hamiltonian picture if one did this, for instance, in a lab: you would like to know whether you are effectively seeing string breaking or not. Okay, but this is not something I'm showing now, because I just want to very quickly tell you a couple of things that you can do here to still use the MPS to get some more information about all these things. The first idea is quite simple, and it is that here you have, of course, too many degrees of freedom. All these color indices are redundant, if you want: your physical states are only going to be color singlets, and the Hamiltonian, the operators that you are applying there, only takes a color singlet to a color singlet. So in principle you could just keep one representative per color-singlet combination of these fermionic occupations and J values, and this already reduces your complexity quite a lot. So okay, let me just do it here, and the idea is that... so if we start with, okay, should I start here? 
Yeah, okay. So this was introduced by Hamer already in 1982; he used it for some strong-coupling calculations, and the way he was doing it is starting from the strong-coupling vacuum, which would be, say, zero fermions, so of course no s here, and zero flux; and then, because it's a staggered formulation, on the positron sites, if you want, you have the Dirac sea occupied, and so on. Well, you just need a representative, which in this case is just the same state, so you only need to indicate the n, the j, the n, and so on. And now your Hamiltonian is only connecting singlet states to singlet states, so you can just write, in general, this as the representative of the corresponding sum, where all the third components are summed properly and you include the proper normalization factors. I'm not writing the full story here, but the idea would be, for instance: you have something like 1, 1/2, 1, where this 1/2 is the j; this is basically a sum, over s and s prime equal plus and minus one half, of terms like 1 s, j s s prime, 1 s prime, with the proper coefficients, and so on. So you can just write this, and you can compute which transitions the Hamiltonian allows and write the effective Hamiltonian in this basis. For instance, I'm not writing the explicit transformations, but say you could have zero flux, zero fermions, then two fermions, and so on. For zero or two fermions, because this carries no charge, Gauss law implies that the J on the right has to be the same J as was here, so this is the only possibility; for one fermion, you could have J going to J plus or minus one half. And you can compute the matrix elements of the Hamiltonian in this basis, so you can actually write an effective Hamiltonian in this basis, which is already less costly than the one we had before, because the dimension of the link now is just the number of values of J you need to keep. So in this case it would be like two, three, four, and so on. 
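To make this selection rule concrete, here is a small sketch (plain Python, with assumed names; not the speaker's actual code) of which fluxes Gauss law allows on the link to the right of a site in this singlet basis: zero or two fermions leave the flux unchanged, one fermion shifts it by plus or minus one half, always truncated to 0 ≤ j ≤ Jmax.

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def allowed_right_flux(j_left, n_fermions, jmax):
    """Flux values allowed on the link to the right of a site, given the
    flux j_left on the link to its left and the occupation (0, 1 or 2)."""
    if n_fermions in (0, 2):        # site is a singlet: flux passes through
        candidates = [Fraction(j_left)]
    else:                           # one fermion: couple in an extra spin 1/2
        candidates = [Fraction(j_left) - HALF, Fraction(j_left) + HALF]
    return [j for j in candidates if 0 <= j <= Fraction(jmax)]

# At the open left boundary (j = 0), one fermion can only raise the flux.
print(allowed_right_flux(0, 1, HALF))  # [Fraction(1, 2)]
```

The number of values per link is then just the number of kept j's, matching the counting two, three, four above.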
So this is already a big advantage if you want to do this simulation. But of course it's not the whole story, because even though here you have these two possibilities, this is the only sort of indeterminacy you have. So if you know all the fermionic content, you know that there will be a component which, say, when you have one fermion, will be the j plus one half, and another component that will be the j minus one half. So you can actually include this information in the fermionic part, in the vertex Hilbert space, and define a basis for the fermions: we had the zero, one, two basis, so now we can just write a basis like this. Notice that the only thing we are trying to do here is to have an efficient representation of the physical subspace, to remove all the degrees of freedom that we don't want to deal with, so that in this basis we can then put the MPS ansatz and do the numerics more efficiently. So if we call this index, say, n with a hat, then we can just write a basis which only has these vertex degrees of freedom. And now, of course, the dimension of these bases is 4 to the N, because, again, we have dimension 4. So I didn't say it before: in the reduced singlet basis, the dimension of the vertex is just 3, because you just have to say how many fermions there are, 0, 1, or 2. Now we are back to 4 per vertex, but we don't have any extra link variables. And, okay, Gauss law in this basis was exactly this: for 0 and 2 fermions, the j on the right is the same as the j on the left, and for 1 fermion it is j plus or minus one half, these two possibilities. Now Gauss law has been completely abelianized, if you want. So, of course, we don't have it explicitly, but the relation, if we want to recover the j content as before, is that jN is just a sum, over k smaller than or equal to N, of some local operator, which I call qk, which just detects whether we find a 1-plus or a 1-minus fermionic site at position k. Okay? 
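The reconstruction jN = sum over k ≤ N of qk can be sketched in a few lines (a toy illustration; the labels '0', '1+', '1-', '2' for the four decorated vertex states are assumed names, not the talk's notation):

```python
from fractions import Fraction

# Charge q_k carried by each decorated vertex state: only a single fermion
# shifts the flux, by +1/2 ('1+') or -1/2 ('1-'); 0 and 2 fermions do not.
Q = {'0': Fraction(0), '1+': Fraction(1, 2), '1-': Fraction(-1, 2), '2': Fraction(0)}

def link_fluxes(vertex_states):
    """Recover j_N on every link as the running sum j_N = sum_{k<=N} q_k,
    assuming zero incoming flux at the open left boundary."""
    j, fluxes = Fraction(0), []
    for state in vertex_states:
        j += Q[state]
        assert j >= 0, "configuration violates Gauss law"
        fluxes.append(j)
    return fluxes

print(link_fluxes(['1+', '0', '1-', '2']))
```

This is exactly why the formulation stays compatible with open boundary conditions: every link flux is fixed by the vertex content to its left.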
And, well, that's basically it. So this you can use to do the simulations. And I will just show you, in the last two minutes, some example results. Doesn't this bring you back to a non-local Hamiltonian? Ah, yes, yes, of course, yeah. So the form of this thing is completely analogous to the one in the Schwinger model. So all the things that I argued before, that this does not really increase the complexity of the simulation so much, apply in the same way. There is a caveat, though, or a difference, because I didn't show you the matrix elements of the Hamiltonian in this basis, which are somehow inherited by this basis too: the hopping depends on the values of j, j prime. So there is a non-local component to the hopping now, but it's sort of easy to take care of, because of the structure it has, which is, again, similar to the one I showed before, and which gives you this isometry that just carries, or counts, the value of the j on the link. And actually, we didn't do this, but it would be very nice to combine this now with the abelian symmetry in the tensors, like the one that Carrel was using, because that would actually simplify dealing with these hopping terms, I would say. We didn't do that explicitly, so we did the simulations without including it. But, okay. So, okay, this is the basis I showed. This is the explicit form of the isometry that I was mentioning now, that maps from one basis to the other. It looks horrible, with so many indices, but it's actually very simple, because the only thing you are doing is: if you know this fermionic content, or rather the decorated fermionic content on the site, and you know what the j was on the link before, you can reconstruct all the third components that were missing, with the proper Clebsch-Gordan coefficients. So it's actually very, very natural, and it's only ugly when written in that way. 
So, again, if you have an MPS in the reduced basis, which is what we are computing, you can just transform it back to the full basis, and you can recover anything you wanted there. So one of the things we looked at was entropies. I'm not talking about that here, but it was important to understand the structure of these isometries precisely to be able to relate entropies in a safe way. So I'm just showing you spectral properties here, and this is already after the continuum limit has been taken. So I'm not showing you all the intermediate steps you need to do to find the continuum result. Because, okay, the goal of this project from the very beginning... we were collaborating, I didn't say it before: this is a collaboration with Stefan Kühn, who was a student in our group, and Ignacio Cirac, of course, and then with Karl Jansen and Krzysztof Cichy, who are lattice QCD experts. So they work on lattice QCD, and the idea was to do the calculations in a very systematic way to extract continuum results for this theory, starting from the lattice formulation. So this is actually the result for the energy density of the ground state, for different values of the truncation, of this maximum J on the link. Okay, and what I'm showing here is not actually the energy density itself; it's actually the comparison to the full model, which in this case... okay, let me see. Okay, uh-huh. Yeah, exactly. For this case, there was an exact solution. Okay. So what you can see here is what I was saying before: even for the very smallest truncation, Jmax equal one half, okay? The ground-state energy is pretty good. So the error is 10 to the minus 2. This is already quite good, because it comes after all the extrapolations we had to do. Okay? It gets better if you increase the Jmax, but it doesn't seem so bad, which is sort of good news, if you want, for the... 
for people thinking of quantum simulating this thing, because in a quantum simulator, necessarily, if you do these simulations with local terms and so on, you will have a very small dimension for the degrees of freedom you can manipulate per link. Okay? And if we go one step further, we also want to compute the spectrum of the theory. We compute the mass gap. So this is the mass of the vector meson, the first particle in the theory. Again, we have... okay, does this work? Yes. So, again, I'm showing here the results for different truncations. Okay, let me just show you the result at the end. And this is the gap that you obtain for different values of the bare fermion mass if the truncation is at one half, so the simplest non-trivial one. And you see that in this case, well, the values differ more considerably than before, but not only that. The gap becomes much smaller when the mass of the fermion goes to zero. So here we can actually extract the critical exponent of the closing of this gap. And if we do this for the different truncations, we see that things differ. So here the errors are estimated after all our extrapolations, and the worst one, the one that is least controlled, is actually the continuum extrapolation. And it turns out that for Jmax equal one half, there is no reliable extrapolation at all. If you do these fits and you want to extract the continuum limit, your chi-squareds tell you that these fits are not really very reliable. This is why we don't show error bars here, because it seems that these fits are not really working. Which is sort of consistent, because it's not necessarily the case that this theory truncated to Jmax equal one half has the same continuum limit as the full SU2 theory, not at all. Okay? I think it's nice to see how things depend on the Jmax, because it's also a reason to push the simulations to larger Jmax, or to tell the quantum simulator people in the lab to do this or not. The stars are the lattice results? Ah, okay. 
So all these are lattice results. These are results, I think, from a strong-coupling expansion in an earlier paper by Hamer. But I think it's that paper that I was mentioning before. Okay. Well, the entropy I'm not showing here, because I'm already over time. But this is also something interesting that we can... okay, maybe I'll just show you the picture and that's it. So the entropy in this case is interesting too, the entropy in a gauge theory. It can be calculated, and you have very easy access to the entropy in the MPS. And you actually expect this kind of divergence in the continuum, so we can actually extract these central charges, the same as Carrel was mentioning. And again, this is... oh, sorry. Okay. Again, here is some place where you see very clear differences if your truncation is very small. I won't go into the details of this here, and it depends also on the mass of the fermion. But I think I'll just stop here, because I'm already over time, so thank you for your attention. Do we have any questions? I got confused. I thought you integrated out the gauge degrees of freedom, so you didn't have to do any truncation. Okay, in practice we could not avoid it, because it's more costly, because we have this dependence of the hopping on the J value. So the J value we can always reconstruct, but this cost grows linearly with the system size. So, because here we are taking system sizes of several hundred sites, we don't want to keep all possible values. So what we do is, systematically, we keep more and more and see when things converge, and when this is enough, we stop the simulations. So we could do the integration of the gauge degrees of freedom in a different way, but not with this abelianized theory. Other questions? Let's thank Mari Carmen again. Thank you.