OK, thank you very much. And thanks also to the organizers for the invitation to this workshop; I'm very happy to be here. I would like to give you an introduction to two-dimensional tensor networks, where I will mostly focus on the network called iPEPS, the infinite projected entangled-pair state. That's a 2D tensor network ansatz to represent a wave function on a 2D lattice in the thermodynamic limit. So everything I'm going to tell you about is going to be on a lattice; I'm not really working on quantum field theories myself. But I hope this is nevertheless going to be of interest to you, to see what we can currently do with 2D tensor networks and what they are about. And maybe this will then also stimulate further discussions on how these ideas could possibly be used for quantum field theories as well.

As you can see, 2D tensor networks look a bit more complicated, and drawing them takes a considerable amount of time. That's why I opted for a slide talk, with the permission of the organizers; I hope that's OK for everyone. And of course I'm happy to share the slides after the talk with everyone who would like to have them.

OK, so let me start with a motivation and an overview of different types of tensor networks. I guess every one of you has seen a matrix product state before, which is the underlying ansatz of the well-known DMRG method. One can really say that this approach has revolutionized the study of one-dimensional systems over the last few decades. But there exist also other types of tensor networks for one-dimensional systems, like the MERA, the multiscale entanglement renormalization ansatz invented by Guifre Vidal, which is a very powerful ansatz for critical states in 1D. And already more than 10 years ago, so it's not something new that I'm telling you, these tensor networks were generalized to two dimensions. Of course, the goal is to repeat the enormous success of DMRG with a tensor network that is specially designed for 2D systems. Now, 2D tensor networks look more complicated, and the algorithms involved are far more complicated than in 1D. Nevertheless, in recent years there has been quite a lot of progress with these approaches in 2D, and they have really become powerful and useful methods for the study of 2D systems. I hope that with the examples I show you today, you can get a feeling for what we can currently do with them.

OK, so here is the outline of the talk. The main part is going to be an introduction to PEPS and iPEPS, and I will start with very basic things: a quick recap of the main idea of a tensor network ansatz, then an explanation of the PEPS and iPEPS ansatz. Then I would like to go a little bit into the challenges we have in 2D, and we will see that a particular challenge is the actual contraction of the 2D tensor network. I will also say a few words about the optimization methods. In the second part, I would like to show you one example application where iPEPS turned out to be a very useful approach: the so-called Shastry-Sutherland model. That's an effective model for strontium copper borate. We got really surprising results here with iPEPS, which were in contradiction with all the previous predictions from the past 15 years, and it turned out that these previous theoretical predictions were based on a wrong initial assumption.
In the end, using essentially unbiased iPEPS simulations, we found new results which helped us to gain a new understanding of the magnetization process in this material. So this is going to be the example application, and I will end with an outlook and a summary.

OK, so let me start with the very basics and repeat the main idea of a tensor network ansatz, especially for those who are not very familiar with this topic. The main idea is to have an efficient representation of quantum many-body states, typically the ground state of some local Hamiltonian. Let's take a simple example: a lattice with six sites, where each site carries some local Hilbert space, for example of dimension 2, with a spin-up and spin-down basis. In total we have 2 to the power of 6 basis states, so to represent a state in general we need 2 to the power of 6 expansion coefficients; these are the expansion coefficients of the wave function in this tensor product basis. Now, these expansion coefficients can be seen as a multidimensional array with six indices, or we can also call it a tensor. Let's represent it graphically with this shape here, with six legs corresponding to the six indices.

And now the idea is simple: we would like to decompose this big tensor into smaller pieces, as for example shown here. This collection of smaller tensors connected by lines is, in the end, what we call a tensor network. In this particular case, it's a matrix product state. But of course this is only one of the possible ways to decompose the big tensor, and depending on how we decompose it, we get another type of tensor network ansatz with different properties. To each of these connections we can associate a Hilbert space with a certain dimension D, which is called the bond dimension. To evaluate such a network, what we need to do is simply multiply all these tensors together and sum over all the connected indices. This is done here, and if you do this contraction, you obtain a new tensor psi tilde. The aim is that this psi tilde is an accurate approximation of the exact psi. The important difference is that on this side we have a number of coefficients that grows exponentially with the number of lattice sites, whereas in a tensor network ansatz, the number of parameters only grows polynomially with the bond dimension and the number of lattice sites. That's what we mean by an efficient representation. (A small code sketch of such a decomposition follows below.)

Now you might of course wonder why such a decomposition is possible. Well, it's not possible in general, but it can be done for the states we are typically interested in, the ground states of local Hamiltonians, because, as you probably know, these states turn out to be much less entangled than a random state from the Hilbert space. That's expressed by the well-known area law of the entanglement entropy, which I guess most of you also know. It says that if you consider some region A in your system, with linear size L, and look at how the entanglement entropy scales as a function of L, then for a generic random state from the Hilbert space you get extensive growth: the entropy grows with the volume of A.
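To make the idea of "decomposing the big tensor into smaller pieces" concrete, here is a minimal NumPy sketch (my own illustration, not code from the talk) that turns the 2^6 coefficient tensor of a random six-site state into an exact MPS by successive SVDs; all names and the tolerance are placeholders:

```python
import numpy as np

# Decompose a random six-site spin-1/2 state into a matrix product state.
d, n = 2, 6
psi = np.random.rand(d**n) + 1j * np.random.rand(d**n)
psi /= np.linalg.norm(psi)

tensors = []
rest = psi.reshape(1, -1)              # (left bond) x (remaining physical space)
for site in range(n - 1):
    Dl = rest.shape[0]
    rest = rest.reshape(Dl * d, -1)
    U, S, Vh = np.linalg.svd(rest, full_matrices=False)
    # an approximate MPS would truncate here, e.g. keep only S > 1e-12
    tensors.append(U.reshape(Dl, d, -1))   # site tensor with legs (Dl, d, Dr)
    rest = np.diag(S) @ Vh                 # push the remainder to the right
tensors.append(rest.reshape(rest.shape[0], d, 1))

print([A.shape for A in tensors])  # exact MPS: bond dimensions grow, then shrink
```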
But for ground states of local Hamiltonians, you typically have this area law, which means that the entanglement entropy only grows with the area of the boundary between A and the rest of the system. In one dimension, this means that the entanglement entropy asymptotically approaches a constant, which then also means that the number of relevant states you need to describe some region A, some block in your system, also approaches a constant. That is, in the end, the reason why with matrix product states you can describe wave functions in an exponentially large Hilbert space using just a few hundred or thousand states. In 2D, the area law implies a linear scaling of the entanglement entropy, proportional to the boundary of your region A. Now, there are also exceptions to this area law: all critical states in 1D have a logarithmic correction to the area law, and in 2D some critical states, not all of them, also have such a logarithmic correction.

OK, a tensor network in the end exploits this fact. It's not necessarily a good ansatz for an arbitrary state in the Hilbert space, but it's designed in such a way that it can reproduce a certain entanglement entropy scaling: typically an area law, or, in the case of the MERA, a log L scaling.

So let's go back to our matrix product state. One can easily see that a matrix product state reproduces the area law in 1D, which is just a constant: if you cut your system into two pieces here, then one can easily show that the entanglement entropy is bounded by the log of the bond dimension. That's a constant, independent of your system size, and that's fine, because that's just the area law in one dimension.

Now, early on, people wanted to use this idea of DMRG also to simulate systems in two dimensions. The simplest way to do this is to put your matrix product state onto a two-dimensional lattice, for example with such a snake here. But there's a problem: if you cut your system into two pieces, a left side and a right side, then the area law of the entanglement entropy tells you that the entanglement will grow linearly with L, linearly with this cut length. As a consequence, the bond dimension that you need here, which connects the left and the right side, grows exponentially with the cut length. Intuitively this makes sense, because as you increase the system size, you have more and more entanglement, and all this entanglement is captured by this single bond here, so it has to grow exponentially. This does not mean that you can't use an MPS in two dimensions; there exist many very impressive calculations based on this ansatz. It's just that you can't afford a very large system size in this direction here. That's why people typically use long cylinders: you don't have an exponential scaling along the long direction, only in the short width.

OK, and two-dimensional tensor networks have been developed precisely to overcome this exponential scaling, to have an ansatz which is really scalable in 2D. The solution is rather simple: we just introduce more connections between the tensors. This leads us to a PEPS, the so-called projected entangled-pair state, also called a tensor product state. You can see this is really a very natural generalization of a matrix product state to two dimensions.
As in a matrix product state, we have exactly one tensor per lattice site, but now each tensor is connected to its four nearest neighbors. And one can easily show that such a PEPS reproduces the area law in two dimensions in a natural way. Why is that? Let's cut the system again into two pieces, a left side and a right side. Then you see that the number of bonds you cut is exactly equal to the length of the cut, and each bond you cut can contribute at most log D to the entanglement entropy. So the entanglement entropy between the left and the right side is bounded by L times log D, and that's exactly a linear scaling in L, the area law in two dimensions. So that's a very natural realization of the area law in 2D with a PEPS, very similar to how you get the area law in 1D with a matrix product state.
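Written out, with D the bond dimension and L the length of the cut, the two bounds just discussed are:

```latex
S_A \le \log D \quad \text{(MPS, one bond cut: 1D area law)}, \qquad
S_A \le L \, \log D \quad \text{(PEPS, } L \text{ bonds cut: 2D area law)}.
```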
Now, this can be seen as an ansatz for a 2D system with open boundaries; we could also do periodic boundaries by reconnecting the tensors. But there exists also a special version for infinite lattices in 2D, called iPEPS, with which you can represent a 2D wave function directly in the thermodynamic limit. The advantage of this approach is of course that you don't have finite-size or boundary effects. If your state is translationally invariant, we can parametrize it simply by one tensor A, which is repeated everywhere; that's a very compact representation of an infinite 2D wave function. But since we are working in the thermodynamic limit, translational symmetry can also be spontaneously broken, and that's why in practice we might need more than one tensor. Imagine, for example, an antiferromagnet with spin up on one sublattice and spin down on the other; this requires an ansatz with two different tensors, A and B, as shown here. More generally, what we need is a unit cell of tensors of a certain size, which is periodically repeated in the ansatz, and this unit cell has to be compatible with the symmetry-breaking pattern of the ground state. Of course, we usually don't know the structure of the ground state in advance; that's what we would like to find out. So what we do in practice is run simulations using different unit cell sizes. Each unit cell realizes a certain state with a certain variational energy, and we check which unit cell gives the lowest variational energy; that one corresponds to the ground state.

So this is the iPEPS ansatz. As any other tensor network, it's a variational ansatz where the variational parameters are stored in the tensors. The next step in a tensor network algorithm is then to actually find the best variational parameters, the best representation of the ground state of a given Hamiltonian. This is usually done either by an iterative optimization, an energy minimization as in DMRG, where you sweep over the tensors in the ansatz and minimize one after the other, or by imaginary time evolution, another common approach; I will briefly come back to this later. Once you have found the best state, you want to compute quantities of interest from it, like correlation functions or energies, and to do that you have to contract the tensor network representing the expectation value. In the case of an MPS or a MERA, this contraction can be done in an exact way. However, with PEPS, it can only be done in an approximate way. So let me spend a bit of time explaining in more detail what the challenge is here.

For this, let's first discuss how we actually contract a tensor network. I'll just show you an example; it doesn't really matter what it represents. We contract a tensor network by a sequence of pairwise multiplications. For example, we could start with these two tensors and multiply them together; the result would be this gray tensor here. Then we take the next two, multiply them together, and so on. We just proceed, always taking a new pair and multiplying them together, until we have contracted everything. Now, what is important to realize is that the order of contraction actually matters: the sequence of pairwise multiplications you choose can have a big effect on the computational cost. That's why, whenever you contract something, you also need to check what the optimal contraction order is, to minimize the computational cost.

Let me illustrate this for a matrix product state. Imagine we want to contract the tensor network representing the norm of an MPS: here we have the ket of the MPS and here the bra. A bad way to do it would be to start up here and multiply the tensors of the ket together. Why is this bad? Because we create an intermediate tensor which has as many legs as the linear system size, and that would be an exponentially large object, with exponential computational cost. The good way to contract it is to start from the left side and just zip things up from left to right. You will see that if you do that, the intermediate tensor you obtain has at most three legs. So that's the good way to contract an MPS, and it can be done exactly, because the computational complexity stays bounded (see the sketch below).
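Here is a small illustration of this "good" contraction order, again a sketch under my own conventions, reusing the site-tensor layout from the earlier snippet:

```python
import numpy as np

def mps_norm_sq(tensors):
    """<psi|psi> of an MPS, contracted from left to right ("zipping up").
    Each tensors[i] has legs (Dl, d, Dr). The boundary object E never has
    more than two bond legs, so the cost stays polynomial in D."""
    E = np.ones((1, 1))                       # trivial left boundary
    for A in tensors:
        # absorb one ket tensor and one bra tensor into the boundary
        E = np.einsum('ab,asc,bsd->cd', E, A, A.conj())
    return float(np.real(E[0, 0]))

# e.g. with `tensors` from the previous snippet: mps_norm_sq(tensors) is ~1.0
```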
OK, so this works nicely for an MPS; now let's discuss how we would contract a PEPS. Let's also try to contract the norm of a PEPS: here the ket of the PEPS, and here the bra, for a three-by-three system. What we can do now is push the upper half towards the lower half, as shown here, and then we can multiply the ket and bra tensors together, for example here. This gives us this network, which always has two lines between the tensors, one from the bra and one from the ket layer. Two indices can always be combined into one bigger index, so what we end up with is this square-lattice network with bond dimension D squared on the bonds. So if you want to contract the tensor network representing the norm of a PEPS, you have to contract this two-dimensional square-lattice tensor network.

OK, so let's try to do that. Again, let's maybe start up here: we multiply these two tensors, then maybe this one, and so on. And you realize very quickly that no matter how we try to contract this, we will always encounter an intermediate tensor which has as many legs as the linear system size. So no matter the order, the exact contraction is exponentially hard. This seems like very bad news: we have an efficient ansatz which is scalable in 2D, but if you want to compute something from it, it seems exponentially hard. The good news is that there exist several controlled approximate contraction schemes with which you can contract this network in a controlled way, with a controllable error.

There exist several families of contraction schemes: schemes based on matrix product states, the so-called corner transfer matrix method, a family called TRG, the tensor renormalization group, and even more advanced schemes like TNR, tensor network renormalization, and so on. There are really many different variants. Let me first tell you what these methods all have in common, namely that the accuracy of the approximate contraction can be controlled by another parameter, another bond dimension, which we typically denote by chi. Whenever you compute something, the convergence in chi needs to be carefully checked. The overall cost of these contraction algorithms, depending on the method, is about D to the power of 10 up to D to the power of 14, where chi scales as D squared. So we have a polynomial cost, but with a rather large power of the bond dimension D.

Let me show you an example of this convergence as a function of chi. This is data for the 2D Heisenberg model: the energy per site as a function of the contraction parameter chi, for different bond dimensions of the PEPS. This would be a D=4 PEPS, and we contract it to obtain the energy; you see the energy converges quite rapidly as a function of chi. If you go to D=5, the energy is lower, and also here we get quite good convergence. The message is that the error due to finite chi is typically much smaller than the effect of the finite bond dimension D. So typically we make sure that things are converged in chi and then study quantities as a function of the bond dimension D.

OK, so let me just sketch a few of the ideas behind these contraction schemes. Maybe the most basic one is to use matrix product state techniques. If you look at this network, the first row has the same structure as a matrix product state, just with a physical dimension of D squared. The next row has the same structure as a so-called matrix product operator, which has an ingoing and an outgoing leg. Now, it's well known from 1D calculations how to apply an MPO to an MPS: this gives a new MPS with an increased bond dimension, which is then truncated down to a smaller bond dimension. So we absorb a row, compress, and represent the result by a new MPS with a smaller bond dimension. This is one step, and then we just proceed: we take the next row, multiply it in, do a compression, and so on. As we proceed from the top to the bottom, we eventually contract the full 2D network. OK, so that's one approach (a rough sketch of this absorb-and-truncate step follows below).
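Here is a rough sketch of that absorb-and-truncate step in NumPy; this is my own minimal version, with a crude single-sweep SVD truncation (a production boundary-MPS code would canonicalize first). The index conventions are assumptions: MPS tensors (Dl, d, Dr), MPO tensors (wl, d_out, d_in, wr), open boundaries with trailing bond dimension 1.

```python
import numpy as np

def apply_mpo_to_mps(mps, mpo):
    """Absorb one row of the 2D network (an MPO) into the boundary MPS.
    The bond dimension grows from D to D*w, so a truncation must follow."""
    out = []
    for A, W in zip(mps, mpo):
        T = np.einsum('apb,uqpv->auqbv', A, W)   # contract the physical index
        Dl, wl, d_out, Dr, wr = T.shape
        out.append(T.reshape(Dl * wl, d_out, Dr * wr))  # merge bond pairs
    return out

def truncate_mps(mps, chi):
    """Compress back to bond dimension chi with one left-to-right SVD sweep."""
    out, carry = [], np.ones((1, 1))
    for A in mps:
        A = np.einsum('ab,bpc->apc', carry, A)
        Dl, d, Dr = A.shape
        U, S, Vh = np.linalg.svd(A.reshape(Dl * d, Dr), full_matrices=False)
        k = min(chi, len(S))
        out.append(U[:, :k].reshape(Dl, d, k))
        carry = S[:k, None] * Vh[:k]             # equals diag(S) @ Vh
    # absorb the final 1x1 carry (overall scale) into the last tensor
    out[-1] = np.einsum('apc,cb->apb', out[-1], carry)
    return out
```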
Then let me briefly tell you about the so-called corner transfer matrix method, which is the approach I usually use to contract an infinite PEPS. Imagine we have an infinite 2D lattice of this tensor A, which is just repeated everywhere. The idea of the CTM is to start from a small system with some initial guess for the boundary: the boundary is given by corner tensors C1, C2, C3, and C4, and edge tensors T1 to T4. The idea is now to let the system grow in all directions, and we reiterate this, going to larger and larger system sizes, until we have reached convergence. Let me show you how to let the system grow on the left side. For that we introduce a new column of tensors, just a copy of T1, A, and T3, and then we multiply these new tensors onto the left boundary. This gives us new boundary tensors C1 tilde, T4 tilde, and C4 tilde, but now you see there's an enlarged bond dimension between them, these two lines. So the last step is to perform a renormalization, very similar to the truncation in DMRG, so that we end up with new tensors C1 tilde, T4 tilde, and C4 tilde which contain an additional column of the system. This is called a left move, where we have increased the system size on the left. We can also do a right move, a top move, and a bottom move, and then we just reiterate this over and over until we reach convergence. Once we have converged, these boundary tensors account for the infinite 2D tensor network surrounding the center tensor A. So that's the main idea of the corner transfer matrix method. There exist different variants here too, but I will not go into the details.

So this '96 reference, did they know? Yes. They weren't thinking of iPEPS. No, but they used it for 2D classical partition functions. You can represent a 2D classical partition function as a two-dimensional tensor network, and if you contract it, you can compute thermodynamic quantities from it. So there's a longer history to two-dimensional tensor networks than the mid-2000s; it actually goes back to the 90s.

Okay, maybe just very briefly: TRG, the tensor renormalization group, is usually used to contract a system with periodic boundaries. Let me just show you the picture. What one does is take the tensors on sublattice A and split them using a singular value decomposition: we split each tensor into two pieces, keeping only a bond dimension chi. On sublattice B we do the same thing, but in the opposite direction. Now we can plug this in: we replace each tensor on sublattice A and on sublattice B, which gives this network here. Then we multiply four tensors together on such plaquettes, which results in a new square-lattice tensor network, rotated by 45 degrees. You can repeat this until you have contracted the entire network (a minimal sketch of one such step follows at the end of this overview). Also here there exist many variants. And then probably the most advanced scheme is called TNR, tensor network renormalization, which was invented by Glen Evenbly and Guifre Vidal. I will not go into the details, but there's one additional ingredient compared to TRG, namely these green tensors here, which are called disentanglers. What they do is remove short-range entanglement between different blocks of tensors before the coarse-graining. If you know the MERA ansatz, this is very similar to what happens there.

Okay, I just wanted to give you this brief overview of the contraction methods, and the take-home message is that we can indeed contract a 2D network in a controlled way. This is still a very active field of research, so we can expect more progress here, with more efficient and more accurate contraction methods in the future.
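Here is the TRG sketch promised above: one Levin-Nave coarse-graining step, in my own minimal conventions (leg orderings vary between implementations, and real codes also normalize the tensor at each step to avoid overflow):

```python
import numpy as np

def split_pair(A, perm, chi):
    """SVD-split a rank-4 tensor: permute legs by perm, group the first two
    against the last two, keep at most chi singular values."""
    D = A.shape[0]
    M = np.transpose(A, perm).reshape(D * D, D * D)
    U, S, Vh = np.linalg.svd(M)
    k = int(min(chi, np.sum(S > 1e-14)))
    sq = np.sqrt(S[:k])
    F1 = (U[:, :k] * sq).reshape(D, D, k)          # legs (perm[0], perm[1], new)
    F2 = (sq[:, None] * Vh[:k]).reshape(k, D, D)   # legs (new, perm[2], perm[3])
    return F1, F2

def trg_step(A, chi):
    """One coarse-graining step for a square lattice built from one tensor A
    with legs (up, left, down, right), all of dimension D. Returns the tensor
    of the 45-degree-rotated, coarser lattice."""
    FA1, FA2 = split_pair(A, (3, 0, 1, 2), chi)    # sublattice A: (r,u) | (l,d)
    FB1, FB2 = split_pair(A, (1, 0, 3, 2), chi)    # sublattice B: (l,u) | (r,d)
    # contract the four triangles around one plaquette;
    # i = bottom bond, j = right bond, k = top bond, m = left bond
    return np.einsum('ima,ijb,ckj,dkm->abcd', FA1, FB1, FA2, FB2)

# demo with a random tensor; in applications A encodes e.g. a classical
# partition function and is rescaled after every step
A = np.random.rand(3, 3, 3, 3)
print(trg_step(A, chi=6).shape)
```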
Okay, another big topic is the optimization in iPEPS. Here I don't want to spend too much time; I just want to give you a quick overview. Most commonly, what has been done for iPEPS is imaginary time evolution, where the main idea is that applying the exponential of minus beta H to some initial state projects you onto the ground state as you take beta to infinity. The idea is then to decompose this big operator into small local operators by a Trotter-Suzuki decomposition, which is applied to your tensor network. Each time step increases the bond dimension of your tensor network, so at each time step there's a truncation involved, back to the original bond dimension, and there exist different truncation schemes for this (the elementary gate-and-truncate move is sketched after this overview). A commonly used one is called the simple update. It works in a purely local way and is computationally very cheap, but in the end it's not a very accurate truncation scheme. If you want an optimal truncation, you have to use the so-called full update, but this is computationally much more expensive, because at each step, to truncate a bond somewhere in the middle, you first have to contract the entire 2D tensor network. That's why it's expensive, but it's more accurate. More recently, people have also started doing direct energy minimization for iPEPS, and it turns out that with this you can get even more accurate results than with imaginary time evolution. Also here there exist different variants. So again, the take-home message is that this is still an active field of development, and more efficient optimization methods are still being developed.

Let me show you one benchmark for the Heisenberg model, which compares these different optimization methods. Here you see the relative error of the energy as a function of the bond dimension, compared to the extrapolated quantum Monte Carlo result: the simple update, the full update, and the variational update, which is the energy minimization. You see that for a bond dimension D equals 6, we already obtain an accuracy below 10 to the minus 4 in the energy per site. So compared with the extrapolated quantum Monte Carlo result, we can really get many digits of accuracy, and that's for a relatively small bond dimension of 6; in 2D, the bond dimensions are usually much smaller than in MPS calculations. Here is also some data for the order parameter, the staggered magnetization of the Heisenberg model. Here is the exact result, and you also see how we approach it as we increase the bond dimension. If you really want a better estimate, what you then need is an appropriate extrapolation to the infinite-D limit.
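As promised above, here is a minimal sketch of the elementary imaginary-time move: apply a Trotter gate exp(-tau h) to one bond and truncate back by SVD. This is my own one-dimensional illustration; in the iPEPS simple update the same step is applied bond by bond in 2D, with diagonal weight matrices approximating the environment, which is not shown here.

```python
import numpy as np

def apply_gate_and_truncate(A, B, h, tau, Dmax):
    """A: (Dl, d, D), B: (D, d, Dr); h is the Hermitian two-site term as a
    (d*d, d*d) matrix. Returns the two updated tensors with bond <= Dmax."""
    Dl, d, _ = A.shape
    Dr = B.shape[2]
    w, V = np.linalg.eigh(h)
    gate = ((V * np.exp(-tau * w)) @ V.conj().T).reshape(d, d, d, d)
    theta = np.einsum('apb,bqc,xypq->axyc', A, B, gate)   # two-site wave function
    U, S, Vh = np.linalg.svd(theta.reshape(Dl * d, d * Dr), full_matrices=False)
    k = min(Dmax, len(S))
    sq = np.sqrt(S[:k])                                   # split weights evenly
    return (U[:, :k] * sq).reshape(Dl, d, k), (sq[:, None] * Vh[:k]).reshape(k, d, Dr)

# example two-site term: spin-1/2 Heisenberg exchange
sz = np.diag([0.5, -0.5]); sp = np.array([[0., 1.], [0., 0.]]); sm = sp.T
h = np.kron(sz, sz) + 0.5 * (np.kron(sp, sm) + np.kron(sm, sp))
A = np.random.rand(2, 2, 3); B = np.random.rand(3, 2, 2)
A2, B2 = apply_gate_and_truncate(A, B, h, tau=0.01, Dmax=4)
```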
Okay, so then I would... yes? Can I ask why the variational update is better than the full update? Any intuition? It's a good question. In the end, in the full update we do an optimal truncation, but we apply a gate in the middle of our tensor network, and the optimal thing would be to apply it everywhere simultaneously in the tensor network and then do a global truncation of all the bonds simultaneously. I think that would be more accurate, because in the full update we optimize the tensor in the middle and then replace it everywhere in our ansatz. So I think that's where the difference comes from, but I would be interested in discussing this further. More questions? The Trotter error is also a problem there, no, in the full update? Yes and no. You would think that if it were the Trotter error, you could take the Trotter step to zero and then reproduce the same accuracy, but in the end you don't. So it's not related to the Trotter error. It's an interesting discussion point, and I would be interested in hearing your opinions about it.

Okay, so that was the introduction to the approach, and I would like to switch to applications. iPEPS has already been applied to a wide range of different models, from interacting fermionic systems like the t-J and Hubbard models up to various frustrated spin systems. This is really not a complete list; it's being used more and more. What these simulations have shown so far is that iPEPS as a variational method has really become competitive: we can compare with the best available variational methods, and in many cases we get competitive or even better variational energies, especially towards the two-dimensional thermodynamic limit. Another thing, especially compared to other variational methods, is that in many cases we can also find new physics, thanks to simulations which are to a large extent unbiased. Compared to variational approaches where you a priori put your physics into the ansatz, in iPEPS we just optimize the tensors, and sometimes we get a surprise.

That was actually the case for the Shastry-Sutherland model, which I would like to tell you about as an example application. This was work done in collaboration with Frédéric Mila at EPFL. The Shastry-Sutherland model is a Heisenberg model on the square lattice with some additional diagonal couplings, as shown in this picture. The model has a very nice realization in the material strontium copper borate, where the copper sites, here marked in red, carry spin one half. The lattice realized here is actually the same one; another way to draw it is shown here, where you have these dimers which are coupled to each other.

Now, let's first have a look at the phase diagram of this model without a magnetic field, as a function of J prime over J. If we take this ratio to zero, we are left with just the diagonal couplings, and the ground state is simply a product of singlets, the so-called dimer phase. In the other limit we recover the Heisenberg model, with an antiferromagnetic ground state. In between, there had been several proposals for intermediate states, and with iPEPS we were able to identify this intermediate phase as a plaquette phase: a non-magnetic phase which breaks translational symmetry, where strong plaquette correlations are formed, as shown in this cartoon picture. Now, the material lies in the dimer phase, but relatively close to the phase transition into the plaquette phase. And what is of interest for experiments is to look at this problem in a finite magnetic field.
So we add a finite magnetic field term here, because what has been observed in experiments is the appearance of several magnetization plateaus. Here you see the magnetization of the sample as a function of the external magnetic field, and these features appearing here, these plateaus, at 1/8, 1/4, and 1/3, with respect to the magnetization at full saturation. The theoretical challenge has been to explain what types of states are realized on these plateaus.

Now, early on it was found that this system has almost localized triplet excitations. We can take the ground state, given by a product of singlets, and simply create a triplet excitation by turning one of the singlets into a triplet. If you add an external magnetic field, you start creating these triplet excitations, which align with the external field. Another observation is that if you put two triplets next to each other, you can easily show that they tend to repel each other, at least at the mean-field level. This has led to the natural intuition that these magnetization plateaus correspond to crystals of these triplet excitations. Here are some suggestions of how these crystals could look: in black you have the singlets, the dimers, and in red the triplet excitations. That's a proposed crystal structure at 1/8, this would be a proposal for 1/4, and so on. That was back in 2000, and since then there have been many works in theory and also in experiments: more plateaus have been found in experiments, and in theory people tried to match this experimental sequence.

What was that on the previous slide? What were the solid and the dashed lines in the plot? Oh, so this... Sorry, on the data plot. Oh, okay, these are... It's a good question; I think it was two different orientations of the magnetic field. So there's a little bit of anisotropy... Yes, that's experiment. Yeah, that's experiment.

Okay, so in theory people tried to match this, but even after, say, 10 or 15 years, there were still several mismatches. A plateau was found at 1/9 in theory but at 1/8 in experiments, and it also seemed a bit strange: why should you obtain a plateau at an odd number like 2/15? So even after many years this was still a puzzle. And now this seemed an ideal problem for iPEPS, because with iPEPS you can simulate quite large unit cells. The idea was to reproduce all these triplet crystals, look at their variational energies, and see which sequence is realized. That was the plan, but then came the big surprise, because it turned out that the initial assumption, that these plateaus correspond to crystals of triplets, was wrong.

Okay, so let me show you what we found. Here I show you a result for one triplet excitation obtained with iPEPS, where the triplet has been created on this dimer here: you see up, up, and then some response from the neighboring sites. Here I show you two triplet excitations in the same 4-by-4 unit cell, using a small bond dimension. If you use a small bond dimension, you reproduce the mean-field result, and as I mentioned before, at the mean-field level the triplets tend to repel each other; that's what you see here.

So you told us that there is this ansatz, iPEPS, that allows you to represent ground states. Yeah. Now you're telling us about excitations. No, it's the ground state in a finite magnetic field.
Well, in the end you can think of it like this: it's not an excitation on top of the ground state, it's the ground state in a finite magnetic field. But if you started from the zero-field ground state and created one triplet excitation, it would look like this. A local excitation. You're saying that if you look at the ground state you get at finite magnetic field and analyze it from the perspective of zero magnetic field, it looks like an excitation; is that what you're saying? Yeah, a finite density of excitations here; you can look at it like this.

So this is the expectation from mean field, which we get for small bond dimensions. But if you go to larger bond dimensions, we obtain something completely different, namely this spin structure here, which is a bit reminiscent of a pinwheel, and it turns out that this is a bound state of two triplets. So if you allow for sufficiently large quantum fluctuations, two triplets actually like to bind with each other to form such a bound state. Could it then be that the magnetization plateaus do not correspond to crystals of triplets, but rather to crystals of these bound-state objects?

But didn't you start by saying that it was known that triplets repel each other? Yeah, at the mean-field level. So that was wrong. That was wrong. If you go well beyond mean field, you see that it's actually not true.

So we tested that. We computed the energies of triplet crystals and of bound-state crystals and compared their variational energies. Here you see the energy per site as a function of one over the bond dimension: the two upper curves correspond to triplet crystals, with one example shown here, and the two lower curves correspond to bound-state crystals, with an example shown here. You see there's a clear separation between these curves: these types of states clearly have a better variational energy than those. So you distinguish between them by the choice of unit cell? That's right, by the choice of unit cell.

So this was at 1/8, and we found very similar results for the other plateaus. In particular, the 2/15 plateau now made perfect sense, because in terms of these bound-state objects it's really a very regular crystal structure. This was in the end a unit cell containing 30 tensors, that's 60 physical lattice sites, and that's only the unit cell; the unit cell is then embedded in the thermodynamic limit, in the infinite system. So already computing the ground state of one unit cell is something you couldn't do with exact diagonalization.

Okay, so the next step was to compute all the possible bound-state crystal states. Here are some examples, here are some more, and some more, and so on. That was quite an involved calculation. And then we could eventually draw the resulting magnetization curve. Here you see the external magnetic field, and these points correspond to the ground state realized at that field, with the corresponding magnetization.
That gives the magnetization curve, and you see we obtain sizable plateaus at 1/8, 2/15, 1/6, 1/5, and 1/4. This matches the experimental sequence, except for this plateau at 1/5. But then we found that if we go to a more realistic model of the material, including some additional anisotropic terms which are known to be present, this plateau vanishes, so that in the end we obtain a sequence in agreement with experiments. So in the end we got really surprised here, but I think it's a nice example which shows how numerical simulations can help to generate new ideas about old problems.

Can you say how you determine in your simulation where a given plateau terminates? Right, so I have one state here and another state here, and I can look at their energies as a function of h. This state has an energy which depends on h, and this one too, and at this location the two curves intersect, so that the other state becomes the ground state (the criterion is written out below). But I have more in mind the situation where the plateaus don't transition directly into one another, where there are intermediate phases. So these intermediate phases are kind of mixtures between plateaus, with certain domain walls; these are also states which can be realized in certain types of unit cells, but they are in the end a bit more tricky to pin down. There could be more structure there, but what was important in the end was to really see the sizable plateaus, which are the features you would also observe in experiments.

Just a comment: could you avoid any kind of bias from the unit cell by taking a very generic starting point and then just ramping up the magnetic field? That's right... but if you think about it, you would need to take a unit cell which is commensurate with all these structures, and this would be huge, really extremely large. So the answer is you can't do it computationally? Well, you could, but it would take a lot of computation time, and it would probably also get tricky: the more tensors you have, the more the optimization can get trapped in some local minimum, for example. So I would not do that; it was actually convenient to try different unit cell sizes to stabilize well-defined states.
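Written out, the level-crossing criterion just described: assuming each candidate plateau state has a field-independent magnetization m_i per site, its energy depends linearly on the field, and the transition field h* between states 1 and 2 is where the two lines intersect:

```latex
E_i(h) = E_i(0) - h \, m_i, \qquad
E_1(h^*) = E_2(h^*) \;\Longrightarrow\;
h^* = \frac{E_1(0) - E_2(0)}{m_1 - m_2}.
```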
Okay, so that's the example. There are many more applications I could talk about, but of course there's no time. Another family of models where iPEPS was very useful are the so-called SU(N) Heisenberg models, which are generalizations of the standard SU(2) Heisenberg model; these have become important also in the context of ultracold-atom experiments. It turns out that depending on the value of N and the type of lattice, you obtain a really rich variety of different types of ground states; these are just the cartoon pictures. If anyone is interested in that, I would be happy to discuss it later.

Then maybe let me mention another recent advance, namely for the simulation of the 2D Hubbard model, which, as you might know, is one of the simplest models of strongly correlated electrons, but extremely challenging to solve. Here, with iPEPS and also with other approaches, we focused on one particularly challenging point in the phase diagram, and by pushing several methods to their current limits, we tried to get a conclusive answer about the ground state. That was indeed possible in this case, and we found that the ground state is a so-called stripe state. That was the first time there was actually consensus between several methods about the ground state of the doped 2D Hubbard model. This is interesting because it means that challenging problems like the 2D Hubbard model are now getting within reach also for 2D tensor network calculations.

That's right, so also here I would use different unit cell sizes: a small one to stabilize a uniform state, and different unit cell sizes to stabilize stripes of different periods, and then compare their variational energies. I'd be very happy to discuss this in more detail later. What was really an open question for a very long time was whether the stripe state is really the ground state compared to the uniform state, and in that calculation we could eventually see a clear separation between the uniform and the stripe state. There is still a competition between different stripe periods, quite a strong competition actually, so that's harder to answer, but there was a slight preference at this particular point in the phase diagram for the period-8 stripe.

Okay, so Steve White had older calculations using just DMRG where he argued that there were stripe states, but he had to sort of fiddle with boundary conditions to make them appear. None of that is here? No, these stripes really appear spontaneously if you use a unit cell where a stripe fits in.

Okay, so that's another example, and then let me just come to the outlook. In the end, I think we have reached an interesting stage with 2D tensor networks: they have really become useful tools for the study of 2D systems. But I think it's also important to emphasize that we're still kind of at the beginning, because there are still many ways to further improve the methods, to improve the efficiency and to enhance the accuracy, and that's a promising perspective given the accuracy already reached so far. What do you mean? Oh, 10 minutes? 10 minutes, that's fine; I think I will even end earlier.

So let me just mention a few directions in which the efficiency of 2D tensor networks can be increased. One important ingredient is to use the symmetries of a model. That's already partially
done, but it can also be made more systematic: if you use symmetries, all your tensors get a block structure and everything becomes more efficient. Another idea is to combine 2D tensor networks with Monte Carlo sampling, which also reduces the computational cost of the contractions. There are many levels at which you can parallelize the codes; that has not been exploited so far, at least not to a large extent. The core things, in the end, are really the optimization and the contraction algorithms, and I think there's still room for improvement there. And then there exist also interesting combinations of approaches: one can, for example, combine a tensor network with another variational wave function, so start from a highly entangled variational wave function and put a tensor network on top of it to add additional correlations. I think that's also an interesting direction, or combinations, for example, with fixed-node Monte Carlo.

In this talk I've only told you about ground states, but of course we would like to do more, and there has actually been quite some progress in recent years in going beyond ground states. In the end, all the ideas that are around in one dimension for matrix product states can eventually also be ported to 2D, and there has already been progress, for example on the computation of excitation spectra, on properties at finite temperature, on real-time evolution, and on open systems. And then, maybe relevant for this workshop, I put a question mark here: is there some continuum version of these 2D tensor networks? I don't know what the latest status of this topic is; that might also be interesting to discuss.

Okay, so let me summarize. I think everyone agrees that 1D tensor networks have been the state of the art for a few decades now. 2D tensor networks are more challenging, but there has been quite some progress in recent years, and iPEPS has really become a useful and competitive tool to study challenging problems. As an example, I showed you today the Shastry-Sutherland model, where iPEPS simulations helped to gain a new understanding of the magnetization process in the material. And I emphasized that this might still be the beginning: there's still big room for improvement and for different possible extensions, and that's why I think this is, in the end, a really promising route to solve challenging open problems in 2D. With this I would like to thank you for your attention. Questions?

So there are some 2D systems which are near-critical, like... okay, there's this famous controversy about the antiferromagnet on the kagome lattice, where, depending on whom you ask... Where does iPEPS stand in that debate? So there have been iPEPS calculations for the kagome Heisenberg model which hinted towards a critical state, a gapless spin liquid. These are very impressive calculations; the only caveat, let's say, is that the simple update has been used for these calculations, and there's maybe a remaining question mark over whether that's an issue or not. So not the best possible optimization method has been used in this type of calculation. But did it provide evidence for the pi-flux state? I'm aware of some work involving matrix product states on cylinders... Yeah. ...which seems to indicate that this pi-flux state is the correct description, with some Dirac cones. Can you get similar evidence from iPEPS? So, again, what has been
found in the end is really some hints towards a gapless state, with algebraically decaying correlations. That was the calculation that has been done, but this could maybe be pushed further to get more evidence.

For that problem, which approach is currently better, MPS or iPEPS, do you know? You see, there's always this discussion of which one is better, but in the end you should really see the two approaches as complementary. You might get very well-converged results with DMRG for a cylinder of a certain width, but then you might still have some relevant finite-size effects, and you don't know how important they are. With iPEPS, you would typically not reach the same level of convergence as with DMRG on cylinders, so you have to do longer extrapolations, but you don't have finite-size effects. So both methods have their own advantages and disadvantages, and the best situation is when you get agreement between the two approaches, which is also what we did for the 2D Hubbard model, comparing iPEPS to DMRG on cylinders, where we could get agreement between the two. So I think it's a bit hard to say which is better: if a cylinder of a width that you can handle with DMRG already captures the relevant physics, then it's fine. Very similarly, for a 1D problem, if exact diagonalization is enough to capture the physics, then you don't need DMRG. But it's always good if you can check things by going to larger and larger system sizes and see what the finite-size effects are. So in the end one should really see these approaches as complementary.

Can I ask the obvious question: is there anything working in 3D? Not much, let's say, not much yet. For 3D quantum systems, I don't think so; for 3D classical systems there are some calculations, but there's not much. I think it's still not clear how to contract a 3D tensor network in the best way. It also depends on the setting, but no, this is quite open, and it would be interesting to explore in more detail.

What about topological order? Yes, you can represent states with topological order with PEPS. There exist also examples of exact states: if you take Kitaev's toric code, you can really write down the exact PEPS wave function of a topologically ordered state. So this is a class of states that can be captured. Thank you.