OK, welcome back, everybody. Let me continue with the second part. I was too slow in the first part, so where I originally wanted to talk in quite some detail about how to use these iMPS, or how to use DMRG, to extract fingerprints of topological order, I have decided to just give a very brief idea of how this can be done; you can then look at the literature, or at some of the notes I wrote, where there is more detail on this topic. So let me give you the basic ideas. Where we stopped in the last part, we had just looked at the transverse field Ising model, and there I hope I could show that it is very easy to use infinite matrix product states to extract the relevant information: we could calculate the magnetization, the spontaneous symmetry breaking, to distinguish the ordered and the disordered phase. And what we did there for the simplest model also applies to much more complicated models. If you have some spin or Hubbard-type model, you will be able to identify breaking of translation symmetry, of spatial symmetries, of spin rotation, or whatever symmetry you think of; you can just check whether it is broken or not. However, there is a different class of phases of matter for which this will not work. These are well-defined quantum phases of matter that cannot be distinguished by measuring any local order parameter, so you cannot identify them by local measurements. The prominent examples are the quantum Hall effects, both the integer and the fractional quantum Hall effect, the so-called spin liquids, topological insulators, and also certain one-dimensional systems that I want to briefly talk about; many of the other topics are probably covered in the other lectures. These are phases that certainly cannot be characterized by symmetry breaking, and they have some quite fascinating properties. There is quantized conductance, as in the quantum Hall effects, and there is the effect of fractionalization: we build a system out of certain entities, say electrons forming a two-dimensional electron gas, and when we apply a strong magnetic field at low temperatures, the fractional quantum Hall effect forms, with excitations that carry fractions of the elementary charge. I find this quite fascinating: we build something out of electrons and we find excitations that are fractions of these electrons. And this is deeply linked to the fact that we have this kind of topological order. Another feature is protected, or symmetry-protected, edge states that characterize these systems. So there is a lot of interesting physics that I am not going to be able to cover in detail, but we can roughly classify these types into intrinsic topological phases, which are very robust to all sorts of small perturbations, and the so-called symmetry-protected topological phases, which are only robust as long as we play by certain rules, that is, as long as we preserve certain symmetries. And I now want to give a brief idea of how we can use matrix product states to understand the so-called Haldane spin chains.
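For orientation, the models in question are the spin-1 Heisenberg chain and the closely related AKLT Hamiltonian, which adds the biquadratic term with the one-third coefficient mentioned below (standard forms, my transcription, not copied from the slides):

```latex
H_{\mathrm{Heis}} = J \sum_i \mathbf{S}_i \cdot \mathbf{S}_{i+1},
\qquad
H_{\mathrm{AKLT}} = J \sum_i \left[ \mathbf{S}_i \cdot \mathbf{S}_{i+1}
  + \tfrac{1}{3} \left( \mathbf{S}_i \cdot \mathbf{S}_{i+1} \right)^2 \right].
```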
And this Haldane spin chain is, in fact, very closely related to the physics that we saw before in terms of the AKLT state. We saw this Hamiltonian before, but there we had the one-third times the biquadratic spin term, which is absent here. It turns out, and this was already one of the questions, that the physics at the AKLT point, where we have the one-third term, and at the point where we leave this term out completely is roughly the same. There is some quite interesting physics here, discovered by Haldane in the early to mid-1980s: he found that when diagonalizing this Hamiltonian on a ring, say, the system has a unique ground state protected by an energy gap to an excitation continuum. At this level it just looks like a simple, trivially disordered phase, a simple paramagnet. However, there is a surprise. If you look at this Hamiltonian not on a ring but on an open chain, you find that the ground state is all of a sudden four-fold degenerate. And in fact the AKLT state that we looked at before, where each spin-one site splits up into two spin one-halves that form singlets with their neighbors, is an explanation for this: in the bulk, all these virtual spin one-halves form singlets, except for the two poor guys at the ends of the chain, because they have no one to form a singlet with. So we have two essentially uncoupled spin one-half degrees of freedom sitting at the boundary, and these give the four-fold degeneracy: each of the two spin one-halves can point up or down. Good, and that by itself is something interesting, because we started from a spin-one chain, an integer spin chain, yet we have half-integer degrees of freedom sitting at the ends. So we have fractionalized edge excitations. And this is not pure fantasy, it can be measured: there are certain materials that to a good approximation are described by these spin-one chains, and it turns out that if you take one of these compounds and dope it with non-magnetic impurities, you essentially create an ensemble of open chains, and in the NMR profile one can actually see signatures of the spin one-half degrees of freedom. So let me now briefly discuss a more general concept that describes these types of phases, because it turns out that this Haldane phase is just one example of a big class of phases of matter, the so-called symmetry-protected topological phases. Because of the lack of time, I just want to give you a brief idea of what this is, and then you can see whether you find it interesting. Let us do the following. We take our ground state and cut a segment out of it; using the MPS formalism, we can just take a bunch of the matrices out of our ground state. Then we apply a symmetry operation to this segment, by applying the symmetry operation to each site. For our spin system, that could be a local spin rotation: we have our Heisenberg Hamiltonian, which is invariant under spin rotations.
So basically, if you rotate all the spins by a certain angle, the Hamiltonian and the state are invariant under this spin rotation. This means that applying the symmetry operation does not change the state at all in the bulk, because the bulk state is invariant under the symmetry. However, if we have degeneracies, the transformation can act non-trivially on the degrees of freedom at the edges. In particular, we are looking at systems where the on-site representation of the symmetry is a so-called linear representation, which means the multiplication rule for the representation is the same as the one in the group: U(g) U(h) = U(gh). However, the representation at the boundary can be a so-called projective representation. Projective representations have been known to mathematicians for more than a hundred years. The idea is that you take a regular representation of your group but add a consistent set of phase factors to it. We are all familiar with projective representations: think of half-integer spins. If you take the spin rotation symmetry and look at how it is represented in terms of the Pauli matrices, you find that the Pauli matrices anti-commute, so they carry non-trivial phase factors. Mathematicians have studied these projective representations for a long time, and in a few words, the question Schur asked is the following: take a group with certain generators and write down two consistent sets of phases, that is, two projective representations. Can one be transformed into the other or not? And he found that, well, not necessarily: there are different classes of projective representations, labeled by the elements of a cohomology group. It turns out that the same mechanism, the same ideas put forward by mathematicians a long time ago, can be used to classify symmetry-protected topological phases. And this can be applied exactly to this kind of model. We take the Heisenberg model, where I already argued that we are in this particular phase with half-integer degrees of freedom at the edge, and we add to it a so-called single-ion anisotropy. With the single-ion anisotropy we find the following: if we make this D very large, if we let D go to infinity, then clearly the ground state becomes a simple product state, a product of S^z = 0 eigenstates. And if D is zero, we are in the Haldane phase, the same phase as the AKLT state. As I argued, in this AKLT phase the boundaries carry spin one-halves, so there we have a projective representation. In the large-D phase we have a simple product state of integer spins, so the boundaries are in the trivial, linear representation, and the edge representations commute. This also ties into certain non-local order parameters that can actually be measured in experiments. Good.
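As a minimal illustration of a projective representation, here is a short numpy check (my own example, not from the lecture): the two generators of Z2 x Z2, pi-rotations about x and z, commute as group elements, but their spin one-half representatives, the Pauli matrices, only commute up to a phase.

```python
import numpy as np

# Pauli matrices representing pi-rotations about x and z for spin-1/2
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# In the group Z2 x Z2 the generators commute: gh = hg.
# Their spin-1/2 representatives anti-commute instead:
print(np.allclose(X @ Z, -Z @ X))  # True: U(g)U(h) = -U(h)U(g)
# so U(g)U(h) = e^{i phi} U(gh) with a non-removable phase -> projective
```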
But the main point that I want to make, with a little bit of extra work that I unfortunately do not have the time to explain in detail, so I refer to the notes, is that the DMRG language is formulated exactly in terms of those edges. We can think of doing a Schmidt decomposition of the system, which gives us, even in an infinite system, access to such edges; we can then apply the symmetry operations to those Schmidt states, or to the matrix product state, and directly read off the phase. In fact, this plot was produced with a very short MPS code, and it is very convenient once we are in this language. Good. Sorry for being very brief on this, but I want to move on to dynamics. Yes, a question on the previous slide? Okay, good. By knowing how the edge degrees of freedom transform, we can make many predictions. In particular, we can define so-called string order parameters, and we can design them in such a way that they are sensitive to which phase we are in. Having figured out that the representation of the symmetries on the edge degrees of freedom is different in different phases, we can design observable non-local order parameters, string order parameters, that can be measured. In fact, this picture here shows a measurement of such a string order parameter in a cold atomic lattice. But again, here I mostly want to focus on the numerical, computational part, and there it is quite neat that with the matrix product state representation we can more or less directly read off these phases from the MPS. Good. So in the first part I focused on ground state properties and on infinite systems: how can we efficiently represent ground states of infinite systems and work with them, and given an MPS, how can we extract various observables? And, unfortunately only very briefly, I talked about how this can be helpful for studying new phases of matter; I refer to the notes, where you find a little more on that. Let me now come to dynamics. In this part I mainly want to discuss how we can use matrix product states to study dynamics, in the form of quench dynamics, or to calculate dynamical correlation functions using MPS. For this I want to discuss mainly the so-called time-evolving block decimation (TEBD) algorithm, which I find an amazingly simple algorithm that allows us to do time evolution both in real time and in imaginary time; in the tutorials we are going to use this code quite a bit to play with different models. I will then show some results obtained with this algorithm, discussing quench dynamics and entanglement growth, which relates to how efficient MPS remain in these cases, and at the end I want to talk about a very recent development, namely the so-called MPO-based time evolution. Good, and if I had more time, which I expect is probably not going to happen, I could also talk a little about some very recent developments in the field of many-body localization and how we can use DMRG on excited states. Good. How do we simulate the time evolution of an MPS? The simple question we are asking is the following.
We have a matrix product state given at time zero and we want to evolve it by a certain amount of time, assuming that psi is not an eigenstate of the system. Over the past years several algorithms have been proposed, and many of them are closely related to each other. The first one, the one I am going to discuss in detail, is the time-evolving block decimation (TEBD), which I think is a very neat and extremely compact algorithm. There is the so-called time-dependent DMRG, which is very closely related to the first one, except that certain updates are done in a somewhat different way. There are Krylov space methods, which differ in that they apply the Hamiltonian globally to the system, but they have the drawback that the error scales less favorably. There is a recent development, the so-called time-dependent variational principle (TDVP), which seems to be a very powerful method but has not been explored so much. And there is a last method, which I want to discuss in a bit more detail later, the so-called matrix product operator (MPO) based time evolution, which is based on a very simple idea: given an MPO of a Hamiltonian, it gives you a recipe for writing down the exponential of this operator, again as a matrix product operator. Good. So let me start by introducing in more detail the time-evolving block decimation algorithm. We assume the following: our Hamiltonian has the simple form of a sum of local terms coupling neighboring sites, H = sum_i h_{i,i+1}. Basically all the models I have shown so far fall into this class; we had terms like S_i · S_{i+1}. If you want systems that also have a term S_i · S_{i+2}, applying this method requires grouping sites: you enlarge the unit cell and rewrite the Hamiltonian in nearest-neighbor form, paying the price of a larger local Hilbert space dimension. Good. So let us look at Hamiltonians of this form, and what I want to demonstrate is an algorithm that allows us to do the time evolution in real time and in imaginary time. We are interested in imaginary time evolution because it allows us to find the ground state wave function. Is it clear to everyone why imaginary time evolution gives us the ground state if we just run it for long enough? I think so. If you want to sit down and show it, you can take the state psi and expand it in terms of eigenfunctions of H, and then you see that under imaginary time evolution the contributions corresponding to excited states are exponentially suppressed in this superposition. By normalizing again, we find that eventually only the ground state is left.
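A quick way to convince yourself of this is a few lines of numpy (a toy demonstration on a dense matrix, not the MPS algorithm): apply exp(-tau H) repeatedly to a random state and watch it converge to the ground state.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 8))
H = (H + H.T) / 2                 # random Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)          # V[:, 0] is the exact ground state

psi = rng.standard_normal(8)
psi /= np.linalg.norm(psi)

U = expm(-0.5 * H)                # one imaginary-time step, dtau = 0.5
for _ in range(100):
    psi = U @ psi
    psi /= np.linalg.norm(psi)    # renormalize after each step

print(abs(V[:, 0] @ psi))         # -> 1: overlap with the ground state
```

The convergence rate is set by the gap E[1] - E[0], which is exactly the caveat discussed next.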
From this expansion argument you can also see that there are cases where this performs very poorly, namely when the gap between the ground state and the lowest excitations is very small, because then the convergence is very slow; it works extremely well when there is a large gap between the ground state and the first excited states. Good, so let us consider this type of Hamiltonian and do a decomposition of it. We have our one-dimensional system, and the terms of H act on all bonds. Now we decompose the Hamiltonian into a term F, acting only on the even bonds, so F acts only here, here, and here, and a term G, acting only on the odd bonds. Why is this decomposition useful? Because all terms within F commute with each other, since they have no sites in common, and likewise all terms within G commute. However, F and G do not commute with each other, because they overlap. Having made this observation, we can use the so-called Suzuki-Trotter decomposition: doing a time evolution with our Hamiltonian by a time step delta t amounts to applying e^{-i(F+G) delta t}, and to lowest order we approximate this by the product of the two exponentials:

e^{-i(F+G) delta t} = e^{-i F delta t} e^{-i G delta t} + O(delta t^2).

The way to see these corrections is to apply the Baker-Campbell-Hausdorff formula, which shows that the corrections are of order delta t squared. So if we make delta t small enough, we can at some point neglect these corrections. And what have we achieved by this decomposition? Within each of the two factors, all terms simply commute, so we can write

e^{-i F delta t} = prod_{j even} e^{-i f_j delta t},   e^{-i G delta t} = prod_{j odd} e^{-i g_j delta t}.

Okay, I think this is probably clear. What we have achieved is that we can write the time evolution as a simple product of two-site gates, operators that act only on two neighboring sites: first on the even bonds, then on the odd bonds, and by doing this successively we are guaranteed to be correct up to order delta t squared. So, going back to MPS: we can now apply exactly this to an MPS, and this is what we want to discuss in detail. We have our matrix product state given in the canonical form, defined here for a finite system, where we assume all the identities that we derived in the first part of the lecture. And now we apply these two-site gates to it.
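The O(delta t^2) error can be checked concretely with dense matrices (again a toy example of my own, assuming a 4-site transverse-field Ising chain split into even-bond terms F and an odd-bond-plus-field term G):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]]); Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def op(site_ops):                     # tensor product over the 4 sites
    out = np.array([[1.0]])
    for o in site_ops:
        out = np.kron(out, o)
    return out

ZZ01 = op([Z, Z, I2, I2]); ZZ12 = op([I2, Z, Z, I2]); ZZ23 = op([I2, I2, Z, Z])
F = -(ZZ01 + ZZ23)                    # even bonds: mutually commuting
G = -ZZ12 - 0.5 * sum(op([X if i == j else I2 for i in range(4)])
                      for j in range(4))

for dt in [0.1, 0.05, 0.025]:
    err = np.linalg.norm(expm(-1j * (F + G) * dt)
                         - expm(-1j * F * dt) @ expm(-1j * G * dt))
    print(dt, err)                    # error drops ~4x when dt is halved
```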
When I draw this picture, this U is just the exponential e^{-i delta t h_{i,i+1}}: we apply the gate built from the Hamiltonian acting on two neighboring sites. As a technicality, if I do this for a spin one-half system, this object corresponds to a four-by-four matrix, or equivalently a 2x2x2x2 tensor. All I am doing here is translating what I said before, the product of gates, into the language of matrix product states. So this is what we want to do to apply half a Trotter step to our state. What do we need to do now? We take our MPS and apply these gates to it. But then we clearly no longer have the MPS form: if we contract all these indices together we get big blobs, and this is no longer an efficient representation if we just keep going. What we want, and this is what the TEBD algorithm does for us, is to apply this product of gates and then return to the original MPS form. And this is what we will discuss now. We can do this for each bond individually: we start with the first bond, update it, bring it back to MPS form, then go to the next bond, and so on. So if we figure out how to do this for one bond, how to start from this object and return to a new MPS, we can declare victory and we have an algorithm. I want to go with you carefully through the steps of this algorithm now. The first step is that we need to apply U, and the idea is that we first construct an object theta by contracting everything together: theta = Lambda^A Gamma^B Lambda^B Gamma^C Lambda^C, with the left bond index alpha, the physical indices m and n, and the right bond index gamma. Connecting this to the previous lecture: we assumed this particular canonical form, which means that if I contract my MPS in this form, I get the wave function in a particular representation, the so-called mixed representation,

|psi> = sum_{alpha, m, n, gamma} Theta^{m n}_{alpha gamma} |alpha>_L |m> |n> |gamma>_R.

We have our one-dimensional system and we focus on these two sites: everything to the left is described by Schmidt states alpha, everything to the right by Schmidt states gamma, and the two local sites by m and n. So starting from the canonical form and contracting a few of these matrices, we get the wave function in the mixed representation of Schmidt states and local basis states. And now, to time evolve the wave function, since we have unpacked the physical indices m and n, we can easily apply the time evolution operator to them: our theta tilde, the time-evolved wave function, is just the one where we contract U into theta. Okay, so this is our U, this is the theta. What we have achieved is a time evolution of the wave function by one time step on one bond.
Now we have the new wave function and we would like to go back to the original representation. Recall that the MPS in canonical form is such that the bond indices always correspond to the Schmidt decomposition at the given bond. So this is step number two: we take our wave function in the theta form, with indices alpha, m, n, gamma, and regroup the indices into a matrix Theta tilde with row index (alpha m) and column index (gamma n), a matrix of dimension (chi d) x (chi d), chi being the bond dimension and d the local dimension. In this representation, everything left of the bond is described by the basis (alpha m) and everything right of it by (gamma n). So we take this object and do a singular value decomposition of theta tilde:

Theta~_{(alpha m), (gamma n)} = sum_{beta = 1}^{d chi} X_{(alpha m), beta} Lambda~_beta Y_{beta, (gamma n)}.

So I take my theta tilde and do a singular value decomposition into the isometries X and Y and the diagonal matrix Lambda tilde, and these singular values are exactly the Schmidt values for a decomposition of our state at this bond. Good. Graphically, we write the theta tilde blob as a product of X, Lambda tilde, and Y: X carries the indices alpha, m, and beta; then comes Lambda tilde with index beta; and Y carries beta, n, and gamma. So we can just unpack it. This brings us to step number three, because now we are almost there: we insert identities and get back the new MPS. We do this as follows. We have X, Lambda tilde, and Y, where Lambda tilde is already the new Lambda^B. Now we insert on the left the identity Lambda^A (Lambda^A)^{-1} and on the right (Lambda^C)^{-1} Lambda^C, where these are the original Lambdas at the neighboring bonds. Then we read off the updated tensors: Gamma~^B = (Lambda^A)^{-1} X, Gamma~^C = Y (Lambda^C)^{-1}, and the new Lambda^B is the Lambda tilde that came automatically out of the Schmidt decomposition. Okay: we took the wave function in the mixed representation, did a Schmidt decomposition of it, and this gives us immediate access to the new Schmidt values on this bond. We have the new Lambda matrix, we get the isometries for the left and for the right, and from them, by inserting these identities, we get back the updated Gamma matrices. So we can almost declare victory; we are back to this particular form.
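Putting the three steps together, here is a minimal sketch of a single bond update in numpy (my own reconstruction under specific conventions, not the lecture's code: site tensors Gamma with index order (physical, left, right) and vectors of Schmidt values Lambda; it also includes the truncation discussed next):

```python
import numpy as np

def bond_update(U, lA, GB, lB, GC, lC, chi_max, eps=1e-12):
    """One TEBD update on the bond carrying the Schmidt values lB.
    U : two-site gate, shape (d, d, d, d), ordered (m', n', m, n)
    G*: site tensors, shape (d, chi_left, chi_right); l*: Schmidt values
    """
    chiA, chiC, d = len(lA), len(lC), GB.shape[0]
    # step 1: contract Lambda^A Gamma^B Lambda^B Gamma^C Lambda^C -> theta
    theta = np.tensordot(np.diag(lA), GB, axes=(1, 1))     # alpha, m, beta
    theta = np.tensordot(theta, np.diag(lB), axes=(2, 0))  # alpha, m, beta
    theta = np.tensordot(theta, GC, axes=(2, 1))           # alpha, m, n, gamma
    theta = np.tensordot(theta, np.diag(lC), axes=(3, 0))  # alpha, m, n, gamma
    # apply the two-site gate to the physical legs m, n
    theta = np.tensordot(U, theta, axes=([2, 3], [1, 2]))  # m', n', alpha, gamma
    # step 2: regroup into a (chi d) x (d chi) matrix and do the SVD
    theta = theta.transpose(2, 0, 1, 3).reshape(chiA * d, d * chiC)
    X, S, Y = np.linalg.svd(theta, full_matrices=False)
    # truncation: keep at most chi_max Schmidt values, renormalize
    chi_new = min(chi_max, int(np.sum(S > eps)))
    lB_new = S[:chi_new] / np.linalg.norm(S[:chi_new])
    X = X[:, :chi_new].reshape(chiA, d, chi_new)
    Y = Y[:chi_new, :].reshape(chi_new, d, chiC)
    # step 3: strip the outer Schmidt values to recover the Gamma tensors
    # (dividing by small Schmidt values is numerically delicate in practice)
    GB_new = np.tensordot(np.diag(1.0 / lA), X, axes=(1, 0)).transpose(1, 0, 2)
    GC_new = np.tensordot(Y, np.diag(1.0 / lC), axes=(2, 0)).transpose(1, 0, 2)
    return GB_new, lB_new, GC_new
```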
However, doing this procedure we started from a bond with bond dimension chi, the bond dimension of the original MPS, and because we merged in the local sites, the bond dimension has increased to d times chi: the sum in the singular value decomposition runs over d chi states. After each iteration the bond dimension would grow, which means it would increase exponentially as the time evolution proceeds. So we have the important truncation step, and it sets in here: when we do the singular value decomposition, we get a picture roughly like this, where the singular values Lambda~_alpha decay very rapidly with alpha, so after each update step we keep only that many states, truncating back to chi or to whatever value we consider reasonable. Good. So this is the simple TEBD algorithm. Is it clear from this explanation how the algorithm works, or are there questions on how to implement it? I take it this is relatively clear then. So far I assumed that the system was not necessarily translation invariant: all matrices could in principle be different. Now there is a very simple insight that this method also works for infinite systems, and the idea is the following. We start from a state that is translation invariant up to a unit cell: we have two different types of matrices, the A sites and the B sites, and all A sites and all B sites are equivalent. If we start from such a state, this form is always maintained. The idea is the following. We have our infinite matrix product state, going on forever, with the pattern Gamma^A, Lambda^A, Gamma^B, Lambda^B, Gamma^A, Lambda^A, and so on. When we do our update step on this bond here, we get some new Lambdas and some new Gamma^A and Gamma^B, but the update is exactly the same as here, and exactly the same as there. So if we start from a state with this particular AB pattern, we get back a state with the same symmetry. Instead of applying the update an infinite number of times, we apply it once and realize that the same thing happens everywhere else. Then we do the odd bonds, and the same reasoning applies: the same thing happens on all of them, so again we do it once. In practice this means we only need to store two sets of matrices, Gamma^A, Lambda^A, Gamma^B, and Lambda^B, and then alternate updates between the A bonds and the B bonds; this performs all the updates in parallel. This is the algorithm I want to discuss now in a bit of detail. And I saw in the program that you are all already Python experts, so we can jump right in; I want to advertise that Python provides some very powerful tools that allow us to translate these drawings, in the way I like to present them, directly into Python code. And there are some very powerful tools to do so.
The first function I want to advertise is called np.tensordot. What it does is take two tensors and contract a certain number of indices. For example, here we have two tensors, Y and Z: Y is a rank-two tensor, just a matrix, and Z is a rank-three tensor. Now I can do a tensor contraction where I pick this m index, which is index one of Y (indices count from zero), and contract it with index zero of Z. In the picture language I would say: the resulting tensor X, a rank-three tensor, equals the matrix Y contracted with the rank-three tensor Z. And you can see from what is shown here that the entire algorithm consists of operations like this, where we contract matrices and tensors; that is very important. Then there is another operation that we, yes? That is very correct, yes, and I did not show it. Suppose we want to do something more complicated, say there is another index sticking out here and we want to contract it with this index there. Then the expression would look like this: X = np.tensordot(Y, Z, axes=([1, 2], [0, 1])), just making up some indices here. So you can pass lists of the indices you want to contract over; this will be important as well. Good, so this is for tensor contractions, but sometimes we also have to reshape, for example when we had the theta, a rank-four tensor, and we want to reshape it into a matrix. We first had a chi x d x d x chi tensor and we want to convert it into a (chi d) x (d chi) matrix. This can be done using the reshape command: here, for example, we had a rank-three tensor whose dimensions are d1, d2, and d3, and we reshape it into a matrix. This is again very useful. Lastly, I want to point out the transpose command, which we use to change the order of indices. It is particularly useful in combination with this regrouping, with reshaping: say you have an object, maybe the theta, with indices in the order (i, j, alpha, gamma), but you want to reshape it into a matrix with a different grouping; you first reshuffle the indices, maybe to (alpha, i, gamma, j), and then use reshape to turn it into a matrix. This is one of the cases where reshaping is quite useful. Good. So we can apply these ideas and write the code. I don't know, is this at all readable from the back? Because then I could actually go through it, which I find, at least when I wrote it for the first time, quite amazing: when I started thinking about DMRG I thought it was always very complicated, but this is the entire TEBD code that you need to write in Python, without any fancy libraries, just standard numerical Python.
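Here is a self-contained toy illustrating the three operations, with shapes chosen to mimic the theta tensor (the shapes themselves are made up for the example):

```python
import numpy as np

chi, d = 4, 2
Y = np.random.rand(3, 5)        # rank-2 tensor (a matrix)
Z = np.random.rand(5, 6, 7)     # rank-3 tensor

# contract index 1 of Y with index 0 of Z -> rank-3 result, shape (3, 6, 7)
X = np.tensordot(Y, Z, axes=(1, 0))
print(X.shape)

# contracting several index pairs at once: axes=([1, 2], [0, 1])
A = np.random.rand(3, 5, 6)
B = np.random.rand(5, 6, 7)
C = np.tensordot(A, B, axes=([1, 2], [0, 1]))   # shape (3, 7)

# reshape: regroup a rank-4 theta (chi, d, d, chi) into a (chi d, d chi) matrix
theta = np.random.rand(chi, d, d, chi)
theta_mat = theta.reshape(chi * d, d * chi)

# transpose: reorder indices first when the grouping requires it,
# e.g. (i, j, alpha, gamma) -> (alpha, i, gamma, j) before reshaping
T = np.random.rand(d, d, chi, chi)
M = T.transpose(2, 0, 3, 1).reshape(chi * d, chi * d)
```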
Let me go through this code, because it is also part of the tutorial, where you will be able to play with it yourselves. This is the program doing imaginary time evolution on a transverse field Ising model. We first define the parameters: the coupling, the transverse field, chi being the maximum bond dimension, delta being the imaginary time step, and N defining how many steps we want to take. Then it initializes the tensors with random numbers. Note that with random numbers the state is not in canonical form, but it turns out that in the course of the algorithm the code automatically transforms it into the canonical form. So this is just initializing the MPS, getting something of this form ready. Next we need the two-site gates, and they are produced in these two lines. I wrote down the Ising Hamiltonian defined on two sites by hand: for these two sites I define a local basis, where on each site the local basis is just up and down. Then I define the bond Hamiltonian in the two-site basis up-up, up-down, down-up, down-down. You can see this for the coupling J: J sits on the diagonal of this matrix, with plus one for up-up and down-down and minus one for up-down and down-up. And if you look a bit longer at it, you see that in the transverse field Ising model the transverse field just flips a spin, and flipping a spin connects particular pairs of these configurations. So this line gives us the Hamiltonian, and then I use a built-in command to exponentiate this bond Hamiltonian; that already gives us the two-site gate. Equipped with the two-site gate and some initial MPS, we can start the iteration, which follows exactly the simple protocol above: we alternate between A bonds and B bonds, where A or B is just the step number modulo two. Then we construct the theta blob, which is done by successively applying these tensordot operations, with l being the lambdas and g being the gammas. Let us do this in the picture: we first take the Lambda^B and the Gamma^A; Lambda^B has indices zero and one, and Gamma^A has indices zero, one, and two; we contract index one with index one, which gives us the first building block, and then we keep going until we have contracted all of them, giving us the theta. Then we apply U to this theta block, which is done here. Next, as advertised, we apply a series of transpose and reshape operations to bring it to the matrix form shown here. All that is left is the SVD: we use the built-in SVD function, which returns the singular values for this particular decomposition into two half chains. And then we update the lambdas: the new lambdas are just the singular values, renormalized, because we are doing an imaginary time evolution.
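As an aside, here is a hedged sketch of how such a bond Hamiltonian and gate can be built (my own reconstruction; the actual tutorial code may differ in sign and basis conventions):

```python
import numpy as np
from scipy.linalg import expm

J, g, delta = 1.0, 0.5, 0.01          # coupling, transverse field, time step
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
id2 = np.eye(2)

# two-site bond Hamiltonian in the basis (uu, ud, du, dd);
# the field term is split half-half between the two sites
H_bond = -J * np.kron(sz, sz) - g / 2 * (np.kron(sx, id2) + np.kron(id2, sx))

# imaginary-time two-site gate, reshaped into a (d, d, d, d) tensor
U = expm(-delta * H_bond).reshape(2, 2, 2, 2)
```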
Back to the update: the state needs to be renormalized at each step, so we just divide by the norm. And lastly we obtain the new Gamma and Lambda tensors. Then we iterate until we are done, or converged to something. Again, this is something you will have plenty of chances to play with tomorrow, to figure out a few things about the Ising model. But let me now come back to dynamics and show what we can do once we have this kind of code. First of all, we can use either DMRG or TEBD with imaginary time evolution to find the ground state of some Hamiltonian; here, for example, I found the ground state of the spin-one Heisenberg model. That can be done fairly easily. Then the protocol is the following: we have the MPS of the ground state, and I apply an operator S^+ to a given site. Before, the state was an eigenstate of the Hamiltonian; once I apply S^+ it no longer is, and we can study the time evolution and create movies like this with the iTEBD code. Here we created a spin excitation in a spin-one chain and we see how it propagates. And beyond being nice to look at, we can use this data to obtain the dynamical structure factor, which is quite useful. Let me show how to do this. In order to calculate the dynamical structure factor, we first want to calculate time-dependent correlation functions, which are given as follows: we take our ground state in terms of the MPS and evaluate <psi_0| O_1(t) O_2(0) |psi_0>. Plugging in the Heisenberg picture of these operators, this is <psi_0| e^{iHt} O_1 e^{-iHt} O_2 |psi_0>. But the state we started from is an eigenstate of the Hamiltonian, so e^{iHt} acting on it just gives a phase factor, and we are left with e^{i E_0 t} <psi_0| O_1 e^{-iHt} O_2 |psi_0>. So all we need to do is create our initial state, apply an operator to it, time evolve it, and then overlap it with the other state. This is something we can do fairly easily by time evolving a single state. Having obtained this, we can get quantities such as the dynamical structure factor, which is relevant for several experiments, such as neutron scattering, or, for electronic systems, spectroscopies such as ARPES, where we look at how a local excitation affects the system. The dynamical structure factor is just the Fourier transform of such a time-dependent correlation function: we flip the spin on site zero at time zero, let the system propagate, and flip it back on site x at time t, so we look at correlations in this time and space plane. Doing a Fourier transform of this object, we find the dynamical structure factor S(k, omega), and this is what we can use to study the excitation spectrum.
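Once the real-space, real-time correlations C(x, t) have been collected, the Fourier transform is a few lines (a sketch with hypothetical array names; the time window and normalization conventions are choices I am making here):

```python
import numpy as np

# C[t, x]: correlation <psi0| O1(x, t) O2(0, 0) |psi0>, sampled on a grid
# of N_t time steps (step dt) and N_x sites -- to be filled from the MPS run
N_t, N_x, dt = 200, 64, 0.05
C = np.zeros((N_t, N_x), dtype=complex)

# damp the signal in time to reduce finite-time cutoff artifacts
window = np.exp(-4.0 * (np.arange(N_t) / N_t) ** 2)
S_kw = np.fft.fftshift(np.fft.fft2(C * window[:, None]))

k = np.fft.fftshift(np.fft.fftfreq(N_x)) * 2 * np.pi            # momenta
omega = np.fft.fftshift(np.fft.fftfreq(N_t, d=dt)) * 2 * np.pi  # frequencies
S = np.abs(S_kw)                  # plot S(k, omega) as a heat map
```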
If we calculate this, for example, for the spin-one Heisenberg model, we can see the so-called Haldane gap: we see the gap and where in momentum space the minimum sits. And looking at the excitations, we can see at which momenta we find the gapless modes, et cetera. So this is quite insightful. And here we have a spin one-half ladder, and we see some broad structures, which are indicative of fractionalized excitations, and so on. So this is a quite neat tool to study dynamical properties of spin systems, and it can be done fairly easily with the simple code I showed a minute ago. Let me now come to something where we quickly reach the boundaries of what we can simulate, namely quenches, and here I just want to provide some intuition. The entire argument was based on the area law: I said that ground states of local Hamiltonians are very weakly entangled, so we can represent them efficiently by compressing the states; there is only little entanglement, only a few local fluctuations in the wave function. It turns out that this breaks down pretty quickly when we look at systems out of equilibrium, and I want to demonstrate this here. Assume we have, say, the Heisenberg model, and we start from a simple product state: we initialize our system with a product state, which clearly has zero entanglement. Now we time evolve the system with the Heisenberg Hamiltonian, and we look at the entanglement between the two half chains: again, how are the left and the right part of the system entangled? This is directly related to the bond dimension that we need. Recall that this is exactly how we argued that MPS are good: we did a bipartition of the system into two half chains and asked whether we can get away with truncating the Schmidt decomposition. And it turns out that the entanglement following this kind of global quench grows linearly with time. The argument for this is provided in a number of papers, but we can understand it in this way. We start from a product state, and this product state is not an eigenstate of the Hamiltonian; it has a high density of local excitations. As time progresses, these excitations spread over the system, and whenever one of the light cones emanating from one of these excitations crosses the cut, the system becomes more entangled. Maybe it is easiest to think literally in terms of particles: if a particle sits here at time zero, the probability of finding it on the left side of the chain is one. But as time progresses, once its light cone crosses the cut, there is a finite probability of finding the particle on the left or on the right; there is uncertainty, so the system becomes entangled. And the number of light cones that have crossed the cut increases linearly as a function of time, so we get this linear entanglement growth.
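In the MPS language the half-chain entanglement is read off directly from the Schmidt values on the central bond, so monitoring this growth costs nothing extra (a minimal sketch, assuming a vector lam of Schmidt values as in the TEBD code above):

```python
import numpy as np

def entanglement_entropy(lam, eps=1e-20):
    """Von Neumann entropy S = -sum_a lam_a^2 log(lam_a^2)
    computed from the Schmidt values lam on a bond."""
    p = lam**2
    p = p[p > eps]                 # drop numerical zeros before the log
    return -np.sum(p * np.log(p))

# after each TEBD sweep, record entanglement_entropy(lam_center);
# plotted against time, a global quench shows linear growth until the
# entropy saturates at the cutoff set by the bond dimension chi
```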
And that is really bad news for these matrix product state methods, because if the entanglement grows linearly, the bond dimension needed to express these states increases exponentially. So while we start with a state that has zero entanglement, we very quickly arrive at a state with very high entanglement, and we very quickly leave the comfort zone that we enjoyed when doing matrix product state simulations. In fact, while the ground state sits in this tiny corner of Hilbert space that obeys an area law, once we do a global quench, adding a finite density of defects, we very quickly run out of this zone and into states with high entanglement, and these states can basically not, or only with a lot of effort, be simulated. Just as a remark: when I started playing with this time evolution of MPS, I did the kind of quench experiments that you are also going to do tomorrow in the session, and it turns out you can do a time evolution up to time maybe ten or so on your laptop in a few minutes, but going to twelve or thirteen would require a supercomputer. Because of this exponential growth, it gets so much more difficult when you try to push it by just a few more steps. This is something you will hopefully experience tomorrow. Good, let me now, for the remaining few minutes, discuss some recent developments. Especially since we are more and more interested in using MPS also for simulating two-dimensional systems, via the snake technique that has been very successful for ground state DMRG, we are now quite interested in applying these ideas to time evolution as well. However, the standard method that I introduced, the TEBD algorithm, cannot cope with this: if you apply it directly, the problem is that in the one-dimensional language the couplings become longer and longer ranged. If we enumerate the sites one, two, three, four, five, six, seven, and so on, we see that even though the original model on the two-dimensional lattice was short ranged, in the one-dimensional representation we suddenly have a coupling between site number one and site number eight. This means that these Trotter-based algorithms cannot really be applied to such problems, unless one reshuffles the sites at every step, which becomes quite costly. Secondly, we would like methods that can be applied to infinitely long systems, so that we do not have to deal with boundary effects, et cetera, and we want something that is relatively easy to implement. Now I want to briefly introduce a method that does all of this, and it is based on a rather simple idea. Suppose the Hamiltonian is expressed as a sum of terms, H = sum_x h_x, where the h_x can span longer ranges. We want an expansion of the time evolution operator for a small step t (with t = -i delta t for real time or t = -delta tau for imaginary time). The simplest thing we can do is the expansion 1 + t sum_x h_x, which is just the first-order expansion of the exponential. But we can go from this kind of global stepper to a local one, where we take the product prod_x (1 + t h_x) instead.
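The difference between the global and the local first-order stepper can be checked on a small chain with dense matrices (a toy check under my own conventions, not the MPO implementation itself):

```python
import numpy as np
from scipy.linalg import expm

# three-site chain, H = h01 + h12, with a random Hermitian bond term
rng = np.random.default_rng(1)
def rand_herm(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

d = 2
h = rand_herm(d * d)
h01 = np.kron(h, np.eye(d))        # acts on sites 0, 1
h12 = np.kron(np.eye(d), h)        # acts on sites 1, 2 (overlaps at site 1)
H = h01 + h12

t = 0.01                           # stands for -i*dt (real) or -dtau (imag.)
exact = expm(t * H)
global_step = np.eye(d**3) + t * H                          # 1 + t sum_x h_x
local_step = (np.eye(d**3) + t * h01) @ (np.eye(d**3) + t * h12)
print(np.linalg.norm(exact - global_step))   # both errors are O(t^2), but for
print(np.linalg.norm(exact - local_step))    # larger chains the local product
                                             # keeps a constant error per site
```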
This is again just an approximation, but there is an advantage: while in the global term we apply the Hamiltonian globally to the state, and the error actually scales quadratically with the system size L, with the local stepper it is still a first-order approximation, but the error scales only linearly with the length of the system, the number of sites. In other words, with the local approximation we have a constant error per site, while with the global one the error per site grows with the system size. And the very neat observation one can make is that if you approximate this product as a sum of non-overlapping terms, neglecting the terms in which the h_x overlap, then this has a very nice matrix product operator representation. So if the Hamiltonian is given as a matrix product operator, we can find the MPO representation of this approximation of the exponential fairly easily. In fact, for the experts on matrix product operators: an MPO can be understood in a finite state machine form, which I do not want to go into in detail, but there is a particular way of reading matrix product operators as finite state machines, and all you need to do is change it a little, so that instead of going from the initial state to the final state, you start from the initial state and eventually return to the initial state. It turns out that if you have a Hamiltonian MPO of bond dimension D, you can write a time-evolution MPO of bond dimension D minus one. That is a quite neat method. We can try it out and compare it to the simple TEBD algorithm, and we find that for a short-range Hamiltonian the error scales very comparably to TEBD, but then we can go beyond it: we can look at Hamiltonians with genuinely long-range interactions, such as a long-range transverse field Ising model, where we can do a quench of the system, which we cannot do easily with TEBD. And we can also look at the expansion of bosonic clouds; this is something that Johannes Hauschild has a paper on, where he applied this algorithm to look at how bosonic clouds in two-dimensional lattices expand, something that was not really possible before with the standard algorithms. Good. Again, regarding time, I am not really making it to the last part, so maybe it would be best to discuss some questions if there are any. (Question from the audience, partly inaudible: if you evolve a state with the Hamiltonian for which it is a solution, does the same problem arise?) Right, if you have a state and you just evolve it with a Hamiltonian of which it is an eigenstate, I am not quite sure I am following, but if you evolve an eigenstate, clearly nothing is going to happen. And if you take a superposition of two eigenstates, say the degenerate ground states of a symmetry-broken system, and you evolve that superposition, it is still an eigenstate, so it still would not evolve. (Another question.) For which one? For the Haldane chain? You mean in this one here? Well, what I actually did here is simulate an infinite system with a large unit cell, and then I do a quench in the middle.
But on this kind of plot, you would not be able to tell whether this is a finite or an infinite system. Yes? Yes, I mean this trick that I was just putting forward. Yeah, this is actually a good question. When I am looking at the infinite system, where I say that my entire state is built from this pattern A, B, A, B, then I could not do something like this here, because here I am clearly breaking the translation invariance of my infinite system: I have a state and I apply the sigma-plus operator to one site. So to produce plots of this type, I would either use an algorithm which is infinite but has a unit cell of several hundred spins, or I would use a system with open boundary conditions. But if I study a global quench, for example, then I keep the translation invariance, so I can use this A, B pattern. Oh, for the topological order part, okay. For the topological order part, which unfortunately I did not get into in much detail, the idea is that I can read off everything by just looking at an infinite system. The defining property of those SPT phases is the edge physics. However, I take my MPS, which is defined for an infinite system, and variationally optimize it; and then I can cut the MPS open: I look at only the half chain of the MPS, by multiplying together only the matrices to the right of the cut. And then I get exactly these Schmidt states. This is what I advertised in the first lecture: if you have the canonical form, you have direct access to the Schmidt states for a bipartition of the system. And the idea for extracting all these defining properties is to look at the Schmidt states: you consider the Schmidt states at a virtual or artificial cut in the system and use them to study the edge physics. Well, it means that in this case my system is infinite. The parity of, well, no, here you have to distinguish two things. This AB pattern is basically just for the algorithm: because I have my A gates and then my B gates, the algorithm requires at least two sites in the unit cell. This is just a technicality. But if I am analyzing the topological properties of my state, that is not important. The precise recipe for extracting topological order, or these topological properties, from the MPS is a bit involved, so probably not here, but I am happy to explain it to you in detail. Well, not in general, but for this time evolution. What I am saying here is: you have this time evolution operator, and if t is very small, we can approximate the time evolution in this form here by neglecting the overlapping terms. And it turns out this expression has a very compact matrix product operator representation. In fact, if you already have an existing code that creates the matrix product operator form of H, you can just reorder the blocks and you automatically get this one. Yes, exactly: if you have an MPO for the sum over the h_x, it is easy to obtain an MPO for this expression here. It actually shrinks.
If the MPO of the Hamiltonian has bond dimension D, the MPO of the time evolution has bond dimension D minus one. And this comes from the fact that you take the same MPO except, and this is what I again find a bit lengthy to explain, there is this interpretation of the MPO as a finite state machine. A finite state machine is what you usually think of for understanding some automaton: you toss in a coin, it goes into the "accepted coin" state, and so on. We can take the same view of an MPO: the MPO at some point goes into the step of placing an operator X, then placing an operator Y, and then it goes into a final state in which it is done doing anything. That is how you encode the original Hamiltonian. If now, instead of going into the final state, you route back into the initial state, you have essentially reprogrammed your finite state machine, and you go from a bond dimension D MPO to a bond dimension D minus one MPO. How do you measure entanglement in numerics, or in the real world? Well, I think this is so far mostly an open question. There are some proposals for measuring certain Rényi entropies, Rényi entropies being entropies you can get from certain powers of the reduced density matrix. There are some proposals; I do not know how well they would actually work, but I think this is mostly still an open question. One thing you can do is measure particle number fluctuations, which give some lower moments of the entanglement, but for the entanglement itself I am not aware of any experiment that has successfully measured it. If there are no further questions, then thank you for your attention, and I hope to see you tomorrow for the tutorial, where we can explore all of this.