OK, so yeah, I want to talk about efficient simulations of low-dimensional systems. And as Roderick said, this is a continuation of what Steve White was talking about last week. So let me start by showing the agenda. My program consists of two lectures, the first one being now, the second one at around 10:30, and a tutorial with hands-on sessions where we can apply whatever I'm talking about in the first two lectures. This lecture will focus on matrix product states and how we can use matrix product states to study topological phases of matter. And since I believe that repetition is very good for learning, and also a week probably gave you a lot of time to forget what Steve White was talking about, I want to start off by giving a review of entanglement and matrix product states. That will be mostly stuff that you already heard last week. Then I want to shift to a slightly new concept, namely I want to show how we can use matrix product states for infinite systems, so basically how we can describe a translationally invariant infinite system by just keeping in mind a few matrices. And using mostly this infinite matrix product state scheme, I want to show you some examples of how we can extract the fingerprints of topological order, and also some of the defining properties, directly from those matrix product states. Let me also note that there are notes on the website, so most of the things that I talk about today are written up there. So let me now start by reviewing a few of the things that you already heard about from Steve White. We are interested in describing quantum states. We have a pure quantum state that we denote by psi, and in its most general form we can write it in this shape here. We have this psi, which has many indices; as you probably recall from Steve's lecture, this is a rank-L tensor. And this rank-L tensor describes the quantum state in terms of some local bases. Here we can think, for example, of a one-dimensional system where we just enumerate the sites from 1 to L, and on each site we have a local basis described by j_n, with j_n ranging from 1 to d. This d is what we call the local Hilbert space dimension: for a spin-1/2 system it is 2, for a spin-1 system it is 3, et cetera. So this is the most general form that we can use for writing down a many-body state. While this is super general, it has a big drawback. Namely, if we want to store the full information that's contained in a wave function, we have to keep track of this huge object, and the memory that we need to store such an object scales exponentially with the system size. So if we're just using this form, we usually can only consider a relatively small system. As you saw last week, for a spin-1/2 system we can reach, by just exactly writing down the wave function, maybe up to 40 or so spins. However, we can use some concepts that are known from quantum information to compress these states, and this is what you already learned: there's actually a lot of information contained in this way of writing down the state that we are not necessarily interested in, and there's a way that we can just compress states. The idea of how we can compress these states is related to the Schmidt decomposition of a state. So what do we do for this Schmidt decomposition? We just take our system.
So again, we just choose a one-dimensional system, and we cut it into two halves. We pick a bond, and we say that everything left of this bond is subsystem A, and everything right of this bond is subsystem B. And now we just rewrite our wave function in this form: we write the wave function with respect to a basis that we can choose for subsystem A and a basis for subsystem B, with some coefficient matrix. Just for clarity, if we have a spin-1/2 system, we could have a basis of sigma-z eigenstates here, like up, down, up, down on the left and up, down, up, down, whatever, on the right. These describe the basis for the left and for the right, and this is the coefficient matrix. And what we can do now, and I think this is something that you did already last week, is perform a singular value decomposition of this coefficient matrix. This gives us a unitary transformation of the local basis on the left and the local basis on the right to a basis in which the coefficient matrix is diagonal. That looks familiar to most of you? So we have this particular form, the so-called Schmidt decomposition, in which we decompose the quantum state into a superposition of product states with respect to subsystem A and subsystem B. Good. This will turn out to be very useful for compressing quantum states. Let me first ponder a quantity which, again, you heard about last week: the so-called entanglement entropy. The entanglement entropy is a measure for how much two subsystems are entangled. Again, entanglement is a concept that you heard about already last week. The idea is that if a state is not entangled, then the state can be written as a simple product state between subsystem A and subsystem B. That is, in this decomposition that I'm writing here, we would have only a single term: a sum which consists of only one term, whatever is the state on the left times whatever is the state on the right. However, if we have entanglement between the left and the right half, then automatically we have several terms in this sum, because the state defined only on the left half, say, is in a mixed state. And the measure for how entangled a state is, is the entanglement entropy. Roughly we can say that the more entangled a state is, the more terms we need to keep in this sum to have a good representation of the state. In fact, and again this is probably what you learned last week, this way of writing down the entanglement entropy is exactly equivalent to looking at the von Neumann entropy of the reduced density matrix. A nice exercise to visualize this: if you use this representation of the state and write down the reduced density matrix, you immediately see that the Schmidt states are eigenstates of the reduced density matrix, with eigenvalues that are just lambda-alpha squared. Good. So this defines what a Schmidt decomposition is and relates it to the entanglement entropy. The Schmidt decomposition will be something very useful for compressing quantum states in a moment.
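As a small aside for the tutorial: here is a minimal numpy sketch of the Schmidt decomposition just described. We reshape the coefficient vector into the matrix C_ij, run an SVD, and read off the Schmidt values and the entanglement entropy. The system size and the random test state are illustrative choices of mine, not anything from the lecture.

```python
import numpy as np

# Minimal sketch: Schmidt decomposition of a random 8-spin state,
# cut into two halves of 4 spins each (dimension 2**4 = 16 per half).
L, d = 8, 2
psi = np.random.randn(d**L) + 1j * np.random.randn(d**L)
psi /= np.linalg.norm(psi)

# Reshape the coefficient vector into the matrix C_{ij} discussed above,
# with i labeling the basis of subsystem A and j that of subsystem B.
C = psi.reshape(d**(L // 2), d**(L // 2))

# Singular value decomposition: C = U @ diag(lam) @ Vh.
# The singular values lam are exactly the Schmidt values lambda_alpha.
U, lam, Vh = np.linalg.svd(C)

# Entanglement entropy S = -sum_alpha lambda_alpha^2 log lambda_alpha^2.
p = lam**2
p = p[p > 1e-16]
S = -np.sum(p * np.log(p))
print("largest Schmidt values:", np.round(lam[:5], 4), "...")
print("entanglement entropy:", S)
```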
Let me, for this, point out a very important property of ground states of local Hamiltonians: they fulfill the so-called area law. In one-dimensional systems, the area law means that the entanglement entropy for this kind of bipartition into two half chains becomes independent of the system size. Why is this very remarkable? If we would not look at the ground state of a local Hamiltonian but instead just take a random state, just randomly define a state in the Hilbert space of the system, and do this kind of bipartition, we would actually find that the entanglement entropy is proportional to L. Drawing a picture like this for a random state would mean that the spins in our system, say if we're looking at a spin system, are randomly entangled. The spins in a random state don't know whether they're close to a cut or very far away; they are just entangled equally. So as we grow the system, all spins that we add contribute to the entanglement, and thus the entanglement entropy grows linearly with the length of the system. For ground states of local Hamiltonians, however, we can have this cartoon picture in mind. We have a local Hamiltonian, and the ground state will minimize the energy of this local Hamiltonian, which means we will mostly have local fluctuations in the state. This is what I visualize here by these bonds. If you want to think about a system with charges in it, for example, we could say that the Hamiltonian can lower the kinetic energy by having particles hop between neighboring sites. Then we would find that for a charge sitting somewhere near the boundary, there's a good probability that some quantum fluctuation takes it to the other side; but if we have a charge sitting super far away from this cut, the likelihood that that particular charge fluctuates to the other side of the cut is very small. Good. With this I just want to give you some intuition, a cartoon picture for what ground states of local Hamiltonians look like, and this motivates the area law; it gives us some idea why we find this area law. Though these are just simple pictures, there are actually relatively rigorous proofs, at least for one-dimensional systems, that this area law holds. As far as I know, it has only been strictly proven for one-dimensional gapped systems, but it is conjectured for higher-dimensional systems as well. Good. So this is... yes? Why do I put "gapped" here? Well, the reason is that if the system is not gapped, and we have, for example, a one-dimensional gapless system, we would have corrections to the area law. If we have a one-dimensional critical system, the entanglement entropy will actually grow logarithmically with the length of the system. The picture that I've drawn here is, in some approximation, still true, except that what I'm trying to draw here is something that decays exponentially fast, giving us this area law where we have a constant. If we had a system with algebraic correlations, this would just decay algebraically, and we would have some longer-range entanglement. All this also has a lot of consequences for the way that we can compress states, but we come to this in a moment. OK. Yes? Long-range?
Well, once the Hamiltonian is not local, this picture no longer applies. This is a very good question, because the locality of the Hamiltonian is extremely important for all the arguments that I'm making here. If you take the extreme case of a Hamiltonian that has random couplings between all L sites, then we wouldn't even know how to define dimensionality. So everything I'm saying here relies on having Hamiltonians that are either local or have long-range interactions that decay very quickly. Good. Now, I already pointed out several times that this is a very special property. As I said, if I just take my vast many-body Hilbert space and pick a random state, it has a volume law. The part of the Hilbert space that has this area law is like a tiny, tiny corner of the Hilbert space, and the proportion that I'm drawing here is definitely wrong, because this kind of red dot you wouldn't even be able to see with your bare eyes. These states have this particular property that they have an area law. And this is now something very powerful that we can say: we know from the area law that all ground states of gapped, local Hamiltonians will be in this very tiny corner of the Hilbert space. This is a very useful thing to have, because if we have a Hamiltonian and we want to find its ground state, in principle, and this is what we are doing when performing exact diagonalization, we are constructing this huge many-body Hilbert space that just kills the computer immediately once we go to more than 40 spins. However, we are only interested in this tiny corner if we are only interested in the ground state properties. And this is what matrix product states can do for us: they provide a way to describe this tiny corner efficiently. So if we are interested in the ground state, we can just use this as a sort of variational space which we can explore, and we know that we're going to find the correct ground state. Let me demonstrate this argument again. We pick a random state from the Hilbert space; in fact, for those who like to play with the computer, you can just generate a random vector and perform the Schmidt decomposition of this random vector for a given bipartition. What you're going to observe is this: the Schmidt values are roughly equal, all of them have roughly the same size. That means this state is very much entangled, because you need many, many terms to express this random state. Let us now do the same experiment, but for the ground state of a local Hamiltonian. Here I find the ground state of the transverse field Ising model, which we're going to discuss later, and do a Schmidt decomposition as shown here. What you see is that the Schmidt values decay extremely rapidly, so taking into account only 20 or so of these Schmidt states gives us an extremely good approximation of the state. Note that there is a logarithmic scale on this axis here. So just looking at this picture, we see that there's a very good way of approximating a state by just truncating this Schmidt decomposition.
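For those who want to try this on a computer before the tutorial, here is a sketch of exactly this experiment: the Schmidt spectrum of a random state versus that of a transverse-field Ising ground state obtained by exact diagonalization. The chain length and the couplings (J = 1, g = 1.5) are my illustrative choices, not the values used on the slide.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

L, d = 12, 2
sx = sp.csr_matrix([[0., 1.], [1., 0.]])
sz = sp.csr_matrix([[1., 0.], [0., -1.]])
id2 = sp.identity(2, format="csr")

def site_op(op, n):
    """Embed a single-site operator op at site n of an L-site chain."""
    ops = [id2] * L
    ops[n] = op
    out = ops[0]
    for o in ops[1:]:
        out = sp.kron(out, o, format="csr")
    return out

# H = -J sum_n sz_n sz_{n+1} - g sum_n sx_n   (J = 1, g = 1.5 here)
H = sum(-1.0 * site_op(sz, n) @ site_op(sz, n + 1) for n in range(L - 1))
H += sum(-1.5 * site_op(sx, n) for n in range(L))

E0, psi0 = spla.eigsh(H, k=1, which="SA")   # ground state via Lanczos
psi_rand = np.random.randn(d**L)
psi_rand /= np.linalg.norm(psi_rand)

for name, psi in [("ground state", psi0[:, 0]), ("random state", psi_rand)]:
    lam = np.linalg.svd(psi.reshape(d**(L // 2), d**(L // 2)),
                        compute_uv=False)
    print(name, "largest Schmidt values:", np.round(lam[:6], 4))
```

The ground-state spectrum should drop by orders of magnitude within a handful of values, while the random state's spectrum stays nearly flat.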
One illustrative example that I like to show is this one here. Namely, remember from earlier that we have this coefficient matrix C_ij, and C_ij describes our quantum state in terms of a local basis on the left and a local basis on the right. Now we can instead imagine that the C_ij represent a photograph: the C_ij could be a matrix that encodes the gray scale of the pixels. If there's a small number, it means the pixel is mostly black, and if it's a large number, it's mostly white. Good, so we can now encode a photograph, instead of a quantum state, into this coefficient matrix. And then we can do the following. We perform a Schmidt decomposition, or equivalently a singular value decomposition, of this matrix, which decomposes it into the product of a unitary times a diagonal times a unitary matrix. And then we can truncate this decomposition. We can say that, instead of taking into account all singular values, which would give the exact matrix back, we only keep those with the largest magnitude. And then we can just see with our eyes how good this approximation is. So let's do this. We take this picture, which has roughly 1,200 pixels along one side, so the full dimension would be 1,200. Now we keep only four out of these 1,200 states, and this is what we get: roughly, we don't see much. But if we take 16 out of these 1,200, we get already a pretty decent approximation; at least we can see that we are dealing with a bridge. If we take 64, we already see most of the details. So what we see here is that just by taking into account 64 out of these 1,200 singular values, we get a pretty good approximation of our original image, and the rest just adds some details. This is a visual way of seeing how we can compress states, or photos. And there's actually a funny fact that some images are less entangled than others. There's one artist called Mondrian, a Dutch artist, who painted these pictures here, and these are, to a very good approximation, product pictures. They can be approximated by only one Schmidt value, except for these gray boxes; if it were not for these gray boxes, these would be exact product pictures. Good. So much about the images; let us now come back to the quantum states. There's a way that we can use these insights to compress states, and roughly speaking, the basic idea is to just take a quantum state that we can write in this form. Actually, I have a question: how familiar are you with this graphical way of representing tensors? Because this will be relatively important later on in my lecture. Is this familiar to you? Who is not familiar with this way of writing the state? Okay. So Steve White did not talk about this way of representing tensors? Okay, so maybe let me just repeat this; I like repetition. Good. Let us come back to what I said earlier, because at the very beginning of my talk I was talking about the most general way that we can represent a quantum state, which is by writing the state in terms of these coefficients in the many-body wave function. If we just store those coefficients in front of the many-body basis, then we keep the full information of the state, but with the drawback that this is a monstrously big thing that we don't want to deal with.
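As a side note, the photo-compression experiment from above is only a few lines of numpy. Since I can't ship the Golden Gate Bridge photo with the notes, the sketch below uses a synthetic grayscale image; any 2D array of pixel values works the same way.

```python
import numpy as np

# Illustrative sketch: "compressing a photograph" by truncated SVD.
# A synthetic radial pattern stands in for a real photo here.
ny, nx = 200, 300
y, x = np.mgrid[0:ny, 0:nx]
img = np.sin(np.hypot(x - nx / 2, y - ny / 2) / 6.0)

U, s, Vh = np.linalg.svd(img, full_matrices=False)

for chi in (4, 16, 64):
    # Keep only the chi largest singular values.
    approx = U[:, :chi] @ np.diag(s[:chi]) @ Vh[:chi, :]
    err = np.linalg.norm(img - approx) / np.linalg.norm(img)
    print(f"chi = {chi:3d}: relative error {err:.4f}")
```

The relative error drops quickly with chi, which is the numerical counterpart of the bridge becoming recognizable at 16 and detailed at 64 singular values.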
And now, when we deal with these tensor product states, or matrix product states, there's a very convenient way of writing these tensors, and this is the graphical notation. Since it's not clear to everyone, let me briefly review these ideas. When we have some object, which could be just some vector v, we have one index; the way that I'm going to write such objects from now on is just a circle with a line coming out, and this line is that index. If we have a matrix M_ij, this is a circle with two lines coming out, denoting the two indices. The last one: if we have something with three indices, T_ijk, we draw a circle with three lines coming out, i, j, k. Good. At this level, this is only mildly helpful, because we don't have to write the indices. But where this becomes extremely useful is once we talk about tensor contractions and operations that we do with these objects. For example, when writing a matrix-matrix product, we have M_ij N_jk with a sum over j. The way that we represent this: we have two objects, each of them with two legs coming out, and the tensor contraction, the sum over this index, we draw by connecting the objects by a line. For things this simple, it can still be neatly written down both ways, but once the objects become bigger, if we for example have some network which might look something like this, writing it as a sum will already become sufficiently messy, and then it's much more elegant and neat to work with the graphical notation. So, using this notation, the coefficient in this many-body wave function is a rank-L tensor, which just corresponds to a blob with L legs sticking out. Let us now come to the idea of how we can compress the state. We can do a series of Schmidt decompositions. We start from the original state, the state that we had here, and perform a Schmidt decomposition between the first spin and the rest. Then we can write down the Schmidt decomposition where, in this case, this A exactly corresponds to the unitary matrix that we get from our singular value decomposition; then we have the Schmidt values here, and this is the right Schmidt state. And then we take our right Schmidt states and perform another decomposition here. So we can successively do Schmidt decompositions of this state and just rewrite our state. What we've achieved is that we started from a state that was a rank-L tensor, and we decomposed it into a product of L rank-3 tensors. Here I'm using exactly this notation. Yes? Oh yes, you can also think of a circle; I just use triangles, circles, or squares. The reason that I'm using triangles here, there's no particular reason actually, it's just that I liked it better, I guess. The shape of these blobs has no meaning; here I use a square, and triangles, and circles, so you'll probably see different variations throughout. The only important thing is how many legs are sticking out.
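In code, this graphical notation maps directly onto index contraction, for example via np.einsum: every leg is an index label, and joining two legs by a line means summing over a shared label. A small sketch:

```python
import numpy as np

M = np.random.randn(3, 4)
N = np.random.randn(4, 5)

# Matrix-matrix product M_ij N_jk: two blobs joined by the j-line.
P = np.einsum("ij,jk->ik", M, N)
assert np.allclose(P, M @ N)

# A rank-3 tensor contracted with a vector on its middle leg:
T = np.random.randn(3, 4, 5)
v = np.random.randn(4)
w = np.einsum("ijk,j->ik", T, v)
print(w.shape)  # (3, 5): two legs remain open
```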
Good. Now we see that if we do this decomposition and at each step we keep the full Schmidt spectrum, if we keep all states, then this transformation from here to here is exact, but we also haven't gained anything, because as we go to the center of the system, the number of Schmidt states that we have to take into account grows exponentially. So instead of having a rank-L tensor with an exponential number of entries, we just traded it in for a product of exponentially big matrices. Okay, so here we just did a rewriting of the state. But, and this is now the motivation for matrix product states, at each step we can have a look at these Schmidt values, these lambdas, and we can say that if they are really small, we just neglect them. We expect this to be the case for ground states of local Hamiltonians, and then we actually have a way to compress states. This is already a helpful thing to do if you have maybe limited time to use a very big computer: you could do exact diagonalization on your favorite problem, get this huge object, and then run this sort of compression algorithm by successively doing Schmidt decompositions and just forgetting about the information that's not relevant. Then you can take home this form of the state, where you just have these rank-3 tensors, knowing that after multiplying all these rank-3 tensors you get your state back. This by itself is not the most useful thing to do, but it's a way to think of matrix product states as something that can efficiently compress states, states that have only low entanglement. So this is the idea of matrix product states: as I demonstrated both with this photograph of the Golden Gate Bridge and with this compression algorithm, we have good reason to believe that the amplitudes of a slightly entangled many-body wave function can be efficiently represented in this form. What we have achieved is that we started from an object that has 2-to-the-L dimensions; oh sorry, that 2 should be a d here, I'm going to fix it later on. So if we start from something that lives in a d-to-the-L dimensional Hilbert space, so we need a lot of memory to store this information, we cooked it down to something that's L times d times chi squared, where I assume that I keep only chi singular values, or Schmidt values, when compressing the state. And what I've just argued here, based on these more intuitive reasonings, can actually be proven mathematically: one can show that states with low entanglement can be represented efficiently in this form. Since this has been discussed last week, let me check: who thinks that it's relatively clear what a matrix product state is? Okay, this is probably enough. Good, so we have a way to represent quantum states efficiently.
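Here is a sketch of the compression algorithm just described: successive Schmidt decompositions that turn a full coefficient vector into L rank-3 tensors, truncating to at most chi_max Schmidt states at every bond. The function name and the 1e-12 cutoff are my own choices for illustration.

```python
import numpy as np

def state_to_mps(psi, L, d=2, chi_max=16):
    """Turn a full coefficient vector into L rank-3 tensors A[a, j, b]
    by sweeping SVDs from left to right, truncating at every bond."""
    As = []
    R = psi.reshape(1, -1)            # bond dimension 1 on the far left
    for n in range(L):
        chi_l = R.shape[0]
        R = R.reshape(chi_l * d, -1)
        U, lam, Vh = np.linalg.svd(R, full_matrices=False)
        chi = min(chi_max, np.sum(lam > 1e-12))   # truncate small values
        U, lam, Vh = U[:, :chi], lam[:chi], Vh[:chi, :]
        As.append(U.reshape(chi_l, d, chi))       # rank-3 tensor for site n
        R = np.diag(lam) @ Vh          # push the remainder to the right
    # the leftover 1x1 R carries the overall norm and phase
    return As

L, d = 10, 2
psi = np.random.randn(d**L)
psi /= np.linalg.norm(psi)
mps = state_to_mps(psi, L, d, chi_max=8)
print([A.shape for A in mps])   # bond dimensions capped at chi_max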
This concept that we have for quantum states, writing a rank-L tensor as a product of rank-3 tensors, can be generalized to operators. We can think of a many-body operator acting on our Hilbert space as a rank-2L tensor, where we have here the indices that we act on and here what we get out. And again, we can just play the same game that we did for a quantum state and rewrite it as a product of, in this case, rank-4 tensors. Everything that we have for states, we can do for operators. And many of the important operators that we're interested in, for example local Hamiltonians, can be represented exactly in this form with a relatively small bond dimension. If you, for example, take the Hamiltonian of the Heisenberg model, you can represent it as a matrix product operator where the bond dimension is just five. Good. And with these tools that you learned about last week from Steve White, we can then introduce the DMRG algorithm. We can see the DMRG algorithm as a sort of variational approach, a variational optimization of these tensors. With the reasoning that I gave before, I hope that I could convince you that the states we're interested in, namely the ground states of local Hamiltonians, in particular gapped local Hamiltonians, can be efficiently represented in terms of matrix product states. DMRG is then the method that allows you to find those states. And using this graphical representation, DMRG looks very simple; it's just this here. We take our Hamiltonian, which we represent in terms of this matrix product operator, this product of rank-4 tensors, and we take the expectation value of this Hamiltonian with respect to this trial state. This gives us the energy. Now what DMRG basically does is: it fixes all the tensors here and optimizes just one tensor to minimize the energy. Once it has found the minimum, which can be done using some linear optimization techniques, we shift to the next matrix and find the minimum there. This way one sweeps back and forth through the system until it converges to the lowest energy state that we can find. This is then the best approximation of the ground state that we can get for a matrix product state with a given bond dimension. What is that again? This one here? Well, this one here is basically, again I assume this has been discussed by Steve White, the following: if you have a matrix product operator, you need to terminate it on the left and on the right, and this is done by doing this. Because if you just have your matrix product operator, you have some dangling bond index here and here, and you want to terminate it somehow; basically you want to contract with some vector v-left here and v-right there, and this is encoded in this guy here. And the reason that this is connected up here: for the simple case of a one-dimensional system, this would be just some identity, or you could in fact forget about this one and have just an identity here. Well, there are various kinds of DMRG, but at least conceptually, the way that I'm writing it down here, the simplest version is to optimize one matrix at a time. But we can, as you said, also take two, three, or four matrices and optimize them at a time. In fact, choosing two matrices at a time instead of one has certain advantages for the algorithm, but also certain disadvantages; there are trade-offs here.
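Before we come to the update schemes in detail, here is the bond-dimension-five matrix product operator for the spin-1/2 Heisenberg model mentioned above, written in one common convention (the ordering of the bond index and the boundary vectors vary between references), together with a check against an explicitly built three-site Hamiltonian.

```python
import numpy as np

# H = J sum_n S_n . S_{n+1} as an MPO with bond dimension 5.
J = 1.0
Sp = np.array([[0., 1.], [0., 0.]])
Sm = Sp.T
Sz = np.array([[0.5, 0.], [0., -0.5]])
I2 = np.eye(2)

# W[a, b, s, t]: one rank-4 MPO tensor, 5x5 in the bond indices.
W = np.zeros((5, 5, 2, 2))
W[0, 0] = I2
W[1, 0] = Sp
W[2, 0] = Sm
W[3, 0] = Sz
W[4, 1] = J / 2 * Sm
W[4, 2] = J / 2 * Sp
W[4, 3] = J * Sz
W[4, 4] = I2

# Contract a chain of W's with boundary vectors selecting the last row on
# the left and the first column on the right; check against L = 3 sites.
L = 3
vl = np.zeros(5); vl[4] = 1.0
vr = np.zeros(5); vr[0] = 1.0
T = np.einsum("a,abst->bst", vl, W)
for _ in range(L - 1):
    T = np.einsum("bst,bcuv->csutv", T, W).reshape(
        5, T.shape[1] * 2, T.shape[2] * 2)
H_mpo = np.einsum("bst,b->st", T, vr)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

H = J * sum(kron3(*ops) for ops in
            [(Sz, Sz, I2), (I2, Sz, Sz),
             (0.5 * Sp, Sm, I2), (0.5 * Sm, Sp, I2),
             (I2, 0.5 * Sp, Sm), (I2, 0.5 * Sm, Sp)])
print(np.allclose(H_mpo, H))  # True: the MPO reproduces H exactly
```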
And, well, let me just comment briefly on this. When you are doing this so-called single-site update DMRG, the way that it is shown here, it has the advantage that the scaling is favorable. The effective Hamiltonian that you write down for a single site, again using this graphical notation, so this would be the effective Hamiltonian in the DMRG language that you have to deal with, has a dimension of chi squared times d. If you were to do the two-site update, you would have two sites here, and the dimension of the effective Hamiltonian would be chi squared times d squared; so that one is slower in general. However, if you just use a single-site update, certain problems can arise, because it might get stuck more easily, so one needs to be a bit more careful. Right, exactly, this is the point: if you are using this approach here and you start from a state with a relatively small bond dimension, the bond dimension would never grow, which means you would have to apply some other algorithm to seed some entanglement into the state. That's correct. So this one is slightly faster if you do it carefully, but it's also a bit more dangerous because it might get stuck. The reason why I'm showing the simplest one is that here we have a very simple picture of DMRG and we can get an idea what the main ideas are; the main principle of fixing everything in the matrix product state except a few matrices, in this case one, or maybe two, or maybe three, remains the same. Okay, I can very briefly elaborate a little bit on this question. So it's exactly as you said: first, we start from a guess, maybe a random state. You start the algorithm somewhere, maybe somewhere towards the left, and then you construct the so-called effective Hamiltonian. It has indices, I just write them down, alpha, beta, i and alpha prime, beta prime, i prime. I contract these tensors together, these being the tensors from my matrix product state and these being the tensors from my matrix product operator, and after I contract all those, I get something that I can write down as a matrix acting on the space spanned by the states labeled by alpha, i, and beta. Now I can just use a standard exact diagonalization technique to find the ground state of this matrix; the ground state satisfies H-effective times A-tilde equals E-naught times A-tilde, so it is described by this matrix A-tilde on this particular bond. Once I've done this optimization, I plug in this matrix A-tilde here, and A-tilde star there, and I move on to the next bond and play the same game again: I construct my effective Hamiltonian, plug it into my favorite sparse matrix diagonalization program, find the ground state, plug it in, and then I just do this sweeping algorithm. And as pointed out, the problem with the single-site update is that the bond dimension will never increase: if I start with a guess of an MPS that has a bond dimension of five, I will always be stuck in this space.
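In index form, the single-site step just described looks like the following sketch. The environments and the MPO tensor here are random placeholders (in a real DMRG code they come from contracting the rest of the network), but the bookkeeping of building the effective Hamiltonian on the (alpha, i, beta) space and diagonalizing it is the same.

```python
import numpy as np
import scipy.sparse.linalg as spla

chi, d, D = 8, 2, 5
Lenv = np.random.randn(chi, D, chi)   # legs: alpha, MPO bond, alpha'
Renv = np.random.randn(chi, D, chi)   # legs: beta,  MPO bond, beta'
W = np.random.randn(D, D, d, d)       # legs: left bond, right bond, i, i'

# H_eff[(alpha, i, beta), (alpha', i', beta')]
Heff = np.einsum("awx,wvij,bvy->aibxjy", Lenv, W, Renv)
Heff = Heff.reshape(chi * d * chi, chi * d * chi)
Heff = 0.5 * (Heff + Heff.T)          # symmetrize the random toy example

# Sparse eigensolver for the lowest eigenpair, as in the sweep.
E0, A = spla.eigsh(Heff, k=1, which="SA")
A_tilde = A[:, 0].reshape(chi, d, chi)   # the optimized tensor on this site
print(E0[0], A_tilde.shape)
```

Note how the matrix dimension is chi squared times d, exactly the scaling argument from above; a two-site update would enlarge it by another factor of d.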
And the details I'm not going to explain now, because then I could probably talk for the rest of the day about details of DMRG. So, unless there are further questions about the review part of DMRG, I want to come to something new now. Good. The idea that I want to discuss in a bit more detail now is that we can do DMRG directly on an infinite system. Conceptually, not much is changing; it's just a technique that I find helpful for simulations, and this is what I want to discuss in more detail now. So let us assume that we have an infinite system, and for now it's completely translationally invariant, so every site is supposed to be the same. This idea can be very easily generalized to systems with a unit cell; we could also say that we have a translationally invariant system with a unit cell of n sites. The main point is just that we remove these indices: in the finite-size matrix product state we have these indices here enumerating the matrices, and now I'm saying that I'm looking at states that have the same matrix everywhere. That's already the main idea, and it's very simple; I want to show how we can actually work nicely with these states. Because you see the problem, right? If I say I'm working with an infinite system where on every site we have the same matrix, on the one hand it's really nice because we are losing this factor L: instead of having L times d times chi squared variational parameters, we now only have d times chi squared. That's nice. But the question we might ask is: if I, for example, have a Hamiltonian, how do I actually calculate the energy? Or if I want to measure some local observable, how can I do this? Because in principle I would have to multiply infinitely many of those matrices before I can read off anything. I want to use this as a sort of excuse to explain a little bit more about some arithmetic with matrix product states. One important insight is that for a matrix product state, the matrices are not uniquely defined. If I have a matrix product state describing a particular state, then I can construct another matrix product state by transforming my matrices: I have here A-tilde, and these A-tilde are given by X times A times X-inverse, with X some invertible matrix. And what we notice is that if we write down the matrix product state with the A-tildes, we find exactly the same state, because the X's just cancel each other, right? By this simple reasoning, we notice that the matrix product state representation is not unique. Good. Let us actually use this for our benefit: we can find a representation of a matrix product state that will be extremely useful. For this we use the freedom we have in choosing our matrices by choosing them such that the bond index directly corresponds to the Schmidt decomposition. Recall, we do a Schmidt decomposition of our system into a left part and a right part by cutting the system at a given bond, in this form here. And also recall that the Schmidt states form an orthonormal basis: the Schmidt states for different alphas are orthogonal. And this is what we want to do.
We want to choose our matrix product state representation in such a way that the bond index corresponds directly to a Schmidt decomposition. That is the one step, and the second step is that I want to introduce a slightly different, but I think very useful, way of writing matrix product states. The idea is the following. We write the tensors in our matrix product state as a product of a diagonal matrix lambda-alpha-beta, a diagonal matrix that contains the Schmidt states — sorry, the Schmidt values — and some tensors Gamma-alpha-j, which relate the local basis and the Schmidt basis. And then we write down our state in this form: we have these lambda tensors on the bonds, and these lambda tensors carry the Schmidt values for a Schmidt decomposition of the matrix product state at that given bond, and these Gamma tensors relate the Schmidt basis and the local basis. And again, this is something we can always do: we can always split a matrix A into a product of a diagonal matrix with the Schmidt values and a matrix Gamma. Good. And the particular way that we want to choose the matrix product state is such that we can just cut the matrix product state at a given bond and automatically get the Schmidt decomposition. Let me show this graphically. We have our state psi represented as a matrix product state where we have gamma, lambda, gamma, lambda, gamma, lambda, gamma, lambda. And now we could say that, well, oh, I forgot these symbols, let us take everything left of here. We have alpha here; alpha, beta, sorry, there we have alpha because this is just a diagonal matrix. And then this is exactly the Schmidt decomposition: these here are a matrix product state representation of the Schmidt states on the left, and everything here is the Schmidt state alpha to the right. So this is a quite convenient way of writing down the matrix product state, such that by multiplying up part of these matrices we get exactly the Schmidt states for a bipartition at a given bond. Good. From this we now recognize a certain condition for our matrix product state, and this comes from the orthonormality of the Schmidt states. We know that the Schmidt states are orthonormal, so the overlap of alpha prime with alpha is just a Kronecker delta, and we want the product of those two matrix product states to also give a delta, right? You recognize that what we are doing here is just the scalar product of two matrix product states. Does this make sense? So now we can — some of the color coding here doesn't quite come out right — define the following object, which we call the transfer matrix. For the transfer matrix we take this part here, gamma lambda together with its complex conjugate, gamma star lambda. This is what we call the transfer matrix, and it has indices alpha, alpha prime, beta, beta prime. Okay, so this is the transfer matrix for a given matrix product state. And note: the product of those Schmidt states, written in terms of these matrix product states, corresponds to multiplying many, many of these transfer matrices.
If we want this to be true for an infinite system, we want the product of those many transfer matrices to eventually give us just the identity. So maybe I have a question for you: if I multiply by a matrix over and over again, applying this operator to some vector, what is left? Right, exactly. If we multiply by a matrix again and again, this is the so-called power method, which you can use to find the dominant eigenvector of a matrix. Applying this idea here, we see that in order for a matrix product state to produce orthonormal Schmidt states, we want the dominant eigenvector of the transfer matrix to be the identity, and we want this to be true to the right and to the left. Using the graphical representation, we find a condition for our matrix product state, which we can call the canonical form: the left and the right transfer matrices have a dominant eigenvector which is the identity. This actually defines our matrix product state up to some overall phases. Because, as I mentioned previously, the matrices of a matrix product state are not uniquely defined, so we have a lot of freedom; but if we say that a matrix product state is given in this canonical form, it is actually uniquely defined up to some phase factors. And this is again what I said, now written in this formula here: the left and the right transfer matrices have a dominant eigenvalue one, and the corresponding eigenvector is the identity. Good. And, yes? Okay, good question, I was maybe too fast on this. This object here, as you said, is a four-index object, but I can in principle just reshape it as a matrix. Using my favorite graphical representation, I have some object with four indices, but I can group these indices together, and by this it becomes a regular matrix. And I will find that the corresponding eigenvector is the identity here. It's just merging indices: you take two chi-dimensional indices and transform them into one chi-squared-dimensional index. Graphically this is nicely shown here: you have this object, which you think of as a matrix applied to a vector, and this vector is just the identity. Yeah. After these rather formal discussions, I want to show some examples. Let us start with something extremely simple: an Ising ferromagnet. The Ising ferromagnet is clearly just a product state. So what would be the bond dimension that we need to represent this state exactly? If we have a product state, it's one. And the local dimension would be two, right? Because it's just a local degree of freedom of up or down. In this case it's very simple, we have just simple numbers: we have a Gamma-up, which is one; we have a Gamma-down, which is zero; and because the state is normalized, on each bond we just have one Schmidt value, which is one. So this represents our trivial product state, and clearly it fulfills this condition of being in the canonical form.
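Coming back to the power-method argument from a moment ago, here is a quick numerical illustration: repeatedly applying a matrix to a vector and renormalizing converges to the dominant eigenvector. A symmetric random matrix keeps the example well-behaved.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = (A + A.T) / 2                    # symmetric, so eigenvalues are real
v = rng.standard_normal(6)

for _ in range(500):                 # power method: multiply and renormalize
    v = M @ v
    v /= np.linalg.norm(v)

w, V = np.linalg.eigh(M)
v_dom = V[:, np.argmax(np.abs(w))]   # eigenvector of largest |eigenvalue|
print(np.allclose(np.abs(v), np.abs(v_dom), atol=1e-8))  # True, up to sign
```

This is exactly what happens to the boundary vector when infinitely many transfer matrices are multiplied together: only the dominant eigenvector survives, and the canonical form demands that this eigenvector be the identity.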
Let's now move on to a slightly more complicated state, and in fact this state, the so-called Affleck-Kennedy-Lieb-Tasaki (AKLT) state, is sort of the mother of matrix product states. Even before matrix product states were formally introduced, Affleck, Kennedy, Lieb, and Tasaki wrote down a model whose ground state is an exact matrix product state; they didn't call it that, I think. The idea is that we take a simple spin-1 Hamiltonian: the first part is just the spin-1 Heisenberg Hamiltonian, and the second part is a biquadratic term. It turns out that if you fine-tune this Hamiltonian to the point where the prefactor of the biquadratic term is one third, the Hamiltonian can be rewritten as the sum of projection operators, acting on each bond, onto the total spin S equals two state. If you have some spare time, you can actually show that this Hamiltonian exactly corresponds to this sum of projectors. Is that essential? Yeah, the thing is that if you have this prefactor of one over three, you can show that this exactly corresponds to a projector onto the S equals two state. That's correct. Well, again, I could talk for a long time about this model, but the main point is: if you look at the phase diagram and set the prefactor to, say, one over four instead, the model stays in the same phase, and a lot of the physics remains exactly the same as what's predicted by the simple state that I'm going to discuss in a minute. However, if you tune it to one over four, this simple matrix product state solution will not be an exact eigenstate anymore. And again, it's a nice exercise to sit down and show that this Hamiltonian, plus a constant of two over three, I think, plus or minus, is exactly the sum of projection operators onto the S equals two state. Good, so we have this Hamiltonian, and what Affleck, Kennedy, Lieb, and Tasaki managed to show is that it has a very simple solution, and the solution is the following. Take each of these spin-1 sites, so each of these circles is now a spin 1, and split each spin 1 into two spin-1/2s, these dots here. These virtual spin-1/2s form singlets with the neighboring site. This is the cartoon picture of the state. This state, as I said, has an exact matrix product state representation. We can again try to figure out: what is the physical dimension of this matrix product state? It's a spin-1 degree of freedom per site, which makes three states, so the small d would be three. And what is the bond dimension? We see that at each bond we cut a singlet, and if we do a Schmidt decomposition of a singlet, we need how many states? Two states. So we already know, just from the geometry of how I introduced the state, that the bond dimension will be two and the physical dimension is three. We can then sit down and, based on this picture that I've shown, construct a matrix product state that has exactly this property that we have singlets between neighboring sites; that's very simple. And then we can apply the ideas that I showed earlier to bring it into canonical form. And this is what I found; please sit down and check that this is actually correct.
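If you'd rather let the computer do the sitting down: below are the standard textbook AKLT tensors in canonical form (d = 3, chi = 2); signs and ordering conventions may differ from the ones on the slide. The check confirms that the transfer matrix has the identity as dominant eigenvector with eigenvalue one, with the remaining eigenvalues at minus one third.

```python
import numpy as np

Gp = np.sqrt(4 / 3) * np.array([[0., 1.], [0., 0.]])   # Gamma for Sz = +1
G0 = np.sqrt(2 / 3) * np.array([[-1., 0.], [0., 1.]])  # Gamma for Sz =  0
Gm = np.sqrt(4 / 3) * np.array([[0., 0.], [-1., 0.]])  # Gamma for Sz = -1
Gamma = np.array([Gp, G0, Gm])              # shape (d, chi, chi)
lam = np.eye(2) / np.sqrt(2)                # both Schmidt values 1/sqrt(2)

# Transfer matrix T[(a,a'),(b,b')] = sum_j (Gamma lam)[a,b] (Gamma lam)*[a',b']
B = np.einsum("jab,bc->jac", Gamma, lam)
T = np.einsum("jab,jcd->acbd", B, B.conj()).reshape(4, 4)

print(np.round(np.linalg.eigvals(T), 4))    # expect {1, -1/3, -1/3, -1/3}
# Dominant eigenvector is the identity: T @ vec(I) = vec(I)
print(np.allclose(T @ np.eye(2).reshape(4), np.eye(2).reshape(4)))
```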
But what we then have is, first of all, we see that the state is certainly normalized, because if I take the trace of lambda squared, I get one, as it's supposed to be. And these matrices are supposed to be chosen in such a form that they actually fulfill this condition for the canonical form. So if you plug them into this equation, if you take those matrices and contract them to get the transfer matrix, you should find a matrix whose dominant eigenvector is the identity, with an eigenvalue of one. Good. So now we have some examples of matrix product states. Yes. Let me now come to how we can actually use these matrix product states to do some algebra, and for this, I want to give you some idea of how we do these calculations. Right, so this is true: if you find a matrix product state, or someone gives a matrix product state to you, you can always transform it to this particular form. Yes; I'll come to a small exception in a second, but generically, if you have a state, you can always use this machinery to bring it to this form. Good. And now I want to discuss the following. So far, I basically showed that matrix product states are an efficient representation for these area-law states, and I introduced this canonical form. I now want to demonstrate that this is actually a quite useful form to have your MPS in, and I want to show you how we can evaluate expectation values with it. And this I'm doing because this arithmetic with infinite MPS can be, I think, quite useful. Let me do the following. We have our one-dimensional system again, and we choose a particular representation of our state: we do two Schmidt decompositions, one at this bond and one at that bond. This gives us Schmidt states alpha for everything left of this bond, a local site i in between, and Schmidt states beta for everything to the right. And then we can clearly write down our state in this way, using our notation: the state is a sum over alpha, i, beta of a coefficient times the left Schmidt state alpha, times the local state i, times the right Schmidt state beta. Okay, so we have our system, and we can write it in this form, in terms of these Schmidt states. This is now directly related to our matrix product state formulation, because we have our matrix product state that we can write in this form, where we have here our Gammas. And because it's a lot of writing, let me just keep the convention: whenever there is a dot with three legs coming out, this is a Gamma, and these dots with two legs coming out are lambdas. And now what we can do is just rip this open: these ones here are exactly our states on the left, these ones are exactly our states on the right, here we have our index i, and here is an alpha. So you see that this gives us exactly this form back. Now, say that we want to calculate some expectation value, say the expectation value of an operator on site m.
Which means, using again this notation here, the operator acts only on this site but leaves all the other sites unaltered. Then we find the following: from the canonical form, we know that everything on the left and on the right just contracts to the identity, because of the orthonormality. We have this condition that if we multiply up the left transfer matrices, we get just the identity back, and if we do the same coming in from the right at infinity, we also get the identity back. Then we just have something sitting in the middle, which we can write simply as this: we have our operator O acting on this site, sandwiched between Gamma and Gamma star, with lambda squared on the left and lambda squared on the right. Okay, and here is a more beautiful picture of this. What I hope to have shown is that if we have a matrix product state in this particular canonical form, it is very easy to evaluate local expectation values, because of these identities. If we did not have these identities, we would have to multiply infinitely many matrices from the left and infinitely many matrices from the right, and sandwich the local operator in between. But because we have these nice identities, we know the fixed point coming from the right and the fixed point coming from the left, and we can just plug in the fixed points, which are just the identities, and simply evaluate the expectation value. Furthermore, looking at these expressions, we already see why this graphical representation is extremely useful: if I were to write this down in terms of indices, I would probably spend the whole hour doing it. And it gets even worse when looking at correlation functions, because then, again, we take everything coming from the left and from the right using these identities, and we obtain the correlation functions. One thing that we see nicely here is the following. If we calculate a correlation function between two operators, maybe a spin operator acting here and a spin operator acting there, then in between we have powers of the transfer matrix. And we already know that the dominant eigenvector of this transfer matrix is the identity; it basically gives us the normalization of the state. We will find that the correlations are then related directly to the second largest eigenvalue of the transfer matrix, and this is what I'm showing here. So if you are interested in the correlation length of your MPS, you can just construct the transfer matrix, again chosen such that it's in canonical form, so that the dominant eigenvalue is one. Then the correlation length of your state is just given by minus one divided by the logarithm of the second largest eigenvalue. The argument for this is similar to what we had before with the power method.
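Putting the last two points together for the AKLT state from before: a local expectation value needs only the single-site tensors (the identities absorb everything else), and the correlation length follows from the second largest transfer-matrix eigenvalue. A self-contained sketch:

```python
import numpy as np

# The AKLT tensors in canonical form, as in the block above.
Gamma = np.array([np.sqrt(4 / 3) * np.array([[0., 1.], [0., 0.]]),
                  np.sqrt(2 / 3) * np.array([[-1., 0.], [0., 1.]]),
                  np.sqrt(4 / 3) * np.array([[0., 0.], [-1., 0.]])])
lam = np.eye(2) / np.sqrt(2)
Sz = np.diag([1., 0., -1.])          # spin-1 Sz in the (+, 0, -) basis

# theta^j = lambda Gamma^j lambda carries the full local information.
theta = np.einsum("ab,jbc,cd->jad", lam, Gamma, lam)

# <Sz>: contract theta* O theta; the left/right identities absorb the rest.
print("<Sz> =", np.einsum("jad,jk,kad->", theta.conj(), Sz, theta))  # 0

# Correlation length: xi = -1 / log |second largest eigenvalue of T|.
B = np.einsum("jab,bc->jac", Gamma, lam)
T = np.einsum("jab,jcd->acbd", B, B.conj()).reshape(4, 4)
ev = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
print("xi =", -1 / np.log(ev[1]), "(exact AKLT value: 1/ln 3)")
```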
Good. Let me now come to the exception, the states where things become a bit more tricky: the so-called cat states. If you happen to have a cat state, meaning a quantum state formed by a superposition of macroscopically distinct states, then this is not strictly true; instead we will find that there are, in this case, two dominant eigenvalues equal to one. We can then think of everything splitting up into two blocks: one block for the live cat and one block for the dead cat, and within each block individually, the same things apply that I was describing. Good. Is this roughly clear? What I wanted to say with the last few slides is: with this canonical form, the great thing is that it's very easy to calculate local expectation values, and we can also obtain correlation functions very simply. In the expression for the correlation functions, we notice that the correlations decay based on the eigenvalues of the transfer matrix, because correlation functions are calculated by taking some left vector and some right vector and sandwiching in between the transfer matrix to the power of l, l being the distance between the two sites. Based on this, we see that the longest correlation length in the system is related to the second largest eigenvalue of the transfer matrix. And this is, in practice, when you're using this technique, very useful: you just find your matrix product state, you construct the transfer matrix by gluing together those matrices, and then you can just read off the correlation length, which will, for example, help you to find critical points. Let me now demonstrate, for the remaining minutes, these states in action. This is also something that you will do yourself in the tutorial session. The example model that I want to look at is the transverse field Ising model. The transverse field Ising model has the Ising term, a nearest-neighbor ferromagnetic Ising coupling, and a transverse field here. Now, if I tune this parameter g, the system at some point undergoes a phase transition. I think this model is the fruit fly for studying these many-body Hamiltonians, because it is one of the rare cases where we actually have an exact solution, so we can compare whatever we obtain to the exact solution; plus, it has quite some interesting physics in it. Again: if g is very small, the system is in a symmetry-broken, ordered phase, and at some point there's a phase transition into a paramagnetic phase. To distinguish these phases, we can measure the magnetization as an order parameter, and this would be the first task that we can do. We can now use this algorithm: we write down an infinite matrix product state ansatz and optimize the energy, where we improve the quality of the trial state by increasing the bond dimension. Here we have a plot where on this axis we tune the coupling, and on this axis we plot the magnetization. One thing that we notice first of all is that the solution the algorithm finds actually spontaneously breaks the symmetry. That's already remarkable, because with a finite-size algorithm you would never find a symmetry-broken solution; but for these infinite systems, because they are actually in the thermodynamic limit, the algorithm finds a symmetry-broken state. It spontaneously chooses a magnetization of plus or minus one and then sticks with that particular state. And then we see that at some point the magnetization drops to zero. And what we see as we increase the bond dimension is that the transition point, on this scale, slightly shifts.
For chi equal to two, it's shifted to slightly larger values, but already for chi of five or larger, at least on this scale, the position of the critical point does not shift any more. And again, all that is done here is writing down this infinite-system MPS ansatz, minimizing the energy, and then using the trick that I tried to illustrate here to calculate the expectation value of the magnetization; this is what we get. I think it was Ruben Wazin from my group who did these calculations when testing the tutorial. Good, and you can just do it tomorrow yourself. Let me point out one interesting fact. Going back to this model: at the critical point, the correlation length is diverging, so at this critical point the correlation length is actually infinite. If I plot the correlation length as a function of the parameter g, we would expect it to be infinite at the critical point. However, from what I've shown earlier, this expression for how to obtain the correlation length of a matrix product state, we saw that any matrix product state with a finite bond dimension will always have a finite correlation length. Basically, just from the construction, you see that unless the state is a cat state, the second largest eigenvalue is always smaller than the largest one, and then the correlations always decay exponentially, with a rate set by the gap between the first and the second largest eigenvalue. So the correlation length is always finite. That means that at this critical point, we can never find a perfect approximation of the ground state in terms of an infinite matrix product state. What we expect to find is that as we increase the bond dimension, we can get closer and closer to the critical point while faithfully capturing the correlation length in the MPS. And this is exactly the case. If we do this simulation, construct the transfer matrix, and get the correlation length, we see that as we increase the bond dimension, the correlation length near the critical point grows; but for any finite bond dimension, it remains finite. In a way, this is a weakness of these MPS, or iMPS: we cannot really capture the physics of critical states, because there's always some cutoff given by the bond dimension. But we can also use this to our advantage, namely, we can do the following. We can go to a particular critical point that we are interested in, for example the Ising critical point. Many of these critical points we are interested in are described by conformal field theories, and I'm not going to explain what conformal field theories are at this point. However, there's one characteristic number, namely the central charge. The central charge is effectively counting the number of linearly dispersing modes, and it is a number characterizing these conformally invariant critical points.
So here is that relation, derived by Calabrese and Cardy some years ago: if you detune a critical point a little bit, introducing a finite correlation length (or, say, a finite temperature), then the entanglement entropy scales as c divided by six times the log of the correlation length. So if we want to use these infinite systems to study a critical point, and we are interested in the central charge, what we can do is simply redo the simulation at the critical point with various bond dimensions. We increase the bond dimension and then independently measure the correlation length and the entanglement entropy. Recall that the correlation length we get by diagonalizing the transfer matrix, and the entanglement entropy we get from the Schmidt values that we have at our disposal anyway. And if we then make the plot, we indeed see this logarithmic dependence, and just by fitting a couple of points we can get quite a good estimate of the central charge.

So, let me see, the lecture should end about now. Okay, well, I could take maybe a few questions if there are some; otherwise I could just give you a rough overview of the ideas behind topological order. But maybe there are some questions. Yes?

[A question from the audience about extending these methods to two dimensions.]

Okay, good question. There are two answers to this, I guess. Well, the first answer is that nothing prevents me from doing this, and there are two ways in which it is actually done. The first is that we stick relatively closely to what I'm doing here and look at systems where we take only one of the dimensions to infinity. I can say that I'm doing a simulation on some two-dimensional slab: I have dimensions x and y, I take x to infinity, and I keep y finite. And then I can use something which I think Steve White also discussed: some sort of MPS snake going through the system. So here are the sites of the system; I now have a quasi-one-dimensional system, and then I can use exactly the same trick that I've shown before. I define some unit cell that keeps repeating, and by this I have an infinite system along one direction of the two-dimensional system. This is the cheap answer, very closely related; it's basically a one-to-one generalization of what I was showing.

The second answer is that we can use genuine generalizations of matrix product states to 2D, namely we construct states where, instead of these combs, we get brushes, something like this. And here, again, we can play the same trick of going to infinite systems by saying that we have a unit cell that just keeps repeating, and then one optimizes the same ansatz again. In fact, this has a name: it's called iPEPS, infinite projected entangled pair states. And it might be that Bela Bauer will talk about this. In very short, this is a very powerful approach, because it would capture the 2D area law and it would allow us to describe infinitely large systems. However, the problem is the following: for 1D, in particular using this canonical form, we have a very good handle on calculating, for example, local expectation values. That's also why I'm so happy about this formalism, because it's extremely simple.
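To show just how simple, here is a minimal sketch, under my own conventions rather than those of the lecture: a single-site expectation value for an iMPS given in right-canonical form, with tensor B of shape (d, chi, chi) satisfying sum_j B^j B^j-dagger = 1, and the Schmidt values Lam on the bond to the left of the site.

```python
import numpy as np

def expectation_value(B, Lam, O):
    """<psi|O|psi> for a one-site operator O (shape (d, d)) in an infinite MPS.
    B: right-canonical tensor, shape (d, chi, chi), sum_j B^j B^j+ = identity;
    Lam: Schmidt values on the bond left of the site, with sum(Lam**2) = 1."""
    theta = Lam[None, :, None] * B          # absorb the left environment
    return np.einsum('iab,ij,jab->', theta.conj(), O, theta)

# Demo on a trivial chi = 1 product state, (|up> + |down>)/sqrt(2) per site:
B = np.array([[[2**-0.5]], [[2**-0.5]]])
Lam = np.array([1.0])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
print(expectation_value(B, Lam, sigma_x))   # -> 1.0
```

The whole calculation is one small tensor contraction, because the canonical form makes the rest of the infinite chain contract to identities and Schmidt values.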
So while calculating local expectation values for an MPS is that simple, doing the same for these 2D states is still a complex and essentially unsolved problem.

[From the audience: For 1D or for 2D?]

For 2D. Even calculating the norm of a PEPS is, I think, an NP-hard problem. And that I actually find quite amusing. If you have a friend who is very good at numerics and he sends you the MPS for your problem, saying, well, this is the solution of your Hamiltonian, here is the MPS, then you are super happy, because you can calculate whatever you want. If another friend sends you the 2D PEPS representation of your ground state, you are still not very happy, because even though you have the PEPS, it is still NP-hard to calculate even the norm of the state. So you will again need to rely on approximations just to compute the norm. Exactly, yeah. But it might be that Bela Bauer talks about this in more detail. There is currently a lot of research going on here, with new ideas on how to efficiently approximate or simulate these 2D systems, and it's still not completely agreed what the best way is.

MERA is yet another idea. In terms of pictures: MPS is what I've drawn twenty times or more; we correlate, or entangle, the sites through this chain of matrices that we are multiplying. For MERA, we again have some physical sites, but the entanglement is introduced by a network that extends in an extra direction. So in MERA we have some spins, or some degrees of freedom, and then we have a network consisting of, on the one hand, so-called disentanglers, which remove entanglement between neighboring spins (or, read in the other direction, add it), and, on the other hand, a coarse-graining step, where we go to a system with fewer sites. Then we again apply disentanglers, coarse-grain again, and so on, going up layer by layer.

So in short: for MPS we have a simple framework where we carry all the entanglement by multiplying these matrices; for MERA we have a network that extends into one more dimension. And there are certain differences. Maybe the one thing I want to say is that MPS can only describe states that obey an area law. As I showed with the plot earlier, we cannot describe a 1D critical state with an infinite MPS, because there the entanglement would have to grow logarithmically, while for an MPS it is bounded by a constant. For MERA, if you look at the graphical picture, at how spins can be entangled over long distances, you find that this logarithmic growth of the entanglement entropy is captured. So MERA can actually represent critical states. But on the computational side, MERA becomes very difficult to handle, and that is why, even though I think it is a very interesting approach, so far it hasn't been able to replace MPS.

Any more questions? Then we go for a coffee break.
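As a footnote on the MERA picture above: a toy numpy sketch of a single disentangle-and-coarse-grain step on four spins. The disentangler u and isometry w here are random placeholders (a real MERA optimizes them variationally), so this only illustrates the geometry of one layer, nothing more.

```python
import numpy as np

def random_unitary(n, rng):
    """Random n x n unitary from the QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def mera_layer(psi, u, w):
    """One layer on a 4-site spin-1/2 state psi (shape (2, 2, 2, 2)):
    apply the disentangler u (4x4 unitary) to the middle pair of sites,
    then coarse-grain sites (0, 1) and (2, 3) each into one coarse site
    with the isometry w (shape (2, 4), orthonormal rows)."""
    u4 = u.reshape(2, 2, 2, 2)                  # u4[t1, t2, s1, s2]
    w3 = w.reshape(2, 2, 2)                     # w3[coarse, s, s']
    psi = np.einsum('efbc,abcd->aefd', u4, psi)             # disentangle
    return np.einsum('xae,yfd,aefd->xy', w3, w3, psi)       # coarse-grain

rng = np.random.default_rng(42)
u = random_unitary(4, rng)
w = random_unitary(4, rng)[:2]          # keep 2 orthonormal rows -> isometry
psi = rng.normal(size=(2, 2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2, 2))
psi /= np.linalg.norm(psi)
print(mera_layer(psi, u, w).shape)      # -> (2, 2), half as many sites
```

Stacking such layers is what extends the network into the extra direction and lets the entanglement entropy grow logarithmically, as discussed above.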