It's actually the first of four hours altogether, today and tomorrow morning. Given that this is quite a diverse audience — I had a look at the posters just now — there are a few people in the audience who probably know most of the stuff I will be talking about, while others probably know very little. So, as I more or less anticipated this, I split my presentation into two parts, which will merge into each other. The first part will be a relatively basic introduction to the field, but one where I want to go into some details. I know that Miles presented some of this material to you already, at the beginning of this week or a couple of weeks ago. We thought we might sort out who does what, but ultimately we decided it would be too much work to get that done, and anyway, given that for most people this is new material, a little repetition doesn't hurt, and neither do two different perspectives on it. So I presume you have heard a little of this already, but in some sense, if you were not there, that is also fine. The second part of the presentation will be actual research work, partially not even published yet, so that you get an idea of what is currently being done with this kind of methodology. As it is a relatively complicated method in some sense, I will probably jump back and forth a little, like John Chalker did, between the blackboard and the screen: the simple things and the mainstream results on the screen, and from time to time a few calculations by hand.

Okay, so what is the entire thing about? The question here is: what is the fundamental problem of solid-state physics? Because for the methods we will be talking about in this context — the most famous being the 1D DMRG, and also variants of it — the question is really what they are good for. It is not all problems in many-body quantum systems that you can successfully address with this kind of methodology, or at least not yet; people are working on extending it, and it remains to be seen where we will get. I think all of you — certainly, otherwise you wouldn't be here — have seen a Hamiltonian like the one I wrote down here. It is just a kind of solid which doesn't even have a vibrating lattice: it just has electrons which move around, which attract and repel each other, and which move in an effective potential presented by the lattice. We don't even have phonons in this model. But even this problem has been keeping us busy for almost 90 years by now, and the core of it is to solve the electron-electron interaction. Some people at this point talk about the Dirac challenge, because Dirac, about 90 years ago, wrote that basically all these problems of solid-state physics, given this Hamiltonian and the Schrödinger equation, had been reduced to computation, and so in some sense had been solved. But he was smart enough to realize that this might still make them extremely difficult, and we are still struggling. So, to position the methods I will be talking about, I break down all of solid-state physics into two very simple pictures. The first is the case where you have, say, a lattice potential — here in green, on the left — and the valence electrons.
We don't worry about the core electrons; the valence electrons are all relatively well delocalized, smeared out like a soup of electrons, which means the interactions are relatively well screened. This is the picture you are all extremely familiar with: the band picture of essentially non-interacting electrons, where for example you would say that if the top band containing electrons is half filled, you have a great conductor, and if it were totally filled, an insulator. You know that for many metals and semiconductors this is a wonderful picture, and in this context methods like density functional theory work extremely well. So we are not concerned with this type of problem. We are rather concerned with the second type, where the valence electrons are tightly bound — we will see in which situations that pops up, or in which simulations of similar situations — which means they have very strong local interactions. If you have an electron here and it wants to hop to the next lattice site, it will very strongly feel that there is a well-localized electron on that second lattice site. So what you have to do is really take into account the full picture of the motion of all the electrons, because what they do will be strongly correlated among themselves. I guess most of you here are working in that field anyway. Just to give you an example: if you take the high-Tc parent compounds, they have a band structure picture — a density of states picture, rather — which looks like this. These are half-filled parent compounds, so you would say "conductor", but in fact they are extremely good insulators. These systems are very difficult to describe. One approach that has been very successful over the last decades is to form model Hamiltonians, which are an extremely simplified cartoon version of what you are doing — the last lecture you heard this morning also presented cartoon models of really very complicated physics — models that, you hope, bring out what really matters. It is for this kind of Hamiltonian that the methods I will be explaining today and tomorrow work. And this is why I brought in this slide, despite the fact that you probably all know it: there is currently a trend in the more materials-science-oriented parts of condensed matter physics to try to solve the correlation problem in a more realistic context. One of the ways of doing this is the so-called dynamical mean-field theory combined with density functional theory — we will hopefully hear about that tomorrow — and there, too, this kind of methodology might be extremely helpful to make those methods work well. Then we are away from model Hamiltonians, but that is a current trend; mainly we are still concerned with model Hamiltonians. Okay, you have that in many different situations: in zero dimensions it would be impurity physics and quantum dots; in one dimension, spin chains and ladders, which this afternoon's lecture will be covering; then we heard about frustrated magnets before. And what I will be talking about tomorrow is the realistic modeling of transition metal and rare earth compounds, where methods like the one I'm explaining today are also potentially relevant. My examples today, and my way of thinking, will however be mainly focused on one-dimensional systems; you will see that in a second. Okay, we can skip that for the moment. Another way of looking at all this, which basically serves me as a motivation for why non-equilibrium physics is so important, is that, as I think you all know — this has been ongoing business for almost 20 years now — you can form Bose-Einstein condensates with ultracold bosonic atoms. They are extremely weakly interacting, but you can make them strongly interacting, and then you are back in the kind of picture we are looking at today. There are various ways of making them strongly interacting — is someone having a problem, or is that noise coming from outside? — and one of them is to use so-called optical lattices. The pioneering experiment was actually done back home, at my place, in the group of Immanuel Bloch, where they produced optical lattices to basically reproduce the Hubbard model for bosons.
I'm sure you have all seen the Hubbard model many, many times, but here it comes about in an almost perfect realization. You have an on-site repulsion of the atoms, and they can hop from one lattice site to the next with the usual amplitude t. The fun thing is that you can tune the interaction by making the lattice more or less deep. In fact, this is a sloppy way of speaking: the interaction hardly depends on the lattice depth, but what you can do is exponentially suppress the kinetic energy, and so relative to that — because it is always the ratio that matters, of course — the whole thing becomes very weakly or strongly interacting. You can also do that with fermions. And the nice thing is that, as you can tune this ratio of interaction and kinetic energy, you can do all sorts of non-equilibrium things. You can stay very close to equilibrium: by an adiabatic change of this ratio you can actually drive a quantum phase transition. These are the famous pictures from the 2002 experiment, where they started out with a superfluid, which shows up as a peak — this is in momentum space — at k = 0. If you make the lattice stronger, you see Bragg peaks — well, not exactly Bragg peaks, but that's a detail — and as you make it stronger and stronger you get into the Mott insulating phase. The atoms — you see I keep saying electrons, but in reality these are bosons simulating the behavior of the electrons — become so strongly repulsive that they basically block each other; you get a Mott insulator, correlations become short-ranged, and the picture in momentum space becomes diffuse. What you can also do — and this is closer to what I will be talking about today — is make this change very suddenly. Again, this was the first experiment ever done in that direction. What you then have is that you went from the superfluid and made the interaction strong, so that you would get into a Mott insulator; but because it was a sudden change, you change the entire set of energy levels, so the phases of the different parts of the wave function evolve differently, you get a dephasing and rephasing of the phases, and you come back to the original wave function. There is actually a movie from the experiment. The question I want to ask is: can we simulate, can we calculate numerically, the ground states or finite-temperature states of such systems? We will answer both questions with yes. And, in view of such experiments, can we also look at far-from-equilibrium dynamics? The answer will also be yes, within certain limitations — and there will be more limitations, which of course I will not advertise right at the beginning. This is what I will be talking about. In fact, the non-equilibrium work was really started by this field, because in solids most non-equilibrium experiments in the past were actually quite close to equilibrium — linear response — which of course you can also treat with these methods. Also, quantum mechanical decoherence in a typical solid is so strong that in some sense you wonder whether you need a technique that follows your wave function in all details, because that is what tensor networks and matrix product states actually do. Imagine I were able to give you the full wave function of a system of, say, 10^23 particles. Would this be of any use to you? No, it would not — what can you do with it? Print it out and stack it in a big pile in this room? What you are really interested in is: what is the pressure, what is the temperature, what is the susceptibility, what is the order parameter, where does the phase transition sit? You are interested in such questions and answers. So, as you all know, there is a problem in quantum physics — and the entire presentation will be entirely concerned with quantum physics.
I mention that because, from what I gathered this morning, John Chalker had also been presenting classical results; here it is only quantum. So what you have is the usual problem: say for spin one-half, with local states spin up and spin down, the number of degrees of freedom in the thermodynamic limit is exponentially large, so this idea of piling up a printed-out wave function in this room wouldn't work anyway. Okay, so what can we do? This is in some sense the question: what do you do in an exponentially diverging Hilbert space? Because we don't have the quantum computer, we have to do it with a classical computer. One way is exact diagonalization, where you really diagonalize the Hamiltonian matrix — in reality what it means is that you determine some extreme eigenstates; a full diagonalization is not usually meant by that. But even if you are only interested in the extreme eigenstates, in terms of spins and electrons you are quite limited. These numbers are approximate, because depending on the amount of symmetry you can exploit, and on the lattice structure, they might actually be larger; I think for spins the largest I have seen is by Andreas Läuchli in Innsbruck, 56 spins or something — he is really the crazy guy for ED. But you see it doesn't make a big difference. Nevertheless this method is extremely valuable, because what you learn from it you can really rely on; all the other methods have a methodological bias which may land you in the soup, and this one doesn't. Of course, you are far from the thermodynamic limit. What you can also do is the stochastic sampling of state space — this is another answer we have to huge problems — and there you get the whole zoo of quantum Monte Carlo techniques. Can anyone tell me — when Werner Krauth spoke at the beginning of this seminar, if you still remember two and a half weeks ago, did he cover only classical, or also quantum? Classical, okay — because Werner has been doing both in his life, so it seems he focused on classical here. You can also apply this to quantum systems. It is of course not my lecture's job to do that, but let me mention that there is something called the negative sign problem, which pops up for fermionic systems — not all of them, but unfortunately many of the interesting ones — and for frustrated spin systems, where basically the interpretation of weights as probabilities becomes a problem because of negative signs popping up. You can get rid of that, but the statistics of the simulation then become very bad. I should mention here, because it could be very interesting for some of you, that there is currently a huge development effort going on in that field, something called diagrammatic Monte Carlo, driven mainly, I think, by Nikolay Prokof'ev and Boris Svistunov in Amherst, Massachusetts, and also by my colleague Lode Pollet in Munich and some people in Paris. In some situations it now does much better with the sign problem. That might become a problem for people like me in this field, who live off the fact that quantum Monte Carlo can't do that — and it is potentially very interesting, because for them this negative sign problem in some sense turns into an advantage. So wait and see, watch out for that; this diagrammatic Monte Carlo could be one of the big interesting methods of the future. What I will be talking about today and tomorrow is a different approach, where you say: well, I try to concentrate on a small part of Hilbert space, and somehow the choice of which part I look at should be physically motivated — I have to decimate it in some way. This is also what you had in the last lecture, where you were presented with setting up variational wave functions. Even if variational wave functions may not form a linear space — maybe they do, maybe they don't — they are of course a subset of the state space: simply by their form you constrain yourself.
That is one way of doing a selection of a subspace. A very successful class of methods in physics, the RG methods, are another way of doing that: the integrating out of fast degrees of freedom in an RG step is exactly a selection of a subspace — this is actually where the word "decimation" is mainly used. The question, of course, is how we find a good selection, and I claim that matrix product states, as the simplest incarnation of tensor network states, are a very systematic way of doing that. Now, I will give you an ahistoric presentation in this lecture: I will present it in the language we would use right now. But it is to some extent also interesting to appreciate the importance of notation in this field. Perhaps I stress that because the notation will be a little unpleasant and unusual for you at first, and I want to motivate you to make the effort to understand it. All of this has an extremely long prehistory, which goes back into the 1940s; then the stuff was forgotten. The modern history of this kind of network states starts with something which is called something totally different: the famous DMRG, the density matrix renormalization group, invented by White in 1992. This method started out with solving exactly one problem, and Steve managed to get it past the reviewers of a PRL after one example: "This new formulation appears extremely powerful and versatile and we believe it will become the leading numerical method for 1d systems. It eventually will become useful for higher dimensions as well." This is an extremely strong claim, but it must be said that it has actually come true, and this is what I will be presenting to you. On the other hand, progress in the methodological development was relatively slow — I remember that from the days when I was your age. Then, in 2004, came the insight that DMRG is linked to something called matrix product states, a formalism which goes back, as I said, to the 1940s. There is this book by Baxter, Exactly Solved Models in Statistical Mechanics — many of you perhaps know it — and in some sense it is all in that book already; it is just that they didn't have the computers at that time. So the link is that it is more or less the same thing that had been around for quite some time, and we had all thought: this is something mathematically nice — nice for the mathematicians — but we go on doing it like we always do, with RG and so on. And then in 2004 or so this insight suddenly exploded. You see this from the enormous number of papers that appeared within one year, in which a major part of the methods that are now the mainstream of the business were pioneered or invented — just because this new language, which I will introduce you to, is so much more powerful for thinking about it. Some algorithms which had been extremely difficult to invent before suddenly became extremely easy; basically, the formalism more or less forced them on you. You will see things where you will ask: well, if I look at it, is there any other way of doing it? And I will say no, there isn't — but in other formalisms you simply didn't see that. This is why we will go through it. And this is the point for the advertising, the publicity interruption: if you want to read some reviews, there is an old Reviews of Modern Physics article which was still written from the old statistical-physics perspective; if you want to write your own code, many people find this review useful; and then there is one which takes more of a quantum information perspective. There have been many more reviews since, but as the field has been exploding so much, there you would really have to look more precisely for what you need in detail. So now let's get started — after all this preparation for a heavy formalism, perhaps
it's not so heavy after all. Let's start out with the definitions. First of all, I imagine a quantum system that lives on L lattice sites. At the moment there is no need to think of it as a one-dimensional system, but you may if you wish, because this is where all this will be most useful anyway. On each lattice site i, I introduce local states, which I will consistently call σ_i, and there will be lower-case d of them. For a spin one-half, d would just be two: spin up and spin down. The local Hilbert space is formed by these local states σ_i = 1, ..., d, and the total Hilbert space is of course simply the tensor product. The most general state you can write down — I am talking about pure states; we will cover mixed states at some point — is then the superposition of the exponentially many basis states with some expansion coefficients. You all know that, and we will perhaps abbreviate them from time to time in this kind of notation. So far so good: in this sense you can discuss any problem of quantum physics, but of course you have this exponential complexity. So what do we do about it? One standard approximation you all know — you will encounter it here in a perhaps somewhat unfamiliar guise — is the mean-field approximation. You basically assume that each particle is exposed to external fields and to effective fields — think of Weiss mean-field theory — produced by all the other particles that are around, which means that effectively the wave function of these many particles factorizes, site by site. The very complicated C_{σ1...σL} — exponentially many coefficients — comes down to factors: one factor on the first site, one factor on the second site, and so on, and of course the value each factor takes will depend on the local state on site one, two, three, and so forth. So instead of d^L coefficients you get down to d times L coefficients. We all know that mean-field theory is extremely powerful and efficient. Someone once told me you get a Nobel Prize for mean-field theory, not for the fifth-order perturbative correction — that is probably true. But the big problem with mean-field theory is that it misses perhaps the essential quantum feature, namely entanglement.
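To make the counting concrete, here is a minimal numpy sketch of how a mean-field product state builds all d^L coefficients from only d times L numbers; the factor vectors are arbitrary illustrative values, not anything from the lecture.

```python
import numpy as np

d, L = 2, 4                      # local dimension (spin one-half) and number of sites
# one normalized factor vector per site; values chosen arbitrarily for illustration
factors = [np.array([np.cos(0.3 * i), np.sin(0.3 * i)]) for i in range(L)]

# mean-field / product ansatz: C[s1, ..., sL] = a_1[s1] * a_2[s2] * ... * a_L[sL]
C = factors[0]
for f in factors[1:]:
    C = np.tensordot(C, f, axes=0)   # outer product grows the full coefficient tensor

assert C.shape == (d,) * L           # d**L = 16 coefficients in total ...
# ... but only d * L = 8 numbers were needed to parameterize the state
```

The full tensor has d^L entries, yet every entry is determined by the d times L numbers stored in `factors` — exactly the compression (and the limitation) of the mean-field ansatz.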
As a parenthesis: many people nowadays talk about something like "quantum physics 2.0", and then you wonder what quantum physics 1.0 was. In technological applications we all know it, of course: the computers, the laptop, your smartphone — these are all quantum physics 1.0 devices; they basically use the Pauli principle and wave mechanics. Entanglement as a resource is not yet used much — there is now quantum cryptography, and a quantum computer would use it, but that is still on the way to becoming important. A mean-field ansatz totally cannot capture it. The simplest case: if you take two spin one-halves, this is the most general wave function you can write down. Now say you want to encode the singlet state — obviously a very important state — where what happens on site 2 is totally conditioned by what happens on site 1: they are entangled. If you try to bring that into this form, reading off the C coefficients is of course easy; but then ask: can I factorize these C coefficients in the way I wrote here? I will not show this — try it for yourself; maybe you already have in some other lecture — you simply can't. There is no way — and we will see this in a more mathematical form — there is no way of capturing entanglement in a mean-field approximation, in such a product, such a factorization. But what you can think about — and this is how I want to motivate this type of state — is to generalize this product of scalars to a product of matrices. That is something very simple.
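The "try it for yourself" claim can be checked in two lines of numpy: a product ansatz C[σ1, σ2] = a[σ1] b[σ2] is an outer product and therefore a rank-1 matrix, while the singlet's coefficient matrix has rank 2, so no such factorization can exist.

```python
import numpy as np

# singlet: (|up,down> - |down,up>) / sqrt(2); coefficients as a 2x2 matrix C[s1, s2]
C = np.array([[0.0, 1.0],
              [-1.0, 0.0]]) / np.sqrt(2)

# any product state C[s1, s2] = a[s1] * b[s2] is an outer product, hence rank 1
a, b = np.array([1.0, 2.0]), np.array([3.0, 4.0])   # arbitrary example factors
assert np.linalg.matrix_rank(np.outer(a, b)) == 1

# the singlet has rank 2, so it cannot be written as a[s1] * b[s2]
assert np.linalg.matrix_rank(C) == 2
```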
The original product would be a product of one-times-one matrices, so here we go to two-times-two matrices, say, as the most primitive extension — or to something much, much larger, as large as your computer is able to handle. At the boundaries you have to do something, because ultimately you need a scalar, but let's not worry about that at the moment. So: is this potentially a good ansatz, or is it just bullshit? The answer is that it is an extremely useful ansatz, and in fact it had already been used a lot in the late 80s and early 90s in the context of the so-called Affleck–Kennedy–Lieb–Tasaki (AKLT) model, which I will not discuss here. It is actually a model which is a very simple representation of topological states as well; at that time it went under the heading of the Haldane model, the Haldane gap, the Haldane spin gap. There it turns out that just going to these two-by-two matrices, as the most simple generalization of the mean-field ansatz, captures basically everything that matters in this model, and actually presents an exact solution in some sense. But this is not quite what I am aiming at — there is an entire field of research where you study this kind of ansatz analytically, with small-dimensional matrices, and work out everything by hand. What I am aiming at is: what can you do for whatever Hamiltonian, allowing yourself to make these matrices as big as your computer can handle? Then you arrive at the general matrix product state, which will be the topic for the next half hour, until you get lunch.
This is still the coefficient C_{σ1...σL}, but it is now constrained to have the form of a product of matrices, where on each lattice site there are various matrices, depending on what the local state is — like I had the numbers before — such that the product works out to be one of these scalar coefficients. The matrix dimensions of course have to match: I start out with a row vector and end with a column vector, and in between I have regular matrices. What I should mention right here, because we will make extensive use of it — it looks like a technical remark, but in some sense it is what makes these algorithms ultimately work, and if this possibility did not exist you would be in trouble — is that this representation of a quantum state is not unique. You can take any matrix X which is invertible and insert it here, as X X^{-1}, and then you redefine the one matrix as M X and the other one as X^{-1} M; of course the state has not changed. We will use that a lot in the following. And now comes — no, not yet; I am still preparing you for the one really bad slide — the question of why you would want to make this ansatz. I just mentioned that there is this AKLT model where it seems to work, but that is relatively special. I claim — and we will work it out on the blackboard, because it is instructive even if in practice you don't do it that way, as it is numerically inefficient — that you can show that any quantum state can be brought into this form. It may just not be useful, but that is of course important.
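The gauge freedom just described can be verified numerically. This is a small sketch for two sites with random matrices; the bond dimension D and the random seed are arbitrary choices for illustration, not part of the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 3                                   # bond dimension, chosen arbitrarily

M1 = rng.normal(size=(2, 1, D))         # M1[sigma] is a 1 x D row vector
M2 = rng.normal(size=(2, D, 1))         # M2[sigma] is a D x 1 column vector

def coeff(A, B, s1, s2):
    """C^{s1 s2} = A^{s1} B^{s2} (scalar, since row vector times column vector)."""
    return (A[s1] @ B[s2]).item()

X = rng.normal(size=(D, D))             # any invertible matrix
M1_new = M1 @ X                         # redefine M^{sigma} -> M^{sigma} X
M2_new = np.linalg.inv(X) @ M2          # and the next one -> X^{-1} M^{sigma}

# the inserted X X^{-1} cancels: every coefficient of the state is unchanged
for s1 in range(2):
    for s2 in range(2):
        assert np.isclose(coeff(M1, M2, s1, s2), coeff(M1_new, M2_new, s1, s2))
```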
One can say: in principle, I can write any state like that. What is more important in practice is that these states are hierarchical: the size of the matrices you need to get a good approximate representation of a quantum state is related to the degree of entanglement you have. So right away you should understand that all the methods I will be presenting today and tomorrow have the bias that they love low entanglement. If you are in a situation of extremely strong entanglement — and we will see such cases — do not expect these methods to work; that is a clear bias. A second remark, which we will not pursue beyond one slide, is that, funnily enough, these states also emerge naturally in traditional renormalization groups, and this is, with hindsight, the connection between MPS, RG and DMRG. What is extremely important, and in some sense what I will be talking about today, is that you can manipulate them very easily and efficiently — otherwise the representation would be useless; why would you come up with this complicated form otherwise? And perhaps the best thing is that if you have a bunch of these matrix product states, parameterized by their matrices, you can search them efficiently. Typically the situation is not "here is the state, do something with it"; typically you have a physical problem, meaning a Hamiltonian H, and you want to know which matrix product state is the best approximation to the ground state, or the best approximation to a mixed finite-temperature state. At first sight this may seem impossible, but these are the questions you want to ask, and we will actually provide efficient algorithms for that. Okay — the one mathematical tool you basically need to understand is the so-called singular value decomposition.
If you allow me, for one second: who in his or her undergraduate training was exposed to singular value decompositions? Raise your hand. Okay, the numbers are small, but they are increasing. When I was a student you were not taught that stuff at all; engineers have been taught it for, I don't know, the last hundred years or so, because they use it every day — basically the Google empire is in some sense based on singular value decompositions. It seems that in physics we are so focused on the eigenvalue decomposition that we do not know about this. But this is how the method works, so we should understand it. And for those of you who say "my god, this entire business about DMRG and matrix product states doesn't interest me": if you want to take one thing home which might be useful in your future, it is really the singular value decomposition. In my view it is perhaps the most interesting and most powerful decomposition linear algebra has on offer, and you can actually derive many of the other techniques from it. So: you take a general matrix A of dimension m times n, and I call k the smaller of the two dimensions, whichever it is. The claim is that I can decompose this matrix A in the form A = U S V†, and the three matrices U, S and V† have very special properties.
That is of course why you want to do this. The dimensions are such that the product matches up to m times n, as it has to. The claim is that U has the property U†U = 1; in other words, the columns of the matrix U are all orthonormal. If it happens that U is square, then it is actually unitary. For V† the same thing: the rows of V† — that is, the columns of V — are orthonormal, and V is unitary if it is a square matrix. That is already very nice, because with orthonormal vectors you start thinking about bases. In between you have the matrix S. It is a diagonal matrix with only non-negative entries — they can be zero, of course, but they are never negative — and these are the so-called singular values. For example, the number of singular values that do not vanish immediately gives you the rank of the matrix A; this is not what we will mainly use, but this is where it ties back to standard linear algebra. A notation which you will find a lot in the literature — maybe I will also use it today on the blackboard — is that you write these orthonormal columns as ket vectors, u1, u2, and so on, which together form the matrix U. They are called singular vectors — these are the left singular vectors, because they sit on the left — and you can do the same thing for V. You do that because ultimately you will use them to build orthonormal bases; that is a popular notation. Good. So you can do this decomposition. Why should it be useful?
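All the stated properties can be checked directly with numpy's `np.linalg.svd`; the test matrix here is random, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3
A = rng.normal(size=(m, n))

# "thin" SVD: U is m x k, S has k entries, V-dagger is k x n, with k = min(m, n)
U, s, Vh = np.linalg.svd(A, full_matrices=False)
k = min(m, n)

assert U.shape == (m, k) and s.shape == (k,) and Vh.shape == (k, n)
assert np.allclose(U.T @ U, np.eye(k))          # columns of U are orthonormal
assert np.allclose(Vh @ Vh.T, np.eye(k))        # rows of V-dagger are orthonormal
assert np.all(s >= 0)                           # singular values are non-negative
assert np.allclose(U @ np.diag(s) @ Vh, A)      # A = U S V-dagger exactly
# the number of nonzero singular values gives the rank of A
assert np.count_nonzero(s > 1e-12) == np.linalg.matrix_rank(A)
```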
Well, let's first compare it to the decomposition which you all definitely know, the eigenvalue decomposition of a matrix A, where the vectors in U are now the eigenvectors and Λ is diagonal, containing as diagonal entries the eigenvalues λ_i. You can connect the two if you square A. Well, A is not necessarily a square matrix, so what you do is look at A†A or AA†. If you work it all out, doing the singular value decomposition, what you ultimately find is that the squares of the singular values of A are the eigenvalues of AA† (and likewise of A†A) — this is the connection. What changes between the two is the eigenvectors: the eigenvectors of AA† are the left singular vectors, and the eigenvectors of A†A are the right singular vectors. We won't need that detail, but you see there is a close connection. Just for those of you who do numerics: you might think of getting the singular values from the eigenvalue decomposition of A†A, but squaring a matrix is numerically always bad, because it makes the condition number worse; on the other hand, the algorithms we have nowadays for singular value decompositions are not as efficient as the best ones we have for eigenvalue decompositions. So by all means, if you want eigenvalues, use an eigensolver; but more generally you will have to turn to singular values. Okay. Now, this is the most complicated slide of the entire presentation, and here I will go to the blackboard — let me take this along, perhaps, and leave this on the screen — and I will now use the singular value decomposition, step by step, to show you that you can decompose any state into a matrix product state. So what do we do?
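The stated connection can be checked in a few lines (again with a random matrix for illustration): the eigenvalues of the Hermitian matrix AA† are exactly the squares of the singular values of A.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 6))                     # a non-square real matrix

s = np.linalg.svd(A, compute_uv=False)          # singular values, descending

# A A^T is symmetric (Hermitian for real A); its eigenvalues are the s_i^2
evals = np.linalg.eigvalsh(A @ A.T)             # returned in ascending order
assert np.allclose(np.sort(evals)[::-1], s**2)
```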
We take these exponentially many coefficients c_{σ1...σL}. I think of these coefficients as a matrix — what did I call it here? Ψ. And I do an operation that those of you who are good at MATLAB or Python know as reshaping. You can interpret the coefficients as one really huge vector, where this entire multi-index (σ1, ..., σL) forms the index of each entry. But I can also say: this piece is the first row of a matrix, this piece is the second row, and so on and so forth. So I can turn the vector into an object Ψ_{σ1,(σ2...σL)} — this is why I insist on the comma — and that is what I call reshaping; these languages actually have commands to do it. So this is now a matrix, and you can of course apply your singular value decomposition to it. That's the matrix A I had before, but unfortunately I now need the letter A for something else, to stay with standard MPS notation. So what I have is the matrix U, then S, then the matrix V†. Now let's think about the indices: the matrix U will carry the index σ1, and the matrix V† will carry all the rest, σ2 to σL. And because these factors are multiplied together, I would in principle have a double sum, but as S is diagonal it collapses to a single sum; let me call that summation index a1. Okay, so that's my first step. What I do next is another operation, which is very important.
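This first reshape-and-SVD step looks, in NumPy, like the following (a minimal sketch with an assumed 4-site, d = 2 state; row-major reshaping makes σ1 the slowest index, exactly as on the blackboard):

```python
import numpy as np

L, d = 4, 2                        # 4 sites, local dimension 2 (spin-1/2, say)
rng = np.random.default_rng(2)
psi = rng.standard_normal(d**L)    # the d^L coefficients c_{sigma_1 ... sigma_L}

# Reshape: sigma_1 becomes the row index, (sigma_2, ..., sigma_L) the column index
Psi = psi.reshape(d, d**(L - 1))

# One SVD splits off the first site; since S is diagonal, the double sum
# over row/column indices of S collapses to a single sum over a_1
U, s, Vh = np.linalg.svd(Psi, full_matrices=False)
```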
Namely, I do something which is called slicing. All one is doing in this field is more or less reshaping, SVD, and slicing — if you understand those three operations, you are basically in business. So take the following picture: I have a matrix whose columns, in a matrix multiplication, will be multiplied with whatever comes further on in my matrix ansatz. What I can do is slice this matrix into a bunch of matrices, these rectangular stripes: matrix one, matrix two, matrix three, and so on, up to d — let's call the number d right away, because it is related to the local state dimension. Now I do this to the matrix U. I slice in σ1, and let me introduce an artificial dummy index 1 here — I can do that, it doesn't change anything. I write U as a set of matrices which I call A^{σ1}, one for each value of σ1 from 1 up to d, and these matrices have dimension (1, a1). That's basically what's going on here. In this special case — we will see from step two onwards — we won't have a dummy index anymore, but a real one. What you see here is a bunch of row vectors, and as you may remember, I told you the first matrix in a matrix product state has to be a row vector, otherwise I don't get a scalar in the end. Is this slicing step — where I had one matrix and now have a bunch of d of them — understood? Ask me again if not; from teaching this several times, this is the one step which I always find, with hindsight,
that's where most people stumble. It is in some sense very simple, but usually you haven't seen something like it before. Okay, I will make the second step. [Question from the audience.] What I will do is perform the next step, where the slicing appears again, and explain it again there, because the first slicing is a little bit special — so hold out for a second. What I can do now is rearrange this result. I have my sum over a1; the first factor is now written as A^{σ1}_{1,a1}, and all the rest I multiply together into one object, which, if I want, I can call c_{a1 σ2 ... σL}. And now I do the reshaping again on this object — don't worry, we will not fill the entire blackboard. I turn it into a matrix, and this time I put the first two indices together as a row index: Ψ_{(a1 σ2),(σ3...σL)}. That is the reshaping. Now comes the SVD. I will put the sum of the SVD right here: A^{σ1}_{1,a1}, and from the singular value decomposition I get U_{(a1 σ2),a2}, S_{a2,a2}, and V†_{a2,(σ3...σL)} carrying the entire rest. That is the SVD step. And now I explain again how the slicing works. Let me first write down the part that stays unchanged; I multiply S and V† together right away to give c_{a2 σ3 σ4 ... σL}. And now comes the slicing step: I have a matrix U of the following form. Here I have a2 as the column index, and then I have the double row index (a1, σ2). And now I'll do it in the following way.
I set here σ2 = 1 down to σ2 = d, running through one to d, and within each of these blocks a1 runs from 1 up to its maximum — which here happens to be d, and d² at the next step, but that is not so important now. So I split up the multi-index in this way, and then this first block is A^{σ2=1}_{a1,a2}, the next one is A^{σ2=2}_{a1,a2}, and so on, and I get this set of d matrices which replace the big one. [Audience comment: the dimension of a1 here happens to be d.] Yes, very good — and a2 is then at most d², and so on and so forth; thanks a lot. So what we get here is A^{σ2}_{a1,a2}. And now we are essentially done, because you can continue doing this as long as you wish. I will not go through the last step, which again is a little bit special. You end up with A^{σ1}_{1,a1} A^{σ2}_{a1,a2} ... A^{σ_{L-1}}_{a_{L-2},a_{L-1}} A^{σL}_{a_{L-1},1}, and this final index 1 is really there so that the product gives a scalar. And in fact you now see where the dimensions come from — this is where I just made that little mistake: this first bond has dimension d, the next d², and coming from the other end that one has d, that one d². So the powers grow towards the center; this simply follows from the dimensions that a singular value decomposition produces. Which means that at the center, if the system is large, the bond dimension will be exponentially large. So in principle this is now the mathematical proof that you can decompose any state, but you could argue that in some sense it is useless, because you are back again to an exponentially large number of coefficients. What do you do about it? Of course, the entire point will be about truncation. But to close this off: what we have here is now a bunch of matrix-matrix multiplications.
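The whole reshape → SVD → slice iteration just described can be condensed into a short loop. This is a minimal sketch of the exact (untruncated) decomposition, not any particular library's routine — `mps_decompose` is a name I made up — with the d slices A^{σ} stored as `A[l][:, sigma, :]`:

```python
import numpy as np

def mps_decompose(psi, d, L):
    """Exact MPS decomposition by repeated reshape -> SVD -> slice.
    Returns a list of L tensors A[l] of shape (chi_left, d, chi_right);
    the d slices A[l][:, sigma, :] are the matrices A^{sigma_l}."""
    As, chi = [], 1                        # chi = current left bond dimension
    rest = np.asarray(psi).reshape(1, -1)
    for _ in range(L):
        rest = rest.reshape(chi * d, -1)   # merge left bond with sigma_l (reshape)
        U, s, Vh = np.linalg.svd(rest, full_matrices=False)   # SVD
        As.append(U.reshape(chi, d, s.size))                  # slice U into d matrices
        chi = s.size
        rest = np.diag(s) @ Vh             # push S V† to the right
    As[-1] = As[-1] * rest[0, 0]           # absorb the leftover 1x1 scalar
    return As

# Check on a random 4-site, d = 2 state: the matrix product reproduces psi
rng = np.random.default_rng(0)
psi = rng.standard_normal(2**4)
As = mps_decompose(psi, d=2, L=4)
recon = np.einsum('aib,bjc,ckd,dle->ijkl', *As).reshape(-1)
assert np.allclose(recon, psi)
# Bond dimensions grow as d, d^2, ... towards the center: here 2, 4, 2, 1
assert [A.shape[2] for A in As] == [2, 4, 2, 1]
```

Note how the exponential growth of the bond dimension towards the center appears automatically in the shapes the SVD produces; truncation, discussed later, is what makes the ansatz practical.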
So what we get is simply A^{σ1} to A^{σL} as a product of matrices, and this is what we started out from; this is what I wanted to show at this point. So why did I go through it in so much detail? Looking at the clock, I get a little bit nervous. The point is that these techniques — reshaping matrices and vectors by rearranging the indices between rows and columns — you use all the time. Then you use the SVD all the time, and then you use this slicing. And of course the reverse is also needed: given these matrices, it may be interesting to put them back together into one big matrix, so basically you unslice; that is also an operation one needs. But very clearly, apart from matrix-matrix multiplications, which obviously pop up in this business all the time, that's all you need. So in some sense I could say: until three o'clock, work out the following mathematical expressions, because there will not be much more than that. Now I want to use this to make the connection. [Question from the audience.] You know, there is a ventilation system in here and it's so loud that I actually have to come up to understand you. No — these are the local states on a lattice site; σ for a spin-1/2 would be either spin up or spin down. Yes, sure. Let me put that on the blackboard, because it's interesting for everyone. For electrons, in the typical Hubbard model situation, you would have the states: empty, up, down, and double occupancy.
So that would correspond to a case d = 4. And if you need multiple orbitals, you divide them up, so that one site is split into several orbitals. That's actually what you should do. As a side remark, what you should not do is make one fat site where you put all the orbitals of an atom onto one site, because then this d would become quite large, and it turns out that in all these algorithms the system length is not the problem — large local dimensions are. So when in doubt, divide it up if you can. We will see this, probably tomorrow, where I will have an example with spin ladders, where you can ask: do I make a rung into one big site, or do I really keep the sites separate and go zigzag or something? You will see this point there. Okay, any further questions at this stage? For bosons, by the way, you run into the problem that your local lattice states will be the occupation numbers |n⟩, so the dimension is formally infinite. Often repulsive interactions save you, because they suppress large occupancies, so you can say: I cap d at some d_max, say 10, if you know that basically you are done at five or six bosons. Then you check your calculation, vary this number a little bit, and see whether it works or not. If you really have extremely large occupation numbers, as they can pop up for example in polaron problems, there are techniques for compressing that; if anyone is interested, just come by or drop me an email. This is of course one of the big constraints that may be a problem in the typical optical-lattice Hamiltonians.
Okay, let me, before the break, just finish off this slide about the Schmidt decomposition, because it is important for making the connection between this very formal ansatz and something which is very useful in what follows. This goes back to 1910 or 1911. Think of the lattice, pictured here in 1D, as a universe divided into two subsystems A and B which meet here: A has length l, and B runs from site l+1 to L. Of course I can introduce local orthonormal bases on A and B. Then the most general state you can write down is this Ψ_ij, where every state on A basically talks to every state on B. What you can do, of course, is say: oh, this is a matrix, and do a singular value decomposition on Ψ_ij. Then basically here are the singular values, and the matrices U and V† I hide, in the sense that they are unitary — or rather they are transformations, which I may have to extend to be unitary — on the original bases i and j. Because of their properties, the states |α⟩ which I form are still orthonormal sets, which I can extend to bases, and I have one set of states |α⟩ on subsystem A and one set on subsystem B, and they are orthonormal. So I have this beautiful representation of the state, the so-called Schmidt decomposition, if you haven't seen it before. Now, instead of every state being connected to every other one, the states only talk to each other pairwise — and of course they don't have to be the same, because A and B can be totally different. But you have brought the system into a form where states talk to each other pairwise. This will be extremely useful in a second, but you see that the way to get it is the singular value decomposition. Okay, why do we need it?
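Numerically, the Schmidt decomposition is exactly one SVD call on the coefficient matrix Ψ_ij (a small sketch, assuming a 2 × 3 bipartition of a normalized state):

```python
import numpy as np

# A normalized state on A (dim 2) x B (dim 3), written as the matrix Psi_ij
rng = np.random.default_rng(3)
Psi = rng.standard_normal((2, 3))
Psi /= np.linalg.norm(Psi)

U, s, Vh = np.linalg.svd(Psi, full_matrices=False)

# Schmidt form: |psi> = sum_alpha s_alpha |alpha>_A |alpha>_B,
# with |alpha>_A the columns of U and |alpha>_B the rows of V† -- states
# now talk to each other only pairwise
recon = sum(s[a] * np.outer(U[:, a], Vh[a, :]) for a in range(s.size))
assert np.allclose(recon, Psi)
# For a normalized state the Schmidt weights s_alpha^2 sum to one
assert np.isclose(np.sum(s**2), 1.0)
```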
Let me get through this one slide, but then, I promise, we stop for lunch. As I said, entanglement is the big thing. So what is entanglement — more precisely, bipartite entanglement, which is what we understand well in these states? The general way of measuring it is this: you form the reduced density matrix of the system that is entangled with its environment. This was my state, this is the density operator of the total state, and I form the reduced density matrix by tracing out all the degrees of freedom of the environment — this could be A and B, or vice versa. Then the entanglement is simply given by the von Neumann entropy of ρ, the reduced density matrix of the subsystem. If you write it in terms of the eigenvalues, the weights w_α of the reduced density matrix, it takes this form — an expression which, in one way or another, I think you have seen very often. But now, what does this expression imply for these matrix product states? Let's make an arbitrary bipartition, and say the dimension of the matrices that match at the cut is m; they have to be the same in row and column so that I can multiply them together. Because the product of all the matrices on one side spans the system and the product on the other side spans the environment, the Schmidt decomposition can have at most m terms: row and column indices of matrices, when multiplied together, talk to each other pairwise, just as in the Schmidt decomposition. So our state, after being Schmidt decomposed, will have this form; the important point is not the detailed values here, but that there are at most m contributions. The reduced density operator after tracing out then looks like this, and this gives the entanglement. And now comes the point: what is the maximum value this expression can take? You can work it out mathematically.
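Both routes to the entanglement entropy — diagonalizing the reduced density matrix ρ_A, or squaring the Schmidt (singular) values — give the same number, as a quick check shows (my sketch, natural-logarithm convention):

```python
import numpy as np

rng = np.random.default_rng(4)
Psi = rng.standard_normal((4, 4))          # state coefficients Psi_ij on A x B
Psi /= np.linalg.norm(Psi)

# Route 1: reduced density matrix rho_A = Tr_B |psi><psi| = Psi Psi†,
# then the von Neumann entropy from its eigenvalues w_alpha
rho_A = Psi @ Psi.conj().T
w = np.linalg.eigvalsh(rho_A)
w = w[w > 1e-12]                           # drop numerically vanishing weights
S_rho = -np.sum(w * np.log(w))

# Route 2: the same entropy directly from the squared singular values of Psi
p = np.linalg.svd(Psi, compute_uv=False)**2
S_svd = -np.sum(p * np.log(p))

assert np.isclose(S_rho, S_svd)
```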
Maybe you know it by heart: the maximum is reached for a density matrix where all the weights are exactly the same — where the information, the statistics, is spread out as much as possible, a maximum entropy principle in some sense. If you work that out: if there are m weights, then w_α = 1/m, and the sum gives m times (1/m) log m, which is log m. Which means — and this is basically the take-home message for the first part of the lecture — that this kind of ansatz can at most encode an entanglement which is bounded by the logarithm of the matrix dimension. That will, in some sense, define what can and cannot be done with this ansatz, and we will see that at three o'clock. Enjoy your lunch now. Thanks a lot.
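For those who want to see the take-home bound once in code: the flat spectrum w_α = 1/m indeed gives S = log m, the maximal entanglement an MPS with bond dimension m can carry (a two-line check, my addition, natural logarithm):

```python
import numpy as np

m = 8                                   # bond (matrix) dimension at the cut
w = np.full(m, 1.0 / m)                 # flat spectrum: all weights equal
S = -np.sum(w * np.log(w))
assert np.isclose(S, np.log(m))         # maximal entanglement is log m
```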