Thank you. It's a pleasure to be here, and to see so many young people learning all these subjects. So my topic is the density matrix renormalization group. Originally I was scheduled to come somewhat later in the school, because I wanted to connect up with another conference that's later in the week, so I had it moved here. But then I realized that there are certain things that normally come before DMRG, things like exact diagonalization, that naturally help you with DMRG. And since I'm right at the beginning, I'm going to start with an overview of a set of things that lead up to DMRG. So I'm going to start out with topics that probably nowadays should be taught at the undergraduate level. But because our quantum mechanics courses are getting a little bit out of date, they usually aren't. When I've taught senior quantum mechanics for our physics majors, I've started introducing some of these things, and they're fairly straightforward topics. Now, it turns out that some of what I'm going to tell you I spend a few weeks on when I teach it at the undergraduate level, and we don't have that much time here. Normally I give this on the board, and one of the things about giving a lecture on the board is that it slows you down so that people have time to think, which is really important, because it's all about thinking it through rather than just the words coming out of my mouth. We don't really have time for that. So I'm doing something sort of experimental: I tried to write lecture notes with my best handwriting, which isn't very good, and I scanned them with my iPad, and we'll see them. It'll go faster than it would if I were writing on the board, and you'll have the notes afterwards, but it's intended to be a lecture. These notes start out with fairly simple things, so hopefully you can keep up, because it's not really so new, but there are probably things that a lot of you didn't hear about in your classes. So we'll start out just talking about sets of spins, mostly spin one-halves, and Hamiltonians for them, doing exact diagonalization for a cluster of those spins. Then we'll talk about ideas related to entanglement, something called the Schmidt decomposition. For that you need a little matrix linear algebra tool called the singular value decomposition, which should be taught at the sophomore level but often isn't, so I'll explain what that is. Then I'll tell you about the entanglement entropy and something called the area law. Once I'm done with all of this background, the idea of matrix product states will be a lot easier for you than it was when we first started doing this. It was harder to understand back then, but with more recent ideas, many of them coming from the field of quantum information, a lot of the concepts have gotten easier. And then I'll lead up to DMRG; the later part of this will come in tomorrow's lectures. Okay. Then, to have this all sink in, you really need to do exercises, and a lot of the exercises should be computational. I've been a C++ programmer since the early 90s and have done pretty well with it, but while C++ has gotten more capable, it's also gotten more and more complicated. So just recently I've been using the language Julia, which is a pretty new language with some things in common with Python and Matlab, and I find it quite nice. And it's a language that has a really fast startup time.
So I can just start typing some lines in it, and you'll be able to see what's going on, and you'll be able to do simple things really fast with high-level commands. So I'm going to give exercises. Julia is free, it's faster than the other choices, it has good online documentation, and it's quick to download, so you can get started right away. We will have exercises in it, so you'll be forced to learn a little bit of Julia, and hopefully you'll like it as much as I do and play around with it. Now, once we get into the more state-of-the-art DMRG calculations, things get a little bit complicated, and you need software that's specifically designed to help you do matrix product states, DMRG, and tensor networks. So we have a library that is online, at the website itensor.org, and more than a dozen (I don't know how many, maybe several dozen) groups are using it. I'm going to show you how that works. It introduces some nice notation and makes some programs extremely easy to write for this sort of calculation, and it also gives you state-of-the-art efficiency. So we'll do some of that towards the end. Okay. But let me start with an overview of where the field is. I want to talk not about all numerical methods used in condensed matter, but just the ones targeted at solving the Schrödinger equation for a set of many electrons; and since spin systems are normally made up of electrons, that also includes spin systems. Okay. So there are two general areas, and people usually work in one or the other and don't cross between them very much, although one of the things I'm doing now is crossing between them. The first area is where you are trying to understand the chemical details of your material, so you include all of the electrons and the details of the orbitals, and you usually use density functional theory or variations on DFT. But those techniques are really only reliable for weak correlation, so with them you look either at materials that are only weakly correlated or at certain properties of strongly correlated materials. The other side of the field is people who are really interested in the strong correlation; the most famous example of a strongly correlated material would be the high-temperature superconductors. To study those things, you need something that tries to treat the Schrödinger equation more exactly, and the price you normally have to pay to use these techniques is that you have to simplify the Hamiltonian. You can't include all the electrons, and you can't include all of the interaction terms or try to get them exactly right. So we write down very simple model Hamiltonians, and then we try to solve those model Hamiltonians very accurately. The simplest example of this sort of thing is the Ising model for magnetic systems, which was introduced as a super-oversimplified model way back in the early part of the 1900s, and yet by solving this model precisely, people got the first real understanding of phase transitions. Nowadays we have models like the Hubbard model, various forms of the Heisenberg model, and the t-J model. These are models that have strong correlation, and we're still working at trying to solve these systems well enough to understand what goes on in strong correlation.
So in the model Hamiltonian methods we consider just the active electrons or spins. I should mention that there are methods that try to combine the two things, so you may have heard of, or you will hear about, things like density functional theory plus dynamical mean field theory, and there are a number of other approaches that try to bridge the gap. Okay. You'll have other people talking about DFT-related things, so I'm going to focus on this model Hamiltonian area. To study this area we then have to pick which algorithm we're going to use, and again, people tend to get specialized in one or a few types of algorithms. One of the oldest is quantum Monte Carlo, and quantum Monte Carlo is an extremely useful technique. In fact, one of the first Monte Carlo methods, I think, was mentioned this morning, the Metropolis algorithm paper, and applying those ideas to quantum mechanics followed pretty soon afterwards. So those are powerful techniques, but they also have significant limitations. ED stands for exact diagonalization; this gives you the exact answer, it's just that it's limited to a very small system size. Dynamical mean field theory I won't say too much about, but you may hear about it later. And I'll focus on DMRG. Then there's a set of related techniques, cousins of DMRG, and the broader field that includes DMRG is called tensor networks. These are all based on the same type of ideas: DMRG is one type of simple tensor network that's optimized for one dimension, and there are other ones that are more suited to higher dimensions. This is one of the newest types of methods, and it's generated a lot of excitement; tensor networks have generated excitement even in things like quantum gravity, if you can believe that one of our condensed matter techniques is actually doing something in quantum gravity. But anyway, that's a very interesting field. The other thing is that it's connected to quantum information, and so we'll talk a little bit about some of the chapter-one ideas of quantum information, and that will actually help us understand the DMRG algorithm better. Okay, so the background that we'll cover: exact diagonalization, model systems, and some quantum information and entanglement. Those will be mostly the topics today, and then I'll throw in a little bit of typing a few lines in Julia, just to whet your appetite. There will be some exercises that I give out as we go along, which you can try to do as soon as you can. They're mostly computational, but we'll have one in-class one where I'm going to ask everybody to write down a simple spin Hamiltonian for me. That'll come up pretty soon, and it will try to get everybody thinking and on the same page. Tomorrow we'll go on to the more advanced topics. Okay, so let's start with exact diagonalization of small clusters of spin one-halves. The standard quantum mechanics books tend to tell you about at most two spins: you add their angular momentum, you learn about the Pauli spin matrices. But do you ever get four spins in a row and ask, what's the ground state of that system? Normally you don't. So we're going to do little diagonalizations like that, which is now a key part of condensed matter physics. Okay, so let me just start reviewing.
We remember the Pauli matrices for spin one-half. The spin operator, the vector S, is h-bar over two times the vector sigma, and the vector sigma is a vector whose x, y, and z components are each matrices. This is all written in the z basis, so we think of the natural directions for the spins to point as up or down, and the sigma-z operator is diagonal in this basis. The first element in a vector is the up part and the second one is the down part. So here's a little spinor: the up entry is a and the down entry is b, and it represents the wave function with coefficient a times up plus b times down. Okay, but then we have the other components of the spin: sigma-x has the off-diagonal ones and sigma-y has the off-diagonal i's. The off-diagonal pieces mean that we have to do more work. If your Hamiltonian is diagonal, you're already done; you can just look at the diagonal elements and read off the eigenvalues. But we have off-diagonal elements here. Okay, so sigma-z is diagonal, so sigma-z times an up gives you an up; sigma-x times an up gives you a down, so it's not diagonal. Okay, so now let's go to two spin one-halves, and I'm going to imagine that I write down a Hamiltonian H which is S1 dot S2: the spin operator for spin one dotted with the spin operator for spin two. They're vectors, so I take the dot product. And I'd like to know: what are the energy levels of this two-spin system? Well, there's a way to shortcut the answer with an algebraic trick, which I'll show you first, but it's not as general a trick as you'd like, so then we'll do it in a more systematic way. Starting with the algebraic trick: S-total is the vector S1 plus the vector S2, which is the definition of the total spin, and we should keep in mind that spin operators on different sites commute. Then S-total squared is the dot product of the S-total operator with itself, so we expand out S1 and S2, do a little bit of algebra, and get this expression. And the thing about this expression is that it has our Hamiltonian in it; that's where the trick comes in, because ordinarily you square something and the Hamiltonian doesn't magically pop out. We also know from the basic properties of angular momentum that the expectation value of S squared is S times S plus one, where S is the total angular momentum quantum number. If we plug into this expression for spin one-half, we get S squared equals three-quarters (I'm going to set h-bar equal to one from now on); for spin zero you get S squared equals zero, and for spin one you get S squared equals two. So I have all the pieces, and I can plug into this equation, with S times S plus one taking these values, and solve for S1 dot S2 in terms of the total angular momentum quantum number of the two spins. Of course, the angular momentum quantum number for a single spin one-half is fixed at one-half. So what are the possible values of the total angular momentum for two spin one-halves? They can be in a spin-zero state or a spin-one state. For the spin-zero state you plug in and get minus three-quarters; for the spin-one state you plug in and get plus one-quarter. And so that's the shortcut: just a little bit of algebra to find the energy levels of the two spins for this Hamiltonian.
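Collecting the algebra just described into one place (this is only the steps above written out; h-bar is set to one, and S1 squared equals S2 squared equals 3/4 for spin one-half):

```latex
\mathbf{S}_{\mathrm{tot}}^2 = (\mathbf{S}_1 + \mathbf{S}_2)^2
  = \mathbf{S}_1^2 + \mathbf{S}_2^2 + 2\,\mathbf{S}_1\cdot\mathbf{S}_2
  = \tfrac{3}{4} + \tfrac{3}{4} + 2\,\mathbf{S}_1\cdot\mathbf{S}_2 ,
\qquad
\mathbf{S}_1\cdot\mathbf{S}_2
  = \tfrac{1}{2}\Bigl[S_{\mathrm{tot}}(S_{\mathrm{tot}}+1) - \tfrac{3}{2}\Bigr]
  = \begin{cases} -\tfrac{3}{4}, & S_{\mathrm{tot}} = 0 \ \text{(singlet)} \\
                  +\tfrac{1}{4}, & S_{\mathrm{tot}} = 1 \ \text{(triplet)} \end{cases}
```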
And the triplet is triply degenerate, so we have an energy structure that looks like this, assuming that, if there were a J in front, it was positive. Okay, so now a more general approach that works for more spins is to write H as a matrix in a basis. What's our basis? For two spins you have to list every possible state, with every up or down value for each spin. For these kets, the left arrow is spin one and the right arrow is spin two, and I have these four possibilities, just like in binary arithmetic, where with two digits you have four possible numbers. Then a wave function is a vector: you just give the complex coefficient in front of each of these basis states, and that gives you any possible wave function. The length of the vector is four here, but if I had N spins it would be a vector of length 2 to the N. An operator is a matrix; it would be four by four here, and for N spins it would be 2^N by 2^N. For an operator you have to write every possible basis state twice: one choice gives you the row and the other gives you the column, and then you take the matrix element of the operator sandwiched between those two basis states, and that gives you the expression for that operator. Okay, so we take S to be one-half sigma, with h-bar equal to one. For doing the Heisenberg model, which involves the S dot S interactions, it's convenient to avoid the Sx and Sy pieces and instead define the spin raising and lowering operators, S-plus and S-minus, as Sx plus or minus i Sy. That gives you: Sz is again diagonal, but with one-half and minus one-half on the diagonal; S-plus has a one in one corner and the rest zeros; and S-minus is its transpose. We can use these matrix forms to evaluate any particular operation of one of these operators on a basis state. These operators act on only one spin at a time. So say we apply S-plus to the down state: we write down the matrix for S-plus, we write down the spinor for down (the first index is up, the second is down, so I put a one on the bottom), and then I multiply, and I see that I get this (1, 0) vector, which is the up state. So I've evaluated what this is. Then you can put any other state, up or down, on the other side, use this expression, and find the matrix elements of, in this case, S-plus, which we already know. S-plus on up, if you do that, you find is zero; S-minus on down is zero, since it's trying to lower something that's already at the bottom; and S-minus on up is down. Okay, so a little bit of algebra gives us a better expression for S1 dot S2 for numerical calculations. We'll use this z basis, and we leave the Sz Sz part alone; this is the nice part, because in the z basis it's diagonal, so it only contributes diagonal elements to Hamiltonian matrices. Then the Sx Sx plus Sy Sy part gives you one-half times (S-plus S-minus plus S-minus S-plus). What this operator does is flip two spins that point in opposite directions: if you have an up and a down, one of these two terms will flip them the other way, with the factor of one-half in front. If it's two ups, one of the S-plus or S-minus factors gives you zero, and if it's two downs, the other one gives you zero.
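To make this concrete, here is a minimal Julia sketch (my own illustration, not code from the lecture) that writes down these single-site matrices and then builds the two-spin S1 dot S2 in the form just given, using Kronecker products, reproducing the -3/4 and +1/4 levels:

```julia
using LinearAlgebra

# Single-site spin-1/2 operators in the z basis (hbar = 1); first slot = up, second = down
Sz = [0.5  0.0; 0.0 -0.5]
Sp = [0.0  1.0; 0.0  0.0]   # S+ (raising)
Sm = [0.0  0.0; 1.0  0.0]   # S- (lowering), the transpose of S+

up = [1.0, 0.0]
dn = [0.0, 1.0]
Sp * dn                      # gives [1.0, 0.0], i.e. S+ |down> = |up>

# Two-spin S1.S2 = S1z S2z + (1/2)(S1+ S2- + S1- S2+), built with Kronecker products
H = kron(Sz, Sz) + 0.5 * (kron(Sp, Sm) + kron(Sm, Sp))

eigvals(H)                   # -0.75 once (the singlet) and 0.25 three times (the triplet)
```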
Okay, the other thing to keep in mind is that if you have a system with several spins, each of these spin operators is attached to a particular site; it only acts on that particular site, and the state of any other spin is left alone. So if I take S1-plus S2-minus acting on this three-spin state, down up up, first of all I can always expand this three-spin ket into a product of three kets: a down ket times an up ket times an up ket. Then I can move the S1 operator next to its spin (this goes along with spin operators on different sites commuting), and I can move the S2 over to its spin, and the third spin just sits there at the end with nothing acting on it. Then S1-plus on the down gives you an up, S2-minus on the up gives you a down, and you get this state. Okay, so you can see an exercise has appeared here, so get ready, because I'm going to have you actually work on it, with me not talking, for about five minutes. Let me make sure I don't... oops, I skipped way down; I don't want to give away the answer. Okay, so I've given you the pieces that you need to do the following exercise. We already found the energy levels for two spin one-halves, and this is the same problem: we want to show this, for two spins. Okay, there's a typo here: I'd left off the S1 dot S2 right there, so the Hamiltonian has a J in front, the usual coefficient, times S1 dot S2, which is just what we've been writing. I want the Hamiltonian matrix for this two-spin system. These are the basis states for two spins: up up, up down, down up, and down down, and any operator is a matrix in this basis. I'm telling you here that if you use that form of S1 dot S2 from above, you find that the Hamiltonian has a number of zeros along the edges, and then it has only two different numbers, an a and a b, except that the a is sometimes negative. So I want to let you all work for five minutes on showing that, yes, the Hamiltonian matrix looks like this, and second, finding the value of a and the value of b. And then I'll show you the answer. Okay, so five minutes; everybody try to work this out, and I'm going to move this up a little bit so you can see some of the crib-sheet sort of notes. Okay, so I think this is the key stuff.
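By the way, the three-spin example from a moment ago, S1-plus S2-minus turning down-up-up into up-down-up, is easy to check numerically with Kronecker products. A small sketch of my own (not from the lecture): an operator on site i gets kron'ed with identity matrices on the other sites, with site one sitting in the leftmost slot.

```julia
using LinearAlgebra

Sp = [0.0 1.0; 0.0 0.0];  Sm = [0.0 0.0; 1.0 0.0];  Id = Matrix(1.0I, 2, 2)
up = [1.0, 0.0];  dn = [0.0, 1.0]

psi = kron(dn, kron(up, up))          # the state |down up up>
op  = kron(Sp, kron(Sm, Id))          # S1+ S2-, with the identity on site 3
op * psi == kron(up, kron(dn, up))    # true: the result is |up down up>
```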
This is the key stuff. So everybody, let's see, write down this basis: the basis has up up, up down, down up, and down down, and I guess I'd better shrink this so you can see more. Okay, so everybody think and write. (Have you fixed the typo? Hold that thought.) Okay, why doesn't everybody compare with the person next to you; match up with the person next to you and see if you got the same answer, or see if you're stuck at the same place. Everybody talk to the people around you and see if you know something that the other person doesn't. Okay, we'll have to move on. So first let me show you one way to look at this problem. It's natural to do a row at a time or a column at a time, so you can do something like hit the Hamiltonian on up-up and see which terms you get; you should get some terms involving the basis states, and you read off the coefficients, and if a term is missing, then that's a zero. So for instance, if we hit up-up with the Hamiltonian (there's the Hamiltonian we're looking at, right here), the more complicated part is this plus-minus part, but on up-up one of those pluses or minuses is going to kill it, so that term gives you nothing. So all of the expression for up-up comes from the Sz Sz piece, and Sz Sz for up-up is one-half times one-half, so a is one-quarter times J (I said one-half at first; one-fourth, thank you). Okay, so here's the answer: a is one-fourth and b is one-half. And that's a pretty simple thing; it's pretty simple algebra, and if you're not used to it, it just takes a little bit of practice to get quite good at it. We can diagonalize this matrix. The first and the last basis states have no off-diagonal elements, so those eigenvalues are already given: two eigenvalues are one-quarter, the first and the last, and the eigenvectors are the up-up and the down-down states. Then we have a little two-by-two matrix in the middle that you can diagonalize, and it's a two-by-two matrix with a symmetry, so the eigenvectors have to be (1, 1) or (1, -1), with a one over root two in front. So you can diagonalize this analytically pretty quickly. And what do we find? We find that the singlet state is the one over root two times (1, -1), with the zeros on top and bottom, so the singlet state is one over root two times (up down minus down up). That's the spin-zero state, and it's got energy minus three-quarters (there should be a J). For the triplet we already have two of the pieces, and we know their energy has to be one-quarter, where I've just left off the J; here are the first two eigenvectors, and the third has the middle two entries with a plus sign on both. Okay, so let me pause a little bit and show you the very simplest thing with Julia. Here I have what looks like a blank Linux screen, but this is a Mac, and Macs underneath are Unix, which is just like Linux, so you get terminal windows that look just like in the tutorial. I've got Julia installed, so I type julia, and there it is, and now I have an interactive session. I'm going to write down this H matrix, so I'll do h equals. One of the nice things about Julia is that you get to start typing the key stuff right away; there are none of the headers and such. And there's a nice notation for matrices. Okay, I'll have to do this from memory; let's see.
The matrix was: one-quarter, so I write 0.25, then 0.0, 0.0, 0.0, with spaces in between the numbers. Then the next row: 0.0, then, what was this one, minus one-quarter, then the off-diagonal one, which was one-half, so 0.5, then 0.0. Then another row: 0.0, 0.5, minus one-quarter again, 0.0. And the last row: 0.0, 0.0, 0.0, and one-quarter again. Hit enter. Okay, so it's created a matrix and printed it back out, so I can see it's all fine. That's how you enter a matrix; you just know the elements. And then to diagonalize it: eigfact of h diagonalizes it and gives you the eigenvalues and eigenvectors. So the eigenvalues are -0.75, 0.25, 0.25, 0.25. Of course I've left off the J, because this isn't Mathematica or Maple where you can do things symbolically; you have to do it numerically, but it's got a lot of built-in capabilities. It first lists the eigenvalues, and then it lists the eigenvectors as columns, lined up with the eigenvalues, so the ground state has the minus one over root two and plus one over root two entries in the middle, and so on; you can see the other ones. Okay, so that's just a little taste of Julia. Oh, let me show you one more thing. Where is my... let's do Safari. Let's just go up here and type julialang.org. You can see this is where you can download it; on the Linux workstations here it's been downloaded and tested. You can go to the docs and find all of your basic commands, and you'll use this page a lot. There's a search here, so I can type "eigenvalue" and find listings. It only gave me one; sometimes these searches don't give you everything right away, but there are actually several different eigenvalue routines, and lots of built-in linear algebra. Basically you can search through this; if you didn't remember that eigfact call, you could look through here. Here's the first of the routines, which computes eigenvalues and eigenvectors of a matrix and points you to eigfact for details on the balance keyword argument. So you can learn all the commands this way. Okay, let me now go back; here were the two key commands that I used in Julia.
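For reference, here is roughly what that little session looks like written out as a script (a sketch from memory of the demo, not a verbatim capture; in current Julia the call is eigen from the LinearAlgebra standard library, while eigfact was the older name):

```julia
using LinearAlgebra

# Two-spin Heisenberg matrix in the basis (up-up, up-down, down-up, down-down), with J = 1,
# i.e. a = 1/4 and b = 1/2 from the exercise above
H = [ 0.25   0.0    0.0   0.0
      0.0   -0.25   0.5   0.0
      0.0    0.5   -0.25  0.0
      0.0    0.0    0.0   0.25 ]

F = eigen(H)    # older Julia versions: eigfact(H)
F.values        # -0.75, 0.25, 0.25, 0.25
F.vectors       # eigenvectors as columns; the first is the singlet (|up down> - |down up>)/sqrt(2), up to sign
```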
Okay, so let's now move to more than two spins. The first thing to remember is that the Hilbert space is spanned by this set of basis vectors, which has every possible spin value on every site, and that makes it exponentially large in the number of spins: there are 2^N different possibilities, and that's how long the vector is. This is the key difficulty of quantum mechanics for many-particle systems: the problem you're trying to solve is exponentially large, in terms of the size of the Hilbert space, in the number of degrees of freedom, say the number of spins. So the direct approaches we're talking about right now, writing down the Hamiltonian matrix, can't go very far, because the problem is exponentially hard. But it's still very useful to take it as far as we reasonably can, so we'll talk about this for a little while. DMRG is a way of shortcutting so that you can do much larger systems, one of a number of shortcuts, and the shortcuts are essential for doing the size of systems that we need. Okay, so here's psi as a 2^N-long vector once you write down the basis, H is a 2^N by 2^N matrix, and we usually just want to find the lowest eigenvalues of H. But it's just too big: for N = 100, 2^100 is just too large to store on any computer. But we'll see how far we can get with three or four or ten spins. For the N = 2 case we already have the Hamiltonian; we just did that. One thing we should note is the zeros: there's a lot of sparseness showing up already with just two spins, and it gets more and more sparse the more spins you have, so this huge Hamiltonian matrix is going to be almost all zeros. The elements in general look like this: if you're writing it symbolically, you put primes on the indices on one side and no primes on the other, because you need two duplicate sets of indices to write an operator. So it's useful to use the same indices but just put primes on them, meaning the other side of the matrix, and each index is up or down. Okay, so let's look at N = 3. Here's our little system, and we're not putting in periodic boundary conditions, because later on we won't want them. It's just a different problem with periodic boundary conditions, and DMRG doesn't like periodic boundary conditions as much, so we're doing an open chain. Here's the Hamiltonian: it just has two of the S dot S terms. This is the Heisenberg model; the standard form of the Heisenberg model is that you put an S dot S on every nearest-neighbor bond. Now S1 dot S2 only operates on sites one and two, so for instance if I look at the S1 dot S2 term, S3 is left alone. What that means is that for this operator, S3 has to be diagonal; there has to be a Kronecker delta. So when I say "proportional" here, I'm saying this matrix element is zero if s3 is not equal to s3-prime, because there was no spin operator to change it: you start with some value, nothing changes it, so it has to be the same on the other side. The SzSz part also doesn't do any spin flips, so it just gives you a factor of plus or minus one-quarter: if the two spins are parallel it gives you plus one-quarter, and if they're antiparallel, one-half times minus one-half gives you minus one-quarter. Then the S-plus S-minus piece just flips a pair of spins, but it doesn't flip up-up to down-down; it gives zero on up-up or down-down. It takes an up-down and trades the places, and it throws in a factor of one-half. So it turns out that, rather than doing all of the algebra that you started on for the two-spin case, I can write down two simple rules for the Hamiltonian matrix of the N-spin case. Here are the two rules; they're just based on the observations I just made about what each of the terms does. First, the diagonal elements: the S-plus S-minus part doesn't contribute to those, so you just have the SzSz part, which gives you plus or minus one-quarter. But you have a sum of terms in the Hamiltonian, one for each placement of the S dot S, so you have to add up all of those quarters, and they might cancel, because you might have one pair parallel and the next pair antiparallel, and they all go into the same diagonal slot. So the diagonal elements are J over 4 times a simple count.
You take the spin configuration, the one that labels both the row and the column, and you add up the number of nearest-neighbor pairs that are parallel and subtract the number of nearest-neighbor pairs that are antiparallel. That adds up all those pieces, and that's the simple rule for the diagonal elements. The off-diagonal elements are zero if more than two spins are different: if three spins are different, there's no way a single S dot S can change three spins to match the other state. You do have a lot of S dot S terms, but they're added, and each one can only change two spins. So the off-diagonals are zero if more than two spins differ; if two nearest-neighbor spins are flipped, up-down to down-up, then you get a plus one-half, and you don't have to worry about minus signs; otherwise it's zero. (Only two spins flipped?) Yes: if no spins are flipped, it's not off-diagonal, and if just one spin were flipped, well, S dot S involves two spins, so it's going to act on two of them; it can't change just one. Other terms in a Hamiltonian, like an Sx piece, could change one spin, but here we don't have anything like that. And three or four spins can't be flipped; it's only pairs. That tells you that only a small fraction of the elements are nonzero: the row basis state and the column basis state have to be almost identical, with just a few differences between them, to give an off-diagonal element. That's important for doing the calculations, because if something is zero you probably don't have to store it; you can use a sparse form that skips it. Okay, so those are the rules. Mostly it just involves playing around, doing a few more examples and thinking about it a little, rather than some fancy proof, and you'll see that those rules are right. So you can take the N = 3 case. First of all, how big is the matrix? That's the first thing we should know: it's 2^3 by 2^3, so eight by eight. You can see there are a lot of zeros. Then you go through it. Let's do the upper-left one. Oh, and notice what I did: I wrote it in qubit form. Qubits are just like spin one-halves except we call one state a zero and the other a one; here the zero is an up and the one is a down. So the upper-left diagonal element: it's got three ups in a row, so it's got two parallel bonds, and each parallel bond gives you plus one-quarter, so you have two times one-quarter. Let's take a middle one, here's 0 1 0: that's got two antiparallel bonds, so it's minus two times one-quarter. Some of the other ones cancel, because they've got one parallel and one antiparallel bond, so you get some zeros along the diagonal. And then you just go through and ask, for each state, what nearest-neighbor spin flips could happen: a 1 0 pair goes to a 0 1 pair, so a state like 0 1 0 connects to the states with one of its neighboring pairs exchanged, and each of those gives the one-half. So you can quickly write this down, or you can write a program to follow these rules, and all of a sudden you have a really short program that can do the diagonalization of a spin system, limited only by this exponential growth of the difficulty of the problem.
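Here is a minimal Julia sketch of a function that follows these two rules to build the dense Hamiltonian for an open chain of N spin one-halves (my own illustration, not code from the lecture). Basis states are encoded as the integers 0 to 2^N - 1, with bit i of the integer giving the state of site i+1, 0 for up and 1 for down, just like the qubit labels above. For N = 2 it reproduces the four-by-four matrix from the exercise, and eigvals or eigen from LinearAlgebra will then diagonalize it.

```julia
# Dense Heisenberg Hamiltonian H = J * sum_i S_i . S_{i+1} for an open chain of N spin-1/2s,
# built from the two rules: diagonal = (J/4) * (parallel neighbor pairs - antiparallel pairs),
# off-diagonal = J/2 between states that differ by one nearest-neighbor up-down <-> down-up flip.
function heisenberg_matrix(N; J = 1.0)
    dim = 2^N
    H = zeros(dim, dim)
    for s in 0:dim-1                 # loop over basis states (columns)
        for i in 0:N-2               # loop over nearest-neighbor bonds (i, i+1)
            bi = (s >> i) & 1
            bj = (s >> (i + 1)) & 1
            if bi == bj
                H[s+1, s+1] += J / 4                      # parallel neighbors
            else
                H[s+1, s+1] -= J / 4                      # antiparallel neighbors
                s2 = xor(s, (1 << i) | (1 << (i + 1)))    # flip the up-down pair
                H[s2+1, s+1] = J / 2                      # the spin-flip (off-diagonal) term
            end
        end
    end
    return H
end
```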
Okay, now there's some more simplification that comes in. There are whole rows where all the off-diagonal elements are zero; more generally, you'll have blocks that talk to each other, and then off-diagonal rectangular areas that are all zero. This comes from the conservation of total angular momentum: this Hamiltonian just involves dot products of spins, so it doesn't pick a direction in spin space, and it conserves angular momentum. The simple way to use that is just to use the conservation in the z direction. What it means is that the Hamiltonian can't change an up to a down all by itself; there has to be another down that went up, to leave the same total number of ups and downs. In other words, you can group all the basis states by the number of ups and the number of downs, and one group doesn't talk to any other group, because the Hamiltonian can't change the total number of ups; there aren't any terms that do that. So if you count the number of ups and downs, you know the total Sz. Another way of putting it is that the Hamiltonian commutes with the total Sz operator. This makes it block diagonal. Often we're only interested in one particular value of Sz, usually Sz equals zero, or we happen to know that the ground state is at that particular value of total Sz, so we can pick that sector and work with a smaller set of states; we don't have to use all 2^N. For the three-spin case, one of the blocks has two ups and one down, and there are three possible arrangements for that: up up down, up down up, and down up up. Now, in the big matrix that I wrote down earlier they weren't in order, so you couldn't see that this was a little block, but if you just sort the basis states (a reordering of basis states is always allowed; it gives you the same problem), you would find this block sitting there, not connected to anything else. So we can just take this block and diagonalize it; it's just a part of the original matrix, and the elements are the same. You diagonalize this one as a shortcut, you get the ground state as this vector, and you can rewrite it in terms of these basis states; the other basis states have coefficient zero. So how much does that help us? It's a big help, because as N gets big, the number of blocks is proportional to N, so you get to cut your problem down by a factor of about N. But the whole problem is blowing up as 2^N, so instead of a 2^N problem you have roughly 2^N over N: it's a help, but it still doesn't make the problem not exponentially hard. Okay, so here's an exercise, if you want to get started tonight. It's a mixture of writing down the matrix analytically and then using, say, Julia to find the eigenvalues and eigenvectors, which I just showed you how to do, so that's pretty easy. This is the exercise for N = 4: an open chain, so there's no term connecting sites one and four, and we use the sector with total Sz equal to zero. It turns out there are six basis states in that sector, so this is a six-by-six matrix. You get to use those rules to write down the nonzero elements, and then diagonalize it, and that's the exercise. Right after lecture I'm going to send the PDF to someone to try to get it to you right away, so you'll have these rules and notes.
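Here is a small sketch of how you might pick out an Sz sector numerically (again my own illustration, building on the heisenberg_matrix sketch above): enumerate the basis states with the right number of down spins and keep only the corresponding rows and columns. For three spins with two ups and one down it reproduces the three-by-three block just described, and the same idea gives the six-state Sz = 0 sector of the N = 4 exercise.

```julia
using LinearAlgebra

# 1-based indices of the basis states of N spins having exactly ndown down spins,
# i.e. fixed total Sz = N/2 - ndown; count_ones counts the set bits of the integer label.
sector_indices(N, ndown) = [s + 1 for s in 0:2^N-1 if count_ones(s) == ndown]

H3   = heisenberg_matrix(3)     # the helper defined in the sketch above
inds = sector_indices(3, 1)     # the three states with two ups and one down
Hblock = H3[inds, inds]         # the 3x3 block; the rest of H3 doesn't mix with it
eigvals(Hblock)                 # its lowest eigenvalue, -1.0, is the ground-state energy of the 3-site chain
```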
Okay, so that's an exercise. (In answer to a question:) Yes, there are theorems that sometimes tell you, and if they apply, they probably tell you that it's the Sz = 0 sector. There's a theorem that says that in certain circumstances the ground state is a singlet, so total S equals zero, which means Sz equals zero too, and that sometimes applies. Sometimes you have to repeat the calculation for each value of Sz, but it's still a big win: you might think, oh, that just gives me my factor of N back, but the calculations in between have been cut down exponentially, so it's still a huge win. Okay, here's another exercise, one that's somewhat harder. To really make a lot of progress, you don't want to have to do everything by hand; you want to build the analytic smarts into your program and have the computer calculate the matrix elements for you. You do not want to try to solve a big problem by working out every element by hand and only having the computer diagonalize at the end; then you're doing too much work and the computer has only got the easy job. You want a clever program that does it for different sizes and figures the matrix out using those rules; you want to build the rules into the program. So this hard exercise is for a chain of N spins, say up to N = 10 (the 10 is just because beyond that it would start getting slow to run): write a Julia function to calculate this H matrix, and find the ground-state energy for that system. This can be done in a fairly short Julia program, and it's one of the things we'll be working on tomorrow, but you can think about it now. Okay, one of the things that you should do in computations is to think about the memory and the calculation time: back-of-the-envelope, rough estimates of how long your calculation could take. You write a cute little program that's supposed to do something, you hit enter, and instead of coming right back it takes an hour, or maybe it just takes a really long time and you don't have the patience and you kill it. Well, maybe it should have taken an hour. You should have in the back of your head how long it should take, and you can do back-of-the-envelope estimates of how long things take pretty easily; that's what this little slide is about. The first thing to know: if you have an m by m matrix, let's first talk about the storage. It has m-squared elements, so that's the storage. You should also have in mind what the storage of, say, a desktop is: say 10^10 bytes. A double-precision number takes eight bytes to store, round that to 10, so roughly our computer can store about 10^9 double-precision numbers. Let me say another thing: you should always use double precision, well, 99 percent of the time, because single-precision floating point just makes errors that are too big, and even if it sometimes works, sometimes it won't, and you'll be constantly worried about it. So double precision is almost always necessary; just always use it. Okay, so you can store 10^9 double-precision reals, or, if it's a matrix, you can do 10^4.5 by 10^4.5: about a 30k by 30k matrix can sit inside your RAM. So how big is that in terms of 2^N? It gives you about N = 15.
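To spell that estimate out (a quick sketch; 10^10 bytes is the round desktop-RAM number used above):

```julia
# Memory needed to store a dense 2^N x 2^N matrix of 8-byte double-precision numbers
dense_bytes(N) = 8.0 * (2.0^N)^2

dense_bytes(15)   # about 8.6e9 bytes, roughly the 10^10-byte (10 GB) limit
dense_bytes(20)   # about 8.8e12 bytes, hopeless to store densely
```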
Okay, so we're not going to do N = 30 and have it fit in the computer if we store it this way. That's storing all the zeros, though, so later that consideration won't apply. But we should also think about the calculation time. For a full diagonalization of an m by m matrix, the calculation time goes as m-cubed. Lots of matrix operations go as m-cubed (just about any matrix factorization or matrix-matrix multiply is m-cubed), so it's easy to remember, and we can translate this to CPU time. Suppose we had this 10^4.5 size: m-cubed would be 10^13.5, so that's how many floating-point operations our calculation is going to have to do to diagonalize it. For the calculation time, you should have in your head how fast a computer is. A computer might be able to do 10^10 operations per second, and there's not so much difference here between a laptop, a desktop, or one node of a supercomputer. A floating-point operation per second is called a flop; it used to be gigaflops, and now they talk about petaflop machines, massively parallel, but on a single processor, a single desktop computer, we're still in the gigaflop range, 10 or 100 gigaflops. So you've got 10^10 per second, and you can estimate how long this takes: it's about 10^3.5, and once you cancel the operations, what's left is seconds, so that's about an hour. Okay, so we got crunched by memory in this particular example before we got crunched by computer time, because you can wait an hour, or wait overnight and sleep. But it wasn't such a dramatic difference, and you could easily have something that really gets dominated by the computer time. Okay, so how can we do better, given these limitations? That's not a very big system; remember, it was 15 spins that we could do. First of all, we can take advantage of the sparseness. Each row has only of order N nonzero elements; the number of nonzero elements per row is usually around N, or 2N, something like that. So if you only store the nonzero pieces, the storage is roughly the number of rows, which is the same as the length of a vector, 2^N, times the nonzeros per row: something like N times 2^N, or, if N is about 20 or so, say 2^(N+4). If you look at that, you can store on your computer up to about N = 25 or so, which is a big improvement from using the sparseness. But full diagonalization, even if your matrix is sparse, produces matrices that are not sparse, so you can't use full diagonalization there. Normally we just want, say, the ground state or a few low-lying states. The simplest method for getting those, without doing any full diagonalization, is the power method. It's not very efficient, so you wouldn't actually use it, but it's easy to understand. Look at this H-tilde, which is the identity matrix minus epsilon times H, where epsilon is a small enough number. Because you're just shifting and rescaling the matrix, it has the same eigenvectors, and it just transforms every eigenvalue to one minus epsilon times the eigenvalue. So it's the same problem, but what this matrix has is that the biggest-magnitude eigenvalue is now the one you want. So if you keep
multiplying by this, it's going to make the ground-state component keep getting bigger and bigger relative to everything else. It multiplies each eigenvector component by a different coefficient; you'll just be using a single vector, but it takes all the components and changes them, and it gradually projects out the ground state. Actually, the power method is not used for diagonalization, because the Lanczos method is better, which I'll tell you about in a second; but you ordinarily can't use Lanczos in quantum Monte Carlo, and so with quantum Monte Carlo you sometimes do use this power method. So you just take this H-tilde, raise it to a very high power, and start with any state that's not orthogonal to the ground state; this will project out the ground state if k is large enough and epsilon is small enough, so that the ground state really is the dominant vector. If you do that, your calculation time will have this extra factor of k. You don't actually raise the matrix to the k-th power; you just keep hitting one vector with it, so the storage is one sparse matrix and one vector, or two vectors. This will allow you to do a big problem without being memory limited. Now, the problem with this is that quite a large k might be needed. The Lanczos method is the standard way of doing exact diagonalization, and the Lanczos method is a way of cutting down the k: it's still an iterative method where you multiply by H, but it's a more clever method. So for the Lanczos method, we still start with an initial vector c that's supposed to be not perpendicular to the ground state; we'll just write it as a vector. Then the Krylov space is the space spanned by c, H times c, H-squared times c, et cetera, a vector space spanned by all of these. More importantly, you can also have a truncated Krylov space that only goes up to a certain power, and the whole space is all linear combinations of those vectors. Notice that the thing that comes up in the power method, (1 minus epsilon H) to the k, really just involves powers of H and the identity, so it gives you different powers of H all mixed together: the power method is doing something overly simple within the Krylov space. So if you ask for the minimum energy in the Krylov space up to some certain size, that has to do better than, or at least as well as, the power method; in fact it does much better. Lanczos is an efficient way of using this space: it finds the lowest-energy vector in that space, and what it gives you is that you only have to take k up to around a hundred or two hundred and it'll give you a very precise ground state. Okay, so here's the Lanczos basis. Now, the trouble with hitting a vector with H is that the new vector isn't orthogonal to the old vector, so it's not very good as a proper basis state; we like to use orthogonal states to write our matrices in. So you start with the first state, and you make a sequence of vectors x that are going to form your basis. The second state starts by hitting x1 with H, but then we orthogonalize it to the first state: we subtract off this piece along the first state, and it's easy to calculate the alpha coefficient that you subtract off. That makes x2 orthogonal to x1, and then you throw in a normalization factor.
It's easy to look up exactly what these factors are in terms of dot products and so on. Then we can go along and make x3. Now you hit x2 with H again, and the result is not orthogonal to x2 and not orthogonal to x1, so you have to orthogonalize it to both: you subtract off an alpha times x2 and a beta times x1, choosing those coefficients. You might think this gets cumbersome, because you'd have to keep orthogonalizing against everything, but it turns out that once you go past this level, one alpha and one beta subtracted off, you get to drop all the others: the new vector is automatically orthogonal to all the earlier ones, by a mathematical identity. So you might think you have to subtract a gamma times x1, three steps back, but you can throw that away; gamma is going to be zero. The fact that you only get two of them turns out to mean that this orthonormal basis gives you a tridiagonal Hamiltonian matrix. And this Hamiltonian matrix only goes up to the dimension k, a small number, the size of the Krylov space, which is going to be like a hundred at most. So you have this tridiagonal matrix, only a hundred by a hundred, which is really easy to diagonalize, and it gives you this shortcut to the best answer in the whole Krylov space. So that's the Lanczos method, and this is the standard for doing the biggest exact diagonalizations around: the biggest exact diagonalizations use the angular momentum symmetry that we saw, they use Lanczos, and they put in every other symmetry they can as well (the other symmetries get a little more complicated), and they also run on parallel machines. Andreas Läuchli is going to be one of the later lecturers, and he is the guru of doing diagonalizations of, say, up to 50 spins, which you wouldn't think was at all possible. Okay, so with this, you look at the calculation time: put in m, where m is the size of the matrix, this 2^N divided by N; then you have a sparseness factor and a number of steps that you have to do; multiply it all together, and you find that matrices up to about a billion by a billion are okay if you take advantage of all of this. Now, Julia has a Lanczos-type method built in; it's not called Lanczos, so you'll have to search for it. Here's an exercise: look up the Julia Lanczos routine, and then, if you did the hard problem of writing the general-purpose H matrix, you can speed up your program a lot for a big system by calling the Lanczos eigenvalue routine instead of the regular one. Just tell it to give you the ground state or a few low-lying states, and it'll be much faster. So then the challenge is: how big a system can you do with this, with something like a half-page program that does an exact diagonalization, and can you compete with the fancy programs out there?
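As a concrete illustration of the power-method idea from a few minutes ago (a sketch of my own, reusing the heisenberg_matrix helper from the earlier sketch; in practice you would call a packaged Lanczos-type routine, which converges far faster, so this is only to make the projection idea concrete):

```julia
using LinearAlgebra

# Power method: repeatedly apply H-tilde = (1 - epsilon*H) to a random start vector and renormalize;
# the component along the ground state grows relative to everything else.
function power_ground_state(H; epsilon = 0.05, nsteps = 20_000)
    v = normalize(randn(size(H, 1)))          # generic start, essentially never orthogonal to the ground state
    for _ in 1:nsteps
        v = normalize(v - epsilon * (H * v))  # one application of (1 - epsilon*H), never formed explicitly
    end
    return dot(v, H * v), v                   # Rayleigh quotient (the energy) and the vector
end

E0, v0 = power_ground_state(heisenberg_matrix(4))   # E0 close to the exact 4-site value, about -1.616

# For real work you would call an iterative eigensolver package instead; one option (my suggestion,
# the lecture doesn't name a specific routine) is eigs from the Arpack.jl package:
#     using Arpack;  vals, vecs = eigs(heisenberg_matrix(10), nev = 1, which = :SR)
```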
Okay, we'll have time to get started a bit on some of the ideas that are the basis for a lot of quantum information. So we're going to start talking about entanglement, and the first thing to think about in terms of entanglement is what an unentangled state is, which is a product state. A product state is the simplest type of state, where the terms just involve one thing happening on each site, multiplied together. So here are some examples of product states. Up, down: you know that site one is definitely up, and you know that site two is definitely down. And here are two different ways of writing it: we can always expand it in this notation, and another notation that they tend to use once you're doing quantum information is this direct product, or Kronecker product, form for the outer product. Here's another product state. It's more complicated; it's not just a product in the z basis. Over here I do something complicated, but it only involves spin one, and then I multiply it by something complicated for spin two. That's still a product state. So in general, if you can write psi as a product of something going on for site one times something going on for site two, et cetera, then it's a product state. Okay, so then what's entanglement? If you have a wave function and it's not a product state, then we say it's entangled; anything more going on is considered entanglement. Now, what I was just describing was product states over all the spins, but we usually specialize to consider entanglement across a boundary. Let's say we had a system of many spins, and we divided those spins into a left part and a right part, with some dividing line. It could be an arbitrary arrangement, the math won't care, but we divide into a left part and a right part. Then a product state is a state where we can have anything we want on the left, but it has to be a simple multiplication by another, possibly complicated, thing on the right; there are no sums of different things happening on left and right. A general wave function has lots of things going on on the left and the right, all mixed together. That's what makes quantum mechanics weird, because you don't know exactly what's happening on the left until you know what's happening on the right. Now consider an operator O which acts only on the left. If we take the expectation value of this operator, which lives only on the left, in this product-state wave function, and commute all the spin operators through, the right part just contracts with itself and gives us a one, and the left part carries the operator. So we get rid of the right part, and we have something that only involves the left state, phi-L. This is another way of saying that the two sides are independent: I can look at the left side with that operator and forget about the right side. So that's a product state; it's independent systems. Okay, so entangled states: how do you tell if a system is entangled? Let me hide the bottom part a little bit. Let's do an example where we have our two spins again, and I'm going to give you two different states, both written in the z basis. State A is one over root two times up-up plus one over root two times down-down. And state B is this more complicated thing that has all four possibilities for the spins, with the same coefficient in front of each of them. So the question is: which of these is entangled? One of them is entangled and one of them is not, and how can you tell?
If you think entanglement means lots of terms and funny quantum stuff going on, you'd pick the second one and say: that's got every term there is, it must be entangled. But that's not true, because the question is whether there is some other way of writing it; it may just be that in an x basis, instead of the z basis, it's not entangled. In fact, B can be written as this state times that state. It all has to do with exactly what the coefficients are, and in fact this factor is just spin up in the x direction, and the other factor is also up in the x direction. So it's just as if you rotated your axes, and it turns out B really is unentangled. Entanglement is not something that depends on your axes, but when we look at a state written out like this, it looks like we can't tell, because it looks like it depends on the axes. So B is a product state, and the other one, A, is entangled. (A question: one particle can't entangle with itself, right?) Well, if one unit is actually composed of more things, then you can talk about entanglement between the parts; you always need at least two parts to have entanglement. If the spin really were an atom with a spin, then the different parts of the atom could be entangled; if you go in deeper that way, it can be entangled. And this is actually how it works a lot of the time. It's like asking, is a baseball a particle? Well, did you cut it open, or did you just throw it? For a particle, you don't get to look inside. Entanglement is the same way: if you don't look inside, there have to be two things to get entanglement. Okay, so here's an easy analytic exercise: prove that this state A is entangled, which means you show that there are no alpha, beta, gamma, delta for which you can write it in this product form; this is the most general product form for two spins, an arbitrary state here times an arbitrary state there. It's just quick algebra to show that this expression cannot equal that one.
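Jumping ahead a little to the singular value decomposition that is introduced next: numerically, the check takes a couple of lines. Write the two-spin wave function as a 2x2 matrix of coefficients psi[s1, s2]; it is a product state exactly when that matrix has only one nonzero singular value (that is the Schmidt-decomposition statement we will get to). A minimal sketch of my own, using the standard-library svdvals and svd routines (note that Julia's svd returns the V-transpose convention, hence F.Vt):

```julia
using LinearAlgebra

# Coefficient matrices psi[s1, s2] in the z basis; rows label spin 1 (up, down), columns label spin 2.
# State A = (|up up> + |down down>)/sqrt(2);  state B = (|up up> + |up down> + |down up> + |down down>)/2.
A = [ 1/sqrt(2)  0.0
      0.0        1/sqrt(2) ]
B = [ 0.5  0.5
      0.5  0.5 ]

svdvals(A)    # two equal nonzero singular values, 1/sqrt(2) each  ->  A is entangled
svdvals(B)    # singular values 1 and 0: only one nonzero          ->  B is a product state

# The factorization itself: B is recovered as U * D * V (here written with the transpose convention)
F = svd(B)
B ≈ F.U * Diagonal(F.S) * F.Vt    # true; F.S holds the non-negative singular values
```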
But that's just two spins, and we already had to work hard to decide whether two spins are entangled. How do you do this in general? It's crucial to be able to tell how entangled things are. It turns out you need a singular value decomposition. How many people have had singular value decompositions somewhere in their previous classwork? It looks like about a third. If you were all engineers you would probably all know it; they realize how important it is. Everybody has had diagonalization, of course, and the singular value decomposition is almost as useful as diagonalization, for lots of different things. This will probably be about the last thing we cover today; we'll start in the computer room at 8:30 tomorrow, and there will be more analytic material in the later lectures.

A singular value decomposition is something you can do to any matrix. It doesn't have to be square, it can be complex; no special properties are needed, and it always works. We do it with the convention that if the matrix is rectangular, one particular side is the bigger one, say M is n × m with n ≤ m; if it's shaped the other way, you just do the singular value decomposition of the transpose, and there's a slightly different way of writing it. The SVD is a factorization: there exist an n × n matrix U, an n × n diagonal matrix D whose diagonal elements are all non-negative real numbers (positive or zero), and an n × m matrix V, such that M = U D V. Usually this is written with V† in place of V; I think the math books do that just to make it look like a diagonalization, but it isn't a diagonalization, so I've stopped doing that, and I'm being somewhat nonstandard here. Pictorially, the rectangular M factors into two small square matrices, U and D, times one bigger rectangular one, V. Then U and V have special properties: U, the small one, is unitary, and V is row-unitary. A unitary matrix has to be square, so the rectangular V can't be unitary, but its rows can all be orthonormal, which means V V† = 1. So that's the SVD. It's a sophomore linear-algebra theorem that the SVD always exists, and it's another m-cubed operation on the computer, just like diagonalization. The diagonal elements d_i are the singular values, and they are unique. The decomposition as a whole is clearly not quite unique: you can multiply one row of V by a minus sign, and as long as you put the minus sign in the corresponding column of U it cancels, and the same goes for phases. So there are some arbitrary minus signs and phases, just like with eigenvectors, but otherwise it's unique; the singular values are unique. If you want a version in which both U and V are unitary, you can pad with some extra zeros, which is what we'll be using later: D̃ keeps the diagonal piece and adds a block of zeros, and you enlarge V to Ṽ. The extra rows you put into Ṽ don't matter, because they get multiplied by zero, but Ṽ can then be unitary. So in that form, with this funny D̃, both U and Ṽ are ordinary unitary matrices. And if M is real, then U and V can be taken to be real, which is the usual case.

SVDs have lots of different uses. For example, when you'd like to solve a linear system but you don't have the same number of unknowns as equations, and you still want the best solution you can get in the least-squares sense, the SVD gives you that best answer right away.
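Here's what this looks like in Julia, a small sketch with my own variable names. One wrinkle: Julia's svd returns a factor it calls Vt, which is exactly the V of the convention above, with no extra transpose needed.

```julia
using LinearAlgebra

M = randn(3, 5)               # any matrix will do: rectangular, complex, whatever
F = svd(M)                    # the "thin" SVD
U, d, V = F.U, F.S, F.Vt      # Julia's Vt plays the role of V here

@show size(U), size(V)        # (3, 3) and (3, 5)
@show d                       # singular values: non-negative reals, descending
@show U * Diagonal(d) * V ≈ M # the factorization M = U D V holds
@show U' * U ≈ I(3)           # U is unitary
@show V * V' ≈ I(3)           # V is row-unitary: its rows are orthonormal

# The padded form, where D picks up a block of zeros and V becomes square
# and unitary too (the extra rows are harmless because they multiply zeros):
Ffull = svd(M, full = true)
@show size(Ffull.Vt)          # (5, 5)
```

Running this a few times with different shapes is a quick way to get comfortable with which dimension goes where.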
Another use of the SVD is compression. We've already talked about one kind of compression: a key to doing exact diagonalization is using a compressed Hamiltonian matrix, not writing down the zeros. Sparseness, using a sparse matrix format, is a form of compression that just says don't store the zeros; like gzip or any of these other compression schemes, it makes things a lot smaller. Here is a completely different type of compression that has nothing to do with zeros in the matrix. Suppose you took some matrix, did an SVD on it, and found that a bunch of the singular values were either zero or negligible. Then you put it in the SVD form, and only the columns of U and the rows of V that go with the non-negligible singular values matter, because the other ones get multiplied by zero. So you get to throw away most of your matrices and keep just the rows and columns that carry the non-zero singular values. That's a very nice form of compression (there's a short code sketch of it at the end of this lecture).

By the way, one of the places you can read about things like this, and all sorts of other numerical topics, is the set of books called Numerical Recipes, which were written by top-notch computational physicists. There's Numerical Recipes in C, in Fortran, and so on, and they give little programs that do standard things. But the nice thing about Numerical Recipes is that it tells you the background of whatever calculation you're doing in the nicest five-page summary you ever saw. If you want to really find out what the SVD does and what it's good for, read that chapter in Numerical Recipes. You don't have to use their programs, you'd probably rather use a black-box library anyway, but read it and it tells you exactly what the key properties are. I took E&M, way back when, from one of the authors of Numerical Recipes.

So this compression turns out to be one of the key ideas of DMRG; this compression is used in DMRG. And with that we got through almost everything I wanted to cover today; we'll have to make up a little bit for it later. The next thing we'll do, tomorrow in the next lecture, is the Schmidt decomposition, which directly uses this SVD. Thank you.
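As a final illustration, here is a minimal Julia sketch of the truncation idea described above; it's my example, not code from the lecture. We build a matrix that is secretly low rank, do the SVD, keep only the singular values above a cutoff, and with them only the matching columns of U and rows of V.

```julia
using LinearAlgebra

# A 100 x 100 matrix that is secretly rank 2, plus a whisper of noise:
M = randn(100, 2) * randn(2, 100) + 1e-10 * randn(100, 100)

F = svd(M)
cutoff = 1e-6
k = count(>(cutoff), F.S)     # number of singular values worth keeping
@show k                       # 2; the other 98 are negligible

# Keep only the first k columns of U, singular values, and rows of Vt:
Uk, dk, Vk = F.U[:, 1:k], F.S[1:k], F.Vt[1:k, :]
Mk = Uk * Diagonal(dk) * Vk   # 100*2 + 2 + 2*100 numbers instead of 100*100

@show norm(M - Mk)            # tiny: essentially nothing was lost
```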