Okay, welcome back. Let's start on the second lecture, which again will be on ground state projection, specifically done for quantum spins in the valence bond basis. The review article I recommended before doesn't really talk about this, but there's a fairly detailed paper that I can recommend if you want to read the details. I will also talk about some things that are not in this paper, but there will be some other references further along. So what I will do first is talk about the basic principles of ground state projection, including, since there was a question before about why it converges to the ground state, I will talk about that. Then I will discuss the valence bond basis, what it is and some of its properties, and then how to formulate quantum Monte Carlo in that basis and why that would be a good idea. And then I will again talk about an implementation for the spin-1/2 Heisenberg magnet, and you will actually see that it's very similar to what we already talked about for the stochastic series expansion. If I have time, I will say a few words about valence bond solids, which are possible states of quantum antiferromagnets. They are basically dimerized states, and I want to discuss how we can use this valence bond basis not just to study valence bond solids but also their excitations, which in some sense you can understand in terms of spinons; I will illustrate that on 1D systems. And again, there's a program online, same URL as before, where you can find a little menu, and if we have time I will show a little bit of the program. Okay, projector quantum Monte Carlo. I already showed these equations: you have some state, and you act with some operator which has the property of filtering out, or as we say projecting out, the ground state when some parameter becomes large.
You can do a large power of the Hamiltonian, or you can do an exponential operator; those are the most common ones. Actually, this name "trial state" for this state psi-naught, I don't really like it so much, but that's how it's used. My understanding of where it comes from is that some time ago people didn't have as powerful numerics, so you would take some good variational state, do a little bit of some kind of projection, and see what you get; you try with this state, you try with that state, and see what gives the best result. So I guess that's where "trial state" comes from: it's the state you try to start from. But nowadays, at least in the context of what I'm talking about, it doesn't really matter what the trial state is, because we will get the correct ground state no matter what, unless the state has some wrong symmetry or something like that. Anyway, I will use the name trial state because people seem to use it. Okay, so let's look at this trial state expanded in the eigenstates of the Hamiltonian. Since we are going to get the ground state, I of course have to assume that the expansion coefficient for the ground state is not zero; otherwise I cannot get the ground state. But in general it would actually be hard to make a state without any ground state component, unless it has the wrong symmetry. Even a random state will contain some part of the ground state, and any kind of state you can cook up normally contains it. In what I'm discussing, the choice of this trial state is not so important; almost anything goes, although you can do a little bit better if you choose a good one, as I will also show. Okay, so having our trial state, we act with the power of the Hamiltonian. This is the one where there's a subtlety, which was mentioned in the question, so I want to show that one.
But you can do similar things with the exponential as well. In the basis of eigenstates these just become numbers, and then you can shuffle the terms around a little bit and pull out the ground state, with just a factor in front of it; it doesn't really matter, we don't normally need to worry about the normalization. So this is basically E_0 to the power m, whatever power you use, times the ground state expansion coefficient. The rest of the terms are here, and since I pulled out E_0^m in front of the whole thing, there's a ratio of energy eigenvalues, (E_n/E_0)^m, in each remaining term. So you see that if E_0 is the largest in magnitude, this converges to the ground state. That's the condition: the absolute value of E_0 has to be larger than that of all the other eigenvalues. This is often the case even without taking any particular action, and if it's not, you can always subtract some constant from the Hamiltonian and work with that shifted Hamiltonian; then you shift the spectrum and this can always be made true. So we can use either the power of H or the exponential. I will discuss the power of H; if I did the exponential, you could do something like the stochastic series expansion, but with just the power of H I don't even need to worry about the expansion part. You can see already that I have powers of H, and we had that last lecture as well. Okay, now about the basis that we use. We normally work with the basis of up and down spins, right? And we are considering the Heisenberg model, which I again write with a diagonal and off-diagonal term like this. So that's the normal working basis, including in the previous lecture. But there are other bases that people use.
For example, if we have a system with strongly modulated couplings, say the couplings are stronger on some bonds, then we could do a dimer expansion around those bonds in some kind of analytical calculation, and we could then use the basis of singlets and triplets on those bonds. And even if we have completely uniform couplings, that's still a completely valid basis, so we could in principle formulate some scheme in that basis, and that can be done; sometimes it's a good thing to do. So here we have chosen the bonds, the spin pairs, on which we define the singlets and triplets. The Hamiltonian looks a bit more complicated in this basis, though. Okay, but the valence bond basis is something else. In the valence bond basis, if we consider a total spin zero state, a total spin singlet, we actually only need to work with singlets. So I define a valence bond between two sites as a singlet. And again I consider a bipartite lattice, so I have A and B sublattice sites, and I will always think of the first spin of a bond as being on sublattice A and the second on sublattice B. Then we can form a state which is a product of singlets in some sort of random fashion: I just pair my A and B sites up in some way into singlets. This represents a product of all these singlets, wherever they happen to be. That's a valence bond basis state, and it's clearly a singlet because all the spins are paired up into singlets. Okay, so this basis is overcomplete and non-orthogonal. It's overcomplete because if you count the number of possible tilings of the lattice like this, it's much, much larger than the number of singlet states in the Hilbert space. I forget exactly how many singlets there are in the Hilbert space, but counting the tilings is easy because you can think of a tiling as a permutation.
All the A sites, and there are N/2 of them, are connected to some permutation of the B sites, so the number of tilings is just the number of such permutations, and that is larger, I can guarantee you, than the number of singlet states in the Hilbert space. So since the basis is overcomplete, we can at least express any singlet state in it. But because it's overcomplete, the expansion coefficients are not unique; there are many ways to write such a superposition. It turns out that in some cases, for example when we are looking at ground states on bipartite lattices, as we will be doing, these expansion coefficients can all be taken positive. And with that restriction I believe they actually become unique. This is also, in the end, the reason why you can do sign-problem-free Monte Carlo simulations in this basis. Actually, this property corresponds to, I didn't include a slide on this, but many of you I'm sure have heard about Marshall's sign rule, which is the rule for the sign of the ground state wave function of bipartite spin systems. If you write the ground state in terms of just up and down spins, with some A sites and the rest B sites, the ground state has some coefficients, call them c_k, where k labels the up/down spin configurations. The sign of c_k is minus one to the number of up spins on sublattice A. You could also choose down spins, or sublattice B; it doesn't matter. But that is the sign of these coefficients in the ground state of a bipartite system. And you can see that the valence bond basis actually conforms with this, although, let's see, to really make it conform I should count the spins on sublattice B here: if the second spin of a bond, the one on sublattice B, is up, I get a minus sign.
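As a sanity check on Marshall's rule, here is a small sketch, my own example rather than anything from the lecture, that diagonalizes a hypothetical 4-site Heisenberg ring and verifies that all nonzero ground state coefficients carry the sign (−1)^(number of up spins on sublattice A), up to one global phase.

```python
import numpy as np

N = 4                                   # ring; even sites form sublattice A
bonds = [(i, (i + 1) % N) for i in range(N)]
dim = 2 ** N

def bit(s, i):                          # spin at site i of basis state s: 1 = up
    return (s >> i) & 1

# Build H = sum over bonds of S_i.S_j in the up/down basis
H = np.zeros((dim, dim))
for s in range(dim):
    for i, j in bonds:
        if bit(s, i) == bit(s, j):
            H[s, s] += 0.25             # Sz_i Sz_j, parallel spins
        else:
            H[s, s] -= 0.25             # antiparallel
            H[s ^ (1 << i) ^ (1 << j), s] += 0.5   # spin-flip term

E, V = np.linalg.eigh(H)
gs = V[:, 0]                            # ground state; E[0] = -2 for this ring

# sign(c_k) * (-1)^(n_up on A) should be the same for every nonzero c_k
phases = {np.sign(gs[s]) * (-1) ** sum(bit(s, i) for i in range(0, N, 2))
          for s in range(dim) if abs(gs[s]) > 1e-10}
marshall_ok = (len(phases) == 1)
```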
So then you can see that when I tile the whole lattice, the sign is exactly this one, and this basically is the reason why we can use this basis to study ground states of bipartite systems: it conforms with Marshall's sign rule. Okay, so this basis is quite interesting; it has a lot of neat properties, so let me mention some of them. It's an overcomplete, non-orthogonal basis; actually, all these basis states overlap with each other. And the overlap you can compute quite easily using the overlap graph, or what's often called the transition graph. If you have two bond configurations like these and you superimpose them, you will see that they form some loops; in this case there are three loops, I believe. Then the overlap is just two to the number of those loops minus N/2. It's actually quite easy to see why, because these bonds are singlets. On each bond you can have up-down or down-up, but when you form a transition graph loop, if you have up here and down here on the red bond, then on the black bond you also have to have down there, otherwise they wouldn't overlap, and then you have up here, and so on. So on a loop, if you put the spins back in, you have to have this kind of alternating, staggered spin configuration, and there are two ways to do that. So you can think of each loop as having two states, and then the overlap is just the number of such states, which is two to the number of loops; the rest of the factor comes from the definition of the singlets, the one over square root of two in each singlet. Another interesting thing is that you can easily compute some matrix elements. For example, if you want to calculate the spin correlation function, you will need matrix elements between such states, and there again you just have to look at these loops: if i and j are in the same loop, for example these two, then this matrix element has a well defined value.
If they are in different loops, then it's just zero. The reason is pretty much the same as for the overlap: if the spins are on the same loop, then because of the staggered spin configuration on the loop, the correlation is just this staggered phase factor, and if you flip the loop, it doesn't change. Well, this is not so easy to see if I write it with the full scalar product, but if I just consider the Z component, you see that you get some well defined number when they are on the same loop. If they are on different loops, you have to remember that this contains all the orientations of the loops, so you have to sum over all the orientations, and then everything averages out to zero. That's why this holds. And it turns out that almost anything you may want to calculate can be somehow related to this loop structure; this paper here has a lot of results for other types of correlation functions and so on. Okay, any questions about that? Okay, now, how to do projector Monte Carlo in this basis. I list some history here; there are some papers that consider this. We have already discussed what projector Monte Carlo does: we project out the ground state from a trial state. Now consider again the Heisenberg model, and I want to write the interactions in terms of these singlet projectors. Let me show what that means. If I act with this operator, one quarter minus S_i dot S_j, on two sites that form a singlet, meaning there's a valence bond on those sites, I get the singlet back times one, because S_i dot S_j gives minus three quarters on the singlet, and one quarter minus that is one. And if I act on a triplet, you can easily see that I get zero.
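The transition-graph rules just described can be put into a few lines of code. This is my own sketch on a made-up eight-site example, not the lecture's program: superimposing two tilings yields loops, the overlap is 2^(N_loops − N/2), and the normalized spin-correlation matrix element is ±3/4 for two sites in the same loop, with the sign given by the staggered sublattice phase, and zero otherwise.

```python
N = 8  # sites 0..7; even sites = sublattice A, odd sites = sublattice B

# A tiling pairs each A site with a B site: a list of (a, b) bonds.
Vl = [(0, 1), (2, 3), (4, 5), (6, 7)]
Vr = [(0, 3), (2, 1), (4, 7), (6, 5)]

def loops(Vl, Vr, N):
    """Assign each site a loop id by alternately following Vl and Vr bonds."""
    partner_l, partner_r = {}, {}
    for a, b in Vl:
        partner_l[a] = b; partner_l[b] = a
    for a, b in Vr:
        partner_r[a] = b; partner_r[b] = a
    loop_id = [-1] * N
    n_loops = 0
    for start in range(N):
        if loop_id[start] >= 0:
            continue
        s = start
        while loop_id[s] < 0:       # trace the loop until it closes
            loop_id[s] = n_loops
            s = partner_l[s]
            loop_id[s] = n_loops
            s = partner_r[s]
        n_loops += 1
    return loop_id, n_loops

loop_id, n_loops = loops(Vl, Vr, N)
overlap = 2.0 ** (n_loops - N // 2)       # <Vl|Vr> = 2^(Nloops - N/2)

def corr(i, j):
    """<Vl|S_i.S_j|Vr> / <Vl|Vr>: +-3/4 if same loop, 0 otherwise."""
    if loop_id[i] != loop_id[j]:
        return 0.0
    eps = lambda s: 1 if s % 2 == 0 else -1   # staggered sublattice phase
    return 0.75 * eps(i) * eps(j)
```

For these two tilings there are two loops, so the overlap is 2^(2−4) = 1/4, and for instance sites 0 and 2 (same loop, same sublattice) give +3/4 while sites 0 and 4 (different loops) give 0.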
So this is a singlet projector, and that's very useful to know in this basis. Okay, so what we are going to do now, in a way similar to the SSE, is to write H to some high power as strings of these operators. And now I have the singlet projectors in mind; in this case I don't divide them into diagonal and off-diagonal parts, I just act with full singlet projectors. Okay, so what can happen when we act on a valence bond state? Let's look at a small piece of a configuration. These are again my valence bonds, and here I act between sites A and B, right on a singlet. Then I just get one, and that's because my H_ij doesn't contain the minus sign; I have pulled that out. But what happens if I act between two singlets? That's the other thing that can happen: either I act on a singlet, or between two singlets. You can check this quite easily; I will not do it explicitly here. The black sites are the ones I act on. And what are these arrows? The arrows correspond to how I define my singlet, because the singlet has a minus sign in it: it's up-down minus down-up, and I could also define it as down-up minus up-down, so the arrow in some sense encodes the sign convention. So if I act between two singlets, what happens is that you project a singlet onto the bond where you act, which just follows from that operator being a singlet projector, and then you also make a singlet between the two other sites. In effect, you have reconfigured those two singlet bonds: you go from one bond configuration to another. And luckily the factor is one half and there's no minus sign. These arrows illustrate, if you work it out in detail, how the signs change, but the signs basically cancel out.
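The two rules, acting on an existing bond gives the state back with weight 1, and acting between two bonds reconfigures them with weight 1/2, can be sketched as follows (the dict layout and function name are my own, not the lecture's code):

```python
# How a singlet projector H_ij = (1/4 - S_i.S_j) acts on a valence-bond
# state represented as a dict mapping each site to its bond partner.

def apply_projector(bonds, i, j):
    """Return (new_bonds, weight) after acting with H_ij."""
    if bonds[i] == j:
        return dict(bonds), 1.0          # acting right on an existing singlet
    # acting between two singlets: (i, j) become a bond, their former
    # partners pair up, and the matrix element is 1/2
    new = dict(bonds)
    pi, pj = bonds[i], bonds[j]
    new[i], new[j] = j, i
    new[pi], new[pj] = pj, pi
    return new, 0.5

# Example: 4 sites with bonds (0,1) and (2,3)
bonds = {0: 1, 1: 0, 2: 3, 3: 2}
bonds, w1 = apply_projector(bonds, 1, 2)   # reconfigures to (1,2) and (0,3)
bonds, w2 = apply_projector(bonds, 1, 2)   # now sits on a singlet: weight 1
```

Accumulating the weights over a whole operator string reproduces the factor of one half per bond reconfiguration.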
So basically what this is doing is: you start from the valence bond state and then you do a lot of reconfigurations of these bonds. Right, okay. In a little while I will show you that, again, for frustrated systems it doesn't work; in that case we can get some signs, but for bipartite systems it works. It's always easier to look at some pictures. Here I show a case where my trial state is just a fixed bond configuration, and then I have some singlet projectors acting. The first one doesn't do anything because it sits right on a singlet. The second one reconfigures those two bonds. This one reconfigures two bonds. And in the end, after four operations, I have that bond configuration. Okay, now I also want to introduce a simpler notation, because it's very long to write down this projection string all the time. A string is denoted by P_k, where k formally numbers all the possible strings of operators that I can have. So I have to sum over all possible strings of operators, but in the shorter notation that's just a sum over the P_k. Right, so when I have done this projection, I started from a basis state, and the result is some other basis state times some number, which you can say is the weight associated with that path; this is again some kind of path that we are doing. In this case the weight depends very simply on the matrix elements we looked at before: for a diagonal operation you get one, and for an off-diagonal one you get one half, so the weight is just one half to the number of these bond flips, the off-diagonal operations. It's very simple. And in this case you should notice that there are no, what I call, dead paths here.
So when we did the SSE, I pointed out that it was important that you only do the legal operations; there are in fact many operator strings that are not allowed because they contain illegal operations. But in this case, because one of these two things always happens, nothing is illegal: you can act with your operators anywhere you like, no matter what the state is. Okay, but how do we sample these things? That's actually not so easy. Initially we did it in a trivial and inefficient but still workable way, namely by moving operators at random: I say here "move two to four operators," meaning I take an operator and move it to some other place at random, and the same with another one. The problem with that is that, as far as we know, the only way to calculate the new weight is to completely redo the propagation; you have to propagate from the beginning, and that takes quite a long time. But because of some nice properties of the valence bond basis when you evaluate things, it was actually still not that bad. Luckily, we now have better updating schemes: as you will see, we get more or less exactly the same kind of loop updates that I talked about before in this basis, if we put the spins back in. But for now let's consider this formulation a bit more. Well, I will actually not yet talk about the sampling but about how to calculate things. We can work in two ways here: we can sample just a wave function, or we can sample expectation values. Let me skip the first one, because when sampling only the wave function the only thing you can really calculate is the energy. So let's talk about calculating an expectation value. Okay, again I have these strings that we work with, but now we have two states: we have to project both a bra and a ket state. So I do one projection on the right side, acting on a state I could call V_R.
Eventually, if I have a high enough power, that should give me something proportional to the ground state, and from the left side I also get the ground state. By combining these, I get exactly what I need for a ground state expectation value. Again it's easier to look at a picture. Here is the starting state on the right, the ket, and there is the bra on the left, and then we have some random operator configurations, and what we want to do is sample them. And you see that what we get here is exactly those overlap or transition graphs I talked about before: when I propagate the ket I get a state here, and when I propagate the bra I get a state here, and when we want to measure something, that becomes some property of the loops that form when we superimpose these two propagated configurations onto each other. This is formally what it looks like; it's a little bit messy, but it's just some weights coming from the propagations, and this matrix element here is just the overlap, the two-to-the-number-of-loops factor I mentioned. And up here, depending on what the operator is, you just check some property of these loop configurations. Okay, but this was the simplest case, where the trial state is just a fixed bond configuration, and you can imagine that, thought of as a variational state, it may not be very good. As I mentioned, that doesn't really matter, because if the number of operators is large enough you still project out the ground state. But there's one important point here: in principle you can build quantum numbers into these states. And we have actually already built in the spin quantum number, the total spin, because we are working with singlets.
If you think about how we project out the ground state from a superposition of states, that superposition in this case contains only singlets; we have completely thrown out all higher spin states from the beginning. That means the convergence is faster than at finite temperature, for example, because at finite temperature, if you go toward zero temperature and really want to get rid of all the excited states, you have singlets, triplets, and so on, all the spin sectors. Here we project within the singlet sector only, so there are no higher spin states to decay away. That's an advantage. Another thing we can do is build in the momentum of the ground state. Normally in these systems the ground state momentum is zero, and you can easily make a state with zero momentum. That means you throw out not only the higher spin states but also the other momentum states. Let me illustrate this in a different way. If I draw all the energy levels of the system, I have some ground state, and in these bipartite systems the ground state almost always has spin zero and momentum zero. Then we have some other states; this one may have S = 1 and, say we are in two dimensions, k = (pi, pi), and so on. You have lots of states up here, with different momenta and different spins. If you do a normal projector Monte Carlo where you don't build in any special symmetries, then your initial state is some superposition of all of these, and when you do your projection, you have to let them all decay away; the slowest to decay is of course the first excited state. But now, if you use the valence bond basis, you don't even have that state there, because it's a spin-one state; the next singlet may be up here somewhere.
So now this becomes your effective gap, and the lowest singlet gap is typically much larger than the singlet-triplet gap, so that improves the convergence. Okay, and if you also fix the momentum, well, this singlet may even have some other momentum, let's say k = pi over two; then you have thrown that away too, and maybe this one up here is the first state with the same quantum numbers. So you can typically improve the convergence a lot by building in all the quantum numbers that you can. So let me talk about how you can make a state with zero momentum. These are what we call amplitude product states, introduced a long time ago by Phil Anderson and collaborators. An amplitude product state is just the valence bond states we have talked about, weighted by a product of amplitudes. The amplitudes are positive numbers associated with the length of the bond, or I could say the shape of the bond: the shortest bond has one amplitude, the one shown here has another, and all bond shapes, meaning the x and y lengths that define the shape, have some amplitude associated with them. The wave function coefficient of a bond configuration is then just the product of the amplitudes of all its bonds, which of course depends on what the configuration is. Often when we work with these, we will just use the length of the bond to decide the amplitude, but in principle it depends on the whole shape. So now one can take the trial state to be such a superposition, and that is actually a state with zero momentum. You can see that easily: if you translate it, it's identical, with no phase factor in front, which means the momentum is zero; it's a completely translationally invariant state.
Okay, so then there are some simple bond reconfigurations one can do, with a Metropolis accept/reject step. So in addition to sampling the operators, we also sample these trial states, and in that way ensure that the momentum is zero. It turns out there's actually a more efficient way to sample these too; one can make some kind of loop updates there as well, but let me not talk about that; this is actually quite sufficient. So the only difference from before is that instead of a fixed state we have this superposition, which we also sample, so these boundary states will change in the course of the simulation. In principle, we want to use some amplitudes, maybe parameterized in some way. There are lots of amplitudes, so in terms of variational parameters you have a lot of them, and you may want to simplify. For example, in the original paper by Anderson and collaborators, they said that h(x,y) is just a function h(r_xy) of the bond length, and then they tried some forms: a power law in the length, an exponential form, and so on, to see which was best. So they played around with such states. In principle, one could do a variational calculation to optimize these states and then improve the optimized state to perfection by projection. That's one thing one can do. But as I already mentioned, it's not so critical what you use; I would say at least use a state with translational symmetry, meaning zero momentum in this basis, because that has the advantage of throwing out the other momentum states from the outset. Actually, let me show you some properties of such states.
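Here is a minimal sketch of the Metropolis bond-swap sampling of an amplitude product state, on a hypothetical 1D ring with h(r) = 1/r^3 (the lattice, the seed, and all names are my choices; in the full scheme this trial-state update is combined with the propagation weight and the overlap):

```python
import random

L = 8                                   # even number of sites on a ring
A = list(range(0, L, 2))                # sublattice A
B = list(range(1, L, 2))                # sublattice B

def dist(i, j):                         # bond length on the ring
    d = abs(i - j)
    return min(d, L - d)

def h(r):                               # amplitude depends only on length
    return 1.0 / r ** 3

def weight(pairing):                    # product of amplitudes over all bonds
    w = 1.0
    for a, b in pairing.items():
        w *= h(dist(a, b))
    return w

random.seed(1)
pairing = {a: b for a, b in zip(A, B)}  # start: all nearest-neighbor bonds

for step in range(1000):
    a1, a2 = random.sample(A, 2)        # pick two bonds, propose swapping them
    new = dict(pairing)
    new[a1], new[a2] = pairing[a2], pairing[a1]
    # Metropolis: the weight is a product of h's, so only the two
    # reconfigured bonds enter the acceptance ratio
    ratio = (h(dist(a1, new[a1])) * h(dist(a2, new[a2]))) / (
             h(dist(a1, pairing[a1])) * h(dist(a2, pairing[a2])))
    if random.random() < ratio:
        pairing = new
```

Every update preserves the perfect A-B pairing, so the state stays a valid valence bond configuration throughout.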
So Phil Anderson and collaborators first used some parameterization of the amplitudes like that. What we decided to do several years ago, with a former student, was to really optimize all the amplitudes: we have h(x,y) and we don't know what they are. We can of course set h for the shortest bond to one as a sort of normalization, since none of the normalizations matter here, and then we optimize all the other ones by variational Monte Carlo methods; let me not discuss exactly what we did, just show the results. We focus on the energy as a function of system size. The black dots here show the best variational energy that we could get, and then we did the projection out of that state to get what is essentially the exact energy, which is this curve. You can see there's a clear gap between them, but if you look at the scale here, these are actually very good variational states: the energy is accurate to better than 0.1 percent, which, for most people who do variational calculations, would be considered a really great state. But one should always take such things with a grain of salt, because even if the energy is good, other properties need not be. It turns out, though, that even the correlation functions, or actually this is the sublattice magnetization squared, agree very well between the projected and the variational state. You can see that the variational curve is not completely smooth here, and also here; that's because it's actually not easy to completely optimize these amplitudes, so in some cases the states are not perfectly optimized, but still it looks quite good.
So these valence bond states themselves can be really good variational states, if you are interested in such things. In our work we also looked at how the optimized amplitudes depend on the length of the bond. Anderson and collaborators had concluded that a power law seemed to be the best variational form, and even if we don't assume that form, we still get a power law: here I show the amplitude for the longest bond in the system, and it decays pretty much like 1 over L cubed, corresponding to 1 over r cubed, which one can actually derive in some sort of mean-field way that I will not go into. Okay, so that was about the states and the basic projector scheme. But I already mentioned that the projection scheme I have talked about so far wasn't that great. A few years ago, with Hans Gerd Evertz, we realized that the loop updates we had long used for the SSE, which build on the earlier work by Evertz and collaborators, can be very easily adapted to this scheme as well, and it's amazing how simple it is. So this is the kind of picture I showed you before: we have the bonds, and you don't see the spins anywhere because a bond just means up-down minus down-up. But now what we can do is put the spins back in: on each bond we have down-up or up-down, with the same probability for both, so in the starting state, and later on as well, we just pick one of the two possibilities, and then we get pictures that look essentially the same as, or very close to, what we did in the SSE. Okay, what I should have done is take this picture and convert it, but unfortunately I have not done that, so I just show another picture for a smaller configuration, with just four spins. Here I still keep the valence bonds of the trial state, one configuration of the trial state, but now you see I put in the spins: black and white are up and down. Then, if I look at what happens during the projection, since I have now chosen one component of each singlet, the operators, which here are the full singlet projectors, effectively act through either their diagonal or their off-diagonal part when I put it all together. So it basically becomes like the SSE case with those vertices, except that here we don't have periodic boundaries in the propagation direction; in the SSE we had periodic boundaries and there were no bonds at the ends. Now, instead, these bonds act like parts of the loops: we can again build loops here, and when a loop hits the end state, the valence bond is like a continuation of the loop. That's natural, because if we flip spins along the loop, and we flip this spin, then the other spin of that valence bond also has to be flipped, so the bond becomes a natural continuation of the loop. So the loop updates will now also sample the spins of the initial state, as they sampled spins before. If we sample the trial state, we still have to do some bond swaps there, but that's also very easy: one just reconfigures two bonds as before, making sure that the bond configuration is compatible with the spins you have put in. So what we then do, as I say in the last line here, is sample the configurations using the spins, but when it comes to measuring observables, we go back to the valence bond picture, where we say, okay, now these are singlets and these are full singlet projectors, and then we just get those transition graph loops and we measure in them.
this becomes then it's just in some sense when we do the sampling the only thing that is left of the valence bond picture is in the trial state so it's just that the trial state is a valence bond state but we express that valence bond state in terms of the normal basis and then everything becomes like basically an SSC except for this boundary condition and this is again very efficient much more efficient than before okay so here I show some sort of animations that was made by my former PhD student Yng Thang yeah okay so that's the shape of the loop is given by the location of these operators so okay so I forget to say one thing in SSC we insert and remove diagonal operators right and that's because we sample the Taylor expansion so the number of operators should fluctuate here we always have a fixed number of operators so the only thing we do is move the operators so I may move this operator from here to here and that's done exactly like you do in a diagonal update in SSC you start with this state just propagate the state and then instead of inserting and removing an operator a diagonal operator can be moved and where the operators are will exactly determine the loop structure so after you have done those diagonal updates you just look for the loops you don't decide where the loops are the structure is just there you just find it by this deterministic rule you just go in this list no so the only thing you have to do is you have to again store the kind of structure we had last time maybe it's easier if I actually than what we had in the last lecture because it's really almost the same so this is what we had last time this is the kind of loop structure we had then so if it was clear for you in this picture then it's exactly the same now it's just that in the projector method these states are not the same instead they have some bonds connected to them and those bonds will act as a continuation of a loop so if a loop goes out to that state there's a continuation maybe 
it's connected to that one, and so on. The structure of the loops is decided by where these operators are located, and that is decided at random in the diagonal update. I know it can seem a bit confusing because this is going quite fast, but in the end it is very simple, as you can see if you look at these programs and pseudocodes I have; everything is very compact, and it's conceptually quite simple once you understand it, even though it can feel confusing when you first see it. So even if you don't follow every detail here, the point I want to make is that this projector method is almost exactly the same as the finite-temperature one, and I will put them side by side in a moment. Let me just show you this animation my former student made; it may answer your question. You saw that at first it was just an empty frame; the spin states are shown there just for reference, and up-down or down-up has been chosen on the bonds on both sides. So when we started, there were just these valence bonds; everything else was empty. Now we put in some diagonal operators. These are the diagonal parts of the singlet projectors, exactly like the diagonal operators in SSE; so that was the first step of the simulation. In the next step, we create the links between the vertices, and once we do that, the spins on the straight segments are not even needed; these are now really the links of the vertices. And now we see that we can think of these bonds as also being links: instead of periodic boundaries, there are end-cap states which link different sides together. Then Ying made the next step here. Actually, to be honest, she made these graphics for lectures I gave here about three years ago, so I'm showing the same ones now; she also gave the tutorials then, which is why I didn't want to do them this year, since she did it for me the previous time.
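Since the only change from SSE is that diagonal operators are moved rather than inserted and removed, the sweep is easy to sketch. Here is a minimal Python illustration of the idea, not the actual program; the data layout (a spin list, and an operator string of [bond, type] pairs) is an assumption made for this sketch. A move to a randomly chosen bond is accepted when the propagated spins on that bond are antiparallel, since the singlet projector annihilates parallel spins.

```python
import random

def diagonal_update(spins, opstring, bonds):
    """One sweep of the projector diagonal update (sketch).

    spins    : list of +1/-1 spins of the left trial-state component
    opstring : fixed-length list of [bond_index, is_offdiagonal] operators
    bonds    : list of (site_i, site_j) pairs

    Unlike SSE, the string length is fixed: a diagonal operator is
    never removed, only moved to another randomly chosen bond.
    """
    state = spins[:]                       # propagated state (work copy)
    for op in opstring:
        b, offdiag = op
        if offdiag:
            # off-diagonal operator: propagate the state by flipping
            # the two spins on its bond
            i, j = bonds[b]
            state[i] = -state[i]
            state[j] = -state[j]
        else:
            # try to move the diagonal operator to a random bond;
            # accepted only if the spins there are antiparallel
            bnew = random.randrange(len(bonds))
            i, j = bonds[bnew]
            if state[i] != state[j]:
                op[0] = bnew
    return state
```

In the real algorithm, the linked-vertex list is then rebuilt from the new operator positions, and the loops are traced deterministically, exactly as described above.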
Anyway, I'm using some of her animations. So now she has created some loops here, and this is again what you would do in the linked-vertex list. These are two loops that she decided to flip, so now look at what happens when the loops are flipped: some spins change and some operators change, and I think at that point we are done with the updating. Now we switch to the pure valence bond propagation, because now we want to measure things. We have the bond configuration, and we just pretend that these are the full singlet projectors. That's how it works: when you sum over all spin configurations, which is what we do to get back to the valence bonds, these become the full singlet projectors. Then we can just propagate those states and measure something; this state gets propagated into some other bond configuration, like here, and we get the transition graph and its loops, on which we measure things. And of course, to get the ground state it's important that this power, the number of operators, is large enough, so that has to be tested. Okay, one other animation she made really shows how this is done in the data structure you store, including the bit-flip operation. So now she's putting in the diagonal operators, and in the representation we are using it's a kind of binary encoding. Okay, this is just the number seven, but if you look at it in terms of its bit representation, I think the convention is as before: the even numbers correspond to diagonal operators. So now there are all even numbers here, and that's why the last, or rather the first, bit, bit zero, is always zero here; the number is just the location of that operator along the chain times two. And now the links are made; that is done in some other structure which is not shown here. Then we make some loops, and the loops are going to be flipped, so now I want to show you what happens when you actually flip the loops. When you flip some loops, the operators change: these become off-diagonal operators, and that affects the first bit, because changing the type of the operator makes the number odd, which corresponds to changing bit zero from zero to one. So this is what's actually done in the program. Okay, so that's about it. Are there any other questions about these technical issues? Yes? Yeah, that's a good question: how are beta and this power m related to each other? We can get that from what we did last lecture. In the last lecture I told you, for example, that the absolute value of the energy, if we go to really low temperature so that we have the ground state, is the average expansion power n divided by beta. That means the average n is basically beta times the energy, and since the energy is extensive, the average n is proportional to N times beta. So the distribution of powers n, which looks like a Gaussian but is really some kind of Poisson-like distribution, is peaked at a value proportional to N times beta. Okay, so what do you need beta to be? Beta is one over the temperature, and the temperature should be much less than the gap if you want to get to the ground state; of course, it then depends on what the gap of the system is. Now we are not working with the exponential e to the minus beta H; we work with H to the m. But this shows how the two should be related: say we have figured out roughly what beta we need, and we know the system size. Then we know that if we do the exponential projection, the contributing power will be proportional to N times beta.
So if we just use a fixed power m, it should also be of that order. If you do the exponential, the power fluctuates a little, but the width of that distribution is the square root of the mean power, and the square root of n divided by n is one over the square root of n, so the fluctuations become essentially unimportant. There should therefore be no real difference between doing the power of H and doing the exponential; this is how we relate the two to the same order. Except, as I told you before, we may effectively see a different gap: at finite temperature we see the very smallest gap in the system, which is typically a singlet-triplet gap, but in the valence bond basis the smallest gap we see is the singlet-to-singlet gap, which is higher. So this scheme should converge faster, because instead of T having to be much less than the triplet gap, T has to be much less than the singlet gap, which is larger. Maybe the momentum can play some role here as well, because we throw out some higher-momentum states. The experience is that it converges a little better; normally not by many orders of magnitude, maybe you gain a factor of 2 or 3, but it really depends on the difference between the singlet and triplet gaps. So I'm supposed to go on until noon, right?
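Going back to the operator encoding in the animation for a moment: the convention described there (operator number equals the bond location times two, with bit zero marking the type) can be sketched in a few lines. This is an illustration of the encoding, not the actual program, and the function names are mine.

```python
def pack_op(bond, offdiagonal):
    # operator code: bond location in the high bits, type in bit 0
    # (even = diagonal, odd = off-diagonal)
    return 2 * bond + (1 if offdiagonal else 0)

def bond_of(op):
    return op >> 1            # drop the type bit to recover the bond

def is_offdiagonal(op):
    return op & 1 == 1        # bit 0 distinguishes the two vertex types

def flip_type(op):
    # flipping a loop through a vertex exchanges diagonal and
    # off-diagonal, i.e. it just toggles bit zero of the code
    return op ^ 1
```

So a diagonal operator on bond 7 is stored as 14; after a loop flip it becomes 15, an off-diagonal operator on the same bond.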
Okay, but let's look at the convergence a little. I will not compare with the finite-temperature case; I have some graphs where I make that comparison and you can see that this is better, but I don't have them here. I will at least show you the effect of the trial state. Let's actually discuss the convergence exactly a bit more. This is the trial state, and if I project with the m-th power, this is what I get: I can write the expectation value of some operator in this way, I get the ground state expectation value, and then various contributions from the excited states, the leading one being like this. With the power, I can write this in terms of a ratio of the energies, and if I use the gap, say the gap is E1 minus E0, then I can replace E1 with E0 plus the gap. Since the gap, at least for a big system, is much, much smaller than the energy itself, the ratio can be approximated with an exponential. That's another way to relate the power and the exponential, different from looking at the distribution. So basically the correction is an exponential that depends on the gap, but this m is also in there, along with N times e0, which is just the total energy. So it converges exponentially with m eventually, at a rate that depends on the gap. The conclusion is then similar to what I said before, expressed a little differently: you should always consider m in relation to the system size. In your program, if you want to read in what m should be, it's better to say that m is some multiple of the system size and ask what that multiple should be: m divided by N should be much larger than the normalized ground state energy, the energy per site, which converges when the size goes to infinity, divided by the gap delta. For a small gap you need larger m, and again it should be the singlet-singlet gap; and there's the momentum issue we already discussed.
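The leading-correction argument just sketched can be written out. A sketch consistent with the lecture's notation, where Delta = E1 - E0 is the (singlet-singlet) gap, e0 = E0/N the energy per site, and H is assumed shifted so that E0 is the eigenvalue of largest magnitude:

```latex
\langle A\rangle_m \;\simeq\; \langle A\rangle_{0}
   \;+\; c\,\Bigl(\frac{E_1}{E_0}\Bigr)^{\!m},
\qquad
\Bigl(\frac{E_1}{E_0}\Bigr)^{\!m}
   = \Bigl(1+\frac{\Delta}{E_0}\Bigr)^{\!m}
   \;\approx\; e^{-m\Delta/|E_0|}
   \;=\; e^{-(m/N)\,\Delta/|e_0|},
\qquad E_0 = N e_0 < 0 .
```

Requiring the exponent to be large reproduces the criterion stated here, m/N much larger than |e0|/Delta, and it also matches the earlier estimate that the contributing power is of order N beta, since beta of order 1/Delta gives m of order N|e0|/Delta.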
Okay, here I show you some examples, and this is for a 32 by 32, so moderately large, 2D Heisenberg system. Let's look at the energy first. Here you see I show many curves; what are they? These use as trial states the amplitude product states, where I just use a power law: the amplitudes decay in exactly this form as a function of the length of the bond, with powers 2, 3 and 4. Let's look at those first. Power 2 is the black curve; if you go to 4 it becomes worse, and if you go to 3 it's better, so 3 is the best power of these three. But you can also do what I mentioned: optimize these states variationally and use that as the starting point, which should be better. I did that twice because, as I also mentioned, it's not completely easy to fully optimize them, so they are never completely optimized; I just wanted to compare two cases and see how close to each other they are. Interestingly, those energies are much better, and they are very close to each other. For the sublattice magnetization squared, though, there is a big difference between the two states I optimized: although they have almost the same energy, you can still see differences in the spin correlations initially. But then of course everything converges as a function of m divided by N. So you can see that it does help to have a well-optimized state, but eventually even poor states converge to the correct result. And by the way, these results are the correct results to very high precision, as one can get from SSE; well, there's no real reason to doubt this, but of course one can compare with SSE. Okay, are there any questions about the convergence or anything like that? Okay. So let me say a few words about frustrated systems, and why this doesn't work there. In the valence bond basis I always talked about bonds connecting A and B sites of a bipartite lattice.
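As a quick aside, the amplitude-product weights used for these trial states are easy to write down. A purely illustrative sketch, with normalization and sampling omitted and a hypothetical function name:

```python
def trial_weight(bond_lengths, p=3):
    """Relative weight of one valence-bond configuration in an
    amplitude-product trial state with power-law amplitudes
    h(r) = 1 / r**p (sketch; p = 3 was the best of the plain
    power laws quoted above)."""
    w = 1.0
    for r in bond_lengths:     # one amplitude factor per bond
        w *= 1.0 / r**p
    return w
```

The configuration weight is just the product of one amplitude per bond, which is what makes these states easy to sample and to optimize variationally.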
In principle one can also consider an even larger basis with bonds between A-A and B-B sites as well. That's perfectly fine; it's even more massively overcomplete, but it has very similar properties. I will call these frustrated bonds. Now, if you consider a basis with such frustrated bonds but the Hamiltonian is bipartite, then under this kind of projection these bonds actually go away: for example, if you act on sites B and C of such a configuration, you just get a configuration with normal bonds, and you can also check that acting on normal bonds never creates frustrated bonds. So in a bipartite system this kind of basis can be used, but the projection will take you back to the normal basis anyway. If you have a frustrated system, however, the frustrating bonds will not go away, and they are actually associated with sign problems: when you consider the overlaps and so on, the properties are not as nice as before, and some signs come in. You can still eliminate these bonds. In a frustrated system you will always create them, because you act between A-A and B-B sites; for example, if you have interactions on these diagonals, you will clearly project bonds onto the diagonals. But there are relations between bonds coming from the overcompleteness: this frustrated bond configuration is equivalent to the sum of, or difference between, these two normal ones. And that is exactly why this in the end doesn't help: whenever you have done an operation you can go back to the normal bonds, and you can choose which ones, but there is a sign there, and that sign is the manifestation of the sign problem in this basis. People have tried quite hard to overcome these problems for frustrated systems with valence bonds, and there's a little bit of progress, but nothing that has really let us solve these systems. Let's see, I think I still have time to talk about valence bond solid states. VBS means
valence bond solid states, and this will also take me into talking about spinons in a moment. I have already talked about the Heisenberg interaction as a singlet projector, so I don't need to repeat that: the Heisenberg model is basically a sum of singlet projectors. We can actually make some extended models with products of singlet projectors. If I denote a singlet projector by a red bar like this, then my J exchange interaction is just a singlet projector, and I can consider interactions that are products of such singlet projectors. Here I just show it on a chain; in the first paper this was done in two dimensions, but here I will just discuss the one-dimensional case. I can consider products of two, or even three or more. In a formula, I can write the Hamiltonian like this: this is the Heisenberg interaction, and then I call the next one Q, which is the product of two singlet projectors, and I let those appear anywhere, so I sum over all translations of not only J but Q as well. It turns out that although these models are not frustrated in the conventional sense, there are actually no sign problems when we study them, if you have a minus sign here; but interestingly they have some properties similar to frustrated systems. That may still be a controversial statement in two dimensions, although I definitely believe it's true, but in one dimension it should not be controversial anymore. So I will talk a little bit about the J1-J2 Heisenberg chain, and this kind of J-Q model actually has the same kind of phases and phase transition; and I should say that the methods we have discussed can very easily be adapted to these interactions as well. Okay, so the frustrated Heisenberg chain; I'm sure many of you are familiar with it. We can consider it as having next-nearest-neighbor interactions like that, or you can consider it as a kind of ladder, where J2 is the interaction along the ladder legs and J1 is between them; that's completely equivalent.
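In formulas, the J-Q chain just described can be written as follows. This is a sketch consistent with the description above; C_ij denotes the singlet projector on the bond between sites i and j, and the placement of the two projectors in the Q term is schematic (on the chain they sit on nearby bonds, with all translations summed over):

```latex
C_{ij} \;=\; \tfrac{1}{4} \;-\; \mathbf{S}_i\cdot\mathbf{S}_j ,
\qquad
H \;=\; -J\sum_{i} C_{i,i+1} \;-\; Q\sum_{i} C_{i,i+1}\,C_{i+2,i+3} .
```

The overall minus signs, with J and Q positive, are what keep the model free of sign problems in the valence bond and SSE approaches.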
I normally prefer to look at it in that way. Then, as you change the ratio of these two couplings, and I will consider both of them ferromagnetic... sorry, antiferromagnetic, which means the system is geometrically frustrated, there are different phases. Of the phases and phase transitions, the only thing I will talk about here is that if you tune the ratio to about 0.24, you get a special point. For values smaller than that, you have basically the same kind of state as in the Heisenberg chain, meaning spin correlation functions that decay algebraically; as I mentioned, from the Mermin-Wagner theorem you cannot have long-range order in 1D in this case, so instead you have a critical, quasi-ordered state, and there are gapless spinon excitations and so on. That's well known. If you go beyond this point, the ground state changes to a valence bond solid. There is actually one point, when the ratio g is one half, where the ground state can be solved exactly: there are two ground states, and they are exactly the ones where you have singlets on alternating bonds (this symbol means a singlet). If you are away from that point but still above roughly 0.24, you still have valence bond solid order in the system, although it's not maximally ordered as there; it has some fluctuations. So this is well known for the J1-J2 chain, and it turns out that the J-Q chain actually has exactly the same phases: there's a critical Heisenberg-type state, and then there's a valence bond solid state, for both the Q2 and Q3 interactions. Okay, to characterize such states we should actually measure some bond strengths and so on; let me not talk so much about that, but basically we can measure some four-spin correlations to detect that kind of order, and those four-spin correlations can also be related to the transition graphs that I talked about. Oh yeah, so, you know, the Heisenberg chain has a critical state
where the spin correlations decay in this fashion; that's a critical state. Now, if you take this model with small g, the system stays in that critical state. The Heisenberg chain at J2 equal to zero is not an isolated critical point that you immediately destroy; it's a whole critical phase, actually similar to the Kosterlitz-Thouless kind of situation, and you have exactly the same form of the spin correlation functions. An interesting point is that there are these logarithmic corrections to the spin correlation function, and what happens at this critical point is that the log corrections go away; after that you get the VBS. There's a field theory description of this too: it's described by a Wess-Zumino-Witten field theory, and there's a marginal operator which changes sign here. The marginal operator is marginally irrelevant on this side, and it causes the log corrections; here it is zero; and here it is marginally relevant and causes the VBS. That we can also study with these J-Q models. Okay, so we can simulate those states and measure things, but now I just want to show an animation again. This animation shows the imaginary time, or the propagation in the time direction; I think it starts only close to the center of the system, but anyway, it's basically showing the bond fluctuations in the ground state, you can think of it like that. The bra and the ket are shown one outside the circle and one inside. This is for the pure Q model, I think what we call the Q3 model, so it clearly has very strong valence bond solid order; you can see it's almost completely ordered, with just some small quantum fluctuations of that order. But now I can change the couplings, putting in some more of the J interaction. And then, actually, this may be the transition-graph loops; I should remember exactly what it is. It may be as a function of simulation time.
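As an aside on the decay form referred to here: the standard result for the critical spin correlations of the Heisenberg chain, with the multiplicative logarithmic correction produced by the marginally irrelevant operator, is

```latex
C(r) \;=\; \langle \mathbf{S}_0\cdot\mathbf{S}_r\rangle
      \;\propto\; (-1)^{r}\,\frac{\sqrt{\ln r}}{r} .
```

At the dimerization point the marginal coupling vanishes, so the square-root-of-log factor disappears, and beyond it the VBS sets in; this is the sign change of the marginal operator described above.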
Or something like that; it doesn't really matter what it is a function of. In either case, it's showing the ground-state transition-graph loops. Okay, so you see that there are some more long bonds now, but still order. Now I go to where the critical point is, and then you have a lot of fluctuations, and now the order is actually gone; there should be power-law dimer-dimer correlations at this point if we were to measure them. It's hard to see here, but you can see that there are still some segments of ordered bonds, and then there are some gaps and phase shifts and things like that, so there's no long-range order in this system anymore. And sometimes, you see, you get some really long bonds. Okay, now let me talk about spinons a little, and for that I need to talk about the extended valence bond basis. We discussed the basis for singlet states; now we want to look at higher-spin states. So let's consider states which have a magnetization equal to the spin; to be specific, consider a spin-one state where the Z component is also one. In the valence bond basis, what we can do is pair up all spins as before, except that we leave two up spins unpaired, and then it turns out that the total spin of that state, as you can intuitively see, is one as well; this works for general S here. Okay, so here are some examples. This is what we already did, just for six spins on a chain: a transition graph for the valence bonds of a singlet. If you have an odd number of spins, you can also look at one unpaired spin; then the spin is one half, and you see that the transition graph has an open loop, a string. There's one unpaired spin, but it can be in a different location in the bra and the ket, and those locations are connected by a string. So now we don't have just loops to worry about, we have loops and strings, but it's a very easy generalization. If we have spin one, we have two unpaired spins, and they are actually always on different sublattices.
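String statistics of this kind are simple to accumulate once the endpoint positions are recorded in each sampled configuration. Here is a hypothetical Python sketch of one such estimator, the mean separation of the two string endpoints on a periodic chain; the data layout is my own invention, not the program's:

```python
def mean_spinon_separation(endpoint_pairs, L):
    """Average distance between the two string endpoints (the
    unpaired spins), using the periodic distance on a chain of
    L sites.  `endpoint_pairs` is a list of (x1, x2) positions
    sampled over the simulation (hypothetical data layout)."""
    total = 0.0
    for x1, x2 in endpoint_pairs:
        d = abs(x1 - x2)
        total += min(d, L - d)      # shorter way around the ring
    return total / len(endpoint_pairs)
```

Averages like this are what distinguish a bound string (a well-defined spinon size) from endpoints that wander arbitrarily far apart.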
That's what the open and solid symbols mean here; and again, they are in different locations in the bra and the ket. So here, oops, so here in this case there is no string, since they are in the same place on the two sides, and here, that's the string. Intuitively, this has something to do with spinons. Spinons are objects with spin one half, and one question people have in many systems is whether those spinons really are the elementary excitations. For example, in neutron scattering, as Bella Lake has done: if you excite a spin-one-half chain with a neutron, you excite a magnon, which has spin one, but that actually decays into two spinons, and you can actually see signatures of that in neutron scattering. Here we can look at that more directly, because these basically are the spinons; though it's not only the unpaired spin, it's actually this whole string that represents the spinon. So by basically looking at the statistics of the strings, we can say something about spinons, and the only thing I will do is show you some animations again. This one is deep in a valence bond solid, just for illustration. It's basically a spin-one-half state, so there's one spinon, but it's in a different location in the bra and the ket; it's hard to see here because it's moving around, but the two locations are connected by a string. You see that they stay close to each other, and that can actually be interpreted as the spinon having a well-defined size: the size of the spinon is not just one unpaired spin, it's actually some sort of average distance between those endpoints. Okay, in 1D, whether it's the critical state or the valence bond solid state, we will have spinon deconfinement, meaning that the spinons are not bound to each other. And you see that here: if we put in two spinons, again there's one version in the bra and one in the ket, and you see that these two stay close to each other and those two stay close to each other, but the pairs go far away from each other; basically they don't interact with each other. So this is a manifestation of spinon deconfinement in the valence bond solid. And okay, this is pretty much all I wanted to do. This just gave you a little bit of the flavor of what you can do in the valence bond basis, and in particular looking at spinons is something quite recent that we are working on quite a lot these days, but in two dimensions. Okay, so let me just mention again that there's a code. I didn't show any pseudocodes now, but actually everything is quite similar to the SSE, and I have codes here, with at least some comments in the code, and I think you will see how similar they are. Of course, in this case there are some extra elements that have to do with the bonds and so on, but it's not much more difficult. Okay, so I guess I can stop there and take some questions.