I want to start with finite-size data that we generate in Monte Carlo simulations: how to extrapolate it, and hopefully to understand whether the data are actually good, so that we really understand what the data are telling us. The first topic will be the Heisenberg antiferromagnet, and then we will see what happens when we modify the Hamiltonian a little. And then, really, it's a question of how to analyze data. I want to talk about a case which has some experimental relevance: there's a material which you can actually tune through a quantum phase transition. It's a quantum magnet, and you can describe it by a spin-one-half model; I will discuss a model which we have studied in relation to that. Then I'm going to get to how to analyze critical points. This is based on some recent work. A lot of it is known stuff that people have used for a long time, but we have a new twist on how to make everything systematic, well controlled, and easy. I think that's the key: things should be relatively easy so that people can actually use them. If I have time, and I think I will, I'll say a few words about some pitfalls that are unfortunately quite common when people try to extrapolate order parameters close to critical points, where things can go horribly wrong, with disastrous consequences for everybody. Let me say a few words about quantum phase transitions first, compared to classical phase transitions.
We have a lot of phase transitions driven by thermal fluctuations; they happen at finite temperature, that is, temperature larger than zero. Then we also have phase transitions that occur in the ground state, and those are what we call quantum phase transitions; they are driven by quantum fluctuations. There's some parameter g in the Hamiltonian that you change, and at some point there is a really drastic change in the nature of the ground state. We can draw very similar pictures for both cases. We have some order parameter and some control parameter, which could be the temperature or, in the case of a quantum phase transition, some parameter in the Hamiltonian. In the thermal case you can also tune the transition by changing some parameter other than the temperature, but the point there is that it's thermal fluctuations causing the transition; here I just put T to make it clear that we are studying something at finite temperature. If we have a continuous phase transition, then at some point the order parameter, let's say the magnetization, starts to grow continuously, but there's a singularity there: some power-law behavior starts at that point. If you have a first-order transition there's just a discontinuous jump; in some cases the jump is big and in some cases it's small, but if there's a discontinuity we call it a first-order transition. In terms of the correlation length, at a continuous transition the correlation length diverges from both sides if you define it correctly, whereas at a first-order transition it stays finite across the transition. These pictures look the same for classical (thermal) transitions and for quantum transitions, and in fact the two are similar in many respects, but they also differ in some cases.
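As a minimal sketch of the continuous case just described, here is a toy illustration of an order parameter that vanishes with a power law at the transition while the correlation length diverges. The exponent values below are placeholders for illustration only, not values from the lecture:

```python
# Toy power-law behavior near a continuous transition.
# beta_exp and nu are hypothetical illustrative exponents, not fitted values.

def order_parameter(t, tc=1.0, beta_exp=0.5):
    """Order parameter ~ (tc - t)^beta_exp below tc; zero above (continuous)."""
    return (tc - t) ** beta_exp if t < tc else 0.0

def correlation_length(t, tc=1.0, nu=1.0):
    """Correlation length ~ |t - tc|^(-nu): diverges as t approaches tc."""
    return abs(t - tc) ** (-nu)

# Continuous onset: the order parameter grows from zero below tc...
assert order_parameter(1.0) == 0.0
assert 0 < order_parameter(0.99) < order_parameter(0.5)
# ...while the correlation length grows without bound approaching tc.
assert correlation_length(0.999) > correlation_length(0.9)
```

At a first-order transition, by contrast, `order_parameter` would jump discontinuously at `tc` and `correlation_length` would stay finite.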
We talked about quantum Monte Carlo simulation methods last time, and there you saw that when we do the simulations we map the quantum system onto some kind of classical statistical-mechanics problem in one higher dimension. So it's really the case, most of the time, that a quantum phase transition can be described as a classical phase transition in D plus one dimensions, although sometimes other aspects come in as well, which I will not talk much about today. We will discuss continuous quantum phase transitions, and when we get to analyzing critical points I'm actually going to illustrate that for the Ising model. Okay, let's see now. I thought I had moved this slide further ahead, but should I talk about this one now? Let me check; it's okay actually. We do Monte Carlo simulations on finite lattices. Just as a minor remark, normally we use periodic boundary conditions so that we have translational invariance; that usually helps to get fast convergence to the thermodynamic limit, although in some cases one can use other boundary conditions as well. So we have to analyze the size dependence in some way to be able to extract the thermodynamic limit. Sometimes it's clear from the data what happens, but sometimes it's not so clear. You should always try to have some theoretical background, or expectation, for what kind of approach to the thermodynamic limit you expect. Sometimes that's available and sometimes it's not, but if something is available you should know about it and use it. So let me quickly show an example for the Ising model. You all know what the Ising model is; we can define it as shown, and we can define the magnetization in that way.
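Since the Ising model is the running example, here is one way its energy and magnetization might be evaluated for a given spin configuration on an L-by-L lattice with periodic boundary conditions. This is a sketch, not the lecturer's code; it assumes the common ferromagnetic convention E = -J Σ s_i s_j over nearest-neighbor bonds with J = 1:

```python
import numpy as np

def ising_energy(spins, J=1.0):
    """E = -J * sum over nearest-neighbor bonds of s_i * s_j.
    np.roll wraps around the lattice edges (periodic boundaries); pairing
    each site with its right and down neighbor counts every bond once."""
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -J * np.sum(spins * right + spins * down)

def magnetization(spins):
    """m = (1/N) * sum_i s_i."""
    return spins.mean()

L = 4
all_up = np.ones((L, L), dtype=int)
# Fully ordered configuration: each of the 2*L^2 bonds contributes -J.
assert ising_energy(all_up) == -2 * L * L
assert magnetization(all_up) == 1.0
```

The same functions work for any configuration of +1/-1 spins, which is all a Monte Carlo measurement routine needs.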
So here I show some simulation data as a function of temperature; there's no magnetic field here, and I calculate the order parameter squared, because on a finite lattice the expectation value of the magnetization itself will eventually be zero if I don't square it: there is a symmetry in the system, and we don't break it on finite lattices. But the magnetization squared is fine, and we can look at it for different system sizes. For small sizes we see some rapid increase, but it's not clearly a phase transition; as the system size increases, you start to see very clearly that something singular is really starting to happen there. So then what one should do is look at things as a function of size at fixed temperature; that's shown here. If you are in the ordered phase, things converge very rapidly in the case of the Ising model; in fact one expects exponentially fast convergence when you are breaking a discrete symmetry. So if you plot it somewhere in the ordered phase, in this case at T = 2.25, just a little below Tc, you see that as a function of size it quite quickly goes to a constant; note that this is on log scales. Now, if I go above the critical temperature, those are these points here, then you see that I get power-law behavior, which is linear on the log scale. Think about what we are doing: we compute the magnetization squared, so there is always a contribution from each spin paired with itself, with spin squared equal to one, and those N terms divided by N squared give 1 over N. That is the trivial power law you see here. Actually the prefactor is not just one; you can think of it as involving the correlation length squared or something like that, but at least you can see that even at infinite temperature you get that contribution. So, 1 over N.
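The trivial 1/N contribution just mentioned is easy to verify: for completely uncorrelated spins (infinite temperature), only the i = j terms of the double sum in the squared magnetization survive on average, so the expectation of m squared is exactly 1/N. A quick sanity check by sampling random configurations (a sketch, assuming independent +1/-1 spins; lattice size chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 64  # e.g. an 8x8 lattice

# Many uncorrelated spin configurations, one per row.
samples = rng.choice([-1, 1], size=(200_000, N))

# Average of m^2 over the samples, m = mean spin per configuration.
m2 = np.mean(samples.mean(axis=1) ** 2)

# For independent spins <m^2> = 1/N exactly; the sample average should be
# close to 1/64 up to statistical noise.
assert abs(m2 - 1.0 / N) < 1e-3
```

In a real simulation below the critical temperature this value would instead saturate at the square of the spontaneous magnetization rather than decaying as 1/N.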
But then what happens exactly at the critical point? In the case of the Ising model we know the critical point exactly, so let's check this numerically. There you also see that you get a power law, another straight line, but this is a non-trivial power law, and it depends on the universality class of the phase transition. In the case of the Ising model in two dimensions the slope here is one quarter, and that comes from the exact solution of the model; in other models, other universality classes, it will be something else. Sometimes you know ahead of time what the universality class is going to be, and sometimes you are dealing with a new universality class and your task is to determine that exponent. Okay, so in general what we may want to do is study the magnetization curve, or in general some order parameter curve, in the thermodynamic limit; then it's just a matter of extrapolating to infinite size to get this curve. You see it's easy if we are at relatively low temperature and the convergence is exponential, but close to the critical point it's not so easy, because there we are affected by the critical scaling behavior. Even if I'm really close to Tc, eventually, if I plot things like this, it should converge exponentially fast, but close to the critical point it will first decay almost like the critical curve and then flatten out, so it's not easy to do the extrapolation there. In other cases the extrapolation may not be as easy as here either, because the convergence is not always exponential, as I will actually show next. So let's first talk about extrapolating long-range order in different systems, and then about studying criticality. Okay, the 2D Heisenberg antiferromagnet: we talked a lot about it last time, we did simulations of it, and I showed you the algorithm and so on. In this case I'm considering the square lattice again.
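The non-trivial power law at the critical point is typically extracted as a slope on log-log scales. A minimal sketch of that step, using synthetic noiseless data obeying the 2D Ising critical form with slope one quarter (the amplitude 0.9 is arbitrary, for illustration only):

```python
import numpy as np

# Synthetic critical-point data: <m^2> = A * L^(-1/4), A arbitrary.
Ls = np.array([8, 16, 32, 64, 128], dtype=float)
m2 = 0.9 * Ls ** (-0.25)

# A straight-line fit on log-log scales recovers the exponent as the slope.
slope, intercept = np.polyfit(np.log(Ls), np.log(m2), 1)

assert abs(slope - (-0.25)) < 1e-8
```

With real Monte Carlo data the points carry error bars, so one would use a weighted fit and check the goodness of fit, but the log-log slope is the same basic estimator.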
okay, in this case the order parameter is the sublattice magnetization, because it's an antiferromagnet, so we have a staggered phase factor there; it's not just the sum of the spins. Again we should look at M squared, because M itself vanishes, strictly speaking, and we want to look at it as we go to bigger and bigger sizes and extrapolate to infinite size. This was first done in a reliable way for this model by Peter Young and his postdoc Joseph Reger, and at that time, 27 years ago I guess, the computers were a bit slower than they are today, and in fact the algorithms were not at all as efficient as they are today; in particular, the loop algorithms that I talked to you about last time had not been invented yet. Nevertheless, they showed that this sublattice magnetization has a pretty large value: the maximum possible value it could have, which corresponds to the classical value, would be one half, and they got something like 60% of that. Just to brag a little, I want to show our recent result, to illustrate how things progress over time. Here we have used the projector Monte Carlo algorithm that I talked about, up to lattices with 256 squared spins, and again we plot as a function of 1 over L, which these people did as well, but we have much better data. Here I actually plot two things. One is exactly the expectation value of the square of this quantity, and you see that it extrapolates nicely to a value which, when you take the square root, gives you that number, so it's very consistent with that, but the error bar is, what is it, 2000 times smaller or something like that. I also plot the correlation function at the longest distance on each lattice size, and you expect that to go to the same value as well, since the correlations go to the order parameter squared; these two agree with each other. And just to illustrate how small the error bars are, here I took, I guess it was, the correlation
function data, and I subtracted the fitted curve. What's the fit, by the way? The fit is a low-order polynomial, something like fourth order, because from the theory of this kind of symmetry breaking, which you can also get from spin-wave theory, you expect that for large sizes this should eventually be linear. But if you have really good data, as we have here, with really small error bars, you can also detect deviations from the line, so if you want to fit all the data you have to use maybe a cubic polynomial; I think that's what we had to use here. This shows the difference between that fitted cubic polynomial and the data, and you see how small the deviations are. You also see that this is consistent with the form: there are no systematic deviations from the curve, it's basically consistent with random fluctuations, and it's also consistent with the size of the error bars, which is also important. Remember what an error bar means: normally it means one standard deviation of the distribution of the mean, which implies that the probability is about 68% that the true value is within your error bar. That also means that about one third of your points should fall outside the fitted line by more than their error bars. Here, that one is barely outside, and maybe here it's a little less than a third, but there are statistical fluctuations in that as well, of course. Overall this is very consistent with this form. By the way, when people write results in this way, the one in parentheses means the error bar on the last digit. I know some people are confused by this notation; it's just used a lot because, if you know what it means, it's much easier than writing plus or minus 0.0001, or maybe I forgot a zero now, anyway. But some people, I have realized, believe that this is somehow the following digit, and that the result is as
accurate as that; but no, that's not what it means: it's the statistical error on the last digit before it. Anyway, this is a quite precise value, probably more than anybody would ever need; actually, that's not true, a lot of people want to benchmark other methods, and then it's really good to have very precise results, and somebody in Japan, Nishino I believe, is keeping a list of these very accurate numbers for various models. Okay, so that was the Néel order; it has been understood for a long time that this system is Néel ordered, as it's called. Now let's see how we can destroy that Néel order; we want to be a little destructive here and get rid of that order. There are many ways to modify the Heisenberg interactions to get rid of the order, and one way, which I eventually want to discuss in the context of an experiment, is dimerization. What does that mean? It means that we have dimers that are coupled to each other. A dimer here you should think of as a pair of spins that is in some sense strongly coupled: the red bonds indicate stronger couplings, and the blue ones are the inter-dimer couplings, which you should normally think of as weaker. There are many ways to do this in two dimensions. One way is to take a bilayer and couple the layers by the stronger coupling; this is still a two-dimensional system. But you can also stay within one layer and make some pattern of these dimers; there is an infinite number of ways to do that, and several of them have been studied. What happens is that if the ratio between the strong and weak couplings is very large, then basically you form singlets on those dimers. Of course, if you have an isolated dimer, which is the same as saying g goes to infinity, then the ground state of the Heisenberg interaction on the dimer is exactly a singlet, so for large g the ground state is essentially a singlet product. In the case of this system, if g is one, it's just
the case we just discussed, which has Néel order. So on one end you have Néel order, and on the other end you have a singlet product, and the singlet product clearly doesn't have any long-range magnetic order; in between, there is actually a critical point separating them. What happens is that this sublattice magnetization vanishes continuously here, and, as you know, these antiferromagnetic states can be described by spin-wave theory, so they have gapless spin-wave excitations; there's no gap in the system. On this side, however, you have a finite excitation gap, which opens continuously here. And here I just list, it's not important for what I'm going to talk about today, the finite-temperature behaviors of the correlation length in this case. As you remember from my first lecture, I mentioned the Mermin-Wagner theorem, which says that this system in two dimensions can only order at zero temperature; everything I talked about here was at zero temperature. If we go to finite temperature in that state, or whatever we should call it, we have an exponentially divergent correlation length, but there's no finite-temperature transition, no real phase transition as a function of temperature, only an exponentially divergent correlation length. But we do have a phase transition at zero temperature as a function of this coupling. Okay, now, just by the symmetries of this system: the spins have three components, and you can think of the order parameter as something with O(3) symmetry, so it can point in any of the x, y, or z spin directions. Since we have done the mapping, or if we do the mapping, from the 2D quantum system to a 3D classical system, we would then guess that this should be in the universality class of the 3D classical Heisenberg model. Of course, in that case it's a thermal transition, so if you study that model the temperature is your control parameter, and g here
corresponds to changing the temperature in the classical model. And this has actually been confirmed by a lot of studies, to quite high precision, that you get the same critical exponents. Okay, let me not discuss this in detail, but just show a few results for this particular system. When you do these simulations you are interested in the ground state, so you really have to make sure the temperature is low enough to reach it; strictly speaking you never reach the ground state, you would need an infinitely low temperature, but you can do some experimentation. What you should do is run many simulations as a function of beta, the inverse temperature, and calculate, for example, the sublattice magnetization squared. If beta is very small, meaning high temperature, you get some small values; then they will increase and eventually flatten out, and that happens when T is less than the finite-size gap. It turns out that in this case the finite-size gap goes like 1 over L. These are things you have to study a little if you don't know them, but they are all well understood. So that means beta essentially has to be of the order of L, or somewhat larger; you should check things like that. Then, if you have done that carefully, you can plot the sublattice magnetization as a function of g for many system sizes, and extrapolate the magnetization curve to infinite size, if that's what you are after. Now you immediately see something different here compared with the Ising model we discussed before: there, far inside the ordered phase, the convergence as a function of system size was very fast, exponentially fast. In this case, and this is related to the 1 over L behavior we had on the previous slide, the convergence goes like 1
over L, and again it has to do with the different symmetry of the order parameter: this is a continuous O(3) order parameter, and if you read books by Cardy, or other books on many-body theory and things like that, you will learn such things. So here you can see things plotted as a function of size. The black dots are exactly what we had on the previous slide, and then there's another case, g equal to 1.5, down here, and you see that it also clearly goes to some finite value; you could do the extrapolation, which I haven't shown here. And here I show a value of g very close to the critical point, where maybe it's not so clear. I could also have shown something in between, and you would start to see that it's not so easy to tell when you are close to the critical region. If you are far inside the disordered phase, then it should go like 1 over L squared. So in principle, from data like that, you can get the magnetization curve. At the critical point you again expect power-law behavior, as we saw in the Ising model. If you believe that it's the 3D Heisenberg universality class, then you expect it to be almost linear here: the exponent eta is very close to 0 for this universality class, and you can kind of see that here; this is actually pretty close to the critical point, and you can see that it's linear, but not quite. To really check that, you have to do a very careful analysis, which I will not do here; I will later talk about ways to extract the exponents and the critical point that are much more sophisticated than just plotting things like this. Interestingly, though, there is an experimental realization of this kind of phase transition, not in two dimensions as far as I know, but there is one in three dimensions, which is also interesting, and I want to discuss it a little. It's a three-dimensional coupled-dimer system, this is the name of the compound, and there are actually many articles about it; this is
a fairly recent one by Christian Rüegg's group at PSI in Switzerland. This material is quite complicated; I can really only work with square and cubic lattices, my ability to imagine other kinds of lattices is very limited, so this is very complicated for me, and I stole the picture from this paper. Basically, these copper atoms form the dimers, so they have a strong exchange coupling, and then, it's not really shown in this figure, but you can look at these papers, there are all kinds of couplings between the dimers; it's not just nearest-neighbor couplings as in the neat two-dimensional lattice, it's more complicated, but it's dominated by this coupling here. And then there's one other, let's say, okay, I was wrong, it does say here: the J is here, and then there's J1, well, I guess that must be J1, and I think this must be the next strongest. Anyway, people have estimated those couplings, and it's believed that they are known pretty accurately. What the experiments show is that you can actually change the couplings as a function of pressure, in some way which is actually not quite clear. What is clear is that if you do neutron scattering measurements, which can probe exactly the squared magnetization we have discussed, you can see that the order vanishes at some point. You can then also measure the ordering temperature, which they have done. You see this is pressure in kilobar, so basically a thousand atmospheres, but that's considered low pressure, I think, by people who do such things, so it's not so bad; it's possible to build a pressure cell and do neutron scattering. You see here that at atmospheric pressure, QD means quantum disordered, that's the side where there's no magnetic order. Apparently what happens when you apply pressure is that, relatively speaking, these inter-dimer couplings grow, so that eventually the system can order. They have studied all kinds of aspects of
this system, including the nature of the excitations; there's something analogous to the Higgs boson in the system, and all kinds of interesting things. Okay, so we decided to study a toy model to describe this material. Since we expect some kind of universality, we can study simpler lattices, which are easier to deal with. We studied three different kinds of dimerized lattices: the first two correspond to the single-layer cases I showed you in 2D, but now in 3D, and the last case is the analog of the bilayer that I showed you in 2D; here it's like a double cube, so just imagine two simple cubic lattices, bring them close to each other, put a coupling between the nearest-neighbor pairs, and then you have the other couplings in the cube as well. In these systems we can also go from a Néel phase to a quantum paramagnetic phase, and one question we had was, since experimentally they measure the Néel temperature, the ordering temperature, very nicely, whether there is something universal about that. You expect, I think, some kind of universality in terms of exponents and so on, but we wanted to see if there is anything more than that. So what we did was determine the ordering temperature, and again, I haven't really talked about how to determine critical points, I will do that later. The main thing I want to talk about here is our extrapolations of the sublattice magnetization, and the point of all this is that one has to be very careful to do it accurately. You saw from the previous slides that it's pretty clear you can do a nice extrapolation here, but closer to the critical point it becomes harder and harder, and how can we be sure that we are doing it correctly? That's what I want to talk about. Okay, so here is some data; we could have shown this instead of going back to the previous slide. Here is the actual data for several values of this ratio, and here the critical ratio is around 4.8
or something like that, and here you see it again as a function of 1 over L. In this case, since it's one dimension higher, the leading behavior as we approach infinite size should actually be 1 over L squared, and you can see that here: there's no linear term, I think you can even see that visually. Here again we did some polynomial fits and extracted the ordered moment, the sublattice magnetization; I will tell you about some more accurate calculations, and exactly how we do it, in a moment. We also extracted the Néel temperature, and here we used a method called the curve-crossing method, which is what I want to discuss at the end of this lecture, so we don't care so much about that right now, other than the fact that we have determined it as well for several values of this coupling; this is just one example. Okay, so then, as I mentioned, in the experiment the way those couplings depend on the pressure is not really known, so it's hard to compare with a model. Our model, our lattice, is even different, so it wouldn't make sense anyway, but even if you did the actual lattice corresponding to this material, you don't really know how the couplings change as a function of pressure. So the question is: can we do something which circumvents that lack of knowledge? We can, if we plot the Néel temperature against the sublattice magnetization, which doesn't explicitly involve the couplings; of course the coupling dependence is in there in some way, but the way we analyze the data, we don't need to know it. One question, though, is how to normalize the Néel temperature, because it has units of energy and of course depends on the couplings, which we don't know; there's some overall energy scale that should somehow be divided out. The sublattice magnetization is easier, because it's a number that should be between zero and one half, so there's nothing to divide out to make it easy to deal with. But the Néel
temperature, we have to imagine some scenarios for what we can do. We have two couplings, J1 and J2. We could just normalize by one of those couplings and see what happens; then we get a dimensionless quantity. Or we could imagine some kind of average coupling. Let me draw it in two dimensions, because my 3D abilities are a bit limited. If we have a 2D dimerized lattice like this, say these are the strong dimers and the rest are the weak couplings, then think about a given spin: this is J1, J1, J1, and this is J2, so it sees three of the weak couplings and one of the strong ones. If we change the couplings, the average or total coupling that this spin sees changes as well, so another way to normalize would be by the total coupling, which you can also see as a kind of average coupling if you just divide it by something. We call it Js, and in this case it would be 3 J1 plus J2; in the 3D lattices it's of course something else. Okay, so we tried that, and we also used an intrinsic energy scale that you can in principle read off in experiments, but also in the simulations, namely the temperature of the peak of the magnetic susceptibility; I will show it in a moment. Anyway, this is what we get when we just normalize by J1, and by the way, we normally set J1 to 1, and then this J2 coupling is equal to what we called g before. If we just do that, we can see that it looks like the Néel temperature depends linearly on the sublattice magnetization. Note again: I take many different couplings, I calculate the Néel temperature and the magnetization, or I should say my student did those calculations, and then we just plot one against the other, so the g dependence is hidden in this way of doing it. Anyway, we get some linear dependences here, that's clear, and that's actually what you expect from mean-field theory, that they should be linear. A good point, yes, that's right. So we do the calculations; we actually
do it at T equal to 1 over L, and then, since in 3D, if I plot T versus g, we have a whole ordered phase, this is all ordered, so if I want the magnetization here, or let's say I look at it along a line here, and plot the magnetization as a function of T along that line, it looks something like this. If, as a function of system size, I change the temperature, so that for increasing system size I go lower and lower, I will hit this region where it becomes very flat and converges quite nicely. So yes, it's the sublattice magnetization at zero temperature, and how the Néel temperature depends on it. From a simple mean-field argument you expect that to be linear, but it's not universal in the sense that the curves fall on a common curve. Interestingly, though, if we use these other normalizations, the points fall, almost perfectly I would say, on the same curve, though the curves are a little different depending on which normalization we use. Okay, so what is this T star? It's a well-known method used by experimentalists to extract an overall energy scale of an antiferromagnet: measure the susceptibility. In our case we calculate it; sometimes when we do simulations we also say we "measure", even if experimentalists probably think it's heresy to say that, but okay, we measure the susceptibility, and it has a peak at some temperature, and that temperature is what we call T star; it reflects some kind of intrinsic energy scale of the system. Here we plot that T star normalized by J1, which is one, for these three different models, and you see it's not a constant at all, it really varies. But if we divide the Néel temperature by it, we actually get quite a nice data collapse; I would say this collapses a little better than that one, but this is also quite good, and it's very linear. So based on that we say, well, it looks like if you normalize the Néel temperature in an appropriate way, then you really get some
universal behavior, and the nice thing is that, in principle at least, this T star is accessible experimentally, so they could do an experiment like that. In fact, I guess motivated by our proposal, in their later experiments, after our paper, they did attempt to do that. The experimental susceptibility data is not really available in a complete sense, they have some points, and I think it's not clear how good it actually is, but anyway they did it, and what they found looked quite a lot like our curve: the Néel temperature divided by what they call T max, we call it T star, versus the sublattice magnetization in Bohr magnetons per copper ion. Now, you still cannot compare these exactly side by side, because this is the actual sublattice magnetization measured in units of Bohr magnetons. What's uncertain there? Well, the g-factor of the electrons in this compound; I'm not sure it's known, maybe it is, but at least they didn't talk about it. Normally the g-factor should be very close to what it is for an isolated electron, namely 2, but it could be 2.2 or something like that. If you assume it is 2, I could have done it here but I didn't, you divide these numbers by 2, so 0.4 here should map onto 0.2 here, and then you see that this is a little below, maybe 25% below, but considering the uncertainty in what T max is, I would say this is quite nice. It even has the same kind of upturn there, although it happens a little earlier here; hopefully they can do more on that point. Okay, so let me just make a comment: this doesn't necessarily have much to do with the critical point, because this extends very far from the critical point, where the magnetization is already pretty big. But this change from linear to nonlinear behavior is quite interesting, because the linear behavior you can actually
The linear behavior you can actually explain from some very simple mean-field-like arguments, and it corresponds to a decoupling of quantum and thermal fluctuations: the quantum fluctuations act effectively as just a renormalization of the coupling in the system, and with that you again get the linear behavior. The nonlinear effect would then mean that the quantum and thermal fluctuations become more intertwined, or whatever we want to call it. Maybe I'm talking too much about the physics here, maybe I should talk just about the numerical computations, but I think we have to talk about both, and this is just motivating what I'm going to do next.

Okay, so I think it was quite encouraging that this worked so well, but now I want to talk about some recent results where we wanted to do more work on this system, because it's one of the few examples where you can really tune a phase transition in a very good quantum magnet: an insulator with really well-localized spins, described by a well-defined Heisenberg model, where you can actually go from the Néel state to something else. That's not very common. There are lots of interesting compounds you can study, for sure, but to really go through a quantum phase transition is not common; maybe Bella Lake can correct me if I'm wrong, but I think this may be the only example.

So now, and this was posted quite recently, we want to talk about logarithmic corrections and study them numerically. What is a logarithmic correction? Well, this phase transition, the O(3) transition, is part of the O(N) family of phase transitions, and for all phase transitions, if you think of the universality classes as a function of dimensionality, there is an upper critical dimension above which mean-field theory applies. In this case that upper critical dimension is four, and that's exactly where we are, because we have a 3D system and we do quantum mechanics, so 3 + 1 = 4: we are at the upper critical dimension. Exactly at the upper critical dimension what you expect is mean-field critical exponents, but with logarithmic corrections to essentially everything, meaning that things behave as power laws times logs. Experimentally, several of these papers have addressed this and tried to detect that it's not really a pure mean-field transition but that there are logarithmic corrections; it is very difficult to detect these logs, as you can imagine, and even numerically there are very few works that have attempted it. There is some old work on the Ising model, but that's pretty much it. So we decided to bite the bullet and see if we can say something numerically, and whether that could help the experimentalists judge if there's any chance of seeing it in experiment as well.

So we do these SSE simulations, and one reason I want to talk about exactly this point is that it's a case where you really have to do everything carefully, because you are trying to see something which is very hard to see, and if you make mistakes in your calculations it will ruin everything. Now we want to simulate close to the critical point, but I can still use this device that Sebastian asked about: we are still in the ordered phase, so we can still take the temperature to zero as a function of L, as long as L is big enough. Here we used T = 1/(2L). We have a cube, and actually there's a mistake here; let me correct it, because then it looks even more impressive: it should be 128, which I forgot to put in, but I did know it. The system we are studying now is again not the experimental lattice, although we have actually started on the experimental lattice now; so far we first did the double cube. The double cube has more symmetries than the other cases, so it's somehow a little more convenient, and we
did N = 2L³, so the cubes are up to 40³, with two spins per unit, so up to 128,000 spins, and we want to say something about the ground state; it's not easy. One reason we can do it is that my collaborators, Yan Qi Qin and Zi Yang Meng, have access to the Tianhe supercomputer in Beijing, which I think is the world's fastest. They don't get the whole supercomputer, but they get many cores, thousands of cores running all the time, so it was possible. First we determined the critical coupling, and I will show you the curve-crossing methods for that in a moment; the critical coupling for the double cube turns out to be approximately 0.4837. I say approximately, and maybe we have an error bar too, but it's pretty much there, or the error is less than one in that digit; actually it's much better, something like 0.48370. Then we compute the sublattice magnetization as a function of 1/L², with the temperature again tied to the lattice size, so this is completely systematic and you expect it to go to the correct result. I plot it against 1/L² because we expect it to be linear in 1/L² in 3D, and you can see that it seems to be. Again I have fitted some polynomials (or rather Yan Qi Qin has) for the different points, and you see we are getting pretty close to the critical point. These magnetization curves always tend to be pretty steep close to the transition, they normally look something like that, as you may remember from the actual Ising data I showed before, so even very close, the values can still be relatively large. But I think you can already see the linearity in this data; all of these are actually pretty close to the critical point. If I had shown something further away you would see much clearer linearity, and you see that this curve is more linear than that one, so the corrections beyond the leading ones are becoming larger as we go
closer to the critical point. And you can really question us here: do we really believe that these points are correctly extrapolated? So I want to show how we try to make sure of that. We do polynomial fits, but we try different orders of the polynomial and different subsets of the data, starting from some smallest size and going up to some size, not necessarily even the biggest, to see systematically how sensitive everything is to these choices.

So when is a fit good? I think you all know what chi-square fitting is: you have your data and your fitted curve, you sum up the squared differences between the two, normalized by the error bars, and that's chi-square; then you divide by the number of degrees of freedom, which is the number of data points minus the number of parameters of the polynomial. In the limit of a lot of data, if the number of sizes is large, you expect chi-square per degree of freedom to be very close to one, and you can look up the chi-square distribution and its properties: the standard deviation of chi-square per degree of freedom is the square root of two divided by the number of degrees of freedom. We characterize a fit as good if chi-square minus one is less than three of those standard deviations. You may say it should be two, or one, but when it comes close it almost doesn't matter anymore; those would at least be reasonable fits. Then what we do is: for a given maximum size L, we exclude smaller sizes until we can satisfy this criterion. The point is that the smaller sizes have stronger corrections, so they are less well described by whatever fitting function we use; if we use a low-order polynomial and the largest sizes, we may have to exclude many small system sizes.

Then we systematically see what's going on. Here I don't show the smallest size included, but I show, as a function of the largest size included, data for different g-values and different polynomial orders. If we are relatively far from the critical point (this is still not that far away), these extrapolations are very stable: this is almost a perfect straight line, and polynomials of order 3, 4, 5 agree perfectly with each other. There are some error bars, of course; when we do the extrapolation we also do error propagation and extract the error of the extrapolated value. As we go closer to the critical point, things become at least a little worse, with some weird dependence on the polynomial order, and sometimes the error bars are large; but if we go to big enough system sizes and high enough polynomial order, it converges quite nicely, so that's under good control. But this is still not extremely close to the critical point. If we go even closer, to the three g-values closest to the critical point that we use, then things change a lot, and keep in mind that all these fits are statistically okay. So even if somebody just stops there and says "look, this looks great, chi-square is good, everything is hunky-dory", and it even looks pretty flat here, but then it jumps up and becomes flat there; well, you could ask, if we go further, does it jump again? We have actually extrapolated these in two different ways. I didn't want to show too many plots, but one can analyze a slightly different quantity, and when you do something similar the two disagree here but eventually become the same, so we think that these are really, truly saturated, and we can trust them. Then, in the end, looking at how these things depend on the polynomial order, we concluded which order to use.
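This acceptance criterion and size-exclusion loop can be sketched as follows; this is an illustrative reimplementation on made-up data, not the actual analysis code:

```python
import numpy as np

def chi2_per_dof(x, y, sigma, order):
    # Weighted polynomial fit; returns chi^2 per degree of freedom.
    coef = np.polyfit(x, y, order, w=1.0 / sigma)
    resid = (y - np.polyval(coef, x)) / sigma
    ndof = len(x) - (order + 1)
    return np.sum(resid**2) / ndof, ndof

def smallest_acceptable_size(L, y, sigma, order):
    # Drop the smallest sizes until chi^2/dof is within 3 standard
    # deviations, sqrt(2/ndof), of its expected value 1.
    for start in range(len(L) - (order + 2)):
        x = 1.0 / L[start:] ** 2          # leading 1/L^2 form from the text
        chi2, ndof = chi2_per_dof(x, y[start:], sigma[start:], order)
        if abs(chi2 - 1.0) < 3.0 * np.sqrt(2.0 / ndof):
            return L[start], chi2
    return None, None

# Synthetic "magnetization" data: linear in 1/L^2 plus a strong higher-order
# correction that spoils the small sizes (all values made up).
rng = np.random.default_rng(1)
L = np.array([4, 6, 8, 10, 12, 16, 20, 24, 32, 40], dtype=float)
x = 1.0 / L**2
sigma = np.full(L.size, 1e-3)
y = 0.3 + 0.5 * x + 20.0 * x**2 + rng.normal(0.0, sigma)

L_min, chi2 = smallest_acceptable_size(L, y, sigma, order=1)
print(L_min, chi2)
```

With the deliberately strong curvature term, the linear fit over all sizes fails the criterion, and sizes are discarded until only the regime where the leading 1/L² form holds remains.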
Polynomial order four is what we settled on, and of course we go to the largest system sizes we can, so we are pretty confident that all of this is okay. And I want to stress that if you really want to do accurate work, you have to do things like this; actually, numerics is almost useless if you don't do accurate work. If you have a well-defined model, the goal should be to find the properties of that model. It may be a toy model, as some people may say, but toy models are useful reference points, for experiments, for theory, for whatever. If you have a model, and you have a method which can in principle do good calculations for it, then if you do things right you may get results that are truly correct and can serve as a reference point; but if it's wrong, it's worthless. That's like a map which is wrong: it leads you in the wrong direction. And worse for yourself, if it's an important model, next year or next month somebody else will certainly do a calculation and show that you are wrong. It pays to be accurate. So we would like to go as close to the critical point as we can, and you have a very good point: you can ask why we don't go further. The reason is that we simply don't trust our results if we go further. The critical point itself is extracted in a completely different way, which I will discuss soon if I have time; it's not based on this at all. Then the question is only how close to the critical point we can trust our results, because things get harder and harder as you approach it, and even tying T to L we can only do up to L = 40; actually we did up to 48, so I guess I was even wrong on the previous slide, and we did even more spins than I said. What is sufficient, if we really want to study the critical magnetization (maybe that's what you mean), we can see on
the next slide; maybe the answer to your question will be there. Okay, so the goal was to see if we can detect log corrections in this quantity. There are lots of predictions for log corrections in different universality classes, and what you expect is that the sublattice magnetization, as a function of the distance δ from the critical point, should be a power law times a log. The constant inside the log is actually not so important; asymptotically it's unimportant. This β is mean-field, so it's one half, and then there's the exponent on the log, which is also important and has been predicted from perturbative RG calculations and so on, and it's believed that it should be the value shown. So if we just assume this form and play with the constants, we can see what we get. We also did the fit neglecting the log, a pure square-root fit; that's the green line. Indeed, close to the critical point you can of course make such a fit work, but you see that it starts to deviate out here. If we use the log form, it works much better. Down here it looks like it doesn't work so well, but there are error bars, of the order of the symbol size in the plot, so this is not a statistically significant deviation. In fact, I think this is not even our latest graph, because we improved the data a little after we submitted the paper and it now looks a bit better, but I could not find that figure last night when I was making the slide, so I just used the one from the arXiv paper. You can see that the log improves things out here quite a lot, so we believe you can see the logs, but you also see that it's not easy. If you have an experiment and you want to do this, it's very important that g_c is determined accurately, because if you change g_c, this curve down here changes a lot.
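A minimal sketch of fitting the log-corrected form m_s = A δ^(1/2) (ln(1/δ) + c)^bhat, on synthetic data with known parameters; all numbers are invented, and only the functional form is taken from the discussion:

```python
import numpy as np
from scipy.optimize import curve_fit

# Multiplicative-log form with mean-field beta = 1/2:
# m_s(delta) = A * sqrt(delta) * (ln(1/delta) + c)^bhat
def m_log(delta, A, c, bhat):
    return A * np.sqrt(delta) * (np.log(1.0 / delta) + c) ** bhat

delta = np.geomspace(1e-4, 1e-1, 20)   # distances from the critical point
truth = (0.8, 1.5, 0.4)                # made-up A, c, bhat
m = m_log(delta, *truth)

popt, _ = curve_fit(m_log, delta, m, p0=(1.0, 1.0, 0.5))
print(popt)  # on clean synthetic data the fit recovers A, c, bhat
```

With real data one would also propagate the error bars into the fit, and, as stressed above, the result is sensitive to how accurately g_c is known.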
So: I just showed our g_c to three decimal places, but we actually have it to better than four decimal places. Let me not discuss here whether this is experimentally observable; we discuss that in the paper. This was just an illustration of the fact that, if you are careful, you can see even subtle things. And by the way, the data is even good enough that we can keep this log exponent as a free parameter, and we get the value to within 3% or so, in a very stable way; it's discussed in the paper if you are interested.

So now let me come to how to analyze critical points: finite-size scaling and the extraction of critical points. We all know that the correlation length diverges at the critical point, ξ ~ |δ|^(−ν), where δ is, for a thermal phase transition, the temperature difference from T_c, for example, and for a quantum phase transition some distance from a critical coupling. I cannot cover all the background of finite-size scaling theory, but let me just present the form which everybody believes is correct. Suppose you have some quantity, for example the magnetization or the susceptibility, which has singular, power-law behavior at the critical point. If you put it on a finite lattice, then you expect a particular form: not just some general function of δ and L, but a specific form, A(δ, L) = L^(κ/ν) f(δ L^(1/ν)), where κ is the same exponent that governs the bulk singularity, ν is the correlation-length exponent, and f is a scaling function which is regular as δ goes to zero. If you set δ = 0, you're at the critical point and you just get the power law in the system size; away from δ = 0, the scaling function is non-singular close to δ = 0, so it can be Taylor expanded, and so on. This came out a long time ago, originally from
experiments: people noticed it initially, well, not in the finite-size scaling form, but one can of course write down similar scaling in the thermodynamic limit, and that's what people did. Then, when numerical simulations came along, people started to look at finite-size scaling, and initially this was called a hypothesis, because people saw that it worked; later it was proven more formally by the renormalization group and so on. In a moment I will show this form in a bit more detail, but this is the most important part of it. One way it can be used is in so-called data collapse; let me show what that is. Here I show data, again for the Ising model: the susceptibility, which is essentially the magnetization fluctuation. You see that as I increase the system size, a sharp peak develops; this is on a log scale, so this peak really grows fast. Now what you do is move the L dependence to the left-hand side: I multiply everything by L^(−κ/ν), and I'm left with a function of δL^(1/ν), which I take as my argument. For the susceptibility I should multiply by L^(−γ/ν); κ here I use as a generic exponent, and γ is the exponent for the susceptibility, which is 7/4 in the Ising model. On the horizontal axis I plot the argument; I call it δ here, though on the slide I call it t, and unfortunately I also wrote it as an absolute value, but it's just t − T_c, no absolute value. So the scaled quantity is a function of this argument, and if we plot it like this, all data should collapse onto the same curve, and that curve is then the scaling function f. This should hold only as L goes to infinity, so you expect some corrections to it, and you can clearly see that for small sizes it doesn't work that well, but for the bigger sizes it works really well.
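The collapse transformation just described can be sketched as follows, using the exactly known 2D Ising T_c and exponents (γ = 7/4, ν = 1) and a stand-in scaling function, since the true f is not known in closed form; the χ data here is synthetic by construction:

```python
import numpy as np

# Data-collapse transformation for the 2D Ising susceptibility:
# chi(T, L) * L^(-gamma/nu) plotted against (T - Tc) * L^(1/nu).
gamma_over_nu, nu = 7.0 / 4.0, 1.0
Tc = 2.0 / np.log(1.0 + np.sqrt(2.0))   # exact 2D Ising critical temperature

def scaling_function(x):
    # Stand-in for the unknown universal function f (illustration only).
    return 1.0 / (1.0 + x**2)

def chi(T, L):
    # Synthetic finite-size susceptibility obeying the scaling form exactly.
    return L**gamma_over_nu * scaling_function((T - Tc) * L**(1.0 / nu))

collapsed = {}
for L in (16, 32, 64):
    T = np.linspace(Tc - 0.2, Tc + 0.2, 9)
    x = (T - Tc) * L**(1.0 / nu)            # scaled distance from Tc
    y = chi(T, L) * L**(-gamma_over_nu)     # scaled susceptibility
    collapsed[L] = (x, y)
# By construction, all sizes now trace out the same curve f(x).
```

With real data the collapse is only approximate at small L, which is exactly the correction effect mentioned above.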
Here I know all the exponents and T_c, so I can just do it and check. In general, people often use this to extract the critical point and the exponents, treating them as fitting parameters to be optimized: you adjust them until you get the best data collapse. That's what people often do, but I have started to dislike it a little, and I think I'm not the only one, because you have to make a lot of choices: what window of points do you use, do you throw away some small sizes, and so on. It's a little hard to be completely systematic and unbiased. That's where the crossing-point analysis comes in. Again, it's a well-known method in principle, but we have made some improvements which I think make it even more versatile and easier to use. This is from a manuscript, done with Hui Shao (actually I put her name in the wrong order on the slide; Hui is the first name and Shao the last), which will be posted and published pretty soon. Our aim was a completely systematic and unbiased method to analyze critical points; I don't know how well we succeeded, but I think pretty well. Now, the scaling function is actually a little more complicated, because the form I showed before should be valid only as L goes to infinity: there are also what are called irrelevant variables, arguments of the scaling function that vanish as L goes to infinity, leaving just a function of the one variable. This is something you learn in RG: we can tune the distance to the critical point, but if the model itself is not at the RG fixed point, there are so-called irrelevant fields, and the arguments depending on them decay away as the system size grows. So this is the actual form, and we just
for the analysis we are doing, and just to keep things looking neat, I will keep only one of these irrelevant fields: the one with the smallest exponent, which is the most important one. This is what we really expect. As I mentioned, the scaling function can be Taylor expanded, so we do that, keeping the leading terms coming from the two arguments; of course there are all kinds of terms beyond those. What one does in a crossing-point analysis is to consider two system sizes, call them L1 and L2, with some specific relation between them. The most common choice, though it's not necessary, is to take the larger one to be a multiple of the smaller one, for example L2 = 2 L1. Often it happens that if you plot the quantity, from your actual numerical data, as a function of the distance from the critical point, or really just as a function of your control parameter, for those two sizes, the curves cross at some point. So we study the points where two such curves cross each other. First I will analyze this using the Taylor-expanded form to derive some results, and then we can check against data. If you take the Taylor-expanded form and set A(δ, L1) = A(δ, L2), you find the crossing point, which I call δ*. The ratio of the two system sizes, normally two, is r here, and it appears as a factor; there are many other terms after these, but these are the dominant ones. But now you see that there's a special case: if κ, the exponent that governs the overall L dependence, is zero, then this term vanishes, so if I find a dimensionless quantity, one which doesn't grow or shrink with system size, then this point converges faster. Okay, this
is δ*, but we can also write it as a crossing coupling as a function of size minus the true critical coupling; I should really call it g*. That difference converges to zero, meaning the crossing point goes to the critical coupling at a pretty fast rate, L^(−(1/ν + ω)), in the dimensionless case, whereas in the generic case, where you have a nonzero κ, the other, slower term is the leading one. So normally you want to find some dimensionless quantity to work with. If you know the exponent, you can always multiply out the leading power, as we did on the previous slide, and that is then a dimensionless quantity. Actually, if I go back to the data, you can see that here you don't see a crossing point clearly (I haven't actually looked at it in detail), but once I have multiplied out the power, the data do cross, so those are the kinds of points we can investigate. That means you have to know what the exponents κ and ν are; if you don't know them, you can use quantities that are known to have scaling dimension zero, for example the Binder ratio, which I will mention in a moment. One can always find some quantity with κ = 0. We can also look at the value of the quantity at the crossing point; in some cases that is interesting and important too, and if κ is zero, it converges to the value at the critical point. So what one can do, and this is often done, is to extract the critical point from a series of crossing points, extrapolating them, and one can do a similar thing with the values. In principle it looks like one should be able to extract the exponents from this too, because you can fit your data; let me draw a graph. You have your crossing point, call it g*(L), and you plot it, for example, as a function of 1/L; it may look something like that, with some error bars as well.
You expect, then (by the way, I'm always sticking to this side, I never go to the other side, sorry), you expect this to converge as a power law to some constant value, which is the final g_c you're looking for. So it looks like you should be able to get the exponents out of this as well, and in principle you can, but in practice it's much easier to get the extrapolated value than the correct exponent: the effective exponent tends to change somewhat as you go to bigger system sizes, because we have left out a lot of other corrections, so what you get is something which effectively accounts for the higher corrections too, and the exponent is not quite right. So this is not so good for getting exponents, but it's very good for getting the critical point, as I will show you. But then we did something which I think is new here, namely for the exponent ν, which is the more interesting of them: we work directly with the crossing points, so you don't have to get it from that fit. Well, people have done something like this before too, but we do it slightly differently. So let's now assume that this is a dimensionless quantity, κ = 0, and Taylor expand to slightly higher order. Now I want to take the slope: I take the derivative of this quantity, which eventually I will compute in simulations, but for now it's formal, with respect to δ. And by the way, that's the same as the derivative with respect to g, my coupling, because δ is just g − g_c; if you actually calculate it in a simulation, you have it as a function of g, and then you would write g there, of course. This is what you get when you take the derivative. Now take the log of this derivative, and you see that you get a constant, then (1/ν) log L, and then some other powers of L. So now you see that, in principle, you
can extract 1/ν from here, and this is actually what people do a lot: if you have found the critical point, for example by the extrapolation I just described, then you calculate the slope of your quantity at the critical point. Sometimes one can compute this derivative directly in the simulation, or you can do something I will talk about in a moment; in any case you can extract it. Then you plot that log-slope as a function of log L, and the slope of that curve (you just have some points, but you can do curve fitting) should asymptotically be 1/ν. But you have to go to large L because of the correction term, and again you have to think about which sizes to include, and all those things. So we wanted something a bit simpler: instead of the derivative at the critical point, we use the derivative at the crossing points we derived before. If I take my expression for the derivative and insert the expression derived before for the crossing point, I get the slope at the crossing point. And now the point is that I have two curves crossing each other, and those two curves have different slopes, because they come from different L; we have L1 and L2. You can use that to your advantage: take the difference of the logs of the two slopes, and that is just (1/ν) times log r, with r a constant, r = 2 for example, plus a correction. So you get something very similar to before: 1/ν on this axis, some points, and you fit a power-law correction. That's much easier than analyzing the log-slope-versus-log-L behavior; that's the point. And a big advantage is that you don't even need to extract the critical point first; the method somehow doesn't depend on it. Of course you get the critical point as well, since you have the data, but otherwise you could still worry about the effect of g_c not being completely well determined, since it has some error bar. So I think this is much easier.
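The slope-difference estimator can be sketched as follows; the slopes here are generated from an assumed form with one scaling correction (all exponent values invented), just to show how the estimate approaches 1/ν with a simple power-law correction:

```python
import numpy as np

# Slope-difference estimator: if s(L) is the slope of a dimensionless
# quantity at the (L, 2L) crossing point, then
#   ln[s(2L) / s(L)] / ln 2  ->  1/nu  as L grows.
nu_true, omega = 0.7, 0.8          # made-up exponents for the illustration

def slope(L, a=1.0, b=0.5):
    # Synthetic slope: leading power law times one scaling correction.
    return a * L ** (1.0 / nu_true) * (1.0 + b * L ** (-omega))

sizes = np.array([8, 16, 32, 64, 128])
est = np.log(slope(2 * sizes) / slope(sizes)) / np.log(2.0)
print(est)  # drifts toward 1/nu_true = 1.4286 as L increases
```

Fitting a power-law correction to these estimates as a function of 1/L then gives the extrapolated 1/ν, exactly in the spirit of the crossing-point extrapolation for g_c.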
So now let's look at some data. What we do is compute (1/ν)*, which eventually converges to 1/ν, and the quantity we often use is the so-called Binder cumulant: you take the fourth power of the order parameter, divide by the square of the second power, and do some subtractions and a multiplication by one half. This quantity has a neat property: you can show quite easily that it goes to one in the ordered phase. These are the numbers you should put in for the Ising model; for other symmetries you use other numbers, but they are easy to work out. So it becomes a step function in the thermodynamic limit, and it has a crossing point. It's a dimensionless quantity, because numerator and denominator have the same scaling dimension, and you can see that indeed there are crossing points, and asymptotically these crossing points move toward the critical temperature. They get quite close, because we have already derived how the crossing point moves; well, it depends on the lattice sizes. And that's the whole point: what many people actually do is simulate many systems, see that the curves appear to cross in one point, take the two biggest systems, and be happy with that. You can do that, but then you still have a small error. What you should instead do is calculations very, very close to the crossing point: something rough first, to get an idea of where it is, then many points in that region, and then interpolation. So let me show you: here is something very close to the theoretical, or actually the exactly known, T_c, and the Binder cumulant for three different system sizes. You see 16 and 32 cross each other here, and then 32 and 64 cross each other there.
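A minimal sketch of the Binder cumulant with the Ising normalization (the constants here are the "numbers to put in" for the Ising case mentioned above), checked against its two limits:

```python
import numpy as np

def binder_cumulant(m):
    # U = (3 - <m^4>/<m^2>^2) / 2 for a scalar (Ising) order parameter:
    # U -> 1 deep in the ordered phase, U -> 0 for Gaussian fluctuations.
    m = np.asarray(m, dtype=float)
    r = np.mean(m**4) / np.mean(m**2) ** 2
    return 0.5 * (3.0 - r)

rng = np.random.default_rng(0)
m_ordered = rng.choice([-1.0, 1.0], size=100_000)   # m = +-m0: ratio -> 1
m_disordered = rng.normal(0.0, 1.0, size=100_000)   # Gaussian: ratio -> 3

U_ordered = binder_cumulant(m_ordered)
U_disordered = binder_cumulant(m_disordered)
print(U_ordered, U_disordered)
```

Because numerator and denominator fluctuate together, the statistical error of U benefits from cancellation, which is one reason it is a convenient quantity, as noted below.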
So the crossing point definitely moves; eventually it should move right there, horizontally and vertically, to that point. This is the kind of data we take: 20 to 30 points. And yes, that's a good question: here we use the Wolff cluster update that Werner Krauth talked about yesterday, but since we want extremely good data, we did up to hundreds of millions of samples. You don't have to do that, but I wanted to show something where there is basically no error left. The Binder cumulant actually does not have very large fluctuations: if you analyze the errors correctly, there is some error cancellation, because the numerator and the denominator have similar fluctuations, so the Binder cumulant is good in that sense. Okay, I'm formally out of time, but I know somebody went fifteen minutes over yesterday, so five minutes over should be okay. So what we do is: we take a lot of points, we fit polynomials for interpolating, and once you have the polynomial you can also take its derivative to get this exponent ν; but you have to be really careful with all the statistical errors. You don't see them here, but all the points have statistical errors, and we should use resampling to propagate them. I don't know if you are familiar with bootstrap sampling; if not, I cannot cover it in the lecture, but I can tell you later if you like. Basically you can think of it in the following way: you have some data with error bars on the points, and you can add Gaussian noise to the points, corresponding to the error bars, and just repeat these polynomial fits many, many times; then you get fluctuations in the crossing points, in the values, in the derivatives, in everything. You can do it with Gaussian noise in some cases; in other cases it's better to use bootstrap sampling, where the noise comes from the data itself.
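The Gaussian-noise resampling described here can be sketched as follows, on synthetic crossing data; in the real analysis the error bars come from the Monte Carlo data, and bootstrap sampling may replace the Gaussian noise:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(42)

# Error propagation for a crossing point by Gaussian-noise resampling:
# refit both curves many times with noise matching the error bars and
# collect the distribution of crossing points (synthetic data throughout).
g = np.linspace(0.45, 0.55, 21)
sigma = 0.002
y1 = np.tanh((g - 0.5) * 20) + 1 + rng.normal(0, sigma, g.size)  # "size L"
y2 = np.tanh((g - 0.5) * 40) + 1 + rng.normal(0, sigma, g.size)  # "size 2L"

def crossing(a, b):
    # Cubic interpolation of each curve, root of the difference.
    p1, p2 = np.polyfit(g, a, 3), np.polyfit(g, b, 3)
    return brentq(lambda x: np.polyval(p1, x) - np.polyval(p2, x), g[0], g[-1])

samples = [crossing(y1 + rng.normal(0, sigma, g.size),
                    y2 + rng.normal(0, sigma, g.size)) for _ in range(500)]
g_star, g_err = np.mean(samples), np.std(samples)
print(g_star, g_err)
```

The mean and standard deviation over the resampled fits give the crossing point and its error bar, and the same machinery propagates errors to the slopes and to (1/ν)*.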
One thing which I will almost skip over now is the fact that, as you see, L = 32 for example appears in two crossing points, (16, 32) and (32, 64), so all these crossing points are not statistically independent; they are correlated, and you should actually use the covariance matrix when you compute chi-squared. I think people normally don't do that, but we have done it now, and it applies to any quantity you calculate: the slope, the crossing temperature, whatever. I don't have time to explain it in detail, but keep the covariance in mind. So let me show some data, and then I'm almost done. This is again data for the Ising model, now up to L = 128; you can go much larger, but one point here is to see what you can achieve even when the systems are not so big, if you just work carefully. These are the extracted crossing points, and this is the value of the Binder cumulant at the crossing point, similar to what I drew here. So we fit a power-law correction, but you see that the smallest sizes don't really fit, and you expect some higher-order corrections, so again we throw out points: in this case we always keep everything up to the biggest size, but we discard small sizes until the fit is good. Here we are a little more conservative: we demand that chi-squared be within two standard deviations of its expected value of one; then we are happy with the fit. In this case everything below L = 12 was discarded, so the actual fit starts somewhere here, and the correction exponent should be 1/ν + ω in this case, and just the actual ω in that case. This fit gave us a T_c which is good to almost seven decimal places; it agrees with the exact value to within that precision. If you exclude more sizes, the error bar goes up, but already here it's fine.
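The extrapolation step (fit the crossing points to a constant plus a power-law correction and read off the critical point) can be sketched like this; the functional form follows the discussion, while the data points and exponent values are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Extrapolating crossing points: fit T*(L) = Tc + a * L^(-b), read off Tc.
def form(L, Tc, a, b):
    return Tc + a * L ** (-b)

# Synthetic crossing-point data with known Tc (illustration only).
L = np.array([12, 16, 24, 32, 48, 64, 96, 128], dtype=float)
Tc_true = 2.269
y = form(L, Tc_true, 0.4, 1.8)
sigma = np.full(L.size, 1e-5)          # pretend error bars on the crossings

popt, pcov = curve_fit(form, L, y, p0=(2.2, 0.3, 1.5),
                       sigma=sigma, absolute_sigma=True)
Tc_fit, Tc_err = popt[0], np.sqrt(pcov[0, 0])
print(Tc_fit, Tc_err)
```

In the actual analysis one would also feed the covariance between correlated crossing points into the fit, as stressed above, rather than treating the points as independent.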
actually fine. So, again: L minimum is 12; that's when we start to satisfy this criterion. That's right. Actually, in the Ising model it's known what this exponent should be, seven fourths, 1.75, and if you exclude enough points that value does come out. But even when Tc is okay, omega doesn't come out quite correctly; again, it's much harder to get the shape of the curve than the extrapolated value. I think that's the point, and that's why we now also want to get nu using this kind of method, which I will show in a moment. Let me just point out one thing here: chi-squared is something like 1.6 per degree of freedom, which is an acceptable value in this case. If I take the points minus my fit, you can see that it doesn't quite look like random noise; there is some shape to it, but it's barely resolvable within the errors. That means there is a little bit of a higher-order correction left, which is apparently not affecting the extrapolated value, though it's probably affecting the exponent in the correction. If I exclude more points, this just starts to look like random noise. So our criterion means that we are barely at the point where the higher-order corrections become irrelevant to us, and it still works fine at that point; the fits are actually very stable. Okay, as the last thing, let me show you what we get for the exponent, which is often the more interesting thing you are after. If we do this slope analysis and plot as a function of 1/L, the first thing you notice is that the error bars are much bigger here; you can actually see them even before doing any subtraction. That's because the slope is much noisier than the value: when you do interpolation you get the values quite well, but you can imagine that the slope is much noisier. So the exponent we cannot get as accurately, but the point is that it should be unbiased, and you can really see how it flows as a function of size, and the
correct value of this exponent is 1, so you see this is consistent. Here we actually used all the points from L = 6, and still the chi-squared is good and it gives a very good value, 1.0001, with an error bar of 7 on the last digit. So as far as we can tell this is an unbiased method, and by analyzing other quantities at the crossing points one can also get other exponents. Okay, I had one more slide on the dangers of extrapolating close to the critical point, but I think maybe we want to have coffee now, so I'll just skip that; it's maybe enough for now. But maybe you have some questions? Somebody else? Yeah: for example, if you want to get the exponent eta, which controls the critical correlation functions, you can look at the magnetization squared itself. The magnetization squared should go like L to some power involving eta (plus or minus, I forget now). So if you use these crossing points from the Binder cumulant and evaluate M squared at those crossing points, you can analyze the L dependence there and extract the exponent; again, in a somewhat more sophisticated way, like we use here, you can combine the values for the two sizes and take logs and so on. Anything else? Yeah, exactly, that's the universal value of the Binder cumulant, which is actually not known exactly. Tc comes out exactly from the Onsager solution, but nobody knows the exact value of this number. There are people who have extracted it from transfer-matrix calculations, for systems actually not so large: this value is from Henk Blöte's paper from 1996, where he did transfer-matrix calculations up to L = 17. With that method you can get the values for those sizes basically to machine precision, and then you can try to extrapolate. I think eventually, with this data, we can beat him a little bit in precision. Oh yeah, that's correct, so it's no longer believed that it's completely universal; I mean, it's
universal for a given shape, but if you take a rectangular lattice it has some other value, or if you make the model anisotropic so that it's effectively not a square anymore. So it depends on boundary conditions and can depend on other things. I think there was another question at some point? Yeah, good point; the question is what to compare with. I actually haven't found many papers where somebody tests methods very thoroughly on the 2D Ising model, for some reason. To me it seems like a very natural thing to do, but in general I think people think: the 2D Ising model, why should you study that? Even people I told about the 2D Ising work asked why. But okay, it's a test, to test the method, right? So, honestly, I haven't compared, for example, our value of nu with others'. You could in principle get it by first extracting the critical point and then analyzing the data in the usual way, but in some sense I don't even want to do it, because I just know that what we are doing is more systematic. I don't care so much about whose value is better in the end, because I want something where I can really say that at every step I took into account all sources of statistical fluctuations and did everything completely correctly. If you do the usual kind of analysis, well, let me not draw anything because then it takes even longer, but you have a set of points, you have to fit some curves, you decide which points to include and which to exclude, and you can fiddle around until you get something you are happy with. But is that statistically correct? Actually, one big reason we have been doing this now is that we have some very challenging quantum phase transitions, which I wish I had time to talk about here; if I had chosen another topic for my last lecture I could have, but I wanted to talk about something else there. Anyway, very challenging quantum
phase transitions where people argue: is it a first-order transition, is it a continuous transition, and what is the value of nu? Many people have studied this, including myself, and the value of nu seems to have been changing over time, and clearly there was some size dependence, so we decided to try to do something where, in the end, there is no question whether what you did was correct. This is what we came up with. Right, so, topological phases: if you do Monte Carlo there's not a lot you can do, I think, because normally the models people are interested in have sign problems, but people do a lot with DMRG and things like that. One of the things I had wanted to show on the last slide, and maybe I can at least quickly flash it, is that people often try to extrapolate order parameters. Okay, spin liquids; I don't know if you consider that a topological phase, or if that's the kind of topological phase you had in mind, but people are interested in spin liquids, and then they want to find the boundary between, let's say, an antiferromagnet and the spin liquid. So they do DMRG, they extrapolate the order parameter for the antiferromagnet, and so on, but that's exactly where the danger is, because of the sensitivity of the fits close to the critical point that I talked about. Here is just an example; let's not even care what this is, but it's an order parameter as a function of 1/L. If you only have small sizes with this data and fit a polynomial, you would extrapolate to a negative value, and then some people say: oh, that means it's zero. I would say that if it extrapolates to a negative value, something is completely wrong in what you are doing, so you shouldn't even do it. But anyway, whatever you do, if you have only small sizes you cannot extrapolate this order parameter, because you see what happens eventually. In this case, again,
this is a valence-bond solid, so eventually it crosses over to an exponential convergence, and it's actually very hard to see that; you really need to reach the crossover length scale where the behavior changes from near-critical to long-range ordered. So that was the last thing I wanted to mention. I'll post this slide. I don't know if I really answered your question, but I took the opportunity to show my last slide. You can ask more later. Time for coffee, I guess.
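The resampling of the crossing-point fits described earlier can be sketched in a few lines. This is a minimal illustration, not the actual analysis code: the Binder-cumulant curves below are synthetic straight lines with a known crossing at 0.5, and the function names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def crossing(t, u1, u2, deg=2):
    """Crossing point of two interpolated curves: fit a polynomial to the
    difference and take the root nearest the center of the fit window."""
    coef = np.polyfit(t, u1 - u2, deg)
    roots = np.roots(coef)
    roots = roots[np.isreal(roots)].real
    return roots[np.argmin(np.abs(roots - t.mean()))]

def crossing_error(t, u1, e1, u2, e2, n_resample=500):
    """Repeat the fit with Gaussian noise matching the error bars."""
    samples = [crossing(t, u1 + rng.normal(0.0, e1), u2 + rng.normal(0.0, e2))
               for _ in range(n_resample)]
    return np.mean(samples), np.std(samples)

# synthetic Binder cumulants for sizes L and 2L on a coupling grid near Tc = 0.5
t = np.linspace(0.4, 0.6, 21)
e = np.full_like(t, 1e-3)                          # statistical error bars
u_L = 0.8 - 1.0 * (t - 0.5) + rng.normal(0.0, e)   # smaller size: gentler slope
u_2L = 0.8 - 2.0 * (t - 0.5) + rng.normal(0.0, e)  # larger size: steeper slope

tc, err = crossing_error(t, u_L, e, u_2L, e)
print(f"crossing point = {tc:.4f} +/- {err:.4f}")
```

Replacing the added Gaussian noise by bootstrap resampling of the underlying Monte Carlo bins gives the variant mentioned in the lecture where the noise comes from the data itself.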
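The covariance-aware power-law fit of the crossing-point drift can be sketched as follows, assuming SciPy is available (`scipy.optimize.curve_fit` accepts a two-dimensional `sigma`, which it interprets as the covariance matrix of the data). The data, the correlation structure, and the parameter values here are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def drift(L, tc, a, b):
    # crossing-point drift T*(L) = Tc + a * L^(-b),
    # with b playing the role of 1/nu + omega
    return tc + a * L**(-b)

L = np.array([8.0, 16.0, 32.0, 64.0, 128.0])
y_true = drift(L, 0.5, 0.3, 2.0)

# invented covariance: neighboring crossing points share one system size,
# so they are positively correlated
sig = 1e-4
C = sig**2 * (np.eye(5) + 0.5 * np.eye(5, k=1) + 0.5 * np.eye(5, k=-1))
rng = np.random.default_rng(0)
y = rng.multivariate_normal(y_true, C)

# a 2-D sigma makes curve_fit minimize the generalized chi-squared r^T C^-1 r
popt, pcov = curve_fit(drift, L, y, p0=(0.4, 0.1, 1.5), sigma=C,
                       absolute_sigma=True)
print("Tc =", popt[0], "+/-", np.sqrt(pcov[0, 0]))
```

Ignoring the off-diagonal elements here (passing only the diagonal errors) would bias the quoted error bar on Tc, which is the point made in the lecture about correlated crossing points.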
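Combining the values at two sizes and taking logs, as mentioned in the answer about extracting eta from M squared at the crossing points, amounts to a simple log-ratio estimator. A toy example, assuming a pure power law and the common convention that the squared magnetization at criticality decays as L^(-(d-2+eta)), which in 2D is L^(-eta) with eta = 1/4 for the Ising model:

```python
import numpy as np

# assume m^2 evaluated at the (L, 2L) crossing points follows a pure power law
# m^2 ~ L^(-x); for the 2D Ising model x = d - 2 + eta = eta = 1/4
L = np.array([8.0, 16.0, 32.0, 64.0])
m2 = 1.3 * L**(-0.25)

# log-ratio estimator from consecutive size pairs (L, 2L)
x_eff = np.log(m2[:-1] / m2[1:]) / np.log(2.0)
print(x_eff)   # -> [0.25 0.25 0.25]
```

With real data the effective exponent drifts with L because of scaling corrections, and it is the infinite-size extrapolation of this sequence that estimates the exponent.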
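The extrapolation danger in that last answer can be made concrete with invented numbers: an order parameter with a small long-range-ordered value plus a near-critical finite-size correction looks like a pure power law at small sizes, and only the effective decay exponent at large sizes reveals that the order survives. Everything below is a hypothetical model, not data from the study.

```python
import numpy as np

# invented model: small long-range-ordered value plus a near-critical correction
m0_sq = 0.04**2                       # true order parameter squared
L = np.array([4.0, 8.0, 16.0, 32.0, 64.0, 128.0, 256.0, 512.0])
m2 = m0_sq + 0.5 * L**(-1.25)         # looks like a pure power law at small L

# effective decay exponent from consecutive size pairs (L, 2L)
slope = -np.log(m2[1:] / m2[:-1]) / np.log(2.0)
print(slope)   # drifts from ~1.2 (looks critical) toward 0 (order survives)
```

An extrapolation using only the small-L points would conclude the order parameter vanishes; the crossover to the finite value m0_sq only becomes visible once L passes the crossover length scale.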