Okay, let's start. Welcome back. In this last lecture I will talk about something slightly different but still related. So far we have talked about simulations in equilibrium, either thermal equilibrium or the ground state of a system. Now I want to talk about how to describe out-of-equilibrium situations, in particular with quantum Monte Carlo simulations, and I want to illustrate all this with some classical examples as well. So we will go through some out-of-equilibrium scaling approaches that I want to talk about too. Okay, you have learned everything about Monte Carlo simulations here from Werner Krauth, so I don't need to remind you what a Monte Carlo simulation is. I just show this slide as a build-up to the following ones. This is an animation of a simulation, not of the hard spheres that Werner discussed but of so-called sticky spheres, which have a small attractive shell. Here I just run a very simple Metropolis simulation: after each Monte Carlo sweep, where the particles are moved at random and the moves are accepted with the Metropolis probability, I make a frame, and then I show the frames as a movie. This is at some high temperature where the system is in a gas phase. Okay, now let's lower the temperature and see what happens. The simulation has already been going on for some time, and I show here the same simulation on two different time scales; on the right, time goes 100 times faster than on the left. The point I want to make with this is that it now takes a long time to reach equilibrium. Of course I could use some cluster algorithm of the kind Werner talked about, which is really nice, but here I just want to make the point about what happens with local updates, which is often what nature does. Nature has more local kinds of updates; nature does molecular dynamics, I guess.
But anyway, it's not always easy to reach equilibrium. And why do I say that this is not equilibrium? Because I think we have reached the liquid phase here and we have formed some droplets, but we expect that equilibrium should be just one droplet, which minimizes the interfaces. Okay, so what we can do then is something that I think Werner didn't talk about, although he had it in his original abstract, namely simulated annealing; so I can talk about it quickly instead. Simulated annealing is basically a Monte Carlo simulation in which you change the temperature during the simulation. The name comes from the fact that this is a computational analog of the physical annealing done in metallurgy, for example, where you heat something and then cool it slowly to form a more perfect crystal. Doing this in a simulation has a similar effect if you want to reach equilibrium without defects: you change the temperature during the simulation. This is illustrated here; now you see that the temperature is changing between frames. I'm still in the gas phase, but you start to see liquid forming, and now the system has a very easy time forming a single droplet, because it had enough fluctuations at the higher temperatures, so that when it started to get into the liquid state it could form one single droplet. Okay, so simulated annealing is also used for optimization; I think many of you probably know that. Here, for example, I went with my simulated annealing all the way to zero temperature. If I do that slowly enough, I should reach the lowest possible energy state of the system, and you can view that as an optimization problem: optimizing the energy of the system. And indeed, simulated annealing is a very often used and successful optimization method.
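As a minimal sketch of such an annealing loop, here is a hypothetical Metropolis simulated-annealing run for 2D Ising spins (simpler to write down than the sticky spheres of the animation; all names, parameters, and the linear schedule are my own, not from the lecture):

```python
import numpy as np

def sweep(spins, T, rng):
    """One Metropolis sweep: one attempted flip per spin at temperature T."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # energy cost of flipping spin (i, j) on a periodic square lattice
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = -spins[i, j]

def anneal(L=8, T_hi=5.0, T_lo=0.1, n_steps=400, seed=1):
    """Simulated annealing: lower T linearly, one sweep per temperature."""
    rng = np.random.default_rng(seed)
    spins = rng.choice(np.array([-1, 1]), size=(L, L))
    for T in np.linspace(T_hi, T_lo, n_steps):
        sweep(spins, T, rng)
    return spins

spins = anneal()
energy = -(spins * np.roll(spins, 1, 0) + spins * np.roll(spins, 1, 1)).mean()
# a slow enough ramp ends near an ordered, low-energy state (ground state: -2)
```

The only difference from an ordinary Metropolis simulation is that T is a function of the sweep index; the slower the ramp through the transition, the closer the final energy gets to the ground-state value.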
So you can express many difficult optimization problems with many variables as statistical-mechanics problems, do Monte Carlo simulations on them, and reduce the temperature as a function of time; in that way you can reach the optimum, or at least get close to it. And of course, the slower you go, the closer you typically get to the best state. Okay, that's simulated annealing. Now, the question that people have started to ask a lot recently, well, already some time ago, but it has become very popular now, is whether we can do something similar with quantum mechanics, and then it's called quantum annealing. Again, in thermal annealing we change the temperature as a function of time, and we can do that in a physical system or in a simulation. In quantum annealing you can think of doing it in some physical system where you have some way of regulating the quantum fluctuations. You can start with some simple quantum system, and then, as a function of time, change the parameters in such a way that in the end you just get a classical system: you completely remove all the kinetic energy. Okay, so let's write down what I mean in Hamiltonian form. You have a Hamiltonian with two parts, H0 and H1, where s is your control parameter, and these are some non-commuting terms. Then I imagine that this parameter s depends on time, and of course it could depend on time in any number of ways, but here I will use the simplest case where it depends linearly on time, so the velocity is just one divided by the total time of the ramp. Then we know from the adiabatic theorem of quantum mechanics that if this process is very slow, you will always stay in the ground state. That means when you start you have H0, and if that's a simple quantum system you can prepare the system in its ground state. Then, if you go slowly enough to t_max,
Then you will be in the ground state of the system. Which is now just consisting of H1. So if H1 is some complicated classical potential. For example some hard optimization problem corresponds to some hard optimization problem. Then you have succeeded in finding the optimal solution by this quantum annealing. Okay so then the question is can quantum annealing be more efficient than thermal annealing. And then we are imagining that somebody is actually building a machine. Which is you can program and you can have some Hamiltonian in that machine. Where you can encode some optimization problem as H1. And then there is some suitable quantum fluctuation which is often called the driver. Which you can play with. And then the question is can that solve some optimization problems that are very hard to do classically. Including this simulation simulated annealing method would not be very efficient. So there are of course many such problems that can be solved. Only with great difficulty in times that scale exponentially bad with the number of degrees of freedom of the problem. So the question is if we can do something better with thermal annealing. So as you see here these ideas go back quite sometimes. There are many other important early papers. But now people are really exploring this seriously as a paradigm for quantum computing. And in fact you can already buy this machine if you have 10, 15 million dollars hanging around that you don't know what to do with. Then D-wave systems will be very happy to sell you one of their nice looking black boxes. And I think Google is also now going to build a machine. I don't know if they also want to sell it or just use it for their own optimization purposes. Probably something to do with commercials and ads on your web browser. But anyway what's claimed is that by D-wave is that they can already solve some hard optimization problems in these black boxes. 
And this has caught a lot of attention. I think about a year ago, or maybe a bit more, Time magazine had this cover about the D-Wave machine. I don't know if you can read it here, but it says it promises to solve some of humanity's most complex problems, and it says something about the price, 10 million dollars, and that nobody knows how it actually works. I should correct that: I think the statement should be that nobody knows if it actually works. The temperature quoted there is in Fahrenheit, I think; it's an American magazine. But it's cooled with liquid helium, I believe; it has to be very cold. It's based on qubits built from... what are these called again? SQUIDs, superconducting flux qubits; I forget the exact name now, but you know what it is, you can see it from the picture. But anyway, the question is whether this is really doing quantum annealing. People have been quite skeptical, but there's a lot of work going on; physicists have access to this machine and are investigating it heavily. So one question, which has become a big research topic, is what the D-Wave machine is actually doing, and I guess soon another topic will be what the Google machine is doing. And it's all very cool, of course. But another, more fundamental question is whether quantum annealing is really better than simulated annealing. If you could really build a D-Wave machine, or whatever, that is completely isolated from its environment, so that it really has coherent quantum dynamics, is it still a good idea to do this? That's more the question I want to address here. But let's see what it actually is that they claim to implement in this D-Wave machine. They have this chip, which had some fancy name that I also forget, but basically they claim to implement the Ising model with random couplings. So you have Ising spins, realized with the flux qubits that I showed you before.
The up and down spin states are encoded in those, and they can somehow control how the qubits are coupled to each other. In the current machine, and I think a new one is coming out soon, but in the recent one that I know, there are 512 qubits coupled in a geometry called the Chimera lattice. You see there are these 2-by-4 cells with all connections present within each cell, while the cells are connected to each other more sparsely. You also see some red dots, and I think you can imagine what a red dot means; this is taken from a recent paper by these authors, and those are the qubits that apparently don't work anymore. Actually, if you look at older papers you see fewer red dots; apparently they stop functioning one by one. But this is still enough to play around with, I guess. Okay, and you can actually go online and learn how to program the machine; I mean, it's really the real thing, you can program the quantum computer. And then, what's quantum mechanical here? This part is just classical: this is where you encode the problem that you would like to optimize or solve. And then the driver is the transverse field. So your Hamiltonian is a complicated Ising model, depending on what kind of couplings you program, plus a transverse field, which is again physically realized in the flux qubits in some way that I don't really know in detail. Then, as a function of time, and I don't think I should say what the time scales are, maybe milliseconds, maybe just microseconds, I'm not sure, they change the strength of this field. It's convenient to write the Hamiltonian with these factors s and 1 minus s; then you go continuously from H0 to H1. And in the ideal situation, in what people call adiabatic quantum computing, you would really like to stay in the ground state the whole time.
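Written out, the interpolation just described takes the schematic form below (my notation: the programmable couplings $J_{ij}$ and fields $h_i$ encode the problem, and $\Gamma$ sets the scale of the transverse-field driver; the machine's actual schedule functions may differ from a strictly linear ramp):

```latex
H(s) = (1-s)\,H_0 + s\,H_1 , \qquad [H_0, H_1] \neq 0 ,
\qquad s(t) = v\,t = \frac{t}{t_{\max}} , \quad t \in [0, t_{\max}] ,
```
```latex
H_0 = -\Gamma \sum_i \sigma^x_i , \qquad
H_1 = \sum_{\langle i,j \rangle} J_{ij}\,\sigma^z_i \sigma^z_j + \sum_i h_i\,\sigma^z_i .
```

At $s=0$ only the transverse field acts and the ground state is trivial (all spins polarized along $x$); at $s=1$ only the classical Ising cost function remains.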
But the D-Wave people realize that they are not really doing adiabatic quantum computing; they are not staying completely in the ground state. So instead they talk about quantum annealing, and more recently people have started to talk about "quantum-enhanced optimization," which I think means that it's less quantum mechanical than one would hope. Anyway, here I just want to talk about true quantum annealing, with no environment, and address the question of how good this can be in principle, if you really have an ideal situation. Or maybe in the end you don't want an ideal situation; maybe the noise can even help you. That's something people are exploring as well. [Question from the audience.] Well, I mean, that's what the company claims if you go to their website, but I think for any physicist who has looked at this, and probably even if you talk to the D-Wave people, it's not completely clear how quantum mechanical it is. There's clearly something quantum mechanical there, but what the coherence times are, and so on, is unclear. No, they are manipulating the qubits, right? These are the qubits. But what I'm going to do will not be manipulating qubits; I'm going to study this question from a model-theoretical, classical computational perspective, meaning using classical simulations to try to address the question. So maybe that was your question. So basically, all this has stimulated a lot of interest in studying the dynamics of the transverse-field Ising model, in particular versions with random and frustrated interactions. People have studied such models in the past as quantum spin glasses, and this is basically a model of a quantum spin glass if you program, or I should not say program anymore, if you just theoretically make these couplings frustrated, let's say plus or minus J on a nearest-neighbor cubic lattice.
That's a spin glass, and if you add the transverse field it's a quantum Ising spin glass. People have studied those a lot, but focusing on the equilibrium ground-state properties and so on, and now it's very interesting to study the quantum dynamics of these models. Okay, so we do this quantum evolution from, in some sense, a simple state, the ground state of the transverse field, with all spins pointing in one direction, to some state which is complicated, because it should be the ground state of this complicated interaction. Then, at least if the system is big, in some sense you would expect a quantum phase transition along the way, because the ground state changes from something trivial to something complex; to me that means there must be a quantum phase transition, and this is what people have been saying for a long time. So you expect a quantum phase transition at some critical value of this s. And of course, if we talk about a quantum phase transition, we imagine taking the limit of infinite system size, which at first sight doesn't seem relevant to these machines. But the thing is, people are interested in how the difficulty scales with the system size, or the problem size: if you solve a problem with 10 qubits and then with 100,000, how much longer does it take to solve the kind of problem you are interested in? Then you can, formally at least, ask about the limit of N going to infinity, and there you really run into the issue of a quantum phase transition. So in the clean case this is well understood. Again, if you have a transverse-field Ising model in d dimensions, we can do the standard mapping to a (d+1)-dimensional classical system and, for example, do simulations there; but even just formally, it maps onto an Ising model in one more dimension.
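The mapping mentioned here is the standard Suzuki-Trotter decomposition of the partition function; schematically, with $H = H_1 + H_0$ split into the diagonal Ising part and the transverse-field part,

```latex
Z = \operatorname{Tr} e^{-\beta H}
  = \lim_{M \to \infty} \operatorname{Tr}
    \big( e^{-\Delta\tau H_1}\, e^{-\Delta\tau H_0} \big)^{M} ,
\qquad \Delta\tau = \beta / M ,
```

so the $d$-dimensional quantum model becomes a classical Ising model on $M$ coupled copies of the lattice, i.e. in $d+1$ dimensions, with the extra (imaginary-time) dimension of extent $\beta$ and ferromagnetic couplings in the time direction generated by the transverse field.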
So we know that there's a phase transition in it. If we look again at what happens when s is zero or small, we have the ground state of the transverse field, and when we go to s equals one, the ground state is doubly degenerate: all up or all down. And in reality, if the system is big, we should get symmetry breaking: up or down. In 1D the critical point is known to be s equals one half, and in 2D it's around 0.25. Okay, so if we do adiabatic quantum computing, quantum annealing, very slowly, we want to stay in the ground state essentially all the time, and the bottleneck is actually passing through this critical point. And why is that? Because if you look at the excitation gap of the system, it's smallest at the critical point, and from basic considerations of quantum dynamics, if you want to stay in the ground state you have to go slower where the gap is smaller. I will talk a little more about that. So you can ask how long it takes as a function of the system size, for example for the ferromagnet, or in a more difficult situation such as a spin glass. So let's look at the prototypical single-spin problem for that, at least to get some feeling for what's going on. Take one single spin and put it in a magnetic field; this is your Hamiltonian with the Zeeman field, and then you also put it in a transverse field, which you can write with these ladder operators. You can easily solve this system, and this is the energy levels as a function of the field h, for some small value of the transverse field. There's a gap here, equal to two times epsilon, where epsilon is the transverse field; so if the transverse field is small, it looks like that. And now look at the eigenstates.
If the field is negative the ground state is down, if the field is positive it's up, and vice versa for the excited state; and of course in the region in between they mix. So if you go adiabatically, you want to go very slowly: if you start here, you want to stay on this branch. If you go too fast, you will have a lot of excitation up to the upper branch. No matter how slowly you go, you will always excite a little bit, but you want to excite as little as possible. Then, if you use some reasonable criterion for staying adiabatic, say that you want to be 99.9% in the ground state when you come out on the other side, you find that the time you have to take to go from here to here scales as 1 over delta squared, meaning the velocity of the change has to scale as delta squared. So if the gap is small, you have to go very slowly, and all this time comes just from passing the point where the gap is small; it's very easy to stay adiabatic where the gap is large, and as you approach the minimum you have to go slower and slower if you want to stay adiabatic. This little Landau-Zener problem has guided people's intuition a lot in this field, and it suggests that the gap is important. But of course, in a real situation with many spins, say a ferromagnet, the picture may look a bit similar, but you will have lots of states up there. So you may start to guess that it's not just the gap that matters, but actually the shape of the levels near the minimum. In the single-spin problem the shape is parabolic, but in general it may be something else; if you have a really large system approaching some phase transition, the shape could be different, I don't know how to draw it, and you could have a high density of states there, so there's a larger probability to excite something if you have more states. So what should you really expect at a quantum phase transition in a many-body system? Okay, it turns out that the gap still plays some role.
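This two-level sweep is easy to check numerically. Here is a minimal sketch (my own parameter choices, with H = h·σz + ε·σx swept through the avoided crossing, using the exact 2x2 propagator for each small time step):

```python
import numpy as np

SZ = np.diag([1.0, -1.0])
SX = np.array([[0.0, 1.0], [1.0, 0.0]])

def step(psi, h, eps, dt):
    """Apply the exact propagator exp(-i H dt) for H = h*SZ + eps*SX;
    H is traceless, so exp(-i H dt) = cos(E dt) I - i sin(E dt) H / E."""
    E = np.hypot(h, eps)
    Hpsi = h * (SZ @ psi) + eps * (SX @ psi)
    return np.cos(E * dt) * psi - 1j * np.sin(E * dt) * Hpsi / E

def sweep_excitation(eps, v, h0=5.0, dt=0.01):
    """Sweep h from -h0 to +h0 at velocity v, starting in the ground state;
    return the probability of ending in the excited state."""
    _, vecs = np.linalg.eigh(-h0 * SZ + eps * SX)
    psi = vecs[:, 0].astype(complex)          # ground state at h = -h0
    for h in np.arange(-h0, h0, v * dt):
        psi = step(psi, h, eps, dt)
    _, vecs = np.linalg.eigh(h0 * SZ + eps * SX)
    return 1.0 - abs(vecs[:, 0].conj() @ psi) ** 2

p_fast = sweep_excitation(eps=0.5, v=2.0)   # Landau-Zener: ~ exp(-pi eps^2 / v)
p_slow = sweep_excitation(eps=0.5, v=0.2)
```

Lowering the velocity by a factor of ten suppresses the final excitation probability roughly exponentially, which is the content of the adiabaticity criterion stated above: to keep the excitation fixed, the velocity must shrink like the square of the minimum gap, 2ε.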
So let's talk about the gap at the quantum phase transition, and in particular about the dynamic exponent. The dynamic exponent at the phase transition is the exponent that relates time scales and length scales. We have talked about continuous phase transitions now, so we know that the correlation length, the spatial length scale, diverges as a power law. The time scale, or you can call it the relaxation time, something like that, that goes with this length scale is a power of the length scale, and that power defines the dynamic exponent. In a finite system one can actually show, using these finite-size scaling techniques, that the gap is basically one over the length scale to the power z, that is, one over the time scale, which is natural: you just replace the correlation length by the system size. So the gap should go like L to the minus z, and if you have a d-dimensional system it is sometimes better to write it in terms of the number of spins. In any case, what we can guess from the Landau-Zener problem is that if the gap is very small you have a problem, and the gap apparently becomes very small as you increase the system size. But the Landau-Zener problem would still tell you that this is fine, because if the time is one over delta squared, then the time is just L to the 2z, which is still polynomial in the system size. If that held for some difficult optimization problem where the best known algorithms scale exponentially, it would be big news, because it would mean that you can actually solve it in polynomial, power-law time. But actually, what people have realized is that in some cases these transitions are of first order, for the kinds of problems that computer scientists have considered encoding in these Ising spins, and at a first-order transition the gap closes exponentially fast.
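In formulas, the finite-size scaling just described reads:

```latex
\xi \sim \delta^{-\nu}, \qquad \tau \sim \xi^{z}
\quad \Longrightarrow \quad
\Delta \sim \frac{1}{L^{z}} = N^{-z/d} \qquad (N = L^{d}),
```

so the naive Landau-Zener estimate of the annealing time at a continuous transition is $t \sim \Delta^{-2} \sim L^{2z}$, polynomial in the system size, while at a first-order transition $\Delta$ closes exponentially in $N$ and the same estimate becomes exponential.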
So that means the time also scales exponentially. This is an important issue that was pointed out by Peter Young and collaborators some years ago: if you have a first-order situation, it's bad news. But there is at least some hope that you can formulate at least some classes of problems in such a way that you avoid the first-order transition. So let's assume now that we have some continuous transition and ask: is this Landau-Zener picture really correct? According to Landau-Zener, the critical velocity should go as one over L to the 2z, which is the gap squared. Is that really true? Again, it may not be true, for the reasons I mentioned. It turns out that the solution to this problem has been out there for a long time, but initially, and maybe still, the people who work in this quantum adiabatic field have not been very aware of it. It's called Kibble-Zurek scaling. According to Kibble and Zurek, let me not discuss why now, but it has to do with exactly these things that I discussed, the critical velocity is actually not L to the minus 2z but L to the minus (z plus 1 over nu), and we know what nu is from the previous lecture; I mentioned it already here. This was derived in some form by Kibble and Zurek, initially for classical phase transitions, and my colleague at BU, Anatoli Polkovnikov, and some other people around the same time, came up with the generalization to the quantum case. It's actually not so difficult to derive this, but let me just state it as a result here. It turns out that the criterion is actually identical for quantum and classical phase transitions.
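For completeness, the Kibble-Zurek criterion follows from a freeze-out argument of exactly the kind alluded to above: the system stays adiabatic as long as its relaxation time is shorter than the time remaining before the transition is reached. With the distance to the critical point shrinking as $\delta(t) = v\,t$,

```latex
\tau(\delta) \sim \xi^{z} \sim \delta^{-\nu z},
\qquad \text{adiabatic while } \tau(\delta) \lesssim \delta / v ,
```

the system falls out of equilibrium at $\hat{\delta} \sim v^{1/(1+\nu z)}$, with a frozen correlation length $\hat{\xi} \sim \hat{\delta}^{-\nu} \sim v^{-\nu/(1+\nu z)}$. A finite system of size $L$ remains adiabatic through the transition when $\hat{\xi} \gtrsim L$, i.e.

```latex
v \lesssim v_{\mathrm{KZ}} \sim L^{-(z + 1/\nu)} .
```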
Okay, so we talked about finite-size scaling last lecture, and that's good, because now I need to talk about some more finite-size scaling, but now it's a generalized finite-size scaling. It contains what we discussed last lecture: if you have a quantity and you approach the critical point, there's an overall size dependence, and there's this argument that tells you how things scale as you move away from the critical point. But now there's another argument as well, and that argument is basically the velocity divided by the Kibble-Zurek velocity, which is exactly how v comes in: as another hypothesis. This has actually not really been proven in the sense that normal finite-size scaling has, but it's a very natural hypothesis to propose. And we can write it as a function of L, or, as you will see later, sometimes it's really better to write it as a function of N, and then I just define some dimensionality-renormalized exponents like that. But the point is that we have two arguments there now, and this is actually what we will be exploring to study some dynamics of classical as well as quantum systems.
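To make the two-argument scaling form concrete, here is a self-contained sketch of the data-collapse idea with synthetic data (the scaling function, sizes, velocities, and all names are my own stand-ins, not the actual simulation results): at the critical point the first argument is zero, and one adjusts z until the curves for different L fall on top of each other.

```python
import numpy as np

# 2D Ising values used in the lecture: nu = 1; the prefactor exponent for
# <m^2> is 2*beta/nu = 1/4.  Z_TRUE is the "unknown" we pretend to extract.
Z_TRUE, NU, TWO_BETA_NU = 2.2, 1.0, 0.25

def m2(L, v):
    """Synthetic <m^2>(L, v) obeying L**(-2b/nu) * f(v / v_KZ), with an
    assumed smooth scaling function f standing in for simulation data."""
    x = v * L ** (Z_TRUE + 1.0 / NU)      # x = v / v_KZ(L)
    return L ** (-TWO_BETA_NU) / np.sqrt(1.0 + x)

sizes = [16, 32, 64, 128]
vels = np.geomspace(1e-6, 1e-2, 40)
xref = np.geomspace(6.0, 60.0, 30)        # window where all sizes overlap

def collapse_cost(z):
    """Scatter between the rescaled curves for a trial z; minimal at the
    value of z that actually collapses the data."""
    curves = [np.interp(xref, vels * L ** (z + 1.0 / NU),
                        m2(L, vels) * L ** TWO_BETA_NU) for L in sizes]
    return float(np.var(curves, axis=0).sum())

zs = np.linspace(1.8, 2.6, 161)
z_est = zs[np.argmin([collapse_cost(z) for z in zs])]
```

With real Monte Carlo data one would replace m2 by measured averages with error bars and minimize a chi-squared against a fitted model curve rather than the bare variance, but the principle, treating z as the only adjustable parameter of the collapse, is the same.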
Okay, so let's look at the classical case first, and now I will show an actual Monte Carlo simulation. This was done by my former student Cheng-Wei Liu. What's shown here is just an Ising simulation with the Metropolis algorithm, but run as simulated annealing: the simulation starts at some rather high temperature, and then the temperature is reduced as a function of time down to the critical temperature of the 2D Ising model. Now we can do this at different velocities. First I show a fast ramp of the temperature. We start in some equilibrated configuration, then we do a few steps, and not much is really happening; the configuration is changing, of course, but overall it doesn't look much different, and it doesn't really look like it's approaching a critical point. This is clearly because the velocity is so high, the time so short, that the system doesn't have time to adapt. Okay, now let's do it more slowly. Now you can see that the final configuration looks more like a critical configuration, similar to what you saw yesterday in Werner's lecture. Okay, so now we can actually do an analysis of simulations like this. It's exactly like what's shown in this first animation: we repeat it many times, though of course the animation just repeats the same thing, while in our work we start from a different configuration every time. So there is an equilibrium simulation going on, we take a configuration, do this simulated annealing down to Tc, then another one, and another one, and then we calculate things like the squared magnetization and look at how it depends on the system size and the velocity. Now we can simplify things a little, because in this case we know the critical temperature, so we can just collect the data at our final point; we could collect data everywhere and analyze it, but what I show you now
is just the data at the final point. Then the scaling is much easier, because if delta is zero, the first argument, delta times L to the 1 over nu, is just zero, so it's as if there's no first argument, and we just have this form. And now this looks a lot like what we did before with data collapse, right? You have some overall L dependence and then a function of some argument. In this case we even know part of that exponent: 1 over nu is just one, as I told you last time. How about z? Well, z is the dynamic exponent, and, let's see, did Werner talk about the dynamic exponent? I forget. But anyway, it is again telling you how the time scale is related to the length scale, and that time scale is the one he was talking about: the time scale needed for the system to generate independent configurations. This is exactly the exponent which governs that time scale. So if you have a bad Monte Carlo algorithm the value is going to be big, and if you have a good algorithm the value is going to be small; ideally the value would be zero. People have of course studied this for Metropolis dynamics and other dynamics as well, and it's known that the value is around 2.2 for the 2D Ising model. But what we will do is pretend that we don't know it, and see if we can extract it. And this is what we do. This looks very similar to the kind of finite-size scaling we did before, except that on this axis it's L to some power, but the power is a little different from before, there's the z in it, and instead of the distance to the critical point we have the velocity; on the other axis it's exactly the same as before. If we want data collapse, you see we get quite good collapse: we have many different system sizes, and to do this data collapse we actually do it with some data fitting. We model the form by a straight line here, and the straight line is actually expected from this form as well; you
can actually relate the slope to the exponents as well. Then there's this shoulder here, which we describe by a polynomial, matched to the straight line, and that is basically how we do the data collapse: we somehow do the best fit to that form, and the form contains z as an adjustable parameter; z is the only unknown. You can see that for each system size, when we go to high velocity, the data splits off from this collapsed curve. That's expected, because, exactly like ordinary finite-size scaling holds only when the system size becomes big enough, this holds only when the velocity becomes small enough. So we expect data collapse exactly when the velocity is less than, or of the order of, the Kibble-Zurek velocity. If we scale the data in this way, then for each system size there will be some point where the velocity is higher than that value, whatever it is, and there it starts to split off; and if we really go to infinite velocity, this would just go to the value in the starting equilibrium state. So that we understand, and actually we can plot the data in a different way where we get data collapse on that part instead. If you imagine chopping off all those tails, you would see essentially perfect data collapse. So here we used the 2D Ising exponents, of course, and adjusted z, and our result, well, I don't know why I didn't show all our glorious four digits here, but we got it to something like four digits, and we believe it's the best value anybody has extracted, although that was not the goal; the goal was just to explore how this works. But we do believe that this is actually a very good way to extract the dynamic exponent if you don't know it. For example, we have a very recent paper in collaboration with Peter Young where we apply it to the 3D classical spin glass, for which it has been very challenging to find the dynamic exponent, and the reason it's so challenging is that the dynamic exponent is very large; in
glasses, the dynamics is normally very slow. It's actually around six, and that's the result we got; that's why it's so difficult, and we did it exactly in this way. So we have a paper if you are interested in reading more about this; we also tested the method on the cluster dynamics that Werner talked about. Okay, but now the real question that you are interested in is: can we do something like this for quantum systems? Okay, so let's talk about quantum dynamics. We have some initial state and we want to study the time evolution, so we act with the time evolution operator, and in our case the Hamiltonian itself is time dependent; the time evolution operator contains a time-ordered integral of the Hamiltonian from t0 to t. This is not easy to study numerically. You can study small systems; I say exact diagonalization here, but actually a better approach is just to use a standard differential-equation solver adapted to matrix problems. Then you can solve the dynamics very easily, but only for small systems, because of course the matrices grow exponentially with N. People are making some progress with DMRG and so on, but it's still not easy. So what we have proposed, and other people have proposed it also, but we are doing things slightly differently and have some, I think, new claims, is to study the Schrödinger dynamics in imaginary time with quantum Monte Carlo. Okay, you may say, well, imaginary time, that's not what we want, and I agree: we want real time, but we cannot do it, so we do something else, and of course we still want to get something useful out of it; we can actually relate the two in several ways. So we have the imaginary-time evolution operator, and the thing is that this can be implemented quite nicely in quantum Monte Carlo algorithms of the type I talked to you about yesterday, or on Wednesday. The first version of that was done a few years ago. But then you can ask: what
Before I talk about quantum Monte Carlo, let me talk about exact solutions of small systems. We have actually done a lot of work recently on random glassy models, but let me just show you something for an Ising ferromagnet in a transverse field. This again is the Hamiltonian, and now we are on a small two-dimensional lattice, N = L squared — a pitifully small lattice, just four by four, but it still tells us something. We start from the eigenstate of the transverse field, and we look at the instantaneous ground state, the state we would like to stay in if we were adiabatic. At each time we have some value of s, and for that value the ground state is what we call psi_0(t). The actual state, since we are not going infinitely slowly, is different, and we call it psi(t). Then we want to compare these. We can do different things — look at how much the energy deviates from the ground-state energy, and so on — but here I will just show the log fidelity: the overlap between the two states, with a log taken to make it easier to manage and a factor of minus one half in front to make it a little nicer. So if this quantity is zero the overlap is perfect, and the minus-log fidelity, defined like this, is positive when the states differ. Then we integrate the Schrödinger equation numerically, in both real and imaginary time. The question you may be interested in is: which one is more adiabatic — which one stays closer to the ground state? There is a very simple intuition about this. We discussed projector simulations last time: if you take the time-evolution operator with a constant Hamiltonian — here I could have called beta tau — then when beta goes to infinity, it projects out the ground state. So imaginary-time dynamics pulls you towards the ground state all the time. Here the Hamiltonian is changing, but you can imagine the evolution done in small time steps, and it is somehow clear that this still pulls you towards the ground state, although it cannot quite keep up, because the ground state keeps changing. Still, the intuition is that imaginary time should stay closer to the ground state, because it has this projection property, which real-time dynamics doesn't: if you do real-time evolution of some arbitrary state, the energy is conserved, so you don't get closer to the ground state. But let's see what actually happens. Here are results for several velocities: the red curve is real time, plotted as this minus-log fidelity, and the black one is imaginary time. And here it goes exactly as the intuition tells you — remember, zero means we are in the actual ground state. For imaginary time there is just a little peak and then the curve drops to zero; for real time there is also a peak, but then oscillations set in, and it does not drop to zero — the oscillations just continue. And this little peak, even though this is a small system, already contains some hint of a phase transition: it shows you that it is when you go through the phase transition that you start to excite the higher states. Okay — I should go until noon, right? All right, let's do a lower velocity. It looks a bit similar, but now real and imaginary time have come a little closer to each other, and one interesting thing is that momentarily the real-time quantity drops below the imaginary-time one. That immediately tells us it is not true that imaginary time is always closer to the ground state — sometimes real time can be — but normally it's when some oscillation just dips you below like that.
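To get a feel for this real-time versus imaginary-time comparison, here is a minimal sketch — not the code behind the lecture's plots — for a single spin in a toy Hamiltonian H(s) = -(1-s)·sigma_x - s·sigma_z, ramped as s = v·t. Real time integrates i dpsi/dt = H psi; imaginary time integrates dpsi/dtau = -H psi with renormalization; both are tracked against the instantaneous ground state. The model, the simple Euler integrator, and the step count are my illustrative choices.

```python
import math

def hamiltonian(s):
    # Toy stand-in for the transverse-field problem:
    # H(s) = -(1-s)*sigma_x - s*sigma_z for a single spin
    return [[-s, -(1.0 - s)],
            [-(1.0 - s), s]]

def normalize(psi):
    n = math.sqrt(abs(psi[0]) ** 2 + abs(psi[1]) ** 2)
    return [psi[0] / n, psi[1] / n]

def ground_state(s):
    # Instantaneous ground state of H(s); energy is -sqrt(s^2 + (1-s)^2)
    r = math.sqrt(s * s + (1.0 - s) ** 2)
    return normalize([complex(s + r), complex(1.0 - s)])

def evolve(v, imaginary, steps=20000):
    # Ramp s = v*t from 0 to 1, starting in the s=0 ground state.
    # Real time: i dpsi/dt = H psi; imaginary time: dpsi/dtau = -H psi,
    # renormalized after every (simple Euler) step.
    psi = ground_state(0.0)
    dt = (1.0 / v) / steps
    for k in range(steps):
        s = v * (k + 0.5) * dt
        H = hamiltonian(s)
        Hpsi = [H[0][0] * psi[0] + H[0][1] * psi[1],
                H[1][0] * psi[0] + H[1][1] * psi[1]]
        f = -dt if imaginary else -1j * dt
        psi = normalize([psi[0] + f * Hpsi[0], psi[1] + f * Hpsi[1]])
    return psi

def minus_log_fidelity(psi, s):
    # -(1/2) * log |<psi_0(s)|psi>|^2 = -log |overlap|
    g = ground_state(s)
    ov = abs(g[0].conjugate() * psi[0] + g[1].conjugate() * psi[1])
    return -math.log(ov)
```

Comparing, say, `minus_log_fidelity(evolve(0.1, True), 1.0)` with the fast-ramp and real-time counterparts reproduces the qualitative picture above: slower ramps end closer to the final ground state, and the imaginary-time curve lacks the real-time oscillations.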
Yes, this is done numerically, again just solving the Schrödinger equation as a differential equation. It is very difficult to do analytic continuation here — in principle, out of equilibrium you can write down something analogous to the equilibrium procedure, but it is absolutely impossible to do anything with it in practice, as far as I can tell, so that doesn't really work. Anyway, let's go on. By the way, notice that the values on the y-axis keep going down, because we really do get closer and closer to the ground state, even at the peak. And now you start to see that, apart from the oscillations that real time has and imaginary time doesn't, the two actually look very similar. One more time, lower velocity: now they are almost exactly the same. Why is that? My colleague Anatoli Polkovnikov can show, using what's called adiabatic perturbation theory, that the two evolutions are the same up to and including order velocity squared; the differences come in at order velocity cubed. At that order you cannot really tell numerically which one is closer — real time could be, imaginary time could be — but what we find in general, except for some exceptional cases, is that imaginary time, when we finally come to this point here, is closer to the ground state. One can also define dynamic susceptibilities which, because the evolutions agree to that order, are the same. And what one can also show — which we did in this paper — is that if you go through a phase transition and you are interested in the dynamic exponent, you can extract it in imaginary time, because it is exactly the same. So that motivates us to study the imaginary-time dynamics of spin glasses and models that are of interest in quantum computing,
because we can then say something about the criterion to stay adiabatic based on imaginary-time dynamics, which we can study with quantum Monte Carlo, at least for models without a sign problem — of course, if we have a sign problem in equilibrium QMC we will have it here as well. But let me say a little about how the algorithm works. Again, this is the time-evolution operator, and we start from some initial state. In our first paper we simply did what we do in stochastic series expansion: a Taylor expansion of this exponential. Then you get all these integrals — note the time ordering. This actually works very much like stochastic series expansion, except that you also have the time integrals, and it is almost trivial to add those. The main difference is that the Hamiltonian is no longer constant: it starts at something and then changes monotonically as you go along. That is also a very simple change — if you have a projector quantum Monte Carlo code, it is really easy to implement this. We have also explored a simpler scheme. It turns out that these time integrals are really not so important — believe it or not, the important thing is that you have a sequence of Hamiltonians that change. So if you completely forget about the integrals and just apply Hamiltonians that change uniformly as a function of the index in the operator sequence, you can still think of the index as time. It is no longer real imaginary time — it is "fake" imaginary time — but it is almost the same. So we have a delta-s — this s you can again think of as the s in the transverse-field Ising model — and m is how many operators we have, so in each step s changes by delta-s. We can show that this, again, is the same as Schrödinger dynamics up to order v squared, so any leading dynamic response is going to be the same as Schrödinger dynamics, up to some trivial factors that we can put in if we like.
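The spirit of this "sequence of changing Hamiltonians" scheme can be caricatured with the same kind of toy two-level problem. Here I use a shifted power-method factor (C - H(s_k)) as a stand-in for the sampled operator string — the shift C, the two-level model, and the step counts are my own illustrative choices, not the lecture's algorithm. For a single degree of freedom the velocity is just delta-s = 1/m (for a many-body system it picks up the factor n discussed next).

```python
import math

def ham(s):
    # Toy two-level stand-in: H(s) = -(1-s)*sigma_x - s*sigma_z
    return [[-s, -(1.0 - s)], [-(1.0 - s), s]]

def ground(s):
    # Instantaneous ground state of the toy H(s)
    r = math.sqrt(s * s + (1.0 - s) ** 2)
    n = math.sqrt((s + r) ** 2 + (1.0 - s) ** 2)
    return [(s + r) / n, (1.0 - s) / n]

def anneal_product(m, shift=2.0):
    """Apply the product (shift - H(s_m)) ... (shift - H(s_1)) to the
    s=0 ground state, normalizing along the way; s_k = k/m, so the
    'velocity' here is delta_s = 1/m."""
    psi = ground(0.0)
    for k in range(1, m + 1):
        H = ham(k / m)
        psi = [shift * psi[0] - (H[0][0] * psi[0] + H[0][1] * psi[1]),
               shift * psi[1] - (H[1][0] * psi[0] + H[1][1] * psi[1])]
        n = math.sqrt(psi[0] ** 2 + psi[1] ** 2)
        psi = [psi[0] / n, psi[1] / n]
    return psi

def fidelity(psi, s):
    g = ground(s)
    return abs(g[0] * psi[0] + g[1] * psi[1])
```

Each factor amplifies the instantaneous ground-state component (its eigenvalue, shift + r, beats shift - r), so more steps — a smaller delta-s, i.e. a lower velocity — leave the state closer to the final ground state, which is the "fake imaginary time" projection effect described above.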
But it is important to see what the velocity is. Naively you might think the velocity is just the total change divided by the number of steps, but it turns out you have to multiply that by n. This is easy to see if you do the series expansion and look at which orders contribute, and you can show it in adiabatic perturbation theory as well: the velocity is proportional to n times delta-s, with some prefactor that we can in principle calculate. The point, again, is that we can access dynamic behavior that is the same in real and imaginary time. Just quickly about the implementation — it should be pretty clear, because it is not so different from what we looked at on Wednesday, although the model is different. There we had the Heisenberg model, which has a diagonal part pretty similar to this one, but here I show it for a ferromagnet: I add a constant, and I use Pauli matrices instead of spin operators, but that doesn't matter. This is the diagonal operator and this is the off-diagonal operator. For the vertices, we have two-site vertices — now drawn in the horizontal direction instead of vertically, with incoming and outgoing spins — two ferromagnetic vertices, and then the single-spin vertices where the spin flips. We just put these together into a network, exactly as in SSE. This is one component of the starting state; the starting state would normally be the equal superposition of ups and downs, because that is the x-magnetized state. Here you see some operators doing things, or just sitting there. Again one can formulate an algorithm that updates this network of operators — moving them around, replacing them by each other, flipping spins by building some kind of clusters, and so on — so one can sample these configurations. This configuration represents the overlap here: you project from the left and from the right, and this is the starting transverse-field eigenstate. So it is very similar to ground-state projection; the only difference is that the matrix elements — the actual values associated with the vertices — are changing, again a very simple thing to add to your code. It is not a transition graph here, but if you measure some diagonal quantity you can just go to the middle and evaluate it, and for the energy there is some expression, and so on — it can be done. This graph I already talked about; for some reason I have some duplicate slides — I think I moved them and forgot to delete them — so this is the new one. So now we want to look at scaling in imaginary time. I already talked about this for the classical Ising model, where we had stochastic Metropolis dynamics; now we are going to do actual Hamiltonian dynamics. And speaking of expectation values: I said we should measure in the middle — that would be the actual measurement you are interested in. Let me go back to that picture — it's too far back; well, I am there now. If we measure the expectation value, we should measure it here, at the final time. But it turns out we can also measure things anywhere we like, and to a sufficiently good approximation they represent the time dependence of the quantity: the time changes from here to here, so this is the final time, where we really should look, but it is actually fine to look somewhere else and call that the expectation value at whatever the time is there. Again, it is not exactly the same, but
it is the same to high enough order in the velocity that it actually works to do scaling, and that is what we do. Let me illustrate that. This is the expectation value, but now I don't put the operator in the middle — I can put it anywhere. So in a single simulation one can get values for all times, which saves a lot of computing. There is an animation of this, made again by Chang-Wei Liu, showing the spin configuration as we move along the imaginary-time direction: it is a single Monte Carlo configuration, which has the time dimension, and we are just showing it at the different time slices. Also plotted is the magnetization in those slices, and this system is already pretty big, so you can see a quantum phase transition happening even in a single Monte Carlo configuration; of course you should then sample many of them, and the curves become smoother. I hope that's clear: we are just moving within the configuration and looking at the states. So now we can get expectation values as a function of the position along this direction, which is really related to time, which is related to s, and we can do the kind of scaling I did before. But now I want to do something a little more difficult. Let's say we know all the exponents but not the critical point — can we use this to extract the critical point? In this case it is theoretically very well established that the dynamic exponent is one. We are no longer in the realm of stochastic dynamics — although we are doing a stochastic simulation, the dynamics we are probing is the actual quantum dynamics coming from the Hamiltonian evolution — so the dynamic exponent should be one, and nu is around 0.7 (it is known more precisely than that). What we want to do in the scaling form is to eliminate one of the arguments, because it is difficult to deal with two. If we know z and nu, we can make that argument a constant, and then we are back to the normal finite-size scaling form. Again I show the Binder cumulant, which we have already seen for the classical Ising model, and now we plot it as a function of s — but now out of equilibrium: v times L to the power z plus 1 over nu is kept constant, for the different sizes. So it looks very much like standard finite-size scaling: you can see the cumulant crossings, and we can analyze them as before. It turns out we again got the best value so far for the critical coupling of this model, for whatever that's worth — it is just a by-product of our intention to show that this works. Let me make a short note on quantum Monte Carlo simulation dynamics, because people have done a lot of dynamics studies related to the D-Wave machine, and there is a method called simulated quantum annealing. When you see the term, you may think it does Monte Carlo simulation of the quantum dynamics — it is a Monte Carlo method — but what it actually does is a normal Monte Carlo simulation, typically the finite-temperature kind we discussed before, while changing some parameter in the Hamiltonian, like the transverse field, as the simulation goes on. So something is changing as a function of time, but the time is your time, the computer's time — it is not Hamiltonian time. Anyway, this has been called simulated quantum annealing, and people somehow seem to believe that it has something to do with quantum dynamics. Based on such studies it was claimed — in a paper with many co-authors — that the D-Wave machine is actually quantum mechanical, because they got similar results from simulated quantum annealing and from the D-Wave machine.
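Before going on: the cumulant-crossing analysis mentioned above can be sketched in a few lines. A hypothetical helper locates where two cumulant curves cross by linear interpolation; the synthetic curves here obey the finite-size form exactly (in the real protocol each size would be simulated at a velocity v proportional to L to the power -(z + 1/nu)), so every pair of sizes crosses at exactly the critical point.

```python
import math

def binder_crossing(s_vals, u1, u2):
    """Locate where two cumulant curves cross, by linear interpolation
    on the difference d(s) = u1(s) - u2(s)."""
    for i in range(len(s_vals) - 1):
        d0 = u1[i] - u2[i]
        d1 = u1[i + 1] - u2[i + 1]
        if d0 == 0.0:
            return s_vals[i]
        if d0 * d1 < 0.0:
            t = d0 / (d0 - d1)
            return s_vals[i] + t * (s_vals[i + 1] - s_vals[i])
    return None

# Synthetic cumulant curves obeying U_L(s) = g((s - sc) * L^(1/nu)),
# so every pair of sizes crosses at exactly s = sc.
sc, nu = 0.5, 0.7
s_grid = [0.305 + 0.01 * i for i in range(40)]

def u_curve(L):
    return [math.tanh((s - sc) * L ** (1.0 / nu)) for s in s_grid]

cross = binder_crossing(s_grid, u_curve(8), u_curve(16))
```

With real data the crossing points for successive size pairs drift, and the extrapolation of that drift is what gives the critical-coupling estimate.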
But the problem is that this is not really Hamiltonian dynamics — it only accesses the dynamics of the quantum Monte Carlo simulation method, which we also demonstrated in our paper. It is quite clear from the outset, I think, but we decided to really show it, because it seemed like people didn't believe it. What we did was to study the clean system, the 1D transverse-field Ising chain, and then do exactly the kind of Kibble-Zurek scaling I told you about first: we know the critical point is at s equals one half, so we ramp to that point, measure there, do different system sizes, and so on. If we do the right thing — the imaginary-time dynamics — and pretend we don't know that the dynamic exponent is one, we still get one. Here we just said, we know what it is, let's see if the scaling works; if we optimized instead we would get 1.001 or something like that. We get the right dynamic exponent because we are doing quantum dynamics. If we instead do what they do, we get 2.17 for the dynamic exponent — and you recognize that as the dynamic exponent of 2D classical Metropolis dynamics. Why? Because this 1D chain maps onto a model that is effectively a 2D Ising model, and if you do local updates on it, that is exactly what you expect. You can also do cluster updates on it, and then you get 0.3. So you see quite clearly: that approach does not probe the system, it probes your algorithm, whereas what we are doing probes the actual system. Matthias will be here next week, I guess, so you can ask him about this if you don't believe me; I don't know what he will say. All right, that was a little parenthesis, again to stress that what we are doing is really, truly Hamiltonian dynamics. Now I have about 10 minutes left.
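The z-extraction just described can be mocked up in a few lines: if each size L has a characteristic "knee" velocity obeying v*(L) proportional to L to the power -(z + 1/nu), then a log-log fit recovers z given nu. The numbers below are synthetic, not the lecture's data; the helper name is hypothetical.

```python
import math

def loglog_slope(xs, ys):
    # Ordinary least-squares slope of log(y) versus log(x)
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    sxx = sum((a - mx) ** 2 for a in lx)
    sxy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    return sxy / sxx

# Synthetic "knee" velocities obeying v*(L) = A * L^-(z + 1/nu):
z_true, nu = 1.0, 0.7
sizes = [8, 16, 32, 64, 128]
vstar = [0.3 * L ** -(z_true + 1.0 / nu) for L in sizes]
z_est = -loglog_slope(sizes, vstar) - 1.0 / nu
```

The same fit applied to simulation-time (rather than Hamiltonian-time) data is what would return the algorithm's exponent — 2.17 for local updates — instead of the physical z = 1.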
So I can at least tell a little about what we did for a quantum spin-glass problem — one that has been used quite a lot in the past to study this idea of quantum annealing. At the classical level we have a random graph: each spin is connected to three other spins, at random. It looks like a Bethe lattice, but it's a bit different, because in the end everything closes up — all spins are really there, and there are no dangling branches. For example, for n equals 8, these are some of those random graphs. Now it turns out, amazingly, that if you put Ising spins on this graph and make all couplings antiferromagnetic — which means the system is frustrated — the classical model has a mean-field glass transition where Tc is known exactly. So we studied the Kibble-Zurek approach on the classical model as well and confirmed that it works, and then we put on the transverse field. I was actually involved in some work on this model years ago, too — the first author was Eddie Farhi, one of the pioneers of the quantum adiabatic algorithm. He and some collaborators at MIT used what's called the quantum cavity approximation, a kind of improved mean-field treatment of the quantum system, and we also did quantum Monte Carlo and looked at excitation gaps extracted from Monte Carlo data, so we believed the critical point should be around there. We decided to take another look at this with imaginary-time annealing, and we also study the spin-glass order parameter. I don't have much time to explain what that is, for those who don't know it, but basically, in a glass you cannot use something like the magnetization, because you don't know the pattern of ordering, and you have many symmetry-broken states — or I should say many glass states, corresponding to some kind of replica symmetry breaking. What you can do is run two independent simulations and look at the overlap between them; that serves as an order parameter.
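The overlap measurement itself is simple: for two replicas simulated with the same couplings, q is just the site-by-site product of the spins, averaged over the system. Below, uncorrelated random spins stand in for actual simulation output — in a real glass phase, |q| would stay finite instead of shrinking like 1/sqrt(N).

```python
import random

def overlap(rep_a, rep_b):
    """Replica overlap q = (1/N) * sum_i s_i^(a) * s_i^(b)."""
    return sum(a * b for a, b in zip(rep_a, rep_b)) / len(rep_a)

# Two independent "replicas"; random spins stand in for real samples.
random.seed(1)
N = 1000
rep1 = [random.choice((-1, 1)) for _ in range(N)]
rep2 = [random.choice((-1, 1)) for _ in range(N)]
q = overlap(rep1, rep2)  # near 0 for uncorrelated replicas
```

In the analysis below it is q squared, averaged over samples and disorder realizations, that plays the role the squared magnetization played for the ferromagnet.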
So we analyze q squared, the spin-glass order parameter squared, and we again study the Binder cumulant — but now we don't want to assume anything: we don't know the exponents, and although we know roughly where the critical point is, we pretend not to. So how can we eliminate one argument of the scaling form now? One could do some sophisticated two-parameter scaling, but we decided that was too complicated, at least initially. So we argue like this: if the second argument goes to zero as n goes to infinity, we can neglect it and use one-argument scaling. We don't know the exponent, but if we take the velocity as n to the minus alpha, and this alpha happens to be big enough, the argument goes away. We don't really know what "big enough" is, so we try several values and check in the end that everything is consistent. The largest alpha we used was 17/12 — this was done by my former student Chang-Wei Liu, and I guess we would have to ask him exactly why he chose 17 divided by 12, but it's as good a number as any, and the other values we used give the same results. Then we can again do the things I discussed for extracting critical points — crossings of the Binder cumulant and eventually an extrapolation — and the result is quite consistent with the previous value, but with better accuracy. I still want to say a little about what we get for the exponents, because those are, in the end, what has some relevance for quantum computing. Now that we have determined the critical point, we do scaling at the critical point: we can get back to one-parameter scaling with the velocity and do a scaling collapse, as I showed you before, but now we have to adjust the exponents, because we don't know them. We try to do this carefully, and we get some pretty big error bars. From this analysis we only get certain exponent combinations, but that's fine, because those are the combinations that matter. As a check, we also did the fully connected Sherrington-Kirkpatrick model — some number of spins, all coupled to each other — which is believed to be a mean-field spin glass, and for that one we got the same kind of exponents. I say "believed" because our values actually differ from what they are supposed to be: there is a theoretical paper, based on field-theory arguments, claiming the values should be different, so we are not in agreement with it. That may actually be fine, because the same paper points out that the situation is not clear-cut — there are logarithmic corrections and all kinds of things — so it may be that this transition is, in the end, quite complicated; it may not be pure power-law scaling, even though we see at least effectively good power-law scaling, just not with the predicted exponents. But what is the point of these exponents? Well, consider the Kibble-Zurek kind of argument. We have some parameter s, and we are asking about approaching a critical point — there is a glass phase, and eventually we want to go all the way to s equals one. Kibble-Zurek scaling can only tell us how long it takes if we want to stay in equilibrium up to the critical point. That is at least part of the problem: you want to stay in equilibrium up to there. But it is not as easy as in the standard case we talked about — in the ferromagnet there is a gap, smallest at the transition — whereas in this case it may be more
like this: the gap vanishes at the transition, but beyond it there are a lot of nearly gapless states — all hell breaks loose when you go in there. So we can only answer something about the first stage, up to the critical point, and now we have a value for the relevant exponent. But there is another thing we can say, because at the critical point there is critical scaling of the order parameter, and that tells us how close we are to the final solution: the order parameter has some value at the final point, and with power-law scaling at criticality, how close we get depends on the exponent. So the order parameter scales like this — and notice that I have expressed everything in terms of n, because in this model there is no spatial length; everything is connected, there is just n, no L. So we have a value for that exponent. Let's compare with the classical exponents: if you do classical simulated annealing through the classical glass transition, these are the exponents, and those values are very solid. You see that this value is larger in our quantum case, which is actually bad — it means we end up a little further from the ordered state — and this combination of exponents is also larger, which means it takes longer to reach the critical point if we want to stay in equilibrium. So the picture is this: there is a phase diagram as a function of temperature and the transverse field (I could say s, but s is related to the field). We have been looking at the transition in the ground state; the classical case is the glass transition at zero field. There is a glass phase, and where we really want to go is the corner — zero temperature and zero field — and the question is which path is best to take. What we have just said is that, if you only ask how long it takes to reach the phase boundary, it is better to go the classical way than the quantum way. So in this case it seems that quantum annealing is not such a good idea — strictly speaking we don't know what happens once you go inside the glass phase, but at least this is telling us something. The other proposal we had was that, in principle, the D-Wave machine could do this kind of scaling exploration too, which apparently they haven't done. I talked to somebody recently who said that one reason is that D-Wave doesn't let users change the annealing time — they only want the time to be what they think it should be. Okay, so that is it, and I will take some questions. [Question.] No, I think — well, at least in the paper I cited, what they said is that since they do quantum dynamics, and the D-Wave machine gives similar results, it must mean that the D-Wave machine does quantum dynamics. But my argument is that they don't actually do quantum dynamics, so it doesn't mean anything, probably. [Question.] Well, again, the problem is that we know there are classical problems for which, with any of the known methods, the scaling is exponentially bad. In this case it is not exponentially bad to go to the critical point, but it becomes exponentially bad when you go further into the glass phase. What we have addressed here is really just how long it takes to reach the phase boundary; inside the phase, the time scales become exponential. People know that from simulated annealing, for example: if you continue down in temperature, you need a time exponentially long in the system size. The hope has been that if you go the quantum way, the power-law scaling will continue all the way down. I don't know if anybody still believes that — in general, I think nobody does — but the hope is that there is at least some class of problems for which, classically, it takes exponentially long, while quantum mechanically it takes only some power-law time;
then you would have made some significant progress. Whether that is going to be true, I don't know. But what I am claiming is that imaginary-time dynamics is a way to test it without having a quantum computer: you can do the quantum simulation in imaginary time, we know the two agree to order v squared, so you can formulate a criterion for being adiabatic in imaginary time that will hold in real time, and then you just do the simulation and check. It's still not trivial to do the simulation, because these systems are glasses — which comes back to what Werner Krauth talked about: we need better methods. He has invented good methods for certain classes of problems where you overcome the long sampling time scales; for a quantum spin glass of the transverse-field Ising type we don't yet have a good way to sample these configurations. We know how to do it in principle, but it's very slow. I think that's fine — we will waste some hundreds of thousands of CPU hours, but at least we can answer some relevant questions. I'm sure we can reach big enough sizes to still see the scaling, in the same way as people have done for classical spin glasses: even though those are, strictly speaking, intractable problems with very bad scaling, people still do it and it works. [Question: can you do this for regular systems?] You can do what I said: you can do imaginary time and relate it to real time. But if you really want to do the simulation in real time, you cannot, because then you get phases and it becomes completely crazy. If you want to study something close to the adiabatic limit, though — which is often what is interesting, namely the corrections to the adiabatic limit — you can get that up to order v squared by doing this, so I think it's quite useful. [Question: why are there two fit functions?] It's just a practical reason. When you do data collapse — how do you do data collapse?
I guess there are many ways to do it, but the simplest way I know is this. You have your data for different system sizes. In our case there is a velocity, so let's plot m squared as a function of velocity for, say, L equals 8, 16, 32. Now you multiply m squared by L to some power x, and the velocity by L to some power y, and you want an automatic way of finding the values where the curves collapse. The way we do it is to take a polynomial and fit it to all the data. At a bad trial point this looks crazy — it has a very bad chi-squared — but it is well defined: you get a chi-squared, defined relative to the fitted curve, which depends on x and y, and then you optimize x and y so that all the points come as close as possible to a single curve. To do that, you need a reference curve relative to which chi-squared is defined. Now, this particular data is not really good to fit with a polynomial: on a log-log scale it is a straight line, and without log-log it decays like a power law. But on the other hand we know that part should be a straight line, so up to the point where the data are consistent with a straight line, we use a straight line — Chang-Wei actually did it on the log scale, where it is straight — and the shoulder is described by a polynomial. It's just a reference with which to define chi-squared; maybe there is some better function with this form, but we didn't think of one at the time, and this worked quite well. The point is that if you have a lot of data points, it's fine even if your fitting function has eight or nine parameters, because we have hundreds of points. I forgot to point out — but it's probably clear — that where the curve flattens out is where normal finite-size scaling applies: there the scaling argument is so small that the function has essentially reached a constant, while elsewhere you just have, in this case, the power law. All right.
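For what it's worth, here is a small self-contained mock-up of that collapse procedure — synthetic data generated from an exactly quadratic scaling function (so the reference polynomial can fit it perfectly at the true exponents), a quadratic least-squares fit as the reference curve, and a grid search over the two exponents. Everything here (the model, the exponent names a and b, the grids) is illustrative, not the lecture's actual fitting code, which used a higher-order polynomial matched to a straight line.

```python
def polyfit2(xs, ys):
    """Least-squares quadratic fit via the 3x3 normal equations."""
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for x, y in zip(xs, ys):
        b = [1.0, x, x * x]
        for i in range(3):
            t[i] += b[i] * y
            for j in range(3):
                S[i][j] += b[i] * b[j]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(S[r][col]))
        S[col], S[p] = S[p], S[col]
        t[col], t[p] = t[p], t[col]
        for r in range(col + 1, 3):
            f = S[r][col] / S[col][col]
            for c in range(col, 3):
                S[r][c] -= f * S[col][c]
            t[r] -= f * t[col]
    coef = [0.0] * 3
    for i in (2, 1, 0):
        coef[i] = (t[i] - sum(S[i][j] * coef[j]
                              for j in range(i + 1, 3))) / S[i][i]
    return coef

def collapse_chi2(data, a, b):
    """Scale (v, m2) -> (v*L^a, m2*L^b), fit one quadratic to all the
    scaled points, and return the summed squared residual."""
    X = [v * L ** a for L, v, m2 in data]
    Y = [m2 * L ** b for L, v, m2 in data]
    c = polyfit2(X, Y)
    return sum((y - (c[0] + c[1] * x + c[2] * x * x)) ** 2
               for x, y in zip(X, Y))

# Synthetic data m2(v, L) = L^-b_true * g(v * L^a_true), g quadratic,
# so chi-squared is (numerically) zero only at the true exponents.
a_true, b_true = 2.0, 0.5
g = lambda u: 0.5 - 0.3 * u + 0.1 * u * u
data = [(L, 0.01 * k, L ** -b_true * g(0.01 * k * L ** a_true))
        for L in (8, 16, 32) for k in range(1, 11)]
best = min(((a, b) for a in (1.8, 1.9, 2.0, 2.1, 2.2)
            for b in (0.3, 0.4, 0.5, 0.6, 0.7)),
           key=lambda p: collapse_chi2(data, p[0], p[1]))
```

Here the grid search recovers the exponents that generated the data; in practice one would refine the search continuously and estimate error bars by bootstrapping the data.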