The next speaker is Dr. King, and he's going to talk about quantum critical dynamics in 5,000-qubit spin glasses. Thank you very much to the organizers. It's great to be here. I'm going to talk about some extensions of the one-dimensional work that Mohammad largely covered yesterday, and this is a collaboration between Anders Sandvik and D-Wave. So this is an extension from the 1D chain to three-dimensional spin glasses. In the late 1990s, there were laboratory experiments done on disordered alloys that suggested that annealing quantum fluctuations, via a magnetic field transverse to the Ising moment, could offer a dynamical advantage over annealing of thermal fluctuations. In other words, quantum annealing can offer a dynamical advantage over thermal annealing for three-dimensional transverse-field Ising model spin glasses. Reproducing this in a programmable context has remained a central goal in the field, and I hope to convince you in this talk that we've achieved it. Along with the experimental results in the late 90s, there were important simulation results that went along with them, and together these really motivated the field of quantum annealing. You'll see some familiar names in these author lists, some of whom have spoken today and some of whom are on the steering committee for this conference, or even organizers. These are very important foundational works that motivated the development of D-Wave's programmable quantum annealers. So, just to give a quick recap of the 1D results: what we did was anneal very, very fast. When you anneal very, very fast, you remain coherent, which means the thermal environment doesn't affect you. So there's a collapse with respect to changing the temperature of the environment, and what we see is in very good quantitative agreement with the closed-system quantum model. This is strong evidence that we are annealing a closed quantum system with negligible thermal effects.
The way we get this coherent theory line is we just plug the annealing parameters derived from single-qubit measurements into the annealing schedule for our processor, and then we appeal to the closed-form solutions for this system, which were developed in the 2000s. So this is a very simple experiment on a very simple system. The benefit is that it's very well understood. The problem is that there's no advantage from using quantum annealing. In fact, this inverse-square-root scaling you see in the kink density, which in this case is also the residual energy density, is exactly what you would see from any classical dynamics you could come up with, in particular random walks of the domain walls or diffusive dynamics. It's also not a very interesting optimization problem: this is an unfrustrated one-dimensional chain, and there's absolutely nothing complicated about minimizing its energy. So we want to go to harder systems, and in particular systems where we might see an advantage from quantum annealing. I'm going to talk about two types of systems: small systems and big systems. The small systems are 16-qubit spin glasses, and the reason we choose 16 qubits is that this is basically what runs overnight on the HPC cluster if I want to simulate the time evolution of the Schrödinger equation on these systems. Then we're going to look at big instances, and we're going to appeal to a dynamic finite-size scaling ansatz, which I'll describe. What we do is follow the typical D-Wave quantum annealing schedule: we turn down the transverse field, parameterized by Γ(s), and we turn up the energy scale of the Ising model, the classical system that we're optimizing, parameterized by J(s).
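As an aside on the 1D observable: the kink density mentioned above is just the fraction of misaligned neighboring spin pairs in a sample, which is straightforward to compute. A minimal illustrative sketch (function and variable names are my own, not from the talk):

```python
import numpy as np

def kink_density(spins: np.ndarray) -> float:
    """Fraction of neighboring spin pairs that disagree (domain walls).

    For the unfrustrated ferromagnetic chain this is also the
    residual energy density per bond.
    """
    # A kink sits on every bond where adjacent spins anti-align.
    return float(np.mean(spins[:-1] != spins[1:]))

# Example: a chain of 8 spins with one flipped domain has 2 kinks on 7 bonds.
spins = np.array([1, 1, 1, -1, -1, 1, 1, 1])
print(kink_density(spins))  # 2/7 ≈ 0.2857
```

Averaged over many samples at each anneal time, this is the quantity that shows the inverse-square-root scaling discussed above.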
Now, depending on what system we're annealing and which way, you go through a different phase transition, and this is the phase diagram of the three-dimensional system in temperature and transverse field. It has two very distinct phase transitions. The first is the transition that thermal annealing goes through, on the Γ = 0 line, and this has its own set of universal scaling exponents associated with it. The other is a quantum phase transition at zero temperature, which has its own set of quantum scaling exponents. So we're going to run analogous experiments, and we're going to appeal to a theory of critical phenomena in which you would usually anneal to the critical point and stop. But there are two reasons we don't do that. The most important reason is that we can't, and the secondary reason is that we don't want to. Because we're trying to optimize the energy function of this spin glass, you don't just stop at the critical point; you actually go all the way to the T = 0, Γ = 0 endpoint in hopes that the energy will decrease even more. So we're going to blow through the QCP, the quantum critical point, and we're going to do the analogous thing with simulated annealing: we anneal to a very low temperature instead of stopping at the critical temperature. This doesn't really change the details, because there is critical slowing down in the spin glass phase. Even if you go through the transition into the spin glass phase and continue, the dynamics slow down so much that it doesn't make much of a difference in terms of your observables. So we can go through the phase transition and still see the critical dynamics. Now, on to the small systems. Again, these are 16-qubit systems. We tile them about 200 times over the Advantage qubit graph, and we look at an ensemble of 100 random spin glass instances.
These are ±J instances: we set each coupler to +1 or −1 uniformly at random. Then we throw away 99.9% of the instances, because these are very small instances and a lot of them are really easy; they don't have interesting energy landscapes. So we generate 100,000 of them, and we keep the ones that have exactly two classical ground states and lots of classical first excited states. This suggests that they will have interesting energy landscapes and interesting eigenspectra. Since these are small instances, we can actually diagonalize the time-dependent Hamiltonian and look at the minimum gap. It's actually the minimum parity-preserving gap, or the second eigengap, denoted Δ_min here. We can also simulate the Schrödinger dynamics classically and look at the projection of the current state onto the instantaneous ground state and the final ground state. What we want to look at in terms of the success probability is the final ground state: where does this solid line end up? These are all 14-nanosecond anneals for three different instances that I've picked to represent this distribution, and you can see that the success probability depends on the size of the gap. We can also look at the scaling with annealing time, and we see that the excitation probability basically follows an exponential scaling in annealing time. This is what you would expect from a system whose excitation mechanism is dominantly described by a single Landau-Zener transition. What we see here are experimental results and Schrödinger results, and they agree quantitatively very well with each other. Again, there are no fitting parameters here. So this gives us a good deal of confidence that we are realizing coherent closed-system Schrödinger dynamics on these small systems.
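The instance-selection step described above (generate random ±J instances, keep those with exactly two classical ground states and many first excited states) is easy to sketch by exact enumeration at 16 spins. This is my own illustrative reconstruction, not the authors' code; the energy convention and names are assumptions:

```python
import itertools
import numpy as np

def degeneracies(n, edges, J):
    """Count classical ground states and first excited states by exact
    enumeration of all 2^n spin configurations (feasible at n = 16).

    edges: list of (i, j) pairs; J: dict mapping each edge to +1 or -1.
    Energy convention assumed here: E = sum_ij J_ij s_i s_j.
    """
    energies = []
    for config in itertools.product([-1, 1], repeat=n):
        e = sum(J[(i, j)] * config[i] * config[j] for (i, j) in edges)
        energies.append(e)
    energies = np.array(energies)
    levels = np.unique(energies)  # sorted distinct energy levels
    return int(np.sum(energies == levels[0])), int(np.sum(energies == levels[1]))

# Example: a frustrated antiferromagnetic triangle has 6 degenerate
# ground states and 2 first excited states.
edges = [(0, 1), (1, 2), (0, 2)]
print(degeneracies(3, edges, {e: 1 for e in edges}))  # (6, 2)
```

The filter would then keep an instance only if the ground-state count is 2 and the first-excited count is large.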
Here you can see the success probability for 14-nanosecond anneals, for the simulation and for quantum annealing, over the entire ensemble of 100 instances. You can also look at the distribution of first excited states, because this is interesting as well. And in the back matter of the paper that we're writing, we look at the KL divergence and show that the Schrödinger dynamics does a better job of explaining what we see in quantum annealing than path-integral Monte Carlo or simulated annealing. Okay, so that's microscopic evidence. Now we want to look at macroscopic evidence. We're going to look at systems from L = 5 up to L = 12 (the one pictured here is actually L = 4), and these are L × L × L × 2 spin glasses. So these have two-qubit dimers, or chains, and aside from these chains we just have a three-dimensional spin glass. The yellow couplings are half magnitude: they're divided across two physical couplers rather than one, as in the purple case. We're going to look at three observables, or pieces of evidence, that the scaling is faster in quantum annealing than in simulated annealing or simulated quantum annealing. The first is what's been called today the Parisi order parameter, or the Edwards-Anderson overlap order parameter, and we're going to square it. This is just the typical overlap of two independently annealed samples for the same spin glass. We're also going to look at the Binder cumulant, which is used to identify phase transitions; for reasons of convenience in the scaling arguments, which are described in the theory and which I won't get into, it makes the collapse simpler. So we're going to look at a finite-size collapse of the Binder cumulant, and we're going to anneal for various system sizes and anneal times. Here we see quantum annealing data on the top and simulated annealing data on the bottom.
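For concreteness, the two observables just mentioned can be computed from paired samples like this. A hedged sketch: the array layout and names are my own assumptions, not the authors' analysis code:

```python
import numpy as np

def overlaps(replicas_a, replicas_b):
    """Edwards-Anderson overlap q for paired independent anneals.

    replicas_a, replicas_b: (samples, N) arrays of +/-1 spins drawn
    independently for the same spin-glass instance.
    """
    return np.mean(replicas_a * replicas_b, axis=1)

def binder_cumulant(q):
    # U = (3 - <q^4>/<q^2>^2) / 2: tends to 1 deep in the ordered
    # phase and to 0 when the overlap distribution is Gaussian.
    q2, q4 = np.mean(q ** 2), np.mean(q ** 4)
    return 0.5 * (3.0 - q4 / q2 ** 2)

# Perfectly correlated replicas give q = 1 for every sample and U = 1.
a = np.ones((10, 8))
print(binder_cumulant(overlaps(a, a)))  # 1.0
```

The squared order parameter from the talk is then just `np.mean(overlaps(a, b) ** 2)`.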
We can see that we basically have power-law scaling, which is typical of critical dynamics at a second-order phase transition, if we look at the order parameter as a function of annealing time; you can see everything from L = 12 down to L = 5. The first piece of evidence that quantum annealing scales faster is that this slope is higher. I'm not going to show you this, I'm just going to claim it, and you can believe me. The more convincing piece of evidence comes from a dynamic finite-size scaling collapse of the Binder cumulant, which is achieved by collapsing this data horizontally: based on the system size, we shift the data for a given system size over by L^μ, where μ is the Kibble-Zurek exponent, and we do this in such a way that we get a good collapse for multiple system sizes and multiple annealing times. So we extract μ as a fitting parameter of this collapse, and this is how we estimate the Kibble-Zurek exponent. One good thing about this dynamic ansatz is that we don't need to anneal very, very slowly and get to equilibrium. We can just anneal fast, remaining coherent over tens of nanoseconds, and then look at the slope and the finite-size scaling to extract the dynamic exponent. So this is very convenient. You'll notice that I have μ = 2.99 for quantum annealing and μ = 6.2 for simulated annealing, and this is actually very close to what has been published in previous works. Here we've got literature values as the horizontal lines; we don't have literature values for simulated quantum annealing because, to my knowledge, it hasn't been studied. So we've got extremely good agreement between the quantum annealing experiment and the expectation from independently extracted critical exponents. And we have very good agreement for simulated annealing as well, with some deviation here that is actually a result of finite-size effects.
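The horizontal collapse just described amounts to a one-parameter fit: rescale t_a by L^μ for each size and pick the μ that makes all the curves coincide. Everything below (the common grid, the variance cost, the names) is an illustrative assumption, not the authors' analysis code:

```python
import numpy as np

def collapse_cost(mu, datasets):
    """Spread of Binder-cumulant curves after rescaling t_a -> t_a / L^mu.

    datasets: list of (L, t_a, U) with t_a and U as 1D arrays. A good
    Kibble-Zurek exponent mu puts every curve on one master curve, so
    we interpolate all curves onto a common grid in log(t_a / L^mu)
    and score the variance across curves.
    """
    xs = [np.log(t / L ** mu) for L, t, _ in datasets]
    us = [u for _, _, u in datasets]
    grid = np.linspace(max(x.min() for x in xs), min(x.max() for x in xs), 50)
    interp = np.stack([np.interp(grid, x, u) for x, u in zip(xs, us)])
    return interp.var(axis=0).mean()

def estimate_mu(datasets, mus=np.linspace(1.0, 8.0, 701)):
    # Simple grid search over candidate exponents.
    return mus[int(np.argmin([collapse_cost(m, datasets) for m in mus]))]

# Synthetic check: curves generated from one master function with mu = 3
# should collapse best near mu = 3.
t = np.geomspace(1.0, 1000.0, 30)
data = [(L, t, np.tanh(np.log(t / L ** 3.0))) for L in (5, 8, 12)]
print(round(estimate_mu(data), 2))  # ~3.0
```

In practice one would also propagate uncertainty in μ, e.g. by bootstrapping over instances, but the basic idea of the collapse is this one-dimensional minimization.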
We only go up to about 3,000 spins, which is actually kind of small for this dynamic ansatz. And the reason this starts to deviate is that your chains become strong and then your system effectively becomes smaller, so this issue becomes more problematic. On the x-axis here, I'm varying a parameter; what parameter is it? It's effectively the chain strength: we keep the chain coupling at 2 and change the magnitude of the spin glass coupling, J_G. The fact that we have a smaller Kibble-Zurek exponent for quantum annealing shows that we have faster dynamics, and again this is very consistent with literature values, where μ = z + 1/ν and z and ν are extracted from equilibrium quantum Monte Carlo estimates. OK. Most people who use D-Wave processors don't care about Binder cumulants. The figure of merit for hard optimization problems such as three-dimensional spin glasses is the residual energy: how far are you from the ground state? Any advantage I'm talking about here is for penetrating the glass phase, so it's for approximating the ground state; it's not a polynomial speedup in finding the ground state. Finding the ground state is an intractable optimization problem, and it remains intractable even if you have a polynomial speedup. So in my view, in terms of practicality, it's more important to have a polynomial speedup in the approximation sense. Mohammad talked about this exponent; he called it x, but I call it kappa. We have certain theoretical expectations, based on the theory of critical phenomena around the phase transition, of how the residual energy should scale, and it's a bit complicated. We have this space-time dimension, which is d + z for the quantum system and d for the classical system. That hardly seems fair, but that's why we're doing this, I guess. And then we want to look at which solver has the bigger kappa.
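Comparing kappas between solvers comes down to fitting a power law to residual energy versus anneal time. A minimal sketch of such a fit, with names and the masking convention as my own assumptions:

```python
import numpy as np

def fit_kappa(t_a, e_res, coherent=None):
    """Estimate kappa from e_res ~ t_a^(-kappa) by a straight-line fit
    in log-log space, optionally restricted by a boolean mask to the
    coherent regime (discarding late times where environmental effects
    start to matter).
    """
    if coherent is not None:
        t_a, e_res = t_a[coherent], e_res[coherent]
    slope, _ = np.polyfit(np.log(t_a), np.log(e_res), 1)
    return -slope

# Synthetic check: a clean power law with exponent 0.7 is recovered.
t = np.geomspace(1.0, 100.0, 20)
print(round(fit_kappa(t, 2.0 * t ** -0.7), 3))  # 0.7
```

The solver with the larger fitted kappa is the one whose residual energy falls off faster with anneal time, which is the comparison made on the next slide.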
This is all around the critical point, but it actually applies approximately for the ground state energy, both in theory and in experiment. If we look at quantum annealing, simulated annealing, and simulated quantum annealing again, we can do power-law fits. We throw away the data here because we're starting to see effects from the environment, so we only fit over the coherent regime, where there is very good power-law scaling. And we see a steeper slope for quantum annealing than we do for simulated annealing or simulated quantum annealing. As always, simulated quantum annealing sits between the two: it gets an advantage because it has this extra space-time dimension, but its dynamics are slower. If we look at the values of kappa that we can estimate from these power-law fits, the hollow markers show data where we just plug the exponents estimated from the Binder cumulant into this equation, and the filled markers show data where we just take the power-law slope. So this is experimental, and this is kind of theoretical-experimental, and then the lines are theoretical-theoretical. For simulated annealing, everything agrees extremely well. For the two quantum systems, where you have the extra dimension, there is a systematic deviation, but it's not so bad. More importantly, both in theory and in experiment, quantum annealing has a larger exponent than the two classical solvers. So this is the main result: a larger exponent for quantum annealing, and so a dynamical advantage in penetrating the spin glass phase, with a theoretical foundation. And that's all the time I have, so I'll stop there. Thank you.

The anneal time you're using is in tens of nanoseconds, so I wonder what devices are capable of performing this, and are there any plans for such short anneal times to be available through the API? The device I'm using is actually currently online.
It's the Advantage system 4.1. But I'm accessing anneal times that are inaccessible to external users, because what would take another hour to explain is all the work that goes into making this work when you are way outside the parametric regime that the system is actually calibrated for. We have to kind of roll our own, in a sense: we have to calibrate the system ourselves in order to make these fast anneals work. We want to get that automated so we can get this into the hands of external users. But it is the same system.

Hi, Andrew. Sorry, Nishimori-san, go ahead. Okay. This is a very impressive result, I should say; a groundbreaking result, I should say. My first question is whether these straight lines on the plot are just linear fits to the data or are grounded in any theory. So you see that there are three exponents here: one of them is the dimension, one is the dynamic exponent, and one is the equilibrium exponent. So you have d, z, and ν that all go into this. These exponents have all been estimated independently in Monte Carlo studies, and we just plug those independent values into this equation and get this line. I understand that, but I'm asking about the coefficient in the upper panel. In the one case, we had the complete exact solution, including the coefficient, which fits the data very well. But in 3D we don't have that, I suppose. Oh, so you're talking about the height? Yeah. No, we haven't looked at that at all. But because of the studies we've done on small instances with the Schrödinger dynamics, we have a good amount of confidence that it would at least be close. Of course, when you go to larger systems, any non-idealities, where your qubit Hamiltonian is not exactly the same as the Ising Hamiltonian, will become an issue. So we expect some deviation, but not too much.
Okay, my comment is that I understand this as a first example of quantum advantage, in the sense that no classical methods can simulate these dynamics at this scale of system. Yes, I would agree with that. I would be very careful with the statements we make, and they would have to be very precise; some of these terms, like quantum advantage, mean different things to different people. But I do agree with the sentiment of what you're saying. Okay, thank you. Thanks.

Yeah, you went a bit quick for me at one point when you were talking about the Binder cumulants. I mean, that's a finite-size scaling analysis, and then you said that the fit to the simulated annealing data didn't work because of an additional finite-size effect. Yeah, so there's finite size and there's finite size, right? There are always corrections to finite-size scaling, and they typically become worse at lower dimension, and in the classical system you effectively have lower dimension than in the quantum system. So all of this stuff is way worse for simulated annealing. But more importantly, we have open boundaries in two dimensions and periodic boundaries in one dimension. That is better than fully open boundaries, but normally in these finite-size studies you would have fully periodic boundaries, which we're not able to do on this processor. Okay, thanks.

So is the reason the SQA and QA results are different that quantum Monte Carlo relies on equilibrium dynamics? Well, yeah, that's one way of saying it. Simulated quantum annealing simulates the equilibrium dynamics, whatever that means, of the quantum phase transition, but with different dynamics than the quantum annealer.
So we know that simulated quantum annealing, or path-integral Monte Carlo, can simulate equilibrium and some aspects of dynamics, but this is extremely strong evidence that simulated quantum annealing does not capture the full dynamics. Is there a better way to simulate non-equilibrium dynamics when there's no sign problem? I hope not, because I spent a lot of time running these simulated quantum annealing runs. Awesome, thanks. I mean, hidden in this is the fact that SQA is something like 10 million times slower than quantum annealing.

So, thinking more towards application scale: a lot of times when you do embedding, you'll have different chain lengths, and so the effective transverse field for your logical variables is different. Do you think there might be some benefit to trying to make sure that you're going through the phase transition at the same time, or something like this? Can you speak to this point a little bit? Yeah, maybe we can talk offline in more detail, but briefly: yes, it's extremely important, and there's been a fair amount of work done not only on the effect of having homogeneous chain lengths but also on solving the problem of inhomogeneous chain lengths. On D-Wave processors we can advance or delay certain qubits, so if you have a big, heavy chain, you would want to delay it so that it goes through the phase transition later, so the tunneling dynamics are on roughly the same scale when each chain goes through the transition. Okay, so let's move on to the next talk. Let's start.