on-going work, but hopefully it will be published soon. So we're interested in digital quantum simulation, and we especially want to relate it to physical phenomena, like quantum chaos and energy localization. I will explain how these things relate. But first, why are we interested in digital quantum simulation? Well, there are very complex many-body problems that we find in nature and build models for, like high-temperature superconductivity. But these things are very difficult to solve, and there are many open questions if you write down a phase diagram like this. Or here is an example from high-energy physics, where you have a high-energy heavy-ion collision with two pancakes of ions. They are accelerated almost to the speed of light and smashed into each other, and then lots of stuff is happening: many quarks and gluons come out. And we would like to understand what is happening through this time evolution. But these are things that are very hard to solve. So the idea is that we take the Hamiltonian that we think describes these situations, and then we implement it in a more easily controlled system, like trapped ions, cold atoms, photonic systems, or superconducting qubits. And we have heard about all of these things already. So there are several ways one can implement this Hamiltonian, and one of them is digital quantum simulation. Let me just remind you of the main idea. What we want to do is to have a time evolution given by some Hamiltonian H, and this can be built up of individual parts. But usually, if we have, for example, a trapped-ion system, a complex Hamiltonian will not naturally be in there. So we need a way to actually synthesize this Hamiltonian.
And one idea is: if we can do individual gates, individual parts where we know how to do the time evolution, then we can chop this thing up and do the time evolution corresponding first to the first set of terms, then to the second set, and so on. We do this for very short time steps, so we split the entire time into n small time steps, go once, so to say, through the Hamiltonian, and then repeat. And if we do this very fast, in a time-averaged way we get very close to the real Hamiltonian that we want. OK. And this has actually been used in several experiments as demonstrations, for complex spin models, for fermionic models. And recently, in a collaboration with the group of Rainer Blatt, even for a baby lattice gauge theory, where they looked at the creation of electrons and positrons in a very small system, just four sites. But OK, in the experiment they could see some generation of particles, and it corresponds nicely to the theory prediction. So this is something that is being used, but for very small systems. And we have the question: in the future, how far can we go? How reliable will this be when we scale this thing up? And this is what I want to talk about here. So this is the outline of this talk. First, I'm going to tell you a bit about a worst-case error bound that is known in the literature, and also give some indication why this is maybe too pessimistic for what we have in mind. Then I will show you some numerical results that illustrate this, which we will then explain with analytics. And in the end, I will show that there is a breakdown of this quantum simulation that we can connect to a transition to quantum chaos. So let's start with this worst-case bound. This is due to Lloyd, who, I think, worked it out first. And this chopping up into a sequence of gates, by the way, is called Trotterization.
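To make the Trotterization idea concrete, here is a minimal numerical sketch, with a toy two-qubit splitting of my own choosing (not the talk's model), showing the error of the chopped-up evolution shrinking as the number of steps n grows:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Toy two-qubit split H = H1 + H2 with [H1, H2] != 0 (illustrative choice)
H1 = np.kron(sz, sz)                      # Ising-type coupling
H2 = np.kron(sx, I2) + np.kron(I2, sx)    # transverse field

t = 1.0
U_exact = expm(-1j * t * (H1 + H2))

def trotter(n):
    """First-order Trotter approximation (e^{-i H1 t/n} e^{-i H2 t/n})^n."""
    step = expm(-1j * t / n * H1) @ expm(-1j * t / n * H2)
    return np.linalg.matrix_power(step, n)

# Spectral-norm error shrinks roughly like 1/n, as in the worst-case bound
err = {n: np.linalg.norm(U_exact - trotter(n), 2) for n in (1, 10, 100)}
```

More steps, smaller error, at the cost of more gates per unit of simulated time.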
And Lloyd showed that you can approximate the desired unitary with this chopped-up thing in this way, so you have a well-controlled upper bound on the error. It scales with the time you want to reach squared, divided by the number of steps that you're doing, and it also contains the commutators of the different gates. So you have a well-controlled error bound: if you do more steps, the error becomes smaller. From a complexity standpoint this is very good, because it's only polynomial scaling, so everyone is happy. But in practice, this polynomial scaling is still very, very hard, and you get an error that diverges as you go to larger times. And these things here are typically commutators of an extensive number of local gates, acting locally on single qubits or pairs of qubits, so local in the sense of few-body operators. If you take the commutator, this will typically again give an extensive number, which means that this thing here scales with the number of qubits that you have. So the error, if you go to larger systems or larger times, will just diverge. Now, this is a worst-case bound that looks at the entire unitary time-evolution operator. If you do quantum computing, that's the thing you look at. But here, we are more interested in physical systems, so maybe this is actually demanding too much for what we have in mind. And I can give you a very simple toy model just to illustrate what I mean. Imagine you just have a collection of spins that live in a magnetic field. It's a very simple model: the spins don't do much, they don't even talk to each other, they just rotate over time. Now, we can write down the time-evolution operator for this, which is just given by the tensor product of the individual time evolutions; these things here describe this rotation.
And now we want to get an idea of how robust such a thing is. So let's change something: we change the magnetic field a little bit, from h to some h tilde, which gives the same time evolution with h replaced by h tilde. And we can ask what error in the unitary we get, so we just compare the two unitary evolutions. If we expand this for short times, we get something like this: it is linear in the time and in the difference of the magnetic fields, and then there's also the number of qubits. So we can look at the error bound, which is the maximal eigenvalue of this thing. Over time it increases linearly, and as you increase the number of qubits, it gets sharper and sharper. But if you look at this, it means that for a large collection of spins, you get an error that is immediately very large and you have to stop. Somehow, this doesn't feel right: why should the evolution of one spin get worse if you add 1,000 other ones, which are not even talking to this one? And indeed, if you are actually interested in more physical observables like the magnetization, so this is just the mean magnetization in some direction, and compare this under the two evolutions with h and h tilde, then you get something that is much more regular. So here are two curves, one for sigma z, the other for sigma y, and these are for all the different system sizes. They clearly don't depend on the system size, because you're taking the mean magnetization, so this is independent of n. This is just to illustrate, in a very naive, simple model, that sometimes asking for the entire unitary is demanding too much. This is also why, for example, we are really interested in this context in phase diagrams, because a phase is something that is very stable.
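This toy model is easy to check numerically; the following sketch uses hypothetical values for h, h tilde and t, and contrasts the worst-case unitary error, which grows with the number of spins, against the error in the mean magnetization, which does not:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

h, h_tilde, t = 1.0, 1.05, 2.0   # hypothetical field values and time

def kron_all(ops):
    return reduce(np.kron, ops)

def unitary(n, field):
    # product of identical single-spin rotations about z
    return kron_all([expm(-1j * t * field * sz / 2)] * n)

def unitary_error(n):
    # worst-case (spectral-norm) distance between the two evolutions:
    # this grows with the number of spins n
    return np.linalg.norm(unitary(n, h) - unitary(n, h_tilde), 2)

def mean_mag_error(n):
    # difference in the mean magnetization Mx = (1/n) sum_i sx_i between
    # the two evolutions, for all spins initially polarized along +x
    plus = np.ones(2) / np.sqrt(2)
    psi0 = kron_all([plus] * n).astype(complex)
    Mx = sum(kron_all([sx if j == i else np.eye(2) for j in range(n)])
             for i in range(n)) / n
    vals = []
    for field in (h, h_tilde):
        psi = unitary(n, field) @ psi0
        vals.append((psi.conj() @ Mx @ psi).real)
    return abs(vals[0] - vals[1])
```

The observable error is literally the same number for every n, while the unitary error keeps growing.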
We are not interested in the completely precise state that we have at some point in the phase diagram; we want to know the magnetization or correlations and things like that. So this was a really simple model. Now, what about quantum many-body systems, connecting to this digital quantum simulation? So let's look at this. We first did some numerical checks to see what is happening, making this thing more interesting: now we let the spins interact. We still have this magnetic field, we add an Ising interaction between nearest neighbors, and, to make it more interesting still, we also add a longitudinal field. So the system is non-integrable, a generic many-body system. And now we numerically simulate the digital quantum simulator. We're doing an exact time evolution by chopping up this Hamiltonian into the different gates: we apply the z gates, then the x gates, then z, x, z, x, and so on, for short time steps, and repeat this many, many times. Now we want to quantify how well this works, so we need something to look at, and what we look at is the energy, measured with respect to the target Hamiltonian. So what we have here is an initial state. It doesn't need to be an eigenstate of the target Hamiltonian; it could be anything. We evolve it exactly with this chopped-up time evolution. Here I introduce the small time step tau, which is just the entire time t divided by the number of steps, so it's just a different notation; the important thing later on will be this time step tau. We apply this time evolution to the state, and then we measure the energy in the time-evolved state. And this is a good quantifier, because if we did the ideal time evolution, we would evolve with e to the minus i H t; these things commute, and we would just get a constant. So now we can compare how much energy we pump in, and we define something like a simulator fidelity.
As up here, this is the heating above the ideal evolution. So we compare this thing to the ideal one, to this constant, and then we normalize it to the case of infinite heating, that is, an infinite-temperature state, where we have just distributed over the entire spectrum, which would mean the quantum simulator is maximally out of control. We do this because then we can conveniently normalize: the maximum that we can reach is 1, so the worst case is bounded at 1, and the ideal case gives us 0. OK, so let's look at what we get in the numerics for this Ising chain in a longitudinal and transverse field. We have done numerics for more models, but I'm just going to show this one; all of them behave qualitatively very similarly. So here we plot this heating, the simulator fidelity, as we change the Trotter step size, and we look at very long times, so we take the infinite-time limit, just to get, so to say, the worst case. Now look at what happens as a function of the Trotter step size. There are a few limiting cases that we can understand immediately. If the step size is exactly 0, then, as I showed before, we're doing really the ideal time evolution, so the energy should be constant and this thing should be 0. Whereas if you have a finite Trotter step size, this means that at each change of the gates you do a small quantum quench: you let the system equilibrate a little bit, you do again a quench, let it equilibrate, and so on. So effectively you hit it, pump in energy, let the energy distribute a bit, and repeat this many, many times. And you expect that if you repeat this for many, many cycles and wait long enough, you will have infinite heating. So these two points are somehow clear. But the main question is: how does it go from one to the other?
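A minimal version of this numerical experiment can be sketched as follows; the chain length, couplings, and Trotter steps are illustrative choices, not the exact values behind the plots:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# Illustrative parameters for a short Ising chain with transverse (h) and
# longitudinal (g) fields
L, J, h, g = 6, 1.0, 0.9, 0.8
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site):
    mats = [np.eye(2, dtype=complex)] * L
    mats[site] = single
    return reduce(np.kron, mats)

Hz = sum(J * op(sz, i) @ op(sz, i + 1) for i in range(L - 1)) \
   + sum(g * op(sz, i) for i in range(L))
Hx = sum(h * op(sx, i) for i in range(L))
H = Hz + Hx                                    # target Hamiltonian

psi0 = np.zeros(2 ** L, dtype=complex)
psi0[0] = 1.0                                  # all spins up
E0 = (psi0.conj() @ H @ psi0).real             # energy of the initial state
E_inf = 0.0                                    # Tr H / 2^L: Pauli strings are traceless

def late_time_Q(tau, cycles=400):
    """Simulator fidelity Q = (<H> - E0) / (E_inf - E0), averaged over late cycles."""
    U = expm(-1j * tau * Hx) @ expm(-1j * tau * Hz)   # one Trotter cycle: z gates, then x
    psi, qs = psi0.copy(), []
    for n in range(cycles):
        psi = U @ psi
        if n >= cycles // 2:
            E = (psi.conj() @ H @ psi).real
            qs.append((E - E0) / (E_inf - E0))
    return float(np.mean(qs))
```

A fine step keeps Q small at late times, while a coarse step heats toward Q near 1.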
It could be that at long times the ideal point is only a pathological point, so that you always get infinite heating and only exactly at zero step size are we perfect. That would be one possibility; down here you see that there is something else coming, so this is not what happens. The other possibility could be a very smooth crossover from one to the other. But what we find is actually this: when you're doing a very coarse Trotter step, we get infinite heating, but then there's a very sharp threshold below which the behavior becomes very smooth. It's also interesting to note the system-size dependence: up here we see that there is some system-size dependence at this sort of transition, but down here, in the regular regime, there's no system-size dependence anymore. And there, as you can see on a log-log plot, the behavior is actually polynomial, so things become perturbative; we will understand this a little bit more. It's interesting to compare this to the Lloyd bound, which had the scaling with n built in, so we wouldn't expect a regime where things become independent of n; one would also expect that if you wait long enough, everything goes wrong. Of course, let me stress: this bound is not wrong. It's just looking at something that is very, very demanding, namely being 100% precise on the full unitary, whereas we want to be precise on physical observables. So we are demanding less, but still answering relevant questions. So we see this very sharp transition between two regimes. And we observe this not only in the energy, but also in other observables like the magnetization. So for example, here, again, is the Trotter step size.
You take the mean magnetization at large times. If you did everything perfectly, for this particular quench that we're looking at, you would expect some value slightly above 0.8; if you have infinite heating, so you distribute over all states, the mean magnetization will vanish, so you would expect to be down here. And again, we see this very sharp threshold here. Now the big question is, of course: can we understand where all of this comes from? For this, we tried to do some analytics and connect it to a physical phenomenon. And the first step, which helped a lot in this interpretation and came out of discussions with Anatoly Polkovnikov, is to realize that this sequence of gates is really a periodically driven system. You're applying z gates, x gates, z, x, z, x, so it's really like a periodically driven system. And now you can reuse results on heating dynamics in periodically driven systems and apply them here. Importantly, you have this small period tau, this small time step, which gives you a small expansion parameter, and this allows you to do analytical calculations. So let me see. Before, we saw there is this transition in the system as a function of how hard, or how fast, we drive it. And there's actually a very interesting classical analog, which is described very well in this paper: the so-called Kapitza pendulum. Maybe some of you know it, but I found it very interesting. So let me see, I think I have to close this. OK, so let's go back. What is this Kapitza pendulum? This thing here is a pendulum that is mounted on a support, and this support can move up and down, so you can periodically drive it. And now let's see what happens.
If you don't drive it, you see that if you perturb it, it just goes out of equilibrium and moves around; it's an ordinary pendulum. But now you turn on the driving: you see it becomes blurry, it goes up and down. And now suddenly it becomes stable: if you put it up there, it gets stabilized by this fast driving. So you can look at the transition, as a function of driving frequency, between something that is just moving around and something that is very stable. This is just a classical system, but you can actually draw a connection. The first point is that there are two regimes, as we have observed: a fast-driving regime, with small Trotter steps, where things become stable, and a slow-driving regime, where things are unstable. But there is actually a stronger connection than this apparent one, which has been described exactly here, and which is given by the so-called Magnus expansion that you can do for periodically driven systems. The main idea is the following: we have here a sequence of gates, and we rewrite this; it is a unitary operator, maybe a very complicated one, but in any case some unitary. So we can always write it as e to the minus i t times an effective Hamiltonian, which we call calligraphic H, and which will depend on this time step tau, t over n. And exactly for small time steps, one can do what is called the Magnus expansion and write down this effective Hamiltonian as a perturbative series. At zeroth order, we get just the time average; at the next order, we get the commutators of the different elementary gates that we are applying, and then the higher-order commutators. In some sense, for this special case, it's something like the Baker-Campbell-Hausdorff formula applied to many gates.
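The zeroth- and first-order terms of this expansion are easy to verify numerically; in this sketch (a two-qubit example of my own choosing), the exact effective Hamiltonian is obtained from the matrix logarithm of one cycle and compared to the truncated series:

```python
import numpy as np
from scipy.linalg import expm, logm

# Small two-qubit example (illustrative): a diagonal and an off-diagonal part
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H1 = np.kron(sz, sz)
H2 = np.kron(sx, np.eye(2)) + np.kron(np.eye(2), sx)

def h_eff(tau):
    """Exact effective Hamiltonian of one cycle, defined by U = e^{-i tau H_eff}."""
    U = expm(-1j * tau * H2) @ expm(-1j * tau * H1)
    return 1j * logm(U) / tau

def magnus_error(tau, order):
    """Norm distance between H_eff and its Magnus/BCH truncation."""
    approx = H1 + H2                  # zeroth order: the time average, i.e. the target
    if order >= 1:                    # first order: commutator of the gates
        approx = approx - 0.5j * tau * (H2 @ H1 - H1 @ H2)
    return np.linalg.norm(h_eff(tau) - approx, 2)
```

Halving tau halves the zeroth-order error but quarters the first-order one, as the series structure predicts.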
Okay, but this gives a handle on describing the system. You can again compare this to the bound due to Lloyd: what we're doing here is a perturbative expansion upstairs, in the exponent, whereas in that bound the comparison is done directly on the unitaries, downstairs. The difference is that what we're doing is essentially resumming this perturbative series, taking an infinite number of terms and resumming them into an exponential. So we are taking, so to say, more orders into account, at least partially. This is also why the Magnus expansion is very powerful for treating time-periodic systems. Now, having this Magnus expansion, we can go further and study what it means for our system. Again, we have this Magnus expansion, and as I said, the zeroth-order term is just the time average, which is the target Hamiltonian. So we see that as we go to small tau, we actually obtain an approximate constant of motion in our system. The consequence is that for small tau the energy will be almost conserved: it turns out that the energy spreads only very slowly, and in finite systems it even stops, so there's a bounded spread of energy, which, in a different context, has been written down in this paper here. If you have a large Trotter step, this expansion doesn't work anymore and you don't get this approximate constant of motion; what happens is that energy is not conserved, and at each step you pump in energy. And this can explain our two regimes. Well, this is for the energy, but we also wanted to understand better why all observables down here behave in a perturbative way. So we can look at the next-order correction in this Magnus expansion and actually do a perturbative expansion for the observables. One way to do perturbation theory here is something like linear response theory.
The idea is: you have this periodic sequence of gates, applying H1, H2, H1, H2, if there are only two. Now you can describe this by a time-dependent Hamiltonian: you just rewrite this sequence as an average Hamiltonian plus a square wave that adds or subtracts the other parts. Essentially, in the first time step it adds half of H1 and subtracts half of H2, so we're actually doing only H1, and in the next step the other way around. It's just a fancy way of writing this sequence of turning on H1 and H2 alternately. But this looks exactly like a system that one treats with linear response theory: we have some mean Hamiltonian, and we add some perturbation at a given frequency. So we can now do exactly the same as in Kubo's paper from '62, exactly the same derivations, essentially using the assumption that our state remains close to the unperturbed state. This is what enters the linear response calculation, and we know that in this energy-localized phase we can do this, because we remain close to this initial state, right? Then we can do this theory and look at observables. There is a slight generalization, because even in the ideal case the observables can evolve in time, so we're not at equilibrium; but now we compare what happens in the non-ideal case, how it differs from the ideal one. One then gets an expression, which I didn't write down because it's too long, but if you take the limit of large times, it comes down to a very simple expression: it's just the expectation value of the commutator of the observable that you're looking at with the perturbation that you add to this mean Hamiltonian, and there is a linear dependence on tau at first order. So you can do a systematic expansion in orders of tau for arbitrary local operators, and then we understand why the magnetization, for example, also behaves in this perturbative way.
Okay, good. So we understand from this very well why down here everything is robust. And the nice thing is what this means for the experiments that Rainer Blatt is doing, or John Martinis. So actually, this is Esteban Martinez, not the same guy; he's a PhD student in Rainer Blatt's group. These experiments have actually a very nice prospect, because it means they don't need to be down here at tau equal to zero; they just need to be good enough, so they need to be in this stable regime, they just have to cross this threshold. And once they are here, they could think about taking several points at values of tau that they can access and then make a very systematic extrapolation. This also shows why, even though they have very coarse Trotter steps in this experiment, they look very close to theory, so this is, so to say, justified. I should also mention, maybe this was too fast in the previous slides, that this threshold happens on a scale where tau is proportional to one over the interaction strength. So it's not an insanely small scale that you have to reach; it's something on the order of one. Of course, the interesting thing would be to also predict this threshold. The problem is that you need to evaluate the breakdown of the Magnus expansion, and there have been various works, and I've tried to look at this, but actually you would need to go to arbitrarily high order, because the breakdown can happen at any order. And if you solve that problem, you're actually solving the many-body problem, so you don't need the digital quantum simulator anymore; so that's a bit of a problem. But we can get some idea of when this breakdown should happen in the worst case, which is what I explain here. So here is the frequency range with which we are driving the system: up here is the good regime, where we have a small Trotter step; down here it's bad. And we can overlay this with the many-body
spectrum. Up here, at very large frequency, we saw that this is a good regime, and we know that this works: once the frequency is larger than the many-body bandwidth, we are not driving any resonances, so we can do this perturbation theory, and there will really be no heating at large times. And we can do this because we have a spin system here, so it has a restricted bandwidth, unlike bosons, where it would go to infinity. But once we are no longer above the bandwidth but somewhere in the band, two things can happen. One is, if we are still large compared to a scale that we get from perturbation theory, which is matrix elements divided by energy differences, and this is typically something like a single-particle energy, a single spin flip or so, so a quite small energy scale. If you're in this region, we can do the perturbative expansion at low orders, but we are not safe from a breakdown at very high orders. At very high orders we can hit resonances, and this has been nicely explained in this work by Abanin, which showed that from this you can get heating, but it will be very, very slow, on logarithmic timescales. There have been more works on this, but I think this was the first one. Actually, in our numerics we don't see this regime, and it's not very clear to us why, because we go to very large times, tens of thousands of cycles, so we think we should see this type of heating. But it could also be that our systems are just too small to really find it, because this argument is for large systems. And then, of course, everything goes wrong once you are no longer in this perturbative regime: you just hit resonances and things heat up immediately. Good. And finally, I don't know, how much time do I have?
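The systematic extrapolation mentioned a moment ago can be sketched like this: measure a local observable at a few accessible step sizes, fit in tau, and read off the tau = 0 value; all parameters here are illustrative choices:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# tau -> 0 extrapolation of a local observable for a short Ising chain
# (illustrative chain length, couplings, time and step sizes)
L, J, h, g, T = 4, 1.0, 0.9, 0.8, 2.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site):
    mats = [np.eye(2, dtype=complex)] * L
    mats[site] = single
    return reduce(np.kron, mats)

Hz = sum(J * op(sz, i) @ op(sz, i + 1) for i in range(L - 1)) \
   + sum(g * op(sz, i) for i in range(L))
Hx = sum(h * op(sx, i) for i in range(L))
Mz = sum(op(sz, i) for i in range(L)) / L
psi0 = np.zeros(2 ** L, dtype=complex)
psi0[0] = 1.0

def trotter_mz(tau):
    """<Mz> at time T from the trotterized evolution with step tau."""
    n = int(round(T / tau))
    U = np.linalg.matrix_power(expm(-1j * tau * Hx) @ expm(-1j * tau * Hz), n)
    psi = U @ psi0
    return (psi.conj() @ Mz @ psi).real

psi_ex = expm(-1j * T * (Hz + Hx)) @ psi0
mz_exact = (psi_ex.conj() @ Mz @ psi_ex).real

# measure at a few accessible step sizes, fit in tau, read off the tau = 0 value
taus = np.array([0.2, 0.1, 0.05])
vals = np.array([trotter_mz(tau) for tau in taus])
mz_extrap = np.polyval(np.polyfit(taus, vals, 2), 0.0)
```

Because the deviation is a smooth polynomial in tau inside the stable regime, the extrapolated value beats the raw coarse-step measurement.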
One minute, okay, so the quantum chaos part needs to be very fast. Just to say that you can understand this heating transition as a transition to quantum chaos, getting motivated from single particles. There is this nice book by Fritz Haake from some time ago, and what he looks at is the kicked rotor: you have a particle on a ring that moves, with a kinetic energy, and then at periodic times, multiples of this time tau, you give it delta kicks, which just add a potential on this ring. So what this does is kick the particle, it moves a little bit, then it gets kicked again. And if you look at the energy as a function of the period, you find this plot here. If you have large periods, you kick the system, you allow it to move, and then you kick it again at some other point, so it just gets randomly kicked around the circle and it accelerates: you get heating. Whereas if you do it very fast, you do not allow the particle time to move, you're kicking it all the time, and it actually gets localized. So you see this transition: if you kick it fast, we get localization; if you kick it slowly, we get this heating behavior. And okay, just in the interest of time, I'll just say that this is a prime example of quantum chaos in a single-particle system, and we can see the same, I'll just note, in a many-body system. This is an observable that shows that here the behavior is actually chaotic and here it is regular. So we can interpret this breakdown in this many-body sense, in the same way, and it also has the same threshold behavior. Okay, but let me conclude. What I've shown you is that this digital quantum simulation actually works much better than what one would maybe think, having in mind that this is for local observables. We have
this very sharp threshold. It's important, and we can connect it to the physical phenomena of energy localization and the transition to quantum chaos. And finally, we understand very well what is going on down here, because we can do perturbative expansions. And a final remark: this is not only relevant for digital quantum simulation but also for classical computers, when you're doing a trotterized calculation with tensor network methods; the same considerations apply, and we have a paper on that in preparation. And with this, thank you.
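The kicked-rotor transition described above can also be sketched in a few lines; here the kick strength is scaled with the period so that the time-averaged Hamiltonian is a fixed pendulum, mirroring the Trotter setting, and all parameters are illustrative:

```python
import numpy as np

# Quantum kicked rotor as a "trotterized pendulum": the target Hamiltonian is
# H = p^2/2 + g*cos(theta), kicked with period tau and kick strength g*tau
N = 256
m = np.fft.fftfreq(N) * N                 # integer momentum grid
theta = 2 * np.pi * np.arange(N) / N      # angle grid
g = 2.0

def mean_p2(tau, kicks=100):
    """<p^2> after a number of kicks, starting from the m = 0 plane wave."""
    psi = np.zeros(N, dtype=complex)
    psi[0] = 1.0
    free = np.exp(-1j * tau * m ** 2 / 2)          # free rotation, momentum basis
    kick = np.exp(-1j * g * tau * np.cos(theta))   # delta kick, angle basis
    for _ in range(kicks):
        psi = free * psi
        psi = np.fft.fft(kick * np.fft.ifft(psi))  # apply kick in angle space
    prob = np.abs(psi) ** 2
    return float(np.sum(m ** 2 * prob) / np.sum(prob))
```

Fast kicking keeps the energy bounded near the pendulum value; slow kicking lets it grow far beyond it, the heating side of the transition.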