...in Bologna, I'm back in Bologna. I'm laid up, as you can see from the photo. I wanted to discuss some scientific things, but maybe we can't meet here in Bologna? Yes, absolutely. I'll tell you, I have this cast that I'll keep for quite a while longer... I'd also gladly come visit you, if you want to work together. Let's see, I'll write to you and we'll arrange it, yes, yes, perfect. Let me move the microphone here for a moment. Yes. So the first speaker is Lorenzo Campos Venuti from USC, who is online, unfortunately, and I think he will tell us why. Or at least the picture, I think, makes it very clear. So, Lorenzo, you can start whenever you want, I'll start the recording. Grazie, Elisa. Thank you very much. Thanks to the organizers for the very kind invitation. It's a pity I cannot be there with you to enjoy this conference, but unfortunately, as you can see, I broke my leg. This is what happens when you do these things, as you can see on the left. But it's nonetheless very nice to participate at least remotely. So I'm going to talk about the adiabatic theorem for open systems and possible ways to use it to gain some acceleration and speed-up. Given the title of the conference, it seems to be a very fitting topic. Maybe even too fitting, and you may worry that I'm going to tell you about things you are already very familiar with. Well, I can't assure you that you won't fall asleep, but I'll do my best to keep you entertained. So without further ado, this is the outline of my presentation. I'll first review the adiabatic theorem for open and closed systems. Then I'll present the idea of boundary cancellation, discuss some generalizations, and then report on implementing these new methods on a D-Wave device. So let's start, and as promised, I'm going to start very simple, with the simplest and probably also the most important differential equation of quantum mechanics. So this is a first-order differential equation which is linear.
This operator L is a linear operator, and I'm actually always going to work in a finite-dimensional space, so this is a bounded operator acting on some vector, which here I indicate with rho, and which could be, for example, the density matrix of our system. This generator L depends on time and on some other timescale tau, which I indicated here. Well, you can find the solution of this differential equation, at least formally; linearity allows us to write the solution in a compact form. The interesting thing is that the solution is unique, and this holds under very general conditions. For example, one simple condition is that this integral converges; the same is true under more general conditions, but we won't need them. So that's a simple differential equation, but one which nonetheless encompasses most of the physics we deal with in quantum mechanics. The adiabatic expansion is about the solution of this differential equation when the generator L varies slowly. What do I mean by that? As it stands, "varying slowly" is not a precise statement. And in fact, here I'm quoting Arnold, who warned us that without further assumptions, adiabatic invariance is wrong. So to make it precise, I'm going to assume the following: the generator L depends on this external timescale tau in this form, where tau is basically the total annealing time. Of course, when tau is very large, this gives you the intuition that the generator L varies slowly. You change variables from time to this rescaled time, and then you see that tau becomes just a parameter of the differential equation and comes out in this form here. So now the problem is well defined, and we want to find the solution of this differential equation. What happens to the solution when tau goes to infinity?
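A compact restatement of the setup just described (the notation below is mine; the slides are not reproduced here): the change of variables turns slow driving into a perturbation problem in the small parameter epsilon.

```latex
\frac{d\rho}{dt} = \mathcal{L}\!\left(\frac{t}{\tau}\right)\rho ,
\qquad s \equiv \frac{t}{\tau},\quad \varepsilon \equiv \frac{1}{\tau}
\quad\Longrightarrow\quad
\varepsilon\,\frac{d\rho(s)}{ds} = \mathcal{L}(s)\,\rho(s),
\qquad s \in [0,1].
```

Setting epsilon exactly to zero removes the derivative term entirely, which is why this is a singular perturbation problem.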
Or in other words, what happens when epsilon, which is tau to the minus one, goes to zero? Mathematicians call this kind of perturbation expansion a singular perturbation, because, you see, if I set epsilon exactly equal to zero, then the left-hand side here is zero, so this is no longer a differential equation. Actually, this is going to tell us something about the adiabatic limit. So these are somewhat complicated perturbation series to develop. In this form, as I said, the setting is completely general: I could be talking about the Schrodinger equation, for example, or L could be a genuine Liouvillian and rho a genuine density matrix in your favorite Lindblad master equation. So which of these two settings should we consider? Well, because of the third law of thermodynamics, we can never really achieve zero temperature. So whenever we deal with physical systems, we should really be talking about open systems, for example with a Lindblad description or something else. In this case, the instantaneous steady state, which I indicate here with sigma, is most likely a thermal state, like the Gibbs state I wrote here, and the rules of the game are now not about preparing ground states, but about preparing Gibbs states, for example. This can still be hard classically, so it is still an important task. And of course, if the temperature is smaller than the gap of the Hamiltonian, the Gibbs state is close to the ground state, so we basically recover the zero-temperature limit. In this setting you can also ask a meaningful question. Say you have a speed-up in the closed-system case. Then you can ask yourself: what happens when I open the system, when I let my system interact with the bath? Will the speed-up survive?
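For reference, the instantaneous steady state mentioned above can be written as the Gibbs state at inverse temperature beta (my notation, not taken from the slides), which approaches the ground-state projector when the temperature is small compared to the gap:

```latex
\sigma(s) \;=\; \frac{e^{-\beta H(s)}}{\operatorname{Tr}\, e^{-\beta H(s)}},
\qquad
\sigma(s) \;\xrightarrow{\;k_B T \,\ll\, \Delta\;}\; |g(s)\rangle\langle g(s)| ,
```

where |g(s)> is the instantaneous ground state and Delta the Hamiltonian gap.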
Well, this is a very complicated question, also because when you talk about a speed-up, it means you have in mind a classical algorithm, right? So you are comparing your quantum adiabatic preparation with some yet-to-be-specified classical algorithm. And it's complicated because you don't really know if you're comparing apples with apples. Nonetheless, we gave an argument in this paper that if there is a speed-up at zero temperature, the speed-up may survive when the temperature is of the order of the Hamiltonian gap. The argument is in this paper. This is not necessarily a no-go theorem, in the sense that in some cases being open, so interacting with the environment, may even help: you can actually achieve speed-ups where there was no speed-up before. In fact, I will give a few examples of this behavior in the following, namely cases where the open-system case, so to speak, behaves better than the closed-system case, which is a bit counterintuitive. So, good. We now want to generalize the adiabatic theorem to open systems. In this case, L will be a general Liouvillian that describes our system after we have traced out the bath. And I'm going to assume what is sometimes called an ergodic or primitive Liouvillian: basically, I'm going to assume that the steady state of this Liouvillian is unique, and it is this state sigma(s). These are instantaneous eigenstates of the Liouvillian, that is, eigenvectors with zero eigenvalue of L at rescaled time s. Instead, I indicate with rho(s) the state evolved under the dynamics, starting from the steady state at time zero. So ideally we prepare our thermal state at time zero, then we turn on our adiabatic algorithm, and we end up with rho(s). But the state we want to prepare is sigma. So how does the adiabatic theorem work?
We have to generalize the standard adiabatic theorem that we learn in quantum mechanics courses to this setting. Quite surprisingly, when I was dealing with this problem — it was a few years ago, in 2016, when adiabatic quantum computing was already very well developed — there was no generalization to open systems at that time. So what do you do? You go to your office and study the papers, mainly by Kato, and I was able to generalize the proof to the open-system case. This is the result. So, under some standard assumptions — by standard assumptions I mean regularity assumptions that assure you the differential equation has a unique solution; these are super general — and then a slightly more technical assumption: I assume that this L generates a contraction semigroup, namely that the norm of this object here is at most one. This is more general than unitaries, and actually even more general than Liouvillians that generate a completely positive map, and at the same time it's easier to check. So this is technical, but it's a physical assumption that assures that our Liouvillian produces physical states. And then comes another very physical assumption: that the Liouvillian has a gap. Now the gap is a complex parameter, since L is no longer Hermitian, but nonetheless we require that there is a gap above the zero eigenvalue for all s in [0, 1]. Then you can prove the following: the distance between the exact state you end up with — this rho(1) at the end of the evolution — and the state you want to prepare, the instantaneous steady state at the end of the evolution, is bounded by some constant divided by the total annealing time. So if you make the total annealing time large, you converge to the instantaneous steady state.
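Schematically, the open-system adiabatic theorem just stated can be summarized as follows (my paraphrase of the three assumptions and the bound; the norms and the constant are indicative):

```latex
\underbrace{\mathcal{L}(s)\,\sigma(s) = 0,\ \sigma(s)\ \text{unique}}_{\text{ergodicity}},
\qquad
\underbrace{\|e^{t\mathcal{L}(s)}\| \le 1}_{\text{contraction}},
\qquad
\underbrace{\Delta(s) > 0\ \ \forall s\in[0,1]}_{\text{gap}}
\;\;\Longrightarrow\;\;
\|\rho(1) - \sigma(1)\| \;\le\; \frac{C}{\tau}.
```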
And actually you can also work out explicitly what this constant is, and derive many interesting things. As I said, this setting is more general than CP maps, so it includes, for example, Redfield master equations, which do not necessarily generate positive states but are nonetheless useful in some situations, and it includes the unitary case as a special case. So, okay, good. That's the adiabatic theorem in its most standard form. Can we do better? Namely, can we make this error smaller — instead of having it decrease as one over tau, can we make it decrease faster? It was actually known for a long time that in the closed, unitary or Hamiltonian, case, if you engineer the schedule in a particular way, you are able to make this adiabatic error — this distance between the exact state and the state you want to prepare — decrease as an inverse power of this annealing parameter tau, to the power k plus 1, where k is the number of vanishing derivatives of the Hamiltonian at the beginning and at the end. There are ways to give an intuitive picture of this requirement: basically you require that the Hamiltonian, or the schedule, starts in a flat way and ends in a flat way. So at the beginning all the derivatives up to some order k are zero, and at the end all the derivatives up to order k are zero. If this is the case, you can make the error smaller, and this would be highly beneficial. So can we generalize this to the open-system setting? Again, you go back to the office and try, and the answer is yes, indeed you can. And here there is actually an interesting difference, one of those differences mentioned before between closed and open systems.
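The boundary-cancellation scaling just described can be illustrated on a toy model. The sketch below is not the talk's Liouvillian: it is a two-level classical rate equation (an assumption on my part, chosen so the unique instantaneous steady state is known in closed form), driven by schedules with k vanishing derivatives at the endpoints; the adiabatic error at s = 1 should shrink roughly as tau to the minus (k + 1).

```python
# Toy demo of boundary cancellation: eps * dp/ds = L(s) p for a
# two-level rate equation with known instantaneous steady state.
import numpy as np
from scipy.integrate import solve_ivp

SCHEDULES = {  # k vanishing derivatives at s = 0 and s = 1
    0: lambda s: s,
    1: lambda s: 3 * s**2 - 2 * s**3,
    2: lambda s: 10 * s**3 - 15 * s**4 + 6 * s**5,
}

def generator(s, k):
    f = SCHEDULES[k](s)
    a, b = 1.0 + f, 2.0 - f          # transition rates; gap a + b = 3
    return np.array([[-a, b], [a, -b]])

def steady(s, k):
    f = SCHEDULES[k](s)
    return np.array([2.0 - f, 1.0 + f]) / 3.0   # null vector of generator

def adiabatic_error(tau, k):
    """L1 distance between the evolved state and the final steady state."""
    rhs = lambda s, p: tau * generator(s, k) @ p   # eps = 1/tau
    sol = solve_ivp(rhs, (0.0, 1.0), steady(0.0, k),
                    rtol=1e-10, atol=1e-12)
    return np.abs(sol.y[:, -1] - steady(1.0, k)).sum()

for k in (0, 1, 2):
    e10, e100 = adiabatic_error(10.0, k), adiabatic_error(100.0, k)
    print(k, e10, e100, np.log10(e10 / e100))  # decade slope ~ k + 1
```

Flatter schedules (larger k) give a visibly faster decay of the error with the annealing time, mirroring the tau^-(k+1) scaling quoted in the talk.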
Here, to obtain the same result — this improved, accelerated scaling with the annealing parameter tau — you only need the Liouvillian to have vanishing derivatives at the end. There are several reasons for this, most of which are technical, but ultimately the reason is that the Liouvillian generates an arrow of time because of its non-invertibility. And this is what makes zero and one different. Of course, in the unitary case we could have started from the end and ended up at zero, and this would be a valid dynamical evolution. This is no longer the case here, because the inverse of our generator wouldn't generate quantum states, wouldn't generate CP maps. So that's nice, okay, good. Maybe we can use this to achieve better convergence. And you can ask: okay, this is true in general, but can I ever achieve it? Can I engineer a situation where I do have vanishing derivatives of the Liouvillian at the end? Because when you deal with Liouvillians — and here, for example, I'm mentioning a realistic one, a very common Liouvillian derived by Albash and others some time ago, also called the adiabatic master equation — it's essentially a Liouvillian in time-dependent Davies form, where these A operators here are the system-bath operators, so they keep memory of the bath, and the gammas are essentially the bath correlation functions. So there is some information about the bath hidden here, not only in these A operators. So the question is: can we engineer such a Liouvillian to have vanishing derivatives at the end — without controlling the bath, of course, because that is by definition something we cannot control? And the answer is yes, fortunately, so that's good.
Essentially, it suffices to engineer the system Hamiltonian, which is this H(s) here, to have vanishing derivatives at the end; this implies that the overall Liouvillian has vanishing derivatives at the end, and as a consequence we will have this accelerated convergence to the steady state, the state we want to prepare. So that's good. Then you can ask whether this result is solid, because here I derived it for a particular Liouvillian of this special form, and if you are familiar with master equations, you know there are gazillions of different master equations — basically everybody has their own preferred master equation. So is the result somehow solid? Oh, and here, of course, this tells us that it allows us to reach the same accuracy with a smaller annealing time. As I said, the question is: is the result solid? Here I'm recalling, just schematically, the approximations employed to derive the Lindblad master equation: small system-bath coupling, assuming that the full density matrix factorizes, then the Born-Markov approximation. After this step you basically end up with the Redfield master equation. You can also do this in the time-dependent setting and derive a sort of adiabatic Redfield master equation. Then, if you apply a rotating-wave approximation, you get the Lindblad master equation, which is what I showed before. And numerics on the adiabatic Redfield master equation show that the previous result — this accelerated scaling of the adiabatic error — still works for Redfield as well. So that's good. And then, of course, one asks whether the result holds in the most general setting, where we assume unitary Hamiltonian evolution for the system plus the bath, with the bath unknown, and then we trace out.
But in this case, we must assume that the system Hamiltonian has vanishing derivatives also at the beginning, as I was showing before. By the way, let me just mention in passing that one of the benefits of using master equations is that we can discard the bath to a large extent. Also, implicitly, we are assuming that the bath is infinite, which is something we actually use many times, because if the bath is finite, then you have Poincaré recurrences and all sorts of somewhat pathological behaviors. So it's good, at least mathematically and conceptually, to have the bath sent to infinity. But that's an aside remark. So that's a theorem, so you should trust it. But if you don't, here are some numerics. Here, as I said, is this schedule, which is engineered to be linear at the end, or quadratic, or cubic, and so on — in such a way that it has k vanishing derivatives only at the end. These are simulations for a single qubit, but I also have simulations for several qubits, which I'm not showing here. And indeed, as you can see, the results match the theory very well. The theorem in itself is a bound, right? But this plot actually shows that the bound basically captures the correct scaling for large annealing time. It also tells you that this is a sensible way to decrease the adiabatic error, because if you set yourself at a certain total time, changing the schedule can give you an adiabatic error several orders of magnitude smaller. So that's nice. Then you may wonder: if this seems interesting, can we implement it, for example, on a D-Wave device? Before going to the actual experiment, I'm going to recall a little bit of theory.
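The flat-ended schedules described here can be generated systematically. One standard family with exactly k vanishing derivatives at both endpoints — an illustrative choice on my part; the talk does not specify how its schedules were built — is the regularized incomplete beta function:

```python
# Monotone ramps from 0 to 1 with k vanishing derivatives at s = 0 and
# s = 1, built from the regularized incomplete beta function I_s(k+1, k+1).
import numpy as np
from scipy.special import betainc

def schedule(s, k):
    """k = 0 gives f(s) = s; k = 1 gives 3s^2 - 2s^3 (smoothstep); etc."""
    return betainc(k + 1, k + 1, s)

# One-sided finite-difference slope at s = 1: ~1 for k = 0, ~0 for k >= 1.
h = 1e-4
for k in range(4):
    slope = (schedule(1.0, k) - schedule(1.0 - h, k)) / h
    print(k, slope)
```

Substituting such a schedule for the argument of the Liouvillian is the simplest way to enforce the boundary-cancellation conditions numerically.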
So, as you probably all know, the D-Wave machine implements a time-dependent system Hamiltonian made up of a transverse-field term Hx, multiplied by a schedule function usually called A — A starts large and goes to zero as time goes by — and a longitudinal term Hz, whose ground state encodes the solution of our computational problem, multiplied by a schedule function B, where B starts from zero and then progressively increases as time increases. This is essentially what the D-Wave machine does. Now let's try to get some insight into how the machine works, and model the D-Wave with the adiabatic master equation I showed before. If you talk to experimentalists long enough, you realize that the system-bath coupling is predominantly longitudinal, meaning the A operators I wrote before are basically Pauli sigma-z operators. And then we immediately face a problem, because at the end of the evolution, when the A parameter is zero, the system-bath coupling commutes with the Hamiltonian — at the end of the evolution the Hamiltonian is just along z, right? So the system-bath operators commute with the Hamiltonian at the end, and you can show that this implies that at the end of the evolution the zero eigenvalue, so the steady-state manifold, is at least D-fold degenerate, where D is the dimension of the Hilbert space. So it's a huge degeneracy. Presumably before that point the Liouvillian is ergodic, so there is a unique steady state — which means that the Liouvillian gap closes at the end. This is actually bad news, because we cannot use the adiabatic theorem I showed before. Sorry, I'm seeing now a question. Yes, there is a question in the chat; I don't know if you want to keep it for the end or answer now. There are two questions? Yes, yes. So maybe I should read the questions.
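The annealing Hamiltonian just described has the standard transverse-field form (the specific fields h_i and couplers J_ij below are generic placeholders, not the problem instance used in the talk):

```latex
H(t) \;=\; A(t)\, H_X \;+\; B(t)\, H_Z,
\qquad
H_X = -\sum_i \sigma^x_i,
\qquad
H_Z = \sum_i h_i\,\sigma^z_i \;+\; \sum_{i<j} J_{ij}\,\sigma^z_i \sigma^z_j,
```

with A large and B near zero at the start of the anneal, and the roles reversed at the end.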
I have two questions. Could an analogous relation on the trace norm between the state and the instantaneous thermal state be derived at any s? Yes, of course, there's nothing special about one. Here I'm showing the adiabatic error at the last point, s equal to one, but you can of course put yourself at any s, if you like. And the second question: is the Markovianity assumption strictly required? It's a good question. Not entirely, in the sense that — well, we have to agree on what we mean by Markovianity. As I mentioned before, these results are valid for Redfield master equations; let's say they work when the generator is time-local in this way. If you have full memory effects, then the generator is — actually, I take that back, because the Redfield master equation does have memory effects. So then the answer is yes. So, okay. As I said, here you see that the form of the noise, of the system-bath coupling, seems to be bad news, because the gap closes and so we cannot use the standard adiabatic theorem. So what do you do? Oh, sorry, I think I moved it — did you hear me? We hear you nicely. Okay, okay, I disappeared for a second. So, as I said, you cannot use the standard adiabatic theorem because the gap closes. What do you do? You go back to your office again and try to generalize the results to the case where the gap closes. First I consider the case where the gap closes at the end of the evolution, and you also enforce this boundary cancellation procedure: namely, you make the schedule flat in such a way that the Liouvillian has vanishing derivatives at the end. So the gap closes at the end, and at the same point you enforce boundary cancellation.
And then you find that the adiabatic error is now bounded by some constant over tau to some power eta, and this power eta has this special form, where alpha is the exponent that characterizes the way the gap closes at s equal to 1. You can note that this eta, as a function of k, is always larger than 1 over alpha plus 1, which is the case without boundary cancellation, where k is zero. And if you make the schedule infinitely flat, so you have infinitely many derivatives which are zero, the most you can get is that eta becomes 1 over alpha. So you see, this is somewhat bad news, though not entirely unexpected: it tells you that boundary cancellation in this case is only mildly effective, not really powerful. To be complete, you would also like to see what happens if the gap closes in the middle. Here, I've been stuck on this problem for a long time and unfortunately I couldn't prove it in full generality, but I have extensive numerics. So now I assume the same things — standard assumptions and so on — and the Liouvillian gap closes in the middle, but we perform boundary cancellation at the end. In this case, the adiabatic error scales as the inverse power of tau to the k plus 1. So this is the same as if there were no gap closing at all, the same as the gapped case. This is actually quite surprising, because for the closed-system case the result is that the adiabatic error scales with an exponent which is a fraction, basically this 1 over alpha plus 1, and all the benefits of boundary cancellation are gone. But for some strange reason, in the open-system case you keep this benefit. And here I have one of the numerics that shows this very well. As I said, this is one of those examples where the open-system case behaves better than the unitary case.
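The talk quotes two limits of the exponent eta: eta = 1/(alpha + 1) at k = 0, and eta approaching 1/alpha as k grows. One simple closed form consistent with both quoted limits — a reconstruction on my part, not read off the slide — is:

```latex
\eta(k) \;=\; \frac{k+1}{\alpha\,(k+1) + 1},
\qquad
\eta(0) = \frac{1}{\alpha+1},
\qquad
\lim_{k\to\infty} \eta(k) = \frac{1}{\alpha}.
```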
I can almost prove it, but not in full generality — I always must assume something more. So this is a little bit of work in progress. Anyway, for D-Wave we can never set ourselves in this setting, so this was just for completeness. Now, there is another phenomenon; if you are familiar with D-Wave, you have probably heard of it. It's the so-called freezing: the fact that the dynamics essentially stops at a freezing time, which is before the end of the anneal — even before the time where this A parameter goes to zero. I can give an explanation of this phenomenon based on the adiabatic master equation I wrote before. If you set yourself in the energy eigenbasis — namely, these N and M labels are instantaneous eigenstates of the Hamiltonian — then it is known that the diagonal elements of the density matrix evolve with the Pauli master equation, which is now time-dependent, where the rates W have this form here for D-Wave, with the same assumptions. Now, again because the system-bath operators are sigma-z and commute with the Hamiltonian at the end, you can show that the rates vanish at the end of the anneal, and in fact vanish with a large power. These W are the transition rates — they tell you how you deplete level zero and go to level N, or vice versa, jump from level N to level zero. Well, these transition rates vanish as a large power of A: A to the power 2Q, where Q is the Hamming distance between the ground state zero and the excited state N. So essentially these terms here are all very small even before A is zero, because, as I said, they vanish with this power. In fact, we can basically define the freezing time as the time where these rates become comparable to the inverse of the typical timescale of the system, which we can take to be the total annealing time.
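The freezing-time definition just described — the point where transition rates scaling as A(s)^(2Q) drop below the inverse annealing time — can be sketched numerically. The schedule A(s) below is a toy stand-in, not the actual D-Wave schedule:

```python
# Toy freezing-point estimate: rates scale as A(s)**(2*Q), with Q the
# Hamming distance; freezing occurs when the rate falls below 1/tau.
import numpy as np
from scipy.optimize import brentq

def A(s):
    return (1.0 - s) ** 2          # toy annealing schedule, A(1) = 0

def freezing_point(Q, tau):
    """Solve A(s)**(2*Q) == 1/tau for s in (0, 1)."""
    f = lambda s: A(s) ** (2 * Q) - 1.0 / tau
    return brentq(f, 0.0, 1.0 - 1e-12)

for Q in (1, 2, 4):
    print(Q, freezing_point(Q, tau=1000.0))
```

States further from the ground state in Hamming distance (larger Q) freeze out earlier in the anneal, which is the qualitative point of the argument.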
And this definition also agrees well with our numerical findings. In the paper I'm going to cite later, the explanation is a bit more complete, but this is the gist of the idea. So now, good — we have all the ingredients, so to speak, of the theory, and we want to try to implement this boundary cancellation on D-Wave. We have to avoid freezing, because after freezing the dynamics doesn't change: it doesn't matter if you perform boundary cancellation after freezing, because the system wouldn't see it. This is even true in the numerics, you know. So basically you have to set yourself before that, at a time where A is nonzero and before freezing, and then we engineer the schedules A and B to have approximately vanishing derivatives at the end, up to order k. This is achievable to some extent on some of the latest-generation D-Wave devices, the D-Wave 2000, which allows you to specify A and B as piecewise linear, with 11 points to engineer A and B. So you have very limited engineering capability, but nonetheless you have some. And so we engineer the schedule, with the capability that we have, to be flattish at this point where the A parameter is nonzero. Finally, we ramp to the final values required by D-Wave, after which we do measurements in the computational basis. This is, for example, a physical schedule that we used. You see we stop at a value of A — this is the A schedule, which doesn't go to zero; we stop here, essentially — and this one approximates a quadratic schedule. But of course, since these are piecewise linear, the theorem is not satisfied exactly.
But nonetheless, there is a result, which I haven't mentioned before, according to which, if you try to enforce boundary cancellation but only achieve it approximately — at least if your system is what you expect it to be — the adiabatic error still decreases. The scaling may not be the one you expect, but the adiabatic error is smaller. So there is theory that tells you this is nonetheless good. Unfortunately — well, it's part of the beauty of physics that the horse is not really a sphere — our D-Wave machine doesn't behave exactly as you would like. Namely, there are several Hamiltonian programming errors, also called integrated control errors: you end up with a Hamiltonian that is not the one you were trying to program, with some noise on it. Then there are discretization errors in the annealing schedule. There is a further phenomenon of anomalous heating, namely the fact that the temperature is actually not constant during the whole annealing procedure. Then there are the Markov approximations made in deriving the Lindblad equation, the form of the noise, which is unknown, and so on and so forth. Over most of these errors we have basically no control, and we don't really know the extent to which they play a role. The only one we do know is the discretization error in the annealing schedule — the fact that we approximate a quadratic function with this piecewise linear function with only 11 points. We checked numerically that this gives rise to large errors. Here we show results — I don't know if you can see them, but basically these are the exponents that you obtain, numerically, with the D-Wave piecewise-linear beta schedule, which implements boundary cancellation but with only 11 points — and you get these exponents: 0.7, 1.49, 1.7.
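The size of the schedule-discretization error mentioned here can be illustrated directly: approximate a smooth schedule by a piecewise-linear one with 11 control points versus many more, and compare the worst-case deviation. The quadratic target below is illustrative, not the actual D-Wave B schedule:

```python
# Discretization error of a piecewise-linear approximation to a smooth
# (here quadratic) schedule, with 11 control points vs 101.
import numpy as np

def target(s):
    return s ** 2                  # stand-in for a smooth schedule

def max_deviation(n_points):
    knots = np.linspace(0.0, 1.0, n_points)
    s = np.linspace(0.0, 1.0, 20001)
    approx = np.interp(s, knots, target(knots))   # piecewise-linear interp
    return np.max(np.abs(approx - target(s)))

print(max_deviation(11), max_deviation(101))
```

For a piecewise-linear fit the worst-case error scales as the square of the knot spacing, so going from 11 to 101 points shrinks it by roughly two orders of magnitude — consistent with the improvement in the measured exponents quoted next.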
And if you use many more points — this high-precision version — then you go from 1.5, let's say, to 1.6, and from 1.7 to 2.6 or 2.7. So you see that these discretization errors are large. And finally, I can present the results; in the paper there is much more, but I don't have time to present it. Here you see the ground-state probability for different schedules: schedules approximating the linear case, so they end in a linear fashion, with k equals 0, or with k equals 1, or k equals 2. And the results are — here, at least from the theory, the Liouvillian is gapped in principle, so you would expect the exponent eta to be 1 plus k, and instead you get these considerably smaller numbers. By the way, the top panels are just the native problem — this is a problem of 8 qubits, a ferromagnetic chain, essentially, with different couplings — and the panels differ in where we employ the boundary cancellation. Basically, freezing is at a point where s is more or less 0.5, and we do see that if we approach the freezing time, boundary cancellation doesn't change anything: here you see that these curves are basically all the same, so it doesn't really matter whether you employ boundary cancellation, because the system is already frozen.
If you move away a little from the freezing time — you stop a bit before it — then you start to see that boundary cancellation becomes effective, and here you see that the best results are obtained when s is roughly 0.45. The lower panels are the same results but now with quantum annealing correction, where basically you encode each qubit into three physical qubits, and you see that in this case you also get a good improvement in the exponent. The exponents are roughly half the expected value, but given all the sources of error that are present — including, as I said, the discretization error, which in this case likely plays a big role — these results are not so bad after all. And in hindsight, these values actually show you something: since they are not consistent with the gapless form of the exponent I showed before, they are basically evidence that our Liouvillian is gapped. And I think I can stop here. In conclusion, I showed that I generalized the adiabatic theorem to open systems, and in some cases it is actually better behaved than the closed-system version; and boundary cancellation on D-Wave is somewhat effective, in the sense that it can increase the ground-state population, and if you use it in conjunction with quantum annealing correction, it gives results which are better than either alone. So thank you for your attention, and I'm ready to take questions. Thank you. So I think we can start with questions from here — okay, Paolo. Hi Lorenzo, thanks for the talk, it was nice to hear all these old results and the new ones as well. I've got a question for you: in your theorem you have this constant C, and there's no k dependence in the notation, but I believe there must be one, because at least dimensionally, if you have a higher power of the time, the constant should change dimensions accordingly, right?
So my question is: do you have a general expression for the constant C as a function of the Liouvillian gap? Is it something like 1 over delta to the k-th power? In other terms, how does this prefactor change with the system size? And by the way, Lorenzo, delta in general, as you mentioned, is a complex number, but of course here it is a real one; are you taking the real part or the modulus of the gap? Okay, these are my questions, thank you.

Yeah, very good questions, actually. So yes, of course this constant C does depend on k, also because of the dimensional argument. The short answer is that in principle you could obtain it, but it would require really a lot of work, and you would have to redo that work for each different k; I don't know if you want to go through that pain. And by the way, even for k = 0 there is a lot of interest in the value of this constant, because it sets the adiabatic timescale; as of now, the timescale that you get from the adiabatic theorem usually over- or underestimates the true adiabatic timescale. But the nice fact, as suggested by this plot, is that this constant C, which probably grows with k, doesn't grow too much, in the sense that you do see the adiabatic error diminish. And finally, the gap: no, the gap is complex. What I wrote before was meant to be a complex number; it depends on the real parameter s, but the eigenvalues themselves are complex. So if the gap closes, it means that both the real and the imaginary parts vanish. Yes.

Another question from here. You showed that when the system-bath coupling is...
...when the bath couples to sigma z, so the system Hamiltonian and the interaction commute, and in that case the Liouvillian gap closes at s = 1. But you showed that the adiabatic theorem still holds when the Liouvillian gap scales as (1 - s) to the power alpha. So, can we estimate the exponent alpha as a function of the annealing schedule?

Yeah, good question. Actually, in this case alpha is the exponent that determines the speed at which the gap closes without boundary cancellation, because when you employ boundary cancellation you are basically rescaling s, so the exponent will then depend on k. So this is the way the gap closes without boundary cancellation, with k = 0 if you want. And coming to this exponent alpha, there are some interesting facts. In the unitary case, most level crossings are exact level crossings, where alpha is 1: two levels that cross linearly, like an X. So in the unitary, closed-system case, the most common value for alpha would be 1. For Liouvillians, alpha cannot be 1, because you cannot go below the zero eigenvalue. And there are theorems saying that if the dependence on the schedule is analytic, or at least sufficiently smooth, which is basically our setting, then alpha should be an integer. So the most common guess, I would say, is that alpha is 2; in fact, this is the value you observe for the single-qubit adiabatic master equation. Without further study or assumptions, this is the value I would imagine; you would need something else, some sort of symmetry or some other argument, for alpha to be larger. But 2, I would say, is the most probable value.

Okay, there is time for another question here. Hi, I have a couple of questions.
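[Editor's note] As a concrete illustration of the Liouvillian spectrum and gap discussed above, here is a minimal numerical sketch. The single-qubit model (H = (omega/2) sigma_z with a decay jump operator) is an editor's toy example, not the speaker's code; it shows that the spectrum is genuinely complex and how a gap is read off from it.

```python
import numpy as np

# Editor's sketch (toy model): build the vectorized Lindblad generator and
# read off the Liouvillian gap from its (generally complex) spectrum.
def liouvillian(H, jumps):
    """Column-stacking convention: vec(A X B) = kron(B.T, A) vec(X)."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for A in jumps:
        AdA = A.conj().T @ A
        L += np.kron(A.conj(), A) - 0.5 * (np.kron(I, AdA) + np.kron(AdA.T, I))
    return L

def gap(L, tol=1e-9):
    """Distance of the nonzero spectrum from the imaginary axis:
    min |Re(lambda)| over eigenvalues with lambda != 0."""
    ev = np.linalg.eigvals(L)
    return min(abs(l.real) for l in ev if abs(l) > tol)

# single qubit, H = (omega/2) sigma_z, decay at rate gamma:
# spectrum {0, -gamma, -gamma/2 +/- i*omega}, so the gap is gamma / 2
omega, gamma = 1.0, 0.4
H = 0.5 * omega * np.diag([1.0, -1.0])
sm = np.sqrt(gamma) * np.array([[0.0, 1.0], [0.0, 0.0]])
L = liouvillian(H, [sm])
print(gap(L))  # 0.2 up to numerical precision
```

Evaluating `gap` along a schedule s and fitting min |Re(lambda)| against (1 - s) on log-log axes is one way to estimate the exponent alpha for a given model.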
Firstly, how do you define the adiabatic error when you have degenerate ground states at the end of the annealing process?

Yeah, that's a good point: what is sigma 1? Very good question. In fact, this allows me to point out another respect in which the open-system case behaves better than the closed-system one. Basically, in that case you should define sigma 1 as the limit of sigma s as s goes to 1 from the left. Assume that sigma s is your thermal state at any s smaller than 1; then sigma 1 is the thermal state also in the limit. In the closed-system setting, if you have a level crossing with the ground state, the adiabatic theorem still works, but you don't end up in the ground state anymore; you end up in an excited state. The adiabatic theorem for open systems, instead, guarantees that you always stay, so to speak, in the ground state, that is, in the steady state.

And the second question is: you showed that if the gap closes in the middle of the annealing process, then your results are basically the same as for a gapped system, right? So I was wondering how much of this is related to the fact that in open systems the dynamics intrinsically tries to take the system towards the zero eigenvalue. After the gap closes in the middle, you let the system evolve for the rest of the annealing time, and the effect of the gap closing is basically smeared out, because the dynamics itself is trying to take the system to the ground state.

Yeah, absolutely. You can make up a lot of intuitive explanations, and I think yours is a good one.
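[Editor's note] The left-limit definition of sigma 1 in the answer above can be made concrete with a toy example (editor's illustration, hypothetical model): take H(s) = (1 - s) sigma_z, whose spectrum becomes fully degenerate at s = 1; the instantaneous Gibbs states then converge to the maximally mixed state, which is exactly what the limit prescription picks out.

```python
import numpy as np

# Editor's sketch (hypothetical toy model): sigma_1 defined as the limit of
# the instantaneous thermal state sigma_s = exp(-beta H(s)) / Z as s -> 1^-.
# Here H(s) = (1 - s) * sigma_z becomes degenerate at s = 1, and the Gibbs
# states converge to the maximally mixed state.
def gibbs(H, beta=2.0):
    w, V = np.linalg.eigh(H)
    p = np.exp(-beta * (w - w.min()))  # shift the spectrum for stability
    p /= p.sum()
    return (V * p) @ V.conj().T        # sum_i p_i |v_i><v_i|

sz = np.diag([1.0, -1.0])
for s in (0.9, 0.99, 0.999):
    print(s, np.round(np.diag(gibbs((1.0 - s) * sz)), 4))
# the populations approach [0.5, 0.5] as s -> 1
```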
When you actually have to prove this theorem, you have a series, and it looks a bit like magic, because there are terms in the series that diverge when the gap closes, and these should cause trouble, but for some reason they don't. So that's the technical difficulty: you have to find a smart way to deal with those terms.

Okay, I think that's all we have time for. Thank you, Lorenzo, again, for a very nice talk.