Sorry for the small delay; there has been a small glitch with the transport, but we're all back. So we have the pleasure to welcome Glenn Bank from the University of Innsbruck, who will continue the introduction to the foundations of quantum computing. After this course there will be a welcome reception on the terrace where we had lunch.

Thank you very much. As you probably know, I'm going to speak today about adiabatic quantum computation. Let me start by giving a few references on the topic. For an overview there is the recent review by Albash and Lidar, "Adiabatic quantum computation," Reviews of Modern Physics, 2018. There you will find essentially all the references on the topic, including the latest results and theorems. It is not self-contained, however; it is a review, not an introduction. If you want something self-contained, I recommend the lecture notes by Andrew Childs, "Lecture Notes on Quantum Algorithms." They are a self-contained introduction to the topic, and if you are willing to read them in depth, they also contain rigorous proofs of the statements I will mention.

Until now you have probably seen the gate model of quantum computing. In that model the system undergoes a sequence of gates that form a circuit; I will go quickly over this because you have already seen it. The final state is measured and you obtain an output. This is a digital model of quantum computation, because you apply discrete gates.
Now I'm going to discuss adiabatic quantum computation, which is quite different: instead of digital, it is analog. The system is evolved under a certain time-dependent Hamiltonian; you can still make measurements at the end and obtain an output. Here the dynamics is governed by the Schrödinger equation with this time-dependent Hamiltonian, and the output should again be the solution of your computational problem.

Why would we take this approach, and why is it called adiabatic quantum computation? The goal, in the end, is to prepare the system at the final time T in, approximately, the ground state of the final Hamiltonian H(T), and to arrange things so that this ground state encodes the solution of the problem. The reason we do this is that in physics we have, more or less, techniques for preparing ground states of Hamiltonians, and the idea is to apply those techniques to come up with a new algorithm and model of computation.

The question, then, is how to prepare the ground state of a given Hamiltonian within a given time using Schrödinger evolution. This can be done by relying on the adiabatic theorem, which is the backbone of adiabatic quantum computation. Before stating it, let me discuss the ingredients. We have a Hamiltonian H(t) that is time dependent, so its spectrum is also time dependent: we can solve the time-independent Schrödinger equation, the eigenvalue problem, at each fixed time.
We then get a spectrum: energies E_m(t) and eigenvectors |E_m(t)⟩ as functions of time, satisfying the eigenvalue equation H(t)|E_m(t)⟩ = E_m(t)|E_m(t)⟩; applying the Hamiltonian to an eigenstate just multiplies it by the eigenvalue. If we diagonalize the Hamiltonian at each time and look at the spectrum as a function of time, we get energy levels: the ground level E_0(t), then E_1(t), and so on for all the levels of the Hamiltonian. We make the hypothesis that these levels do not cross; in particular, the ground level stays separated, E_m(t) > E_0(t) for all m > 0, so a crossing with the ground state cannot happen. That case is not covered by the theorem as I will state it now.

With this hypothesis we can state the theorem. It says that if we start from an initial state ψ(0) which is the ground state at time zero, ψ(0) = |E_0(0)⟩, and we let the system evolve under the Schrödinger equation, then in the end the state of the system at time T, ψ(T), is approximately proportional to the ground state at time T, |E_0(T)⟩.
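This spectral picture can be sketched numerically. Below is a minimal illustration with a toy single-qubit Hamiltonian of my own choosing (not an example from the lecture): diagonalize H(t) on a grid of times and track the gap E_1(t) − E_0(t) to check the non-crossing hypothesis.

```python
import numpy as np

# Toy single-qubit Hamiltonian H(t) = (1 - t) X + t Z for t in [0, 1];
# an illustrative choice, not the lecture's example.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def H(t):
    return (1.0 - t) * X + t * Z

# Solve the instantaneous eigenvalue problem H(t)|E_m(t)> = E_m(t)|E_m(t)>
# on a grid of times; the gap E_1(t) - E_0(t) must stay positive for the
# adiabatic theorem to apply.
times = np.linspace(0.0, 1.0, 101)
gaps = [np.diff(np.linalg.eigvalsh(H(t)))[0] for t in times]

min_gap = min(gaps)
print(min_gap)  # stays well away from zero for this H
```

For this choice the two levels repel and the gap reaches its minimum of 2·√(1/2) = √2 at t = 1/2, so the hypothesis of the theorem is satisfied along the whole path.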
This holds only if the evolution is sufficiently slow, so let me write that here because it's quite important: if the Hamiltonian changes sufficiently slowly, the evolved state always remains in the instantaneous ground state. We can use this idea to prepare the ground state of a final Hamiltonian starting from the ground state of an initial Hamiltonian.

Before describing in detail how this is used in quantum computing, I want to spend a few minutes discussing the proof and the bounds related to this theorem. To give a quantitative statement we need to be a bit more precise than what I've said so far. We consider a Hamiltonian of the form H(s) with the rescaled time s = t/T, where T is the total evolution time; that is, we evolve with H'(t) = H(t/T). With this change of variables the Schrödinger equation, written in the variable s, becomes iℏ dψ(s)/ds = T H(s) ψ(s), so a factor of T appears. This change of variables is quite relevant because the parameter T allows us to slow down the evolution: the Hamiltonian goes from the initial point H(s = 0) to the final point H(s = 1), and in terms of the physical time this happens between 0 and T. By sending T to infinity I make the dynamics slower and slower, because it takes more and more time to reach the final Hamiltonian. So this is the setting.
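This setting can be tested directly. The sketch below (my own single-qubit toy example, with ℏ = 1) integrates the rescaled Schrödinger equation i d|ψ⟩/ds = T H(s)|ψ⟩ and checks that a larger total time T leaves the final state closer to the instantaneous ground state, as the theorem predicts.

```python
import numpy as np

# Toy check of the adiabatic theorem (illustrative example, hbar = 1):
# integrate i d|psi>/ds = T H(s) |psi> for H(s) = (1 - s) X + s Z and
# compare a slow sweep with a fast one.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def H(s):
    return (1.0 - s) * X + s * Z

def evolve(T, steps=2000):
    """Piecewise-constant propagation: apply exp(-i T H(s) ds) per step."""
    ds = 1.0 / steps
    # ground state of H(0) = X is (|0> - |1>)/sqrt(2)
    psi = np.array([1.0, -1.0], dtype=complex) / np.sqrt(2.0)
    for k in range(steps):
        w, v = np.linalg.eigh(H((k + 0.5) * ds))
        psi = v @ (np.exp(-1j * T * w * ds) * (v.conj().T @ psi))
    return psi

def ground_overlap(psi):
    """Squared overlap with the ground state of the final Hamiltonian H(1)."""
    w, v = np.linalg.eigh(H(1.0))
    return abs(v[:, 0].conj() @ psi) ** 2

slow = ground_overlap(evolve(T=50.0))
fast = ground_overlap(evolve(T=1.0))
print(fast, slow)  # the slow sweep should end much closer to the ground state
```

The same state and the same schedule are used in both runs; only T changes, which is exactly the knob the change of variables exposes.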
With this setting we can start attempting to prove the theorem. I'm not going to give a rigorous proof, which you can find in the lecture notes by Childs, because this is maybe not the right place for it; I just want to give some idea of how the proof works and what comes out of it. Because beyond knowing that as T goes to infinity the dynamics becomes adiabatic, staying in the ground state, we would like to know how large T has to be for this kind of statement to hold.

The first part of the proof is to find an exact adiabatic evolution. The theorem says that under the slowness condition the evolved state is approximately the instantaneous ground state; the first step is to define an alternative Hamiltonian that makes this exact, H_A(s) = T H(s) + iℏ [Ṗ(s), P(s)], where P is the projector onto the instantaneous ground state. To be precise we should write the s everywhere, but I will often drop it: everything here except T depends on s. The first statement is that this H_A implements the exact adiabatic dynamics. What does that mean? It means that if ψ_A(s) satisfies the Schrödinger equation with H_A, that is iℏ dψ_A(s)/ds = H_A(s) ψ_A(s), with the initial condition that we start from the ground state, then the solution is ψ_A(s) = c_0(s)|E_0(s)⟩: a coefficient times the instantaneous ground state, everything expressed in terms of s.
So ψ_A(s) = c_0(s)|E_0(s)⟩ is a solution of this equation with some appropriate c_0, and that is the first step: defining the Hamiltonian which does the job exactly, implementing an exact adiabatic evolution.

The next thing to do for the proof is to compare the evolution operators. Let U_A be the evolution operator induced by the exact adiabatic Hamiltonian, satisfying iℏ dU_A/ds = H_A(s) U_A(s), and let U be the evolution implemented by our original Hamiltonian, satisfying iℏ dU/ds = T H(s) U(s); again, both are functions of s, and as I said, everything except T is basically a function of s. Knowing the differential equations satisfied by these two evolution operators, the idea is to show that ‖U_A − U‖ becomes small as T goes to infinity. So after defining the exact adiabatic evolution, the main point is to show that it is actually close to the evolution induced by the original time-dependent Hamiltonian when the process is slow enough.

Finally, the third step is to put quantitative conditions on how large T has to be. These are the steps that lead to the proof. I have not proved any of them yet, but I want to start by proving at least the first one, because it will give us the state, and the phases, which are actually implemented by this slow evolution in the limit of T going to infinity.
So let me look at the first step: we want to show that ψ_A(s) = c_0(s)|E_0(s)⟩ satisfies the Schrödinger equation with H_A. Let's write that explicitly. Bringing everything to one side of the Schrödinger equation, we need d/ds (c_0|E_0⟩) − (1/iℏ) H_A c_0|E_0⟩ = 0. Since there are going to be a few derivatives, I will use the notation ẋ = dx/ds. The first part gives ċ_0|E_0⟩ + c_0|Ė_0⟩. For the second part we have to put in the Hamiltonian and use the eigenvalue equation: since |E_0⟩ is an eigenvector of H(s), the term T H(s) gives T E_0|E_0⟩; and then there is the commutator term iℏ[Ṗ, P] applied to |E_0⟩. By linearity the coefficient c_0 comes out of all these applications. And to be consistent with the ℏ's: there is a 1/ℏ from the Schrödinger equation and an ℏ in the commutator term, which cancel.

The object we would like to simplify in this equation is [Ṗ, P] applied to |E_0⟩. Let me recall for one second that P = |E_0⟩⟨E_0|, so its derivative is Ṗ = |Ė_0⟩⟨E_0| + |E_0⟩⟨Ė_0|.
Now we insert this into the equation. I'm not going to do the computation bit by bit; I'll just give you the result. When you insert it, all the terms containing |Ė_0⟩ cancel, and what remains multiplies the single vector |E_0⟩, so the Schrödinger equation reduces to ċ_0 + (i/ℏ) T E_0 c_0 + ⟨E_0|Ė_0⟩ c_0 = 0. This is an equation for the coefficient alone, and it has the solution c_0(s) = e^{iθ(s)} e^{iγ(s)} c_0(0): an exponential of a dynamical phase and a geometric phase times the initial value, which because of the initial condition c_0(0) = 1 is simply the product of the two phases. The phases are given by the integrals θ(s) = −(T/ℏ) ∫₀ˢ E_0(s') ds' and γ(s) = i ∫₀ˢ ⟨E_0(s')|Ė_0(s')⟩ ds'.

So the state evolves acquiring these two different phases. The first is what you would expect from a usual Hamiltonian: for a time-independent Hamiltonian you acquire a phase proportional to the energy times the time; here E_0 depends on time, so you have to integrate it. The second is the geometric phase, which plays an important role, for example, when you consider cycles that return to the initial point; but in this case what we are mainly interested in is the dynamical phase.
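As a small numeric illustration of the dynamical phase (again on a toy Hamiltonian of my own choosing, with ℏ = 1), one can evaluate θ(1) = −T ∫₀¹ E_0(s) ds by integrating the instantaneous ground energy on a grid:

```python
import numpy as np

# Numeric evaluation of the dynamical phase theta(1) = -T * int_0^1 E_0(s) ds
# (hbar = 1) for a toy sweep H(s) = (1 - s) X + s Z; an illustrative example,
# not the lecture's.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

s_grid = np.linspace(0.0, 1.0, 2001)
E0 = np.array([np.linalg.eigvalsh((1 - s) * X + s * Z)[0] for s in s_grid])

# trapezoidal rule for int_0^1 E_0(s) ds
ds = s_grid[1] - s_grid[0]
integral = float(np.sum(0.5 * (E0[:-1] + E0[1:])) * ds)

T = 50.0
theta = -T * integral
print(integral, theta)  # E_0 < 0 here, so theta > 0
```

Note that θ grows linearly with T: it is precisely this rapidly oscillating factor at large T that the integration-by-parts argument below exploits.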
And I just want to mention that if you proceed with the proof, you find that the oscillations induced by the dynamical phase are essentially what allows you to prove the theorem: the state carries rapidly oscillating phases, these phases appear in the exponentials inside the integrals you have to compute, and you integrate by parts to obtain a bound for the error. I'll skip this part because it's quite technical and just give you the result for the bound.

If you finally attempt to estimate how far your evolution is from the exact adiabatic evolution, you can write a bound of the form ‖U(s) − U_A(s)‖ ≤ (c/T) max_s ( ‖Ḣ(s)‖ / Δ(s)² ), where c is a constant, T is the total time, and the gap Δ(s) = E_1(s) − E_0(s) is the difference between the first excited level and the ground level; as written explicitly, this quantity is defined at each value of s and you take the maximum over s. This is what the theorem states: it bounds the error, telling us how close the evolved state is to the state implemented by the exact adiabatic Hamiltonian. If I call this error ε, the time needed to achieve error ε is T ≈ (c/ε) max_s ( ‖Ḣ(s)‖ / Δ(s)² ).

This tells us that if we want to keep a certain fixed error, so that the final state is close, up to ε, to the ground state of the final Hamiltonian, we have to increase the time: when we reduce ε, the time increases. On the other hand, it also tells us that at fixed ε we can achieve a sufficiently good performance by choosing the time large enough. And what are the physical quantities that define how large this time has to be? They are the derivative of the Hamiltonian and the gap. For example, if the derivative is constant for some reason, the time needed to reach a certain error ε scales with the inverse square of the gap. This is the adiabatic approximation: this equation tells us how much time we need to actually achieve a certain precision.

Now I want to briefly describe how this is used in adiabatic quantum computation. I will just describe the algorithm; tomorrow I will show practical examples, but today is about understanding what the algorithm is and what its theoretical foundations are. We start from a certain initial Hamiltonian, say H_i, and we want to end in a final Hamiltonian which encodes the solution of a problem.
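Before moving to the algorithm, the runtime estimate can be evaluated numerically on a toy interpolation. The example below is mine, with the constant c set to 1, so it is only an order-of-magnitude sketch:

```python
import numpy as np

# Order-of-magnitude runtime estimate T ~ (c / eps) * max_s ||dH/ds|| / gap(s)^2
# for the toy interpolation H(s) = (1 - s) X + s Z, with c = 1
# (illustrative choices, not the lecture's numbers).
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

dH = Z - X                              # dH/ds is s-independent for this schedule
dH_norm = np.linalg.norm(dH, ord=2)     # spectral (2-)norm

s_grid = np.linspace(0.0, 1.0, 1001)
gap_min = min(np.diff(np.linalg.eigvalsh((1 - s) * X + s * Z))[0] for s in s_grid)

eps = 0.01                              # target precision
T_estimate = dH_norm / (eps * gap_min ** 2)
print(T_estimate)
```

Since the derivative is constant for this linear schedule, the maximum over s is attained at the minimum of the gap, which is exactly the "inverse gap squared" scaling mentioned above.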
To do this, it is easiest to restrict ourselves to combinatorial optimization problems. We consider the problem of minimizing a certain function h: {0,1}ⁿ → ℝ, that is, min_z h(z). This is an optimization problem: we are given a classical function on bit strings and we want to find the z that minimizes it. Many problems can be written in this form, including quite complicated problems that are NP-hard, for example satisfiability problems; I will discuss some of them tomorrow, and the problems you have already seen can also be written this way.

If your problem can be recast as finding the minimum of such a function, then finding the minimum can in turn be recast as finding the ground state of a certain Hamiltonian. We consider the diagonal Hamiltonian H_f = Σ_z h(z) |z⟩⟨z|, where the sum runs over all possible strings z: the elements of the diagonal are given exactly by h, and the eigenvectors are exactly the computational basis states. Since, as we said, each entry of the string z is 0 or 1, a string can be seen as a state of a set of n qubits, so this Hamiltonian lives in the space of n qubits. And since the Hamiltonian is diagonal and we already know the energies, its ground state is the state of minimal energy: the state with the minimal h(z), which is also the state |z⟩ labelled by the string that minimizes our function.
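A minimal sketch of this encoding, with a small made-up cost function h of my own (not one from the lecture): build the diagonal matrix of h over all bit strings and read off the minimizing basis state.

```python
import numpy as np
from itertools import product

# Sketch of the encoding H_f = sum_z h(z) |z><z| for an illustrative cost
# function h on 3-bit strings.
def h(z):
    return sum(z[i] * z[i + 1] for i in range(len(z) - 1)) - sum(z)

n = 3
strings = list(product([0, 1], repeat=n))   # computational basis labels
diag = np.array([h(z) for z in strings], dtype=float)
H_f = np.diag(diag)                         # diagonal in the computational basis

# The ground state of H_f is the basis state with minimal h(z),
# i.e. the minimizer of the classical problem.
ground_index = int(np.argmin(diag))
ground_string = strings[ground_index]
print(ground_string, diag[ground_index])
```

Nothing quantum has happened yet: this step only rewrites the classical problem so that its answer is the ground state of a diagonal n-qubit operator.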
So the state we want at the end of the evolution, at time T, is |z*⟩, where z* is the minimizer of our initial problem, and we can use the adiabatic theorem to design an evolution that produces this ground state. To do this we consider the following interpolating Hamiltonian: H(s) = s H_f + (1 − s) H_i, where I recall that s = t/T. Let me give you this simple form, though it can be changed: this is the particular case of a linear schedule, which, as we will see, is not necessarily the best one. At time zero the Hamiltonian is H(s = 0) = H_i, and H(s = 1) = H_f. If I am able to prepare the system in the initial state, the ground state of the initial Hamiltonian, which is one of the requirements of the adiabatic theorem, then the theorem assures us that evolving with this Hamiltonian generates the solution we are looking for.

As we said, I need a certain amount of time to do this, and the time I need is dictated by the derivatives of the interpolating Hamiltonian and by its minimal gap, so I can get the runtime of the algorithm, at fixed precision, by computing that quantity. So the recipe for running this kind of algorithm on a certain problem is: write down a Hamiltonian which interpolates between an initial one and a final one, where the final one encodes your problem of interest and the initial one has a ground state which is sufficiently easy to prepare, so that we are able to do this ground state preparation. After the evolution, if the time is longer than a running time that depends inversely on the square of the minimal gap of the Hamiltonian along the evolution path, the system returns the ground state of the final Hamiltonian, which is also the solution of our initial problem.

This is the theoretical side, how the algorithm should work in theory; tomorrow I will discuss examples and show how it actually works in practice for some specific problems. (One detail, not so important: the norm in the bound is the 2-norm, the operator norm.)
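Putting the pieces together, here is an end-to-end toy sketch of the algorithm. All the choices here, the cost function, the transverse-field initial Hamiltonian H_i = −Σᵢ Xᵢ whose ground state is the uniform superposition, the total time, are mine for illustration: evolve the uniform superposition under the linear schedule and check that the final state concentrates on the minimizer.

```python
import numpy as np
from itertools import product
from functools import reduce

# End-to-end toy run of adiabatic optimization (illustrative choices, hbar = 1):
# cost h on 3-bit strings, final Hamiltonian H_f = diag(h), initial
# Hamiltonian H_i = -sum_i X_i (ground state: uniform superposition),
# linear schedule H(s) = (1 - s) H_i + s H_f.
def h(z):
    return sum(z[i] * z[i + 1] for i in range(len(z) - 1)) - sum(z)

n = 3
strings = list(product([0, 1], repeat=n))
H_f = np.diag([float(h(z)) for z in strings])

X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def x_on(i):
    """Pauli X acting on qubit i (identity elsewhere)."""
    return reduce(np.kron, [X if j == i else I2 for j in range(n)])

H_i = -sum(x_on(i) for i in range(n))

def H(s):
    return (1.0 - s) * H_i + s * H_f

# integrate i d|psi>/ds = T H(s)|psi> from the uniform superposition
T, steps = 100.0, 4000
ds = 1.0 / steps
psi = np.ones(2 ** n, dtype=complex) / np.sqrt(2.0 ** n)
for k in range(steps):
    w, v = np.linalg.eigh(H((k + 0.5) * ds))
    psi = v @ (np.exp(-1j * T * w * ds) * (v.conj().T @ psi))

probs = np.abs(psi) ** 2
best = strings[int(np.argmax(probs))]
print(best, probs.max())  # should concentrate on the minimizer of h
```

On real hardware one would of course not diagonalize the Hamiltonian; this brute-force simulation is just to make the logic of the algorithm concrete for a 3-qubit instance.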